Transcript for:

[Music] AWS is a cloud computing platform provided by Amazon. AWS is popular among startups and enterprises because it allows them to scale their resources up or down as needed.

Hey everyone, welcome to the AWS Solution Architect full course by edureka. By the end of this full course video you will have a complete working understanding of AWS, from theory to actual applications. If you like watching videos like this, then subscribe to edureka's YouTube channel and hit the bell icon to never miss any updates from us. We also have a website with hundreds of training programs and certifications, so if you are interested in them, check out the links given in the description box.

Let me start by discussing the agenda of this AWS Solution Architect full course video. We will start with an introduction to the AWS Solution Architect certification. Next you will understand the basics of cloud computing and AWS. Then you will learn about instances in AWS, followed by AWS Lambda, AWS Elastic Beanstalk, cloud storage on AWS, and Amazon S3. After that you will learn AWS networking fundamentals, covering the very basics of what you must know about networking. Later you will understand AWS services such as Amazon CloudFront, Amazon CloudWatch, and AWS CloudFormation. Next you will learn how to use Auto Scaling and load balancers in AWS. Once this is done, you will also look into cloud security and AWS IAM. Moving further, you will look into services like Amazon Redshift, AWS Kinesis, AWS Gateway, Amazon SES, AWS IoT, Amazon Polly, and Amazon Rekognition. Further, you will cover some topics in DevOps on AWS, which includes AWS CodePipeline and Kubernetes on AWS. And finally, you will dive into AWS Solution Architect interview questions and answers that will help you prepare for your interviews. Now let's get started with the introduction to the AWS Solution Architect certification. [Music]

Now let us start by understanding who an AWS solution architect is. A solution architect is an individual who manages an organization's cloud computing architecture. They are responsible for solving complex business problems and designing the best architectural solution using AWS services for their clients. They are often referred to as the professionals who create the blueprints for application designs. The solution architect is one of the three major cloud engineering roles; the other two are the cloud developer and the SysOps administrator.

Now let us take a look at some of the roles and responsibilities of a solution architect. They develop the best technical cloud strategy using the right architecture principles and services, which means solution architects have in-depth knowledge of architectural principles and of the services they use to develop technical cloud strategies. Solution architects are also responsible for designing, describing, and managing solutions to various business problems in the cloud infrastructure. They also assist companies in cloud migration efforts; cloud migration is when an organization moves its data and applications from on-premises architecture to the cloud. The job also includes reviewing workload architectures and providing guidance on how to address high-risk issues. These were just some of the roles and responsibilities of a solutions architect.

Now let us move on and see why one should become a solution architect. According to Forrester Research, the global public cloud infrastructure market will grow 35 percent to 120 billion dollars in 2021, so we know there will always be demand for skilled cloud professionals in the near future. According to a survey by CIO, cloud architect is one of the top in-demand jobs. There are also plenty of job opportunities for a solution architect, with over 5,000 vacancies in India, 20,000 plus in the US, and more than a lakh worldwide. Solution architect is also one of the highest paying job roles in the IT industry: salary surveys put the average salary for a cloud architect in India at around 19.5 lakh rupees, in the United States at around 131,000 dollars, and in the UK at around 71,696 pounds. Adding to the list of reasons, the fifth point is that it provides a very flexible future learning path: after you are certified at the Solution Architect Associate level, you can either opt for the Professional level or choose any of the specialty certifications such as Big Data, Machine Learning, or Security.

Now let us move on to the next topic and see the roadmap to the AWS Solutions Architect certification. First we have the Cloud Practitioner certification, next the Solution Architect Associate level certification, and then the Solution Architect Professional level certification. We will discuss each of these in detail.

The first certification is the Cloud Practitioner certification. It gives you an overview of AWS concepts and services along with a fundamental understanding of cloud concepts. This is the easiest, beginner-level certification, but it does expect around six months of experience with the AWS cloud in any role, whether technical, managerial, sales, purchasing, or financial.

The next certification is the Solution Architect Associate certification. This certification is for individuals who perform a solution architect role, and it validates your technical expertise in designing and deploying scalable, highly available, and fault-tolerant systems on the AWS platform. For this certification you should know how to architect and deploy secure and robust applications on AWS technologies and how to provide guidance on best practices to the organization throughout the life cycle of a project. It expects one or more years of experience designing distributed systems on the AWS platform.

The next certification is the AWS Certified Solution Architect Professional certification. This is the expert level of certification, for individuals who want to excel as solutions architects. To take up this examination you should have practical experience; the certification is very tough and most of the questions are based on real-time scenarios. You will have to migrate complex multi-tier applications to AWS and design and deploy enterprise-wide scalable operations on the AWS platform. A difference between the Professional and the Associate level: at the Associate level you do not require in-depth knowledge of every concept in AWS, but at the Professional level you should have in-depth knowledge of many concepts on the AWS platform.

Now let us look at an overview of the exam guide; this overview is for the Associate level certification. In terms of format, there are two types of questions: multiple choice, where there is one correct answer out of four options, and multiple response, where there are two correct answers out of five options. The time for the examination is 130 minutes, and the cost is 150 US dollars. You can write the examination in English, Japanese, Korean, or Simplified Chinese. Now let us take a look at the domains from which the questions are asked. The first domain is design resilient architectures, from which 30 percent of the questions are asked. The second domain is design high-performing architectures, from which 28 percent of the questions are asked. The third domain is design secure applications and architectures, from which 24 percent of the questions are asked. And finally, the fourth domain is design cost-optimized architectures, from which 18 percent of the questions are asked.

Finally, moving on to the last topic for today: how can you prepare for this role? Firstly, I would advise you to start from the basics. If you are a fresher, then before you start practicing cloud computing, these are some fundamentals you should consider. The first one is networking, which includes routing, IP addresses, network layouts, and networking protocols. Second, learn about computer security, which covers the basics of access policies, encryption, and data security. Third, learn about computer architecture and try to understand system design principles and the fundamentals surrounding them. And finally, learn SQL and Linux fundamentals. You can learn these topics by watching YouTube videos or researching them online.

Next, I would definitely recommend you take the Cloud Practitioner certification, as this will give you a fundamental understanding of the important cloud concepts. Preparing for it covers security and compliance, cloud technology, cloud concepts, and billing and pricing, which are very helpful for the solution architect certification. The next step would be working on AWS projects. You can work on projects using different services; start with the easier projects initially and then move on to the difficult ones. You can find some good projects on the Amazon Web Services official website. The next step that will help you prepare for the role is learning and practicing the important AWS services for the certification. Some of the important services are Elastic Compute Cloud, Simple Storage Service, Relational Database Service, Virtual Private Cloud, Amazon Kinesis, and AWS Lambda. You can practice some of these services using an AWS Free Tier account, which allows you to access over 85 AWS services for free; all you have to do is submit the required information and you can start practicing.

Moving on to our next step, which is referring to whitepapers and the frequently asked questions. Whitepapers will give you technical knowledge about various AWS concepts and services; two of the best whitepapers for the certification are "Architecting for the Cloud: AWS Best Practices" and the "AWS Well-Architected Framework". The frequently asked questions will help you clear your doubts regarding the certification. The next step is solving practice test questions: you can practice what you have learned in the previous steps by solving practice tests, which you can find on various websites such as Knowledge Hut, Whizlabs, and Digital Cloud Training. Now, if you want to follow a structured approach, then opt for an online training certification course. I would highly recommend our instructor-led training sessions, through which you will be able to effectively architect and deploy secure and robust applications using AWS. This AWS training course will also help you identify the appropriate AWS service based on database, network, storage, cost optimization, compute, and security requirements. Take a look at some of the highlights of our AWS training: we have 30 hours of online live instructor-led classes, held as weekend classes with 10 sessions of three hours each, and each class is followed by a quiz to assess your learning. You get lifetime access to the LMS with presentations, quizzes, installation guides, and class recordings, and if you have any doubts you can reach out to us through our 24x7 expert support. After you complete the course you get a certificate from edureka stating that you have completed an online training certification with us, and you also become part of a forum where you can interact and share your knowledge.

[Music] Cloud computing: everything nowadays has moved to the cloud, is running in the cloud, is accessed from the cloud, or is stored in the cloud. So what exactly is the cloud? Simply put, cloud computing, often referred to as "the cloud", is a service that allows people to use online services that are generally available through any device with an internet connection. This means the user does not need to be at a certain location in order to access certain data. From computing and analytics to secure and safe data storage and networking resources, everything can be delivered within no time thanks to the cloud. The goal of cloud computing is to deliver these services over the internet in order to offer faster innovation, flexible resources, and economies of scale. Have you ever realized that you have probably been using different cloud-based applications every day? Whenever you share an important file over OneDrive with a colleague through the web, use a mobile app, download a picture, binge-watch a Netflix show, or play an online video game, it all happens on the go. The best part: it saves you a lot of money and time. You don't have to buy any machinery or install any kind of software; everything is handled by the cloud platform running these applications, whether it's Google, Microsoft, or Amazon. Many such tech giants have already switched from traditional computer hardware to more advanced cloud architecture, and not just that, these companies are also the most popular cloud service providers in the market today. As more and more companies undergo strategic digital transformations designed to utilize the power of the cloud, they need more IT professionals and leaders with the expertise to extract the best business results out of their investments.

Firstly, let's understand why cloud. To understand this we need to understand the situation that existed before the cloud came into existence. So what happened back then? Well, firstly, in order to host a website you had to buy a stack of servers, and we all know that servers are very costly, so that meant you ended up paying a lot of money. Next was the issue of traffic: if you are hosting a website you are dealing with traffic that is not constant throughout the day, and that meant more pain, as we will see as we move further. The other thing was monitoring and maintaining your servers, which is a very big problem. Now all these issues led to certain disadvantages. What are those? As I mentioned, servers are very costly, and the setup
was costly too, so you ended up paying a lot of money, and there were other factors contributing to this point; let's discuss those as well. One: troubleshooting was a big issue. Since you are dealing with a business, your prime focus is on taking good decisions so that your business does well, but if you end up troubleshooting problems or focusing on infrastructure-related issues, you cannot focus on your business, and that was a problem. So either you had to multitask or you had to hire more people to focus on those issues, and thus again you ended up paying more money. As I've discussed, the traffic on a website is never constant, and since it varies you are not certain about its patterns. Say, for example, I need to host a website and I decide to reserve two petabytes of total capacity for my usage based on the expected traffic. There would be times when the traffic is high and the whole two petabytes of space is consumed, but what if the traffic is very low for certain hours of the day? I'm not actually utilizing those servers, so I end up paying more for the servers than I should. So yes, scaling was an issue. All these things were a problem because we were paying more money, we did not have sufficient time to take our decisions properly, there was ambiguity, and there was a lot of trouble monitoring and maintaining all these resources.

Apart from that, one important point to consider is the amount of data being generated now compared with what was generated then. Back then it was okay, but nowadays the amount of data generated is huge, and this is another reason why the cloud became so important. Everything is going online these days: we shop online, we order food online, our bookings and reservations and almost all the information we need can be taken care of online. That means a lot of digital data is being generated. Back in those times we communicated through verbal discussions and paperwork, which was a different kind of data to maintain; now that everything is moving online, the amount of data we have is huge, and when you have this huge amount of data you need a space where you can go ahead and maintain it. So yes, there was a need for this space, and all these issues, your cost, your monitoring, your maintenance, providing sufficient space, everything was taken care of by the cloud.

So let us try to understand what this cloud is exactly. Well, think of it as a huge space that is available online for your usage. That is a very generic definition; to be more specific, think of it as a collection of data centers. Data centers are places where you store your data or host applications, and they already existed, so what did the cloud do differently? What the cloud did was make sure you are able to orchestrate your various functions and applications and manage your resources properly, by combining all these data centers together through a network and then giving you the control to use and manage those resources. To make it even simpler: there was a group of people, or organizations basically, that went ahead and bought these servers, these compute capacities, storage places, and other services, and they have their own channel or network. All you had to do was go ahead and rent those resources, only to the amount you needed, and only for the time you needed them. So yes, this is what the cloud did: it let you rent the services you need and use only those services, so you ended up paying only for what you rented and you saved a lot of money. The other thing is that these service providers take care of all the issues like security and the underlying infrastructure, so you can freely focus on your business and stop worrying about all of that. So this is what the cloud is in simple words: a huge space which has all these services available, and you can just go ahead and pick and rent the ones you want to use.

So what is cloud computing? I've already discussed it; to summarize, it is a place where you can store your data, process it, and access it from anywhere in the world. This is an important point: say you decide to choose a region for your infrastructure somewhere in the US; you can sit in China, or in India, and still have access to all your resources that are in the US. All you need is a good internet connection. That is what the cloud does: it makes the world accessible and lets you have your applications wherever you want and manage them the way you want.

Next we will discuss the different service models. You need to understand one thing: you are being offered cloud services, the platform to run your services or applications, but different people have different requirements. There are certain people who just want to consume a particular resource, and there are certain people who actually want to go ahead and create their own applications and their own infrastructure. Based on these needs, cloud providers offer particular service models that suit your needs. We have three models: IaaS, PaaS, and SaaS. I will discuss them in reverse order, starting with SaaS.

SaaS is nothing but software as a service. What happens here is that you are just consuming a service which is already maintained and handled by someone else. A valid example is Gmail: all you do is send mail to people and receive mail, and whatever functionality you need, you just use the service that is there. You do not have to maintain it, and you do not have to worry about scaling up or down, security issues, and so on; everything is taken care of by Google. So all you have to worry about is consuming that service. This model is known as software as a service.

Next we have PaaS, that is platform as a service. Here you are provided with a platform where you can go ahead and build your own applications. To give you an example, we have Google App Engine: you can create your own applications and put them on Google App Engine so that others can use them as well. In short, you are using that platform to create your own applications.

And lastly we have IaaS, that is infrastructure as a service. What do I mean by this? Well, the whole infrastructure is provided to you so that you can go ahead and create your own applications. An underlying structure is given to you, and based on that you can choose your operating system, the kind of technology you want to use on that platform, the applications you want to build, and so on. That is what IaaS, infrastructure as a service, is.

So these were the different models I wanted to talk about. Now, this architecture diagram gives you a clear depiction of what happens as far as these service models are concerned. With SaaS, as you can see, all you are doing is consuming or using your data; everything else is managed by your vendor, that is the application, runtime, middleware, OS, virtualization, servers, and networking. With PaaS, the data and applications are taken care of by you: you can go ahead and build your own applications on the existing platform that is provided to you. And finally with IaaS, only the basic part, that is the networking, storage, servers, and virtualization, is managed by your vendor; the middleware, OS, runtime, applications, and data reside on your end, and you have to manage all of those. It is as if you are given a box of car parts: you go ahead, you assemble it, and you use it for your own sake. That is what IaaS is.

To give you another example, think of eating a pizza. There are various ways of doing that. One: you order it online, you sit at home, the pizza comes to your place, and you consume it; that is more like SaaS, software as a service, where you just consume the service. Next is platform as a service: think of it as going to a restaurant and eating a pizza; they have the infrastructure, the tables and chairs, so I just go, sit, order the pizza, it is given to me, I consume it, and I come back home. And IaaS is where you go ahead and bake your own pizza: you have the infrastructure, you buy the ingredients, you put the pizza in an oven, you add the spices, and you eat it. That is the difference between these three services.

So let us move further and discuss the next topic, the different deployment models. When you talk about deployment models you can also call them the different types of clouds that are there in the market. We have three types: the public cloud, the private cloud, and the hybrid cloud. As the name suggests, the public cloud is available to everyone: a service provider makes these services or resources available to people worldwide through the internet. It is an easy and very inexpensive way of dealing with the situation, because all you have to do is rent this cloud and you are good to go, and it is available publicly. Next we have the private cloud. This is a little different: here you are provided with the service and you can go ahead and create your own applications, and since it is a private cloud you are protected by a firewall and you do not have to worry about various other issues. And next we have the hybrid cloud, which is a combination of the private cloud and the public cloud. For example, you can build your applications privately and consume them efficiently, and when you sense a peak in your traffic you can move them to the public cloud so that others can have access and use them as well. So these are the three basic deployment models available for your usage.

So let us move further and try to understand the next topic: the different cloud providers that are there in the market. As I've mentioned, once cloud came into existence, quite a few players went ahead and bought their own infrastructure, and now they rent these services to people across the globe. When you talk about these cloud providers, the first thing that should come to your mind is Amazon Web Services, because it is highly popular and it leaves the other cloud providers way behind. The reason I'm saying this is the numbers: for example, its compute capacity is six times larger than that of all the other service providers in the market combined; if the other providers' combined compute capacity were X, Amazon Web Services alone gives you a capacity of 6X, which is huge. Apart from that, with its flexible pricing, the services it provides, and various other reasons, it is rightly the global leader, and the fact that it had a head start, launching way before many other services in the market, helped it gain popularity; now we see quite a few organizations going ahead and using Amazon Web Services. Apart from that we have Microsoft Azure, which is a Microsoft product, and we all know that when Microsoft decides to do something they expect to kill all the competition in the market. It is still not on par with Amazon Web Services, but it is probably the second best cloud service provider in the market, so while it has a lot of catching up to do compared with Amazon Web Services, it is still a very good cloud service provider. Then we have Google Cloud Platform, again a very good cloud provider. Why am I saying this? We all know the infrastructure Google has to offer: it runs one of the best search engines in the market, and the amount of data they deal with every day is huge, so they are pioneers when you talk about big data, they know how to handle that amount of data, and they have very good infrastructure. That leads to GCP being one of the cheapest service providers in the market; yes, there are certain features that GCP offers which are better even than Amazon Web Services when you talk about price.
The reason is that GCP helps you optimize various costs: it uses analytics and various other techniques to optimize the amount of power used, and that leads to less power consumption. Since the provider is paying less for power, you end up paying less for your services as well; that is why it is so cost efficient. Then there are other service providers: we have DigitalOcean, we have Telemark, we have IBM, which is again very popular. But as far as these service providers go, the major ones are Amazon Web Services, Microsoft Azure, and GCP, which are talked about a lot. This was the basic introduction to cloud providers which I wanted you all to have.

So what exactly is AWS? Well, AWS is Amazon Web Services, and it is one of the best cloud service providers in the market. When I say cloud service provider, we need to understand what the cloud is first, so let me throw some light on this topic. Let me give you the scenario that existed before the cloud came into existence: in order to host a website I had to buy a stack of servers, and that was very costly. Second, upscaling and downscaling was a huge problem, because I wasn't certain how much space I would be needing, and that meant wastage of resources, or under- or over-utilization of resources. Thirdly, maintaining these servers was a huge pain. So these were the problems that existed. Now, the cloud helped you solve all these problems. How? Well, some organizations went ahead and bought space online and made arrangements for databases and complete services, and businesses could go ahead and rent those services. That meant you could rent only the services you needed and pay only for those services, and you could easily upscale and downscale. So all your issues were taken care of.

And when you talk about cloud services, the first thing that should come to your mind is AWS. Why? Because it is the best in the market. Let's try to understand a little more about AWS. It is a complete software suite, a cloud service provider which is highly secure. It provides various compute, storage, database, and any number of other services, which we will be discussing in further sections as well. When we talk about the market it is the best, and it has various reasons to be the best: one being its flexibility, its scalability, and its pricing, and another being its compute capacity. Why is the compute capacity so important? Well, if you take all the other cloud service providers in the market and combine their compute capacity, leaving out AWS, that combined capacity would be somewhere equal to, say, X; AWS alone is 6X. So AWS has compute capacity that is six times larger than that of all the other service providers in the market put together, which is a huge amount. These are the reasons that make AWS one of the best in the market, so let's find out what else makes it so good: its services, features, and uses.

I will discuss some use cases. Take a manufacturing organization: the main focus is to manufacture goods, but most such businesses focus so much on the various other services and practices that need to be taken care of that they cannot focus on the manufacturing goal. This is where AWS steps in: it takes care of all the IT infrastructure and management, which means businesses are free to focus on manufacturing and can actually go ahead and expand a lot. Architecture consulting: here the main concerns are prototyping and rendering, and AWS takes care of both; it lets you have automated, sped-up rendering as far as prototyping is concerned, and that is why architectural businesses benefit a lot when they use AWS or any cloud provider, AWS being the best in the market. A media company: the main concerns are generating content and having a place to store it, and again AWS takes care of both these situations. Large enterprises: their reach is worldwide, so they have to reach their customers and employees globally, and AWS gives you that option because it has a global architecture, so your reach can be very wide. If you want real-life examples: we have Quora, an online question-and-answer platform used all across the world; Amazon itself, which we know is one of the biggest e-commerce marketplaces, again runs on Amazon Web Services; and another example would be Netflix, which we all love.

Now, the advantages of AWS, or I would rather say features. Flexibility: AWS is highly flexible, and there are various reasons to support this, one of the major ones being that it is very cost effective, so let us understand these two points together. When you talk about flexibility, remember you are dealing with big organizations that have a lot of data that needs to be managed, deployed, and taken care of; if a cloud provider is flexible, all these things are taken care of. The second thing is that it is highly cost effective. AWS takes care of almost every aspect: if you are a beginner or a learner, it has something called the free tier, which means you have sufficient resources to use for free, and that too for one whole year, so you get plenty of hands-on practice without paying anything. Plus it has the pay-as-you-go model, which charges you only for the services you are using and only for the time you are using them. That lets you scale up nicely, and hence you end up paying very little; and since you are paying very little and you have so many options when buying its services, that gives you a lot of flexibility. Scalability: the first two points are related to this one. Since it is very affordable and you are paying on an hourly basis, if you are using a particular service for one hour you will be paying for only that one hour, and that gives you the freedom to scale up and scale down. Since it is easy to scale up, it is always advisable to start with less and then scale as per your needs. Plus there are quite a few services that can be automatically scheduled, which means you use them only when there is an uptime, and in downtime you can have them automatically shut down, so you do not have to worry about that either. So when you talk about scalability, scaling up and down is very easy as far as AWS goes. Security: security has been a topic of debate when you talk about cloud services especially, but AWS puts all those questions to rest. It has great security mechanisms, plus it provides various compliance programs that again help you take care of security, and even real-time security is taken care of: suspicious activities are handled by AWS, not by you, so you are left free to focus on your business. These are the areas where I feel AWS adds value, and apart from that there are quite a few other points, like the automated scheduling I just mentioned, and the various integrated APIs available in different programming languages, which makes it architecturally very strong and easy to switch from one programming language to another. So these are some of the features that I feel make AWS a wonderful service provider in the market.

So let's move further and try to understand other things about AWS, starting with its global architecture. As I have mentioned, AWS is the best service provider in the market, and one of the reasons it is this popular is its architecture: it is very widely spread and it covers almost every area that needs to be covered. Let's try to understand how it works. The AWS architecture is divided into two major parts: regions and availability zones. Regions are nothing but different locations across the world where AWS has its data centers, and one region may have more than one data center; these data centers are known as availability zones. You, being a consumer, can access these services sitting anywhere in the world: for example, if I am in Japan right now, I can still access the services or data centers that are in the US. You choose your region, and accordingly you pick your availability zones and use those, so you do not have to worry about anything. To throw some more light on it, you can take a look at this small global map, which shows the different places where AWS has its regions and availability zones. As far as this map goes, I believe it is fairly old and has been updated in recent times, because AWS is putting in a lot of effort to add more data centers and availability zones to widen its reach, and we can expect some in China as well. On this map, a region is shown in orange, and the number inside it is the number of availability zones it has; for example, São Paulo is shown with three availability zones. The ones in green are the regions that are coming soon or in progress, and some of these have actually already started or been made available to people. So yes, this is how the architecture works and this is how the AWS architecture looks.
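If you want to see regions and availability zones from code rather than from the console, here is a minimal sketch using the boto3 Python SDK. It is only an illustration under assumptions: the region name is an example, and it presumes you already have AWS credentials configured on your machine.

```python
# Minimal sketch: connect to one AWS region and list its availability zones.
# Assumes boto3 is installed and credentials are configured (e.g. via `aws configure`).
import boto3

# The region name is just an example; pick the region closest to you.
ec2 = boto3.client("ec2", region_name="ap-south-1")

response = ec2.describe_availability_zones()
for zone in response["AvailabilityZones"]:
    print(zone["ZoneName"], "-", zone["State"])
```

Running this against different regions simply means changing the `region_name` argument, which mirrors the idea above that you first choose a region and then work with the availability zones inside it.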
Okay, so let's move further and take a look at the next concept: the domains of AWS. The first domain we are going to discuss is compute, and when you talk about compute, the first thing that should come to your mind is EC2. EC2 is Elastic Compute Cloud, and what it does is let you have resizable compute capacity. It is more of a raw server where you can host a website; it is a clean slate. What do I mean by this? Say you go ahead and buy a laptop: it is a clean device where you can choose which OS you want and so on; accordingly, your EC2 instance is again a clean slate and you can do many things with it. Next you have Elastic Beanstalk, which lets you deploy your various applications on AWS, and the main thing to know is that you do not have to worry about the underlying architecture. It is very similar to EC2, and the difference between the two is that Elastic Beanstalk can be thought of as something that comes with predefined libraries and an underlying architecture already defined, whereas EC2 is a clean slate. Say, for example, you want to use Java: with EC2 you would, so to speak, install everything from the beginning and start fresh, whereas Elastic Beanstalk has that environment ready and you can just go ahead and use it. This is just an analogy, so don't take these sentences literally.

Next we have migration. AWS has a global architecture, and there will be requirements for migration; what AWS does is let you have physical migration as well, which means you can physically move your data to the data center you desire. Why would we need to do that? Say I am sending an email to somebody; I can do that through the internet. But imagine I have to give somebody a movie: instead of sending it online, I can actually go ahead and hand it over if that person is reachable, which can be better for me, and my data remains secure. The same goes for data migration, and AWS has something called Snowball, a storage service that lets you move this data physically and helps a lot in migration.

Then there is security and compliance. When you talk about security we have various services: we have IAM and we have KMS. IAM is nothing but the identity and access management tool, and KMS lets you go ahead and create and manage your own encryption keys, which helps you keep your system secured. There are quite a few other services as well, but I will be mentioning only one or two services from each domain, because as we move further in future sessions we will be discussing each of these services in detail, and that is when I will throw a lot more light on these topics. For now I just want you all to understand these to some extent; getting into the details of everything would be too heavy, because there are quite a few domains and quite a few services that we need to cover, and as we move further we will definitely cover all those services in detail.
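As a small illustration of the security domain just mentioned, here is a sketch with the boto3 Python SDK that creates a KMS key and uses it to encrypt and decrypt a short payload. Treat it as an assumption-laden example: the region, description, and plaintext are placeholders, and your credentials need KMS permissions for it to run.

```python
# Minimal sketch: create a KMS key, then encrypt and decrypt a small payload with it.
# Assumes boto3 is installed and your credentials allow KMS operations.
# Note: customer-managed KMS keys carry a monthly charge, so delete demo keys you don't need.
import boto3

kms = boto3.client("kms", region_name="us-east-1")  # example region

# Create a customer-managed key (the description is just an example).
key = kms.create_key(Description="demo key for this tutorial")
key_id = key["KeyMetadata"]["KeyId"]

# Encrypt and then decrypt a small message with that key.
encrypted = kms.encrypt(KeyId=key_id, Plaintext=b"hello from the demo")
decrypted = kms.decrypt(CiphertextBlob=encrypted["CiphertextBlob"])
print(decrypted["Plaintext"])  # b'hello from the demo'
```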
Then we have storage. When I talk about storage, AWS again has quite a few services to offer. We have something called S3, which works on a bucket-and-object model: the storage place is called a bucket, and the objects you store in it are nothing but your files, so your objects are stored inside buckets. Then we have CloudFront, which is a content delivery network, and we have Glacier, which you can think of as a place to store archives, because it is highly affordable. Next we have networking, where we have services like VPC, Direct Connect, and Route 53, which is a DNS service. A VPC is a virtual network which lets you launch your AWS resources into it, and Direct Connect can be thought of as a dedicated, leased network connection into AWS. Next on the list we have messaging: yes, AWS assures secured messaging, and there are quite a few services to take care of that as well; we have CloudTrail and we have OpsWorks, and these help you in messaging or communicating with other parties. Then databases: storage and databases are similar, but there is one difference to understand; your storage is where you keep things like your executable files, and that is the difference between the two. When you talk about databases we have Aurora, which is very SQL-like and lets you perform SQL operations at a much faster rate; Amazon claims it is up to five times faster than standard MySQL, so Aurora is a great service to have. We also have DynamoDB, which is a non-relational DBMS, and it helps you deal with various unstructured data sources as well. Next on this list we have the last domain, the management tools. Here we have something called CloudWatch, which is a monitoring tool that lets you set alarms and so on; hopefully, by the time we are done with the demo today, you will have seen at least one part of CloudWatch, because we will be creating alarms using CloudWatch today, so stay tuned for that. So this was about AWS and its basics: the points we just discussed, that is what it is, its uses, its advantages, its domains, and its global architecture.
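To make the bucket-and-object model described above a bit more concrete, here is a minimal sketch using the boto3 Python SDK: create a bucket, upload a local file as an object, and list what the bucket contains. The bucket name and file name are only placeholders (bucket names have to be globally unique), and the snippet assumes your AWS credentials are already configured.

```python
# Minimal sketch of the S3 bucket/object model: create a bucket, upload a file
# as an object, then list the objects stored in the bucket.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")  # example region

bucket = "edureka-demo-bucket-12345"   # placeholder; bucket names must be globally unique
s3.create_bucket(Bucket=bucket)        # outside us-east-1 you also pass CreateBucketConfiguration

# Upload a local file as an object ("notes.txt" is just an example path and key).
s3.upload_file("notes.txt", bucket, "notes.txt")

# List the objects in the bucket.
for obj in s3.list_objects_v2(Bucket=bucket).get("Contents", []):
    print(obj["Key"], obj["Size"])
```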
So yes, guys, what I've done is I've gone ahead and switched into my AWS account. The first thing you need to understand is that AWS offers you a free tier. While I was talking about these things earlier I rushed through this part, because I knew I was going to give you a demo and wanted to discuss it in detail here. If you are a beginner, this is where you start. AWS provides you with its free tier, which is accessible to you for 12 months, and there are quite a few services, which we just discussed, that are available to you for free. When I say free, there are certain limitations on it: this many hours is what you can use a service for, this is the total amount of memory or storage you can use, and so on for capacity; based on that, there are different instances you can create. As long as you stay within the limits that AWS has set, you won't be charged anything, and trust me, for learning purposes that is more than enough.

Let's quickly go ahead and take a look at these services first, and then there are a few other points I would like to discuss as well. So, the free tier services: this is what it has to offer, 12 months of free usage plus some always-free products. When you talk about EC2, which is one of its most popular compute services, you get 750 hours per month. Next you have Amazon QuickSight, which gives you one GB of SPICE capacity; I won't get into the details of what SPICE capacity is, but when you have time I would suggest you go ahead and explore these things. Today we are going to focus more on the EC2 part, so quickly, one by one: Amazon RDS again gives you 750 hours of a t2.micro instance; Amazon S3, which is a storage service, gives you 5 GB of standard storage; and AWS Lambda gives you one million free requests per month. There are also some videos here which would introduce you to these things and help you get started with creating an account.

And this is the other important point I would like to mention: when you create an AWS account, the first thing to consider is that they will ask you for your credit card details. So how does the sign-up process work? Firstly, you go there and give your email ID and basic details, such as why you want to use it. Next, just to verify your account, it will ask for your credit card details; even debit card details work, I have actually tried those. When you do that, it subtracts a very small amount from your account; I did this in India and I was charged two rupees, which is fairly little, and that was refunded back to me in two to three working days. The only reason they deduct that small amount is for verification purposes, to confirm that your account is up and running and that you are a legitimate user. As long as you stay within the limits you won't be charged anything, but if you do cross those limits you will be charged. Now you might be worried: what if I do cross the limit, would I be charged? Yes, you would be, but in practice you generally won't go beyond it, and even if you do, you will be notified that you are going above the limit. Even when your free subscription ends, you are notified and asked whether you want to enter your billing details and start billing, and only if you say yes will you be charged for the subsequent months. It is a very stringent process, so you don't have to worry about losing any money as long as you follow these rules. So if you do not have an account, my suggestion would be to go ahead, log on to AWS, and create your free tier account, which is a very easy two-to-three-step process.

[Music] So first and foremost, guys, we will be talking about an instance. When you talk about an instance, we have this definition here.
Let's try and understand what this definition has to say first, and then I will throw some light on it. As per this definition, an instance is nothing but a virtual server for running applications on Amazon EC2. It can also be understood as a tiny part of a larger computer, a tiny part which has its own hardware, network connection, operating system, etc., but which is actually virtual in nature. There are a lot of words here, so let me try to simplify this definition for you. When I say a virtual server running your application, or rather a virtual server that hosts your application, what do I mean by a virtual presence of a particular device? Well, when you talk about software or application development, you are supposed to build applications and run them on servers, right? But at times there are a lot of constraints, like the space and resources you want to use: certain applications run on Windows, certain ones on macOS, and certain ones on Ubuntu, so I cannot always go ahead and have different systems running different operating systems and then run my applications on top of those, because that is time consuming, exhausting, and also costs a lot of money. So what is the solution? What if I could have a single device on top of which I could create virtual compartments, in which I could store my data and run my applications separately? Wouldn't that be nice? Well, that is exactly what an instance gives you; you can think of it as a tiny part of a computer, which is what the definition is trying to symbolize. You have one system on top of which you can run different applications, and if application A is running in part one and application B is running in part two of your server, each of these applications has the feeling that it is running alone on that system and nothing else is running alongside it. This is what virtualization is: it creates a virtual environment for your application to run in, and one such piece of this virtual environment is called an instance. Virtualization is not something very complicated: in the first image you can see a man surrounded by various virtual images, something you would see in an Iron Man movie, but virtualization can be as simple as a single computer shared by different people, where those people work quite independently on that server. In the second image, each one of these individuals would be using a different instance. So this is what an instance is when you talk about virtualization. As far as this session goes, I believe this information is enough; if you wish to know more about virtualization, you can visit our YouTube channel and take a look at the VMware tutorial, which covers this particular topic in a lot more detail.

So let us move further and try to understand EC2. EC2 is an Amazon Web Services compute service; it stands for Elastic Compute Cloud. What do I mean by this? Basically, it is a service which lets you go ahead and carry out computation tasks, and when I say elastic, it means that it is fairly resizable and reusable; once we get into the demo part you will get a better picture of what I mean by elasticity. It is highly flexible, highly scalable, very cost efficient, and it serves a lot of purposes. Those are some of the features I just mentioned, so let me throw some more light on these pointers. What do I mean by scalable? When you talk about a cloud platform, one of its best features is that it gives you a high amount of scalability, which means your applications can scale up and down depending on the load they have to handle: if the traffic increases, you need more performance, so your application should be able to scale to those needs. That is what cloud computing provides you with, and that is what EC2 provides as well. When I say an instance, what you are basically doing is launching a virtual machine, which is called an instance in AWS terms, and this virtual machine should be scalable; it should scale up and down in terms of memory and storage and even in terms of the computation it is providing. So EC2 is highly scalable; once we get into the demo you will see this. Being scalable and cost efficient makes it highly flexible, so that covers the third point; now let us understand the second point: what makes EC2 cost efficient? EC2 lets you scale up and down, as I just mentioned, so instead of buying n number of instances or n number of servers, you can go ahead and scale an instance up and down with minimal cost changes, so you are saving money. Apart from that, there are burstable instances, and there are various pricing models that EC2 boasts of, using which you can save a lot of money; as we move further we will be talking about those models as well, so meanwhile just bear with me. So EC2 is a computation service that takes care of the following pointers: it is easily resizable, it is cost efficient, it is highly scalable, and all these features make it highly flexible as well.

So guys, let us move further and take a look at the types of instances. EC2 is one of the oldest AWS services, and there are quite a few types of instances you can deal with; these are some of the popular ones, and once I move into the demo part I may talk about other instances as well. To keep it simple, these instances belong to different families: you have the T series, the M series, the C series, and so on, and these series consist of different kinds of instances that serve different purposes. To simplify this, AWS has categorized these instances into the following types. The first one is the general purpose instance, which is suited for applications that require a balance of performance and cost, that is, places where you require quick responses but it still needs to be cost effective. Take the example shown here, email response systems: you require a quick response, and any number of emails may pop in, but you do not want to pay a lot of money for this kind of service, so you need cost optimization as well as a quick response; this is where general purpose instances come into the picture. Next on the list you have compute optimized instances. These are for applications that require a lot of processing; they have better computation power, which means that if there is a lot of data that needs quicker processing, you can use this kind of instance. An example is analyzing streaming data: streaming data is data that continuously flows in and out, for example this session that is being streamed live right now, and to process this kind of data you need systems that give you good computation power. So if you are dealing with streaming data and wish to analyze it, you can definitely go for compute optimized instances. Next we have memory optimized instances. These are the instances required for applications that need more memory, or in better terms more RAM, random access memory. They are again for applications that require good performance, but since RAM is something that resides in the local system, you need instances with good memory capacity. What kind of applications does this serve? Think of applications that need multitasking and multiprocessing: say I need a single system that fetches data for me, processes it, builds a dashboard, and then serves it to the end customer; those kinds of applications require memory optimized instances. Moving further, we have storage optimized instances; as the name suggests, these are for applications that require you to store huge amounts of data, for example large big data applications where the amount of data is huge, so you would require more storage and more storage flexibility, and in that case you can opt for instances that are specifically optimized for storage requirements. And then you have GPU instances: if you know what a GPU is, you will understand what these serve; if you are doing graphics-heavy work with heavy graphics rendering, you can opt for GPU instances, which help you serve purposes like 3D modeling and so on. So guys, this was about the different kinds of instances.
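Before we move on to pricing, here is a small sketch that ties the instance families above to code: with the boto3 Python SDK, the family and size you pick is simply the InstanceType parameter when launching an instance. The AMI ID and key pair name below are placeholders, not real values from any account.

```python
# Minimal sketch: launch one instance, choosing its family/size via InstanceType.
# The AMI ID and key pair name are placeholders; use values from your own account and region.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # example region

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
    InstanceType="t2.micro",          # general purpose; e.g. "c5.large" (compute) or "r5.large" (memory)
    KeyName="my-key-pair",            # placeholder key pair name for SSH access
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```

Swapping "t2.micro" for a compute, memory, storage, or GPU family is the only change needed to move between the categories described above; everything else about launching the instance stays the same.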
Now let's understand the instance pricing models. When you talk about pricing, EC2, and AWS in general, makes sure you can save a lot of money, but people are often under the impression that simply moving to the cloud will automatically save them money. The cloud does support applications in a way that lets you spend very little, but it involves a lot of planning: every time you use a particular service, it is important to understand how that service is priced, and if you plan your services with that in mind you really do end up saving a lot of money. These are some of the pricing models EC2 offers: on-demand, dedicated, spot, and reserved instances. Let me simplify what each of these means.

An on-demand instance, as the name suggests, is an instance you demand and you get, and it is made available to you for a limited time frame. Say I need an instance on an hourly basis: I demand it, AWS gives it to me, I am charged only for that hour, and once the hour is over the instance is terminated. It is similar to renting a place for one month: if I move to a new city and want something temporary, like a hostel or a paying-guest arrangement, I tell the owner up front that I will stay for a month only; even if it costs a little more than the normal rate that is fine, because I leave once the month is over. That kind of demand is what on-demand instances serve.

Dedicated instances are given to a particular organization so that their isolation is better defined. Understand one thing: AWS and the other cloud platforms are highly secure, and your data is secure whether or not it sits on a dedicated instance; but normally you share the underlying capacity with someone else — the data remains private, but the infrastructure is shared. Companies dealing with highly confidential data want that extra assurance that they are using capacity not shared with anyone, and that is what dedicated instances give you: high security and isolation from other tenants. They are costlier, but you get that isolation.

A spot instance is like bidding. Say I am buying a particular share and my budget is 300; I set a cap — I will bid at most 300 for this share — and if the price goes above 300 I do not take it. Similarly, you can bid a maximum price for an instance; if the instance is available at that price it is given to you, and after a while the price can change and the instance can be taken back, so it is available to you for a limited period. If you are dealing with volatile work that you want to run in real time and can tolerate interruption, you can opt for a spot instance, because it is available at a cheaper price — the price you bid — but it is good for that kind of volatile workload only. We will see a small code sketch of a spot request right after this part.

Finally, the reserved instance is like renting an apartment on a lease for a longer period: I sign an agreement for a year, so the flat is reserved for that whole year, nobody can ask me to vacate, the rent is fixed, and because I am committing for a longer duration there is a chance I end up paying less overall. From the instance perspective, if you know you will need a certain configuration for a certain duration, you can reserve that instance for that duration and probably save a lot of money. AWS gives you the flexibility to upscale, downscale, and terminate according to your needs, but if you are certain you will be using something for a longer duration no matter what, reserved instances are the more affordable option. So those were the different types of instances based on pricing.
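Here is a minimal sketch, assuming boto3 with default credentials, of requesting a Spot instance with a price cap as described above; the AMI ID, instance type, and the cap itself are illustrative placeholders, not values from the session.

```python
# Minimal sketch: requesting a Spot instance with a maximum price you are
# willing to pay. The AMI ID and price cap are illustrative placeholders.
import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.medium",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            "MaxPrice": "0.02",              # bid cap, in USD per hour
            "SpotInstanceType": "one-time",  # do not relaunch after interruption
        },
    },
)
print(response["Instances"][0]["InstanceId"])
```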
We have talked about the general classification of instances — general purpose, GPU and so on — based on what they do, and we have covered the pricing models; there is one more classification, based on how the instances function, so let's take those types one by one.

First, burstable instances. Within the general purpose family there is a category of instances that start with a baseline level of CPU performance available to you. Suppose I know I need around 20 percent CPU utilization: I can go for a burstable instance, which runs at that 20 percent baseline; but if the load is not constant — say my website suddenly experiences more traffic and needs more performance — the instance bursts out of its current level up to 100 percent CPU utilization so I get that performance. You are charged a set amount for these instances and you accumulate credits while you run below the baseline; those credits pay for the burstable performance, and if you do not burst they can be used later, so you get optimized performance and you save some money for the times there is a sudden spike in traffic.

Then there are EBS-optimized instances. These are for applications where you are processing data at a higher speed — say data is flowing in continuously and you need quick responses — because they give you high input/output throughput to the attached EBS storage, which makes them a good choice in those situations. Cluster networking instances form clusters of instances, and a particular cluster serves one kind of purpose: for example, in my application one section needs to process data at a faster rate and another needs to be storage optimized, so I can define different clusters of instances that serve those different purposes. And then there is the dedicated type, which we have already talked about; that one is about data security and isolation. So those were the different types of instances; once we get into the demo this should become clearer.

Now a quick use case before the demo. I have considered edureka itself: imagine edureka used AWS as its cloud partner and used the EC2 service — what kind of problems could be solved by the instances we just talked about? Suppose the first problem is analyzing customer data: that data is never constant, sometimes more is used, sometimes less, so I need burstable performance, and my general purpose burstable instances serve me best. An auto-response email system needs quick responses without a lot of spend, so EBS-optimized instances with provisioned IOPS help there. Search and browsing are two different things I want to do, so I would opt for cluster networking instances. And for confidential data I would opt for the dedicated instances. A very simple use case, so let's move into the demo and understand EC2 a little more.

I have signed into my AWS Management Console. You can sign up for an AWS free tier account and avail these services yourself: just look for AWS free tier and sign up with a credit or debit card; you will not be charged, you have the services free for one complete year, and you can practice most of them — but there is a free tier limit on each service, so check what those caps are so that you do not get charged. This is how the console looks. We are going to learn about EC2, the instance service in AWS, so search for EC2 and you are redirected to the EC2 dashboard. When you talk about EC2 there is a lot you can do here: there is the AWS Marketplace with AMIs — I will tell you what AMIs are in a moment — and you can launch instances and attach or detach volume storage from them. An AMI is an Amazon Machine Image, a template of an instance: once you create an instance with certain applications running on it and certain specific settings, you can create an image of that instance so you do not have to repeat the setup again and again.

Let's first launch an instance. Once you click the launch instance button you are given plenty of options to choose from: you can launch Linux, Ubuntu, or Windows instances, EBS-backed or not, and you can see Amazon Linux, Red Hat, Microsoft Windows, and instances specialized for deep learning, along with their server specifications. If you are practicing, make sure you choose a free tier eligible one. For now I am going to launch a simple Windows instance — let's skip the Ubuntu one, since signing in to it needs some extra setup. On the next page you can see the details: this instance is general purpose — we discussed the other families — and this one is t2.micro; there are also t2.nano, small, medium, and larger sizes, and T3 variants as well. The t2.micro is free tier eligible, with one vCPU and one gigabyte of memory; the instance storage is EBS-backed, and the network performance is low to moderate. Click configure further and you see more configuration details, such as which network and which subnet the instance falls under — the instance sits inside your cloud's network so that its access and security policies can be managed — and we will keep the defaults for now. Next is storage: this is the root volume with 30 GB of space; you could change it to, say, 100, but let's stick to 30. You can see the volume types: general purpose SSD, provisioned IOPS, and magnetic. There is one more kind of volume, the HDD kind, but you cannot attach an HDD volume as root storage; if you want HDD-style storage it has to be attached as a secondary volume — if I add a new volume here and search the type list, it gives me the option of cold HDD. So to have that kind of HDD volume you need to use secondary storage for it.
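As a rough sketch of that storage layout in code — assuming boto3 and a placeholder AMI — the root device uses a general purpose SSD while the cold HDD (sc1) volume is attached only as a secondary device:

```python
# Minimal sketch: an EBS-backed instance with a 30 GB general purpose root
# volume and a cold HDD (sc1) volume attached as secondary storage, mirroring
# the point that an HDD volume cannot be the root device. The AMI ID is a
# placeholder and the root device name depends on the chosen AMI.
import boto3

ec2 = boto3.client("ec2")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        {"DeviceName": "/dev/sda1",
         "Ebs": {"VolumeSize": 30, "VolumeType": "gp2"}},    # root: SSD only
        {"DeviceName": "/dev/sdf",
         "Ebs": {"VolumeSize": 500, "VolumeType": "sc1"}},   # secondary: cold HDD
    ],
)
```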
So cancel that extra volume for now and click next. You can add tags for the simplicity of naming — for example a Name tag of sample-today — and click next again. Then comes the security group. A security group is a set of rules about who gets to access your instance: what kind of traffic you want coming into it and what kind you want flowing out. You can create a new one or use a customized existing one. The rule type here is RDP, which allows traffic from a remote desktop application so I can log in to the system; I can add other rules as well, such as TCP or HTTP rules, and specify the port ranges for them. For now I am allowing RDP traffic from everywhere — the console suggests improving the security, and you can add stricter rules, but this is a basic setup — so I say review and launch. Next it asks you to generate a key pair. A key pair is what lets you log in to your instance; it is an extra layer of security, because you do not want to leave the instance unsecured. You can use an existing key pair or create a new one; I will create a new one, call it vishal34121, and download it. Once it downloads, cut it and paste it somewhere safe — I am moving it to the desktop — because we will need it later, and if you lose this key there is no other way to access your instance, so keep it safe. Then click launch.
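For reference, here is a minimal boto3 sketch of those same two console steps — creating a key pair and a security group with an open RDP rule. The names mirror the demo, and the wide-open 0.0.0.0/0 rule is for demonstration only.

```python
# Minimal sketch: key pair plus a security group allowing RDP (port 3389).
import boto3

ec2 = boto3.client("ec2")

# Key pair: save the returned private key; it cannot be downloaded again.
key = ec2.create_key_pair(KeyName="vishal34121")
with open("vishal34121.pem", "w") as f:
    f.write(key["KeyMaterial"])

# Security group with an inbound RDP rule open to all IPs (demo only).
sg = ec2.create_security_group(
    GroupName="sample-rdp-sg",
    Description="Allow RDP for the demo instance",
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp", "FromPort": 3389, "ToPort": 3389,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)
```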
Launching takes a minute or two, so bear with me: once you launch, a couple of steps happen — some security and status checks — and once those pass the instance is up and ready. Meanwhile, let me take you to the EC2 instances list. There are three instances running here — this is someone else's account, so there are a few other instances — and you can see there is one that is initializing: that is the one we just launched; note its ID. Let's also look at this stopped instance to understand the options. You can get the Windows password, you can create a launch template from the instance, and you can start it; since it is already stopped, the stop, hibernate, and reboot options are not available. Why would you stop an instance? When you want to take snapshots or create Amazon Machine Images from it, you stop it so that no activity happens on it and you capture an exact snapshot; that is why you stop an instance before doing those kinds of operations. Once you start it again it functions normally, just as it was, and if you are done using an instance you can terminate it then and there. Under instance settings there are more options: you can add tags, you can attach or replace IAM roles — the user access management policies — and you can change the instance type by clicking on it and moving to a higher version. Why would you need that? Suppose my instance supports the traffic I experience today, but in future I need to cater to more traffic; unlike on-premises infrastructure, where you would have to buy new servers and move the data onto them, here you just click change instance type and in a couple of seconds your instance is upscaled to a bigger size — that is why it is highly scalable. You can also change termination protection, which is for data safety: if it is turned on and I try to delete the instance accidentally, I cannot — I have to go into the instance settings, change the policy, and only then delete it, so I cannot delete it unknowingly.
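A minimal sketch of those two settings with boto3 — resizing the instance and enabling termination protection; the instance ID and target type are placeholders. The type can only be changed while the instance is stopped, which is why the sketch stops and restarts it.

```python
# Minimal sketch: scale an existing instance to a larger type and turn on
# termination protection. Instance ID and target type are placeholders.
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"

# The instance type can only be changed while the instance is stopped.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(InstanceId=instance_id,
                              InstanceType={"Value": "t2.small"})
ec2.modify_instance_attribute(InstanceId=instance_id,
                              DisableApiTermination={"Value": True})

ec2.start_instances(InstanceIds=[instance_id])
```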
While we were talking about these settings, our instance has come up and is ready, so let's connect to it. Click connect and it offers to download the remote desktop file — the RDP part I mentioned — and I also need the password to log in. To get it, click get password, choose the key file we saved — the vishal file on the desktop — open it, decrypt the password, and copy it. Now launch the remote desktop file, paste the password, accept the prompt asking whether you want to log in unsecurely, and a Windows instance is launched. It is just like your Windows operating system, but it is running alongside my existing system: I now have one more Windows device, so I can do something in this one and something else in my normal Windows machine. That is what your instance gives you — a virtual machine to work on; I believe by now you understand what a virtual machine is. Let's close it and see what else there is to cover. Back in the console, I mentioned that you can take snapshots: these are AMIs — an AMI is an image — so I can create an image of an instance I already have. There are volumes here too; our instances are EBS-backed, meaning block storage is attached to them. Can I attach another storage volume? Yes: I can remove the previous volume and attach a different one. Take this volume, go into actions, and create a snapshot of it; once I have a snapshot I can attach a volume made from it to an existing instance. To replace the volume that is already attached, I first stop the instance, then come to volumes and detach the volume that is currently attached, then select the snapshot-based volume and attach it to the instance; I can also create an image from here. One thing to note is the region in which the instance was created: I am working in the Ohio region. AWS has data centers in different regions of the world, and you choose the region that is convenient to you and suits your business needs; if my instance is in a particular region, I need to create the snapshot in that same region and then attach that volume to the instance. So by now you have seen what instances are, how to launch them, how to create them, and how to make them work. One more important point: make sure you terminate your instances so that you avoid any charges. This being a free tier account there should not be many charges, but terminating what you are not using is good practice, because certain services can charge you a lot more. So I am terminating the instances I created today; in a minute or two they will be terminated, end to end.
[Music] According to Amazon, AWS Lambda is a serverless compute service. That means developers don't have to worry about which AWS resource to launch or how to manage it; all they have to do is put the code on Lambda and it gets executed — it is as simple as that. This not only saves time, it also lets you focus on your core competency, which is app building, or the code itself. AWS Lambda executes your backend code while automatically managing the AWS resources, and managing includes launching or terminating instances, health checkups, auto scaling, applying updates and patches, and so on. So how does Lambda actually work? The code you want Lambda to run is known as a Lambda function, and as we know, a function runs only when it is called; here the event source is the entity that triggers the Lambda function, and then the task is executed. Let me take an example. Suppose you have an application for image uploading: uploading an image involves a lot of tasks, such as storing the image, resizing it, applying filters, compression, and so on. That task of uploading an image can be defined as the event source, or the trigger; the trigger calls the Lambda function, and all of those tasks are then executed by it — the developer just has to define the event source and upload the code. Now take another instance of the same example, where we upload the images as objects to an S3 bucket; that upload becomes the event source, and the whole process can be divided into five steps. First, the user uploads an image, as an object, to a source bucket in S3 that has a notification attached to it for the Lambda function. S3 reads this notification and decides where to send it. S3 then sends the notification to Lambda, and that notification acts as the invoke call for the Lambda function. The execution role in Lambda — an IAM, or identity and access management, role — gives access permission for the AWS resources used, in this example the S3 bucket. Finally, Lambda invokes the desired function, which works on the object that has been uploaded to the S3 bucket. If you were to solve this scenario traditionally, then along with development you would have to hire people to size, provision, and scale groups of servers, manage operating system updates, apply security patches, and monitor the infrastructure for performance and availability — an expensive and tiresome task, so the need for AWS Lambda is justified. AWS Lambda is compatible with Node.js, Python, Java, and more; all you have to do is upload your zip file, define an event source, and you are ready to go. So by now you should understand what AWS Lambda is and how it actually works.
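To make the image-upload example concrete, here is a minimal sketch of a Python handler for an S3 event notification; the actual image processing is left as a hypothetical stub (resize_image is not a real helper from the session).

```python
# Minimal sketch: a Lambda handler that reads the bucket and object key from
# an S3 event notification; real processing (resize, filter, compress) would
# happen where the stub comment is.
import urllib.parse

def lambda_handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        print(f"New object uploaded: s3://{bucket}/{key}")
        # resize_image(bucket, key)  # hypothetical processing step
    return {"status": "processed"}
```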
Moving ahead, let's understand where to use the Lambda compute service, with a brief comparison against the other AWS compute services. Consider that you are an architect and need to design a solution for some problem; you have various AWS services that could execute the task, such as AWS EC2, AWS Elastic Beanstalk, AWS OpsWorks, and AWS Lambda. OpsWorks and Elastic Beanstalk are used to deploy an application, but our use case is not to create an app, it is to execute backend code — so why not EC2? AWS Lambda versus AWS EC2: if you used EC2 you would have to architect everything yourself — the load balancer, EBS volumes, software stacks, and so on. With Lambda you don't have to worry about any of that; you just insert your code and AWS manages the rest. For example, on EC2 you would install the software packages on your virtual machine that support your code, whereas with Lambda there is no virtual machine to think about — you supply the plain code and Lambda executes it for you. At this point there is something important to know: if your code will be running for hours together and you expect a continuous stream of requests, you should probably go with EC2, because Lambda's architecture is meant for a sporadic kind of workload, where there are some quiet hours and some spikes in the number of requests. Say, for example, logging the email activity of a small company: a small company has more activity during the day than at night, there could be days with fewer emails to process, and sometimes the whole world seems to start emailing you — in both cases Lambda is at your service. But consider the same use case for a big social networking company, where the emails never stop because of its huge user base: there, Lambda may not be an apt choice. That was EC2 and Lambda; I hope you now see where to use each. Moving on to pricing: like most AWS services, AWS Lambda is a pay-per-use service, meaning you only pay for what you use, so you are charged for the number of requests you make to the Lambda function and for the duration for which your code executes. For requests, you are charged for the number of requests across all your Lambda functions, and Lambda counts a request each time it starts executing in response to an event source or an invoke call, including tests invoked from the console. The duration is calculated from the moment your code starts executing until it returns or terminates, rounded up to the nearest 100 milliseconds, and the price also depends on the amount of memory you allocate to your function.
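As a back-of-the-envelope illustration of that request-plus-duration model, here is a small Python sketch; the per-million-request and per-GB-second rates are assumed example figures, not quoted AWS prices.

```python
# Rough, illustrative Lambda cost estimate. The rates below are example
# figures only; check current AWS pricing for real numbers.
requests_per_month = 3_000_000
avg_duration_ms = 120          # rounded up to the nearest 100 ms
memory_gb = 0.5                # 512 MB allocated

billed_ms = -(-avg_duration_ms // 100) * 100            # ceil to 100 ms -> 200
gb_seconds = requests_per_month * (billed_ms / 1000) * memory_gb

price_per_million_requests = 0.20      # assumed example rate (USD)
price_per_gb_second = 0.0000166667     # assumed example rate (USD)

cost = (requests_per_month / 1_000_000) * price_per_million_requests \
       + gb_seconds * price_per_gb_second
print(f"Estimated monthly cost: ${cost:.2f}")
```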
Now for the most interesting part of the session, the hands-on demonstration of the AWS Lambda service. If you don't have an AWS account and want to create your own, check out edureka's AWS crash course video; presuming you already have an account and have logged into the console, let's see how to create a Lambda function. Note that a new account gets AWS services free for a year — see the AWS free tier services page on Amazon's official website if you want to know more. On the AWS Management Console you can see the huge list of services AWS provides; scroll down and you will notice the Lambda service sits under the Compute section, so click on it to move to the functions list page. This page shows the functions you have created in the current AWS region; note that recently created functions might not appear immediately and may take a moment to show up. The page has several options, such as create function, filter, and actions, and if you are a returning user who has already created some Lambda functions you will see them listed; otherwise the list will be empty. To create a Lambda function, simply click the create function option at the top right corner of the page. Also notice that the actions drop-down list is not clickable: the actions are performed on Lambda functions that already exist, so with no functions you cannot click it, but if you select an existing function the list becomes enabled and offers three options — view details, test, and delete — which, as the names suggest, let you view the function's details, test it, or delete it if it is no longer required. Since we are focusing on creating a function, let me click create function. On the create function page there are three options; the first, author from scratch, lets you create a Lambda function entirely on your own: give it a suitable name and specify the runtime you prefer. In our case the name of the function will be hello world and the runtime, or language, will be Python — I'll select Python 3.8, but you can go ahead and select any runtime you wish. After the runtime comes permissions, meaning the function's permissions: by default Lambda creates an execution role with permission to upload logs to Amazon CloudWatch Logs. Amazon CloudWatch is a monitoring and management service: it provides data and actionable insights for AWS, hybrid, and on-premises applications and infrastructure resources, and it lets you collect and access all your performance and operational data in the form of logs and metrics from a single platform. The default role can be customized later when you wish to add triggers. Now click the create function option; it takes a few moments, and once the function has been created you see the message on screen: successfully created the function hello world. This page lets the user manage the code and the configuration; once you have provided the required code and configuration you click the test option to execute it, but first let me walk you through the page. The first element is the designer, where you can add triggers, layers, and destinations to the Lambda function you have created. A trigger is an AWS service or resource used to invoke the Lambda function; if you want to connect your function to one, click the add trigger option and you get a drop-down with a huge list of services to choose from. For the first part of this demonstration I won't use any of these, so I'll click cancel and get back to the previous page. The next thing, right in the middle of the screen, is two options: the function you just created — hello world in my case — and layers. By default the function is selected, and when you scroll down you see the code editor and a sample that has been generated automatically: a very simple lambda_handler that just returns a message saying hello from Lambda. The code editor in the Lambda console lets you write, test, and view the execution results of the function code; for the purpose of this tutorial let me just change the message to hello from edureka and save it.
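For reference, the saved handler looks roughly like this — approximately what the console template generates for a Python "author from scratch" function, with the message edited as in the demo:

```python
# Approximate console-generated handler, with the return message changed.
import json

def lambda_handler(event, context):
    return {
        "statusCode": 200,
        "body": json.dumps("Hello from edureka"),
    }
```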
Now scroll back up and select the layers option. Layers are resources that contain libraries, a custom runtime, or other such dependencies; you can create layers to separate your function code from its dependencies. By default no layer has been added; to add one, click layers and then choose the add a layer option, and to include libraries in a layer you place them in the directory structure that corresponds to your programming language. Since we have a very simple function I won't need any extra layer, but just for the sake of showing you, let me click add a layer. There is a drop-down list here as well: by default there are two AWS-provided layers in this list, one for Perl and one that includes the SciPy library for Python 3.8. There are two other options, custom layers and specify an ARN. Under custom layers you would see any previously used or created layers — since I have not used any, my list is blank. ARN stands for Amazon Resource Name; entering an ARN lets you use a layer shared by another account, or a layer that does not match your function's runtime, and the format in which to specify the ARN is shown in the text bar itself. Let's get back to our previous page, so I'll click cancel. Towards the right side of the page is something called destinations: destinations are AWS resources that receive a record of an invocation after success or failure. You can configure Lambda to send invocation records when your function is invoked asynchronously, or when your function processes records from a stream; to add one, choose the add destination option — the contents of the invocation record and the supported destination services vary by source. The next element on the page is permissions: like I mentioned, the default execution role used for this hello world function has permission to store logs in Amazon CloudWatch Logs, and that is the only permission we have as of now; the resource permissions can vary depending on the role you select for your function. Finally there is monitoring: when you click the monitoring element you see some graphs that show no data yet, because I have not invoked the function; once it has been invoked, you can monitor, trace, and debug your Lambda functions from here. Coming back to our hello world function, let me scroll down to the code editor. To invoke the function I will have to test it; before clicking test, let me show you the default test event. You can open the test configuration either from the select test event dialog box or from the drop-down list next to the test option in the code editor, so let's click configure test events. There is a default test event present here, and by default the create new test event option is selected; the other option, edit saved test events, cannot be selected because I have not created any test event before. Let's just give this event a name — I'll say event1 — and click create. Now the function is all set; to invoke it I click the test option, and I get a dialog box that says the execution result has succeeded.
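Besides the console's Test button, the same function can be invoked from code; here is a minimal boto3 sketch, where the function name and payload are assumptions matching the demo rather than values shown on screen.

```python
# Minimal sketch: invoking the demo function programmatically.
import json
import boto3

lam = boto3.client("lambda")
resp = lam.invoke(
    FunctionName="hello-world",          # assumed name of the demo function
    InvocationType="RequestResponse",    # synchronous, like the console test
    Payload=json.dumps({"key1": "value1"}),
)
print(resp["StatusCode"], resp["Payload"].read().decode())
```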
When you open the details you can see the summary and the log output. The summary section shows key information such as the time taken for the execution, the billed duration — 100 milliseconds — the request ID, and so on, as reported in the log output; the log output section shows the logs generated by the Lambda function's execution. Now when you check the monitoring page, all the metrics have been updated: for instance, hover over the error count and success rate metric and you can see that our program executed with a 100 percent success rate and zero errors, and you can similarly check other metrics such as the number of invocations, the duration, and so on. Now let's do something that shows us real-time metrics: I'll add a trigger, an API Gateway, so I can see how many times my function has been invoked, the average duration, and so forth. Go back to the configuration element, click add trigger, open the drop-down list, and select API Gateway. I will create an API with the default HTTP API type, and I can keep the security open since this is just for demonstration — please note that in real-world scenarios this is not the case. The name will just be hello world API and the deployment stage will be default, so just create the API. Now there is an API endpoint on the screen; let me open that link in a new tab and repeat the same thing a few times — I have opened the link in four new tabs (a small scripted alternative to clicking the link is sketched after this walkthrough). Back in the console, open the metrics page and see what happens: there is some real-time data now, four invocations with the corresponding minimum and maximum durations and so on. There are many more things you can do with AWS Lambda functions, but for the sake of this session we won't go into further detail; there is one last, important task, though — once you are done using your Lambda function, make sure you delete it, which you do by clicking the actions drop-down list and selecting the delete function option. So congratulations, you have learned how to create and manage your AWS Lambda function, which is an important step in learning how to run applications without needing to provision or manage servers.
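For completeness, here is a small sketch of hitting the API Gateway endpoint from a script instead of opening it in browser tabs; it assumes the third-party requests library is installed, and the endpoint URL is a placeholder for the one shown in your own console.

```python
# Minimal sketch: generate a few invocations against the API endpoint.
import requests

endpoint = "https://abc123.execute-api.us-east-2.amazonaws.com/default/hello-world"  # placeholder
for i in range(4):
    r = requests.get(endpoint, timeout=10)
    print(i + 1, r.status_code, r.text)
```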
[Music] If I have to define Elastic Beanstalk in Amazon terminology, it is a platform as a service where you can deploy applications you might have developed with programming languages like Java, .NET, PHP, Node.js and many others, on familiar servers such as Apache, Nginx, Passenger, and Tomcat. That definition seems to have a lot of technical terms, doesn't it? So let's figure out what Elastic Beanstalk is in simple terms. Let's say you need to build a computer tonight; you have two ways to go at it. First, you can go to a computer warehouse, a place where the different components of a computer are laid out in front of you — CPUs, motherboards, routers, disk drives, modems, and many other components; you choose whichever components you need, assemble them, and form a brand new computer. This is similar to the situation where you try to deploy an application without using Elastic Beanstalk: you have a list of tasks to do yourself, like deciding how powerful you want your EC2 instance to be, choosing a suitable storage and infrastructure stack for your application, and installing separate software for monitoring and security purposes as well. Moving on to option B, you can always visit an electronics retail store that has pre-configured computers laid out in front of you; let's say you are a graphic designer and want a computer with a modern graphics setup installed — all you do is specify that requirement to a salesperson and walk out with a computer of your choice. I personally prefer this option, and it is similar to deploying an application using Elastic Beanstalk: all you have to do is concentrate on your code, and the rest of the tasks — installing EC2 instances, auto scaling groups, maintaining security, monitoring, and so on — are done by Elastic Beanstalk. That is the beauty of it. So let's go back and take a look at the definition again and see if we understand it this time: Elastic Beanstalk is a platform as a service where developers just have to upload their application, and load balancing, auto scaling, and application health monitoring are all handled automatically by Elastic Beanstalk. Now let's see how Elastic Beanstalk, as a platform as a service, is beneficial to an app developer. I'm sure most of you know what platform as a service is, but to refresh: it is a cloud computing service that provides you a platform where you can deploy and host your application. Elastic Beanstalk makes the process of app development much more fun and less complex, and I have five points to prove that to you. Firstly, it offers quicker deployment: if you are developing an app by yourself you have to do a lot of tasks on your own — decide on EC2 instances, choose a suitable storage and infrastructure stack, set up auto scaling groups, and install separate software for monitoring and security — which takes quite a lot of time; but if you use a platform as a service to develop your app, all you have to do is write proper code for your application, and the rest is handled by the platform — Elastic Beanstalk in this case — which makes the entire process much faster. Secondly, Elastic Beanstalk simplifies the entire app development process: like I said, all developers have to do is concentrate on the code, and the rest — servers, storage, networking, virtualization, operating systems, databases, and their management and monitoring — is done by Elastic Beanstalk. Thirdly, using a platform as a service to deploy your application makes the whole process more cost effective: deploying by yourself means installing separate software for monitoring and security, for which you would pay a lot of extra money, whereas Elastic Beanstalk provides all this additional software as a package, so you avoid paying unnecessary operating costs. Also, Elastic Beanstalk offers a multi-tenant architecture, by which I mean it makes it easy for users to share the application on different devices, and that too with high security.
When I say high security, the platform as a service gives you a detailed report regarding your application's usage and the different users who are accessing it; with that information you can be reasonably sure your application is not under any cyber threat. And finally, platform as a service provides an option to know whether the users of your application are getting a better experience out of it or not: you can collect feedback at different stages of your app development — the development stage, testing stage, production stage, design stage — so you have a report on how your application is performing at every level and can make improvements if needed. So that is how a platform as a service like Elastic Beanstalk makes it easy for a developer to build an all-round solid app; you will be able to relate to these points when we deploy an application using Elastic Beanstalk in the later part of the session. In the market there are quite a lot of application hosting platforms providing platform as a service; let's have a look at a few of them. First there is OpenShift, a web hosting platform offered by Red Hat; then Google App Engine, which we all know; Scalingo, a platform as a service where you can apparently deploy your application in about two minutes onto a production-ready environment where all you do is deploy your application code; PythonAnywhere, an online integrated development environment and web hosting service based on the Python language; Elastic Beanstalk, offered by Amazon; Azure App Service by Microsoft; and many others — but today our main focus will be Elastic Beanstalk, the hosting platform offered by Amazon. Now that you have a basic understanding of Elastic Beanstalk, let's go ahead and take a look at a few of its features. Most of these features are similar to the ones we discussed earlier: Elastic Beanstalk makes the app development process faster and simpler for the developer, and all the developer has to do is concentrate on developing code — the configuration details and the management and monitoring details are handled by Elastic Beanstalk. Also, Elastic Beanstalk automatically scales up the AWS resources that have been assigned to your application, based on your application's specific needs. But there is one feature specific to Elastic Beanstalk: suppose you have deployed an application using it, but now you want to make changes to the configurations that have already been assigned to your application. Though Beanstalk is a platform as a service, it provides you an option where you can change the pre-assigned configurations, like you would with infrastructure as a service — if you remember, when you use infrastructure as a service to deploy an application you have full control over the AWS resources, and similarly Beanstalk provides you with full control over your AWS resources and access to the underlying resources at any time. Now let's try to understand Elastic Beanstalk a little deeper: first we'll discuss a few components of Elastic Beanstalk, then have a look at its architecture. First we have something called an application.
Suppose you have decided to do a project: you go ahead and create a separate folder on your personal computer dedicated to that project, and if the project needs an Apache server, a SQL database, and a development tool like Eclipse, you install all that software and keep it with the project folder so it is easy to access whenever you need it. Similarly, when you deploy an application on Elastic Beanstalk, Beanstalk creates a separate folder dedicated to your application, and in AWS terms that folder is what we call an application. If I have to define it in technical terms, it is a collection of different components: environments, your application versions, and environment configurations — we'll see a small SDK sketch of this hierarchy at the end of this part. Let's take these components one by one. First, application version: suppose you have written code, stored it in a file, deployed that code on Elastic Beanstalk, and your application has been launched successfully; now you want to make certain changes to the code, so you open the file, make the changes, save it, and deploy it again, and Elastic Beanstalk again launches your application successfully — you now have two versions of your application. A version is just a copy of your application code with different changes, and Elastic Beanstalk provides an option to upload different versions of your application without even deleting the previous ones. Then we have the environment: the environment is the place where you actually run your application. When you launch an Elastic Beanstalk environment, Beanstalk starts assigning various AWS resources — EC2 instances, auto scaling groups, a load balancer, security groups — to your application. The point to remember is that at a single point of time an environment can run only a single version of your application, but Elastic Beanstalk gives you the option to create multiple environments for that single application: suppose I want a different environment for different stages of my app — one for development, one for production, one for testing — I can go ahead and do that, and with the same or different versions of your application installed on all those environments, it is possible to run all of those application versions at the same time. I hope that was clear; you will understand it practically when we deploy an application later in the session. Then there is something called the environment tier: when you launch an Elastic Beanstalk environment, it asks you to choose between two environment tiers, the web server environment and the worker environment. If you want your application to handle HTTP requests, you choose the web server environment, and if you want your application to handle background tasks, that is where the worker environment comes into the picture. I'll show which one to choose and how to work with them when we deploy an application later.
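Here is a minimal boto3 sketch of that application / version / environment hierarchy, under the assumption that an application bundle already sits in S3; the bucket, key, and solution stack string are placeholders (real stack names can be listed with list_available_solution_stacks).

```python
# Minimal sketch: application -> application version -> environment.
import boto3

eb = boto3.client("elasticbeanstalk")

eb.create_application(ApplicationName="tomcat-app")

eb.create_application_version(
    ApplicationName="tomcat-app",
    VersionLabel="v1",
    SourceBundle={"S3Bucket": "my-app-bucket", "S3Key": "app-v1.war"},  # placeholders
)

eb.create_environment(
    ApplicationName="tomcat-app",
    EnvironmentName="tomcat-app-env",
    VersionLabel="v1",
    SolutionStackName="REPLACE_WITH_A_TOMCAT_SOLUTION_STACK",  # see list_available_solution_stacks()
)
```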
And lastly there is environment health: based on how your application is running, Beanstalk reports the health of your web server environment, and it uses different colors to do so. Grey indicates that your environment is currently being updated — say you installed one version and are now uploading a different one and it is taking a while, so during that time it shows grey, meaning the environment is still in the updating process. Green means your environment has passed the most recent health check, yellow means it has failed one or more checks, and red means it has failed three or more. Moving on, let's understand the architecture of Elastic Beanstalk. Like I said earlier, when you launch a Beanstalk environment you are asked to choose between two environment tiers. First, the web server environment, which handles HTTP requests from clients; it has different components. There is the environment itself — you know what that is, the place where your application actually runs — and Beanstalk lets you create multiple environments, with the main point being that at a point of time a particular environment can run only one version of your application. Then there is the elastic load balancer: let's say your application is receiving a lot of requests; what the elastic load balancer does is distribute all those requests among the different EC2 instances so that all the requests are handled and no request is denied. What actually happens is that when you launch an environment a URL is created, and this URL, in the form of a CNAME — which is just an alternate name for your URL — is made to point to the elastic load balancer; so when your application receives requests, they are all forwarded to the elastic load balancer, and the load balancer distributes them among the EC2 instances of the auto scaling group. Then there is the auto scaling group: if your web server is handling a lot of traffic and running short of EC2 instances, the auto scaling group automatically launches a few more, and similarly, if the traffic is very low it automatically terminates under-used EC2 instances. Then the EC2 instance: whenever you launch an Elastic Beanstalk environment, Beanstalk assigns a suitable EC2 instance to your application, but the software stack — the operating system, the servers, and the other software that is supposed to be installed on your instance — is decided by something called the container type. For example, if my environment has an Apache Tomcat container, it installs the Amazon Linux operating system, the Apache web server, and the Tomcat software on my EC2 instance; depending on your application's requirements, it installs a different software stack on your instances. Then there is a software component called the host manager, which runs on every EC2 instance that has been assigned to your application; it is responsible for various tasks — it gives you a detailed report on the performance of your application, provides instance-level events, monitors your application log files, and monitors your application server — and you can view all these metrics and log files and create various alarms on the CloudWatch monitoring dashboard. Then you have security groups: a security group is like a firewall for your instance, so not just anybody can access it — it is there for security purposes — and Elastic Beanstalk has a default security group that allows clients to access your application using port 80.
You can define more security groups if you need, and Elastic Beanstalk also provides an option to define a security group for your database, again for security purposes. Moving on, we have the worker environment. The first question that comes to mind is: what is a worker? Suppose your web server has received a request from a client, but on the way, while it is trying to process the request, it comes across tasks that consume a lot of resources and take a lot of time; because of that, it is quite possible your web server will deny other requests. So it forwards those tasks to something called a worker, and the worker handles all of them on behalf of the web server. Basically, a worker is a process that handles background tasks that are time-intensive and resource-intensive, and in addition, if you want, you can use a worker to send email notifications, generate metric reports, and clean up databases when needed. Let's understand why we need a worker with the help of a use case. I have a client who has made a request to the web server; the web server accepts the request and starts processing it, but while processing it comes across tasks that take a lot of time. Meanwhile the client sends another request, and since the web server is still processing the first request, it denies the second one. The result is that the performance, and the number of requests accepted by the web server, drop drastically. Alternatively, the client makes a request, the web server accepts it, starts processing, and again comes across tasks that take a lot of time — but this time it passes those tasks to the worker environment, the worker environment handles them, and request one is completed successfully; meanwhile, when the second request arrives, the server has finished processing request one, so it accepts request two as well. I hope the scenario is clear: all we are doing by adding a worker environment is avoiding spending a lot of time on a single request. Now you know what the web server environment and the worker environment are and why we need the worker environment — but there has to be some way for the web server environment to pass these tasks on to the worker environment, so let's see how. The web server environment, having received a request and encountered tasks that are taking a lot of time, creates an SQS message — SQS is the Simple Queue Service offered by Amazon — and that message is then put into an SQS queue, where the different requests are arranged based on priority. Meanwhile, when you launch a worker environment, Elastic Beanstalk installs something called a daemon; what the daemon does is pull the SQS messages from the SQS queue and send those tasks to the web application that is running in the worker environment.
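For a feel of that hand-off, here is a minimal boto3 sketch of the web tier placing a task message onto an SQS queue for the worker tier's daemon to pull; the queue URL and the message payload are made-up examples.

```python
# Minimal sketch: the web tier hands a long-running task to the worker tier
# by writing a message to the SQS queue that the worker daemon polls.
import json
import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-2.amazonaws.com/123456789012/worker-queue"  # placeholder

sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"task": "send_report_email", "user_id": 42}),
)
```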
I think that was a lot of theory, so don't worry, we have arrived at the fun part of the session where we'll try to deploy an application using Elastic Beanstalk. By actually creating an application on Elastic Beanstalk you'll understand the different concepts, the architecture and the environment tiers in practice, so let's go ahead. This is my AWS Management Console; if you want to take a look at all the services they are listed here, but we are mainly concerned with Elastic Beanstalk, which I have used recently, so it shows up under recently used services. I'll choose Elastic Beanstalk, and this is the Beanstalk console. If you're deploying an application for the first time, this is the page you land on. When we scroll down it says I can deploy an application in three easy steps: select a platform of my choice, upload my application code if I have one or use a sample application, and then run it. Let's see if it's as easy as it says. Go ahead and click on the create new application option; it asks for an application name and description. I'm going to name my application Tomcat app, give it the description my new web app, and click on create. When I create an application it creates a separate folder dedicated to that application, and in that folder we have different components: my environments, my application versions, and any saved configurations. Now let's go ahead and create an environment. On the right side there's an Actions option; click on it and you get different choices, and you can select create environment. Again it asks you to choose between the two environment tiers: the web server environment, where your application handles HTTP requests from clients, and the worker environment, where your application processes background tasks that are time-intensive and resource-intensive. In this demo I'm going to work only with the web server environment; you can explore and create a worker environment on your own once you understand how to deploy an application on Elastic Beanstalk. So I click on select, and it takes me to a page where I have to give a domain name, or in technical terms a URL, to my application. You can give any URL of your choice and check whether it's available; let's say mytomapp, and it says the domain name is available. For the description I'll keep the same as before, my new web app. Scrolling down, it asks for a platform. There are different options: Go, .NET, Java, Ruby, PHP, Node.js, Python and Tomcat, and if you're trying to deploy an application on a platform that isn't listed, there is a custom platform option, so you can configure your own platform and deploy it on Elastic Beanstalk. I'm going to choose the Tomcat platform for my application, and since I'm not a developer I'll just use the sample application provided by Amazon. But if you have developed your own application code, you can package it in a file and upload it.
The console says you can upload your code, and you need to convert it to a zip or WAR file before uploading. I'm going to select the sample application and click on create environment. It's going to take a while for Elastic Beanstalk to launch my environment, though nowhere near as much time as it would have taken me to build the whole thing myself. While Elastic Beanstalk is launching the environment, let's revisit some of the benefits we discussed earlier in the session. First, I said it speeds up the process of getting an app running, and that's true: all I did was select a platform of my choice, and the rest is done by Elastic Beanstalk itself, so it saves a lot of time. Similarly, it simplifies the process of app deployment: launching EC2 instances, creating security groups and Auto Scaling groups, assigning IP addresses, all of that is done by Elastic Beanstalk. I also mentioned that Elastic Beanstalk gives you the option of changing the pre-assigned configuration, and we'll explore that once the environment is created. Let's see what Elastic Beanstalk is doing. It says it has created storage for my environment, an S3 bucket, so that the files containing my application code are stored there; then it has created a security group and an elastic IP address, and now it says it's launching an EC2 instance. So you see, it's as easy as that: select a platform of your choice and the rest is handled by Elastic Beanstalk, and later on, if you're not satisfied and want to change some configuration, you can go ahead and do that. Look at this: this is the URL, the domain name I assigned to my app, it says a new instance has been added, and it's showing each task as it performs it, which is handy because you always know what your environment is currently doing. It says it has added an instance to my application and my environment has been launched successfully; it has finished almost all its tasks and has taken me to the environment page. So this is my environment page, or you could call it a dashboard. First you have the environment health; it says green, which means my environment has passed its health check. Then it shows the version of the application; since I used the sample application it says sample application here. Since I chose Tomcat as my platform, it has installed a suitable infrastructure stack, Amazon Linux with Java 8. Let's explore this page. First we have configurations. Like I said, even though this is a platform as a service, it gives you the option of changing the configuration, so you keep full control of your resources. First we have instances: when I click on the modify option, you can see that Elastic Beanstalk has assigned a micro instance to our application, and if I want I can change it to a different instance type based on my application's requirements. Scrolling down, I have CloudWatch monitoring; if I want detailed monitoring I can choose one minute, and if basic monitoring is enough I can choose five minutes. Then I have the option of changing the storage assigned to my application as well; it offers magnetic storage, general purpose SSD and provisioned IOPS.
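For comparison, here is a rough boto3 sketch of the same create-application and create-environment steps done from code rather than the console. The names and region are placeholders, and the Tomcat solution stack is looked up rather than hard-coded because the exact stack names change over time; with no version label supplied, Beanstalk deploys its sample application, just as in the console walkthrough.

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-2")

# Create the application "folder" that the console created for us.
eb.create_application(
    ApplicationName="Tomcat app",
    Description="my new web app",
)

# Pick a current Tomcat platform from the list of available solution stacks.
stacks = eb.list_available_solution_stacks()["SolutionStacks"]
tomcat_stack = next(s for s in stacks if "Tomcat" in s)

# Launch a web server tier environment; the CNAME prefix becomes the URL.
eb.create_environment(
    ApplicationName="Tomcat app",
    EnvironmentName="mytomapp-env",
    CNAMEPrefix="mytomapp",
    SolutionStackName=tomcat_stack,
)
```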
When we scroll down again we see the security groups; I can click on one and that security group will be added to my application. Once you've made your changes you click on apply; I haven't actually changed anything, but I'll click it anyway. Now Elastic Beanstalk is updating my environment, so it shows grey. If you recall, I mentioned earlier that grey indicates the environment is being updated. Let's go back to configurations. We've looked at instances; next there is capacity. At the moment Elastic Beanstalk has assigned a single instance to my application, but if I want I can change it to an Auto Scaling group. There's a load balancer option; you can click on it and set the minimum and maximum number of instances your Auto Scaling group can launch, and if you had chosen the load balanced option earlier, a load balancer would have been enabled here. Then we have the monitoring details, which give you two options, enhanced monitoring and basic monitoring, and when we scroll down there's a streaming to CloudWatch Logs option, so if you want you can view your log files on the CloudWatch dashboard as well, and you can set the retention period to whatever you like. If you want your application kept private you can create a private VPC for it, and similarly you can increase or decrease the amount of storage. The point of walking through all this is that your hands are not tied: you can change the configuration whenever you want. Then we have the logs option: if you want to look at the most recent lines of your log files there's an option that says last 100 lines, and if you want the full log files you can click the other option and it gives you a file to download. Then we have the health option, which shows the health of your AWS resources; basically it shows the EC2 instance, and it says it has been six or seven minutes since my EC2 instance was launched. Then you have monitoring, which shows metrics like CPU utilization, network in and network out, and if you want you can create an alarm with the alarm option, say to be notified when CPU utilization is high or when your Auto Scaling group is running short of EC2 instances. Then you have events. Events are basically a list of everything that has happened since you started launching the environment: the same things we saw earlier on the console are listed here, create environment is starting, the EC2 instance was launched, the security group and elastic IP address were created, and so on, from the moment Elastic Beanstalk started launching your environment until the moment you terminate it. Then you have an option for setting key-value properties as well. Let's go back. This is the sample application I've been using; now let me try to upload and deploy a new application version. I'll go to the documentation, select Elastic Beanstalk, open the developer guide and click on getting started; when you scroll down to deploying a new application version, there are sample application bundles for the different platforms.
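The upload and deploy steps that follow look roughly like this in boto3, assuming the new bundle has already been copied to an S3 bucket; the bucket, key and label names here are placeholders.

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-2")

# Register a new application version from a bundle already sitting in S3.
eb.create_application_version(
    ApplicationName="Tomcat app",
    VersionLabel="new version",
    SourceBundle={"S3Bucket": "my-app-bundles", "S3Key": "tomcat-v2.zip"},
)

# Deploying is just pointing the running environment at the new version label;
# the environment goes grey (updating) and comes back green when done.
eb.update_environment(
    EnvironmentName="mytomapp-env",
    VersionLabel="new version",
)
```

Note that older version labels are kept around, so rolling back is the same call with the previous label.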
Since I've chosen Tomcat as my platform, I have a Tomcat zip file which I've already downloaded, so I'm just going to upload that. There is an upload and deploy button here, but let's go back to our application folder instead, where there's an application versions option that gives you upload and deploy as separate steps. I'm going to upload first and deploy afterwards: version label new version, attach the zip file, and click on upload. The new version of my application has been uploaded, but it hasn't been deployed yet, so when I go back you can see the environment is still running the same version as before. Now let's go back and deploy it: I select the new version, click on deploy, and confirm. Back on the environment page, the environment is being updated, so we see the grey colour again, and once the update finishes it shows the new version label. As you can see, it now shows the version label of my new application version, and like I said earlier, both application versions are still there; you don't have to delete old versions when you create a new one, and you can keep multiple versions of your application uploaded. Going back to the Actions option, you have load configuration, which loads a saved configuration, and you have save configuration: suppose you want to create another application or environment with the same configuration, you don't have to start from scratch again, you can just save the configuration and reuse it. You can also clone your environment, rebuild the entire environment, and terminate it. Under saved configurations, anything I've saved is listed, and I can use it when creating a new environment. Let's see whether we've explored all the options. Ah, I forgot to show you the most important thing: when I click on the URL, it takes me to a page showing that my application has been successfully installed. Well, that's it; now you know how to deploy an application using Elastic Beanstalk. I've used the sample application here, but you can go ahead and upload code of your own if you have any and try it out. All the options are fairly user friendly, so you'll know what to do, and you'll understand it better when you try to deploy an application yourself. [Music] So, what exactly is cloud storage? First of all, let me tell you what prompted me to take this session. Recently I had been interviewing people, and when I asked them what they knew about cloud computing, they told me it is a place online where you store data. To some extent I agree, cloud computing does help you store data, but that is not the whole definition in the long run, which is why I thought we should have this session and clear up some of the myths that surround cloud computing, and cloud storage in particular. So let's start with the basic definition: cloud storage is storage that is made available in the form of a service and is connected over a network.
That is a very basic definition, and to throw some more light on it I'd like to give a few examples of what it actually means. To some extent the definition is correct: it is storage made available as a service, connected over a network. You might say that this is exactly what people told me in the interviews, that it's a place where you store data, and yes, to some extent that is what cloud storage is, but it is a lot more than that basic definition. Let's look at what cloud storage actually has to offer. First and foremost, as I've already mentioned, it is storage: it can hold your emails and your media, and when I say media that covers different kinds of files, images, videos and so on. It also backs services: we live in the world of the internet, there are all kinds of services and websites online, and their data can be stored on a cloud platform. And finally there is backup: large enterprises back up their data using cloud platforms. Now you might say that emails, media, services and backup for large organisations still just add up to simple storage, so let me explain what backup for a large organisation really involves: data coming in from different sources, the way it is processed, the way it is integrated and stored, how it is handled and what you can do with it afterwards. Cloud storage takes care of all these things, which means it is not a redundant or dead store where you simply dump your data; you can think of it as smart data storage. To understand that, let's talk about cloud computing for a moment. Cloud computing lets you keep this data on a platform that offers any number of services to compute or process that data to suit your business needs, whether that is machine learning, big data, finding patterns using BI tools, using the data for marketing, building IoT bots, and plenty of other things. So cloud computing lets you take data from different sources and do many different kinds of things with it, and cloud storage is the mechanism that, first of all, stores that data and then lets you perform those actions on it. As we move further I'll be covering quite a few points that support this claim. To keep it simple: cloud storage is storage that lets you do a lot of things with your data, the primary one being storing it, and the others being processing and managing it. Now let's move on to the next point: what are the myths that surround cloud storage? The first thing some people say is that cloud computing is suitable only for large-scale organisations. That is not true, and let me give you an example of something that happened recently.
One of my friends formatted his mobile phone and lost all the images and other data on it. The problem was that he had never backed that data up anywhere, not on Google Drive or any other drive, so it was gone. We told him he should have backed it up, maybe on Google Drive, so the next time he did, and being the sort of person who keeps losing his data, he lost it again. He came to us saying he'd lost everything, and we reminded him that this time his data was on Google Drive, which is essentially an online storage where you keep a copy of your data, so he could simply get it back. The point is that cloud storage also gives you simple services like that, where you just put your data in, the same way you do with Google Drive, so it is not limited to large-scale organisations; even if you are a single individual who just needs to store data, cloud storage is for you. There are various cloud service providers catering to different cloud computing needs, so cloud storage can get more sophisticated and offer more functionality, but even if your need is as basic as storing data, cloud storage works for you as well. As for small-scale businesses, the amount of data generated these days is huge, so even a small organisation needs a place to store its data and someone to manage that data, so it can focus on its business goals, and this is where cloud storage comes into the picture for small businesses too. So if you ask me whether only large-scale organisations are suited to cloud storage, that is a myth. The next myth is complexity with cloud. People tend to assume that having their own private infrastructure makes it easier to manage their data, but that is not true; it is simply that people are used to certain methods and feel comfortable with them. Is cloud complex? I would say no, because once you get used to the services, you realise that storing or moving your data to the cloud is actually a lot easier than with your traditional infrastructure. As we get into the demo part you will get a clearer picture of how easy it is to move your data to the cloud. The next myth is that cloud is not eco-friendly. This might sound out of the blue, this is not a sociology session, so where did this point come from? What people assume is that because a large amount of data is being stored on these platforms, there are huge data centres that consume a lot of electricity, so there is power wastage. That is a myth as well: first of all, the fact that storage is centralised means most of the data sits in one place, so when you look at it from a global, environmental perspective, you are automatically saving on power consumption.
The other thing is, and I don't want to single out one cloud service provider, but when you talk about GCP, that is Google Cloud Platform, they offer their cloud services at a very affordable price. Why? Because they have put a lot of effort into researching how to minimise cost, and the way they did it was by optimising the amount of power consumed by their resources down to a minimum, so they are charged less and in turn you are charged less. If they are optimising that process, then obviously less electricity is being consumed, so is cloud eco-friendly? It definitely is. Next, zero downtime: there is no such thing as zero downtime. The fact that I'm talking up cloud storage doesn't mean I'll tell you it has zero downtime and you are completely safe; there is always a possibility of downtime. What cloud does ensure is that downtime is very rare and very short, and it also provides disaster recovery and keeps backups of your data and resources, so even if something goes down for a short while, care is taken that nothing harms your data or your resources. So zero downtime, no, that is not true, but downtime is well taken care of with cloud storage. Then there is the claim that there is no need for cloud storage at all, and this is one of the biggest myths. If you go back ten years, people did not know much about cloud computing, but over time people have been moving to the cloud, and recent statistics show that more and more organisations want to switch to the cloud in the near future, because of the services and facilities it gives you. And if you do move to the cloud, you will inevitably be using cloud storage, so if you think there is no need for cloud storage, I can assure you that in the near future you will be moving to it as well. So these are some of the major myths; there are a few others, which we will touch on as we go along. Now let's talk about some of the benefits of using cloud storage, or cloud in general, for storing data. I deliberately kept this section for later and discussed the myths first, because these points will help you understand some of those myths better. First, the cloud platform is customer friendly. With cloud storage you can scale your storage up or down, keep it secure, monitor it, and make sure there is a constant backup of your data, so from a security perspective it is covered as well. On top of that, any popular cloud service provider has a large catalogue of services, and those services exist to make your work on the platform smooth.
The same is true for cloud storage: you can use various services that make working in the cloud easy, and as I keep saying, once we get into the demo you'll see for yourself how user friendly these platforms are. Next, security, and this is an important point. Are cloud platforms and cloud storage secure? They definitely are. There was a time when people believed these platforms were not secure, and that doubt was understandable, because when something is new in the market you tend to doubt it, but cloud platforms are actually more secure than the on-premises or traditional architectures people are used to. The reason is that cloud service providers, take AWS, Amazon Web Services, in this case, give you a shared security model. What do I mean by that? You have service level agreements where the customer and the provider come to terms on what kind of security and what principles are to be implemented on the architecture, and you get to decide which accesses you hand over to the vendor and which you keep for yourself. When you combine that approach, it ensures that security is at its optimum and that you stay in control of your own security. So yes, cloud storage is very secure. To name one example, we have S3 on AWS, which is highly durable and highly reliable, so from a disaster recovery and durability point of view it is close to the top, though as I said when we talked about downtime, nothing is ever one hundred percent.
In terms of durability S3 is quoted with a long string of nines, roughly 99.999999999 percent, which makes your data about as safe as it gets. Another benefit is that it is pocket friendly. With cloud service providers, whether it is storage, compute, database services or anything else, you use the services on a rental basis, much like paying for electricity: you pay for a service only for the duration you use it and only for the resources you actually consume. It is a pay-as-you-go model. So is it pocket friendly? Yes, and as you use more storage the unit cost comes down further, so it is already cheap and gets cheaper at scale. These are some of the benefits; there are others, like durability and scalability, which I've already mentioned, but these are the core ones, and I don't want to get into too much detail because I want to keep everyone on the same page, both people attending this kind of session for the first time and people who already know a bit about cloud computing. If some of the terms I'm using feel new to you, or I'm going a little fast, I'd suggest you check out the other sessions on our YouTube channel, where we've covered a lot of this: other cloud services, what cloud computing is, what cloud service providers are, what the different service models are, and quite a few other topics. And I'm sure many of you are wondering whether this session is being recorded and whether a copy will be available: most of our sessions go up on YouTube, so a copy will probably be there, and if not you can share your email IDs and someone will send you a copy. So if I'm going a little faster than you'd like, don't worry, you will have a copy of this; for now just try to keep up with the pace, and I'm sure that by the end of the session we'll all be good. Now, what are some of the cloud storage practices you should take care of? These practices concern anyone planning to move to the cloud. If you're a newbie who is just here to practise, they're not aimed at you in particular, though they matter for you as an individual as well; I'm talking about it more from a business or industrial perspective, so if your organisation is planning to move to the cloud, these are some of the practices you should pay attention to. First and foremost, scrutinise your SLAs. As I've already mentioned, an SLA is where you and your service provider or vendor come to terms and decide on the rules of engagement.
The vendor commits to the services it will provide, and you as the customer agree to what you will pay for them. There are certain points you should consider when signing your SLAs. One thing to understand is, when they quote you a base charge, work out what the charges will be once you decide to scale up. Another thing to consider is downtime, which I've already talked about. You often have SLAs that state there will not be an outage longer than, say, ten minutes. That sounds fairly good, right? Now, this is a hypothetical example, don't take it to mean there really are ten-minute downtimes, but let's assume a ten-minute outage in an hour, which is far too high, just for the sake of argument. A service provider might say that if there is one outage you are compensated this much, and if it goes down again after that you get a further discount, and so on. But if the SLA only covers outages of ten minutes or more, and there were two outages of nine minutes each in an hour, that is awfully close and yet you get nothing, so you've effectively been short-changed. That is what I'm trying to say: when you sign an SLA, make sure it includes terms that actually suit your business. Next, follow your business needs. As we move further we'll discuss the different kinds of storage; cloud service providers offer many types of storage, so depending on the business you're in and the kind of data you generate, you should be able to choose the right storage for your requirements, whether you're dealing with real-time data, stationary data or archival data. You also need to understand which integrations you'll need, the tools you're already using, and whether they are compatible with your cloud platform. If you follow these rules, your business can end up saving a lot of money; there have been cases where businesses saved many thousands of dollars, so understanding your business before you move really matters. You also need to make sure the security you are managing and monitoring is properly defined. I've already mentioned that cloud providers let you have an SLA where both sides come to an agreement, so understand the security: what accesses you have, what accesses you want to give away, what kind of data you're dealing with, and on that basis come to terms when moving to the cloud. Then, plan your storage future. What we mean is: do you need to scale up in the near future, what peak times can you expect, and so on. If you think about this when you initially set up your storage, you'll be in a much better position to scale later.
Cloud providers are of course already scalable, and mostly they give you the option of scaling instantly, but just to be safe, having an understanding of how much storage you need and where you're going to be in two or three years' time will put you in a much better position. Be aware of hidden costs as well. This is similar to the SLA point: understand what you're paying for and how much. It is a pay-as-you-go model, but knowing which services will cost you how much helps you form proper SLAs and proper policies for your storage. So these are some of the dos and don'ts of cloud storage. If you need more insight into the different services, we have a session on YouTube called AWS best practices where we talk about different services and how to perform certain tasks so that you're in the best possible position. We've covered quite a few things now: what cloud storage is, its benefits, some of the myths, and the practices you should take care of. Now let's take a look at some of the cloud service providers that offer these services, and once we're done with that we'll move into the demo part. There are quite a few cloud service providers that also provide storage services. We have Google Cloud Platform, which is one of the leading ones; DigitalOcean, which you see advertised all over the internet, is another popular provider; and IBM has been in storage, and in cloud, for a very long time. If you go way back, I remember attending a session, I believe it was an AWS re:Invent talk, and I don't remember the speaker's name, but he made a very valid point: he said that in the 1980s he visited a facility, I believe it was IBM's, and they had this huge machine for storage. It looked very cool for the 1980s, a huge machine, and it was very costly, somewhere in the thousands of dollars, and the storage capacity was 4 MB. Yes, thousands of dollars for 4 MB, so you can see how far storage has come and how far cloud has come, and IBM has been there since then. So IBM and Google Cloud Platform are major cloud service providers, and then you have Microsoft Azure. If you go by current market stats alone, Microsoft Azure and AWS are the leading cloud service providers; AWS is well ahead of everyone else, but Microsoft Azure is catching up with Amazon Web Services and recent numbers show it is doing fairly well. So these are some of the popular cloud service providers, and more or less all of them have good storage services, but as I've already mentioned, Amazon Web Services is one of the best in the market, and in today's session we'll be looking at some of the popular services AWS has to offer.
When I say popular services, I'll be focusing on the storage services specifically. So let me switch to the console; we can discuss some of these services there and move straight into the demo. I hope this screen is visible to you; this is how the AWS Management Console looks. For people who are completely new to the cloud platform, let me tell you that Amazon Web Services, like most other cloud service providers, gives you a free tier account. The idea is: come and use our services for free for a short period, and if you like them, go ahead and buy them. These services are made available for free for one complete year. There are certain limits or bounds on them, so if you exceed those limits you will be charged, but if you stay within the limits you won't be, and if all you want to do is explore these services, the free tier limits are more than enough. So if you're completely new, come to the AWS Management Console and create a free tier account. It's a very simple process: you enter a few details, where you work, why you want to use the services, your basic details, and then you'll be asked for your debit or credit card details. Don't worry, they won't charge you; this is for verification purposes, and no amount is deducted from your card. AWS sends you notifications saying you've been using these services and might be overusing some of them, and you can also set alarms so that when you reach a particular limit you are warned and you don't exceed the free tier. Once you have an account, you can avail of all the services here. Let's take a quick look at the console and then jump straight into the storage services. When you click on the Services icon here, you get access to all the services. As I've already mentioned, AWS provides a large number of services, somewhere around a hundred, covering different domains: you can see the domain names at the top, compute, robotics, analytics, business applications, storage, management and governance, security and identity management, migration, media services, and so on; there are services for almost everything. We'll be focusing on the storage services, but before we go there, one more thing: you can select the region you want to operate from, that is, the region where you want your resources to be created. Why does the region matter? Your data lives in a data centre, and if you're using resources in a given region your data will be fetched from that location, so you can choose a region close to you, or if your business is located somewhere else you can choose that region. Go through the list of regions available and make a decision accordingly. Since this is a simple demo, I'm going to stick with Ohio.
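As a side note, the same region choice applies when you work with the SDK: every boto3 client is created against a region. A minimal sketch, assuming the Ohio region (us-east-2) used in this walkthrough:

```python
import boto3

# Each client is tied to one region; us-east-2 is Ohio.
s3 = boto3.client("s3", region_name="us-east-2")
ec2 = boto3.client("ec2", region_name="us-east-2")

# You can also list the regions available to your account before choosing one.
for region in ec2.describe_regions()["Regions"]:
    print(region["RegionName"])
```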
Now let's talk about storage in particular. If you take a look at the storage services listed here, these are the ones AWS has to offer: we have S3, EFS, FSx, S3 Glacier, Storage Gateway and AWS Backup. Let me throw some light on some of these services, and then we'll get into a demo of one or two of them at least. When you talk about S3, it is the Simple Storage Service, hence the three S's. This is basically object storage built around buckets: the container you put your data into is called a bucket, and your data or files are stored in it as objects. Let's quickly create a small bucket as a brief introduction to the service. When you click on the S3 icon it redirects you to the S3 console, where you can create a bucket. I mentioned earlier that there are services that make your job very easy with cloud providers, and the storage services are no different. If you want to create a bucket, a container, it is very easy: just click on create bucket and give it a name, say sample-for-today; I'm bad at naming conventions, so forgive me for that. The names here must be unique; if the name is already taken somewhere else you cannot use it again, so make sure your name is unique, and try to name your buckets in a way that makes them relatable, for example a bucket dedicated to a particular application, so that you have a hierarchy and can grant IAM users access to specific buckets in an orderly way, because you would not want every user to have access to every bucket. So naming conventions become very important. Click next, and you get the versioning option. Versioning is important; let's not get into the details, but here's the idea: each time objects in my bucket get updated I may want to keep the previous version as well as the latest one, and with versioning enabled those copies are maintained, so I can go back to a particular point, a benchmark I set previously. In this case let's keep it basic, and I don't want any logging details either, so just click next. There are also certain public access settings here; we'll talk about permissions and access in a moment, for now just click next and create the bucket. And the bucket is already ready, my container is ready, so I can open it and put a file in, and that too is very easy: I click upload, and since I'm connected to my local system I just click add files, pick this random file, and click upload, and there you go, the file is there. So we've created a bucket, a container, and put our file into it; it's as simple as that.
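The same bucket-and-object flow, done from code, looks roughly like this in boto3. The bucket name is a placeholder (remember it has to be globally unique), and the region matches the Ohio choice from earlier; the versioning call is optional and mirrors the console toggle just discussed.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-2")
bucket = "sample-for-today-demo"   # placeholder; bucket names must be globally unique

# Outside us-east-1 you have to pass the region as a location constraint.
s3.create_bucket(
    Bucket=bucket,
    CreateBucketConfiguration={"LocationConstraint": "us-east-2"},
)

# Optional: keep every version of an object, as discussed above.
s3.put_bucket_versioning(
    Bucket=bucket,
    VersioningConfiguration={"Status": "Enabled"},
)

# Uploading a local file turns it into an object inside the bucket.
s3.upload_file(Filename="random-file.txt", Bucket=bucket, Key="random-file.txt")
```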
Now let me come back to the permissions point I skipped a moment ago. Security is something you get to handle: you decide which users need access to a particular bucket. Suppose your organisation has different people working in different teams, a developer, someone on the administrative side, someone on the design side; particular buckets hold particular data, so you decide who gets to access what, which is why setting policies becomes important, and you can create your own policies as well. Earlier we saw that certain public accesses were restricted on this bucket, and I said let's skip that for now. When public access is restricted, it means no public policy can come in and dictate terms on this bucket, because it is a private bucket and not just anyone can use it. So with S3 you can create buckets, you can keep backups, you can have your EBS backups stored here, you can move data from here to Glacier, which we'll talk about in a moment, you can store the data for your Elastic Beanstalk applications and your batch applications in S3 buckets, and your CI/CD pipelines can write their output to S3 as well. It is a highly durable and highly reliable way of storing data, and it gives you fast retrieval as well. Let's try to understand some of the other services too. Coming back to the list, we have EFS, the Elastic File System: here you store data in the form of files, and if you want storage that connects well over a network to your instances, EFS is an option. Then you have S3 Glacier. We just talked about S3, where data is durable and can be accessed very quickly; S3 Glacier, on the other hand, lets you store archival data. Let me first explain what archival data is. Archival data is data you do not need to use every day. Let me give you an analogy, and I'm not sure how well you'll relate to it: your birth certificate. I'm from India, and although we've been teching up a lot, we still keep a lot of records on paper, so if you go to a hospital and request a birth certificate, it might take days to get it, because somebody has to go through all those documents and find yours. This is just an example, don't take it too literally, but it might take a couple of days, and frankly I might need my birth certificate once in a decade, so I can live with the fact that it takes two days to arrive, because that delay costs me nothing. That is not always the case: sometimes you need data retrieved very quickly, and in that case you should store it in S3. But if you are fine with a delay, you would want to store it in Glacier.
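Going back to the public-access point above, here is what that restriction looks like when set from code; a minimal sketch assuming the bucket created earlier (the name is a placeholder).

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-2")

# The "block public access" behaviour seen in the console, expressed in code:
# public ACLs are ignored and public bucket policies cannot take effect.
s3.put_public_access_block(
    Bucket="sample-for-today-demo",
    PublicAccessBlockConfiguration={
        "BlockPublicAcls": True,
        "IgnorePublicAcls": True,
        "BlockPublicPolicy": True,
        "RestrictPublicBuckets": True,
    },
)
```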
Why? Glacier normally takes longer to retrieve your data, but its advantage is that it is very affordable compared to S3, and S3 is already affordable; you can check the prices. So if you have archival data that you will not be using every day, you can store it in Glacier, and the fact that retrieval takes longer does not hurt you, because you are not trying to access that data in real time. So the pattern is: you can move all your data into S3 first, and when you realise there is certain data you do not need every day, move it from S3 to S3 Glacier, where it is stored in archival form and does not cost you much. I won't be getting into a demo of S3 Glacier here; we have a separate session on Amazon Glacier, and to work with it comfortably you need a third-party tool that makes retrieving data easier, so I won't download that tool and show how it works now. It is very simple though: just as we created a bucket here, there you create vaults, you move your data in, and you can retrieve it, only it takes a longer while, so it is similar to S3 but a little different. So we've understood EFS, S3 and Glacier. There are a few other services here as well: if I scroll down you have Storage Gateway and AWS Backup. AWS Backup, as the name says, lets you back up your data and protect it from loss. Storage Gateway is a set of services that let you move data from your on-premises infrastructure to the cloud, so if you already have data sitting on your existing infrastructure, these are the services that help you move it. We've discussed those services; there is something else called Elastic Block Store. What Elastic Block Store does is let you create volumes, snapshots and copies of the volumes attached to your instances. Let's take a look at how this works; there are a lot of points to talk about, so as we move along I'll discuss them while showing you how to do it. When I say EBS, or Elastic Block Store, it lets me attach a volume to my instance. First, let me tell you what instances are. Cloud providers give you compute services where you can spin up instances, temporary servers where you host your applications, so that you don't have to go out and buy a new machine every time. One quick note: my streaming connection dropped for a minute just now, so let me check that I'm still audible, and assuming we're fine I'll continue with the session.
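Before picking the demo back up, here is a small sketch of the S3-to-Glacier move just described, done with a lifecycle rule rather than by hand; the bucket name, prefix and 30-day threshold are all illustrative.

```python
import boto3

s3 = boto3.client("s3", region_name="us-east-2")

# Move objects under the archive/ prefix to Glacier 30 days after creation,
# matching the "store in S3 first, shift archival data to Glacier" idea.
s3.put_bucket_lifecycle_configuration(
    Bucket="sample-for-today-demo",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-old-data",
                "Status": "Enabled",
                "Filter": {"Prefix": "archive/"},
                "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```

The nice thing about a lifecycle rule is that the transition happens automatically, so nobody has to remember to move the old data across.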
So, continuing with the session: I was talking about instances. These are servers that are ready to use and can have storage attached to them, so what we're going to do is launch an instance and understand how storage works with it. To do that we open the relevant service, which is EC2, a compute service where you can create servers, or launch instances in simple words. Let's launch an instance. I have the freedom of launching Linux-based, Windows-based or Ubuntu-based instances, so you can choose whatever kind of instance you want. Since this is a simple demo I'll stick with a Windows instance, and I won't show you how to work inside that instance because I've done that in previous sessions, which you can take a look at. So let's launch this instance; actually, not this one, let me pick a basic one, which is also free tier. Make sure your instance is EBS-backed. Backing up works in two ways: you can back up to S3 or to EBS, the Elastic Block Store, and EBS is important because it lets you create images and volumes; we'll talk about those once we create this instance, so for now just make sure it is EBS-backed. If I click on this icon it gives me the details of the kind of instance I'm launching: t2.micro is a small instance with one CPU and one gigabyte of memory, which is enough for now, so I click next. There are some other details, like whether you want a VPC or not, which we won't discuss, and then you get to the storage part. This is the device to which my root volume is attached, or rather the device path, and I need to pay attention to it: it is /dev/sda1. You need to remember this when you create new volumes. The types of volume you can attach as a root volume are these: general purpose SSD, provisioned IOPS SSD, and magnetic, which is getting outdated and will probably be phased out. These are the only options for the primary volume because they are bootable. There are other kinds of volume you can attach as secondary volumes, and there the options are wider: you also have throughput optimized HDD and cold HDD. But this is a basic walkthrough, so we won't get into those details. All I'm trying to say is: this is the device path, this is the size, and this is the type of volume that will be attached to my instance. So I click next, add tags, let's not add any for now, and then configure the security group settings. When I launch the instance it warns that the security settings are not optimal, but that's okay; you can restrict the ports when you use it for anything that needs real security. And then this is important: for each instance you need a key pair.
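For reference, the same launch, with the 30 GB general purpose root volume on /dev/sda1, looks roughly like this in boto3. The AMI ID and key pair name are placeholders you would replace with real values for your region.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder; look up a current AMI for your region
    InstanceType="t2.micro",           # 1 vCPU, 1 GiB memory, free tier eligible
    KeyName="new-key",                 # name of an existing key pair
    MinCount=1,
    MaxCount=1,
    BlockDeviceMappings=[
        {
            "DeviceName": "/dev/sda1",                       # root device path from the console
            "Ebs": {"VolumeSize": 30, "VolumeType": "gp2"},  # 30 GiB general purpose SSD
        }
    ],
)
instance_id = response["Instances"][0]["InstanceId"]
print("launched", instance_id)
```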
A key pair is a secure way of logging in, a second layer of authentication: even once you are logged into your AWS account, you need the key pair if you want to use this instance. So make sure you create one and store it safely. If you already have one you can reuse it, otherwise create a new one; I'll call it new-key and click download. Once you download it, keep it safe somewhere; it is stored as a .pem file. Then I click launch instance. If I go back to the EC2 dashboard I can see the count of running instances; for now it shows zero, because my instance is still being launched. It takes a minute or two, because a lot happens in the background: a network is set up so the instance can communicate with other instances, the storage volume is attached, and the instance has to pass certain status checks, which is why it takes a minute or so. If you look at the status checks it says initializing; refreshing sometimes helps, so let's try our luck, no, it's still initializing. But we can already see the volume that will be attached to it: if I go to Volumes, there is a 30 GB volume, and it is marked in use, so it will be attached to my instance once it is up and running. The point I'm making is that Elastic Block Store lets you manage all of this. There are two ways to go about it: either you create a copy of a volume, detach the existing one and attach the new one, or you scale or modify your existing volume directly. So EBS lets you manage your storage. Let me explain how it works with regions. When I create an instance it is created in a particular region; for example, I'm based in India, so if I used the Mumbai region my instance would be created in that data centre, and the storage for it would be there too, so there is no latency when the instance uses that storage. EBS lets you manage that storage: I can create a copy of a volume, which serves two purposes, if I want to make changes to the storage I can do it on the copy, and if the original volume goes down I have a backup. I can also create snapshots. What snapshots let me do is replicate my instance together with the volume attached to it, so instead of defining all the properties of an instance again and again, I can take a snapshot, or rather create an AMI out of it, store it, and use it the next time I want to spin up a similar instance. That is where EBS helps: it lets you keep backups of all this storage and create copies of it, so even if something goes down you can carry on working from the copy.
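The snapshot-and-AMI idea mentioned above can also be done in a couple of boto3 calls; a sketch, with a placeholder instance ID and an illustrative image name:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

# An AMI captures the instance configuration plus its EBS volumes, so a
# similar instance can be spawned later without redefining everything.
image = ec2.create_image(
    InstanceId="i-0123456789abcdef0",        # placeholder instance ID
    Name="tomcat-demo-image",                # illustrative image name
    Description="backup image of the demo instance",
)
print("AMI created:", image["ImageId"])
```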
so the point I'm trying to make here is what elastic block store does is it lets you manage all these things now there are two ways to manage these things either you create a copy of this volume detach this volume and then attach the next one or you can directly scale your existing volume or make changes to it right away so what elastic block store does is it lets you manage the storage so again let me tell you how it works so when I create an instance it is created in a particular region right so in that particular region say for example now I'm based in India so I have a data center in Mumbai so my instance would be created at that data center and the storage for it would also be there so there is no latency when I try to use that storage so this is what EBS does it lets you manage that particular storage so how it works is I can create a copy of it so what this copy does is it serves two purposes so next time if I wish to make changes to that storage I can do that and if this particular storage or volume goes down I have a backup copy again I can create snapshots as well now what snapshots do is basically they let me replicate my instance and the volume that is attached with it so instead of creating an instance again and again if I've defined certain properties for my instance I do not have to worry about defining those properties again and again I can just create a snapshot or I can rather create an AMI out of it which I can store and use next time if I want to spawn a similar instance so this is where EBS helps it lets you have backups of all these storages and lets you create copies of them so even if something goes down you can work on the copy that you have so guys by now our instance would be created let's just go ahead and take a look at it it says it is running guys and we've already taken a look at the volume let's just create a copy of this volume to do that I'm going to go to the actions my volume is selected already I can just go to modify and make changes to this volume right away which is an easier way but I'm gonna show you how it can be done the other way as well how it used to work previously so I can just say create a snapshot give the description sample and I say create so guys a snapshot is created if I come here I can take a look at the snapshot again it is pending it might take half a minute for the snapshot to get created so I can just come here and refresh these things at times take a little while so guys we would be creating a copy of it then we would be detaching the volume that is attached to our instance and we would replace that with the copy that we are creating now so once this thing is done and created we can do that for some reason it's taking a longer while today let's hope that it gets done quicker look it's still processing just bear with me meanwhile this happens thank you again guys if I was too fast and if I missed out on certain things I would like to tell you that you can go through our other sessions on YouTube and probably you'd be in a much better state to understand what has happened here again there was not an outage but my streaming software did not work properly and there was a lag of a minute or two so I'm hoping that you all did not miss out on anything that was happening meanwhile just hope that this snapshot gets created quickly it is still pending and this is irritating at times when it takes a longer while it's completed guys our snapshot is ready I can just go ahead and say create a volume out of it which I wish to attach so guys there are certain details that we need to fill in so for that let's just go back first let's go back to the instance that we have and let's see where the instance is created guys so as you can see if you come here it would give you the details of the place where the instance is created so it is us-east-2c so when you create a volume it is necessary that you create it in the same availability zone guys because as I've already mentioned the benefit of having it in the same zone is that you can attach it to your existing instance and it saves you from various latencies so yep let's go back to the snapshots part and say create a volume from it I say create and then probably let's say I want more storage guys let's say 19
okay this is general purpose now the availability zone let's go to 2c if I'm not wrong it was 2c let's just go ahead and create it in 2c and say create volume close so guys our volume is created successfully again guys now you can take a look at it from this perspective I have my snapshot here right so this snapshot says 30 GB that does not mean that the snapshot which I took is 30 GB in size it says that it was created from a volume whose size is 30 GB so there's a difference between these two things guys understand that as well so I have a volume which is based in availability zone 2c and I have an instance which again is in availability zone 2c so we can attach it let's just again go back to the volumes part so guys I have two volumes I created this one and this one is attached to my instance let me just try and remove this first detach volume okay it's giving me an error try to understand why this error is there guys my instance is already running so I cannot directly remove this volume from here for that I would have to select this instance go to instance state and say stop so it stops working for now and once it does I can attach the volume so for now what you can see is there are these volumes here this one is in use right so once the instance stops it would be available and won't be in use so I can replace it with the new volume so it is stopping it hasn't stopped yet so guys do not worry we would be done with the session very soon and once we are done you all would be free to leave I believe that this session has taken longer than my normal sessions but yeah there was a lot of stuff to talk about we talked about the complete set of storage services that AWS has to offer to you people hence this session was so long so let's just quickly go ahead and finish the stuff okay it has stopped so guys I can now go ahead and detach this volume and go ahead and attach the other one so if I say detach it would detach yeah see both are available now let's try to attach this volume and say attach volume I search for the instance guys which we've created and you need to give in the device details which are slash what were the details let's just go back and take a look at the details that we're supposed to enter in here so guys you need to give in the path that we talked about which is the drive that we've discussed right so that is the path that you need to enter and then you actually go ahead and say slash dev slash sda1 and you'd be more than good to go the thing is I do not remember the exact device name right now so you need to go ahead and put in these details here if you put in these path details guys you can just go ahead and attach your volume right away and this volume would get attached to your instance so this is how it works and you can actually go back and do other things as well so if I just come here I have this instance so what you have to do is you have to actually go ahead and click on this thing for now it's not working but if you just come back here or to the volumes part which we were at in the previous screen you can actually go ahead and attach the volumes now here you go if I just go to instances and go back and I say ec2 again yeah if I come back to volumes guys you can attach the volumes that are there you can delete those and you can do any number of changes that you wish to do so just go ahead and attach these volumes and you'll be more than good to go ahead and launch your instances or manage the storages that are there
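if you want to repeat the same snapshot and swap flow from code a rough sketch with boto3 could look like this all the IDs the size and the availability zone are placeholders and the old volume here is assumed to be the instance's root volume so the device name makes sense

import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

instance_id = "i-0123456789abcdef0"      # placeholder
old_volume_id = "vol-0123456789abcdef0"  # placeholder root volume

# 1. snapshot the existing volume and wait for it to complete
snap = ec2.create_snapshot(VolumeId=old_volume_id, Description="sample")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])

# 2. create a new (here larger) volume from the snapshot in the instance's AZ
new_vol = ec2.create_volume(
    SnapshotId=snap["SnapshotId"],
    AvailabilityZone="us-east-2c",       # must match the instance's availability zone
    Size=40,
    VolumeType="gp2",
)
ec2.get_waiter("volume_available").wait(VolumeIds=[new_vol["VolumeId"]])

# 3. stop the instance so the root volume can be detached
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# 4. detach the old volume and attach the new one on the same device name
ec2.detach_volume(VolumeId=old_volume_id, InstanceId=instance_id)
ec2.get_waiter("volume_available").wait(VolumeIds=[old_volume_id])
ec2.attach_volume(
    VolumeId=new_vol["VolumeId"],
    InstanceId=instance_id,
    Device="/dev/sda1",                  # the device name we noted while launching
)
ec2.start_instances(InstanceIds=[instance_id])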
again the only thing that I missed out on is the path I told you to note the path the device name right you just have to go ahead and enter the device name here and if you enter the device name while creating your volume or attaching your volume your volume would get attached to that instance right away so yes guys that pretty much sums up today's session we've talked about quite a few things here guys we've talked about S3 we've talked about EBS in particular we've understood how to detach a volume and how to attach one I just did not show you the final attach because I lost track of the device name which normally goes in here so before you detach your volume make sure that you have noted this name and when you do attach your volume to that particular instance all you have to do is go to the volumes part and when you say attach to a particular instance put in that device name there and your volume would be attached to your instance and you can just go ahead and start this instance again and you'd be good to go guys [Music] so let us just try to understand what the storage exactly is so guys when you talk about S3 it is a simple storage service which is simple or easy to use in the real sense it lets you store and retrieve data which can be in any amount which can be of any type and you can move it from anywhere using the web or the internet so it is called the storage service of the internet what are the features of this particular service it is highly durable guys now why do I call it durable it provides you a durability of 99.999999999 percent that is eleven 9s now when you talk about that amount of durability it is understandable how durable the service is what makes it this durable it uses checksums it constantly uses checksums to analyze whether your data was corrupted at a particular point and if yes that is rectified right away and that is why this service is so durable then it is highly flexible as well as already mentioned S3 is a very simple service and the fact that you can store any kind of data and you can store it in any available region is what I would mean by that sentence it makes it highly flexible to store the data in this particular service and the fact that you can use so many APIs you can secure this data in so many ways and it is so affordable it meets different kinds of needs thus making it so flexible available is it available yes definitely it is very much available as we move into the demo part I would be showing you which regions basically let you create these kinds of storages and how you can move and store data in different regions as well so if you talk about availability yes it is available in different regions and the fact that it is so affordable makes keeping it available all the more easy cost efficient yes now to start with we normally do not get anything for free in life but if you talk about S3 storage AWS has a free tier which lets you use AWS services for free for one complete year but this happens within certain limits now when you talk about S3 you can store 5 GB of data for free at least to get started or get used to the service I believe that is more than enough and what it also does is it lets you have some 20 000 get requests and
around 2000 put requests as well so these are something that let you store and retrieve data apart from that you can move in 15 GB of data every month outside of your S3 Service as well so if you're getting this much for free it is definitely very much affordable also it charges you on pay as you go model now what do I mean by this well when I say pay as you go model what we do here is we pay only for the time duration that we use this service for and only for the capacity that we use this service for so that is why as you move along if you need more services you would be charged more if you do not need more amount of the service you won't be charged to that greater extent so is it cost every efficient definitely it is scalable yes that is the best thing about AWS Services most of them are scalable I mean you can store huge amount of data you can process huge amount of data you can acquire huge amount of data if it is scalability that is your concern you do not have to worry about it here because even this service readily scales to the increasing data that you need to store and the fact that it is pay as you go model you do not have to worry about the cost Factor as well is it secure definitely it is now you can encrypt your data you have various bucket policies as well that let you decide who gets to access your data who gets to write data who gets to read data and when I said you can encrypt your data you can actually go ahead and encrypt your data both on client-side and on your server side as well so is it secure I believe that answers the question on its own so Guys these were some of the features of Amazon S3 so guys now let us try to understand how does S3 storage actually work I know it works with the concept of objects and buckets now bucket you can think of it as a container whereas an object is a file that you store in your container these can be thought of as AWS S3 resources now when I say an object basically object is your data file I've already mentioned that you can store any kind of data whether it's your image whether it's your files docs whatever it is these are nothing but your data and this data comes with metadata when I say an object it is combination of your data plus some metadata or information about that data what kind of information basically you have the key that is the name of the file that you use and version ID is something that tells you which version are you using as we discuss versioning probably I would talk about version ID a little more but meanwhile I believe this is more than enough your objects are nothing but your files with the required metadata and the buckets as I've already mentioned these are nothing but containers that hold your data so how does it work guys well what happens is basically you go ahead and create buckets in regions and you store your data in those regions how do you decide what buckets to use what reasons to use where to create the bucket and all those things well it depends on quite a few factors when I say I have to create a bucket I need to decide what reason would be more accessible to my customers or to my users and how much cost does that region charge me because depending upon the reason your cost might vary so that is one factor that you need to consider and latency as well I mean if you put your data in an S3 bucket that is far away from you fetching it might cause high amount of latency as well so once you consider these factors you can create a bucket and you just store your objects when I said version ID key 
actually your system automatically generates these features for you so for you it is very simple create a bucket pick up your object put it in it or just go ahead and and retrieve that data from the bucket whenever you want to so I believe this gives you some picture about what S3 is now let me quickly switch into the demo part and let me give you a quick idea or quick demo as to how S3 works so that it is not too much theory for you people so guys what I've done is I've actually gone ahead and I've switched into my Amazon Management console now as I've already mentioned AWS gives you a free tier for which you can use AWS services for free for one complete year mine is not a free tier account but yeah if you are a starter you can create a fresh account you just have to go ahead and give in certain details all you do is you just go to your web browser search for AWS free tier and sign in with the required details they would ask you for your credit card or your debit card details enter any one of those for the verification purpose and you can actually go ahead and set up alarms as well which would tell you as in okay this is the limit to which you have used the services and that way you won't be charged for Access of data usage or service usage having said that guys this is about creating an account I believe it is fairly simple you can create an account once you create an account guys this is the console that would be available to you what you have to do is you have to go ahead and search for Amazon S3 if you search S3 here it would kind of redirect direct you to that service page so guys as you can see this is the company's account probably somebody uses it in the company and they have their buckets that are already created let's not get there let us just go ahead and create our own bucket and just go ahead and put in some data into it it is fairly simple guys I've already mentioned it is very simple to use kind of service all I have to do is click on create bucket and enter in name for some bucket guys now this name is unique it is globally unique once you enter a name for the bucket you cannot use the same name for some other bucket so make sure you put in valid name and the fact that I use the term Global something reminded me to be explained off so guys as you can see if I go back here I wanted to notice this part so guys when you are into the Management console or you open any service by default the region is North Virginia okay so if I create a resource it would go to this region but when I select the service that is S3 you can see that this region automatically goes to Global that means it is a global Service it does not mean that you cannot create bucket in particular regions you can do that but the service is global is what they're trying to say so let us just go ahead and create the bucket Let Us call it today's demo you cannot use caps guys you cannot use some symbols so you have to follow the naming Convention as well today is demo sorry I'm very bad at naming conventions guys I hope it is okay let it be in U.S east you can choose other regions as well guys but for now let it be whatever it is so I'm gonna stick to North Virginia there are 76 buckets that are being used let us just say next bucket name already exists so this was already taken guys see so you cannot use it let's call it say Ramos bucket one three one one three okay do you want to keep all the versions of the object we will talk about what versions are okay guys uh meanwhile just bear with me I'm just gonna go 
ahead and create the bucket and there you go guys I'm sure the Ramos bucket should be here somewhere here it is if I open it I can just go ahead and create folders inside it or I can directly upload data so I say upload select a file let's just randomly select this file it is van Rossum the founder of Python basically let's just say next next next and the data is uploaded guys you can see the data being uploaded and my file is here for usage so guys this is how objects and buckets work you can see that this is the data that I have if I click on it I would get all the information what is the key what is the version value for now let's not discuss versions but this is the key or the name of the file that I've uploaded so it is fairly clear right guys so let us just quickly switch back to the presentation and discuss some other stuff as well now guys another important topic that is to be discussed here is S3 storage classes now we've discussed how the data is stored or how buckets and objects work but apart from that we need to discuss some other pointers as well as in how does AWS charge me or what kind of options do I have when it comes to storing this data so it provides you with three options guys standard infrequent access and Glacier let me quickly give you an explanation of what these storage classes mean and what all they offer to us when I say standard it is the standard storage which gives you low latency so in case there is some data that needs to be fetched right away you can actually go ahead and use standard storage say for example I wish to go to a hospital for a certain kind of checkup so in that case my details would be entered in and the fact that I am getting myself checked or diagnosed in the hospital what happens is this data is important and if it is needed right away it should be available so this kind of data can be stored in your standard storage where the latency is very low next we have infrequent access now what do I mean by that now in this case my latency has to be low as well because I'm talking about data that I might need any time if I want it but I store this data for a little longer duration and all I want is for this data to be retrieved quickly when I do ask for it say for example I get a particular report or a particular test done so in that case I would actually go ahead and submit my details or say for example my blood samples but I need this information maybe after three days so what happens is in this scenario I would want to store this data for a longer term but the retrieval should still be fast here in the first case that was not the case if I needed that data right away and if I wanted it to be stored for a very short duration I would use standard but if I want to store it for a longer duration and I want a quick retrieval in that case I would be using infrequent access and finally I have Glacier we have already discussed this here your retrieval speed is low and the data is to be put in for a longer duration and that is why it is more affordable if we take a look at the stats that are there in the image above you can see that the minimum storage duration is nothing for standard for infrequent access it is 30 days and for Glacier it is 90 days if you take a look at latency it is milliseconds milliseconds and 4 hours so that itself explains a lot of stuff here so what are storage classes and what do they do I believe some idea is clear to you people again as we move into the demo part we would be discussing this part as well
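before moving on for reference the bucket and object demo from a moment ago can also be scripted here is a minimal boto3 sketch the bucket name has to be globally unique so treat it and the file name as placeholders

import boto3

s3 = boto3.client("s3", region_name="us-east-1")

bucket = "ramos-bucket-13113-demo"   # placeholder, must be globally unique

# buckets in us-east-1 need no location constraint; for any other region
# pass CreateBucketConfiguration={"LocationConstraint": "<region>"}
s3.create_bucket(Bucket=bucket)

# upload a local file; the key becomes the object's name inside the bucket
s3.upload_file("van-rossum.jpg", bucket, "van-rossum.jpg")

# read back the object's metadata such as its size and last modified time
head = s3.head_object(Bucket=bucket, Key="van-rossum.jpg")
print(head["ContentLength"], head["LastModified"])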
we would also discuss expiration and transition which go with these storage classes but let us move further and try to understand something else first versioning and cross region replication now guys when I say versioning I'm actually talking about keeping multiple copies of my data now why do I need versioning and why do I need multiple copies of my data I've already mentioned that AWS S3 is highly durable and secure how is that because you can fix the errors that are there and you can also have multiple copies of your data you can replicate your data so in case your data center goes down a copy of it is maintained somewhere else as well how is this done by creating multiple versions of your data say for example an image I store it in my S3 bucket what happens here is there is a key the name is say image and the version is some 333333 right now take a look at the other image if I actually go ahead and create a copy of the first image its name would remain the same but its version would be different so suppose both of these images reside in one bucket what these images are doing is giving me multiple copies now in the case of an image not a lot would change but if I have doc files or data files in that case versioning becomes very important because if I make changes to particular data or if I delete a particular file a backup should always be there with me and this is where versioning becomes very very important what are the features of versioning by default versioning is disabled when you talk about S3 you have to go ahead and enable this versioning it prevents overwriting or accidental deletion we've already discussed that you can get noncurrent versions by specifying the version ID as well what do I mean by this that means if I actually go ahead and create one more copy of the data and store it the latest copy would be available on top but I can go to the versions option put in the ID that belongs to the previous version and I can fetch that version as well so what is cross region replication now guys we've discussed versioning let us talk about another important topic that is cross region replication now when you talk about S3 basically what happens is you create a bucket in a region and you store data in that region but what if I want to move my data from one bucket in one region to another bucket in another region can we do that yes cross region replication lets you do that so what you do is you basically go ahead and create a bucket in one region you create another bucket in another region and you give the first bucket access to move data from itself to the other bucket so this was about versioning this was about cross region replication and I believe we've also talked about storage classes let me quickly switch into the demo part and discuss these topics in a little more detail so guys moving back what we have done is we've actually gone ahead and created a bucket already right what was the name of the bucket it was the Ramos bucket if I'm not wrong yep so if you click on the bucket name it basically shows you these details guys now you can see that your versioning is disabled right so if I click on it I can actually come to this page and I can say enable versioning that means a copy of the data that I create is always maintained so if I go to the Ramos bucket or I just move back okay this interface can be a little irritating at times you have to move back and forth every now and then
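the console toggle shown here has a one line equivalent in boto3 a small sketch with a placeholder bucket name

import boto3

s3 = boto3.client("s3")

# versioning is disabled by default; this enables it for the whole bucket
s3.put_bucket_versioning(
    Bucket="ramos-bucket-13113-demo",
    VersioningConfiguration={"Status": "Enabled"},  # "Suspended" pauses it later, it can never be fully removed
)
print(s3.get_bucket_versioning(Bucket="ramos-bucket-13113-demo"))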
so guys there is a file which we have stored you can just take a look at this date first it says 2:35 that was the time when the object was uploaded let me just upload the same file again this was the file we uploaded I say next next next upload so guys this file is getting uploaded you can see the name of the file is still the same we have only one file here why because it was recently modified at 2:45 from 2:35 it got changed to 2:45 so it is fairly clear guys what is happening here your data is getting modified and if you wonder as in what happened to the previous version don't worry if you click on this show option you can see that both of your versions are still here guys this one was created at 2:35 and this one at 2:45 so this way data replication and data security work much better so you can secure your data you can duplicate your data so in case you lose your data you always have the previous versions to deal with how does the previous version thing work so guys what happens is if I delete this file what Amazon S3 would do is it would set a delete marker on top of this file and once I delete it if I search for that file the latest version won't be available why because the marker now sits on top but whatever I want to do I can still do with the older version IDs so guys one more thing that you also need to understand here is what happens to the file I mean I have actually deleted a file but a version is still there with me can I delete all the versions yes you can specify the ID and you can delete all the versions that you want you can also do one more thing that is you can set a particular life cycle for your files when I say life cycle you can decide as in okay now I have a file in standard storage we have discussed these storage classes right standard infrequent access and Glacier what you can do with your lifecycle management is you can decide as in okay for a particular time duration I want this file to stay in standard maybe after a while I want to move it to infrequent access and after a while I want to move it to Glacier say for example there is certain data which was very important for me but having used that data I don't want to use it for the next few months so in that case I can move it to the other storage classes where probably I won't be needing to use that data for a longer while and by doing that I won't be paying for this data as I used to pay for standard because standard is the costliest of the three so let us quickly see how it works if I just go back this is my file I can actually just go ahead and switch to management in that I have the option of life cycle if I click here there is no life cycle rule yet you can add a lifecycle rule guys let me call it new and let me say next it asks me what I want to do you can add rules in a lifecycle configuration to tell Amazon S3 to transition objects to another storage class note that there are per-request fees when using lifecycle to transition data to any other S3 or S3 Glacier storage class so which version do I wish to use current I can say yes add transition and I can select transition to this tier after 30 days and if I say next it asks me about expiration you can select other policies as well so guys when I say transition what it does is it tells Amazon S3 what time to transition to which storage class and expiration tells it when the object expires so I can decide when to clean up the objects and when not to let's not do that for now let's just say next next
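before we see the lifecycle result here is a quick sketch that makes the delete marker behaviour above concrete it lists an object's versions deletes the current one and still reads an older version by its ID bucket and key are placeholders

import boto3

s3 = boto3.client("s3")
bucket, key = "ramos-bucket-13113-demo", "van-rossum.jpg"

# list every stored version of this key, newest first
versions = s3.list_object_versions(Bucket=bucket, Prefix=key).get("Versions", [])
for v in versions:
    print(v["VersionId"], v["LastModified"], "latest" if v["IsLatest"] else "")

# deleting without a version ID only places a delete marker on top;
# the older versions stay behind it and remain retrievable
s3.delete_object(Bucket=bucket, Key=key)

# fetch the oldest version explicitly by its version ID
old = s3.get_object(Bucket=bucket, Key=key, VersionId=versions[-1]["VersionId"])
print(len(old["Body"].read()), "bytes read from a noncurrent version")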
so guys what will happen here is after 30 days my data would move to standard-IA that is infrequent access storage so you can actually go ahead and decide whether you want to move to Glacier in that drop down you had more options as well I did not show you that but it is pretty understandable you can move to Glacier as well so this is about life cycle guys one more thing you have something called replication you can add replication as well if you wish to replicate your data cross region replication I believe guys I do not have access to do that because I'm using someone else's account for now but let me just give you some idea of what you can do to replicate your data you can just go ahead and click on get started so replication to remind you people is nothing but a process of moving your data from a bucket in one region to another bucket in some other region so for that I need to select a source bucket so let us just say that this is the bucket that I have next now guys in my case I haven't created the second bucket what you can do is you can just go ahead and create one more bucket once you create the bucket you can select the destination bucket for now let us just say that this is a bucket that has been created by someone else I'm not gonna transfer data here but let's just select this for the demo's sake this is the bucket that I have see it says that the bucket does not have versioning enabled this is a very important point guys I showed you how to enable versioning right if you select the bucket there is an option on the right side saying versioning you can actually go ahead and enable versioning there so once you enable versioning you would be able to use this bucket do you want to change the storage class for the replicated objects if you say yes it would give you the option of selecting which storage class you want if you don't you don't have to you can say next you have to enter an IAM role if you do not have any you just say create a role and then give the role name in this case I do not have the details for this and I don't want to create a role because this account does not belong to me sorry for that inconvenience but you can actually go ahead and select create a role and just say next and I'm sure that your cross region replication starts working what happens after that is once you store your object in a particular bucket you can move the data from that bucket to the other bucket and a copy of your data is maintained in both the buckets that you use so this is what cross region replication is guys I believe that we have discussed what storage classes are we've discussed what cross region replication is and we've discussed versioning in general let us quickly move back to the presentation and discuss the remaining topics as well
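before we move on here is the same lifecycle rule written as code a sketch that transitions current versions to infrequent access after 30 days to Glacier after 90 and expires them after a year the bucket name and the exact day counts are assumptions cross region replication can likewise be configured with put_bucket_replication once versioning is enabled on both buckets and an IAM role is supplied

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="ramos-bucket-13113-demo",
    LifecycleConfiguration={
        "Rules": [{
            "ID": "new",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},    # empty prefix applies the rule to every object
            "Transitions": [
                {"Days": 30, "StorageClass": "STANDARD_IA"},
                {"Days": 90, "StorageClass": "GLACIER"},
            ],
            "Expiration": {"Days": 365},  # clean the objects up after a year
        }],
    },
)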
so guys I've switched into the presentation part till now we've discussed how cross region replication works we've discussed how versioning works and we've seen how to carry out that process the other important topic that we need to focus on is this we know how to create versions and how to move data from one place to the other but what if I have to move data from a particular location to a location that is very far away from me and still ensure that there is not too much latency because if you're moving data to a location that is far away from you it is understandable that it would take a longer while why because we are moving data over the internet so the more data you move and the further you move it the longer it takes so how do you solve that problem you have S3 transfer acceleration you can do that by using other services as well we discussed snowball and snowmobile but they physically move the data and at times it takes a number of days to move your data with S3 transfer acceleration that is not the issue because it moves the data at a very fast pace so that is a good thing so how can you move your data at a faster pace by using S3 transfer acceleration now guys let us first understand what it is exactly so what it does is it enables fast easy and secure transfers of files over long distances between your client and your S3 bucket and to do that it uses a service called cloudfront and the edge locations it provides as I move further I would be talking about what cloudfront is do not worry about it first let us take a look at this diagram so normally if you are directly uploading your data to a bucket that is located at a faraway distance I mean suppose I'm a customer and I wish to put my data into an S3 bucket which is located maybe a continent away from me so using the internet it might take a longer while instead what I can do is I can use transfer acceleration so how is it different now guys there is a service called AWS cloudfront what it does is it basically lets you cache your data when I say cache data that means you can store your data at a location that is in the interim or that is close to your destination now this service is basically used to ensure that data retrieval is fast suppose I am searching for a particular URL what happens is when I type that URL a request is sent to the server it fetches the data and sends it to me so if the server is located at a very far location it might take a long while for me to fetch the data so what people do is they analyze how many requests are coming from a particular location and if there are frequent and numerous requests what they do is set up an edge location close to that particular region so you can put your data you can cache your data on that edge location and that data can be fetched from that edge location at a faster rate so this is how edge locations work what transfer acceleration does is it basically puts your data on the edge location so that it can be moved to your S3 bucket at a quicker pace and that is why it is fast so guys this was about S3 transfer acceleration let us quickly move into the console part and try to understand how S3 acceleration works so guys I've switched into the console S3 acceleration or data transfer acceleration is a very easy thing to do I do not remember the bucket name I think it was the Ramos something okay if I select this and open it I can actually go to the properties part guys there are other things that you might want to consider you can come here and take a look at those as well for now I'm just gonna go ahead and enable transfer acceleration it is suspended I can enable it it gives me the endpoint as well and I say save so guys what this means is if I'm putting my data into this bucket it would be transferred very quickly or I can use this bucket to transfer my data at a quicker pace by using data transfer acceleration by S3
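enabling the same thing from code is one call plus a client option here is a small sketch the bucket and file names are placeholders

import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# turn transfer acceleration on for the bucket ("Suspended" turns it back off)
s3.put_bucket_accelerate_configuration(
    Bucket="ramos-bucket-13113-demo",
    AccelerateConfiguration={"Status": "Enabled"},
)

# a client that uploads through the accelerate endpoint instead of the regular one
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("big-file.zip", "ramos-bucket-13113-demo", "big-file.zip")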
again guys I missed out on one important point the fact that we have been talking about buckets and stuff like that there is something important that I would like to show you people first let us just go back and disable this part I do not want to leave the transfer acceleration thing running I just wanted to show you people how it is done so I say suspend it and one more thing guys once you actually enable the transfer part and you upload a file you can see the difference in the speed the thing is you need a third party tool to measure that so you can actually go ahead and download a third party tool and using that you can see how it works having said that I was talking about buckets in general so let us just go back to the Ramos bucket again there you go and I'm gonna copy the ARN I'll tell you why I've copied the ARN in a moment now when I open this bucket guys we have quite a few things permissions I talked about security right so you can decide public access as in who gets to access your bucket so guys you can actually go ahead and decide who gets to access what kind of buckets say for example here in block public access you can decide who gets to access what data publicly for that you have access control lists using these ACLs you can actually decide who gets to access what the other thing you can do is you can just go ahead and create a bucket policy and decide who gets to access your bucket or who gets to put data delete data and do all these things let us just go ahead and create a policy now you can write your own policy or you can just use the policy generator so I want to create a bucket policy for my S3 bucket so let's just say S3 bucket policy and what kind of effect do I want I mean do I want to allow someone to access my bucket or do I want to deny someone from accessing my bucket I can decide that so let's for now just say that I want to deny someone from doing something and what I want is to deny a particular action for that person on all the objects I mean I do not want that person to access any of the objects that are there so what I say is star that means this applies to everybody the service is Amazon S3 what action do I want I want to prevent someone from deleting an object there you go and this is the ARN that is why I copied it it should be followed by a forward slash and a star add the statement and I say generate policy so guys the policy has been generated I just have to copy it if I copy this thing and I go back to the console and paste it here I can say save it's saved I'll save it again just to be safe so guys we have actually gone ahead and done that let me just go ahead and again go to the Ramos bucket so guys now there is an object here let me just try and delete this object if I just go to the actions part here and I say delete see the file is still here is it the other version no it's not deleted see there's an error here if I click on it it says 100 percent failed why access denied because I do not have the access to delete the object right now why because I've created a bucket policy guys so that is what bucket policies and ACLs do they let you make your objects or your data more secure and as you saw in the option there are quite a few options that you have at your disposal which you can choose from which you can mix and match and decide as in okay this is what I want to do I want to probably give someone access to delete a bucket I want to give someone access to do this or do that so guys this was about S3 data transfer acceleration and we've also seen how you create a bucket policy and how you attach it to your bucket
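for completeness the deny policy built in the policy generator above can also be applied from code roughly like this the bucket name and therefore the ARN are placeholders

import json
import boto3

s3 = boto3.client("s3")
bucket = "ramos-bucket-13113-demo"

# deny s3:DeleteObject on every object in the bucket for every caller
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Principal": "*",
        "Action": "s3:DeleteObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",  # bucket ARN plus /* for all objects
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))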
now let me just go back and finish this session up with a use case so that you can understand the topics that we've discussed a little better first let us go back to the use case guys so guys I've switched into my presentation console again and we would be discussing IMDb now for people who watch movies they might know what IMDb is it is a website that gives you details about movies they tell you which movies are nice if you select or type a particular movie name they would give you details about it as in who were the actors how was the movie how were the reviews a short snippet explaining what the movie is about its genre and stuff like that plus they have their own ratings to kind of gauge the opinion of the customers and IMDb being a popular site when they say that this movie or this person is good or liked by these many people people normally believe it so they have that score as well so if you talk about a website that basically deals with movies you understand the number of movies that are released worldwide and if most of them are present here on IMDb that means the database is huge so we are talking about data that is being processed in great numbers and great amounts I mean when you talk about the data that is here what is happening is you have n number of movies that are being released so if someone searches for a particular movie it has to go through the database and the data has to be fetched for him right away so how do you deal with the latency issue well this would answer a lot of questions or it would sum up a lot of topics that we've discussed here let us go through this use case so what happens here is in order to get the lowest possible latency all the possible results for a search are pre-calculated with a document for every combination of letters in the search what this means is based on the letters you have a document that is created and it is traversed in such an order that all the data is scanned letter wise when you actually go ahead and put forth a query what happens is suppose there is a 20 character word that you put in so there are somewhere around 23 into 10 to the power of 30 combinations that are possible so your computer would have to go through these many combinations what S3 does is it basically lets you store the data that IMDb has and once IMDb has stored that data they use cloudfront again we have discussed what cloudfront is they use cloudfront to move this data to the nearest possible location so that when a user fetches this data it is fetched from that location so what happens is basically when these many possibilities or combinations are to be dealt with it becomes complicated but in practice what IMDb does is it basically uses analytics in such a way that these combinations become fewer so in order to search for a 20 character word they basically have to go through around 15000 documents and because of S3 and cloudfront you can distribute all the data to different edge locations and buckets and since we are talking about a huge amount of data it is more than terabytes it is like hundreds or thousands of terabytes of data so we can understand how much data we are talking about and S3 actually serves a number of such use cases or requirements so guys I believe by now you've understood what S3 is [Music] what is Amazon VPC Amazon virtual private cloud is a service that lets you launch AWS resources in a logically isolated virtual
network that you define which means you will have a part of the AWS cloud that can be used only by your AWS resources and can be accessed only by you or the people you permit to access it now these people could be your business partners your employees or anyone you want you will have complete control over your virtual networking environment which would include selecting your own IP address range creating subnets and configuring route tables and network gateways I will explain all these terms in some time now Amazon VPC allows you to create multiple layers of security including security groups and network access control lists which will help you control access to your Amazon ec2 instances in each subnet you can use ipv4 and IPv6 for most resources in your virtual private cloud which will help you ensure secure and easy access to resources and applications now that you have some idea about what exactly AWS VPC is let us move on to our next topic and see how it works but before we get into the working part I would like to explain the terms which I mentioned before so if you're wondering what subnets are these are logical subdivisions of a larger network you can launch your AWS resources into a specified subnet and when I say AWS resources I mean your ec2 instances and so on there are two types of subnets public and private you use a public subnet for resources that must be connected to the internet and with a private subnet the resources won't be connected to the internet now one of the interesting pointers the IP addresses of all the subnets in the network will start with the same prefix next we have a route table now a route table contains a set of rules called routes that are used to determine where the traffic in a VPC is directed you can explicitly associate a subnet with a particular route table otherwise the subnet is implicitly associated with the main route table each route in a route table specifies the range of IP addresses where you want your traffic to go which is the destination and the gateway network interface or connection through which to send the traffic which is the target now as I've mentioned public subnets and private subnets let us see the difference between them so in a public subnet resources are exposed to the internet using the internet gateway they make use of both public and private IPs and are mainly used for external facing applications like web servers where you want information to be visible to the users whereas in a private subnet resources are not exposed to the outside world and they use only private IPs they're mainly used for back-end applications like database and application servers now that you have some idea about subnets and route tables let us move on to the working of VPC and understand it now if your account was created after the 4th of December 2013 your account comes with a default VPC that has a default subnet in each availability zone we will see this in the demo part now your default VPC includes an internet gateway and has the benefits of the advanced features provided by ec2-VPC and is ready for you to use now if you launch your instance in the default VPC and do not specify a subnet where to launch your instance the instance is automatically launched in your default subnet which is a public subnet you can also create your own VPC and configure it as you want this is known as a non-default VPC now the subnets that you create in a non-default VPC and any additional subnets that you create in your default VPC are called
non-default subnets now you can see we have something called an internet gateway now an internet gateway is a gateway that allows your instances to connect to the internet you can do this through the Amazon ec2 network edge each instance that you launch in your default subnet has a private ipv4 address and a public ipv4 address as you can see here by default each instance that you launch in a non-default subnet has a private ipv4 address but not a public ipv4 address unless you specifically assign one these instances can only communicate with each other but cannot access the internet but you can enable internet access for these instances by attaching the internet gateway to the VPC and associating an elastic IP address with the instance now how can you connect your VPC to another VPC or to your on-premises network so for this you can create a VPC peering connection between the two VPCs which enables you to route traffic between them privately now instances in either VPC can communicate with each other as if they were in the same network you can also create a transit gateway and use it as an interconnection between your VPC and your on-premises network the transit gateway acts as a regional virtual router for traffic flowing between its attachments which could include VPCs VPN connections AWS Direct Connect gateways and transit gateway peering connections now I hope you have some idea about the working of VPC let us move on to our next topic and see some of the use cases of VPC with Amazon VPC you can host a simple web application such as a blog or a simple website with an additional layer of privacy and security you can help secure the website by creating security group rules which will allow the web servers to respond to inbound requests from the internet while simultaneously prohibiting the web servers from initiating outbound connections to the internet now what this means is you can control your data traffic in and out of your VPC you can create a VPC that supports this use case by selecting VPC with a single public subnet only from the Amazon VPC console wizard with Amazon VPC you can also host multi-tier web applications and strictly enforce access and security restrictions between web servers application servers and databases you can launch web servers in a publicly accessible subnet while running your application servers and databases in a private subnet this will ensure that your application servers and databases cannot be directly accessed from the internet to create a VPC that supports this use case you can select VPC with public and private subnets in the Amazon VPC console wizard with Amazon VPC you can also back up and recover your data after a disaster by using Amazon VPC for disaster recovery you will receive all the benefits of a disaster recovery site at a fraction of the cost you can periodically back up critical data from a data center to a small number of Amazon ec2 instances with Amazon EBS volumes or you can also import your virtual machine images to Amazon ec2 to ensure business continuity Amazon VPC allows you to quickly launch replacement compute capacity in AWS these were some of the use cases of VPC now let us move on to our next topic and see an overview of the other networking concepts in AWS first let us talk about elastic load balancer elastic load balancing automatically distributes your incoming traffic across multiple targets the targets could be ec2 instances containers and IP addresses in one or more availability zones it will monitor the health of your registered targets and
Route traffic only to the healthy Target it scales your load balancer as incoming traffic changes over time you can add and remove compute resources from your load balancer as you need changes it will not disturb the overall flow of request to your application now elastic load balancing offers four types of load balancer first is the classic load balancer which provides basic load balancing across multiple Amazon ec2 instances and it operates both at request level and the connection level a classic load balancer is intended for applications that were built within the ec2 classic Network second we have application load balancer it is best suited for load balancing of HTTP and https traffics and provides Advanced request routing which is helpful in modern application architecture including microservices and containers third we have Network load balancer which is best suitable for load balancing of TCP UDP TLS where Xtreme performance is required but Network load balancer routes traffic to Target with an Amazon VPC and is capable of handling millions of requests per second but managing ultra low latencies fourth we have the Gateway load balancer it is used to deploy scale and run third-party virtual networking appliances Gateway load balancer is transparent to the source and destination of traffic which makes it well suitable for working with third-party appliances for Security Network analytics and other use cases this was about elastic load balancer now let us take a look at AWS Direct Connect AWS Direct Connect is a cloud service solution that makes it easier for you to establish a dedicated network connection from your on-premises to AWS cloud using AWS Direct Connect you can establish a private connection between AWS cloud and your data centers or your office you can increase the bandwidth throughput and provide a more consistent Network experience than internet based connections AWS Direct Connect is also compatible with all the AWS services and is available in the speed starting from 50 Mbps and can be scaled up to 100 GB per second it helps you build hybrid environment which allows you to use the benefits of AWS and continue to utilize your existing infrastructure now let us move on to our next service which is Route 53 Amazon Route 53 is a highly available and scalable Cloud domain name system or DNS web servers it is designed to give developers and businesses a reliable and cost effective way to Route end users to internet applications by translating names into numerical IP addresses this can be used for computers to connect to each other now Route 53 performs three main function first one is it registers your domain name every website needs a name be it edureka.com or anything like facebook.com so Route 53 lets you register a name for a website a web application known as a domain name the second function is it routes internet traffic to the resources for your domain so when a user opens a web browser and enters your domain name or the subdomain name in the address bar Route 53 helps connect the browser with your website or web application the third function is it checks the health of your resources Route 53 sends automated requests over the internet to Resource such as a web server to verify if it is reachable available and functional you can also choose to receive notification when a resource becomes unavailable and choose to Route internet traffic away from the unhealthy resources now these are some of the networking services in AWS now let us move on to a demo part where I will 
teach you how you can create a VPC a subnet a route table and an internet gateway so for our demo I've logged in to my AWS console if you do not have an AWS account yet and want to practice AWS services I would highly recommend you to create an AWS free tier account where you can access more than 75 AWS services for free for a year so a point to remember is your VPC will be set up in this region in my case it is Ohio you can see there are so many regions available here your VPC can be set up in your selected location now let us start the demo search for VPC you can just search over here VPC here it is now when you click on your VPCs you can see there is a default VPC as I mentioned before it will also have a default subnet inside it so now let us create our own VPC so I click on create VPC so it will ask us to name our VPC let me name it demo VPC now when you create a VPC you must specify a range of ipv4 addresses for the VPC in the form of a cidr block which stands for classless inter-domain routing we will go for the primary cidr block which is 10.0.0.0/16 now I mention 16 over here which means 16 bits are reserved for my VPC network now we'll go with the default I do not want any IPv6 cidr block next we have tenancy now here you can see we have two options one is default the other one is dedicated so what dedicated means is you can run your instances in your VPC on single-tenant dedicated hardware so we'll just stick to default here and create the VPC we get a message saying you have successfully created your VPC this is your VPC ID and this is your VPC name now let us create a subnet now as I've told you in the theory session a default subnet is created and you see we have three default subnets over here when I just scroll here you see the three default subnets are created in three different availability zones so each availability zone has one subnet in it so now let us create a new subnet we have to select a VPC from here here is our VPC now as we come down I will name our subnet let us name it demo subnet now we have to select an availability zone we have three availability zones in Ohio we'll just select one of these now we also have to mention the ipv4 cidr block for our subnet so we'll just set it to /24
so when I type 24 over here it means 24 bits are reserved for my subnet's network now the starting of the IP address of the subnet should be the same as the starting address of your VPC now after this let us create the subnet this might take a few seconds and you can see you have successfully created your subnet this is your subnet ID and this is your subnet name next we'll create our internet gateway we'll get back to route tables in a few minutes now as you can see there is a default internet gateway but we will go ahead and create our own internet gateway so here it will ask us to name our gateway let's just name it demo internet gateway and just create our internet gateway it is very simple now our internet gateway is created this is our internet gateway ID but the state is detached now you can go attach your internet gateway to your VPC just click on attach to VPC over here and just select our VPC from here here is our VPC so we'll just select it and now attach internet gateway so here the state is attached now let us move on to the next step and create a route table now as I've mentioned before a route table determines where the network traffic from a VPC is directed now these are some of the default route tables but we'll create a new route table so now we have to name our route table let us name it demo route table and we have to attach a VPC to it here's the VPC that we created we'll get a message saying the following route table was created and here is the route table ID we close it so after we create a route table we have to associate a subnet to it so we'll just click on the route table over here go to subnet associations edit subnet associations now select a subnet so I select our demo subnet and we'll save it over here after this subnet association we have to edit the routes to access the internet gateway so we click on routes edit routes now let us add another route so for the destination we'll give 0.0.0.0/0 now this means your traffic can flow anywhere for the target we'll keep it as internet gateway so we are trying to access our internet gateway so we'll select this and then save the route now we get a message saying route successfully edited we close it now with this we have created a VPC a subnet a route table and an internet gateway now let us see if we can launch an instance in our VPC let me go to a new tab AWS management console I'll type ec2 over here go to instances and then launch instance here let me launch a Windows instance so I'll select Windows now I'll select this instance type and then go to configure instance now here under network I'll remove the default VPC and set my own network I'll change the subnet also and we also have to change the auto assign public IP to enable we will enable this to create a public IP so after this we'll make no changes go to review and launch before that let us create a security group let us just name it demo security group and now review and launch we'll use an existing key pair and launch our instance now let us click on view instances now this might take a couple of minutes while our instance is getting created let us go back to our VPC we go to the VPC dashboard and here you can see there are two VPCs one is the default VPC and the other one which we've created and there are four subnets three are default and one which we have created now two default route tables and one which we created there was one internet gateway which was the default and then we created another one now let us check our ec2 instance we'll refresh it now you can see 2/2 status checks passed
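if you want the whole console demo as a script a rough boto3 sketch could look like the following the region availability zone CIDR blocks and the AMI ID are placeholders that simply mirror the demo

import boto3

ec2 = boto3.client("ec2", region_name="us-east-2")

# 1. the VPC with a /16 CIDR block
vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")["Vpc"]

# 2. a /24 subnet inside it in one availability zone
subnet = ec2.create_subnet(
    VpcId=vpc["VpcId"],
    CidrBlock="10.0.0.0/24",
    AvailabilityZone="us-east-2a",
)["Subnet"]

# 3. an internet gateway attached to the VPC
igw = ec2.create_internet_gateway()["InternetGateway"]
ec2.attach_internet_gateway(InternetGatewayId=igw["InternetGatewayId"], VpcId=vpc["VpcId"])

# 4. a route table associated with the subnet and a default route to the IGW
rt = ec2.create_route_table(VpcId=vpc["VpcId"])["RouteTable"]
ec2.associate_route_table(RouteTableId=rt["RouteTableId"], SubnetId=subnet["SubnetId"])
ec2.create_route(
    RouteTableId=rt["RouteTableId"],
    DestinationCidrBlock="0.0.0.0/0",     # send all non-local traffic...
    GatewayId=igw["InternetGatewayId"],   # ...to the internet gateway
)

# 5. launch an instance into the new subnet with a public IP assigned
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",      # placeholder Windows AMI for the region
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    NetworkInterfaces=[{
        "DeviceIndex": 0,
        "SubnetId": subnet["SubnetId"],
        "AssociatePublicIpAddress": True,
    }],
)
print(resp["Instances"][0]["InstanceId"])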
so we click on our instance ID now this is the instance ID this is a public ipv4 address a public ipv4 address now when we come down and see the VPC ID over here you can see the instance was created in our VPC ID that is the name of our VPC and it is also created in our subnet which is the demo subnet and you can also see it was set up in the availability Zone you had mentioned now the instance can be used only by me or the people I permit a part of AWS virtual Cloud was given to me to run my resources [Music] suppose you are a particular user and you are trying to visit a particular website and imagine that that website is based somewhere at a very far location suppose you are based somewhere in USA and that website its server actually hosts or is based in Australia now in that case when you make a request for particular object or particular image or maybe content now your request is sent to the server that is in Australia and then it gets delivered to you and in this process too there are quite a few interrelated networks that deal which you are not aware about the content directly gets delivered to you and you have a feeling where you feel that you type in a particular URL and the content is directly made available to you but that is not how it works quite a few other things happen in the interim and due to that what happens is the data that gets delivered to you it does not get delivered to you very quickly why is that because you'd be sending in a request it would go to the original server and from there the content is delivered to you now if you are based in USA the situation would be convenient if the data is delivered to you from somewhere close by now when you talk about a traditional system where you are sending a request to somewhere in Australia this is what happens your data or your request is sent to the server based in Australia and then it processes that request and that data is made available to you which gets delivered to you but if you have something like cloudfront what it does is it sets in an intermediate point where your data actually gets cached first and this cache data is made available to you on your request that means the delivery happens faster and you save a lot of time so how does AWS cloudfront exactly do it let's try to understand that but when you talk about AWS cloudfront what it does is first and foremost it speeds up the distribution process and you can Avail any kind of content whether it's static or dynamic and it is made available to you quickly what cloudfront does is it focuses on these three points one is your routing two is your Edge locations and three is the way the content is made available to you let's try to understand these one by one when you talk about routing I just mentioned that the data gets delivered to you through a series of networks so what cloudfront does is it ensures that there are quite a few Edge locations that are located close to you and the data that you want to access it gets cached so that it can be delivered to you quickly and that is why the data that is being delivered to you is more available than in any other possible case [Music] so what happens exactly and how does this content gets delivered to you let's try to understand this with the help of this diagram suppose you are a user so basically what you would do is you would send in a request that needs to reach a particular server now in this case what happens is first your request it goes to an edge location and from there to your server to understand this too you 
have to understand two scenarios. First and foremost, suppose you are based in the USA and you want to fetch particular data that is based in Australia. You would send in a request, but instead of sending that request directly to the server in Australia, AWS has these interim edge locations which are closer to you. The request goes to the edge location first, and it checks whether the data you are requesting is already cached there; if it is not cached, the request is sent to your origin server, the data is delivered to the edge location, and from there it comes to you. Now you might wonder: this sounds like a complex process, and if it takes this many steps, how is it delivered to me quicker than in the normal situation? Think of it from this perspective: if you sent the request directly to the main server, the data would still flow through some network before being delivered to you. What happens here instead is that the data gets cached at your edge location, so if you request it again it is delivered to you quicker, and if anyone else requests it, it is delivered to them quicker. Plus, when the edge location fetches this data from the so-called origin server, as soon as the first byte arrives at the edge location it starts being delivered to you. And how does this content get stored there? Your edge location also has a regional cache, which holds the content that is requested most frequently in your region. Suppose a website has a number of objects and some of them are requested a lot in a particular region; the closest edge location keeps a regional cache holding the content that is most relevant for those users so it can be delivered to them quickly. If this data gets outdated and is no longer being requested, it can be replaced with something that is requested more frequently. So this is how CloudFront works: it creates a distribution, and you have edge locations through which you can access that data faster. [Music] So what are the applications that CloudFront has to offer? I won't say applications, rather some of the benefits of using CloudFront; let's go through them one by one. First, it accelerates your static website content delivery - we just discussed that: if you request a particular image, it gets delivered to you quicker because it is cached at your edge location, and you do not have to worry about latency. Next, it serves both static and dynamic content; suppose you need a video or a live session, even that gets delivered quickly - as soon as the first byte arrives at the edge location, CloudFront starts streaming it to you, and the same happens with live streaming video, you get the stream instantly without any noticeable latency. Then there is encryption: when you access this content, CloudFront lets you use an HTTPS domain so you get secured data, which is one layer of security, and it also lets you add another layer by encrypting your data with key pairs, ensuring that your data is more secure and can be accessed privately as well.
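Since the whole point is that content gets served from an edge cache, one practical way to see this is to look at the X-Cache response header that CloudFront adds to responses. A small Python sketch, with a placeholder distribution domain, might look like this:

```python
# Hypothetical check: CloudFront reports whether an object was served from an
# edge cache in the X-Cache response header ("Hit from cloudfront" vs "Miss").
# The distribution domain below is a placeholder, not a real one.
from urllib.request import urlopen

url = "https://d1234example.cloudfront.net/index.html"  # placeholder domain

for attempt in (1, 2):
    with urlopen(url) as resp:
        print(attempt, resp.headers.get("X-Cache"))
# Typically the first request is a miss (fetched from the origin) and the
# second is a hit, because the object is now cached at the edge location.
```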
Customization at the edge - now what do I mean by this? There is some content that needs to be delivered to the end user, and if the customization happens at the origin server it might be time consuming and there are quite a few drawbacks to it. Say I need particular content and it needs to be processed or customized at the very last moment; these things can be done at the edge location as well, helping you save time, money and various other resources. And finally, CloudFront offers something called Lambda@Edge, which again lets you handle various customizations and lets you serve your content privately. So these were some of the applications or uses of CloudFront. [Music] Now I'm going to switch to my AWS console and talk about CloudFront distributions and how you can go ahead and create one, so stay tuned and let me quickly switch to the console. So yes guys, I've gone ahead and logged into my AWS console. For people who are completely new to AWS, you can go ahead and create a free tier account: visit the AWS website, search for free tier and create an account. They will ask you for your credit or debit card details, but they won't really charge you - a minimal amount is charged for verification and then reverted to your account. After that AWS offers you certain services for free for one complete year, as long as you stay within the specified limits, and those limits are more than enough to practice or learn AWS. So if you want proper hands-on experience with various AWS services, I suggest you visit their website and create this free tier account. Once you have that account, all these services are available to you - as I mentioned, there are 70-plus services which you can use for different purposes. Our focus today, however, is creating a CloudFront distribution, which we just discussed in the theory part; I will repeat a few points while we go ahead and create our CloudFront distribution. As I've already mentioned, we want to fetch a particular object, and if it is placed at an edge location it is made available to us from there. So imagine that our data is placed on an origin server - in our case let's make that an S3 bucket. S3 is nothing but a storage service from AWS: Simple Storage Service, three S's, and that is why we call it S3. We are going to create an S3 bucket, put some objects in it, and access them using our CloudFront distribution. So let's go ahead and create a bucket first. You can see S3 in my recently used services, or you can just type S3 here and it will be made available to you; click on it and the Simple Storage Service console opens, where you will go ahead and create a bucket.
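If you'd rather script the origin setup than click through the console, a minimal boto3 sketch might look like the following; the bucket name and file paths are placeholders:

```python
# Hypothetical sketch of the origin setup: create a bucket and upload the two
# demo objects. Bucket and file names are placeholders.
import boto3

s3 = boto3.client("s3", region_name="us-east-1")

bucket = "bucket-for-aws-demo-000"   # bucket names must be lowercase and globally unique
s3.create_bucket(Bucket=bucket)

# The HTML page and logo that the distribution will serve.
s3.upload_file("index.html", bucket, "index.html",
               ExtraArgs={"ContentType": "text/html"})
s3.upload_file("logo.png", bucket, "logo.png",
               ExtraArgs={"ContentType": "image/png"})
```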
This is how you do it in the console: you click on create bucket and give it a name, all in small letters - say bucket-for-aws-demo with some numbers at the end - then next, next, next; I need a basic bucket so I won't change any other details. And there you go, we have a bucket. In this bucket I'm going to put some content that we can request. So let's create an HTML file and add an image: I have a folder with an edureka logo, I'll use that logo, and I'll create an HTML file that refers to it. I open Notepad and write a simple HTML page - I won't get into the details of writing HTML, I assume you all know it, and if not you can reuse this code. I create a head tag, say demo tag, close the head tag, add a body that says welcome to edureka, close the body, and save the file as index.html. It probably got saved somewhere else, so let me just copy it and paste it into the folder. Now that we have these files, let's upload them to our S3 bucket: click upload, add files, go to the demo folder, select the two files and upload. They are small files so it should not take long - 50 percent, 100 percent successful, and there you go, the two files are in the bucket. So we have our S3 bucket with two files; this is our origin server. Now I need to create a distribution that uses it. I click on services, search for CloudFront, and choose create distribution. You have two options here: the second one is for live streaming your data, but that is not our case, so we stick with the web distribution and click get started. I need to enter an origin domain name - it gives me suggestions, and the first one is the bucket I just created. Origin path is something you fill in if you want to serve data from folders inside the bucket, but my content resides directly in the bucket, so I don't need to enter anything. Origin ID is filled in for me; I could change the name if I wanted to, but I'll leave it the way it is. Restrict bucket access - yes, I want to keep the bucket private, so I say restrict, create a new origin access identity, and there you go, a new identity is created; for grant read permissions on bucket I say yes, update my bucket policy accordingly. Then I scroll down; custom headers and so on I don't need. For how the data should be accessed, the viewer protocol policy, I choose redirect HTTP to HTTPS so that it is secured. Scrolling further there are other options - cached HTTP methods, object caching which I could customize but will leave at the defaults, and smooth streaming, which I say no to; those are things to focus on if you have streaming data, and you can fill in the details accordingly, but we are not doing that.
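For reference, roughly the same choices can be expressed through the CloudFront API. This is only a sketch under assumptions - a real DistributionConfig has many more fields, and the bucket and origin names here are placeholders:

```python
# Hypothetical sketch of the distribution via boto3: an S3 origin, redirect
# HTTP to HTTPS, and a default root object (which would avoid the /index.html
# error shown later). Names are placeholders, not values from the demo.
import time
import boto3

cf = boto3.client("cloudfront")

config = {
    "CallerReference": str(time.time()),        # any unique string
    "Comment": "edureka cloudfront demo",
    "Enabled": True,
    "DefaultRootObject": "index.html",
    "Origins": {
        "Quantity": 1,
        "Items": [{
            "Id": "S3-bucket-for-aws-demo-000",
            "DomainName": "bucket-for-aws-demo-000.s3.amazonaws.com",
            "S3OriginConfig": {"OriginAccessIdentity": ""},  # fill in an OAI to restrict access
        }],
    },
    "DefaultCacheBehavior": {
        "TargetOriginId": "S3-bucket-for-aws-demo-000",
        "ViewerProtocolPolicy": "redirect-to-https",
        "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
        "MinTTL": 0,
    },
}

dist = cf.create_distribution(DistributionConfig=config)["Distribution"]
print(dist["DomainName"], dist["Status"])  # Status stays "InProgress" for a while
```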
doing that. What is the price class you want to choose? You have some options to pick from; I'll go with the default one, then scroll down and say create distribution. Your distribution is now getting created, and this process takes a while - if you click on it you'll see it is in progress, and it takes somewhere around 10 to 12 minutes for the distribution to be created. So I'm going to pause here and come back once the distribution is completed; bear with me for that while. And there you go, the distribution has been deployed - the status says deployed - so we can go ahead and use it. We have a domain name here which I can enter in the browser, and we'll be redirected to the page; what happens is you are given access to this page through the edge location, meaning you are not going to the origin server, the data is being served to you from your distribution, or rather the edge location. So you enter this address and hit enter - and there's an error. It shouldn't have been there... oh, I know what happened: when you create your distribution you have the option of selecting a default root object, which I did not set. So I'll have to add the path here, /index.html, and if I hit enter now it takes us to the demo tag page which says welcome to edureka. So this was the HTML file we created, and we also had a PNG file we wanted to access, named logo.png. Okay, this is funny, this should not happen - why is this happening? Let's check whether the file is there, because if it was there we should be able to access it. And what was my bucket... this was the one... oh, I see what happened: when I uploaded that file it got saved with the extension .png.png. So if I come here and type .png once more - there you go, you have that object delivered to you through your distribution. [Music] First, let us try to understand why we need cloud-based monitoring with a couple of scenarios. In our first scenario, consider that you have hosted a messenger app on the cloud and your app has gained a lot of fame, but lately the number of people using your application has gone down tremendously and you have no idea what the issue is. Well, it could be due to two reasons: firstly, since your application is a complex multi-tier architecture, monitoring the functionality of every layer by yourself will be a difficult task, don't you think? And secondly, since you're not using any kind of monitoring tool here, you wouldn't know how your application is performing on the cloud. One solution is to employ a monitoring tool. It will provide you insights regarding how your application is performing on the cloud, and with this data you can make the necessary improvements and make sure that your application is on par with today's customer needs - and after a while you'll notice that the number of people using your application has increased. Moving on to our next scenario, let's say your manager has assigned you a project and wants you to make it as cost effective as possible. As you can see, in this project you're using five virtual servers which perform highly complex computations, and all these servers are highly active during the daytime - they handle most of their traffic during the day - but during the night the servers are idle.
By that I mean the CPU utilization of these servers during the night is less than 15 percent, and yet, as you notice here, in both cases you are paying the same amount of money. You have to notice two points: firstly, all your virtual servers are underused during the night, and secondly, you're paying for resources you are not using, which is definitely not cost effective. So one solution is to employ a monitoring tool that sends you a notification when these servers are idle, and you can then schedule the servers to stop on time. This is one way to make your project more cost effective and avoid paying unnecessary operating costs. Let's consider another scenario for better understanding: let's say I have hosted an e-commerce website on the cloud, and during the sales season many customers are trying to access my website, which is definitely a good thing, but for some unfortunate reason application downtime has occurred. Remember that I'm not using any kind of monitoring tool here, so it will be difficult for me to identify the error and troubleshoot it in a reasonable amount of time, and it's quite possible that in this period my customers will have moved on to a different website - so you see that I've lost a potential customer here. If I had had a monitoring tool in this situation, it would have identified the error at an earlier stage, I could have rectified the problem, and I could easily have avoided losing my customer. So I hope that with the help of these use cases you were able to understand why we need cloud-based monitoring. Let me just summarize what we have learned till now: we need monitoring, firstly, because it provides a detailed report regarding the performance of your applications on the cloud; secondly, it helps us reduce the unnecessary operating costs which we are paying to the cloud provider; moreover, it detects problems at an earlier stage so that you can prevent disasters later; and finally, it monitors the user's experience and provides us insights so that we can make improvements. So in this session we will be discussing one such versatile monitoring tool called Amazon CloudWatch. Amazon CloudWatch is a powerful monitoring tool which offers you a reliable, scalable and flexible way to monitor your resources or applications which are currently active on the cloud. It offers two levels of monitoring: basic monitoring and detailed monitoring. If you want your resources to be eligible for basic monitoring, all you have to do is sign up for the AWS free tier; in basic monitoring your resources are monitored less frequently - every five minutes - and you're provided with a limited choice of metrics. In detailed monitoring your resources are monitored more frequently, every one minute, and you're provided with a wide range of metrics to choose from, but for your resources to be eligible for detailed monitoring you'll have to pay a certain amount according to the AWS pricing details. Now let's look at a few monitoring services offered by Amazon CloudWatch: firstly, it provides a catalog of standard reports which you can use to analyze trends and monitor system performance; then it monitors, stores and provides access to system and application log files; moreover, it enables you to set up high resolution alarms and send notifications if needed; and Amazon CloudWatch also sends system events from AWS resources to AWS Lambda functions, SNS topics, and so on.
So if you have not understood some of the terms I've used here, don't worry - we'll get to know these terms as we progress through the session. Earlier I mentioned that Amazon CloudWatch allows administrators to monitor multiple resources and applications from a single console. These resources include virtual instances hosted in Amazon EC2, databases located in Amazon RDS, data stored in Amazon S3, Elastic Load Balancers, and many other resources like Auto Scaling groups, AWS CloudTrail and so on. Now let's try to understand Amazon CloudWatch a little more deeply: first we will look at a few Amazon CloudWatch concepts, and then I'll explain how CloudWatch actually operates. First, a metric: a metric represents a time-ordered set of data points that are published to CloudWatch. What I mean by that is, suppose you have three variables X, Y and Z, and you have created a table of the values of X with respect to Y over a period of time; in this scenario the variable X which I have been monitoring is a metric, so you can think of a metric as a variable that needs monitoring. Next we have dimensions. Consider the same variables: previously you created a table of X with respect to Y; now let's create another table of X with respect to Z. So we have two tables which describe the same variable X but from two different perspectives - these are nothing but dimensions. Basically, a dimension is a name-value pair that uniquely identifies a metric, and Amazon CloudWatch allows you to assign up to 10 dimensions to a metric. Then you have statistics: from the two tables of X with respect to Y and Z, you can combine the data, say to create a chart or plot a graph for analytical purposes; this combination of data is nothing but statistics - statistics are metric data aggregations over a specified period of time. Then you have the alarm: let's say you have been monitoring this variable X for some time and you want a notification to be sent to you when the value of X crosses a certain threshold - all you have to do is set an alarm; basically, an alarm can be used to automatically initiate actions on your behalf. Now that you have a clear understanding of the concepts, let's see how Amazon CloudWatch operates. CloudWatch has complete visibility into your AWS resources and the applications which are currently running on the cloud: first it collects metrics and logs from all these resources and applications, then by using these metrics it helps you visualize your applications on the CloudWatch dashboard. Moreover, if there is some sort of operational change in your AWS environment, Amazon CloudWatch becomes aware of the change and responds to it by taking some corrective action - maybe it sends you a notification, or it activates a Lambda function, and so on. And finally, it provides real-time analysis using CloudWatch metric math; if you're wondering what that is, it is a feature which combines multiple CloudWatch metrics and creates a new time series, which you can view on the CloudWatch dashboard as well. Working this way, Amazon CloudWatch provides you with system-wide visibility, it provides actionable insight so that you can monitor your application performance, it allows you to optimize resource utilization if needed, and finally it provides a unified view of the operational health of your AWS environment.
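To tie the metric, dimension, statistic and period concepts together, here is a small boto3 sketch that pulls CPUUtilization for one instance; the instance ID is a placeholder:

```python
# CPUUtilization is the metric, InstanceId is a dimension, and "Average" over a
# 300-second period is the statistic. The instance ID below is a placeholder.
import boto3
from datetime import datetime, timedelta

cw = boto3.client("cloudwatch", region_name="us-east-1")

stats = cw.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,                 # 5 minutes, i.e. basic monitoring granularity
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2))
```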
So I hope by now you know what Amazon CloudWatch is; now let's try to understand how Amazon CloudWatch works with the help of a demo. So guys, this is my AWS Management Console, and the services you can see on the screen are the services offered by AWS, but in this demo we're going to use only a few of them: CloudWatch, EC2 and a service called Simple Notification Service. When I click on EC2 it takes me to the EC2 dashboard, where you can see that I have four instances which are currently active. In this demo I'm supposed to get a notification saying that the CPU utilization of my instance is less than 25 percent, and for me to receive a notification I first have to create a topic and subscribe to it with my email ID. So let's explore the Simple Notification Service, where you can create a topic and subscribe to it. Once you reach the SNS dashboard, click on topics in the navigation pane and click on create new topic; give your topic a name, say CW topic, give the display name as well - let's use the same name - and click the create topic option. You can see that I've successfully created a topic. Now click on the topic you created, select actions and then the subscribe to topic option. I want the notification to be sent to me in the form of an email - you have other options as well, such as a Lambda function or JSON - but I'm going to choose email, give my email ID, and click on the create subscription option. So now whenever the AWS console wants to send me a message, it will send it to the email ID I used to subscribe to the topic. Now let's go back to the CloudWatch dashboard. This is my CloudWatch dashboard, and you can see different options in the navigation pane: first I have dashboards, where I can view all my metrics in one place; then you have alarms, which shows the list of alarms you have configured; then you have events and logs, which we will be exploring later; and our topic of interest is the last one, which is metrics. Select the metrics option, then choose EC2 and then per-instance metrics. When you do that, a list of metrics will be shown to you - NetworkOut, CPUUtilization, NetworkPacketsIn, NetworkPacketsOut and various other metrics for the resources which are currently active on your cloud - but we are interested only in CPU utilization, so I'm going to type that here. It shows the list of instances which are active, and I'm going to choose the Windows 2 instance and then click on the graphed metrics option - let's select Windows 2 only. On the right side you can see a create alarm button; when you click on that, a dialog box opens where you can configure your alarm. First let's give it an alarm name, say low CPU utilization, and a brief description as well, say lower than 25 percent CPU utilization. Now I'm going to set the threshold, which is less than 25 in this case, and on the right side you can see a period option: if your resources are eligible for basic monitoring, the period option is 5 minutes by default, and if your resources are eligible for detailed monitoring it is usually one minute. When you scroll down you can see a send notification to option, so select the topic which you created previously - that will be CW topic in my case - and then click on create alarm.
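The console steps above - an SNS topic, an email subscription and a low-CPU alarm pointing at that topic - correspond roughly to the following boto3 calls; the email address and instance ID are placeholders:

```python
# Hypothetical sketch of the same configuration: create an SNS topic, subscribe
# an email address, and create a low-CPU alarm that notifies the topic.
import boto3

sns = boto3.client("sns", region_name="us-east-1")
cw = boto3.client("cloudwatch", region_name="us-east-1")

topic_arn = sns.create_topic(Name="CW-topic")["TopicArn"]
sns.subscribe(TopicArn=topic_arn, Protocol="email", Endpoint="you@example.com")

cw.put_metric_alarm(
    AlarmName="low-CPU-utilization",
    AlarmDescription="Lower than 25 percent CPU utilization",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                        # 5 minutes for basic monitoring
    EvaluationPeriods=1,
    Threshold=25.0,
    ComparisonOperator="LessThanThreshold",
    AlarmActions=[topic_arn],          # send the notification to the topic
)
```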
Well, there is some error - it says an alarm with this name already exists - so let's give it another name, including the name of my instance, and try again. When you go to the alarms section and click the refresh option, it shows that I've successfully created the alarm; you can see the CPU utilization alarm for my instance, and when you click on it, it shows all the details like the description, the threshold, and what action it is supposed to take when the alarm fires. So guys, try it out - it will make the CloudWatch console much easier to understand. Okay, now you know what Amazon CloudWatch is, what it does and the way it operates, but to understand the capability of Amazon CloudWatch completely we should be aware of two important segments of it, which are CloudWatch Events and CloudWatch Logs. Let's discuss them one by one. First we have Amazon CloudWatch Events. Consider this scenario: let's say I've created an Auto Scaling group and this Auto Scaling group has just terminated an instance; you can see this as some sort of operational change in the AWS environment. When this happens, Amazon CloudWatch becomes aware of the change and responds by taking some corrective action - in this case it might send you a notification saying that your Auto Scaling group has terminated an instance, or it might activate a Lambda function which updates a record in an Amazon Route 53 hosted zone. So basically, what Amazon CloudWatch Events does is deliver a real-time stream of system events that describe changes in your AWS resources. Now let's look at a few concepts related to CloudWatch Events. First we have the event: an event indicates a change in your AWS environment, and AWS resources generate events whenever their state changes. Let's say you have terminated an active EC2 instance - the state of this EC2 instance has changed from active to terminated, and hence an event is generated. Then you have rules: rules are nothing but constraints; every incoming event is evaluated to see if it has met the constraint, and if so, the event is routed to a target. The target is where the events are handled, and targets can include Amazon EC2 instances, a Lambda function, an Amazon SNS topic and so on. Now let's try to understand Amazon CloudWatch Events better with the help of a use case. In this use case we are going to create a system that closely mimics the behavior of dynamic DNS, and for those who don't know what dynamic DNS is, let me give an example. Let's say you want to access the internet at home; your internet service provider assigns you an IP address, but since the provider uses different kinds of online systems, this IP address keeps changing, because of which it might be difficult for you to use this IP address with other services like a webcam, a security camera, a thermostat and so on. This is where dynamic DNS comes into the picture: dynamic DNS assigns a custom domain name to your home IP address, and this domain name is automatically updated when the IP address changes. So basically, dynamic DNS is a service that automatically updates a name server in the Domain Name System, and Amazon offers a similar kind of service called Amazon Route 53. In this use case we are going to update Amazon Route 53 whenever an Amazon EC2 instance changes its state. Now let's see how the use case actually works: whenever an EC2 instance changes state, Amazon CloudWatch Events becomes aware of this operational change and triggers a Lambda function; this Lambda function uses information about the instance, such as its public and private IP address, and updates a record in the appropriate Route 53 hosted zone.
So let's say you have an EC2 instance and you have terminated it: Amazon CloudWatch Events becomes aware of this and triggers a Lambda function, and the Lambda function deletes the record from Amazon Route 53. Similarly, if you have created a new instance, CloudWatch Events again becomes aware of it and triggers a Lambda function, and this Lambda function creates a new record in Amazon Route 53. I hope you have understood what Amazon CloudWatch Events is and what it does; now let's see how it works with the help of a demo. In this demo we will schedule the stopping and starting of EC2 instances with the help of a Lambda function and CloudWatch Events. You can see that I have four instances which are currently active. First I'm going to create a Lambda function which stops my Windows 2 instance, and you need to know that for the Lambda function to do that, we need to assign permissions. Amazon provides a service called IAM - Identity and Access Management - where you can assign permissions. When you search for IAM it shows you the service; select it, and on the IAM dashboard, in the navigation pane, you can see a policies option. Select that and click on the create policy option. First it asks you for a service, which will be EC2 in our case, and then the actions, which will be to start and stop my EC2 instances. So let's search for start instances - a predefined action is already there, so choose that - then stop instances, select that as well, and then I want this to apply to all resources, so I choose all resources and click on the review policy option. Let's give our policy a name, say start-and-stop-ec2-instances, and a brief description, say to start and stop instances, and click on create policy. It takes a while, and there we go, I've successfully created a policy. Next we have to attach this policy to a role that the Lambda function will use, so click on roles, then create role, choose Lambda as the service, click next: permissions, search for the policy we just created, select it, click next: review, give it a name - start-stop-instances - and click on create role. So what we have done here is given the Lambda function permission to control EC2 instances. Now let's create the Lambda function: search for Lambda in the search tab, click on create function, give your Lambda function a name, say stop-instance, select the role you previously created, and click on create function. You can see that I've successfully created a Lambda function. Now I'm just going to copy the code to stop EC2 instances, paste it over here, and make sure to save it. As you can see, this function asks for the instance region and instance ID, so let's configure those details: name the test event stop-instance, and here you'll have to insert the instance region and instance ID. We'll have to copy the region and ID of whichever instance we need, so let's go to the EC2 dashboard; I want my Windows 2 instance to be stopped, so this is the instance ID which I'm going to paste over there, and similarly the instance region.
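The code pasted into the Lambda function is not shown in full in the recording, but a minimal handler that stops a given instance typically looks something like this sketch; the region and instance ID are hard-coded placeholders here, and the start function would be identical apart from the API call:

```python
# Minimal sketch of a Lambda handler that stops an EC2 instance. In the demo
# the region and instance ID come from configuration; here they are placeholders.
import boto3

REGION = "us-east-1"
INSTANCE_IDS = ["i-0123456789abcdef0"]   # e.g. the Windows 2 instance

def lambda_handler(event, context):
    ec2 = boto3.client("ec2", region_name=REGION)
    ec2.stop_instances(InstanceIds=INSTANCE_IDS)
    return "stopped " + ", ".join(INSTANCE_IDS)

# The companion "start" function is the same except that it calls
# ec2.start_instances(InstanceIds=INSTANCE_IDS).
```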
Well, in this case I am choosing the Windows 2 instance; you can choose whichever instance you want to stop. Once you've done that, click on the create option and test the configuration. When you scroll down you can see the execution results, and it says that my instance has been successfully stopped. Let's go and check in the EC2 dashboard: I'll refresh it, and you can see that my Windows 2 instance has successfully stopped. Now we'll create another Lambda function which will restart it. Again the same steps: search for Lambda, click on the create function option, give it a name - let's say start-instance - choose the role you previously created, and click on create function. Again you'll have to paste the code to start the instances over here and click on the save option. Let's configure the test: name it start-instance, and again it asks for two attributes, which are the instance region and ID. All we have to do is copy the instance region and ID like we did earlier, so let's go to the EC2 dashboard - you can see my Windows 2 instance has been stopped - copy the instance ID, paste it over there, similarly the instance region, and click on the create option. Now test the configuration, and when you scroll down you can see that my instance has successfully restarted; in the EC2 dashboard I'll refresh, and my Windows 2 instance is on its way to being restarted. Till now I've used the Lambda functions manually to start and stop my instances, but now I'm going to automate this process with the help of Amazon CloudWatch. So let's go to the CloudWatch dashboard - it's taking a while to load - then choose the events option and click on create rule. Here we are going to schedule the instance to stop every day in the evening and to restart every day in the morning, so click on schedule. If you want to know more about cron expressions you can visit the Amazon documentation; let me show you - a cron expression has six fields: minutes, then hours, then day of month, month, day of week and year. We are concerned only with minutes and hours because we want our instances stopped and started every day of every month, so let's fill in those details. To create the rule that stops the instance at 6:30 in the evening, we give 30 for the minutes and 18 for the hours, which is 6:30 PM, and we don't have to mention anything for the rest. When you give a proper cron expression, sample trigger timings are shown to you, and you can see the rest of the sample timings here. Now let's add the target, which is a Lambda function in our case; select the stop-instance function and click on configure details. Give your rule a name, say stop-my-ec2-instance, and a description, to stop my EC2 instance at 6:30 PM every day, and click on create rule - you can see that I've successfully created a rule to stop my instance every day at 6:30 PM. Now let's create another rule to restart this instance every day at 6 AM in the morning: again choose schedule, enter the cron expression for 6 AM - again the sample timings are shown - then add the target, again a Lambda function, select the start-instance function and click on configure details. Let's name it start-my-ec2-instance with the description to start my EC2 instance every day at 6 AM, and click on create rule.
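The same two scheduled rules can be sketched with the CloudWatch Events API; the cron expressions below follow the six-field format just described (evaluated in UTC), and the Lambda ARNs are placeholders. Note that when you script this yourself, you also have to grant Events permission to invoke the functions, something the console does for you automatically:

```python
# Hypothetical sketch of the two scheduled rules via boto3.
import boto3

events = boto3.client("events", region_name="us-east-1")

# Stop the instance at 18:30 every day, start it at 06:00 every day (UTC).
rules = [
    ("stop-my-ec2-instance",  "cron(30 18 * * ? *)",
     "arn:aws:lambda:us-east-1:123456789012:function:stop-instance"),
    ("start-my-ec2-instance", "cron(0 6 * * ? *)",
     "arn:aws:lambda:us-east-1:123456789012:function:start-instance"),
]

for name, expr, lambda_arn in rules:
    events.put_rule(Name=name, ScheduleExpression=expr, State="ENABLED")
    events.put_targets(Rule=name, Targets=[{"Id": "1", "Arn": lambda_arn}])
```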
So now we have successfully created two rules to stop and start the EC2 instances at 6:30 PM and 6 AM respectively. What we have done here is save ourselves time: we've automated the process of stopping and starting EC2 instances. Try it yourself, it'll be easier for you to understand. Now let's discuss our next topic, which is Amazon CloudWatch Logs. Have you heard of log files? Log files are nothing but a detailed record of the events that occur when you're using your AWS environment. You can view log files on your on-premises machine as well: search for an app called Event Viewer, select it, click on Windows Logs and select System, and a list of log entries will be shown to you; when you choose a particular entry, all its details are shown, like the keywords, the time it was logged, the source, and various other details. Similarly, log files are created when you use your AWS environment, and you can consider these log files as a data repository: most metrics are generated from this log data, so whenever a metric is generated, a part of the data is extracted from the logs - you are essentially designing metrics to your liking by choosing a part of the data from this log data. So these log files are what you'd call the primary data store, and Amazon CloudWatch Logs is used to monitor, store and access log files from AWS resources like EC2 instances, CloudTrail, Route 53 and so on. Let's try to understand CloudWatch Logs better with the help of a few features. Firstly, you can use Amazon CloudWatch Logs to monitor your application and system log files: let's say you have made a lot of errors while trying to deploy your application on the cloud - in this scenario you can use CloudWatch Logs to keep track of the errors and send you a notification when the error rate exceeds a certain threshold, so that you can avoid making those errors again. Then you have log retention: by default logs are kept indefinitely, but CloudWatch provides you with an option to set the retention period anywhere between one day and 10 years. Then you have log storage: you can use CloudWatch Logs to keep your log data in highly durable storage, and in case of system errors you can access the raw log data from that storage. And then you have DNS queries: you can use CloudWatch Logs to log information about the DNS queries that Route 53 receives. Now let's have a look at a few concepts regarding CloudWatch Logs. First we have the log event: a log event is just a record of an activity that has occurred in the AWS environment - it's straightforward. Then you have the log stream: a log stream is a sequence of log events that have the same source. Then you have something called the log group: a log group is a group of log streams that share the same monitoring and access control settings, and each log stream has to belong to one log group or another. Now let's try to understand CloudWatch Logs better with the help of this use case, where we are going to use Amazon CloudWatch Logs to troubleshoot system errors. You can see that I have three instances here and a CloudWatch agent which is monitoring all three. What the CloudWatch agent does is collect custom-level metrics and logs from all these EC2 instances, and everything the agent collects is processed and stored in Amazon CloudWatch Logs. CloudWatch then continuously monitors these metrics, as you can see here, and you can set an alarm which will send you a notification when some sort of error occurs in the
system so whenever you receive a notification saying that some sort of error is there in their system you can access the original log data which is stored in Cloud watch logs to find the error so this is how you can use Amazon Cloud watch logs to troubleshoot the system errors so basically you are having a look at original data so you can solve your problems faster and quicker foreign so why do we need cloud formation so for example you have an application now most of you guys know that for and we have done this in the previous sessions as well that we created an application right now that application is actually dependent on a lot of AWS resources now if we were to deploy and manage all these resources separately it will take up a lot of time of yours right so to reduce that time or to manage all these resources what if I told you you have a service yes you got that right so you have a service called AWS cloudformation so using AWS cloudformation you can manage and create and provision all these resources at a single place now this is what cloud formation does but now what is cloud formation exactly so a cloud formation is basically a service which helps you model and set up your AWS resources so that you can spend more time on your application rather than setting up and provisioning these resources right so basically it's a tool using which you can create your applications quickly also you can create templates in AWS cloudformation now how do you create templates basically you'd be using the cloud formation designer you'd be putting in all the resources that are needed you would Define finding the dependencies of these resources and then you'll be saving this design as a template right now what will you do with this template this template can be used to create as many copies as you want right say for example you have a use case wherein you want your application in multiple regions for backup purposes right so if you want that you won't be implementing or you won't be creating each and every resource one by one in each of the regions what you can do is you will create it at one place in cloud formation have that template in your hand and deploy that template in the other regions as well right so what will this do so first of all your replication will be very precise right so there won't be any changes in the copies that you have made second of all you'll be doing that quickly because you don't have to do the process all over again you just have to click a button and that template will be provisioned or will be launched in that region so this is what AWS cloud formation is all about it makes your life simpler by handling all the creation and the provisioning part right so this is what is AWS cloud formation now how do we get started in cloudformation since it's a very useful service how can you as a user use the service so let's move on so for using the cloud information service first of all you need a Json script now why do you need a Json script because you'd be creating a template right in the cloud formation designer you'd be using the drag and drop option and filling in the AWS resources right now when you'll be doing that in the back end it will actually be creating a Json script now what you can do as a user is if you're good in Json you can create your own Json script otherwise you can use the cloud formation designer to create a template now for creating a template like I said you need a Json script now what is the Json script then so a Json script is basically a JavaScript object 
notation file, which is an open standard format - that means it is human readable, so both you and the computer can read it, and you don't need deep programming knowledge for this. What you as a user would do is design your template in the CloudFormation designer, and that automatically creates the JSON script; you can also do it the other way around - write your own JSON script and feed it into the CloudFormation designer. This is how CloudFormation works and how you'll be using it. But then, how do you learn to write the JSON script? It's fairly easy, because you follow a fixed structure in the JSON document. The first field is AWSTemplateFormatVersion, which contains the version of your template. Next is the Description, a text-only field where you describe your template in words - so if I'm a user and I want to know what your template does without reading the whole script from beginning to end, I can read the description in plain English and understand what it will do. Then you have Metadata, which contains additional properties of your template. Then you have Parameters: any values that you have to pass to the template are included in the parameters. Next come Mappings, which are fixed lookup tables of keys and values that your template can refer to. Then come Conditions: these are the conditions that are evaluated when the stack is created or updated. Then come Outputs: whatever outputs your stack will provide come under the outputs section. And then you have the Resources field, which includes all the AWS resources that you want in your infrastructure. If you look carefully, you will mostly be dealing with the Resources part, because you'll be populating the resources and defining their properties. Now, this is theory - what does a JSON document actually look like? It looks something like this: you'll be working in the Resources field, and this particular document, if you noticed, is about S3 - you are including an S3 bucket, so you specify the logical name of the bucket, and inside its braces you specify the type, which tells CloudFormation which service the resource belongs to; over here you specify the S3 service. Don't worry, I'll show you this JSON document in a moment, but before that you should understand how a JSON document is structured, and that is what we just did. Now guys, this is the CloudFormation dashboard. You have to create a stack over here, and for the creation of a stack you require a template, so first we'll design a template and then we'll create a stack. This is my CloudFormation designer.
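Before we do it in the designer, here is a minimal sketch of what such a single-bucket template can look like and how it could be launched through the API; the bucket and stack names are placeholders, and in the demo we do the same thing through the console instead:

```python
# A minimal sketch, assuming a single-bucket template like the one the designer
# generates: the Resources section declares one AWS::S3::Bucket, and
# create_stack launches it. Names are placeholders.
import json
import boto3

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "edureka CF demo - one S3 bucket",
    "Resources": {
        "EdurekaCF": {                       # logical name of the resource
            "Type": "AWS::S3::Bucket",
            "Properties": {"BucketName": "edureka-cf-demo-bucket"}
        }
    }
}

cfn = boto3.client("cloudformation", region_name="us-east-1")
cfn.create_stack(StackName="edureka-cf", TemplateBody=json.dumps(template))
```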
Let's go back to our slide and see what we actually have to do. In our first demonstration we'll be creating an S3 bucket from CloudFormation, so we'll design a template around that first and then deploy it. Let's go to our CloudFormation window. We have to create an S3 bucket, so we scroll down to the S3 service, click on bucket and drag it over here - this is the S3 bucket, guys. Now you can edit the name of the template over here; let's name it edureka CF, meaning edureka CloudFormation. This is your JSON code now - let me make it a little bigger for you. I didn't write this JSON script; I just dragged and dropped this bucket in the CloudFormation designer and it automatically generated the script. Comparing it with the code we have in our presentation: we have Resources - yes, we have Resources; we have the logical name of the bucket; and then its Type, where the S3 service is specified. If we want to change the name of the bucket we can do that over here, so let's specify it as edureka CF. And we are done - this is all you have to do. Now, to run this in CloudFormation, all you have to do is click on this create stack icon. This leads me to the create stack page; it has automatically uploaded the template to an S3 bucket and filled in the URL here. Click on next, specify the stack name - let's make it edureka CF - you don't have to specify anything else, click next, and click create. You'll see the events on this page; let's refresh it. It says create in progress, so my template is now being turned into a stack, and that stack will contain the AWS resource defined in it, which is the S3 bucket. Let's refresh and check if our stack has been created - it's still in the creation phase, so let's wait. All right, now it shows me that the creation is complete. So let's go to our S3 service and check whether we have the bucket that AWS CloudFormation created for us - and here it is, guys, this is the bucket that we created. You can see the timestamp: it's March 28 2017.
Today is March 28 2017, and the time shown is only a couple of minutes ago, so this bucket has just been created by CloudFormation. So guys, like I said, it is very easy to understand and to deploy as well: you basically just have to create a template and that is it, AWS CloudFormation will do the rest for you, and the cool part is that you can replicate the template as many times as you want, which saves you time. Okay, this demonstration is done - we have created an S3 bucket using CloudFormation. Let's see what our second demonstration is all about. Now we'll be creating an EC2 instance on which we'll deploy the LAMP stack, which means on that EC2 instance we'll be installing Linux, Apache, MySQL and PHP. Let's see how we'll do that. For our second demonstration we go back to the CloudFormation console, click on create stack, and now we have to launch a LAMP stack. A LAMP stack is available as a sample template in AWS, so we can select the sample template and click on view or edit template in designer. A LAMP stack is basically an EC2 instance with Linux, Apache, MySQL and PHP installed on it; you can see in the designer that it specifies an EC2 instance and attaches a security group to it - you need the security group because you obviously have to connect to the EC2 instance. Remember, a LAMP stack is basically a web server. Now let's look at the template for this LAMP stack. We discussed the structure of a JSON document, if you remember: the first part was the AWSTemplateFormatVersion, then you have the description, then you have the parameters. Parameters, if you remember, are the values that you pass to the template. If you're creating a LAMP stack you'll need the database name, the database password and quite a few other things - if you're installing MySQL you'll need the username and the password - so all of that you feed in here in the parameters. You specify the key name, because if you are connecting to the EC2 instance through an SSH connection you'll need a key pair, and then you specify the DB name and the other details. Now, how will that look when you create a stack? Let's do it: we click on this icon, which creates a stack automatically, we are prompted to this page, and we click on next. Then you reach the page where you fill in the entries. First we specify the stack name - let the stack name be lamp-demo - and then we move on to the parameters part: whatever you specified in the JSON parameters field is reflected over here. We specified DBName, so it is asking me for the DB name - let's give it as edureka - let's give the DB password as something, the DB user as edureka, and the instance type as t1.micro. Why t1.micro? Because, if you noticed, in the template we didn't specify a virtual private cloud, that is a VPC. All the new instances launched in EC2 these days are by default launched in a VPC, but since we are creating a JSON file and we didn't specify a VPC, you have to select an older generation of EC2 instance, so let it be T1.
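For completeness, filling in those parameter values can also be done through the API. This is a hedged sketch: the sample-template URL and the exact parameter keys depend on the LAMP sample you pick, so treat them as assumptions and check the template you actually use:

```python
# Hypothetical sketch of launching a LAMP sample stack with parameters,
# mirroring the form filled in above. URL and parameter names are assumptions.
import boto3

cfn = boto3.client("cloudformation", region_name="us-east-1")

cfn.create_stack(
    StackName="lamp-demo",
    TemplateURL="https://s3.amazonaws.com/cloudformation-templates-us-east-1/LAMP_Single_Instance.template",
    Parameters=[
        {"ParameterKey": "KeyName",      "ParameterValue": "edureka_a"},
        {"ParameterKey": "DBName",       "ParameterValue": "edureka"},
        {"ParameterKey": "DBUser",       "ParameterValue": "edureka"},
        {"ParameterKey": "DBPassword",   "ParameterValue": "change-me"},
        {"ParameterKey": "InstanceType", "ParameterValue": "t1.micro"},
    ],
)
```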
T1 is an older generation; it runs without a VPC as well. Then you have to specify a key name - the key will be used to create an SSH connection to your instance. Our key pair was edureka_a, so we select that and click on next. SSH location is basically your IP address if you want to restrict access; I don't want to specify it, so we click on next, we don't have to enter anything on the next page, click next, confirm, and click on create. What is happening in the background is that it is picking up that JSON file and creating a stack for us: it will launch an EC2 instance, install Linux onto it, then install Apache and MySQL, and in the end do a PHP installation. Once it says the creation is complete, we will go and check that everything has been installed on our server by creating an SSH connection, so let's wait until this stack is complete. All right guys, as you can see in the events, the creation is now complete, so let's check whether our installation is correct. We go to the EC2 instances - this is our instance which has just been created; we can check that it was created on March 28th, and today is the 28th. Now let's connect to this instance, and for that we'll copy the IP address. This is the PuTTY software, for those of you who don't know how to connect to EC2: you paste your IP address here, and then you have this private key file with the .pem extension - but PuTTY needs a .ppk extension, so you have to convert the .pem file to .ppk, which can be done using the PuTTYgen software. So this is PuTTYgen; I'll try dragging the file here - okay, that doesn't work - so we click on load, go to downloads, show all files, select my .pem file, click open, click OK, and then click save private key. Let's name it edureka_a and click save; my file has been saved, so we close it and go back to PuTTY. We have entered the IP address, we click on SSH, click on Auth, click on browse, go to the .ppk file, click open, and click open here. Now you'll be connected through SSH to your EC2 instance - for any Linux installation on your AWS infrastructure the login will be ec2-user. All right, we're in; let's see if we can connect to the MySQL installation: mysql -h, since it is on localhost, -P with port number 3306, then the user we gave was edureka, and the password was this one. Okay guys, we are in - that means the edureka username which we specified in the JSON script was successfully created, so that worked well. We also specified that we need a database, so let's see if our database has been created as well - show databases - okay, there is a database called edureka, so the JSON script worked well. The thing to notice here is how granularly you can configure your JSON file: first it launched an EC2 instance, then it installed Linux, then it installed MySQL, configured its settings, and inside MySQL it gave you a database. So this is awesome, guys - it gives you control of the whole of AWS just through a JSON script, and this is the power of CloudFormation. Now, if you want this infrastructure, or whatever you have created right now, to be replicated again on some other instance, that can be done with a single click of a button.
Because if you were to install this LAMP stack by hand on a server, launching an EC2 instance with a Linux OS and then installing Apache, MySQL and PHP takes time: you have to open the console, open the terminal, enter the commands, and depending on your internet speed you install all those packages. So this is neat, it does everything for you automatically. So guys, this is what CloudFormation was all about. [Music] Snapshots and AMIs, let us see what those are. I guess most of you are aware of what an EC2 instance is; for those of you who are not, an EC2 instance is just like a brand new computer that you have just bought. On that computer you can choose any operating system you want, and once you have the operating system you can install any kind of software on it. So every time you launch a new EC2 instance you have to install all the required software on it, but there is a workaround. What if you want a specific configuration of EC2 instance, say I want five EC2 servers which are exactly like each other? One way would be to launch a new instance every time and install the required packages every time. The other way would be to configure your EC2 instance once, then create an image of that EC2 instance, and then use that image to deploy four more EC2 servers. This image is basically what an AMI is: an AMI, which stands for Amazon Machine Image, is a bootable image of your already existing EC2 instance. But before an AMI can be created there is a thing called a snapshot. Now what are snapshots? Snapshots are nothing but a copy of the data that you have on your hard drive. Basically, if you copy your C drive onto some external drive, that becomes a snapshot; but if you can boot from that external drive, so that your whole operating system comes up on some other machine, then it becomes an AMI. So this is the difference between the two: a snapshot is not a bootable copy, while an AMI is a bootable copy. I hope you got the difference between an AMI and a snapshot: you use an AMI to replicate an EC2 instance so that you don't have to do the configuration all over again. Now you may be wondering, we were supposed to talk about auto scaling and load balancing, why do we need AMIs? Be patient, everything will become clear during the session. Moving on guys, let's now discuss why we need auto scaling. The way I'll be going through this session is that I'll explain each topic and then show it to you in the AWS console. We just discussed what snapshots and AMIs are, so let me quickly show you how you can create an AMI of an already existing EC2 instance in the AWS console. Give me a second, I'll go to my browser and my AWS console. So guys, this is my AWS console, I hope it's visible to you. The first thing you'll do is go to the EC2 console, and in the EC2 console you will have all the servers that are running right now.
For the sake of simplicity I have already deployed two servers, server one and server two, and I have configured them both with Apache so that they can host a website. Let me quickly show you how the website actually looks: if I go to the IP address of server one, this is how the server one website looks, and similarly if I go to server two, this is how server two looks. So these are my two servers. Now what I want is to create an exact copy of these servers so that they can be replicated; when I say replicated, everything from the software to this website will be copied onto an image, and when I deploy that image it will come up inside one more EC2 server in which I don't have to do anything, the website will already be there and I just have to go to the IP address to see it. So now I'll be creating an AMI of both these servers. Let's create an AMI for server one first: I'll select server one, go to actions, go to image, click on create image, and all I have to do is give it an image name, so let me name it live-server-1. I click on create image and that is it, it takes in your request for creating an AMI and does it, pretty simple. Similarly I'll do it for server two: select server two, go to image, create image, and name it live-server-2. Once that is done you can see the images in the AMI tab: if you look over here in the images section and go to AMIs, you can see there are two images which have just been created and are in the pending state, live-server-1 and live-server-2. Using these images you can create the exact same server with just a click of a button, you don't have to configure much of anything. So this is how you create an AMI, pretty straightforward. Let's move on and discuss why we need auto scaling, and see how it is connected to AMIs. Say you have a website, and this website is hosted on some server; servers are nothing but machines, and every machine has its limitations. For example, say this machine has around 8 GB of RAM and an i5 processor, and say it can serve around 100 people: only 100 people can come to this website and easily navigate inside it, but if more than 100 people come in, the server becomes slow. So say there are 100 people accessing your website right now and they can access it easily, and then your website becomes a hit overnight; now a lot of people are trying to access your website, which makes your server overburdened. In this scenario you can do only one thing, which is deploy more servers and distribute the traffic equally among those servers so that the requests can be handled. But doing this by hand is a manual task, and manual is a big no in the IT world. So a service called auto scaling was invented, and what auto scaling does is analyse the kind of load which is coming in and deploy servers accordingly.
Say around 300 people are coming in and it sees that you need three servers to handle that kind of traffic: it will deploy them automatically. And that is where your AMI comes in, guys, because the new servers that you'll be launching have to be created from some template. The second server has to be an exact copy of server one, the third server as well has to be an exact copy of server one, and that is where the AMI comes in. So what basically happens is that in the auto scaling service you attach the AMI which you created, and using that AMI it deploys more servers. This is why the AMI is significant, this is how the AMI is related to auto scaling, and this is why we need auto scaling. Let's move ahead and give a definition of what auto scaling exactly is. Like I said, whenever your load increases and you have to scale automatically up or down, you use auto scaling. And it's not only about scaling up: say the load increases and three or four servers get deployed, and then the load decreases but those four servers are still sitting idle; that is not the case with auto scaling, you can also scale down as per your needs. You can configure pretty much everything you can imagine about scaling up and scaling down in the auto scaling properties. Now, one more thing that you need with auto scaling: I said that a number of servers get deployed during auto scaling, say four servers, and the traffic has to be distributed equally among them. That traffic distribution has nothing to do with auto scaling, it is done by a separate entity, and that is what we are going to discuss in the next section. But before that, let me show you how you can configure the auto scaling properties and attach the related AMI so that the right servers are launched. Let me go to my AWS console; here are the AMIs, and as you can see they have already been created, live-server-1 and live-server-2.
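Just as a side note, the image-creation step we did in the console can also be scripted; here is a minimal boto3 sketch, where the instance IDs are placeholders standing in for server one and server two.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder instance IDs for server one and server two.
servers = {"live-server-1": "i-0123456789abcdef0",
           "live-server-2": "i-0fedcba9876543210"}

image_ids = {}
for name, instance_id in servers.items():
    # Create an AMI from the running instance, same as Actions > Image > Create Image.
    response = ec2.create_image(InstanceId=instance_id, Name=name)
    image_ids[name] = response["ImageId"]
    print(name, "->", response["ImageId"])

# Wait until the images leave the 'pending' state and become available.
ec2.get_waiter("image_available").wait(ImageIds=list(image_ids.values()))
```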
Now what I'll be doing is creating auto scaling groups, that is, configuring the auto scaling properties so that these servers can be auto scaled as and when required. Before that I actually have to create a launch configuration. What is a launch configuration? If you look at the AMI, guys, it only specifies what data should be on your server; what it does not specify is what kind of machine should be launched every time there is a need. That is exactly what you do in the launch configuration: you have the data, and the information about the kind of machine you want to launch goes into the launch configuration. So I'll click on create launch configuration, and it gives me a wizard just like the one for launching an EC2 instance, where I had to choose an operating system. But I don't choose from here, I go to a separate tab called My AMIs, and there I select the newly created AMI; since we are creating a launch configuration for server one right now, I'll select live-server-1 and click on select. Now it asks me what configuration I want for the server; I need a t2.micro, because we are doing a demo today and don't need much computing power, so we'll select t2.micro and name our launch configuration, let's name it live-server-1 as well. An IAM role is not required, so I'll click on next. Now it asks me to add storage; 8 GB is enough for an Ubuntu machine, so I'll go to configure security groups, and in the security group I just have to add the HTTP rule, because I have to be able to connect to all the instances that I'm launching. I'll select the HTTP rule and click on review, and that's it, nothing else has to be configured here. It asks me to check everything I've just configured, everything seems fine, and I click on create launch configuration. Now it asks me for the key pair: every server that gets launched will be associated with the key pair you specify here. You can create a new one if you don't already have one; I already have a key pair, so let me choose it, that is hemant_2, acknowledge that I have access to it, and create the launch configuration. It just takes a second or two and we are done. So now we have created a launch configuration: we have specified what kind of machine we want and what data should go into that machine. Next we'll create the auto scaling group, in which we'll specify in which cases we want to auto scale. So let's create an auto scaling group now. It has automatically picked up the launch configuration we just created, live-server-1; let's name this group live-server-1-group. Then it asks for the initial size of the group, that is the minimum number of servers that you want, so let it be one. And remember guys, this is the most important part: when you are creating the auto scaling group, ensure that you are doing it in your default VPC, to be on the safe side, because there are a lot of settings you have to do if you create a VPC on your own, and that becomes a hassle.
If you accidentally delete your default VPC, which I once did, you have to contact the AWS support team and they'll help you out with it; they will basically create one for you, you cannot recreate the default VPC on your own. So always ensure that you are in your default VPC whenever you're creating your auto scaling group. Next I'll specify the subnets; you have to select a minimum of two subnets. I won't get into what subnets are, because that would become a three-hour session, so we'll click on configure scaling policies. Over here you can specify the properties I was talking about, that is, when you want your servers to scale. For example, you can specify the average CPU utilization. What do I mean by average CPU utilization? Say there are four servers running; it takes the average across all four servers, and if that average goes beyond whatever number you've specified here, say I specify 70 percent, it will launch one more server. Similarly I can configure another property which says that if the average goes below, say, 20 percent, scale down by one server; so if there are five servers and the CPU utilization drops below 20 percent, it will scale down by one and come down to four servers. You can also set how many seconds it should wait, in case the traffic is spiking up and down too frequently: you can set a time so that if the 20 percent mark has not been crossed for, say, five minutes, only then will it scale down a server, and if the 70 percent CPU utilization mark has been exceeded for five minutes, only then will it scale up; it will not scale up just because utilization touched 71 percent for a single second. So you can specify all of that over here. But since I cannot load-test my instance in this demo, I'll just keep the group at its initial size, which simply means that one instance has to be there in any case; even if I delete the instance, it will automatically be launched again. So we'll select keep this group at its initial size and go to configure notifications; I don't want to configure notifications or tags, so I click on review and create the auto scaling group. I have successfully created an auto scaling group for live-server-1. Similarly I'll do the same steps for server two. I click on create auto scaling group and need to select a launch configuration for server two, which I haven't made yet, so let's create that launch configuration first: go to My AMIs, select the server two image, and repeat the same steps as earlier. I'll give it the name live-server-2, click on add storage, configure the security group with the HTTP rule, review and create the launch configuration, select the key pair, acknowledge it, and create. Now I'll create the auto scaling group, live-server-2-group; the VPC, as I said, should be the default one, you should select a minimum of two subnets, keep the group at its initial size on the scaling policies page, skip notifications, review, and create the auto scaling group.
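To tie the pieces together, here is a minimal boto3 sketch of the same launch configuration plus auto scaling group setup, including a target-tracking variant of the 70 percent CPU rule discussed above; the AMI ID, subnet IDs and security group are placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Launch configuration: which AMI and what kind of machine to launch.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="live-server-1",
    ImageId="ami-0123456789abcdef0",          # placeholder AMI ID for live-server-1
    InstanceType="t2.micro",
    SecurityGroups=["sg-0123456789abcdef0"],  # group with the HTTP rule
    KeyName="hemant_2",
)

# Auto scaling group: where to launch and how many instances to keep.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="live-server-1-group",
    LaunchConfigurationName="live-server-1",
    MinSize=1,
    MaxSize=4,
    DesiredCapacity=1,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",  # two subnets in the default VPC
)

# Target-tracking policy: add or remove instances to keep average CPU near 70 percent.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="live-server-1-group",
    PolicyName="cpu-70-target",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 70.0,
    },
)
```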
So nothing much, guys, the same things that I did for server one I have now done for server two as well. Since I've created the auto scaling groups, if you go to your EC2 dashboard you will notice that two more servers are now being deployed, and you can identify them over here: these two servers are being initialized, and they have just been created by the auto scaling groups, because we specified that a minimum of one server should be there at all times. Now if you go to the IP address of this new server, you will see that it has the exact same settings as my original instance; this is my server one, so as you can see a new instance got created with the exact same settings, and I didn't have to do anything. And the same is the case with server two: if I go to the new server two instance and try to access it, I see the same things over there as well, so here it is, this is my server two. So my auto scaling groups are functioning fine; let's come back to the slides now. We are done with auto scaling, and like I said, you need to have an entity which will equally divide the traffic between the servers that you've just deployed. Now, I've created two auto scaling groups as of now; why I created a second one I will tell you in a bit, but for now understand that there is an auto scaling group, and inside that auto scaling group, say, there are five servers. If a customer who has logged onto your website comes in, how would his traffic be handled, how would he know which server to go to? That is where the third entity comes in, which is called the load balancer. What a load balancer does is this: your customer basically comes to your load balancer, and the load balancer decides, based on the usage of your servers, which server is more free, and then hands the connection to that server. So this is the role of a load balancer: like I said, a load balancer is a device that acts as a proxy and distributes network or application traffic across a number of servers. Now I've been saying repeatedly that your traffic is distributed equally among the servers, but in a few moments I'll tell you that there is one more way of distributing your traffic. Before that, let me again stress the point: this was your auto scaling group, the example that I took in the beginning, and there is a set of users trying to access your website who are being routed to these servers, and this routing is done by a load balancer. Now, like I said, the traffic can be distributed in two ways. The first way is to distribute it equally among the servers, so if there are five servers it will distribute it among the five servers. But say there are two kinds of servers now; your load balancer can identify what kind of request is being made by your user.
For example, in your application you have a part where you can process images, and you have your blogging section. If a user wants to process an image, you want that traffic to go to one set of servers, which are auto scaled in their own auto scaling group, and for the blogging section you have a different set of servers scaled by a different auto scaling group, but you want everything to be reachable from one single link. The way to do that is using an application load balancer. Let me just repeat what I said: this set of servers hosts your image processing part, and this set of servers hosts the blogs that you have on your application. A user comes in, logs on to your website, and goes to a URL which says, say, edureka.com/image; if it ends with /image, your load balancer sees that he is asking for image content, so he should go to the set of servers which serve the image purpose, and if he goes to edureka.com/blog, your load balancer identifies that this user is asking for blog content and routes him to that set of servers. All of that is done by your load balancer. If you compare it with the classic load balancer, the classic load balancer does not have that kind of intelligence: what it does is take all the incoming traffic and distribute it equally among the servers under it. With an application load balancer you can divide the traffic according to the needs of the customers, and once you have divided the traffic, the same thing happens within each group as with a classic load balancer: it equally distributes the traffic among the image servers, and similarly it equally distributes the traffic among the blog servers. So this is what an application load balancer is all about. The classic load balancer was invented earlier, and these days most people use the application load balancer instead, and that is what our demonstration is going to be about today. So enough of theory, let's move on to the hands-on demo part. Let me quickly show you what we are going to accomplish: a user will come in with the address of your load balancer, and if he asks for the image path, or server one in our case, he will go to the auto scaling group of server one; if he asks for server two, he will go to server two; but all of it happens through the same address, the address of your load balancer. Now, for those of you who didn't understand why we created two auto scaling groups: it is because we want the image processing servers to scale on their own, and at the same time we want the blog servers to scale on their own as well. That is the reason we created two auto scaling groups; server one you can imagine is for your image processing, and the auto scaling group for server two you can imagine is for your blogging section. Having said that, guys, let's move on to my AWS console and go to load balancers. What I'll be doing now is creating a new load balancer of the application type.
As you can see I have two options here: I can either create a classic load balancer or an application load balancer. I'll go with the application load balancer and name it live-load-balancer, and the scheme is internet facing. Since mine is a website that I want you guys to be able to access, it should be internet facing; otherwise, if you are working in a company and the company wants a load balancer for its internal websites, you can opt for an internal load balancer instead. But since we have a public website, we will use the internet facing load balancer. The listener is HTTP, which is fine, and for the availability zones, like I said, you have to select a minimum of two, and then you click on configure security settings. Next you specify the security group. It's better to create a new security group; remember guys, don't just use the default security group for your load balancer, it's good practice to always create a new security group so that you can customize the rules according to your needs. So I'll create a new security group, add the HTTP rule, and click on next. Now comes the part where you specify the targets. What are targets? In our application load balancer setup, each target group will point to an auto scaling group: target group one would be your auto scaling group one, target group two would be auto scaling group two, and you can have as many target groups as you want, but in this wizard you have to specify at least one. So we'll create a new target group, call it live-auto-1, the protocol is HTTP and the port is 80, I click on next, review everything, everything looks fine, and I create the load balancer. Now, we have not done all the settings yet, guys, I'll show you how to do the rest; for now we have just created a plain load balancer. So I've created a load balancer which points to target group one, and that target group is not pointing to my auto scaling group as of now; we will do that next. I just created a target group called live-auto-1, and I'll create one more target group called live-auto-2 for my second auto scaling group. I'll create it, and done: I now have two target groups, live-auto-1 and live-auto-2.
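For completeness, here is a minimal boto3 sketch of creating the same application load balancer, listener and two target groups; the subnet, security group and VPC IDs are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Internet-facing application load balancer in two subnets (two availability zones).
lb = elbv2.create_load_balancer(
    Name="live-load-balancer",
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],
    SecurityGroups=["sg-0123456789abcdef0"],  # new group with the HTTP rule
    Scheme="internet-facing",
    Type="application",
)
lb_arn = lb["LoadBalancers"][0]["LoadBalancerArn"]

# One target group per auto scaling group.
tg_arns = {}
for name in ("live-auto-1", "live-auto-2"):
    tg = elbv2.create_target_group(
        Name=name, Protocol="HTTP", Port=80,
        VpcId="vpc-0123456789abcdef0", TargetType="instance",
    )
    tg_arns[name] = tg["TargetGroups"][0]["TargetGroupArn"]

# HTTP listener whose default action forwards everything to live-auto-1.
listener = elbv2.create_listener(
    LoadBalancerArn=lb_arn, Protocol="HTTP", Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arns["live-auto-1"]}],
)
listener_arn = listener["Listeners"][0]["ListenerArn"]
```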
Now these two target groups have to point to my auto scaling groups respectively, and the way to do that is not from here: you have to go to your auto scaling groups and select the groups you just launched, live-server-1-group and live-server-2-group. I'll go to live-server-1-group, go to details, and click on edit, and inside edit you have this option for target groups. You don't have to specify anything in the load balancers field, that option is only for the classic load balancer; since we are using an application load balancer, we specify everything in the target groups field. So for live-server-1-group I'll specify live-auto-1, the target group I just created, and live-auto-1 is connected to the load balancer. So basically your load balancer points to your target group, your target group now points to your auto scaling group, and the auto scaling group points to your instances; this is how the whole chain becomes visible. I'll save it. The second target group I'll specify in the second auto scaling group, so live-server-2-group gets live-auto-2; I'll save that as well and quickly verify that I've done everything right: live-server-1-group has live-auto-1, fine, live-server-2-group has live-auto-2, fine. So my load balancer can now see the auto scaling groups that I've just configured. Let me quickly go back to my load balancer, because now comes the part where I specify when to go to auto scaling group one and when to go to auto scaling group two; like I said, we'll decide that based on the kind of request the user has made. The way to do that is by first selecting your load balancer and going to listeners. Once you go to listeners you reach this page, and here you have to click on view or edit rules; once you do that you reach a page which is structured like an if-else. You can see that there is a default rule as of now, which says that any request that is made goes to live-auto-1, meaning every request is pointed straight to auto scaling group one. Now we'll specify that if the user is asking for server two, he should be pointed to server two. The way we do it is this: we click on add rules, then insert rule, and now I specify the condition. You have two options here: the routing can be based on the host, that is the address of your website, or it can be based on the path. What is the difference? Say edureka.com is the host name; if I type resources.edureka.com it still points to my domain, but if I configure resources.edureka.com as a host rule here and specify that it has to go to server two, then it will go to server two.
Otherwise, if you type resources.edureka.com without having configured any rule for it, nothing special will happen; that is host-based routing. With paths the difference is that you write something like edureka.co/blog, and /blog becomes the path: with a host you get a different hostname such as resources.edureka.com, but with a path you are basically putting a slash and going into a particular folder, and you can specify that path here. By the way, instead of keeping image processing and blogs on two different servers, the other way you could have done it is to configure both inside one server under its root directory, say a server1 folder for your image processing and a server2 folder for your blogs; but I don't want that, because the more distributed your system is, the more reliable it becomes, and that is why we have two different servers for two different sets of things. So the way you route traffic to both servers is by typing in the path: if it has to go to server one, I type server1/*, where the star means anything after server1 is accepted, and the request will be forwarded to live-auto-1; so if server1 appears in the path, it goes to live-auto-1, and I save this rule. Similarly I say that if the path contains server2 and anything after that, it has to go to live-auto-2; I save it, and that is it, my load balancer has saved its settings. Let's hope for the best and try it out. This is the link, the address of the load balancer; if you just open this link it will by default go to server one, and you can see it is going to server one right now. But if I add /server1 it goes to my server one, and if I add /server2 it goes to my second server. Now you might be wondering whether I simply have different directories on the same server, so let me clear that doubt: I'll go to my EC2 dashboard, go to server one, and show you what happens if I type /server2 there. This is the instance's IP address; if I open it I'm on server one, and if I add /server2 it gives me a 404, because there is no folder called server2 on that machine. Same story here: if I go to this IP address I see server one, and if I don't specify anything after the address it still goes to the same server, but if I add /server2 on the instance IP it cannot serve it, because that is the instance directly, not the load balancer. Over here, on the load balancer address, if I specify /server2 it takes me to the second server, and that is exactly what I need. So with one address you are actually pointing to two servers which solve two different problems. The real-life use case, like I told you, is different kinds of tasks: say you have a blogging section on your website and an image processing section, and you want two different sets of servers to host those two different services; you can do that easily using a load balancer. All right guys, so with this I conclude my session for today.
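Here is a minimal boto3 sketch of the two wiring steps we just did in the console: attaching each target group to its auto scaling group, and adding the path-based listener rules. The ARNs are placeholders carried over from the earlier sketches.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")
elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Placeholder ARNs for the target groups and the HTTP listener created earlier.
tg1_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/live-auto-1/0123456789abcdef"
tg2_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:targetgroup/live-auto-2/fedcba9876543210"
listener_arn = "arn:aws:elasticloadbalancing:us-east-1:123456789012:listener/app/live-load-balancer/abc/def"

# Point each auto scaling group at its target group (the "edit > target groups" step).
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName="live-server-1-group", TargetGroupARNs=[tg1_arn])
autoscaling.attach_load_balancer_target_groups(
    AutoScalingGroupName="live-server-2-group", TargetGroupARNs=[tg2_arn])

# Path-based listener rules: /server1/* -> live-auto-1, /server2/* -> live-auto-2.
rules = [("/server1/*", tg1_arn), ("/server2/*", tg2_arn)]
for priority, (pattern, tg_arn) in enumerate(rules, start=1):
    elbv2.create_rule(
        ListenerArn=listener_arn,
        Priority=priority,
        Conditions=[{"Field": "path-pattern", "Values": [pattern]}],
        Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
    )
```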
Let me quickly go back to my slides. So we created a load balancer which intelligently identifies what the user is asking for: if he's asking for images he goes to server one, if he's asking for blogs he goes to server two; in place of image and blog I just used server1 and server2 in the demo so that it is easy for you to follow. One more point that I missed: whenever you specify a path, that path also has to exist on the server. For example, I said load-balancer-address/server1, so the request goes to server one, but then on that machine it looks for the server1 directory; similarly, if I specify /server2, it goes to the second server, but it will look for the server2 directory there, so keep that in mind. [Music] Why cloud security is important: let's take an example here and talk about three very popular companies, LinkedIn, Sony and iCloud. LinkedIn in 2012 experienced a cyber attack in which 6.5 million usernames and passwords were made public by the hackers. After that, Sony experienced one of the most aggressive cyber attacks in history, in which highly confidential files such as their financials and upcoming movie projects were made public by the hackers, and this had a huge impact on Sony's business. iCloud, which is a service from Apple, also experienced a cyber attack in which personal, private photos of users were made public by the hackers. So in all three companies you can see there was a breach in security which needed to be addressed; cloud security has to be there in the cloud computing world. Since we've now established that cloud security is really important, let's understand what cloud security actually is. Cloud security is the use of the latest technologies and programming techniques to secure your application hosted on the cloud, the data hosted on the cloud, and the infrastructure associated with cloud computing. The other part of it is that whatever security techniques or technologies you are using should be updated as frequently as possible, because new threats come up every day and new workarounds to problems appear every day, so you should be able to tackle them, and hence you should upgrade your security as frequently as possible. Moving ahead, let's understand how we can choose between a public, a private and a hybrid cloud. We have understood what cloud security is; now, talking in terms of security, if you were to choose between these three infrastructures, what should be the basis for judging which cloud to pick? You would opt for a private cloud when you have highly confidential files that you want to store on the cloud platform. There are two ways of building a private infrastructure: you can either set up private servers on your own premises, or you can go for dedicated servers from a cloud provider; both count as private infrastructure. Then we have the public cloud infrastructure. On the public cloud you would host things that are public facing, say a products page or an application which can be downloaded by the public, and that can be hosted on the public cloud.
There's nothing there that has to be kept secret, so things like websites and data that is not confidential, that you don't mind the public seeing, can be hosted on your public cloud. The third infrastructure is the most important one, the hybrid infrastructure, and this is the setup that most companies go for. What if you have a use case where you have highly confidential files and a public website as well? In that case you might go for a hybrid infrastructure, which is the best of both worlds: you get the security and comfort of the private infrastructure and the cost effectiveness of the public cloud. So if you want your highly confidential files stored on your own premises and your website hosted on the public cloud, that would be a hybrid cloud infrastructure. Basically, you would choose a private cloud if you have highly confidential files, a public cloud if you have files that are not that important or that you don't mind people seeing, and a hybrid cloud infrastructure if you want the best of both worlds. So this addresses how we can choose between a public, private and hybrid cloud. Moving on, let's understand whether cloud security is really a concern. We've discussed why cloud security is important and what cloud security is, but if cloud security is really important and no one is actually thinking about it, there is no point, right? So let's see whether companies that are making a move to the cloud actually think about cloud security. Here's a Gartner research survey of companies who are planning a move to the cloud or who have not moved to the cloud yet, asking what their concerns are and why they haven't done so, and the topmost reason listed by these companies was security and privacy concerns. So as you can see, the companies who want to make a move to the cloud are worried about security on the cloud infrastructure, and this makes it clear that cloud security is actually very important. Now that we've understood that cloud security is very important and that companies are genuinely looking for it and following cloud security practices, the next question is how secure you should make your application, that is, to what extent you should secure it. Let's start with this line: it is said that cloud security is a mixture of art and science. Why? It is a science because obviously you have to come up with new technologies and new techniques to protect your data and your application, so you have to be prepared on the technical side. But it is an art as well, because you should design your techniques and technologies in such a way that the user experience is not hindered. Let me give you an example: suppose you build an application, and to make it secure you decide that every three or four minutes you will ask the user for a password. From the security point of view it seems fine, but from the user's point of view it is actually ruining the experience. So you should have that artist in you, to understand where to stop and how far to extend your security techniques, and you should also be creative about which security techniques can be implemented without hindering the user experience.
For example, there is two-step authentication, which you get when you're logging into your Gmail account: knowing your password is not enough, you also need an OTP to log in. This might hinder the user experience to some extent, but it makes your application more secure as well, so you need a balance between the science and the art that you apply to cloud security. Moving on, let's discuss the process of troubleshooting a threat in the cloud, and let's take an example. Say you are using Facebook and you get a random message from some person with some kind of story, the way you usually do, saying such and such thing happened, click here to know more; and by mistake you actually click on that link, not knowing it is spam. What happens next is that all your friends on Facebook chat get that same message, and they get furious about why this kind of spam is in their inbox; you get scared and angry, and you take your frustration to Facebook. You contact Facebook, and you get to know that they already know about the problem, they are already working on it and are close to a solution. Now how did they come to know that there is this kind of problem that needs to be solved? Basically, threat identification in the cloud is done in three stages. The first stage is monitoring data: you have AI algorithms which know what normal system behaviour looks like, any deviation from that normal behaviour raises an alarm, and this alarm is then reviewed by the cloud security experts; if they see a threat, they go to the next stage, which is gaining visibility. Here you have to understand precisely what caused the problem, or who caused it, so your cloud security experts use tools which give them the ability to look into the data and pinpoint the statement or event that caused the problem. Once they have established what the problem is, stage three is managing access: it gives you a list of users who have access, in case you are tracking down the who, lets you pinpoint the user who did it, and that user can then be removed from the system using the managing access stage. So these are the stages involved in cloud security. Now, if you were to implement these stages in AWS, how would you do it? The first stage was monitoring data, and for that you have a service in AWS called Amazon CloudWatch. What is CloudWatch? It is basically a cloud monitoring tool: you can monitor your EC2 instances and your other AWS resources in CloudWatch, for example the network in and network out of a resource, and you can also monitor the traffic coming onto your instance. You can also create alarms in CloudWatch.
If there is a deviation from normal system behaviour, like I said, CloudWatch will create an alarm for you, escalate the event and alert you about it, so that you can go and see what the problem actually is. So CloudWatch is the monitoring tool; let me give you a quick demo of how the CloudWatch dashboard looks. This is the AWS console; to access CloudWatch you go under the management tools and click on CloudWatch. Over here you can monitor almost anything: if you go to metrics you can see there are metric groups here, you can monitor your EBS, your EC2 and your S3. Say I want to monitor my EC2; I have two instances running, one is the batch instance and the other is the WPS instance, and these are all the metrics which are available. I can check metrics for my WPS instance such as network in or disk read ops; let me select the network out metric, and there is a graph over here, and as you can see, between six o'clock and six thirty I experienced a surge in my traffic. So this is how you monitor your instances in CloudWatch, and you have all these default metrics to check how your instance is doing. You can also set alarms here: if you go to alarms and click on create alarm, I'll go to EC2 and select a metric from over here, say disk read bytes. Once I do that it asks me whether there is a time range over which I want to monitor the instance; let's not set any time range and click on next. When you click next you are prompted with this page, where you can set the alarm name and the alarm description, and then you specify at what read or write value you should get the alarm. After that you go to actions: once an alarm is triggered, where should it go, who should it go to? You set that here. Whenever the state is alarm, you can send a notification to your SNS topic. Now what is SNS? It is basically a notification service; we'll be discussing SNS in the next session, so don't worry if you don't understand it fully. For now, understand that in SNS you configure what to do with a notification and whom to send it to. So if there is a topic called notify-me in SNS, and in notify-me I have configured my email address, then whenever a notification comes to that topic it sends me an email with that message; so I'll get an email saying that such and such thing has happened in CloudWatch, and then I do whatever is required. The other thing you can do is, on the same SNS topic, configure a Lambda function to be executed. What would that Lambda function do? Say I configure the metric to be CPU usage and I say that whenever the 40 percent mark is crossed, go to the alarm state and notify the notify-me SNS topic; on the notify-me topic I can configure a Lambda function that clears all the background processes on that EC2 instance, and if I do that, the CPU usage will automatically come down. So this becomes a use case where you want to launch a Lambda function whenever your CPU usage goes beyond 40 percent, and this is the way you would do it. So this is about CloudWatch; there's nothing much to it, you create alarms and you monitor metrics.
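As a rough sketch of that alarm setup in code, here is a boto3 example that creates a CPU alarm which notifies an SNS topic when average CPU goes above 40 percent; the instance ID and topic ARN are placeholders, and the Lambda side would be a subscription on that topic.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm: average CPU of one instance above 40% for 5 minutes -> notify the SNS topic.
cloudwatch.put_metric_alarm(
    AlarmName="cpu-above-40",
    AlarmDescription="CPU utilization crossed 40 percent",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=1,
    Threshold=40.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:notify-me"],
)
```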
Moving ahead, let's move on to the second process, which is gaining visibility. For gaining visibility you basically have to track whatever activity is happening in your AWS account, and there is a service in AWS for that called CloudTrail. CloudTrail is basically a logging service wherein a log is kept of each and every API call that is made. How is that useful? Let's talk about the security perspective: a hacker got access to your system, and you need to know how he got in. If you have a time frame, say he got access or you started facing the problem around four o'clock, you can set the time range between two o'clock and the current time, examine everything that has been going on, and hence identify the place where the hacker got access to your system. This is the part where you get to know who that person actually is, or where you isolate what caused the problem. If you take a cue from the Facebook example, you can actually pinpoint who is responsible for those spam messages, because you have all the logs and you can see the origin of those messages. Once you've done that, the next step is managing that person out of the system, or wiping him out of it, but before that let me show you how CloudTrail actually looks. Let's go back to the AWS console; again under the management tools you have the CloudTrail service, you click on it and you reach this dashboard. Here you have the logs; you can set a time range, but I'm not doing that, I'm just showing you the logs. Even my logging into the console is shown, with the time and the date; every event is logged, guys, every event that happens in your AWS account is being recorded. For example, take S3: somebody deleted a bucket, and that has again been logged, and it happened at 7:38 PM on the 28th of March 2017.
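For reference, recent management events like that bucket deletion can also be pulled from CloudTrail with boto3; this is a minimal sketch, and the event name filter is just the S3 DeleteBucket call used as an example.

```python
from datetime import datetime, timedelta
import boto3

cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

# Look up DeleteBucket calls from the last few hours of the event history.
response = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "DeleteBucket"}],
    StartTime=datetime.utcnow() - timedelta(hours=6),
    EndTime=datetime.utcnow(),
)

for event in response["Events"]:
    # Each record tells you who made the call and when.
    print(event["EventTime"], event.get("Username"), event["EventName"])
```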
So any kind of activity which happens in AWS is logged over here. Okay guys, that was CloudTrail; let's go back to our slides and move ahead. Like I said, you have now identified who is responsible for your problem, so the next step is managing access: you should be able to remove that person from the system. Most of the time, if you take our Facebook use case, there was a user who triggered that problem, so there are two things you have to do: first remove the spam from your system, since you now know where it originated, and then prevent that user from doing it again, since from the source you know who that user is. Using managing access you get the ability to do all of that, and in AWS the service for this is called AWS IAM. What AWS IAM does is authenticate and authorize access to particular services. You are the root user, so you can do anything, but what if you have employees, and obviously not all employees should have all the rights? What if you want to give granular permissions to your employees, for example if one specific employee should be able to track down this problem? You can give that particular person those rights using IAM. So IAM is used to provide granular permissions, it secures access to your EC2 instances by giving you a private key file, and it is also free to use. Let's see how IAM is used; let me go back to my AWS console. I'll go to the security, identity and compliance section and click on IAM, and over here I'll click on roles, where I can see all the roles in my IAM. Since I would have identified which role is creating the problem, I go to that role; for example, say I have a problem with the AWS Elastic Beanstalk EC2 role, I click on it, and once I do I get this screen where I can see the permissions, the trust relationships, the access advisor and the revoke sessions tab. I'll go to revoke sessions and click on revoke active sessions, and with that I'm able to cut that user off from accessing my AWS resources. So this is how you use IAM, guys. One more thing you can do over here is go back to the dashboard and go to roles: like I told you, you can create a role for a person who will only be able to access restricted things in your AWS account. Let me quickly show you how you can do that. You click on create new role and give the role some name, let's call it hello, click on next step, and go to role for identity provider access. Now you can select how that user of yours will be accessing your AWS account: allow users from Amazon Cognito, or from Amazon, Facebook or Google as identity providers. Let's select Facebook and give it some random application ID; I'm not actually going to create this role, I'm just showing you how to do it. Basically you get the application ID from Facebook, over at graph.facebook.com, and you enter that application ID here.
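The policy document that the next step of the wizard shows is just a JSON trust policy; as a rough, hedged sketch, creating a similar role for Facebook web identity federation with boto3 might look like this, with the application ID and user ID as placeholders.

```python
import json
import boto3

iam = boto3.client("iam")

# Trust policy: allow identities authenticated by Facebook (a specific app and user)
# to assume this role via web identity federation.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Federated": "graph.facebook.com"},
        "Action": "sts:AssumeRoleWithWebIdentity",
        "Condition": {"StringEquals": {
            "graph.facebook.com:app_id": "000000000000000",  # placeholder app ID
            "graph.facebook.com:id": "111111111111111",      # placeholder Facebook user ID
        }},
    }],
}

role = iam.create_role(
    RoleName="hello",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach a managed policy so the role can, for example, only read S3.
iam.attach_role_policy(
    RoleName="hello",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess",
)
```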
Click on next step and you'll get the policy document: whatever you configured in the text boxes has been turned into a JSON file, so you don't have to edit anything here, click on next step again. Now you have to attach a policy. What is a policy? A policy is basically the set of permissions you want to grant that user: if you want to grant the Lambda execution role you can do that, you can grant the S3 execution role, and you can also create your own policies in IAM. I'm not going into the details of this because all of it is covered in the IAM session, but since I told you this can be done, let me show you how: you select whatever policy you want, click on next step, review it and create the role, and that's it. So a policy is basically a permission that you want the role to have; if the role only gets permission to view your instances, that person will only be able to view your instances. One more point I want to make is that you don't have to hand your security credentials to that person at all, because you are specifying that the user will authenticate through Facebook. You also have a field here where you can specify which specific user can access it: I can type in a name, and if I'm logging in through Facebook and my name is Hemant Sharma, only then will I be able to connect to this AWS account. You can also use the ID parameter instead, where you add the Facebook ID of the person you want to be able to access this AWS account; so you would just punch in the Facebook ID here, click on next step, and then that person will be able to access this AWS account, once I create the role with the policies that I attach to it. So this is how you use IAM; let's go back to our session. So these are the three services, guys: you have IAM, you have CloudTrail and you have CloudWatch, using which you can control and see what is going on in your AWS account. [Music] Why do we need access management? To discuss this topic, let's understand it using an example. Say you have a company in which you have a server, and this server has everything in it, it has all the modules, and it gives different users permission to use different parts of it. Now in your company, first of all, you should have an administrator who has all the rights to access the server; nobody in today's IT world works on the root account, there has to be an administrator account, so first we create an administrator account with all the permissions. Now say a UI developer joins your company tomorrow. A UI developer will only work on the graphical tools, so he should only be allowed the graphical tools and not the other tools; maybe he is not given internet access, maybe he is not given access to certain folders or drives, anything like that. All of that can be defined on the server by the administrator, and specific rights will be given to the UI developer. Similarly, if a business analyst comes in after that, he should only be able to access the analytics module.
He should not be able to get into the UI development part or see the other aspects of what is there on your server. So each and every user, each and every role, will have specific rights assigned to them, and this is done through policies, which in turn are given by administrators. This is what access management is: giving each role the specific rights that it deserves, and this is what we are going to accomplish today in AWS. So this is why we need access management; let's go ahead and understand how we can accomplish it in AWS. To accomplish this in AWS you have a service called IAM, which uses this concept of access management and lets you apply it to the users who are going to use your account. So what is IAM? IAM is basically a service from AWS using which you can give permissions to different users who are using the same AWS account that you have created. In any company you don't need two or three AWS accounts; you can have one AWS account on which a number of people work. For example, you can define that a developer working on your AWS account should only be able to work with the EC2 instances; you decide that, and you can define a policy saying that developers will only be able to access the EC2 instances on the AWS account. Similarly, if a database administrator comes in, he should only be able to access the DB instances on your AWS account, and so on. All of that is possible using IAM. But IAM is not only about creating users and creating policies, there is more to it, and hence we'll now discuss the different components of IAM. There are basically four components in the IAM service: first users, then groups, then roles, and then policies. The way we are going to go about it is that first I'll explain each component of IAM, and then we'll see how to create them in the AWS console. So let's start with users. The very first time you create an AWS account, that is basically the root account, and there is no user inside it. Why do we need a user? You need a user because you are supposed to give permissions to someone; say I want to give administrator rights to someone, I first have to have an entity to which I can assign permissions. These entities are called users in AWS: any person who wants to access your AWS account has to be added as a user in IAM, and then you can attach different policies to that user. So this is what a user is all about.
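As a quick illustration of that idea before we go into the console, here is a minimal boto3 sketch that adds a user, gives it console and programmatic access, and attaches a managed policy; the user name and password are placeholders, and the console walkthrough that follows does the same thing step by step.

```python
import boto3

iam = boto3.client("iam")

# Create the IAM user (the entity we can attach permissions to).
iam.create_user(UserName="hemant")

# Console access: a login profile with a custom password.
iam.create_login_profile(
    UserName="hemant",
    Password="Example-Passw0rd!",  # placeholder; choose a strong password
    PasswordResetRequired=False,
)

# Programmatic access: an access key ID and secret access key.
keys = iam.create_access_key(UserName="hemant")["AccessKey"]
print("Access key ID:", keys["AccessKeyId"])
print("Secret access key:", keys["SecretAccessKey"])  # shown only once, store it safely

# Attach the AdministratorAccess managed policy to this user.
iam.attach_user_policy(
    UserName="hemant",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)
```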
You should never work in your root account; you should always work in an administrator account. The root account should only be used in an emergency, say when you've been locked out of your administrator account. So the first thing to do after entering the root account is to go to IAM. On the IAM dashboard there's an option called Users; click Users and then Add user. It asks for a username, so I'll use my own name, Hemant. Next, what kind of access should this user have? There are two kinds: AWS Management Console access and programmatic access. There are two ways to reach AWS resources: through the APIs, that is from your code, say an application you've built that talks to AWS, which is programmatic access; and through the AWS website, where you deploy resources and create or remove policies, which is Management Console access. For my user I'll enable both. Note that when you enable programmatic access you also get an access key and a secret key; what those are for I'll explain in a bit.
Next we choose the password: auto-generated or custom. I'll choose a custom password since this account is for myself, and I don't want to reset it on first login. Click Next: Permissions. There are no groups yet and no existing user to copy from, so I'll attach existing policies directly, and since I want administrator access I'll select AdministratorAccess, the first policy listed, and click next. Review the settings and click Create user, and the new user is created in the AWS account. As you can see, I've got an access key ID and a secret access key. The secret access key is shown only once, right when the user is created, so it's essential to store both keys when you see this page; why we're copying them you'll see later in the session. I'll paste them into Notepad. You might be thinking I've just exposed my secret keys, but I'll delete this user afterwards, so don't worry. Now I'll log out of my root account and log in with the user I just created.
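For readers who prefer scripting this instead of clicking through the console, the same user setup can be sketched with boto3, the AWS SDK for Python. This is not part of the video; the username and password below are placeholders, and the calls assume admin credentials are already configured locally:

```python
import boto3

iam = boto3.client("iam")

# The console's "Add user" step
iam.create_user(UserName="hemant")

# Console access: a login profile with a custom password, no forced reset
iam.create_login_profile(
    UserName="hemant",
    Password="SomeStrongPassword#1",   # placeholder
    PasswordResetRequired=False,
)

# Programmatic access: the access key / secret key pair (shown only once, so save it)
keys = iam.create_access_key(UserName="hemant")["AccessKey"]
print(keys["AccessKeyId"], keys["SecretAccessKey"])

# Attach the AdministratorAccess managed policy
iam.attach_user_policy(
    UserName="hemant",
    PolicyArn="arn:aws:iam::aws:policy/AdministratorAccess",
)
```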
One more thing to be careful about: you will not log in through the same sign-in page you just saw. IAM users log in through a different URL, which is shown here, so whenever you create a user and want them to log in to your account, you have to give them this link. Let me copy the link and log out of the root account. I open the link, the account name is pre-filled from the URL, I enter the username, hemant, and the password, and sign in. I'm now signed in as the user I just created on the root account, so I no longer need the root account; I can lock it away for emergencies and use my administrator account from now on. It can do everything the root account can, but if you ever get locked out of the administrator account, that's when you need root access again. If I go to IAM now, it shows that one user has been created.
Back to the slides for the next component: groups. Users can be combined into groups. Why do we need them? Say you have five users who all need identical access, for example a development team with some common permissions. One way would be to go to each user and attach the policy individually; the smarter way is to put them in one group and attach the policy once to the group, and it applies to all five users. That's why groups matter. To create one, click Groups, then Create New Group; I'll name it "live demo" and click next step. It asks which policy to attach to this group; say I only want this group to use the S3 service, so I select AmazonS3FullAccess and click next. That policy means the group can only use S3 in the Management Console and nothing else. Click Create Group, and any user I now place inside this group inherits that permission, so I don't have to configure a policy per user. Next I'll create a new user called "test"; this time I'm not enabling programmatic access, I'm giving this user only the Management Console access.
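The group workflow can be sketched the same way. Again this is only an illustrative boto3 sketch, not from the video; IAM group names cannot contain spaces, so the "live demo" group becomes live-demo here and the password is a placeholder:

```python
import boto3

iam = boto3.client("iam")

# Create the group and give it S3 full access
iam.create_group(GroupName="live-demo")
iam.attach_group_policy(
    GroupName="live-demo",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
)

# Create the console-only "test" user and drop it into the group;
# every user in the group inherits the group's policies
iam.create_user(UserName="test")
iam.create_login_profile(
    UserName="test",
    Password="SomeStrongPassword#2",   # placeholder
    PasswordResetRequired=False,
)
iam.add_user_to_group(GroupName="live-demo", UserName="test")
```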
this and I'll give it a custom password and then I don't want him to reset his password I'll click on next right and now it is asking me whether I want to include it inside a group so yes I do I want to include it inside the group that I've just created and I'll click on next and review all the settings that I've just did and click on create user all right so the test account has just been created now as you can see guys in the case of my account which I created I got an access key and a secret access key right so in this case I'm not getting any because I didn't select the programmatic access only when you select the programmatic access it will give you the key so that your application can actually interact with the services that you have launched all right so I have I've created a test user successfully let's log into this test user uh so so I will type in the URL that has been given to me right now when I reach this page I'll enter the username as test and the password as what I have entered right and I'll click on sign in now with this uh you can see that I will now be able to see the Management console the Management console will exactly look like how it was used to see how I used to see it in my root account or my administrator account but when you will try to access say a service which you have not been assigned to say for example I only have access to S3 right now because I've deployed it in the group where it has only the access to S3 if I try to go inside ec2 let's see what will happen foreign so it says you're not authorized to describe running instances as a matter of fact I'm not authorized to see anything on my ac2 page all right so that is because I cannot I don't have access to the ec2 dashboard but let's see if I can see the S3 dashboard so I'll quickly go to S3 and if I have the S3 access I'll be able to see all the buckets which are there in my S3 and yes I do so let me go inside a bucket and delete something so that all right so let me delete an object from this particular bucket so yes secondly did all right so let me check if what what happens if I delete or I I detach this particular policy from that group all right let's see what happens so I'll go to IAM and I'll go to groups I'll go to this particular group and I can see that the policy is listed over here what I'll do is I'll click on detach policy and let's see what happens now right so I'll go to Management console so on if now I try to exercise three it will show me that access is denied right so I no longer have access to the S3 service on my AWS console so this is how you can control access to different users you can revoke access you can include access right you can do all of that in iam all right so let us come back to a slide to discuss our next component all right so we've discussed what are users we've discussed what are groups now let's come back come down to rules all right so rules are similar to users but roles are actually assigned to Applications all right so users are actually assigned to people right so whenever you have a developer in the company you will assign them the developer roles right but uh when you have rules rules are basically assigned to Applications how let me explain you say you create an issue instance and inside that in zero instance you are hosting a web application now that web application has been has been designed in such a way that it has to interact with your S3 services for example that we'll be doing today we'll be I'll be showing you the demonstration today for this 
So that application has to interact with the S3 service, and to let it do that I have to give it permissions, which is where roles come in. I create a role that says "this role can access the S3 service" and attach that role to the EC2 instance hosting my application, and then the application can talk to S3. It may sound complicated, but it's very easy to implement. In the Management Console I go to the IAM dashboard and open Roles, then create a new role. Roles can be created for any AWS service listed here; I'll create an EC2 role type, so I select Amazon EC2, and then choose what the role may access, in this case AmazonS3FullAccess, and click next step. It asks for a role name; I'll call it edureka_one and click Create role. The role now exists, but mind you, it hasn't been attached to any EC2 instance yet. So I go to the EC2 console, where I already have an instance called hemant_one; it's stopped, so I start it, and then under Actions, Instance Settings, there's "Attach/Replace IAM role". I open the drop-down, select the role I just created, edureka_one, and click Apply. With that, my EC2 instance is configured to interact with the S3 service in this account, and any application I deploy on this instance can talk to S3 without me specifying any access key or secret key. If that's still confusing, be patient; we're getting to exactly where those keys are used and where they aren't. So that's what roles are about: roles are for AWS resources, users are for people; you attach policies to both, and the policy defines what that person or resource is allowed to do.
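For reference, here is roughly what that role-plus-EC2 wiring might look like in boto3. This is a sketch, not the video's console steps; on the API side a role is attached to an instance through an instance profile, and the instance ID below is a placeholder:

```python
import json
import boto3

iam = boto3.client("iam")
ec2 = boto3.client("ec2")

# Trust policy: only the EC2 service may assume this role
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(RoleName="edureka_one", AssumeRolePolicyDocument=json.dumps(trust))
iam.attach_role_policy(
    RoleName="edureka_one",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
)

# EC2 attaches roles through an instance profile
iam.create_instance_profile(InstanceProfileName="edureka_one")
iam.add_role_to_instance_profile(InstanceProfileName="edureka_one",
                                 RoleName="edureka_one")

# Equivalent of "Instance Settings -> Attach/Replace IAM role" in the console
ec2.associate_iam_instance_profile(
    IamInstanceProfile={"Name": "edureka_one"},
    InstanceId="i-0123456789abcdef0",   # placeholder instance ID
)
```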
With roles covered, let's discuss policies. If you think about it, we've been dealing with policies all along: a policy is nothing but a set of permissions that you attach to a user, group or role. EC2 access, for example, is a policy I can attach to a user or a role. Let's see how to create one. In the Management Console I go to IAM; you can either use the existing managed policies or create your own. For my test account I'll open the test user, add permissions, and attach existing policies directly, and here you also see a Create policy tab. If the policy you need isn't among the defaults, creating one is easy: click Create policy and you get three options. You can copy an AWS managed policy, write your own policy as a JSON document, or, if you're not comfortable with the JSON, use the policy generator. With the policy generator you just choose the effect, Allow or Deny. Say I want to allow the EC2 service for this test account: I select EC2, choose the actions, say all actions, and then the resource; the ARN identifies a particular resource, and since I don't want to limit it to one resource I put a star, meaning every EC2 resource, and click next step. It automatically builds the policy document, and all that's left is to click Create policy. There were 18 customer-managed policies and now there are 19, so I could select it from the list.
Instead, on my test user under Permissions I'll add an inline policy: click Select, choose EC2, all actions, resource star, add the statement, next step and Apply policy. The policy now applies to the test user, so the user that previously couldn't touch EC2 can now open the EC2 console with no access-denied message, and sees all instances as if it were the root account, but only for EC2; S3 still shows access denied because that service hasn't been granted. One more question: what if you attach both an Allow and a Deny for the same service? Since I've allowed EC2, let me also create a Deny policy on the same user: effect Deny, service EC2, all actions, resource star, add the statement and apply it. Now EC2 is both allowed and denied for this user. What happens? If I open EC2 it says I'm not authorized anymore, because when Allow and Deny both apply, AWS always takes the more restrictive permission: an explicit Deny always wins over an Allow, even when both are attached to the same user.
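A minimal sketch of such a conflicting policy as an inline JSON document, assuming the same "test" user from the demo; the explicit Deny statement is the one that wins:

```python
import json
import boto3

iam = boto3.client("iam")

# Two statements on the same user: allow everything on EC2, but also deny it.
# Because an explicit Deny always overrides an Allow, the user is locked out of EC2.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "ec2:*", "Resource": "*"},
        {"Effect": "Deny",  "Action": "ec2:*", "Resource": "*"},
    ],
}

iam.put_user_policy(
    UserName="test",
    PolicyName="ec2-allow-and-deny",
    PolicyDocument=json.dumps(policy),
)
```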
So that was policies. We've now discussed users, groups, roles and policies; next comes a very important part of authentication, multi-factor authentication. What is MFA? It's like the OTP you get when logging into your Gmail account: you enter your email and password, and it then asks for a one-time code. You can configure your AWS account the same way, so that after the username and password it also asks for a code. That gives you two layers of security: the password is one layer and the code is the second. With AWS you can use an app called Google Authenticator to create a virtual MFA device. Those of you already using MFA at work may know Gemalto tokens, which people use to connect to their company network from home, so if you're from an IT background you can relate; but the simpler route is a virtual MFA device, and setting one up is easy: download Google Authenticator on your phone and link it to your AWS account.
Let me show you. In the Management Console, open the user you want MFA on, in my case the test user: Users, test, Security credentials tab, where "Assigned MFA device" currently says No. Click edit and choose between a virtual MFA device and a hardware MFA device; I'll pick virtual. It asks you to install the app on your phone, which we've done, so click next and you're shown a barcode. Open Google Authenticator on the phone, tap begin, choose "Scan a barcode" and scan the barcode on the screen; it can take a moment. Once it's scanned you enter two consecutive codes from the app, which change every 30 seconds: the first is 020353, then I wait for the next one, 127891, enter that too, click "Activate virtual MFA", and it says the MFA device was successfully associated. Click finish and you're done. Now if I log out of the test account and sign in again with test, after the username and password it asks for the MFA code; the current code is 734552, I enter it, submit, and I'm logged into the console with the test account I configured through the administrator account. So you get world-class security with a click of a button using IAM.
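The same MFA setup can be scripted. A minimal boto3 sketch, not from the video; the two authentication codes are placeholders that would really come from the authenticator app after loading the device's seed:

```python
import boto3

iam = boto3.client("iam")

# Create a virtual MFA device; the response includes a base32 seed and a QR-code PNG
# that you load into Google Authenticator instead of scanning the console barcode.
mfa = iam.create_virtual_mfa_device(VirtualMFADeviceName="test-mfa")
serial = mfa["VirtualMFADevice"]["SerialNumber"]

# Activate it with two consecutive codes from the authenticator app
iam.enable_mfa_device(
    UserName="test",
    SerialNumber=serial,
    AuthenticationCode1="020353",   # placeholder codes
    AuthenticationCode2="127891",
)
```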
With multi-factor authentication covered, let's move on to the hands-on part, which is what you've been waiting for; give me a second to set everything up. I've built an application that interacts with the S3 service and uploads files to an S3 bucket, and we'll run it in two ways: first from localhost, which is where the access key and secret key come in, and then from the EC2 instance we attached a role to, without any keys, to see whether it can still reach S3. This is my localhost application: I choose a file, say a sample picture, and it uploads it to a bucket I've defined in S3 called "quarantine demo". The bucket currently has a few objects, so let me delete them first. Here is the application's code, and as you can see I have not put the access key and secret key into it yet. Let me show you how the localhost site behaves without them: I choose a file, click upload image, and I get an error, because the application isn't authenticating itself to the service it's calling. So now I'll add the credentials, the access key and the secret key.
The way to do that is this: I copy my access key and secret key into the code, remove what's not required, and save. If I go back to the localhost site and upload the file now, the upload completes. Those credentials are the keys of my Hemant user; if you want to see where they come from, go to Users, open the user, and under Security credentials it lists the access key ID. It will not show the secret access key, because that is displayed only once, when the key is created. And if I make this key inactive from here and try to upload again, I get an error again, "invalid access key", because without valid keys the application is not authenticated to the S3 service. I could reactivate it, but that isn't needed now.
Next, I've already deployed the same website on the EC2 instance. Remember, at the start we created a role with S3 full access and attached it to this instance. Here's the site running on EC2: if I choose a file and upload it, it works, because the role is attached. Now watch what happens if I detach the role: Instance Settings, select "No role", apply, confirm detach. If I try to upload again I just see a blank page, meaning an error occurred; nothing can be uploaded because the role is gone. To make it work again I simply go to Actions, Instance Settings, attach the role, apply, choose a file, upload, and it works like a charm. So that's it: you just need to understand IAM and you can do fairly complex things with a click of a button. You might wonder whether I changed anything in the code when moving it to EC2. You don't have to change anything except deleting the access key and secret key; if the code has no keys in that function, it goes to the EC2 instance metadata, which is where the attached role lives, and picks up the role's temporary credentials from there.
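A minimal sketch of the upload logic the demo describes, using boto3; the bucket and file names are placeholders (the video's bucket is spoken as "quarantine demo"). The point is the credential fallback: with no keys in the code, boto3 walks its default credential chain and ends up at the instance metadata service when a role is attached to the instance:

```python
import boto3

BUCKET = "quarantine-demo"   # placeholder bucket name

# Option 1: explicit keys, as on localhost (avoid hard-coding keys in real code)
s3 = boto3.client(
    "s3",
    aws_access_key_id="AKIA...",          # placeholder
    aws_secret_access_key="...",          # placeholder
)

# Option 2: no keys at all. On an EC2 instance with a role attached, boto3
# automatically fetches the role's temporary credentials from the
# instance metadata service.
s3 = boto3.client("s3")

# Upload a local file to the bucket under the same key name
s3.upload_file("sample.jpg", BUCKET, "sample.jpg")
```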
[Music] I'm sure you know what a data warehouse is: you can think of it as a repository where data generated by your organization's operational systems and many external sources is collected, transformed and stored. You can host a data warehouse on your organization's mainframe servers or in the cloud, but these days companies are increasingly moving towards cloud-based data warehouses instead of traditional on-premises systems, and to understand why, we need to look at the underlying architecture and the disadvantages of traditional data warehouses.
Let's start with the architecture, and first with where the data comes from. Traditionally data sources fall into two groups: internal data, generated and consolidated from the different departments within your organization, and external data, which comes from sources outside your organization. A traditional data warehouse follows a simple three-tier architecture. The bottom tier is the warehouse database server, essentially a relational database system; here, back-end tools and utilities extract data from the various sources, cleanse it and transform it before loading it into the warehouse. The middle tier holds the OLAP server; OLAP stands for online analytical processing, and it performs multidimensional analysis of the business data, transforming it into a format on which complex calculations, trend analysis and data modeling can be done comfortably. The top tier is the front-end client layer, holding the query and reporting tools with which client applications perform data analysis, querying, reporting and data mining. To summarize: bottom tier, back-end tools that collect and cleanse the data; middle tier, the OLAP server that shapes the data the way we want; top tier, query and reporting tools for analysis and mining.
Now the disadvantages. Consider a leading U.S. business services company running a commercial enterprise data warehouse with data coming from many sources across regions. The first problem came while setting it up: a traditional warehouse involves data models, extract-transform-load (ETL) processes and BI tools on top, so the company had to spend a lot of money and resources just on setup. The warehouse, initially five terabytes, was growing more than 20 percent year over year with even higher growth expected, so to keep up with storage and compute needs the company had to keep upgrading hardware, which again takes money, manpower and resources; auto scaling in a traditional data warehouse is simply not easy. Because the storage and compute needs couldn't be met easily, the company also faced performance issues. Finally, costs kept rising: they had already spent on hardware, manpower, electricity, security, real estate and deployment among other things, and as their data warehouse grew they had to spend again to meet the storage and compute needs.
To sum it up, setting up a data warehouse and then deploying and managing it involves a lot of money and resources, and auto scaling a traditional data warehouse is hard; because of all this, many companies are moving towards cloud-based warehouses instead of traditional on-premises systems. In this session we'll look at one of the best-known cloud data warehouses, Amazon Redshift. In simple words, Amazon Redshift is a fast, scalable data warehouse that makes it simple and cost-effective to analyze all your data across your data warehouse and data lake. There's a definition on the screen with a few highlighted words; as we progress through the session you'll see what those words mean, so let's ignore them for now.
There are some key concepts you should know when dealing with Amazon Redshift. A Redshift data warehouse is a collection of compute resources called nodes, and nodes organized into a group form a cluster. Each cluster runs an Amazon Redshift engine and contains one or more databases. A cluster has a leader node and one or more compute nodes. The leader node receives queries from client applications, parses them, develops a query execution plan, and coordinates the parallel execution of that plan across the compute nodes; when the compute nodes finish, the leader node aggregates their intermediate results and sends the final result back to the client application. The compute nodes are the resources that actually execute the plan developed by the leader node, transmitting data among themselves to resolve queries. Each compute node is further divided into slices, called node slices, and each slice gets a share of the node's memory and disk space; the leader node distributes data and portions of the user's query to the slices, and all slices work in parallel, which increases the performance of the warehouse.
So we have a leader node, compute nodes and node slices; how do they interact with the client application? Client applications, typically BI or other analytical tools, communicate with Amazon Redshift using drivers such as JDBC and ODBC. JDBC is the Java Database Connectivity driver, an API for the Java programming language; ODBC is the Open Database Connectivity driver, and both use SQL to interact with the leader node. Using these drivers the client sends a query to the leader node, the leader node parses it and builds the execution plan, the compute nodes and slices work the plan in parallel, and the leader node aggregates the results and returns them to the client. That is a simple explanation of the Amazon Redshift concepts.
Moving on: when you launch a cluster you need to specify the node type, and there are basically two. Dense storage nodes are storage-optimized, used for huge data workloads, and use hard disk drive (HDD) storage; dense compute nodes are compute-optimized, used for high-performance, intensive workloads, and mainly use solid state drive (SSD) storage. Keep three things in mind when choosing between them: the amount of data you want to load into Redshift, the complexity of the queries you'll run, and the needs of the downstream systems that depend on the query results. With those in mind you can pick either dense storage or dense compute nodes.
So that was the architecture and its key concepts; now let's look at why Amazon Redshift is so popular. We saw that setting up a traditional data warehouse takes a lot of money and resources, but it's very easy to set up, deploy and manage a warehouse with Redshift. On the Redshift console there's a "create cluster" option; it asks for details such as the node type, the number of nodes, the VPC, the user ID and password, and once you've filled them in there's a "launch cluster" option, and with one click your data warehouse is created. Once it's up, Redshift automates most of the common administrative tasks, managing, monitoring and scaling the database, so you don't have to worry about them. We also saw that auto scaling is hard in a traditional warehouse, but in Redshift you can scale quickly to meet your needs: a cluster has a leader node and compute nodes, and since the compute nodes are the compute resources, scaling up just means increasing the number of compute nodes and scaling down means decreasing them, by resizing the cluster. There are also single-node and multi-node clusters: in a single-node cluster one node handles both the leader and compute responsibilities, while a multi-node cluster has one leader node and a user-specified number of compute nodes, and you can resize from single-node to multi-node and back as needed. Finally, performance: with a traditional warehouse performance can degrade, but Amazon Redshift can deliver up to ten times better performance than traditional data warehouses, using a combination of strategies such as columnar storage and massively parallel processing to deliver high throughput and fast response times.
let's discuss these strategies one by one well first we have columnar data storage to understand what that is first we should know row storage most of the traditional data warehouse and databases use this row storage in row storage all the data about the record is stored in one row okay so let's say I have this database here I have three columns and two rows the First Column contains the unique number associated with student the second column contains the name of a student and the third column contains the edge as we already know data is stored in form of blocks in databases or data warehouses so as you can see in row storage the block 1 contains all information there is about a particular student his SSN his name and then age so basically it stores the alpha information that there is in a single room so in the first block you have information about first student and in the second block you have information about second student and it goes on now the columnar storage again I'm using the same database again I have three columns and two rows but column storage stores data by columns with data for each column stored together so again we have blocks but the first block here has all the data that is there in First Column so you have all the assistants stored in first block and all name store in second block and all the age is stored in third block so it goes on there are a lot of advantages of using this column storage firstly since in column storage a single block contains same type of data you can achieve better data compression as you can see columnar storage can hold values three times the records as row based storage because of this the number of input output operations decreases and thirdly by storing all the records for one field together column net database can query and perform analysis on similar type of data far quicker than row storage so this is how the concept of columnar storage which is used by Amazon redshift provides us a better performance and then we have massively parallel processing I'm sure you might have heard of parallel processing in computer science it's just that number of different processors work together or compute together or in palette similarly massive parallel processing in Amazon redshift is nothing but cluster we have already discussed this earlier we have a cluster and this cluster has a leader node and one or more compute nodes and this compute nodes is further divided into something called node slices so when this leader node receives a query it develops execution plan and this compute nodes and compute slices work together or in parallel to execute this plan and later this leader nodes sends the results back to client application so basically this computes slices and compute nodes work in parallel to achieve better performance moreover Amazon redshift is also able to smartly recognize the data on nodes before running a query which dramatically boosts the performance so that's how we can get a 10 times better performance using Amazon redshift and then the cost and traditional data warehouses people had to spend a lot of money to set up and then later to maintain the data warehouse but Amazon redshift is the most cost effective cloud-based data warehouse if you remember in traditional data warehouse they had to spend on Hardware real estate Manpower electricity and deployment cost and many others and as their data warehouse grew they had to spend again on meeting the storage and compute needs but an Amazon redshift we don't have to pay any upfront cost so 
Then there's cost. With traditional data warehouses people had to spend heavily to set up and then maintain the warehouse, paying for hardware, real estate, manpower, electricity and deployment among other things, and spending again as it grew. With Amazon Redshift there is no upfront cost; it is the most cost-effective cloud data warehouse, costing about one tenth of a traditional one, and you can start small at just $0.25 per hour with no commitments and scale up gradually when you need to.
In addition, Amazon Redshift lets you query data from a data lake. A data lake is a storage repository that holds a vast amount of raw data in its native format until it is needed, so it contains data in many formats. You can load data from Amazon S3 into your Redshift cluster for analysis, but that takes effort and cost: loading into Redshift means an extract, transform and load (ETL) process, which is time-consuming and compute-intensive, and pulling lots of cold data out of S3 for analysis means growing your cluster, which is costly and resource-hungry. The solution is Amazon Redshift Spectrum, which acts as an interface between your data lake on Amazon S3 and Amazon Redshift, so you can query data stored in S3 directly, without unnecessary data movement. Finally, with Redshift your data is safe and secure. It offers backup and recovery: as data lands in Redshift a copy is made and a snapshot is sent over a secure connection to Amazon S3, so if you lose data or delete it by mistake you can restore it from S3. Redshift also lets you encrypt your data; when encryption is enabled, all the data in the cluster, on the leader node, compute nodes and node slices, is encrypted.
Those are the advantages of Amazon Redshift. Now that you have a basic idea of the architecture and key concepts, clusters, leader node, compute nodes, node slices, it's time for a demo: we'll move data from Amazon S3 into an Amazon Redshift data warehouse and run some simple queries. First there are a few things to install. To run queries against Redshift you need a SQL client, SQL Workbench/J; the client needs a JDBC driver to talk to Redshift; and the JDBC driver needs a Java runtime environment, so that's three installs. For the Java runtime environment I have a download link, so download and store it anywhere. Then search for the Amazon Redshift documentation, open the getting-started guide; step 1 lists the prerequisites, and in step 2 there's a link to the SQL Workbench/J website, where under the current build you can download the generic package for all systems.
There's one more thing, the JDBC driver: go back to the documentation, and in step 4, "configure a JDBC connection", you'll find the JDBC drivers of different versions; download the first one. Once all three are downloaded, store them in a folder of your choice; I have an AWS folder on my desktop with a redshift subfolder containing the SQL Workbench zip, which I've extracted, and the JDBC driver, and the Java runtime environment is already installed. So install those three things and you're set.
Back in the AWS Management Console, search for Amazon Redshift and open the Redshift console; you'll see options in the navigation pane on the left. There are two ways to create a cluster: the quick launch cluster option, which is the easy way, and the launch cluster option, which gives you the freedom to specify all the details, the VPC, security groups, node types, username, password and so on. Let's explore the full launch flow first. It asks for a cluster identifier, say my-cluster, a database name, say db1, and the port; 5439 is the default port handled by Redshift. Then the master username, say awsuser, and the password, confirm it and continue. Next comes the node configuration: on the free tier you only have dc2.large, but with a paid account you can choose others; for dc2.large it shows the CPU capacity, memory, storage, and moderate I/O performance. Then the cluster type, which we discussed: single node, where one node handles both leader and compute responsibilities, or multi-node, with one leader node and a user-specified number of compute nodes. Continue, and it asks for the VPC details, parameter group, whether you want encryption, and so on; the launch cluster option lets you specify everything.
For this demo, though, I'll use the quick launch option. Again, for the free tier I'm using dc2.large with two compute nodes, I'll keep the same cluster name, master user awsuser, a password, and the default port 5439. The last field asks you to choose among the available IAM roles, and here's why we need one: in this demo we're importing data from Amazon S3, and Redshift needs permission to read that data, so we have to create an IAM role. Go to the IAM service, open Roles, click Create role, and since we're dealing with Redshift, select Redshift as the service and the "Redshift - Customizable" use case.
Click on Next: Permissions. We want Redshift to read data from S3, so search for S3; AmazonS3ReadOnlyAccess is enough for this demo, though there's also AmazonS3FullAccess if you need read and write. I'll choose AmazonS3ReadOnlyAccess, which gives read-only access to all buckets, click next, review, name the role, say myRedshiftRole2, and create it. Now our Redshift cluster can be granted permission to read from S3. Back in the Redshift console, refresh, and the role we just created appears in the list. Notice that, unlike the full launch flow, quick launch only needed the node type, number of nodes, cluster identifier, master username and password, the default database port and the IAM role; click Launch cluster, and with one click you've deployed a data warehouse on Redshift. If you remember, the full launch flow let us name our own database; with quick launch a default database called dev is created for us.
The cluster has been created, and before connecting from SQL Workbench make sure everything is green: the cluster status should be "available" and the database health "healthy", otherwise the connection won't work. Click the cluster to see its details. The endpoint tells you how to connect; it's publicly accessible, the master username is awsuser, and the security group shows the TCP rules. You can also see the cluster name, cluster type, node type, the nodes and their zone, the creation date and time, and the cluster version; on the right, the cluster status is available, the database health is healthy, it's not in maintenance mode, and the parameter group apply status is in sync. One thing to check: open the VPC security group, go to Inbound, and make sure there's a Custom TCP rule on port 5439, the default Redshift port; edit and save it if needed, then go back to Clusters. The JDBC URL shown here is what you'll use to connect from SQL Workbench, so copy it into a text file; if you use ODBC there's a separate ODBC URL. Scrolling down you see the capacity details of the cluster: dc2.large, seven EC2 compute units, total memory, storage and platform. There's also a "See IAM roles" option; copy the role's ARN and paste it in the editor too, so it's easy to find while connecting.
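If you prefer scripting the quick-launch step, a roughly equivalent call with boto3 might look like the sketch below; this is not part of the video, the region, password, account ID and role ARN are placeholders, and the console defaults (such as the dev database) are spelled out explicitly here:

```python
import boto3

redshift = boto3.client("redshift", region_name="us-east-1")  # assumed region

redshift.create_cluster(
    ClusterIdentifier="my-cluster",
    NodeType="dc2.large",
    ClusterType="multi-node",
    NumberOfNodes=2,
    DBName="dev",                                # quick launch's default database
    MasterUsername="awsuser",
    MasterUserPassword="SomeStrongPassword#3",   # placeholder
    Port=5439,
    PubliclyAccessible=True,
    IamRoles=["arn:aws:iam::123456789012:role/myRedshiftRole2"],  # role ARN copied earlier
)
```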
So now the cluster is created and the data warehouse is set up; all that's left is to connect with SQL Workbench and start working. Go to the folder where you stored the workbench and run the SQL Workbench/J executable JAR. It asks for a profile name, say "new profile 1"; for the driver pick the Amazon Redshift JDBC driver; for the URL paste the JDBC URL we copied earlier; enter awsuser and the password, make sure Autocommit is selected, save and click OK. It says connecting to the database, and then it's successfully connected, so we can run queries.
First let's create some tables. I'm using the sample TICKIT data set from Amazon S3 that the Redshift getting-started documentation references; in step 6 of the guide you'll find the CREATE TABLE statements and sample queries, and I have them stored in my editor. The schema is basically auction data: a users table, a category table for the categories, a date table for when an event occurred, an event table with the event details, a listing table with the items listed for sale, and a sales table recording which user bought which item and for how much, six to seven tables in all. I paste all the statements, select them and click run, and it reports that the users, venue, category, date, event, listing and sales tables have been created.
Next we need to copy the data for these tables from Amazon S3 into Redshift. In the editor I have the COPY commands; let me paste one and explain the format. It says COPY into the users table we just created FROM a path, that is, from the file stored in an S3 bucket; then the credentials, which are just the IAM role ARN we copied earlier, granting permission to read from S3; then the delimiter. To show what a delimiter is, say I type a record like name|Archana|age|hobbies: the straight line, the pipe character, is the delimiter, the character used to separate the fields or columns. Finally there's the region in which the S3 bucket is located. The only thing we have to replace is the IAM role, so copy the role's ARN and paste it wherever the placeholder is, select everything and click execute. It can take a while because the data sets stored in S3 have a large number of rows; it shows "executing statement", then one of seven finished, six to go, and eventually SQL Workbench has successfully executed the whole script.
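The same connection and COPY can also be driven from Python instead of SQL Workbench/J, since Redshift speaks the PostgreSQL wire protocol on port 5439. This is a hedged sketch, not part of the video: the endpoint, password and role ARN are placeholders, and the S3 path is the sample TICKIT users file from the getting-started guide (check the docs for the current path):

```python
import psycopg2

# Connection details come from the cluster's endpoint on the Redshift console
conn = psycopg2.connect(
    host="my-cluster.xxxxxxxx.us-east-1.redshift.amazonaws.com",  # placeholder endpoint
    port=5439,
    dbname="dev",
    user="awsuser",
    password="SomeStrongPassword#3",
)
conn.autocommit = True
cur = conn.cursor()

# Same COPY pattern as in the demo: load the sample users file from S3,
# authenticating through the cluster's attached IAM role
cur.execute("""
    COPY users
    FROM 's3://awssampledbuswest2/tickit/allusers_pipe.txt'
    IAM_ROLE 'arn:aws:iam::123456789012:role/myRedshiftRole2'
    DELIMITER '|'
    REGION 'us-west-2';
""")

# Two of the queries run later in the demo
cur.execute("SELECT * FROM pg_table_def WHERE tablename = 'users';")
cur.execute("SELECT firstname, lastname FROM users WHERE state LIKE 'NH';")
print(cur.fetchall())
```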
queries let's say I want to extract the metadata of user table I have this query okay select star from page table definition so since we are extracting metadata from table name let's say users and click on execute option so you have so many columns here it says First Column user ID of type integer and coding Delta then you have username first name last name city state email so basically that's the metadata or the structure of user table so you have sales ID list ID seller ID buyer ID and many other details let's execute some of the command let's say I want to find total sales on a given date okay sum the count your have some function which will count the number of sales from sales and date where the sales date ID is date ID and the date on which I want to calculate I've specified it here and then click okay the sum error it should show a number here rest is working only that is not working I've selected the user table and I've asked them to display all the all that there is in the user table so this has the data okay Select Staff from users so I want to extract the names of people who are from let's say some states let's consider some state let's take an edge so s t a t e like NH it should work now it says executing statement so these are the people who are from State energy so basically once you've the perfect connection from your SQL walkbench to your Amazon redshift you can perform whatever queries you like so let's go back to Amazon let's shift console well so this is the cluster I'm going to click on this here you have queries when you click on that all the queries which you perform till now will be shown so this is the query so it says first name from users was from State NH this was the query which we performed earlier so you have all the data or all the information regard the queries which you executed well that's all about Amazon redshift so guys this is how easy it is to create a data warehouse using Amazon redshift go ahead and explore different many other features of Amazon redshift well I've just showed a part of them here so go ahead and create a database perform various queries and have fun [Music] what is AWS kinosis so Amazon Kinesis is one of the best managed Services which particularly scales elastically especially for real-time processing of the data at a massive Point these Services can be used to collect large streams of data records which are especially consumed by the application process that runs on Amazon ec2 instances this Amazon Kinesis is used to collect streamline the process and analyze the data so that easily we can get the perfect insights as well as the quick response with respect to the information it is also offering the key capabilities at a cost effective price in order to process the Streamline data at a particular scale with the help of flexible tools according to the needs and requirements through the Amazon Kinesis you can also get the real-time data like the video audio application logs as well as the website click streams machine learning and other applications too this new technique by the Amazon will help you to analyze and process the data instantly instead of waiting for long hours after collecting the data so this was a quick overview of AWS kinasis so now let us move on to the next part here which is the advantages so basically the first and foremost advantage of Kinesis is that it is real time Amazon Kinesis enables you to ingest buffer and process streaming data in real time so that you can derive insights in seconds or minutes instead of 
The next point is that it is fully managed: Amazon Kinesis is fully managed and runs your streaming applications without requiring you to manage any complex infrastructure. And the third point is scalability: Amazon Kinesis can handle any amount of streaming data and process data from hundreds of thousands of sources with very low latency. So those were a few advantages of using Amazon Kinesis. Now let us talk about the next part, which is the Amazon Kinesis capabilities. The first capability is Kinesis Video Streams. Kinesis Video Streams is used to securely stream data such as video from connected devices to AWS for machine learning, analytics, and other processing; it gives you access to the individual video fragments and encrypts the stored data. You can refer to the diagram that is present on your screen right now. The next point is Kinesis Data Streams. Kinesis Data Streams is used to build custom real-time applications that process data streams using popular stream-processing frameworks; you can ingest and process the streaming data with tools like Apache Spark running on EC2 instances. The next point is Kinesis Data Firehose. Kinesis Data Firehose is used to capture, transform, and load data streams into AWS data stores for near-real-time analytics with your existing business intelligence tools; it continuously prepares and loads the data to the destination you choose, so the streaming data is durable and ready for analysis. And the last point is Kinesis Data Analytics. Kinesis Data Analytics is one of the easiest ways to process streaming data in real time with SQL, without having to learn new programming languages or processing frameworks; it captures the streamed data, lets you run standard queries against the data streams, and feeds analytical tools so you can create alerts and respond in real time. So these were the capabilities of AWS Kinesis. Now let us discuss the use cases of Amazon Kinesis. The first one is video analytics applications: Amazon Kinesis is used to securely stream video from camera-equipped devices placed in factories, public places, offices, and homes to your AWS account; the video streams can then be used for playback, security monitoring, machine learning, face detection, and other analytics. The second point is batch to real-time analytics: using Amazon Kinesis you can perform real-time analytics on data you used to analyze with batch processing from data warehouses or Hadoop frameworks; data lakes, data science, and machine learning are some of the most common methods used in such cases. To load the data continuously you can use Kinesis Data Firehose, and you can update your machine learning models more frequently for new and accurate outputs. The third point is building real-time applications: you can use Amazon Kinesis for things like application monitoring, fraud detection, and live leaderboards.
The process is to ingest streaming data into Kinesis Data Streams, process it with Kinesis Data Analytics, and emit the results to any data store or application, with end-to-end latencies measured in seconds; all of this helps you learn more about what clients are doing with your products, services, and applications so you can react immediately. And the last point is analyzing IoT devices: Amazon Kinesis is used to process streaming data directly from IoT devices such as embedded sensors, TV set-top boxes, and consumer appliances. You can use this data to send real-time alerts or take actions programmatically when a sensor exceeds certain operating thresholds; it is a good idea to start from the sample IoT analytics code while building such an application. So these were the use cases of Amazon Kinesis. Now we will move to the next part, which is the comparison between Kinesis and SQS. Amazon Kinesis is differentiated from Amazon Simple Queue Service, that is SQS, in that Kinesis is used to enable real-time processing of streaming big data, whereas SQS is used as a message queue to store messages transmitted between distributed application components. Kinesis provides routing of records using a given key, ordering of records, the ability for multiple clients to read messages from the same stream concurrently, replay of messages for as long as 7 days in the past, and the ability for a client to consume records at a later time. Kinesis streams will not dynamically scale in response to increased demand, so you must provision enough shards ahead of time to meet the anticipated demand of both your data producers and your data consumers. SQS, on the other hand, provides messaging semantics so that your application can track the successful completion of work items in a queue, and you can schedule a delay on messages of up to 15 minutes. Unlike Kinesis streams, SQS scales automatically to meet application demand. SQS has lower limits on the number of messages that can be read or written at one time compared to Kinesis, so applications using Kinesis can work with messages in larger batches than when using SQS. So this was a comparison between Amazon Kinesis and Amazon SQS.
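To make the Kinesis-versus-SQS distinction above a little more concrete, here is a minimal, hypothetical boto3 sketch (the stream name, queue name, and region are made up for illustration): a Kinesis record is written with a partition key and can be re-read by many consumers for the whole retention period, while an SQS message is typically received once by a worker and then deleted.

import boto3

kinesis = boto3.client("kinesis", region_name="ap-south-1")  # use your own region
sqs = boto3.client("sqs", region_name="ap-south-1")

# Kinesis: records are ordered per partition key and can be replayed by multiple consumers.
kinesis.put_record(StreamName="my-edureka-stream", Data=b"1", PartitionKey="1")

# SQS: a message is consumed once and then deleted by whichever worker processed it.
# send_message also accepts DelaySeconds (up to 900, i.e. 15 minutes).
queue_url = sqs.create_queue(QueueName="my-demo-queue")["QueueUrl"]
sqs.send_message(QueueUrl=queue_url, MessageBody="1")
messages = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1).get("Messages", [])
for message in messages:
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])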
The next part is to understand how it works. Here I'm going to discuss Amazon Kinesis Data Streams, which is one of the capabilities of Amazon Kinesis. You can use Amazon Kinesis Data Streams to collect and process large streams of data records in real time. You can create data processing applications, known as Kinesis Data Streams applications; a typical Kinesis Data Streams application reads data from a data stream as data records. These applications can use the Kinesis Client Library, and they can run on Amazon EC2 instances. You can send the processed records to dashboards, use them to generate alerts, dynamically change pricing and advertising strategies, or send the data to a wide variety of AWS services. Kinesis Data Streams is part of the Kinesis streaming data platform, along with the other components we previously discussed, such as Kinesis Data Firehose, Kinesis Video Streams, and Kinesis Data Analytics. Before discussing the architecture, let us have a look at a few of the terms that are used very frequently with Kinesis Data Streams. The first term is the Kinesis data stream itself: a Kinesis data stream is a set of shards; each shard has a sequence of data records, and each data record has a sequence number that is assigned by Kinesis Data Streams. We will discuss shards and sequence numbers in the later part of the session. The next term is a data record. A data record is the unit of data stored in a Kinesis data stream; data records are composed of a sequence number, a partition key, and a data blob, which is an immutable sequence of bytes. Kinesis Data Streams does not inspect, interpret, or change the data in the blob in any way, and a data blob can be up to 1 MB in size. The next term is the retention period. The retention period is the length of time that data records are accessible after they are added to the stream; a stream's retention period is set to a default of 24 hours after creation. You can increase the retention period up to 168 hours, that is 7 days, using the IncreaseStreamRetentionPeriod operation, and decrease it down to a minimum of 24 hours using the DecreaseStreamRetentionPeriod operation; additional charges apply for streams with a retention period set to more than 24 hours. The next term is the producer: producers put records into Amazon Kinesis Data Streams. For example, a web server which sends log data to a stream can be called a producer. The next term is the consumer: consumers get records from Amazon Kinesis Data Streams and process them; these consumers are known as Amazon Kinesis Data Streams applications. The next term is the Amazon Kinesis Data Streams application: an Amazon Kinesis Data Streams application is a consumer of a stream that commonly runs on a fleet of EC2 instances. There are two types of consumers: shared fan-out consumers and enhanced fan-out consumers. The output of a Kinesis Data Streams application can be an input to another stream, which enables you to create complex topologies that process data in real time; an application can also send data to a variety of other AWS services. There can be multiple applications for one stream, and each application can consume data from the stream independently and concurrently. The next term is a shard. A shard is a uniquely identified sequence of data records in a stream; a stream is composed of one or more shards, each of which provides a fixed unit of capacity. Each shard can support up to five transactions per second for reads and up to 1,000 records per second for writes. The data capacity of your stream is a function of the number of shards that you specify for the stream, and the total capacity of the stream is the sum of the capacities of its shards; if your data rate changes, you can increase or decrease the number of shards allocated to your stream. The next point is the partition key. A partition key is used to group data by shard within a stream: Kinesis Data Streams segregates the data records belonging to a stream into multiple shards, and it uses the partition key that is associated with each data record to determine which shard a given data record belongs to. Partition keys are Unicode strings with a maximum length of 256 characters; an MD5 hash function is used to map partition keys to 128-bit integer values and to map the associated data records to shards using the hash key ranges of the shards. When an application puts data into a stream, it must specify a partition key. The next term is the sequence number. Each data record has a sequence number that is unique per partition key within its shard; Kinesis Data Streams assigns the sequence number after you write to the stream with client.putRecords or client.putRecord. Sequence numbers for the same partition key generally increase over time: the longer the time period between write requests, the larger the sequence numbers become.
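As a quick illustration of partition keys, shards, and sequence numbers, here is a hypothetical boto3 sketch (the stream name and region are just examples): the partition key passed to put_record decides which shard the record lands on, and the response carries the shard ID and the sequence number that Kinesis Data Streams assigned to the record.

import boto3

kinesis = boto3.client("kinesis", region_name="ap-south-1")  # use your own region

# The partition key is hashed with MD5 to pick a shard; the data blob can be up to 1 MB.
response = kinesis.put_record(
    StreamName="my-edureka-stream",
    Data=b'{"event": "page_view", "user": "alice"}',
    PartitionKey="alice",
)
print(response["ShardId"], response["SequenceNumber"])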
The next term is the Kinesis Client Library. The Kinesis Client Library is compiled into your application to enable fault-tolerant consumption of data from the stream; it ensures that for every shard there is a record processor running and processing that shard, and it also simplifies reading data from the stream. The Kinesis Client Library uses an Amazon DynamoDB table to store control data, and it creates one table per application that is processing data. So these were some of the terms that you must be aware of before moving on to the architecture part. The next part is the architecture, that is, how Amazon Kinesis Data Streams works. As you can see in the diagram that is present on your screen right now, producers continuously push data into Kinesis Data Streams. These are the producers which produce the data, for example EC2 instances, mobile clients, or traditional servers. The consumers then process the data in real time: the data produced by the producers is processed through the Kinesis data stream, these are the shards that hold the data, and these are the consumers which consume the processed data; in this example EC2 instances are shown as the consumers. The consumers can store their results using an AWS service such as Amazon S3, DynamoDB, Redshift, EMR, Kinesis Data Firehose, and so on. So this is the overall working of Amazon Kinesis Data Streams. Now let us move on to the last part of this session, which is the demo. As far as the demo is concerned, I will show you how to create a Kinesis data stream, how to add data to that stream, and how to read that data back. So let us just move on to the Amazon console. For that you have to log in to your Amazon console, and once you log in you will see the main page; here you have to click on Services, and in the Analytics section you have to click on Kinesis. Once you're done with that, you have to click on Get started. Here you can also find the documentation for Amazon Kinesis, which you can read for your further understanding. So let me just click on Get started. As discussed earlier, these are the four capabilities of Kinesis: data streams, delivery streams (which is Firehose), analytics, and video streams. As of now I'll click on Create data stream. Let me just name it; you can name it whatever you want, so let me call it my-edureka-stream. Here you have to enter the number of shards; for the time being I'm just entering one, which is the bare minimum value you can enter here, and the maximum depends on your account (in my case it is a free tier account, so I can enter a maximum of 200 shards). The write capacity is 1 MB per second, that is 1,000 records per second, and the read capacity is 2 MB per second. So click on Create Kinesis stream. Creating a stream takes only a few seconds, and as you can see you get the message that the stream my-edureka-stream has been created and the status is active.
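Using the per-shard figures just mentioned (roughly 1 MB or 1,000 records per second in, and 2 MB per second out), you can make a rough back-of-the-envelope estimate of how many shards to provision; the throughput numbers below are made-up example inputs, not a sizing recommendation.

import math

def shards_needed(write_mb_per_s, records_per_s, read_mb_per_s):
    # Each shard supports about 1 MB/s or 1,000 records/s for writes and about 2 MB/s for reads.
    return max(
        math.ceil(write_mb_per_s / 1.0),
        math.ceil(records_per_s / 1000.0),
        math.ceil(read_mb_per_s / 2.0),
    )

print(shards_needed(write_mb_per_s=3, records_per_s=2500, read_mb_per_s=4))  # -> 3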
Now we will go to the command prompt and try to execute a few commands. The first command is the describe-stream command: the command for describing an already created stream is aws kinesis describe-stream, and then you provide the name of the stream using the stream-name parameter, that is my-edureka-stream, and then you just have to specify the region. The region in my case is ap-south-1, Asia Pacific. Once you do that, just hit enter, and as you can see you get the description, or the information, of the stream that you just created; for this stream we have one shard ID, as you can see in this part, and we will see how to use this shard ID shortly. So this is the description of the stream that we just created. The next command is the one to put data, or a record, into this stream. The command for that is aws kinesis put-record, with the stream name, the partition key, which I've set to 1, and the data; for the data you can enter whatever value you want, and for this example I have entered 1. Then you specify the region and hit enter, and for this record you get back the shard ID and the sequence number. Similarly, let me enter one more record: this time the data is 11 and the partition key should be different, so it's 2; hit enter and one more record has been added. Let's add one more record: the data is 111 and the partition key is 3; hit enter, and you'll see this record has also been written, with a shard ID and sequence number. The next command is the command to get the shard iterator. What does a shard iterator do? A shard iterator lets you iterate through a set of records: if you want to read through the records that you have in your data stream, you use a shard iterator. The command for that is very simple, as you can see on the screen: aws kinesis get-shard-iterator, with the shard-iterator-type, which is TRIM_HORIZON in this case, and then you have to enter the shard ID. For these records, say for the first record, the shard ID is this one, so let me just copy it and paste it here. Once you do that, hit enter, and after this command gets executed you will get a shard iterator; this is the long string we have just got, so let me just copy it. Now once you have this shard iterator, what you have to do is get the records. For that the command is aws kinesis get-records with the shard-iterator parameter, so you paste the iterator you just copied, and you can also specify a limit: if, for example, you want to read only two records, you can do that using this flag, or parameter, called limit. Let me set the limit to three, since we have entered three records, and hit enter. As you can see, we have got our three records. This is the data and this is the partition key; the data is in base64-encoded form here, this MQ== value, and the partition key is 1. Similarly we have another record, and the partition key for that is 2; so this one was 1, this was 11, and this was 111. You can also decode this data if you want, and for the third record the partition key is 3 and this is its data. So this is how you write data to, and get records from, a Kinesis data stream. I hope this demo is good enough to get you started with Amazon Kinesis.
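If you prefer doing the same thing from code rather than the AWS CLI, here is a rough boto3 equivalent of the read side of this demo; the stream name and region are the ones assumed above, and TRIM_HORIZON simply means "start from the oldest record in the shard".

import boto3

kinesis = boto3.client("kinesis", region_name="ap-south-1")  # use your own region
stream_name = "my-edureka-stream"

shard_id = kinesis.describe_stream(StreamName=stream_name)["StreamDescription"]["Shards"][0]["ShardId"]
shard_iterator = kinesis.get_shard_iterator(
    StreamName=stream_name, ShardId=shard_id, ShardIteratorType="TRIM_HORIZON"
)["ShardIterator"]

result = kinesis.get_records(ShardIterator=shard_iterator, Limit=3)
for record in result["Records"]:
    # boto3 returns the data blob as raw bytes (the CLI shows it base64 encoded).
    print(record["PartitionKey"], record["Data"])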
Similarly you can get started with the other Amazon Kinesis services, such as Firehose, analytics, and video streams. So that was the demo part that I've just covered. [Music] What is an API? API is basically the acronym for application programming interface, which is a software intermediary that allows two applications to talk to each other. Suppose you're searching for a hotel room through an online travel booking site: using the site's online form, you will fill in the necessary information like the city you want to stay in, check-in and check-out dates, number of guests, and the number of rooms, and then you click search. But what exactly is going on between entering your information and receiving your hotel choices? Yes, that's the API. The site aggregates information from many different hotels, and when you click search, the site interacts with each hotel's API, which delivers results for available rooms that meet your criteria; all this happens in seconds because of an API, which acts like a messenger that runs back and forth between applications, databases, and devices. So what exactly is the AWS API Gateway? The AWS API Gateway is basically a service provided by Amazon that is used to create, publish, maintain, monitor, and secure various APIs, such as REST, HTTP, and WebSocket APIs, at any scale. As an API developer, you can create APIs that access either Amazon Web Services or any other web services, along with the data stored in the AWS cloud. As an AWS API Gateway developer, you can create APIs for use in your own client applications, as well as make your APIs available to third-party app developers. The API Gateway creates RESTful APIs. The term REST basically stands for representational state transfer; it is an architectural style that defines a set of rules for creating web services in a client-server communication. REST suggests creating a representation of the data requested by the client and sending the values of that object in response to the user. For example, if the user is requesting a movie in Bangalore at a certain place and time, then you can create an object on the server side; over here you have an object and you are sending the state of that object, and this is why REST is known as representational state transfer. The architectural style of REST helps in using less bandwidth, which makes an application more suitable for the internet; it is often regarded as the language of the internet and is completely based on resources. Now, the features of API Gateway: Amazon API Gateway offers features such as support for stateful (WebSocket) and stateless (HTTP and REST) APIs. It provides flexible authentication mechanisms, such as AWS Identity and Access Management policies, Lambda authorizer functions, and Amazon Cognito user pools. You can also make use of the developer portal for publishing your APIs. It provides canary release deployments for safely rolling out changes. It provides CloudWatch access logging and execution logging, including the ability to set alarms. You will also be able to use AWS CloudFormation templates to enable API creation. It also provides support for custom domain names and integration with AWS WAF for protecting your APIs against common web exploits, and integration with AWS X-Ray for understanding and triaging performance latencies is available too. So those were some of the features of the AWS API Gateway.
Talking about the benefits of using the AWS API Gateway, there are a number of them. First is that it supports efficient API development: you will be able to run multiple versions of the same API simultaneously when you make use of the API Gateway, which allows you to quickly iterate, test, and release new versions. Also, you only pay for calls made to your APIs and for data transfer out, and there are no minimum fees or upfront commitments. Performance at any scale: you can provide end users with the lowest possible latency for API requests and responses by taking advantage of the global network of edge locations using Amazon CloudFront, and you can throttle traffic and authorize API calls to ensure that backend operations withstand traffic spikes and backend systems are not unnecessarily called. Cost savings at scale: the Amazon API Gateway provides a tiered pricing model for API requests, with an API request price as low as $0.90 per million requests at the highest tier, so you can decrease your costs as your API usage increases per region across your AWS accounts. Easy monitoring: performance metrics and information on API calls, data latency, and error rates can be monitored from the API Gateway dashboard, which allows you to visually monitor calls to your services using Amazon CloudWatch. Flexible security controls: you will be able to authorize access to your APIs with AWS Identity and Access Management (IAM) and Amazon Cognito; if you use OAuth tokens, API Gateway offers native OIDC and OAuth 2 support, and to support custom authorization requirements you can execute a Lambda authorizer from AWS Lambda. RESTful API options: create RESTful APIs using HTTP APIs or REST APIs. HTTP APIs are the best way to build APIs for a majority of use cases and are up to 71 percent cheaper than REST APIs; if your use case requires API proxy functionality and management features in a single solution, you can use REST APIs. So who makes use of the API Gateway? The API Gateway is basically used by two types of developers, that is, API developers and app developers. An API developer is basically someone who creates and deploys APIs to enable the required functionality of the API Gateway; to make use of the AWS API Gateway, you must be an IAM user in the AWS account that owns the API. On the other hand, an app developer builds a functioning application to call AWS services by invoking a WebSocket or REST API created by an API developer in the AWS API Gateway. This way the app developer becomes the customer of the API developer, and therefore the app developer does not require an AWS account; however, the API that is created should either not require IAM permissions or it should support authorization of users through third-party federated identity providers that are supported by Amazon Cognito user pool identity federation. Such identity providers include Amazon, Amazon Cognito user pools, Facebook, and Google. So, if you remember, we've spoken about a REST API, a WebSocket API, and an HTTP API; now let me just brief you about each of these. API Gateway REST API: it is basically a collection of HTTP resources and methods that are integrated with backend HTTP endpoints, Lambda functions, or other AWS services. You can deploy this collection in one or more stages. Typically, API resources are organized in a resource tree according to the application logic, and each API resource can expose one or more API methods that have unique HTTP verbs supported by the API Gateway. Using the AWS API Gateway you can create RESTful APIs that are HTTP based, that enable stateless client-server communication, and that implement standard HTTP methods such as GET, POST, PUT, PATCH, and DELETE.
The AWS API Gateway WebSocket API: it is basically a collection of WebSocket routes and route keys that are integrated with backend HTTP endpoints, Lambda functions, or other AWS services. Just as with a REST API, you can deploy the collection of WebSocket APIs in one or more stages. API methods are invoked through frontend WebSocket connections that you can associate with a registered custom domain name. The AWS API Gateway will create WebSocket APIs that adhere to the WebSocket protocol, which enables stateful, full-duplex communication between the client and the server, and you can also route incoming messages based on the message content. API Gateway HTTP API: it is nothing but a collection of routes and methods that are integrated with backend HTTP endpoints or Lambda functions. Just as with the previous two, you can deploy this collection in one or more stages, and each route can expose one or more API methods that have unique HTTP verbs supported by the API Gateway. Now that you have a brief idea about the three types of APIs available, let's take a look at some of the important terms and concepts that you should be aware of when you're working with APIs. First up is API deployment: it is nothing but a point-in-time snapshot of your API Gateway API; to be available for clients to use, the deployment must be associated with one or more API stages. API developer: this is nothing but your AWS account that owns an API Gateway deployment, for example a service provider that also supports programmatic access. API endpoint: an API endpoint is a hostname for an API in API Gateway that is deployed to a specific region. API key: an API key is basically an alphanumeric string that API Gateway uses to identify an app developer who uses your REST API or WebSocket API; API Gateway can generate API keys on your behalf, or you can import them from a CSV file, and these API keys can be used along with Lambda authorizers or usage plans to control access to your APIs. API stage: an API stage is a logical reference to a lifecycle state of your API; API stages are identified by an API ID and a stage name. Callback URL: when a new client connects through a WebSocket connection, you can call an integration in API Gateway to store the client's callback URL; you can then use that callback URL to send messages to the client from the backend system. Developer portal: an application that allows your customers to register, discover, and subscribe to your API products, manage their API keys, and view usage metrics for your APIs. Edge-optimized API endpoint: it is the default hostname of an API that is deployed to the specified region while using a CloudFront distribution to facilitate client access, typically from across AWS regions; API requests are routed to the nearest CloudFront point of presence, or PoP, which typically improves connection time for geographically diverse clients. Integration request: it is the internal interface of a WebSocket API route or REST API method in API Gateway, in which you map the body of a route request, or the parameters and the body of a method request, to the formats that are required by the backend. Integration response: the internal interface of a WebSocket API route or REST API method in API Gateway, in which you map the status codes, headers, and payloads that are received from the backend to the response format that is returned to a client app. Mapping template: it is a script in Velocity Template Language, or VTL, that transforms a request body from the frontend data format to the backend data format, or transforms a response body from the backend data format to the frontend data format.
Mapping templates can be specified in the integration request or in the integration response, and they can reference data made available at runtime as context and stage variables. The mapping can be as simple as an identity transform that passes the headers or the body through the integration as-is from the client to the backend for a request; the same is true for a response, in which the payload is passed from the backend to the client. Method request: it is the public interface of an API method in API Gateway that defines the parameters and the body that an app developer must send in requests to access the backend through the API. Method response: it is the public interface of a REST API that defines the status codes, headers, and body models that an app developer should expect in responses from the API. Mock integration: in a mock integration, API responses are generated from API Gateway directly, without the need for an integration backend; as an API developer, you decide how API Gateway responds to a mock integration request, and for this you configure the method's integration request and integration response to associate a response with a given status code. Model: a model is basically a data schema specifying the data structure of a request or a response payload. A model is required for generating a strongly typed SDK of an API, it is also used to validate payloads, and it is convenient for generating a simple mapping template to initiate creation of a production mapping template; although useful, a model is not required for creating a mapping template. So those were some of the important terms that you should know in order to move ahead with the session; don't worry if you haven't understood them completely, we'll be elaborating more on this when we move on to the hands-on session. Before we move on to the hands-on session, let's discuss the pricing of the AWS API Gateway. With the AWS API Gateway, you only pay when your APIs are in use; there are no minimum fees or upfront commitments. For HTTP APIs and REST APIs, you pay only for the API calls you receive and the amount of data transferred out. There are no data transfer out charges for private APIs; however, AWS PrivateLink charges apply when you're using private APIs in the API Gateway. API Gateway also provides optional data caching, charged at an hourly rate that varies based on the cache size you select. For WebSocket APIs, you only pay when your APIs are in use, based on the number of messages sent and received and the connection minutes. The API Gateway free tier includes 1 million HTTP API calls, 1 million REST API calls, 1 million messages, and 750,000 connection minutes per month for up to 12 months. Beyond the free tier, you will be charged based on the region, the type of API service that you use, and the number of requests. To know more in detail about the pricing, you can check out the official AWS API Gateway pricing page; there you will find all the necessary details based on the region of your choice.
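As a purely illustrative calculation (the prices below are assumptions; always check the pricing page for your region and API type), a rough monthly estimate for a REST API could be sketched like this:

# Hypothetical numbers for illustration only.
FREE_TIER_CALLS = 1_000_000          # free REST API calls per month for the first 12 months
PRICE_PER_MILLION_CALLS = 3.50       # assumed REST API request price in USD
DATA_OUT_PRICE_PER_GB = 0.09         # assumed data transfer out price in USD

def monthly_estimate(calls, data_out_gb):
    billable_calls = max(0, calls - FREE_TIER_CALLS)
    return billable_calls / 1_000_000 * PRICE_PER_MILLION_CALLS + data_out_gb * DATA_OUT_PRICE_PER_GB

print(round(monthly_estimate(calls=5_000_000, data_out_gb=20), 2))  # -> 15.8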
Now let's move on towards the most interesting part of the session, which is creating APIs using the AWS API Gateway. Prerequisites: before using the Amazon API Gateway for the first time, you must have an AWS account, so you will have to sign up for one, and once your account has been created you should also create an AWS Identity and Access Management (IAM) user with administrator permissions. To know more about how to create an AWS account, make sure to check out the AWS crash course video from edureka. Creating a REST API using Lambda: before doing that, I'll just brief you about what AWS Lambda is. AWS Lambda is basically a serverless compute service, meaning developers don't have to worry about which AWS resources to launch or how to manage them; all they have to do is put the code on Lambda and it executes. AWS Lambda executes your backend code by automatically managing the AWS resources, and when we say manage, it includes launching or terminating instances, health checkups, auto scaling, applying updates or patches, etc. So in order to create a REST API using AWS Lambda, the first thing that you have to do is create a Lambda function. To do that, you will have to open up the AWS console; if you already have an AWS account, just log into it, else create a new account. Before you get started, please note that AWS provides free services for a year, and to know more about the free tier services you can check out Amazon's official page for the same. Since I already have an account, I'm just going to log in and get started. Once you've logged into your account, you'll be able to see the AWS Management Console, just as you can see on the screen. As you can see, there is a huge list of services provided by AWS; if you scroll down, you will notice that the Lambda service is present under the Compute section, so just click on that to move on to the functions list page. This page basically shows all the functions that you have created in the current AWS region; please make a note that recently created functions might not immediately appear, it might take a while for them to show up. As you can see, this page has several options such as Create function, filter, Actions, etc. Also, if you are a returning user who has already created some Lambda functions, you will see a list of the functions that you have created; else, the list will be empty. If you want to create a new Lambda function, simply click on the Create function option that is present at the top right corner of the page. Please make a note that the Actions dropdown list is not clickable: this is because the actions are to be performed on Lambda functions that are already created, so if you do not have any function you will not be able to click on the Actions list. However, if you have some function present already, select the function and you will see that the Actions list gets enabled and provides three options, that is, view details, test, and delete. As the names suggest, these actions will help you view the Lambda function details, test it, or delete the function if it is no longer required. Since we are focusing on creating a Lambda function, let me just click on the Create function option. Now we've moved on to the Create function page. The first option is Author from scratch, which allows you to create the Lambda function completely on your own; if you want to create your own Lambda function, just give a suitable name and specify the runtime that you prefer. In our case, the name of the function will be hello world, and the runtime, or the language that I want to make use of, is Python, so I'll just select Python 3.8. Next up, after the runtime, you see something called permissions, which basically refers to the function permissions: by default, Lambda will create an execution role with permissions to upload logs to Amazon CloudWatch Logs.
Amazon CloudWatch is nothing but a monitoring and management service: it provides data and actionable insights for AWS, hybrid, and on-premises applications and infrastructure resources. When you make use of AWS CloudWatch, you can collect and access all your performance and operational data in the form of logs and metrics from a single platform. However, the default role can be customized later when you wish to add triggers. Okay, so now I'll go ahead and click on the Create function option; it will take a few moments, so let me just wait for the same. Once the function has been created, you will see the message, just as you see on the screen, which says "Successfully created the function hello world", which is the name of the function that I just created. Now, this page basically allows the user to manage the code and the configurations; once you have provided the required code and its configurations, simply click on Test in order to execute the code. But before doing that, let me walk you through this page. The first element that you see over here is the configuration tab. This element has something called the Designer; here you can add triggers, layers, and destinations to the Lambda function that you have created. A trigger over here refers to an AWS service or a resource that is used to invoke the Lambda function; if you want to connect your function to a trigger, just click on the Add trigger option, and as you can see there is a dropdown list over here with a huge list of triggers that you can choose from. Choose any service from the list to see the available options; as of now I will not be using any of these, so I'll just click on Cancel to get back to the previous page. The next thing that you see over here is two options, that is, the function that you've just created, which in my case is hello world, and below that is Layers. By default the function is selected, and when you scroll down the page you will be able to see the code editor and a sample code that has been generated automatically for our function. This is a very simple function called lambda_handler, which simply returns a message saying hello from Lambda. The code editor in the AWS Lambda console enables you to write, test, and view the execution results of your Lambda function code. Okay, so now let me just scroll back up and select the Layers option. Layers are basically resources that contain libraries, a custom runtime, or other such dependencies; you can create layers to separate your function code from its dependencies. By default, no layer has been added; to add a layer, all you have to do is choose Layers and then click on the option Add a layer. To include libraries in a layer, place them in a directory structure that corresponds to your programming language. Since we have a very simple function, I will not need any extra layer for it, but just for the sake of showing you guys, let me click on the Add a layer option. As you can see, there's a dropdown list present over here as well, and there are two more options present: one is custom layer and the other one is specify an ARN. In the custom layer option, you will see any of the previously used or created layers; since I have not used any, my list is blank. The next option that I see over here is to use an ARN. ARN basically stands for Amazon Resource Name; entering an ARN enables you to use a layer that is shared by another account, or a layer that does not match your function's runtime. The format in which you should specify an ARN is shown in the text bar itself. Okay, so now let's get back to the previous page, so I'll just click on Cancel.
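For reference, the auto-generated sample code mentioned above looks roughly like this for a Python runtime (the exact text can vary slightly between runtime versions); the statusCode/body shape it returns is also exactly what API Gateway expects back later when we use Lambda proxy integration.

import json

def lambda_handler(event, context):
    # Default handler generated by the Lambda console for Python runtimes.
    return {
        "statusCode": 200,
        "body": json.dumps("Hello from Lambda!")
    }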
Next up is the Destinations option. Destinations are AWS resources that receive a record of an invocation after success or failure; you can configure Lambda to send invocation records when your function is invoked asynchronously, or if your function processes records from a stream. To add a destination, choose the Add destination option; the contents of the invocation record and the supported destination services vary by the source. Now, the next element that is present over here is Permissions. Like I've already mentioned before, the default execution role that we used for this hello world Lambda function has permission to store logs in Amazon CloudWatch Logs; this is the only permission it has got for now, and the resource permissions can vary depending on the role that you select for the function. Finally, the Monitoring element of this page: when you click on this tab, you see some graphs which are not showing any data for now; this is because I have not invoked my function as of yet. Once the function is invoked, you will be able to monitor, trace, and debug your Lambda function. Okay, so coming back to the hello world function, let me just scroll down to the code editor. Now, to invoke this function I will have to test it, but before clicking on Test let me just show you the default test event that is present. You can open the test configuration either by clicking on the "Select a test event" dialog box or by opening the dropdown list that is present next to the Test option in the code editor. So let's click on Configure test events. As you can see, there are some default test event templates present over here, and by default the Create new test event option has been selected; the other option, Edit saved test events, cannot be selected as I have not created any test event before. So what I'm going to do is just give some name for this event, say event1, and then I will click on Create. Okay, so now our function is ready; to invoke the Lambda function, click on Test. As you can see, the message says the execution result has succeeded. When you open the details, you will be able to see a summary and the log output: the summary section shows key information such as the time taken for the execution, the billed duration, which is 100 milliseconds, the request ID, etc., as reported in the log output, and the log output section shows the logs generated by the Lambda function execution. When you check the Monitoring page this time, you can see that the metrics over here have updated; in order to gather some more metrics, all you have to do is just execute or run the Lambda function a few times. Okay, so now our Lambda function is ready.
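Besides the console's Test button, you can also invoke the function from code; a minimal boto3 sketch might look like this (the function name is the one created above, written without spaces, and the region is an assumption).

import json
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-2")  # the region where the function was created

response = lambda_client.invoke(
    FunctionName="helloWorld",                  # Lambda function names cannot contain spaces
    Payload=json.dumps({"key1": "value1"}),     # same shape as a console test event
)
print(response["StatusCode"])                   # 200 for a successful synchronous invoke
print(json.loads(response["Payload"].read()))   # the handler's return value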
The next thing that we have to do over here is create the API Gateway. So in step 2, you have to get back to the AWS console and, from the Networking and Content Delivery service list, select API Gateway. In case you have already created APIs earlier, you'll be able to see them, just like how you were able to see the Lambda functions list. If you want to create a new API, click on the Create API option; if you are using the API Gateway for the first time, you see a page that introduces you to the features of the API Gateway. Since we are concentrating on building a REST API, I will click on Build under REST API. When you do this, you'll be able to see the Create new API page. Under Settings, for API name you can enter any name, and I'll be entering "new API"; next, if you wish, you can enter a description in the description field, however this is optional, so you can leave it empty as well, and in my case I'll just leave it empty. For the endpoint type, I'll keep it Regional. Now, once that is done, click on Create API. Okay, so as you can see, under Resources there is nothing but just a slash; this basically depicts the root-level resource, which corresponds to the base path URL for your API. Next, click on the Actions menu, and from the Actions menu click on Create Method. When you do this, you will see that a new dropdown menu pops up below the root resource; this list basically consists of basic API methods such as GET, PUT, POST, DELETE, etc. For the purpose of this tutorial, I will select the GET method and click on the tick mark beside it in order to save the choice that I just made. So as you can see, the GET method setup page has opened, and by default the Lambda function option has been selected. If you remember, prior to creating this API we created the Lambda function that we wish to use along with this API, so next select the Use Lambda proxy integration option. For Lambda region, choose the region that you created the Lambda function in; in my case it's Ohio, and since Ohio is us-east-2, I'll just keep the region as us-east-2. Please select the region according to where you have created your Lambda function. In the Lambda function field, enter the name of the Lambda function that you created earlier; as you can see, the moment I type "h" the function hello world is displayed in the dropdown menu. In case you forget the name of the function that you have created, just type any character and delete it; then you will be able to see a dropdown menu that shows the Lambda functions that you have created earlier. Leave the Use default timeout box checked, and finally click on the Save button to save your choices. Okay, so as you can see, a pop-up named "Add permission to Lambda function" has been displayed, which says you are about to give API Gateway permission to invoke your Lambda function; this is exactly what we want, so just click on OK. Next up, you will see another pane, that is the GET method execution pane, and as you can see this execution pane consists of a number of boxes. The Client box over here represents the client, which is the browser or the application that calls your API's GET method; if you click on the Test link and then click on Test, this simulates a GET request from a client. The Method Request box represents the client's GET request as it's received by API Gateway; if you choose Method Request, you'll see settings for things like authorization and for modifying the method request before it's passed through to the backend as an integration request. For this demonstration, I'll leave everything set to the default values. The next box that you see is the Integration Request box; this box represents the GET request as it's passed to the backend, and here the settings depend on the integration type that's selected. The URL path parameters, URL query string parameters, HTTP headers, and mapping template settings are for modifying the method request as needed by the specific backend; the integration type for this demonstration will be set to Lambda function, and I'll leave the other settings at their default values. The next box represents the backend Lambda function that you have just created; if you click on this, it opens the Lambda function in the Lambda console. Integration Response: the Integration Response box represents the response from the backend before it's passed to the client as a method response. For a Lambda proxy integration, this entire box is grayed out, because a proxy integration returns the Lambda function's output as-is.
For a Lambda non-proxy integration, you can configure the integration response to transform the Lambda function's output into a form that's appropriate for the client application; we'll be doing that later on in the other demonstrations, but for now I'll just leave everything set to the default values, as I've mentioned earlier. Method Response: the Method Response box represents the method response that's returned to the client as an HTTP status code, an optional response header, and an optional response body. By default, the response body that's returned by your Lambda function is passed through as-is as a JSON document, so the response body default setting is application/json with an empty model, indicating that the body is passed through as-is. For this as well, I'll just leave everything set to the default values. Okay, so next is step 3: in step 3 you will have to deploy your REST API in the API Gateway console. Until here you've created your API, but you cannot actually use it, because it needs to be deployed. To deploy it, all you have to do is click on the Actions dropdown menu and choose Deploy API; from the Deployment stage dropdown menu, click on New Stage, and for the stage name just enter "prod"; once that is done, click on Deploy. As you can see in the prod stage editor, the invoke URL is present right at the top; if you choose to open the invoke URL, it will open a new browser tab with that URL, and when you refresh it you will see the default message, that is, hello from Lambda. Now, if you wish to test the API that you've just created, you can make use of various tools such as curl or Postman; in this tutorial I'll be using curl, so I'll show you a quick demonstration of how to install curl on the Windows operating system. What I'm going to do over here is just open the browser and search for "curl for Windows"; select the option "curl for Windows", and then you will see two options, 64-bit and 32-bit. Select the appropriate option that suits your system and download it; once it has been downloaded, extract the files. The next thing that you have to do is open the bin directory of this folder and paste its path into the Path variable, which is present in your system environment variables; so just open your system environment variables, select Path, click on New, paste the path that you've just copied, and then click on OK. Now, to test the API, open the command prompt and type the following command along with the link of the API that you've just created: curl -X GET, followed by the invoke URL. As you can see, I have the desired output displayed on the screen, which says hello from Lambda. The next part of the demonstration deals with creating another Lambda function; what I'm going to do over here is add a parameter to the Lambda function. So click on the Services dropdown list and select Lambda; just as before, click on Create function, and I'll just name my new Lambda function "hello world 2", and the runtime will be Python 3.7. Under Permissions, expand "Choose or create an execution role", and from the Execution role dropdown list choose "Use an existing role"; from the existing roles, choose the role from your previous Lambda function (you can also create a new role with basic Lambda permissions). Then choose Create function. In the function code pane you'll see the default hello from Lambda function; what I'm going to do is just replace the entire function with the code that I've already kept ready over here, so I'll click on Deploy.
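The video doesn't show the replacement code for hello world 2, so here is a plausible sketch of what a parameterized handler could look like; the field name greeter and the greeting text are assumptions for illustration, and the mapping template added in the next step is what would copy a query string parameter into that field of the event object.

def lambda_handler(event, context):
    # With a non-proxy integration, API Gateway passes the mapped request body in as the event,
    # for example {"greeter": "edureka"} (field name assumed).
    # A matching mapping template (also assumed) could be: {"greeter": "$input.params('greeter')"}
    greeter = event.get("greeter", "Lambda")
    return "Hello from " + greeter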
The next thing that I'm going to do over here is add a resource, method, and parameter to the REST API in the API Gateway console. So I'll just click on Services and then I'll click on API Gateway; from the APIs list, choose "new API", and under Resources click on the slash. Next, click on the Actions dropdown menu and click on Create Resource; for the resource name I'll just enter "my resource", and notice that the resource path field is populated automatically with the resource name. Choose Create Resource. Next, from the Actions dropdown menu choose Create Method, and under the resource name, that is "my resource", you will see a dropdown menu; choose GET and then click on the check mark icon to save the choice. Now, in the "my resource" GET setup pane, for integration type I'll just keep it as Lambda function, and for Lambda region, just like before, we will choose the same region where we've created the Lambda function, which in my case is us-east-2. In the Lambda function box, type any character and then choose the Lambda function that you've just created, which is hello world 2 in my case, from the dropdown menu; in case the dropdown menu doesn't appear, just like I've told you earlier, type any character and delete it. Once you've selected the Lambda function, click on Save. Once this is done, you will see the "Add permission to Lambda function" pop-up; this basically says that you are going to give API Gateway permission to invoke your Lambda function, and since this is what we want, I'll just click on OK in order to grant the API Gateway the permission that is required. Once that is done, you will see the "my resource" GET method execution pane; unlike the previous demonstration, this time the integration response box is not grayed out, because we have not used Lambda proxy integration here. Now click on Integration Request and then expand the Mapping Templates option. Next, set the request body passthrough option to "When there are no templates defined", and then choose Add mapping template; for the content type, type application/json and choose the check mark icon. Now, in the templates window, I'll just enter the code that I've already kept ready; this is the JSON document that will be passed to your Lambda function in the body of the input event object. The right-hand side of it tells API Gateway what to look for in your input query string, and the left-hand side stores it into the parameter that your Lambda function reads. So once that is done, just click on Save, and from the Actions dropdown menu I'll choose Deploy API; from the Deployment stage dropdown menu click on prod, and then choose Deploy. As you can see, again I have the invoke URL present over here. The next thing that I'm going to do is test the API that I've just created, so I'll just open the command prompt and enter the curl command, that is curl -X GET followed by the invoke URL, and as you can see I have the desired output, that is hello from Lambda. In the next part of our demonstration, I'll be showing you how to create an API Gateway directly through the Lambda service. So get back to the Lambda console and select the Lambda function that you've created; as you can see over here, it shows that a trigger, that is an API Gateway, has already been added. Now click on Add trigger, and from the dropdown list select API Gateway; so I'll just click on API Gateway and then select the Create new API option from the API dropdown list,
and for the API type I will just select HTTP API. The security can be kept open, since this is just for demonstration purposes; please make a note that for real-world applications you should not keep the security option open. Then click on Add. Now, when I come back to the Lambda function configuration page, you will see that there are two API Gateways for this function; when you scroll down, you will see both of the APIs that have been created. Open the details of the hello world API and click on the API endpoint link; as you can see, the function returns hello from Lambda. Now I'll just click on Test a few times and then open the Monitoring page to see the metrics; as you can see over here, I have the corresponding metrics for my Lambda function through the API that I've just created. So till now we have seen how to create a REST API using a Lambda function through various methods. The next thing that I'm going to show you over here is how to create a REST API with mock integration. A mock integration basically enables your API to return a response for a request directly, without sending the request to a backend; this enables you to develop your API independently from your backend. So the first thing that you have to do over here is create the API; for the purpose of this demonstration, I will create an API with the GET method and a query parameter in the API Gateway console. What I'm going to do over here is get back to the API Gateway page, and then I'll click on Create API, just like we've done earlier; on the next page, I'll click on Build under REST API. Under Create new API, choose New API, and for the settings I'll keep the API name as "my API"; just like before, the description is optional, so I'll just leave it blank, and for the endpoint type set it to Regional. The next thing to do is click on Create API. As you can see, the methods page is empty, just like it was earlier, so from the Actions dropdown menu click on Create Method; under the resource name, that is the slash, you will see a dropdown menu, and from this dropdown menu I'll just click on GET and then click on the check mark icon to save the choice. Now, as you can see, the GET setup pane has appeared over here; for the integration type I'll choose Mock, and then I'll just click on Save. Now we move on to the GET method execution pane; the boxes that are present over here have already been explained in the previous demonstration, so in case you want to know what they are, just go back and take a look at them. Next, what I'm going to do is create the mock integration. In this step, instead of integrating with a backend, I'm going to add a query string parameter to the GET method and then specify the response codes and messages that our API will have to return. So first, create the query string parameter: for this I'll choose Method Request, and in the GET method request pane I will expand URL Query String Parameters and then choose Add query string; over here I'll just type "myString" for the name and click on the check mark icon. Okay, so next what I'm going to do is create a mapping template that maps the query string parameter values to the HTTP status code values that are to be returned to the client. So I'll just get back to the Method Execution pane and then I'll select Integration Request; from the integration request I'll expand the Mapping Templates pane, and for request body passthrough I'll choose the recommended option, that is, "When there are no templates defined".
The content type again will be application/json. I'm going to replace the contents that are present over here with the code that I've already kept ready, so I'll just paste it and then click on Save. Step 3 is to define the successful response: what I'm going to do over here is create a mapping template that maps the HTTP 200 status code value to a success message that is to be returned to the client. So I'll just click on Method Execution, and from here I'll move on to the Integration Response; now expand the 200 response and then expand the Mapping Templates section. Under content type I'll just choose application/json, and for the template contents I'll just have a basic message that says "hello from edureka"; I'll paste that here and then click on the Save that is present in the mapping template section. Please make a note over here, guys, that you can see two Save options; you have to select the Save option that is present in the mapping template section. Now, in step 4, I will add an HTTP 500 status code and an error message. In this step, what I'm going to do is add an HTTP 500 status code that the frontend can return to the client, and then map it to an error message. So first, I'll have to add the 500 status code: I'll just get back to Method Execution and click on Method Response, then click on Add Response, for the HTTP status enter 500, and then click on the check mark icon. Next, create a mapping template that maps the returned 500 status code value to an error message from the frontend to the client. For that, I'll just get back to the Method Execution pane and go into Integration Response; then I'll choose Add integration response, and for the HTTP status regular expression I'll enter 5\d{2}. For the method response status, choose 500 from the dropdown menu and then just click on Save. Next up, all you have to do is expand the 500 response and then expand the Mapping Templates section if it isn't already expanded; under content type I'll choose Add mapping template, in the box under that I'll just enter application/json, and then I'll click on the check mark icon. Now the message over here will be replaced, saying "this is an error message"; I'll just paste that and save the changes. Now for the final step, that is step 5, I'll have to test the mock integration. So just click on Method Execution, and I'll just click on the Test option that you see over here; under Query Strings I'll just enter myString=myValue and then I'll choose Test. As you can see over here, we have our desired output, which says hello from edureka; this basically means that we have successfully created the mock API. Now, in case you want to check whether the error condition is also working, all you have to do is go back to the query string text box and delete the value: what I'm going to do is delete whatever value is present for myString and leave it as an empty string, and then I'll click on Test. So there it is, you can see the desired message that says "this is an error message". So this brings us to the end of this AWS API Gateway session. [Music] So what is Amazon SES? Amazon SES, or Simple Email Service, as the name describes, is an email service for sending both transactional and mass emails. It is cloud-based and offers a wide list of possible integrations. You have the SMTP interface: SMTP is the Simple Mail Transfer Protocol, which is a communication protocol for email transmission. As an internet standard, SMTP is used to send, receive, and relay outgoing email between senders and receivers; when an email is sent, it's transferred over the internet from one server to another using SMTP. In simpler terms, an SMTP email is just an email sent using an SMTP server. You also have the AWS SDKs, with seamless integration with your applications, and even an email client or other types of software.
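Alongside the SMTP interface, the AWS SDKs let you send mail directly from code; a minimal boto3 sketch might look like this — the addresses and region are placeholders, and in a real account the sender (and, while your account is in the SES sandbox, the recipient too) must be verified identities first.

import boto3

ses = boto3.client("ses", region_name="us-east-1")  # placeholder region

ses.send_email(
    Source="sender@example.com",                      # must be a verified identity in SES
    Destination={"ToAddresses": ["recipient@example.com"]},
    Message={
        "Subject": {"Data": "Hello from Amazon SES"},
        "Body": {"Text": {"Data": "This is a test email sent through the SES API."}},
    },
)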
[Music] So what is Amazon SES? Amazon SES, or Simple Email Service, as the name describes, is an email service for sending both transactional and mass emails. It is cloud-based and offers a wide list of possible integrations. You have your SMTP interface, which is your Simple Mail Transfer Protocol, a communication protocol for email transmission and an internet standard. SMTP is used to send, receive and relay outgoing emails between senders and receivers; when an email is sent, it's transferred over the internet from one server to another using SMTP. In simpler terms, an SMTP email is just an email sent using an SMTP server. You also have the AWS SDKs, with seamless integration with your applications, an email client, or other types of software. But then, you have your regular email service, so why do you want to use AWS? Well, AWS can boast reliable infrastructure at a very reasonable cost: Amazon's prices are very competitive compared to other solutions available in the market for this particular service, and if you check its consumer reviews you will notice it is the most often mentioned advantage. As with other things on the AWS console, we are going to take a closer look at the Amazon prices, as well as its advantages and disadvantages, a little bit later in this Amazon SES tutorial; here we are just going to discuss the main features and the appeal of Amazon SES. First of all, its high deliverability, or its deliverability rate, is one of the main parameters to consider when choosing an email sending service. Amazon takes reputation and whitelisting extremely seriously by supporting all three authentication mechanisms. There is DKIM, or DomainKeys Identified Mail, an email security standard designed to make sure your messages aren't altered in transit between the sending and receiving servers; it uses public-key cryptography to sign the email with a private key as it leaves the sending server. There's also SPF, better known as the Sender Policy Framework, which allows email senders to define which IP addresses are allowed to send mail from a particular domain. You also have DMARC, or Domain-based Message Authentication, Reporting and Conformance, an email authentication protocol which builds on the Sender Policy Framework: as a sender you authenticate your email using SPF and DKIM, and you publish a DMARC record in your DNS. In addition, you can track all of your sending activity and manage your reputation as well, so it's obviously highly deliverable. Next, you get content personalization with replacement tags. Personalized emails are messages tailored to a specific user; notwithstanding the name, personalized emails are a part of automated marketing. They are created with special templates that become individual due to their dynamic content: your username, date of birth, location, behavior on the app. Each of these parameters changes from user to user and may affect the type of information included in the message. With dynamic content you can personalize emails with the user's details, or even show different information to different users; it's done with variables, placeholders, or merge tags. The most common example is sending a newsletter to a list of subscribers and inserting their first names into the welcoming sentence. I'm pretty sure many of you have gotten emails that way, which seem sort of generic but also personal at the same time; in this case you use one template and add a first-name tag for personalization. And finally, email receiving: with Amazon SES you can not only send emails but also retrieve them. In this case you have a set of flexible options, as well as usage of the received message as a trigger in AWS Lambda.
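To make the replacement-tag idea from the personalization feature a little more concrete, here is a minimal sketch of how it looks through the SES API with boto3. The template name, addresses, and fields are all hypothetical, and it assumes the sender identity has already been verified:

```python
import boto3

ses = boto3.client("ses", region_name="us-east-1")

# A reusable template with {{firstName}} as the replacement tag.
ses.create_template(
    Template={
        "TemplateName": "WelcomeNewsletter",  # hypothetical template name
        "SubjectPart": "Hello {{firstName}}!",
        "TextPart": "Hi {{firstName}}, thanks for subscribing to our newsletter.",
        "HtmlPart": "<p>Hi <b>{{firstName}}</b>, thanks for subscribing.</p>",
    }
)

# Send the same template to different users; only the tag data changes per recipient.
for recipient, first_name in [("alice@example.com", "Alice"), ("bob@example.com", "Bob")]:
    ses.send_templated_email(
        Source="newsletter@example.com",                      # must be a verified identity
        Destination={"ToAddresses": [recipient]},
        Template="WelcomeNewsletter",
        TemplateData='{"firstName": "%s"}' % first_name,
    )
```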
Now, we won't go into the details of email receiving in this Amazon SES tutorial, so if you are interested in this feature you can read up more on it in the official documentation. The most obvious use case for Amazon SES is supplementing the list of other AWS services you already work with: Amazon SES easily integrates with Amazon CloudWatch, Amazon EC2, AWS Elastic Beanstalk, AWS IAM, AWS Lambda, Amazon Route 53, Amazon S3, Amazon SNS, and Amazon WorkMail. If your app is hosted on Amazon EC2 and it sends emails with Simple Email Service, the first 62,000 of your monthly emails will be free. This way, if you host your app on Amazon EC2 and need a scalable way of sending emails, Amazon SES is an extremely beneficial choice, so versatile integration is your fourth point. Another point which is extremely important for a service like this is pricing, so let's take a look at Amazon SES's cost. As I already mentioned, if your app is part of the Amazon EC2 or Elastic Beanstalk infrastructure, you are eligible for the free tier and the first 62,000 emails are free every month. For all other cases, Amazon's policy of paying for what you actually use applies: for emails over the 62,000 limit you will pay 10 cents, or $0.10, for every thousand emails that you send, plus 12 cents, or $0.12, for each GB of attachments. If your app is not hosted on an Amazon server, the cost will be the same, 10 cents for every thousand emails you send and 12 cents for each GB of attachments; the only difference is that you will start paying from the first message sent. This means that 10,000 emails without attachments will cost you only one dollar, and if each of these messages has a 1 MB attachment, you will add about $1.18 to your payment. The Amazon SES email size is limited to 10 MB; this means you can pay up to around three dollars for 10,000 emails, with no monthly charges. If you need a dedicated IP address for more security, it will cost an additional $24.95, so not even 25 dollars, per month. There is also an AWS calculator where you can go and look up specific, detailed pricing for your requirements; this is very helpful for cost estimation. Moving on, let's discuss a few pros and cons of Amazon SES. Amazon services definitely have both fans and haters, but in general the concept, and especially the cost, of Amazon SES are more than inspiring; however, even the moon has its dark side, so let's sum up the pros and define the cons. In addition to my own experience with Amazon SES, I've also reviewed what users have to say on platforms such as TrustRadius, Quora, and Stack Overflow, so that is going to be my point of reference for this particular section. Let's first look at the advantages. The pros of Amazon SES obviously include high deliverability, as we discussed before, and reliability, along with high sending rates. Apart from this, you don't need any additional maintenance once you have it all set up, so it's pretty low maintenance. The third point is something I shouldn't even have to mention, which is the best quality-to-price ratio, and finally, you also get a comprehensive set of tools for both email receiving and further management of your infrastructure; since we spoke about this extensively in the features section, this was a brief recap of that segment. But the cons, or the disadvantages, are what I want to speak about in slightly more depth. So the first con: it has a quite complicated initial configuration — well, I would call it rather detailed — and lacking exhaustive documentation,
which is surprising, because really great documentation is usually one of the selling points of most AWS services: you can follow it word for word and you will never face any issue, and you obviously need a good set of documentation when you start working with a new service. But we're going to solve that issue for you with this Amazon SES tutorial, so no problems there. The next point is that there are a few initial limitations: before you get approved and can verify your sending domains, you need to correctly configure Amazon SES and expand your limits by submitting requests. The third reason is that Amazon SES is a simple sending service and not a marketing platform, and, to be fair, it has been advertised exactly that way. However, many users find the lack of template building extremely confusing: if you're not a web developer and you need to send responsive emails with Amazon SES, what should you do? It's a pain point that needs a solution. And in addition to this, Amazon SES does not provide you with email list storage, so that is on you. Now, because we discussed the cons of Amazon SES, it is only fair that we see how SES compares to other services. It is often seen as an alternative to SendGrid or Mailchimp, which are email marketing platforms rather than email sending services. Amazon SES offers both SMTP and API integration; on the other hand, it is simply for email sending and doesn't feature any additional options like drag-and-drop templates, A/B testing, or detailed analytics. But there are several tools similar to SES with varying functionalities. First of all, you have Mailgun, an email service for developers which you can also integrate with your app via API or SMTP; in addition to email sending it offers an email list management and validation system, however, compared to Amazon SES, it's only fair to say that Mailgun is a far more expensive tool. Next you have ClickSend, which is a dedicated email sending service; you can integrate it via API or SMTP, or use its dashboard for marketing emails as well. And finally you have Pepipost, which is an email sending service as well; it can be integrated via API or SMTP relay and offers pricing that is extremely competitive with Amazon SES. It is also worth mentioning that there are several platforms built on top of Amazon SES; they use it as a delivery service and add extra functionality like email templates. EmailOctopus and Sendy are a few such tools. Now that you have decided to go with Amazon SES, or at least to experiment with it, which is why I'm assuming you are following this tutorial, this is the section for you. Of course, I'm always going to recommend following the official documentation, which you can look at after this Amazon SES tutorial. Indeed, there are many details that you should be paying attention to when you configure your Amazon SES for the first time, but the official guide has dozens of interlinked sections, and it's extremely easy to lose the linear track if you're trying to dive in deeper. So I've gathered the most important points here; I'll briefly explain them and then basically show you on my Management Console how to go about this setup. Now, if you're totally new to AWS, what you need to do first is create an Amazon Web Services account — your AWS free tier account. You'll be required to enter your credit card information at the registration stage, but you won't be charged until you start using the paid services, so you can go ahead and select the free plan out of the three plans to start, and you will get access to all of the Amazon services.
Next, you're going to go into your AWS Management Console — this is what mine looks like — and select Simple Email Service from the list of services. So let's go to the search bar and type SES; you get Simple Email Service, and I'm going to click on it. Now, what we're going to do is verify email address identities. Yes, it's a little bit annoying, but it's a matter of security and high deliverability, so what you need to do is verify the email identities that you use. On the left-hand side I'm going to go and create my SMTP credentials. To send emails through the Amazon SES SMTP interface, which is the one we're going to use right now, we begin by creating the SMTP credentials, which are a username and a password. In the left navigation bar I'm going to click on SMTP Settings, and here you can see your server name, your port, your TLS configuration, and your authentication. I'm going to click on the blue button which says Create My SMTP Credentials. Now you will get a form which says: this form lets you create an IAM user for SMTP authentication with Amazon SES; enter the name of a new IAM user or accept the default, and click Create to set up your SMTP credentials. What you're going to do right now is save your credentials, because this password will not be shown to you again. So my SMTP security credentials are here — you have your SMTP username and password — and you can either copy them or click on Download Credentials. I'm going to click on Download Credentials, since I already have a notepad open to keep them in, and then I'm going to close it. Next, we're going to verify an email address; we're going to add and verify our own email address in a few steps. I'm going to go to my Amazon SES console — this is our SES home — go to Email Addresses, and click on Verify a New Email Address. We'll be presented with a dialog where we enter the email address that we wish to send messages from, so I'm going to type my address and click on Verify This Email Address. Now a verification email has been sent to my particular ID; you should just about now receive a verification message from Amazon SES asking you to confirm that you are the owner of this email address. I'm quickly going to go and click on the verification link in my mail, and voila, you have successfully verified an email address and can now send emails from this address. I'm going to go back to this page, refresh it, and check the status of the email address, and as you can see, the verification status says verified. All of this is a little bit annoying, but it's a matter of security and high deliverability, as I have mentioned before; you need to verify all the email identities that you use here, using the same process. But remember one thing: your email addresses are case sensitive. Another important thing in using AWS is its regions; remember that they are the physical locations of Amazon data centers available to any customer. These regions don't limit the usage of AWS services for you, as their purpose is basically distributing workloads, so when getting started you don't need to worry about this, but if you want to scale your services you should remember the corresponding limits. In addition, email verification is connected to the region.
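If you'd rather script this step than click through the console, the same verification can be kicked off through the SDK. A minimal sketch with boto3, assuming a placeholder address and region:

```python
import boto3

# Region matters: identities are verified per region.
ses = boto3.client("ses", region_name="us-east-1")

# Triggers the same verification email the console sends.
ses.verify_email_identity(EmailAddress="me@example.com")  # placeholder address

# After you click the link in the mail, the status flips to "Success".
attrs = ses.get_identity_verification_attributes(Identities=["me@example.com"])
print(attrs["VerificationAttributes"]["me@example.com"]["VerificationStatus"])
```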
The easiest way to verify your email address is to use the Amazon SES console: it doesn't require any coding skills, and even if you're not tech savvy, you will be able to walk through this process within seconds. All you have to do is enter your email address and then press the link in the confirmation message that you will receive in your inbox, so I'm just going to quickly add a couple more email addresses. Now, domain verification is a standard procedure as well: you need to add a TXT record to your domain's DNS server. Once your email addresses are successfully verified, you can start sending emails from them, but know that you need to start in sandbox mode. This basically means that you can send email only to verified email addresses and domains, or to the Amazon SES mailbox simulator, and you can only send up to 200 messages in 24 hours, and only one message per second. Sounds complicated, right? A bit, but this is the price you pay for security and sender reputation. However, in my opinion, for experimenting with Amazon SES and understanding how it works — just the basic bare minimum — the sandbox mode is enough. But then again, to move forward and start sending messages to your customers without a need to verify the recipients' inboxes and domains, you basically need to submit a request via the AWS Management Console; in your request you should describe how you or your enterprise is going to use Amazon SES, build your mailing lists, handle bounces, etc. Now, once you have successfully completed verification, reviewed the main AWS definitions, and figured out how all of this actually works, you can move on to email sending. As we have already discussed, you can use SMTP or the API; in this particular step you should define exactly how you would like to send your emails. With SMTP, you can send an email from your application if you are using a framework that supports SMTP authentication; you can also send messages from software packages you already use, such as your blogging platform or your workflow system; you can also send emails right from your email client; and finally, you can integrate it with the server where you host your app, such as Postfix or Exim, to name a couple of examples. Obtaining the Amazon SES SMTP credentials is fairly easy — we saw it a while ago, and these are mine in my notepad — as is integrating your email client or software package, so these options might be a good fit for non-developers as well. API integration, however, requires definite technical skills, and can be used to make raw query requests and responses, use an AWS SDK, or use the CLI, the command line interface. Now it's time for you to go from theory to practice; let's review one of the widely used types of integration: sending emails using Amazon SES. I assume you have already obtained your SMTP credentials and switched your Amazon SES instance to production mode; if you haven't, you can just go, in your left navigation bar under Email Sending, to Sending Statistics, edit your account details, and enable production access — you can also do this straight out of your sandbox. The good news is that the most complicated and time-consuming part is now behind us; all we have to do is integrate our WordPress with Amazon SES for sending notifications or even newsletters, and for this you have two options.
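Before the console walkthrough continues, here is roughly what the API route described above looks like in practice. This is only a sketch using boto3's send_email call, assuming both placeholder addresses are already verified, which the sandbox requires:

```python
import boto3

ses = boto3.client("ses", region_name="us-east-1")

response = ses.send_email(
    Source="sender@example.com",                               # verified sender (placeholder)
    Destination={"ToAddresses": ["recipient@example.com"]},    # must also be verified in sandbox mode
    Message={
        "Subject": {"Data": "This is a test email"},
        "Body": {"Text": {"Data": "Hello from Amazon SES."}},
    },
)
print("Message ID:", response["MessageId"])
```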
So now what we are going to do is send a test email using Amazon SES. I'm going to go to Email Addresses, and I am going to send a test email from one address to another; you can also send it from an address to itself, because that address is verified by us. So I'm going to click on this and click on Send a Test Email, from this address of mine to this one. I'm going to select Formatted and type something simple, maybe "this is a test email", and the email has been sent. All you have to do is go ahead and refresh, and now, as you can see, we have our test email saying "this is a test email". Okay, now we are going to go back here and send another test email from the same email address to the same recipient, except now we are going to use the raw email format. One thing you must have immediately noticed is that you have no options for your subject line here: in your formatted email you have a lot of these options, but when you click on Raw you have no such options. So I'm going to put in the recipient address and the sender's address, and here I'm going to paste some sample formatted text for this particular email — you can go ahead and put in your own body as well. Now, the raw message is a JSON data structure saved in a file named message.json in the current directory, and as you can see, the data is one long string that contains the entire raw email content in MIME format, including an attachment called attachment.txt, shown right here. So I'm going to go ahead and send the test email, and here is the email I sent to myself: it has the sender address, the address of the recipient, and everything that was there in the email. All right, that was sending a simple email using the Amazon Simple Email Service. [Music] AWS IoT integrates with AI services, so you can make devices smarter even without internet connectivity. Built on AWS, which is used by industry-leading customers around the world, AWS IoT can easily scale as your device fleet grows and your business requirements evolve. AWS IoT also offers the most comprehensive security features, so you can create preventative security policies and respond immediately to potential security issues; basically, AWS IoT is everywhere. Now, talking about what AWS IoT is: Amazon Web Services Internet of Things is an AWS platform that collects and analyzes data from internet-connected devices and sensors and connects that data to AWS cloud applications. AWS IoT can collect data from billions of devices and also connect them to endpoints for other AWS tools and services, allowing a developer to tie that data into an application. An AWS user accesses AWS IoT with the AWS Management Console, software development kits, or the AWS command line interface, while an application accesses the service through the AWS SDKs. The AWS IoT APIs are divided into the control plane, which includes service configuration, device registration, and logging, and the data plane, which includes data ingestion. The IoT service includes a rules engine feature that enables an AWS customer to continuously ingest, filter, process, and route data that is streamed from connected devices. A developer can configure rules in a syntax that's similar to SQL to transform and organize the data, and the feature also allows the user to configure how data interacts with other big data and automation services such as AWS Lambda, Amazon Kinesis, Amazon Machine Learning, Amazon DynamoDB, and Amazon Elasticsearch Service. Each rule consists of an SQL statement and an action list that defines and executes the rule, using an editable JSON-based schema.
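As a concrete illustration of that SQL-plus-actions shape, here is a minimal sketch of creating a rule with boto3. The rule name, topic, threshold, and Lambda ARN are all hypothetical:

```python
import boto3

iot = boto3.client("iot")

# A rule: filter incoming telemetry with SQL-like syntax and fan it out to an action.
iot.create_topic_rule(
    ruleName="HighTemperatureAlert",  # hypothetical rule name
    topicRulePayload={
        # Select only readings above a threshold from a wildcarded topic.
        "sql": "SELECT deviceId, temperature FROM 'sensors/+/telemetry' WHERE temperature > 60",
        "awsIotSqlVersion": "2016-03-23",
        "ruleDisabled": False,
        "actions": [
            {
                "lambda": {
                    # Hypothetical function that handles the alert.
                    "functionArn": "arn:aws:lambda:us-east-1:123456789012:function:alertHandler"
                }
            }
        ],
    },
)
```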
Then we also have the device shadow, which is an optional feature that enables an application to query data from devices and send commands through REST APIs. Device shadows provide a uniform interface for all devices, regardless of limitations in connectivity, bandwidth, computing ability, or power. So this was about what AWS IoT is, but why do we actually need it? Talking about the reasons why AWS IoT is important, the first reason is that it's broad and deep: AWS has broad and deep IoT services, from the edge to the cloud. On the device side, Amazon FreeRTOS and AWS IoT Greengrass provide local data collection and analysis, while in the cloud, AWS IoT is the only offering that brings together data management and rich analytics in easy-to-use services designed specifically for noisy IoT data. Not just that, it also has multi-layered security: AWS IoT offers services for all layers of security, including preventative security mechanisms such as encryption and access control to device data, and it also offers a service to continuously monitor and audit security configurations. You will receive alerts so you can mitigate potential issues, for example by pushing a security fix to a device. Apart from that, it also has superior AI integration: AWS is bringing AI and IoT together to make devices more intelligent. You can create models in the cloud and then deploy them to devices, where they run up to two times faster compared to other offerings; AWS IoT sends data back to the cloud for continuous improvement of the models, and it also supports more machine learning frameworks compared to other offerings. And finally, it's proven at scale: AWS IoT is built on a scalable, secure, and proven cloud infrastructure, and scales to billions of different devices and trillions of messages. AWS IoT integrates with services such as AWS Lambda, Amazon S3, and Amazon SageMaker, so you can build complete solutions, such as an application that uses AWS IoT to manage cameras and Amazon Kinesis for machine learning. So this was about what AWS IoT is and why we actually need it, or what its importance is; now let's move on and see how AWS IoT works. AWS IoT enables internet-connected devices to connect to the AWS cloud and lets applications in the cloud interact with internet-connected devices; common IoT applications either collect and process telemetry from devices or enable users to control a device remotely. For the working of AWS IoT we have certain components: we have the devices and the IoT applications, then the message broker, device shadows, rules engine, and security and identity, which work as the middleman of this entire process, and these in turn connect to Amazon DynamoDB, Kinesis, AWS Lambda, Amazon S3, Amazon SNS, Amazon SQS, etc. Talking about the working: the state of each device connected to AWS IoT is stored in a device shadow. The device shadow service manages device shadows by responding to requests to retrieve or update device state data, and it makes it possible for devices to communicate with applications and for applications to communicate with devices. Communication between a device and AWS IoT is protected through the use of X.509 certificates; AWS IoT can generate a certificate for you, or you can use your own. In either case, the certificate must be registered and activated with AWS IoT and then copied onto your device, and when your device communicates with AWS IoT, it presents the certificate to AWS IoT as a credential. We recommend that all devices that connect to AWS IoT have an entry in the registry: the registry stores information about a device and the certificates that are used by the device to secure communication with AWS IoT.
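To give a feel for what "registering and activating" looks like outside the console, here is a minimal sketch with boto3 that creates a thing, issues a certificate, and ties the two together. The thing name is hypothetical:

```python
import boto3

iot = boto3.client("iot")

# Create a registry entry for the device.
iot.create_thing(thingName="factory-sensor-001")  # hypothetical device name

# Issue and activate an X.509 certificate; the private key must be copied onto the device.
cert = iot.create_keys_and_certificate(setAsActive=True)
print("Certificate ARN:", cert["certificateArn"])
# cert["certificatePem"] and cert["keyPair"]["PrivateKey"] are what go onto the device itself.

# Associate the certificate with the thing so it can authenticate as this device.
iot.attach_thing_principal(
    thingName="factory-sensor-001",
    principal=cert["certificateArn"],
)
```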
Also, you can create rules that define one or more actions to perform based on the data in a message; for example, you can insert, update, or query a DynamoDB table, or even invoke a Lambda function. Rules use these expressions to filter messages; when a rule matches a message, the rules engine triggers the action using the selected properties. Rules also contain an IAM role that grants AWS IoT permission to the AWS resources used to perform the action. So these are the different components involved in the working of AWS IoT, and these are the steps required for it to work. Now let's move on and talk about the various features of AWS IoT. The first feature is the AWS IoT Device SDK. The AWS IoT Device SDK helps you easily and quickly connect your hardware device or your mobile application to AWS IoT Core; it enables your devices to connect, authenticate, and exchange messages with AWS IoT Core using the MQTT, HTTP, or WebSockets protocols. The AWS IoT Device SDK supports C, JavaScript, and other languages, and includes the client libraries, the developer guide, and a porting guide for manufacturers; you can also use an open-source alternative or write your own SDK. The next feature is the device gateway. The device gateway serves as the entry point for IoT devices connecting to AWS; it manages all active device connections and implements semantics for multiple protocols to ensure that devices are able to securely and efficiently communicate with AWS IoT Core. Currently, the device gateway supports the MQTT, WebSockets, and HTTP 1.1 protocols. For devices that connect using MQTT or WebSockets, the device gateway will maintain long-lived, bidirectional connections, enabling these devices to send and receive messages at any time with low latency. The device gateway is fully managed and scales automatically to support over a billion devices without requiring you to manage any infrastructure, and for customers migrating to AWS IoT, the device gateway offers capabilities to transition infrastructures with minimal impact to existing architectures and IoT devices. The next feature is the message broker. The message broker is a high-throughput pub/sub message broker that securely transmits messages to and from all of your IoT devices and applications with low latency. The flexible nature of the message broker's topic structure allows you to send messages to, or receive messages from, as many devices as you would like; it supports messaging patterns ranging from one-to-one command-and-control messaging to one-to-one-million broadcast notification systems, and everything in between. In addition, you can set up fine-grained access controls that let you manage the permissions of individual connections at the topic level, ensuring that your devices and applications will only send and receive the data that you want them to. Next, we have authentication and authorization. AWS IoT Core provides mutual authentication and encryption at all points of connection, so that data is never exchanged between devices and AWS IoT Core without a proven identity. AWS IoT Core supports the AWS method of authentication (SigV4), X.509 certificate-based authentication, and customer-created, token-based authentication. Connections using HTTP can use any of these methods, while connections using MQTT use certificate-based authentication, and connections using WebSockets can use SigV4 or custom authorizers. You can create, deploy, and manage certificates and policies for your devices from the console or by using the API.
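As a small follow-on to the certificate sketch earlier, this is roughly how a policy is created and attached to a certificate with boto3. The policy below is deliberately narrow, and every name and ARN in it is hypothetical:

```python
import boto3
import json

iot = boto3.client("iot")

# A minimal policy that only allows this device to connect and publish its own telemetry.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "iot:Connect",
         "Resource": "arn:aws:iot:us-east-1:123456789012:client/factory-sensor-001"},
        {"Effect": "Allow", "Action": "iot:Publish",
         "Resource": "arn:aws:iot:us-east-1:123456789012:topic/sensors/factory-sensor-001/telemetry"},
    ],
}

iot.create_policy(policyName="SensorTelemetryOnly", policyDocument=json.dumps(policy_document))

# Attach the policy to the device certificate created earlier (placeholder ARN).
iot.attach_policy(
    policyName="SensorTelemetryOnly",
    target="arn:aws:iot:us-east-1:123456789012:cert/abc123",
)
```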
The next feature is the registry. The registry basically establishes an identity for devices and tracks metadata such as the device's attributes and capabilities; it assigns a unique identity to each device that is consistently formatted regardless of the type of device or how it connects. It also supports metadata that describes the capabilities of a device, for example whether a sensor reports temperature and whether the data is in Fahrenheit or Celsius. The next feature is the device shadow. With AWS IoT Core you can create a persistent, virtual version, or device shadow, of each device that includes the device's latest state, so that applications or other devices can read messages and interact with the device. The device shadow persists the last reported state and the desired future state of each device, even when it is offline, so you can retrieve the last reported state of a device or set a desired future state through the API or by using the rules engine. Basically, the device shadow makes it easier to build applications that interact with your devices by providing always-available REST APIs, and it also lets you store the state of your devices for up to a year for free. Finally, we have the rules engine. The rules engine makes it possible to build IoT applications that gather, process, analyze, and act on data generated by connected devices at global scale, without having to manage any infrastructure. The rules engine evaluates inbound messages published into AWS IoT Core and transforms and delivers them to another device or a cloud service based on the business rules you define; a rule can apply to data from one or many devices, and it can take one or many actions in parallel. The rules engine can also route messages to AWS endpoints, including AWS IoT Analytics, AWS IoT Events, AWS Lambda, Amazon Kinesis, Amazon S3, DynamoDB, CloudWatch, Simple Notification Service, Simple Queue Service, etc. External endpoints can be reached using AWS Lambda, Kinesis, Amazon SNS, and the rules engine's native HTTP action. The rules engine also provides dozens of available functions that can be used to transform your data, and it's possible to create infinitely more via AWS Lambda. So this was about the different features of AWS IoT and how they support its working. Now let's move on and talk about some AWS IoT solutions. Talking about the IoT solutions, they cover different sectors in our everyday life. First, there's the industrial sector: AWS IoT customers are building industrial IoT applications for predictive quality and maintenance, and to remotely monitor operations. Then we have the connected home sector: AWS IoT customers are building connected home applications for home automation, home security and monitoring, and home networking. And finally, we have the commercial sector: AWS IoT customers are building commercial applications for traffic monitoring, public safety, and health monitoring. So these were the different solutions provided by AWS IoT; now let's get into the details of these solutions and find out what the different AWS IoT services are. Industrial IoT brings machines, cloud computing, analytics, and people together to improve the performance and productivity of industrial processes. With IIoT, industrial companies can digitize processes, transform business models, and improve performance and productivity while decreasing waste. Now, talking about the industrial IoT use cases,
we have predictive quality. Predictive quality analytics extracts actionable insights from industrial data sources such as manufacturing equipment, environmental conditions, and human observations to optimize the quality of factory output. Using AWS IoT, industrial manufacturers can build predictive quality models which help them build better, higher-quality products, increase customer satisfaction, and reduce product recalls. Then we also have asset condition monitoring. Asset condition monitoring captures the state of your machines and equipment to determine asset performance; with AWS IoT you can capture all IoT data, such as temperature, vibration, and error codes, that indicates whether equipment is performing optimally. With increased visibility you can maximize asset utilization and fully exploit your investment. And finally, we have predictive maintenance. Predictive maintenance analytics captures the state of industrial equipment to identify potential breakdowns before they impact production, resulting in an increase in equipment lifespan, worker safety, and supply chain optimization. With AWS IoT you can continuously monitor and infer equipment status, health, and performance to detect issues in real time. Now, talking about how this works, we have various components here. We have the industrial equipment and the OPC UA data server, and then we have FreeRTOS, with which we program, deploy, secure, connect, and manage small, low-power devices. From these pieces of equipment we go to the AWS IoT SiteWise gateway and AWS IoT Greengrass: the gateway collects data from on-premises equipment and historian databases, and Greengrass runs application code at the edge. The next section is the data management part, where we have AWS IoT SiteWise, which collects, organizes, and analyzes industrial data at scale. For device connectivity and control we have AWS IoT Things Graph, which connects devices and web services, AWS IoT Core, which is used to secure device connectivity and messaging, AWS IoT Device Management, which is used for fleet onboarding, management, and software updates, and finally AWS IoT Device Defender, which provides fleet auditing and protection. After all of these, the final section is analytics and event detection: here we have AWS IoT Events, which detects and responds to events from IoT sensors and applications, and AWS IoT Analytics, which processes and prepares data for analytics and machine learning. So this was the working of industrial AWS IoT; now let's take a case study and understand this better. CAF increases train safety with the help of AWS IoT. Basically, CAF is a leading designer and manufacturer of trains and railway signaling systems. Its rail services unit reduces maintenance costs over the life cycle of a train using predictive maintenance, a process that schedules repairs by interpreting real-time operational data. CAF launched a new initiative called the digital train, which led to the development of the LeadMind platform; LeadMind utilizes Internet of Things technology to deliver predictive maintenance using data captured by sensors on trains in real time. The LeadMind platform securely connects its train-based sensors using AWS IoT Core. Because of this, we have safer trains with reliable data analysis: Amazon Kinesis ingests the IoT data in real time and sends it to Amazon Redshift, which acts as a data warehouse, while also integrating with CAF's business
intelligence tools for data analysis. So this was just one example, of many, of how AWS IoT is implemented in industry. Now, talking about connected home AWS IoT: it helps connected home device manufacturers easily, quickly, and securely build differentiated connected home products at scale, and as the use of connected home devices continues to grow, more and more data is being pushed to the cloud, where the latest IoT and machine learning technologies are enabling new innovations in connected home applications. Talking about the various connected home IoT use cases, we have home automation. Using AWS IoT you can enable any device to connect to the internet and perform a desired action quickly, reliably, and easily; these devices can work alone or together with other devices or hubs for an integrated smart home experience, and devices can also benefit from using voice services like Alexa for a seamless customer experience. Then we also have home security and monitoring. Devices like door locks, security cameras, and water leak detectors built with AWS IoT can use machine learning to automatically detect threats, take action, and send alerts to homeowners; AWS IoT also enables devices to run with low latency and compute data locally, without internet connectivity. And finally, we have home network management. Network operators are looking for new ways to help customers quickly discover, troubleshoot, and fix home network issues, including Wi-Fi and cable TV connectivity; AWS IoT enables set-top boxes to automatically log network diagnostics and send them to the customer service center proactively, or to allow customers to monitor and troubleshoot their network health through a mobile app. So these were the different use cases; now let's take an example and understand this better. Talking about examples of connected homes, we can take the example of LG Electronics. In December 2017, LG Electronics announced the launch of its LG ThinQ brand to categorize all its upcoming smart products and services which feature artificial intelligence technology. The concept of ThinQ is to embed Wi-Fi chips in LG products, which allows these products to communicate with each other while learning about their users' behavioral patterns and environments. Because of the managed services and AWS IoT Core, LG was able to relaunch its IoT platform with a modest number of development resources. By using AWS for the IoT platform, it could save 80 percent in development cost, and it also enabled the developers to focus on writing business logic for LG service scenarios. LG is also using AWS IoT and a serverless architecture for air quality monitoring services, customer service chatbots, and energy solutions; since March 2018, a chatbot service for customer support that was developed using serverless services has been running in Korea and the United States. So this was the example; now, talking about the working, here you can see how AWS IoT works for connected homes. We have Amazon FreeRTOS, which covers the door lock, light bulb, white goods, and small appliances, and then we have AWS Greengrass, which covers the security camera, Wi-Fi router, and home controller; these are then connected to the cloud, where we have AWS IoT Core, AWS IoT Device Management, AWS IoT Device Defender, and AWS IoT Analytics, which come under the Internet of Things. Then we have the AWS products and services, which include machine learning, database, and computation, and finally it's
all connected to the mobile apps, dashboards, customers, or Alexa devices. So this was about connected homes; now let's move on and understand how AWS IoT works for an enterprise or for commercial purposes. Most enterprise customers across industries choose AWS as their cloud platform; they choose AWS to become more agile, innovative, and efficient. With AWS you benefit from the fastest pace of innovation, the broadest and deepest functionality, the most secure computing environment, and the most proven operational expertise. Now, you can achieve your business goals with the help of AWS IoT as it grows new revenue streams: you can quickly respond to new business insights, customer demands, and changing market conditions to grow new revenue streams, and with AWS you can rapidly test and iterate on ideas in an agile environment and then quickly scale the new business ideas that show the most promise. Then there's also increased operational efficiency: when you move to AWS you get benefits beyond IT cost savings — you improve workforce productivity with agile development teams, accelerate development cycles to get to market more quickly, and increase operational resilience to improve uptime. And finally, we also have lower business risk. Protecting your data and maintaining customer and stakeholder trust is critical to your business, and AWS gives you a flexible and secure cloud platform backed by a deep set of security, compliance, and governance services that help you minimize risk without compromising time to market, scale, or business agility. Now let's take an example: McDonald's has been using AWS for its home deliveries. McDonald's is the world's largest restaurant company, with 37,000 locations serving 64 million people per day. Using AWS, McDonald's built Home Delivery, a platform that integrates local restaurants with delivery partners such as Uber Eats. McDonald's built and launched the Home Delivery platform in less than four months using a microservices architecture running on Amazon Elastic Container Service, Amazon Elastic Container Registry, Application Load Balancer, Amazon ElastiCache, Amazon SQS, Amazon RDS, and Amazon S3. The cloud-native microservices architecture allows the platform to scale to 20,000 orders per second with less than 100 milliseconds of latency, and open APIs allow McDonald's to easily integrate with multiple global delivery partners; using AWS also means the system provides McDonald's with a return on its investment, even for its average two-to-five-dollar order value. So this is how AWS IoT is being used and implemented in various sectors: it's used for different enterprise and commercial purposes, it's used for smart homes, and it's used in various industrial sectors as well. This was all about AWS IoT and how it is involved in our everyday lives. [Music] What is Amazon Polly? Amazon Polly is a cloud service that converts text into lifelike speech. You can use Amazon Polly to develop applications that increase engagement and accessibility. Amazon Polly supports multiple languages and includes a variety of lifelike voices, so you can build speech-enabled applications that work in multiple locations and use the ideal voice for your customers. With Amazon Polly you only pay for the text you synthesize, and you can also cache and replay Amazon Polly's generated speech at no additional cost. Additionally, Amazon Polly includes a number of neural text-to-speech (NTTS) voices, delivering
groundbreaking improvements in speech quality through a new machine learning approach, thereby offering customers the most natural and human-like text-to-speech voices possible. NTTS technology also supports a newscaster speaking style that is tailored to news narration use cases. So this was an overview of what exactly Amazon Polly is; now let us move on and talk about a few of the benefits of using Amazon Polly. The first point here is quality: Amazon Polly offers new neural TTS and best-in-class standard TTS technology to synthesize superior, natural speech with high pronunciation accuracy, including abbreviations, acronym expansions, date and time interpretation, and homograph disambiguation. The second advantage of using Amazon Polly is low latency: Amazon Polly ensures fast responses, which makes it a viable option for low-latency use cases such as dialog systems. The next point here is support for a large portfolio of languages and voices: Amazon Polly supports dozens of voices across many languages, offering male and female voice options for most languages. NTTS currently supports three British English voices and eight US English voices, and this number will continue to increase as more neural voices come online. The US English voices Matthew and Joanna can also use the neural newscaster speaking style, similar to what you might hear from a professional news anchor. The next advantage here is cost effectiveness: Amazon Polly's pay-per-use model means there are no setup costs; you can start small and scale up as your application grows. And the last benefit here is that it is a cloud-based solution: on-device TTS solutions require significant computing resources, notably CPU power, RAM, and disk space, which can result in higher development costs and higher power consumption on devices such as tablets, smartphones, and so on. In contrast, TTS conversion done in the AWS cloud dramatically reduces the local resource requirements; this enables support of all the available languages and voices at the best possible quality, and moreover, speech improvements are instantly available to all end users and do not require additional updates to the devices. So these were a few of the benefits of using Amazon Polly; now let us talk about how exactly Amazon Polly works. Amazon Polly converts input text into lifelike speech: you call one of the speech synthesis methods, provide the text that you want to synthesize, choose one of the neural text-to-speech (NTTS) or standard text-to-speech (TTS) voices, and specify an audio output format; Amazon Polly then synthesizes the provided text into a high-quality speech audio stream. So for using Amazon Polly, you first need to provide the input text — the text that you want to synthesize — and Amazon Polly returns an audio stream. You can provide the input as plain text or in Speech Synthesis Markup Language (SSML) format; with SSML you can control various aspects of speech, such as pronunciation, volume, pitch, and speech rate. The next point here is the available voices: Amazon Polly provides a portfolio of languages and a variety of voices, including a bilingual voice for both English and Hindi, and for most languages you can choose from several voices, both male and female. When launching a speech synthesis task you specify the voice ID, and Amazon Polly uses this voice to convert the text to speech. Amazon Polly is not a translation service: the synthesized speech is in the same language as the text. However, if the text is in a different language than the one designated for the voice, numbers represented as digits are synthesized in the language of the voice and not the text. And the last point here is the output format: Amazon Polly can deliver the synthesized speech in multiple formats, and you can select the audio format that suits your needs; for example, you might request the speech in MP3 format for consumption by web and mobile applications, or you might request PCM output for consumption by AWS IoT devices and telephony solutions. So this is how Amazon Polly works.
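Putting those pieces together — input text, a voice ID, and an output format — here is a minimal sketch of a synthesis call with boto3; the voice, engine, and file name are just example choices:

```python
import boto3

polly = boto3.client("polly")

# Synthesize plain text with a chosen voice and output format.
response = polly.synthesize_speech(
    Text="C programming language was developed in the mid-1970s.",
    VoiceId="Joanna",        # any available voice ID
    Engine="neural",         # or "standard"
    OutputFormat="mp3",      # mp3, ogg_vorbis, pcm, or json for speech marks
)

# The result comes back as an audio stream that can be saved or played.
with open("speech.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```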
So moving on, we will discuss a few of the use cases of Amazon Polly. The first use case here is content creation. Audio can be used as a complementary medium to written or visual communication; by voicing your content you can provide your audience with an alternative way to consume information and meet the needs of a larger pool of readers. Amazon Polly can generate speech in dozens of languages, making it easy to add speech to applications with a global audience, such as RSS feeds, websites, or videos. Let me give an example here: let's assume that you have written a blog on WordPress. What you can do is provide the text that you have written in your blog so that it can be converted into an audio file, and once that audio file is generated you can use it on your mobile devices, so that whenever you have free time you can listen to it. This way you can have a lot of content available with you in audio format; this is also the basic idea behind podcasts, and I hope you guys are aware of what podcasts are. The next point here is e-learning. Amazon Polly enables developers to provide their applications with an enhanced visual experience, such as speech-synchronized facial animation. Amazon Polly makes it easy to request an additional stream of metadata with information about when particular sentences, words, and sounds are being pronounced; using this metadata stream alongside the synthesized speech audio stream, customers can animate avatars and highlight text as it is being spoken in their application. The example here is playing speech and highlighting the spoken text, which is a very good use of Amazon Polly in the e-learning field. And the last point here is telephony. With Amazon Polly, your contact centers can engage customers with natural-sounding voices; you can cache and replay Amazon Polly's speech output to prompt callers through interactive voice response (IVR) systems, such as Amazon Connect. Additionally, you can leverage Amazon Polly's API to deliver automated, real-time information such as service status, account and billing inquiries, addresses, and contact information; the example here is text to speech for telephony systems. So these were some of the use cases wherein Amazon Polly can be used. With this, we have come to the last part of the session, wherein I'll show you a short demo of how exactly Amazon Polly works. The demo that I'm going to show you right now is based on text-to-speech conversion, so let me first give you a brief idea about this concept. The concept of text-to-speech software is simple: you take a paragraph, a page, an article, or even a whole book, and you have a computer read it aloud to you. When people think about text to speech, they often associate it with robotic voices and stilted cadences; however, this usually isn't the case anymore, particularly with modern software. To
some people, text to speech may sound like a gimmick, but it's a technology with very practical applications. So let me tell you a few advantages of text-to-speech software. First, it enables people with disabilities to read: the most obvious use of text-to-speech software is to enable people with visual impairments to consume written content. The second point here is that it provides a hands-off reading experience: even if your eyesight is perfect, sometimes it is more comfortable or convenient to listen to something instead of reading it. And the third point is for situations where audio versions of content aren't available: these days, most popular books are also released in audio format, but the same doesn't hold true for most other written content, including articles, poems, and more, so text-to-speech software enables you to listen to any written content you want, as long as the functionality is built in. So here is what I'm going to do: we have many blogs on the edureka website, and in this demo I'll copy one blog into Amazon Polly and we will generate an audio file of that blog. You can then download those audio files and listen to them whenever you're free, which gives you a hands-off experience for consuming the content instead of reading it. So let's just go to our AWS console; for that, a prerequisite obviously is that you need to have an AWS account. Let me just log in to my AWS account — enter your username and then your password — and as you can see, this is the AWS Management Console homepage. What we will do is click on Services. As of now I have it in my history, but I'll show you where you get it if you don't have it there: in the Machine Learning category you'll find Amazon Polly. If you click on that, we are in this section called text to speech, with the information and documentation part: listen, customize and download speech, and integrate when you're ready. So let me just open one blog here from edureka — "C Programming Tutorial: The Basics You Need to Master C" — as you can see, this is the blog that I'm going to convert into an audio file, and it's a pretty lengthy blog. Here, as you can see, is the plain text tab: whatever you have to convert into audio, you write it in plain text format, and there is another tab here called SSML — SSML means Speech Synthesis Markup Language, and inside these tags you can write whatever you want. I'll just give you a quick demo: I'll quickly copy this text and paste it here. To listen to this speech you can click on this tab here, Listen to Speech. There's also a Language and Region section, so you can select whatever language you want, depending upon your requirements, and the voice as well; we'll check out a few of the voices that are available, both male and female. So let's just listen to the speech first: "C programming language was developed in the mid-1970s, but still it is considered as the mother of all programming languages." Okay, so this was one
voice, Joanna, and she just read the first sentence here, from "C programming language" up to "the mother of all programming languages". Now let us just take another voice, Salli, a female voice: "C programming language was developed in the mid-1970s, but still it is considered as the mother of all programming languages." So let us listen to one more voice, Matthew, who's a male: "C programming language was developed in the mid-1970s, but still it is considered as the mother of all programming languages. It supports multiple functionalities and is also powerful enough to directly interact with hardware units in the kernel. This C programming tutorial deals with a brief history of the C programming language." So as you can see, it is reading this text and converting it into an audio file, and you can also download the MP3 here; you have an option to change the file format — MP3, OGG, PCM, speech marks — and also the frequency, that is, the sample rate. So these are the options, and apart from that you can synthesize to S3, so you can transfer this audio file to S3 and use it for any other purpose, depending upon your requirements. Now let me just show you how you do it in SSML. What you need to do is copy this text first and paste it here inside the speak tag, and then copy the remaining part of the paragraph and paste it here as well; as you can see, we have copied this entire paragraph inside this speak tag — it begins with this tag and it ends with this tag. Apart from this, if you want to add any break time, that is, if you want a pause, what you can do is simply type break, time equal to, and in double quotes the duration; so, for example, if you want a pause of two seconds, you just add it like this. Let me just listen to the speech: "C programming language was developed..." — this speech began after the pause of two seconds, so I think it's not needed here; maybe we'll try and add it somewhere else, so let me just add it here instead. Now let us listen to the speech again: "C programming language was developed in the mid-1970s, but still it is considered as the mother of all programming languages. It supports multiple functionalities..." — I hope you guys observed the break time here; it was a pause of two seconds, and similarly, wherever you want this pause, you can add it. So this was one demo where we converted text into speech. Like this, you can write the entire blog inside the speak tag, introduce breaks wherever you want, convert the text into speech, and then download the result. With this you can have audio files of your blogs, and you can listen to these audio files whenever you want, as per your convenience, maybe on your mobile devices or iPads and other things. So this was one application of Amazon Polly.
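The same break-time trick can be used from the SDK by sending SSML instead of plain text. A small sketch, assuming the same two-second pause shown in the console demo:

```python
import boto3

polly = boto3.client("polly")

# SSML input: everything sits inside <speak>, and <break> inserts the pause.
ssml = (
    "<speak>"
    "C programming language was developed in the mid-1970s."
    '<break time="2s"/>'
    "It is still considered the mother of all programming languages."
    "</speak>"
)

response = polly.synthesize_speech(
    TextType="ssml",
    Text=ssml,
    VoiceId="Matthew",
    OutputFormat="mp3",
)

with open("speech_with_pause.mp3", "wb") as f:
    f.write(response["AudioStream"].read())
```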
So let's just move on to the next part here, which is lexicons. As you can see here, you can customize the pronunciation of specific words and phrases by uploading lexicon files in the PLS format; let me just click here on Learn More. The basic idea of lexicons is that many different languages have different pronunciations of specific words; this is the whole purpose behind the concept of lexicons, and for more reference you can refer to the documentation here on managing lexicons. This was one part, and apart from that there are the S3 synthesis tasks, where you can synthesize long inputs directly into S3, so this is one more usage of Amazon Polly. And regarding this blog idea: say, for example, you are hosting your own website and you write your own blogs on your own domain; in WordPress there's a plugin for Amazon Polly, so while writing a blog you can download and install that plugin, and what that plugin does is automatically convert your blog into an audio file and save it, which is a very convenient option, so if you want to, you can try it. This was the overall demo part here. [Music] What is Amazon Rekognition? As you can see on the screen, Amazon Rekognition makes it easy to add image and video analysis to your applications using proven, highly scalable deep learning technology that requires no machine learning expertise to use. With Amazon Rekognition you can identify objects, people, text, scenes, and activities in images and videos, as well as detect any inappropriate content. Amazon Rekognition also provides highly accurate facial analysis and facial search capabilities that you can use to detect, analyze, and compare faces for a wide variety of user verification, people counting, and public safety use cases. With Amazon Rekognition Custom Labels you can identify the objects and scenes in images that are specific to your business needs. Let me give an example here: you can build a model to classify specific machine parts on your assembly line, or to detect unhealthy plants. Amazon Rekognition Custom Labels takes care of the heavy lifting of model deployment for you, so no machine learning experience is required; you simply need to supply images of the objects or scenes you want to identify, and the service handles the rest. So this was the overall idea behind the concept of Amazon Rekognition; now let us move on and discuss the various features. The first feature of Amazon Rekognition is labels: with Amazon Rekognition you can identify thousands of objects, such as a bike, a telephone, or a building, and scenes, such as a parking lot, a beach, or a stadium; when analyzing video you can also identify specific activities, such as playing football or delivering a package. The second feature here is custom labels: with Amazon Rekognition Custom Labels you can extend the capabilities of Amazon Rekognition to extract information from images that is uniquely helpful to your business. For example, you can find your corporate logo in social media, identify your products on store shelves, classify your machine parts on an assembly line, or detect your animated characters in videos. The third point here is content moderation: Amazon Rekognition helps you identify potentially unsafe or inappropriate content across both image and video assets, and provides you with detailed labels that allow you to accurately control what you want to allow based on your needs. The next point here is text detection: in photos, text appears very differently than neat words on a printed page, and Amazon Rekognition can read skewed and distorted text to capture information like store names, street signs, and text on product packaging. The next feature here is face detection and analysis: with Amazon Rekognition you can easily detect when faces
In video you can also measure how these attributes change over time, such as constructing a timeline of the emotions expressed by an actor. The next point is face search and verification: Amazon Rekognition provides fast and accurate face search, allowing you to identify a person in a photo or a video using your private repository of face images; you can also verify identity by analyzing a face image against images you have stored for comparison. The next point is celebrity recognition: you can quickly identify well-known people in your video and image libraries to catalog footage and photos for marketing, advertising and media industry use cases. And the last point is pathing: you can capture the path of people in the scene when using Amazon Rekognition with video files — for example you can use the movement of athletes during a game to identify plays for post-game analysis. So those were some of the features of Amazon Rekognition, and now let us talk about the advantages. The first advantage is that you can easily integrate powerful image recognition into your mobile or desktop application; you eliminate the time-consuming complexity of building image recognition capacity into your applications yourself, because with this simple API Amazon Rekognition is a turnkey solution for precise and robust image recognition. The second point is that artificial intelligence is at the core of Amazon Rekognition: the service has grown out of deep learning technology that Amazon has been working on for some time already, and as a result Amazon is continuously adding support for new objects and improving its facial analysis capabilities — the breadth and precision of Amazon Rekognition keep growing and improving as new challenges present themselves. The third point is image analysis that is scalable: Amazon Rekognition analyzes literally billions of images over the course of a single day, and the service provides uniform response times independent of the volume of requests for analysis that you carry out — that is to say, the latency of your requests remains the same whether you make one request, a thousand or even more. The next advantage is full integration with the popular components of Amazon Web Services: Amazon Rekognition is designed to work together perfectly with Amazon S3, AWS Lambda and various other popular AWS services, so as far as integration is concerned there is not a single issue you should face. And the last point is low cost: developers only pay for the quantity of images they analyze and the metadata for faces that they store; there are no required minimum payments or initial commitments. So those were some of the benefits of Amazon Rekognition — let's move on to the next part and understand how it works. Amazon Rekognition provides two API sets: you can use Amazon Rekognition Image for analyzing images and Amazon Rekognition Video for analyzing videos. Both APIs analyze images and videos to provide insights you can use in your applications. For example, you could use Amazon Rekognition Image to enhance the customer experience for a photo management application: when a customer uploads a photo, your application can use Amazon Rekognition Image to detect real-world objects or faces in the image.
After your application stores the information returned from Amazon Rekognition Image, the user can then query their photo collection for photos with a specific object or face, and deeper querying is also possible — for example the user could query for faces that are smiling or faces that are of a certain age. You can use Amazon Rekognition Video to track the path of people in a stored video, or alternatively you can use Amazon Rekognition Video to search a streaming video for persons whose facial descriptions match facial descriptions already stored by Amazon Rekognition. The Amazon Rekognition API makes deep learning image analysis easy to use — for example RecognizeCelebrities returns information for up to 100 celebrities detected in an image, including where the celebrity faces were detected in the image and where to get further information about each celebrity. So that was the overall working of Amazon Rekognition; now let us discuss a few use cases where it can be used. The first use case is making content searchable: Amazon Rekognition automatically extracts metadata from your image and video files, capturing objects, faces, text and much more, and this metadata can be used to easily search your images and videos with keywords or to find the right assets for content syndication. The next point is flagging inappropriate content: with Amazon Rekognition you can automatically flag inappropriate content such as nudity, graphic violence or weapons in an image or a video, and using the detailed metadata returned you can create your own rules based on what is considered appropriate for the culture and demographics of your users. The third point is enabling digital identity verification: using Amazon Rekognition you can create scalable authentication workflows for automated payments and other identity verification scenarios — Amazon Rekognition lets you easily perform face verification for opted-in users by comparing a photo or a selfie with an identifying document such as a driving license. The next point is responding quickly to public safety challenges: Amazon Rekognition allows you to create applications that help find missing persons in images and videos by searching for their faces against a database of missing persons that you provide, so you can accurately flag potential matches and speed up a rescue operation. The next point is identifying products, landmarks and brands: app developers can use Amazon Rekognition Custom Labels to identify specific items in social media or photo apps — for example you could train a custom model to identify a famous landmark in a city and provide tourists with information about its history, operating hours and ticket prices simply from a photograph. And the last point is analyzing shopper patterns: with Amazon Rekognition you can analyze shopper behavior and density in your retail store by studying the path that each person follows, and using face analysis you can also understand average age ranges, gender distribution and the emotions expressed by people without even identifying them. So those were some of the use cases for Amazon Rekognition, and now let us move on to the last part of this session, the demo. What we will be doing here is taking one image and trying to fetch labels out of it — something like object detection. For that let me go to my Amazon console; you need an AWS account to follow along, and as a quick point of reference before we click through the console, this is roughly what a call to the Rekognition Image API looks like from Python.
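Below is a minimal sketch of the RecognizeCelebrities call mentioned above, using boto3 — the region, bucket and image names are placeholder assumptions rather than values from the demo:

```python
import boto3

# Placeholder bucket/object names -- substitute your own.
BUCKET = "my-demo-bucket"
IMAGE = "group-photo.jpg"

rekognition = boto3.client("rekognition", region_name="us-east-1")

# RecognizeCelebrities belongs to the Rekognition Image API set and
# returns up to 100 recognized celebrities for a single image.
response = rekognition.recognize_celebrities(
    Image={"S3Object": {"Bucket": BUCKET, "Name": IMAGE}}
)

for celeb in response["CelebrityFaces"]:
    print(celeb["Name"], round(celeb["MatchConfidence"], 1))
```

The other Image API operations, such as DetectLabels, DetectFaces and DetectText, follow the same pattern of passing an S3Object reference — which is exactly what the Lambda function in the demo below does with DetectLabels.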
So first of all, back in the console, what we need is IAM — this is where we define the role — so from the Services menu click on IAM, that is Identity and Access Management. Here you have to click on Roles and create a new role: once you click on create role, choose Lambda as the service that will use this role, then click on permissions and filter the policies. We need the AWSLambdaExecute policy and one more policy, AmazonRekognitionFullAccess — just tick the checkboxes for both and click next: tags. The tags part is totally optional, so click on review, name your role, say arvind_role, and click on create role — as you can see the arvind_role has been created. After this, click on Services and go to S3, under the Storage section; this opens the S3 Management Console, and here you have to create a bucket. Creating a bucket is very simple: just click on create bucket, type a name for the bucket and click create — as you can see I have already created one bucket here for this demo. Once the bucket is created, you have to upload one image to it — let me show you the image, my_image.jpg. To upload it, click on upload and just add the file, or drag and drop it; this is the image on which we will be doing the analysis. Once the image is added, click on Services again and search for Lambda. In Lambda you have to create a function, so click on create function, give the function a name — you can name it whatever you want, I will just call it my edureka func — and for the runtime choose the language you will use to write the function; here I will choose Python 3.6. Then you have to choose or create an execution role: if you remember, a few minutes ago we created the arvind_role, so select the option to use an existing role and pick arvind_role. Once you do that, click on create function, and the function has been created. Now you have to write your code here — this is our code for analyzing that image. Apart from json we have to import boto3, and inside the function we define a client using boto3.client with the name rekognition, and an S3 client as well. Once that is done we fetch the S3 object using s3.get_object, where we specify the bucket name — the bucket you created in S3 — and the name of the image, as a bucket and key pair.
After that we read the content of that file — whatever we passed, my_image.jpg — and once we have the file content we get the response: response equals client dot detect_labels, so basically out of that image we are detecting labels, whatever objects or things we can identify in it. In detect_labels we pass the image as an S3 object with the name of the bucket and the name of the image. You can also specify the maximum labels — this part is totally optional: if you set max labels to three, you will get three labels in the output, and if you remove the MaxLabels parameter that is also fine, you will get as many labels as are found in the image. The minimum confidence can be set to whatever value you want, 70, 75, 80 — I have specified 70 here. Once this is done we just print the response, click on save, and now we test it: click on test, name the test event — I have just named it my test — click on create, and then click on test again. As you can see, the execution result succeeded, and in the details, in the log output, we can verify the labels: the first label is person, and Rekognition can see a person in this image with a confidence of about 99.10; the second label is indoors with a confidence of 84.44; and the third one is table with a confidence of 70.38 — which makes sense, because the person in the image is sitting inside a room at a table. Now let's just edit this and change the max labels setting: remove the MaxLabels parameter, click on save, and click on test again. Once you do that, as you can see, we get many more labels — human, person, indoors, table, furniture and so on — and these are all pretty relevant to the image. So that was the demo part, where we performed some image analysis; for further practice you can also add a video and perform some analysis on that as well.
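Putting together the handler we just walked through, here is a minimal sketch of what that Lambda function looks like in Python with boto3 — the bucket and image names are the demo's placeholders, so substitute your own, and remember the function's execution role needs the AWSLambdaExecute and AmazonRekognitionFullAccess policies we attached earlier:

```python
import json
import boto3

# Clients for Rekognition and S3 (the demo also creates an S3 client,
# although detect_labels itself only needs the bucket/object names).
rekognition = boto3.client("rekognition")
s3 = boto3.client("s3")

# Placeholder names matching the demo -- use your own bucket and image.
BUCKET = "my-demo-bucket"
KEY = "my_image.jpg"


def lambda_handler(event, context):
    # Read the object as in the demo (this also confirms it exists).
    obj = s3.get_object(Bucket=BUCKET, Key=KEY)
    _file_content = obj["Body"].read()

    # Detect labels directly from the object stored in S3.
    # MaxLabels is optional -- drop it to get every label above MinConfidence.
    response = rekognition.detect_labels(
        Image={"S3Object": {"Bucket": BUCKET, "Name": KEY}},
        MaxLabels=3,
        MinConfidence=70,
    )

    for label in response["Labels"]:
        print(label["Name"], round(label["Confidence"], 2))

    return {"statusCode": 200, "body": json.dumps(response["Labels"])}
```

Removing the MaxLabels argument, as we did in the second test run, simply returns every label that clears the MinConfidence threshold.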
[Music] Next let's discuss what DevOps is, and then we will understand how AWS and DevOps go well together. So what exactly is DevOps? I just talked about software development, and when you talk about software development you have to mention DevOps, so let's try to understand why. Let me give you this basic definition first: it is a set of practices intended to reduce the time between committing a change to a system and the change being placed into normal production, while ensuring high quality. Yes, very textbookish, and for people who do not know what DevOps is this might seem a little vague, so let me simplify this definition. In the image here you see a developer, you see an operator, and there is a deployment wall which neither of the two is ready to take responsibility for — they keep pushing that responsibility onto someone else, and this is what the scenario often is in software development. Let me give you a little more idea about this by looking at how developers work and how operators work. Talking about developers, their responsibility is to create code, update that code whenever required, wait for the next releases, commit and submit any changes, move them to the production environment where the operators take care of them, wait for feedback from the operators if there is any, go through the changes again if needed, and likewise wait for newer software and newer products to work on. So their responsibility is to create code and create applications. What happens is that when you create software there are constant releases you need to focus on — we all know that every now and then you get a Windows update or a mobile phone update saying there is a new operating system, a new release, a new version. This is how technology works, everything gets updated all the time, because software companies want to stay competitive in the market and ensure their product has the latest features. That puts a burden on the developers, because they have to constantly update their software. Once they update a particular piece of software it has to go and work in the production environment, but at times it does not, because the developer environment and the production environment might be a little different — something that works in the developer environment might not work in production. So changes are thrown back by the operators, the developers get stuck, and they have to wait until they get a response from the operators; if that takes a long while their work is blocked. If you look at it from the operator's perspective, the job is to ensure that whatever works in the developer environment also works in the production environment; they deal with the customers, get their feedback, and if there are changes to be implemented they sometimes implement those themselves, while core or important changes have to be forwarded to the developers. So what happens at times is that what works in the developer environment does not work in production, and the operators might feel that this was the developer's responsibility which was not done, and they are facing problems because of it; and when customer inputs are forwarded back to the development team, the operations team has to depend on the developers to make those changes. As you can see, these two teams are interdependent, and at times each feels the other team's work is being pushed onto their side, so there is this constant tussle which the company owners have to take care of — they have to think, if this goes on, how can I produce new releases and new software regularly? This could be a problem, and this is what DevOps addresses: as the name suggests it is Dev plus Ops, which means it combines the development and the operations teams.
When I say combine, they bring in an approach where integration, deployment and delivery happen continuously, and because these things happen continuously we no longer see the tussle between the two teams. So DevOps helps you unite these two teams so they can work happily together. This is what happens in DevOps: you code, you plan, you release, there is deployment, there are operations, there is monitoring, there is testing — everything happens in a pipeline — and there are popular DevOps tools that let you take care of all these things. This is DevOps in general: you have Git, you have Puppet, you have Chef, you have Ansible and SaltStack, which help you automate this process of integration and deployment of your software. But the fact is that everything is moving to the cloud these days, so we are thinking about how we can do all of this on the cloud. Do I need to bring in all these tools? If you want, you can definitely move all these tools over, but a platform like AWS, which is a popular cloud service provider, has ensured that all the requirements of DevOps can be taken care of on the platform itself, and various services are made available to help you in this process. For example you have EC2 instances, so you can launch servers and instances at will — if your concern is scaling up and down, AWS takes care of it; you have various services which help you monitor your processes, so monitoring is taken care of; there is auto scaling; and there are other services like CloudFront, which lets you create content delivery networks with temporary caches where you can store your data, and so on. So there are various AWS services that help you carry out the DevOps or CI/CD process with a lot more ease, and that is why DevOps and AWS form a very good combination — hence we are talking about the term AWS DevOps today. Now that we have some idea of what AWS is and what DevOps is, let's try to understand how continuous integration, delivery and deployment work with AWS and how they incorporate the DevOps approach. To do that, let's understand continuous integration and delivery first, with the help of this diagram. There are four steps, starting with splitting the entire chunk of code into segments — think of it as something like a map-reduce kind of action. What happens in continuous integration and delivery is that we try to bridge the gap between the developer team and the operations team, so we try to automate the process of integration and delivery, given that you continuously have software updates, as I just mentioned. Now what if I have, say, 50 or maybe 100 developers working in parallel? There are certain resources that need to be used by everyone, and the problem it creates is this: suppose I am working on a particular piece of code and somebody else is working on the same piece of code, and we have a central system where the code needs to be stored. I make a change and store it there; someone else makes a change and stores it there as well; so tomorrow when I come back I probably need a fresh copy of this piece of code.
What if I just start working on the copy I already have and then submit that code there? There would be an ambiguity — whose code should be accepted, whose copy should be kept? So we need this central system to be smart enough that each time I submit code it updates, runs tests on it and checks whether it is the most relevant piece, and if someone else submits their piece of code, tests are run on that as well; the system should ensure that the next time any of us picks up the code, we get the latest, most updated and best piece of code. This process of committing code and automating the whole flow, so that as it moves further it also gets delivered and deployed to production in the same manner, with the tests that need to be conducted, is called continuous integration and delivery. Integration, as mentioned, refers to the continuous updates to the source code — the code is built and compiled — and when I talk about delivery and deployment, the pieces of code, once they are ready to move to the production environment, are continuously deployed to the end customer. Now deployment might seem like a very easy process — just pick up the code and give it to the end customer — but it is not that easy: deployment actually involves taking care of all the servers and so on, and spinning up these servers is a difficult task, so automating this process becomes very important; if you do it manually you are going to suffer a lot. This is where continuous integration and delivery comes into the picture: code is continuously generated, compiled, built, tested, delivered, and it is made sure that it gets deployed to the end customer the way it was supposed to be. You can see the steps here: split the entire chunk of code into segments, keep small segments of code in a manageable form, integrate these segments multiple times a day — which is why there should be a central system — and adopt a continuous integration methodology to coordinate with your team. So this is what happens: you have a source code repository where the developers work and continuously submit their pieces of code — think of the repository as a central place where changes are constantly committed; then you have a build server where everything gets compiled, reviewed, tested, integrated and packaged; finally some last tests are run to check the final integrities, and then it goes to the production environment, where the building, staging and committing process gets automated to reduce your effort. When you talk about AWS in particular, you have something called AWS CodePipeline which lets you simplify this process — it lets you create a channel or a pipeline in which all these processes can be automated. Let's go through the definition first: CodePipeline is a continuous delivery service — we have talked about continuous delivery already — and you can use this service to model, visualize and automate the steps required to release your software, which is exactly what we discussed under continuous integration and delivery.
So this is basically a continuous delivery service which lets you automate all these processes, and as I mentioned, automating them becomes very important. Once you use this service, these are some of the features it provides. It lets you monitor your processes in real time, which matters because we are talking about deploying software at a greater pace — if changes are committed and visible right away, you save a lot of time. It ensures a consistent release process — as I have told you, deploying servers is a difficult and time-consuming task, so if this can be automated a lot of effort is saved. It improves the speed of delivery while improving quality, which we have talked about as well. And you get pipeline history details — monitoring becomes very important, and what CodePipeline does is let you look at all the processes that are happening: your application is built, it goes to the source, it moves to deployment, and all these stages can be tracked in the pipeline. You get constant updates saying this happened at this stage, and if anything failed you can detect which stage it is failing at — maybe stage three, maybe stage four — and accordingly you can edit what happens at that stage only. So viewing the pipeline history and details helps a lot, and this is where CodePipeline comes into the picture. This is what the architecture of CodePipeline looks like, and it is fairly simple. Some of this might seem a little repetitive, because the concepts we discussed are the ones implemented using CodePipeline, but each of these services does the task a little differently or helps you automate these processes, hence the discussion. So let's see how CodePipeline works. Basically there are developers, and these developers would be working on various pieces of code, so you have continuous changes and fixes that need to be uploaded. You have various services, and one of them is CodeCommit, which gives you a source management system of sorts — it lets you take care of repositories and connects directly with Git (I will talk about what Git is in a moment); if you have to manage your Git repositories, you have the CodeCommit service. So this is what happens: if there are any changes they go to the source — your developers commit those changes there — and then it goes into the build stage, which is where the development happens: your source code is compiled and tested. Then it goes to the staging phase, where it is deployed and tested — these are some final tests that have to pass before the code gets deployed — then it has to be approved and checked manually to confirm everything is in place, and finally the code is deployed to the public servers where customers can use it. If the customers have any changes, those can be readily taken from them and fed back to the developers, and the cycle continues so that there is continuous deployment of code.
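As a small illustration of the monitoring point above, this is roughly how you would read a pipeline's stage-by-stage status from Python with boto3 — the pipeline name matches the one we create later in the demo, and the region is an assumption:

```python
import boto3

codepipeline = boto3.client("codepipeline", region_name="us-east-1")

# "demo-pipeline" is a placeholder -- use the name of your own pipeline.
state = codepipeline.get_pipeline_state(name="demo-pipeline")

# Each stage (Source, Build, Staging, Deploy, ...) reports the status of
# its latest execution, which is what the console's pipeline view shows.
for stage in state["stageStates"]:
    execution = stage.get("latestExecution", {})
    print(stage["stageName"], execution.get("status", "no executions yet"))
```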
This is another look at it — it is very simple, but this is more from the AWS perspective. If there are any changes that developers commit, those go to the source. Your artifacts are stored in a container called S3, the Simple Storage Service, in the form of objects; if any change has to happen, the data is fetched from the S3 storage container, the changes are built, and a copy is maintained in the form of a zip — as you can see, there are continuous changes happening and those get stored in the S3 bucket. S3 should preferably be in the region where your pipeline is, which helps you carry out continuous integration and delivery with ease; if you are working across multiple regions, you need a bucket in each region to simplify these processes. So again, the code gets to the source, it is submitted to the build stage where the changes happen and a copy is maintained in S3, then it goes to staging where again a copy is maintained, and then it gets deployed — this is how the code pipeline works. To actually implement all the actions of CodePipeline you have these services in AWS: CodeDeploy, CodeBuild and CodeCommit. These services help you carry out most of these processes, so let's take a look at them and understand what they do — note that this is not necessarily the order in which you deal with them; they simply help you automate your continuous delivery and deployment process, and each has its own role. First let's talk about CodeCommit, which is last on the slide. I talked about moving your piece of code to a central place where you can continuously commit your code and always get the freshest or best copy — what CodeCommit does is help you manage your repositories in a much better way; think of it as a central repository. It also lets you connect with Git, which itself is a central place where you can commit your code, push and pull it, work on your own copy and submit it back to the main or central place from which your code gets distributed to everyone. CodeCommit lets you integrate with Git in a much better way, so you do not have to worry about working with two different things — it helps with authorization, pulling in the repositories from your Git account, and a number of other things. Then you have something called CodeBuild: as the name suggests, it helps you automate the process of building your code, where your code gets compiled, certain tests are performed, and artifacts — copies of your code — are maintained in S3 and so on. And then you have CodeDeploy: as I have already mentioned, deployment is not an easy task — if we are stuck managing repositories and working on quite a few other things, and on top of that we are forced to look after the servers as well, spawning new instances and spinning up new servers, that could be a tedious task.
So CodeDeploy helps you automate those deployment processes as well. That was some basic introduction to these services; let's move further and take a look at the demo, so we can talk about some of these terms, and the terms we discussed previously, in a little more detail. In one of my previous sessions I gave a demo on continuous integration and delivery, and I believe there were certain terms that people felt were covered too quickly — I hope I have explained most of those terms in more detail this time, and as we go through the demo I will try to go as slowly as possible so that you understand what is happening. So let's jump into the demo. What I have done is switch to my AWS console. For people who are new to AWS, you can create a free tier account — it is very easy: you sign up, put in your credit or debit card details, a free verification happens, and you get access to these services. Most of these services are available for free for one complete year with certain limitations; if you cross those limits you may be charged, but that rarely happens, and if you want to get started this one-year free subscription is more than enough to get hands-on with most of the services. If you have seen my previous videos you will know how to create a free tier account; if not, it is fairly simple — just go to your browser and type AWS free tier and you will be guided on what details have to be entered. Once you do that, you have access to this console and all the services you can use. In today's session we will work on a demo similar to the one in a previous session: we will create a PaaS application — a platform-as-a-service application — and deploy that application using CodePipeline; along the way we will also talk about other terms like CodeCommit, CodeDeploy and CodeBuild. So let's start by creating our PaaS application. For that we will use Elastic Beanstalk, which gives you a ready-to-use template with which you can create a simple application — this being a demo, we will create a very simple, basic application. Just come here and type Elastic Beanstalk; when you land on this page, if you have created applications before it will show them, but if you are using it for the first time this is the console you get — that is why I created this demo account, so we get to see how to start from scratch. If you click on get started, creating an application is extremely easy: you only have to enter a few details, it just takes a while to create the application, and I will tell you why. All you have to do is give your application a name — let's call it deployment app; I am very bad at naming conventions, so let's assume that is good — and then choose a platform, which together with the environment can also be scripted, as shown in the sketch below.
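For reference, here is a hedged sketch of the same application and environment created from Python with boto3 rather than the console wizard — the environment name is my own placeholder, and the PHP solution stack is looked up at run time because the exact stack names change over time; without a version label this should launch the sample application, much like the console flow:

```python
import boto3

eb = boto3.client("elasticbeanstalk", region_name="us-east-1")

# Application name from the demo; environment name is a placeholder.
eb.create_application(ApplicationName="deployment-app")

# Pick a currently available PHP platform rather than hard-coding one.
stacks = eb.list_available_solution_stacks()["SolutionStacks"]
php_stack = next(s for s in stacks if "PHP" in s)

eb.create_environment(
    ApplicationName="deployment-app",
    EnvironmentName="deployment-app-env",
    SolutionStackName=php_stack,
)
```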
You can choose whatever platform you want — PHP is what I am choosing right now. As I have told you, this is a PaaS service, platform as a service, which means you have a ready-to-use platform: you just choose your platform and Elastic Beanstalk takes care of all the background activities, so you do not have to set up your infrastructure yourself. Once I select the platform I can use the sample application, or use my own code if I have it — in this case I will use the sample code that AWS has to offer — and I click create. There you go, it is creating my application; everything happening here is listed as processes — it is creating a bucket to store all the data and so on — and it might take a couple of minutes, so meanwhile let's go and do something else. Let me open the AWS console again in another tab — I hope it does not ask me to sign in again, I have already signed in — and while that application gets created, let me create a pipeline. CodePipeline is again fairly simple: you just put in certain details and your pipeline gets created. It asks whether you want to use the new console experience or stick with the old one — you can use either, there is not a lot of difference, just minor ones; I am going to stick with the old interface because I was comfortable with it. So come here and add the name of the pipeline you want to create, say demo pipeline, and click next. For the source provider I will be using GitHub, because I want to pick up a repository from GitHub that helps with the deployment. To connect to GitHub it will ask you to authorize — if you have an account you can do that so it can bring in all the repositories you have — so just say authorize, or sign in once if needed. My account has been added here, and now I need to pick a repository. This is the repository I will be picking — do not worry, I will share this piece of code, or you can just go to GitHub and search for aws-codepipeline-s3-codedeploy-linux. It is a repository provided by AWS; if you search for it by that exact name you should find it on GitHub, and you just have to fork it into your GitHub account so you can import it directly. You can see the repository has been forked into my GitHub account — since I have already forked it, the fork option is not active for me, but in your case it would be: just click on it and the repository gets forked into your account. So I am importing the fork from my GitHub, I have authorized my account, and then I choose the branch — the master branch — and go to the next step. Build provider: I don't have anything major to build, so I don't need to provide a build provider; you can use CodeBuild here if you want, for example if you want to build and deploy your code to EC2 instances.
In this case I have an application where Elastic Beanstalk manages the EC2 instance and everything else for me, so I don't need to do any building — hence no build step, and I say next. For the deployment provider, my provider will be Elastic Beanstalk — be careful to select Elastic Beanstalk and not EBS, because EBS stands for Elastic Block Store, which is a different thing. The application name was deployment app, and the environment is the one Elastic Beanstalk creates on its own — it says it is starting, so I believe the environment has been created. Let's check whether our application is up and running so I can fill in these details — yes, the application has been created, so let's go back, select it and say next. Now it is asking me to create an IAM role — what normally happens is that a service role gets created for the pipeline to use, so in this case it asks me to create a new role for AWS CodePipeline, and there it says successful, so the role has been created; next step. Now it gives me the details — it basically lists everything I have configured; everything is here, and I don't think I need to cross-check it, though you might want to review what has been set up — and I say create pipeline. So the pipeline has been created, and you can see the stages here; if you want you can go ahead and say release a change. These things are happening now, and let's hope the deployment also happens successfully — it just created the service role, so let's see whether everything falls into place. Everything is in place as far as the source stage is concerned, it has succeeded, and now the deployment is in progress, so it might take a while. Meanwhile, let's go back and take a look at the application: if I open this application it gives me an overview of what has happened — these were the steps that were implemented, the application became available for deployment, it successfully launched the deployment environment, and it started with everything it was supposed to do, like launching an EC2 instance and so on; everything is listed here with what happened at what time. This is a PaaS service, and it works in the background: if you were to launch an instance on your own, configure IAM users, configure security groups, it would take much longer, but what this service does is automate that process — it understands that you need an EC2 instance, launches it, assigns security groups, VPCs and so on, and all you have to do is run your application on top of it, as simple as that. So it has taken care of everything and run a PHP application for me. Meanwhile, let's see whether our code has run successfully — you can see what has happened here, I have released the change as well, and you can view the pipeline history if you want: click on the icon and all the details are given to you, what happened at what stage. So these are the things that have happened so far; now let's go back and take a look at something else. I am going to go to the EC2 service, because my app launched an EC2 instance, so there should be an instance created by Elastic Beanstalk — and yes, one instance is running, and it has a key pair attached to it as well.
These are the details — I have a public IP associated with the instance, so if I copy this IP and open it in the browser, it says: you have successfully created a pipeline that retrieved this source application from an Amazon S3 bucket and deployed it to three instances using CodeDeploy. Now, it did not actually deploy to three instances — it deployed to only one. You see this message about three instances because the code in the repository I used was written to deploy to multiple instances, in which case the message would have made more sense; since we deployed to only one EC2 instance, it really should say one. The message is just text in the repository, so you can go back and change that piece of code: looking through the files — the readme file is not the one, let me check some other files — yes, the index file is where the message lives, so you can edit it, change three to one, and submit the code again; when you reload this IP address that change would be reflected. So what we have done is successfully created a pipeline, and in the process deployed our application from it. And if I do go ahead and commit changes to the code as I just described, those would get reflected right away in the pipeline's history — so it does give you continuous integration and deployment.
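For reference, this is roughly the pipeline the console wizard assembled, expressed as a boto3 create_pipeline call — the role ARN, artifact bucket, GitHub owner and OAuth token are placeholders (the wizard created and filled in those values for us), so treat it as a sketch rather than a copy-paste recipe:

```python
import boto3

codepipeline = boto3.client("codepipeline", region_name="us-east-1")

# All names, ARNs and the OAuth token below are placeholders.
pipeline = {
    "name": "demo-pipeline",
    "roleArn": "arn:aws:iam::123456789012:role/AWS-CodePipeline-Service",
    "artifactStore": {"type": "S3", "location": "codepipeline-us-east-1-demo"},
    "stages": [
        {
            "name": "Source",
            "actions": [{
                "name": "Source",
                "actionTypeId": {"category": "Source", "owner": "ThirdParty",
                                 "provider": "GitHub", "version": "1"},
                "outputArtifacts": [{"name": "SourceOutput"}],
                "configuration": {
                    "Owner": "my-github-user",
                    "Repo": "aws-codepipeline-s3-codedeploy-linux",
                    "Branch": "master",
                    "OAuthToken": "<github-token>",
                },
            }],
        },
        {
            "name": "Deploy",
            "actions": [{
                "name": "Deploy",
                "actionTypeId": {"category": "Deploy", "owner": "AWS",
                                 "provider": "ElasticBeanstalk", "version": "1"},
                "inputArtifacts": [{"name": "SourceOutput"}],
                "configuration": {
                    "ApplicationName": "deployment-app",
                    "EnvironmentName": "deployment-app-env",
                },
            }],
        },
    ],
}

codepipeline.create_pipeline(pipeline=pipeline)
```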
[Music] What are containers? Before we get into what containers are, you need to understand what a virtual machine is. A virtual machine is a software-based application environment that is installed to simulate the underlying hardware — to put it simply, virtual machines give you the illusion that you are running multiple computers on hardware when in reality you are just using one, so basically virtual machines offer you virtualization. If we take a look at the diagram on the screen, at the bottom layer you have the host machine or host operating system; this host operating system assigns resources like RAM and CPU to all the virtual machines, so all the virtual machines share the resources available to them. Then there is the hypervisor, which acts as an agent between all the virtual machines and your host machine, and finally, sitting on top of the hypervisor, we have the virtual machines — as you can see, every virtual machine has its own underlying operating system, its own binaries and libraries, and an application that it services. Now talking about containers: at their core they are very similar to virtual machines, except that instead of virtualizing the entire computer as we do in a virtual machine, we just virtualize the underlying operating system. So in container technology a single guest operating system on the host machine can run many different containerized applications, which makes containers much more efficient, fast and lightweight compared to virtual machines — in simple terms, you can think of running a container like running a virtual machine, just without the overhead of spinning up an entire operating system. So now you know what containers are, but how do you create and run them? Amazon Elastic Container Service for Kubernetes is a managed service which allows users to run Kubernetes on the AWS cloud without having to manage the underlying Kubernetes control plane. Let's take a look at a few other benefits. As we just discussed, you do not have to manage any control plane when you are using the Amazon EKS service — Amazon EKS runs the Kubernetes management infrastructure across multiple AWS availability zones, so you simply provision worker nodes, create them, and connect them to your EKS cluster endpoint. Moving on, infrastructure running on Amazon EKS is secure by default, mainly because Amazon EKS sets up secure and encrypted communication channels between your worker nodes and your Kubernetes cluster endpoint. Then, applications managed by Amazon EKS are fully compatible with those on a standard Kubernetes environment — what I mean is that you can easily migrate any standard Kubernetes application to Amazon EKS without modifying any code, and just like that move from a standard Kubernetes environment to the AWS cloud. And finally, AWS actively works with the Kubernetes community and makes contributions to the Kubernetes code base that help AWS EKS users use other AWS services as well. So I hope now you know what the Amazon EKS service is and what it does; now let's go ahead and see how it actually works — just so you know, we will be following the same set of steps in the demo part of the session as well, so make a note of them.
Getting started with Amazon Elastic Container Service for Kubernetes is fairly simple. First you need to create an Amazon EKS cluster, which you can do using the AWS Management Console or the AWS CLI. Once you have created a cluster, you need to create your worker nodes and then add them to the cluster you created earlier — for the demo part of the session we will be using a CloudFormation template to create and configure the worker nodes. Once your cluster is ready and your worker nodes are there, you need a tool to access your Kubernetes cluster; there are a few options here, and for the demo we will be using kubectl. And finally, once the cluster and worker nodes are up and running, you can easily launch an application on the Amazon EKS cluster the same way you would with any other Kubernetes environment. So I hope you have noted down the steps and understood what Amazon EKS is and what you do with it; now let's see how to create a cluster, create worker nodes and attach them to it, and launch an application — I will be launching a simple nginx application here — so let's get started with the demo. This is my AWS Management Console. Before we go ahead and create the Amazon EKS cluster there are certain prerequisites: firstly we need to create an IAM role, mainly to give Kubernetes permission to control resources on your behalf, and secondly we need to create a VPC and security groups for the cluster to use (a scripted sketch of both prerequisites follows this walkthrough). So first let's create the IAM role: I am going to search for IAM here, click on it, go to Roles and click on create role. Since we are assigning permissions to the Kubernetes service, select EKS here and click on next: permissions. As you can see, this role comes with two policies: AmazonEKSClusterPolicy, which mainly gives Kubernetes permission to manage resources on your behalf, and AmazonEKSServicePolicy, which gives EKS permission to create and manage the resources necessary to operate your EKS cluster. Once you have chosen the policies, click on review, give the role a name — let's say eks role — and since all the information is right, click on create role. The role is now created. Next we have to create a VPC and security groups — these are just resources you use on the AWS cloud — and I am going to create them using a CloudFormation template; CloudFormation is basically a service which allows you to use a single text file to manage all the resources you are using on AWS. So this is my CloudFormation console, and I am going to click on create stack. You have the option to create your own template or use a sample one, but I will be specifying a URL here: open the Amazon EKS documentation, click on get started, scroll down and there is the URL — I am going to copy it and paste it here. If you want to know what the template actually does, you can click on view in designer: basically it is going to create a VPC, a security group and three subnets. Click next, give the stack a name — let's say eks-service — and as you can see the parameters are the VPC CIDR block and subnets 1, 2 and 3; click next.
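As an aside, here is a hedged boto3 sketch of those same two prerequisites — the EKS service role and the sample VPC stack. The role name, stack name and region are the demo's, but the template URL is a placeholder: copy the current one from the EKS getting-started documentation, as we do in the console.

```python
import json
import boto3

iam = boto3.client("iam")
cloudformation = boto3.client("cloudformation", region_name="us-east-1")

# 1) Service role that lets EKS manage resources on your behalf.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "eks.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}
iam.create_role(RoleName="eks-role",
                AssumeRolePolicyDocument=json.dumps(trust_policy))
for policy in ("arn:aws:iam::aws:policy/AmazonEKSClusterPolicy",
               "arn:aws:iam::aws:policy/AmazonEKSServicePolicy"):
    iam.attach_role_policy(RoleName="eks-role", PolicyArn=policy)

# 2) VPC, subnets and security group from the sample template.
# Placeholder URL -- copy the real one from the EKS documentation page.
cloudformation.create_stack(
    StackName="eks-service",
    TemplateURL="https://amazon-eks.s3.amazonaws.com/.../amazon-eks-vpc-sample.yaml",
)
cloudformation.get_waiter("stack_create_complete").wait(StackName="eks-service")

# Same values as the console's Outputs tab (VPC ID, subnets, security group).
for output in cloudformation.describe_stacks(
        StackName="eks-service")["Stacks"][0]["Outputs"]:
    print(output["OutputKey"], output["OutputValue"])
```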
The next page is optional: if you want to assign any additional tags you can do it here, or if you want to assign an IAM role so that CloudFormation can access resources on your behalf you can do that as well — I am going to leave this page blank and click next. Review that the URL and the details are right and then click on create stack. It is going to take a minute or two. As you know, we have used a CloudFormation template here to create the VPC and security groups; you can create these on your own as well using the VPC service, and if you want to know more, the Amazon EKS documentation explains it properly. So bear with me, this may take about a minute, and once we are done with the prerequisites — the IAM role and the VPC — we will go ahead and create the cluster. There, it has been created successfully. Before creating the cluster, click on the stack you just created, go to Outputs, and copy the VPC ID, the subnet IDs and the security group into an editor so you can refer to them easily — I am going to paste them over here; that should be fine. Now let's go ahead and create the EKS cluster: search for EKS, and on this page click on the next step, give the cluster a unique name — I am going to use nginx-cluster — leave the version at the current default, and for the IAM role select the one we created earlier, the eks role. For the VPC, let me go back and check the ID — it is the one ending with these characters — and the subnets related to that VPC have already been specified here, plus the security group, this one. Once you are sure you have given the correct details — the VPC, the subnets and the security group — click on create. It is going to take a while for the cluster to get created.
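The same cluster creation, sketched with boto3 — the role ARN, subnet IDs and security group ID are placeholders that you would take from the eks role and the CloudFormation stack outputs we copied a moment ago:

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Placeholder ARN/IDs -- substitute the eks-role ARN and the stack outputs.
eks.create_cluster(
    name="nginx-cluster",
    roleArn="arn:aws:iam::123456789012:role/eks-role",
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
    },
)

# Cluster creation takes several minutes; this waiter polls until ACTIVE.
eks.get_waiter("cluster_active").wait(name="nginx-cluster")
print(eks.describe_cluster(name="nginx-cluster")["cluster"]["status"])
```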
So while the cluster is being created, let's go ahead and install some software. Like I said earlier, you need a tool to communicate with the Kubernetes cluster, and we will be using kubectl here, so let me show you how to install it. Go back to the EKS documentation, and if you scroll down it says install kubectl — let me open it in a new tab. Depending on the operating system you are using you can download it accordingly; I am using Windows, so you open PowerShell and use a curl command to download the binary — I have already downloaded it. Once you have done that, you need to add the binary to your PATH; if you already have a directory on the PATH and you store the kubectl.exe file in that directory, you do not have to change anything. I have created a separate folder for kubectl and stored kubectl.exe there, so all you have to do is create the directory, copy the binary into it, and then edit your Path environment variable — right-click on This PC, go to properties, advanced system settings, environment variables, select Path and click edit; as you can see I have already added the path, so make sure you have made the necessary changes. Let me see if the cluster is created — it is still creating. Anyway, in addition to kubectl you have to install aws-iam-authenticator as well; it is basically a tool for using AWS IAM credentials to authenticate to a Kubernetes cluster. Say you are an admin running a Kubernetes cluster on AWS: you already need to manage AWS IAM credentials to create and update your cluster, and in addition to that you also need to check whether a given user is authorized to access your cluster or not — that is a lot of work to do yourself, so instead you can use aws-iam-authenticator, which maps IAM users to certain roles and determines whether a user has access to the cluster or not. Let's see how to install it: going back to the documentation page, it says install aws-iam-authenticator — again, depending on your operating system you can click the link or use a curl command in PowerShell to download it, and once downloaded, for easy access you can store it in the same directory where you stored the kubectl.exe binary, as I have done here. So make sure you install kubectl and aws-iam-authenticator properly. The cluster is taking quite long, so meanwhile let's check whether kubectl and aws-iam-authenticator have installed properly: go to the folder, and if kubectl is installed properly it should give you an output for the version command — I am just checking the client version here, and since it gives me output I am sure the installation is fine; same for aws-iam-authenticator, you can run its help command — looks like both of them are installed properly. Now let's go back and see if the cluster is created yet. Finally it is created — it usually does not take this long, but sometimes it might. Now that we have created the cluster, we need to make the kubectl tool we installed earlier point to the cluster so that we can access the Kubernetes cluster. Back in PowerShell, we will use a command called update-kubeconfig: when we use this command a configuration file is created by default, and if you had created a cluster earlier this config file gets updated, otherwise it gets created. You can configure the tool to point to the cluster using the AWS CLI or the AWS Management Console — I am using the AWS CLI, and if you are too, make sure you have installed a current version of it; let me check the version here, and in addition make sure your Python version meets the requirement as well. So let's use the update-kubeconfig command with the name of the cluster — I am going to copy the name and paste it here — and it says a new context has been added to the config file. Let's go and check: under my user directory there is a .kube folder, and under that the config file, and when I open it you can see that a cluster has been added there. The next step is to create the worker nodes. Again, worker nodes are nothing but EC2 instances, which are resources you use on the AWS cloud, so again I am using a CloudFormation template to create them: go to CloudFormation, click on create stack, and again I will be using a URL — go back to the EKS documentation, and in step three you have the URL; paste it here. Basically, using this template we are going to create the worker nodes. Give the stack a unique name, say nginx-cluster-worker-nodes, and one more important thing: the cluster name you enter here should be exactly the same as the one you created earlier — don't make a mistake there, otherwise your worker nodes won't be able to connect to the main cluster.
The next step is to create worker nodes. As you know, worker nodes are nothing but EC2 instances, the resources you use on the AWS cloud, and again I'm using a CloudFormation template to create them. Click CloudFormation (or search for it) and click Create stack. I'll again be using a template URL; go back to the EKS documentation, and in step three you have the URL, so paste it here. Using this template we are going to create the worker nodes. Give your stack a unique name, say nginx-cluster-worker-nodes. One more important thing: the cluster name here should be exactly the same as the one you created earlier; don't make a mistake, otherwise your worker nodes won't be able to connect to the main cluster. To avoid that, let me copy the exact name from the EKS console and paste it here. The security groups are the same ones we created earlier, so let me just check and select them. Give your node group a name as well. One is the minimum number of instances that the Auto Scaling group will create and three is the maximum number of instances, or worker nodes.

Like I said, worker nodes are just EC2 instances, so obviously you'll need an Amazon Machine Image for them. Go back to the documentation, where the AMI IDs are listed, and make sure to check which region you are in; I'm in the North Virginia region, so I'm copying that ID. You'll have to give a key name as well. I've already created one; if you haven't, go to the EC2 service, and under Network & Security you have the Key Pairs option, so click Create key pair, download the file once it's created, and you'll see your key listed here. If you want to pass any additional options you can do so using the bootstrap arguments, and finally select the VPC ID and make sure you've selected the correct subnets. Once you're sure you've given all the details correctly, click Next. Like I said, the next page is optional, so if you want to add any tags or assign an IAM role you can do that here; I'm going to skip it and click Next. Before you click Create, review that you have given the correct details, like the cluster name, the number of nodes, the node group, the AMI, and so on, tick the box that acknowledges that AWS CloudFormation might create IAM resources on your behalf, and then click Create stack.

It's going to take a while for the stack to get created, so let's take a look at the architecture in the meantime. As you know, Amazon EKS is a fully managed service where you can deploy, manage, and scale different types of containers using Kubernetes on AWS, and the EKS service runs the control plane across multiple Availability Zones. As you can see, I have the EKS control plane here in an EKS-managed VPC, and on the other side I have worker nodes running in my own VPC. Once the worker nodes are created, we have to configure them to connect to the EKS cluster, and some networking components are involved in that. There is the ENI, the elastic network interface; it's a component of the VPC and is managed by the EKS control plane. If your EKS (Kubernetes) cluster wants to interact with the EC2 worker nodes, or with the applications running on those nodes, it uses this elastic network interface to do so. Similarly, if your applications or your EC2 worker nodes want to interact with the cluster, they use a network load balancer. So the architecture is basically very simple: you have your Kubernetes cluster and the control plane used to manage it, you have kubectl to access the cluster, the cluster talks to the worker nodes through the elastic network interface, and the worker nodes talk back to the cluster through a network load balancer. I hope you've understood; let's go back and see if the stack is created yet.
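The key pair above was created through the EC2 console; if you prefer the CLI, here is a roughly equivalent, hedged sketch where the key name is just an example:

```sh
# Create a key pair and save the private key locally (example name).
# On Windows PowerShell, pipe the output through Out-File -Encoding ascii
# instead of ">" so the .pem file stays plain ASCII.
aws ec2 create-key-pair --key-name eks-worker-key --query KeyMaterial --output text > eks-worker-key.pem

# Confirm the key pair exists.
aws ec2 describe-key-pairs --key-names eks-worker-key
```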
While you guys are waiting for this to get created, you can go ahead and install kubectl and aws-iam-authenticator, and maybe start working along with me in PowerShell: create the cluster, the CloudFormation stacks, and so on. Once the worker nodes get created, all we do is connect them to the cluster so they can start interacting with it, and once they are attached we'll launch a simple nginx application on the cluster.

The worker nodes are finally created, so let's go back to PowerShell. There's a command, kubectl get nodes, which lists all the worker nodes you have created. It says "No resources found" because we have only created the worker nodes; we have not connected them to the cluster yet. To do that, we use a Kubernetes object of type ConfigMap. In my text editor I have a file with a .yml extension; as you can see, it's a Kubernetes object of type ConfigMap, and it contains a role ARN. When we created the worker-node stack, an instance role ARN was created as well, so click on the stack, go to Outputs, find the ARN, copy it, and paste it into the file; I hope I've copied the right one. Save it, and store it in the same folder where you stored your kubectl.exe file. Now if you try the get nodes command again, it is still going to say "No resources found," because we only created the ConfigMap file; we haven't applied the configuration yet. For that, run kubectl apply -f followed by the file name; it says "configmap created." Let me clear the window. Now, to check whether the configuration was done properly, run kubectl get nodes again: the nodes are created and attached to the cluster properly now.

So, like I said, we created a cluster, created worker nodes, and attached them to the cluster. All that's left is to launch a simple application on the cluster, and I'm going to launch a simple nginx application. For that I have two files. The first is a Kubernetes object of type Deployment, using which I can launch an application on Kubernetes; as you can see it's of type Deployment and the file name is nginx. You have something called pods, which are groups of containers; Kubernetes runs containers on EC2 instances, and a pod includes the containers and the specification for how they should run, along with networking and storage details. Here I'm saying: deploy two pods matching the template, with this application name and version, and this is the target port which I'll use when I open the application in a web browser. Now, suppose the node containing the pods dies; you might lose the running copy of your application. To avoid that you have a Kubernetes object of type Service: it's an abstraction which defines a logical set of pods running somewhere in your cluster, all providing the same functionality as the ones you described in your Deployment object. I'm going to save these files in the same folder where I saved the config map, the nginx deployment and the service. Back in PowerShell, as you can see, I have both the nginx Deployment object and the Service object. We've only saved the files in the folder; to actually create the application we use the create command, kubectl create -f followed by the file name.
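For reference, here is roughly what those three files might look like. This is a minimal sketch: the role ARN, image tag, and names are placeholders, the real ARN comes from the worker-node stack's Outputs tab, and the exact aws-auth format is in the EKS documentation.

```yaml
# aws-auth-cm.yaml -- lets nodes using this instance role join the cluster
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <NodeInstanceRole ARN from the stack outputs>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
---
# nginx-deployment.yaml -- run two nginx pods
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:latest
          ports:
            - containerPort: 80
---
# nginx-service.yaml -- expose the pods through an external load balancer on port 80
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80
```

They are applied exactly as described in the video: kubectl apply -f aws-auth-cm.yaml to let the nodes join, then kubectl create -f nginx-deployment.yaml and kubectl create -f nginx-service.yaml to launch the application.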
The Kubernetes object of type Deployment is created, and the same for the Service; that is created as well. To confirm it, run kubectl get svc (svc stands for services) with the wide output format. As I said, the first entry is the Kubernetes master service for the cluster we created earlier, and the second is the application we just launched on the cluster, along with the external IP address that we are going to paste into the browser to take a look at the application. Before that, let's see all the information about this nginx application using the describe command: kubectl describe svc nginx. When you use the describe command you get all the information about your application: the type, which is LoadBalancer, the external IP address, and the target port you use to open the application. So I'm going to copy the external address and paste it into my browser; like I said, the target port is 80. There was a brief error, but as you can see the application has launched successfully. So, like I said, using Kubernetes on AWS is fairly easy. Let's go through the steps once more: first we created a cluster, and before that there were some prerequisites, an IAM role to assign permissions to the Kubernetes cluster and the VPC and subnets on which the cluster runs; once that was done we created the cluster, then we created worker nodes and configured them so that they attached to the cluster successfully; and once the cluster and worker nodes were up and running, we launched a simple nginx application. So don't stop here: go ahead, create your own cluster, and launch your own application on it.

[Music] Now, the first question is a very common one: what is Amazon EC2? EC2 is short for Elastic Compute Cloud, and it provides scalable computing capacity. Using Amazon EC2 eliminates the need to invest in hardware, leading to faster development and deployment of applications. You can use EC2 to launch as many or as few virtual servers as needed, configure security and networking, and manage storage. It can scale up or down to handle changes in requirements, reducing the need to forecast traffic, and it provides virtual computing environments called instances.

Moving on to the next question: what type of performance can you expect from Elastic Block Store (EBS), and how do you back it up and enhance its performance? The performance of EBS volumes varies: it can go above the SLA performance level and then drop below it. The SLA provides an average disk I/O rate, which can at times frustrate performance experts who yearn for reliable and consistent disk throughput on a server; virtual AWS instances do not behave that way. You can back up EBS volumes through a graphical user interface such as Elasticfox, or use the snapshot facility through an API call, and performance can be improved by using Linux software RAID and striping across four volumes.
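The snapshot facility mentioned above is also available from the CLI; a minimal sketch, where the volume ID is a placeholder:

```sh
# Take a point-in-time snapshot of an EBS volume (replace the volume ID with your own).
aws ec2 create-snapshot --volume-id vol-0123456789abcdef0 --description "nightly backup"

# List the snapshots owned by this account.
aws ec2 describe-snapshots --owner-ids self
```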
The next question is how terminating and stopping an instance are different processes. When an instance is stopped, it performs a regular shutdown and then transitions to a stopped state; the attached EBS volumes remain present, so it is possible to start the instance again any time you want, and the best thing is that while the instance remains in the stopped state, users don't need to pay for that time. When it comes to termination, the instance performs a regular shutdown and after this the attached Amazon EBS volumes are deleted; you can stop them from being deleted by setting the delete-on-termination attribute to false. Because a terminated instance is gone, it is not possible to run it again in the future. This is how the two processes, termination and stopping, differ.

Moving on to the next question: what are the differences between an on-demand instance and a spot instance? A spot instance works like bidding, and the bidding price is known as the spot price. Both spot and on-demand instances are pricing models, and in neither is there a commitment to an exact duration from the user's side, but a spot instance can be used without an upfront payment, while an on-demand instance must be purchased first and its price is higher than the spot instance. Spot instances are spare, unused EC2 capacity which you can bid for; once your bid exceeds the current spot price, the instance is launched, and if the spot price rises above your bid price the instance can be taken away at any time, terminated with two minutes' notice. The best way to decide on an optimal bid price for a spot instance is to check the price history of the last 90 days, which is available in the AWS console. The advantage of spot instances is that they are cost-effective, and the drawback is that they can be terminated at any time; spot instances are ideal for optional nice-to-have tasks, flexible workloads that can run when there is enough spare compute capacity, and tasks that require extra computing capacity to improve performance. On-demand instances, on the other hand, are made available whenever you require them, you pay only for the time you use them on an hourly basis, they can be released when no longer required without any upfront commitment, and their availability is guaranteed by AWS, unlike spot instances.

The next question is how to vertically scale an Amazon instance. To do so you follow these steps: first, spin up a larger Amazon instance than the existing one; pause that new instance, detach its root EBS volume from the server, and discard it; then stop the live running instance and detach its root volume; note the unique device ID and attach that root volume to your new server; and finally start the instance again.
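A rough CLI sketch of those detach-and-reattach steps; the instance and volume IDs are placeholders, both instances are assumed to be in the same Availability Zone, and /dev/xvda is assumed to be the root device name:

```sh
# 1. Stop the original (smaller) instance so its root volume can be detached.
aws ec2 stop-instances --instance-ids i-OLDINSTANCE

# 2. Detach the root EBS volume from the stopped instance.
aws ec2 detach-volume --volume-id vol-ROOTVOLUME

# 3. Attach that volume as the root device of the new, larger instance
#    (the new instance must also be stopped and in the same AZ).
aws ec2 attach-volume --volume-id vol-ROOTVOLUME --instance-id i-NEWINSTANCE --device /dev/xvda

# 4. Start the larger instance with the original root volume.
aws ec2 start-instances --instance-ids i-NEWINSTANCE
```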
The next question is: what is the difference between vertical and horizontal scaling in AWS? The main difference is the way you add compute resources to your infrastructure: in vertical scaling, more power is added to the existing machine, while in horizontal scaling additional resources are added to the system by adding more machines to the network, so that the workload and processing are shared among multiple devices. The best way to understand the difference is to imagine that you are retiring your Toyota and buying a Ferrari because you need more horsepower; that is vertical scaling. Another way to get that added horsepower is not to ditch the Toyota for the Ferrari but to buy another car; that relates to horizontal scaling, where you drive several cars all at once. When you have up to around 100 users, a single EC2 instance can be enough to run the entire web application or database until the traffic ramps up; under those circumstances it is better to scale vertically by increasing the capacity of the EC2 instance to meet the increasing demands of the application, and AWS supports instances with up to 128 virtual cores or 488 GB of RAM. When the users of an application grow to 2,000 or more, vertical scaling cannot handle the requests, and there is a need for horizontal scaling, which is achieved through a distributed file system, clustering, and load balancing. I hope you've understood the difference between vertical and horizontal scaling.

Let's move on to the next question: when instances are launched in a cluster placement group, what network performance parameters can be expected? This depends largely on the type of instance as well as on its network performance specification. For instances started in the placement group you can expect up to 20 Gbps for full-duplex or multi-flow traffic, up to 10 Gbps for a single flow, and outside the group the traffic is limited to 5 Gbps.

Moving on: what is Amazon Virtual Private Cloud (VPC) and why is it used? A VPC is the best way of connecting to your cloud resources from your own data center. Once you connect your data center to the VPC in which your instances are present, each instance is assigned a private IP address that can be accessed from your data center; that way you can access your public cloud resources as if they were on your own private network, which is exactly why the VPC is used.

The next question is: what are the states available in processor state control? There are two kinds of states: the P-state and the C-state. P-states have levels from P0 to P15, whereas C-states range from C0 to C6, where C6 is the deepest state for the processor. These are the two states available for you to use.

Now let's look at the next question: how do we transfer an existing domain name registration to Amazon Route 53 without disrupting the existing web traffic? For this you first need to get a list of the DNS record data for the domain name, generally available in the form of a zone file that you can get from your existing DNS provider. Once you have the DNS record data, you can use Route 53's management console or its simple web-services interface to create a hosted zone that will store the DNS records for your domain name, and then follow the transfer process. This also includes updating the name servers for your domain name to the ones associated with your hosted zone. To complete the process, contact the registrar with whom you registered the domain name and follow their transfer process; as soon as the registrar propagates the new name server delegations, your DNS queries start getting answered by Route 53, so there is no disruption to the existing traffic.
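A minimal sketch of the hosted-zone part of that process, with example values (the zone ID comes from the output of the first command):

```sh
# Create a hosted zone for the domain being migrated.
aws route53 create-hosted-zone --name example.com --caller-reference migration-001

# Look up the name servers AWS assigned to the zone; these are the values you
# give your registrar when updating the domain's name server delegation.
aws route53 get-hosted-zone --id <hosted-zone-id-from-the-previous-output>
```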
Now let's look at the next question: how is AWS Elastic Beanstalk different from AWS OpsWorks? AWS Elastic Beanstalk is an application management platform, while OpsWorks is a configuration management platform. Beanstalk is an easy-to-use service for deploying and scaling web applications developed with Java, .NET, PHP, Node.js, Python, Ruby, Go, and Docker: customers upload their code and Elastic Beanstalk automatically handles the deployment, so the application is ready to use without any infrastructure or resource configuration. AWS OpsWorks, on the other hand, is an integrated configuration management platform for IT administrators or DevOps engineers who want a high degree of customization and control over their operations. I hope you've understood how Elastic Beanstalk is different from AWS OpsWorks.

Moving on to the next question: what happens if an application stops responding to requests in Beanstalk? AWS Beanstalk applications have a system in place for avoiding failures in the underlying infrastructure: if an Amazon EC2 instance fails for any reason, Beanstalk uses Auto Scaling to automatically launch a new instance. Beanstalk can also detect if your application is not responding on the custom health-check URL, even though the infrastructure appears healthy; this is then logged as an environment event, for example "a bad version was deployed," so you can take appropriate action.

The next question: an organization wants to deploy a two-tier web application on AWS; the application requires complex query processing and table joins, but the company has limited resources and requires high availability. Which is the best configuration the company can opt for based on these requirements? DynamoDB deals with the core problems of database scalability, management, reliability, and performance, but it does not have the functionality of an RDBMS: DynamoDB does not support complex joins, complex query processing, or complex transactions. For that kind of functionality you run a relational engine on Amazon RDS or EC2, so given the requirement for complex queries and table joins, a relational database on Amazon RDS is the configuration the company should opt for here.

Now, how can you safeguard EC2 instances running in a VPC? AWS security groups associated with EC2 instances can help you safeguard instances running in a VPC by providing security at the protocol and port access level. You can configure both the inbound and the outbound traffic rules to enable secured access to the EC2 instance. AWS security groups are much like a firewall: they contain a set of rules which filter the traffic coming into and going out of an EC2 instance and deny any kind of unauthorized access. This is exactly how you can safeguard your EC2 instances running in a VPC.
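A minimal sketch of such rules from the CLI; the VPC ID, group ID, and CIDR ranges are placeholders:

```sh
# Create a security group inside the VPC.
aws ec2 create-security-group --group-name web-sg --description "web tier" --vpc-id vpc-0123456789abcdef0

# Inbound rule: allow HTTPS from anywhere.
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 443 --cidr 0.0.0.0/0

# Inbound rule: allow SSH only from the corporate network range.
aws ec2 authorize-security-group-ingress --group-id sg-0123456789abcdef0 --protocol tcp --port 22 --cidr 203.0.113.0/24
```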
The next question is: what automation tools can you use to spin up these servers? There are several options. You can roll your own scripts and use the AWS API tools; such scripts could be written in bash, Perl, or another language of your choice. You can use a configuration management and provisioning tool like Puppet, or its successor of sorts, Chef. You can also use a tool like Scalr, or a managed solution such as RightScale.

Moving on: what are some of the important features of a classic load balancer in EC2? The high-availability feature ensures that traffic is distributed among EC2 instances in single or multiple Availability Zones, which ensures a high scale of availability for incoming traffic. The classic load balancer can decide whether or not to route traffic based on the results of health checks. You can implement secure load balancing within a network by creating security groups in a VPC. Finally, the classic load balancer supports sticky sessions, which ensure that the traffic from a user is always routed to the same instance for a seamless experience. These are some of the important features of a classic load balancer in EC2 and how it helps whenever you use it.

Now, what do we know about an AMI? An AMI is basically the template for a virtual machine. While starting an instance it is possible to select from pre-baked AMIs that commonly have software already in them; however, not all AMIs are available free of cost. It is also possible to build a customized AMI, and the most common reason to do so is to save space on Amazon Web Services: if a group of software is not required, the AMI can simply be customized for that particular situation. That is the gist of AMIs in AWS.

Now, what is the total number of buckets that can be created in AWS by default? This is a simple but frequently asked question: 100 buckets can be created in each AWS account by default, and if you need additional buckets you have to raise the limit by submitting a service limit increase.

Next: explain stopping, starting, and terminating an Amazon EC2 instance. Stopping and starting an instance are the most common commands used on the EC2 platform, and this is definitely one of the most asked questions in any AWS interview. Once the command to stop an instance is issued, the instance first performs a normal shutdown and then transitions to the stopped state; all the Amazon EBS volumes remain attached as they were, and you can resume the instance at a later stage. One of the main advantages of this feature is that Amazon does not charge you for the hours while the instance is in the stopped state. When you issue the termination command, the instance first performs a normal shutdown and then the attached Amazon EBS volumes are removed; they are preserved only if the delete-on-termination attribute is set to false in the Amazon EBS settings. Once the instance is terminated, clients cannot resume it at a later stage. That covers stopping, starting, and terminating an Amazon EC2 instance.
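A minimal CLI sketch of those operations; the instance ID and device name are placeholders:

```sh
# Stop an instance (no instance-hour charges while it stays stopped; EBS storage is still billed).
aws ec2 stop-instances --instance-ids i-0123456789abcdef0

# Keep the root volume after termination by setting DeleteOnTermination to false.
# bdm.json contains: [{"DeviceName":"/dev/xvda","Ebs":{"DeleteOnTermination":false}}]
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --block-device-mappings file://bdm.json

# Terminate the instance; it cannot be started again afterwards.
aws ec2 terminate-instances --instance-ids i-0123456789abcdef0
```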
Moving on to the next question: how can S3 be used with EC2 instances? S3 can be used for instances with root devices backed by local instance storage. When a developer or client uses Amazon S3, they get access to the same highly scalable, fast, dependable, low-priced data storage infrastructure that Amazon itself uses to run its own worldwide network of websites. To perform these operations in the Amazon EC2 environment, developers use tools to load their Amazon Machine Images (AMIs) into Amazon S3 and then transfer them back to Amazon EC2. Another use of this method is when developers need to load static content into S3 from their websites hosted on Amazon EC2.

Let's move on to the next question: list the different ways to access EC2. EC2 can be accessed via the web-based console and via the command-line interface; additionally, there are PowerShell tools available on Windows which can simply be executed.

The next question is to define regions and Availability Zones in Amazon EC2. Being such a mammoth in the industry, it is common knowledge that Amazon EC2 is hosted in multiple locations across the world; these worldwide locations are categorized in terms of regions and Availability Zones. Each region is completely independent of the others, and each Availability Zone is isolated as well, but all the Availability Zones in a particular region are interconnected through multiple low-latency links.

Moving on: what is the Amazon EC2 root device volume? When you launch an instance, the root device volume contains the image that was used to boot the instance in the first place. There are two types of AMIs, or Amazon Machine Images: EBS-backed (Elastic Block Store) AMIs and instance store-backed AMIs.

Next up: what is Auto Scaling and how does it work? Auto Scaling is one of the most important features Amazon Web Services provides: it allows you to configure and automatically provision and spin up new instances without your intervention. This is done by setting thresholds and metrics to monitor; when those thresholds are crossed, a new instance of your choice is spun up, configured, and rolled into the load balancer pool, so you have scaled horizontally without any operator intervention. You define a minimum size, a desired capacity, scaling out as needed, and a maximum size: the minimum size is the least number of instances for the group, the desired capacity is how many you want running, scale-out adds extra instances as needed, and the maximum size caps the total.
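A minimal sketch of such a group from the CLI, assuming a launch template named web-template already exists and the subnet IDs are placeholders:

```sh
# An Auto Scaling group that keeps between 1 and 3 instances, starting with 2.
aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name web-asg \
  --launch-template LaunchTemplateName=web-template \
  --min-size 1 --desired-capacity 2 --max-size 3 \
  --vpc-zone-identifier "subnet-0123456789abcdef0,subnet-0fedcba9876543210"

# Inspect the group and its instances.
aws autoscaling describe-auto-scaling-groups --auto-scaling-group-names web-asg
```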
Next up: what is server load balancing? Once you are connected to the internet, a load balancer sits in front of all your application web servers, which in turn talk to your database. Server load balancing (SLB) improves the performance of the network and the delivery of content by implementing a series of priorities and algorithms that respond to the specific requests made to the network. In other words, server load balancing takes care of distributing clients across a large group of servers and ensures that clients are sent only to healthy servers and not to failed ones.

The next question is: what is global server load balancing, and does clustering need to be turned on in order to use GSLB? Global server load balancing is very similar to SLB, but GSLB takes SLB to a global scale: it allows you to load balance VIPs from various geographical locations as a single entity, which gives the geographic sites scalability and fault tolerance. You must turn on clustering and configure it in order to use global server load balancing; every proxy within the site or cluster must have the same configuration so that every piece of equipment can act as a DNS server if it becomes the master for the site, each site has its own unique SLB, GSLB, and cluster configuration, and you use the gslb site command so that the remote GSLB site can be added to the local appliance.

Moving on, the next question is: which load balancing methods are supported with Array Networks GSLB? There are three methods supported by the Array appliance: Overflow, LC, and RR. With Overflow, requests are sent to a different remote site when the local site is loaded up to 80 percent. LC stands for least connections and sends clients to the site with the lowest count of current connections. RR stands for round robin and sends clients to each site in round-robin fashion.

Now, what is a reverse proxy cache? A reverse proxy cache is a cache deployed in front of the origin servers, which is the reason for the "reverse" in the name. If a client requests an object that is cached, the proxy serves the request from the cache instead of from the origin server.

Next, mention some of the advantages of AWS Elastic Beanstalk. There are many: Elastic Beanstalk is economical with no hidden costs, and you pay only for whatever you use, nothing more and nothing less. The AWS Management Console can be accessed quickly, so getting started takes no more than about an hour. It supports languages and platforms like Java, .NET, PHP, Node.js, Python, Ruby, and so on. And finally, Elastic Beanstalk handles the setup and provisions the AWS services needed for running your web application.

The next question is to define automated deployment. Automated deployment is similar in many ways to programming in other languages, but the unique advantage of this platform is that it helps cut down a lot of challenges; one of the best things is that the deployment can be refined as one becomes more proficient with the other offerings of the service. Using automated deployment, clients can minimize human interference and ensure that the outcomes are quality-based in every aspect.
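Elastic Beanstalk is also the quickest way to see that kind of automated deployment in practice; a hedged sketch with the EB command-line tool, where the application name, platform string, and region are only examples:

```sh
# Initialize an application and pick a platform (names are examples).
eb init my-app --platform python-3.8 --region us-east-1

# Create an environment; Beanstalk provisions the EC2 instances, load balancer,
# and Auto Scaling group behind the scenes.
eb create my-app-env

# Push the current version of the code and check environment health.
eb deploy
eb status
```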
Next up: what are the advantages of using the serverless approach? There are several. The approach is utterly simple, which translates to quicker time to market and thus higher sales. Clients are required to pay only when the code is actually running, so a huge amount of money can be saved, which enhances profits. Clients also do not need any additional infrastructure to run the application, and they do not need to give a second thought to the server that runs the code. These are some of the advantages of the serverless approach.

Moving on: what is Amazon CloudWatch? Amazon CloudWatch is a management and monitoring tool and part of the Amazon Web Services family. It is a monitoring service for AWS cloud resources and the applications that run on the AWS platform. CloudWatch can be used to track and collect metrics, set alarms, collect and monitor log files, and monitor resources such as EC2 instances, RDS DB instances, and DynamoDB tables.

Now that you know what Amazon CloudWatch is, what else can be done with CloudWatch Logs? Since CloudWatch is capable of storing and monitoring a client's logs and helping them better understand how their systems and applications are operating, CloudWatch Logs can be used in multiple ways, such as long-term log retention and real-time application and system monitoring.

The next question is: which platforms support the CloudWatch Logs agent? The CloudWatch Logs agent is supported by a number of operating systems and platforms: CentOS, Amazon Linux, Ubuntu, Red Hat Enterprise Linux, and Windows all support it, and there are more besides, which is exactly why CloudWatch Logs is so popular and easy to use.

Moving on: list the retention periods of the different metrics. CloudWatch retains its metrics for different periods. For example, data points for high-resolution custom metrics with a period of fewer than 60 seconds are available for 3 hours; data points with a period of 60 seconds are available for 15 days; data points with a period of 5 minutes are available for 63 days; and data points with a period of 1 hour are available for 455 days, which is 15 months.

Moving on to the next question: what are some of the key best practices for security in Amazon EC2? Some of the best practices include creating individual IAM (Identity and Access Management) users to control access to your AWS resources; creating separate IAM users provides separate credentials for every user, making it possible to assign different permissions to each user based on their access requirements. Secure the AWS root account and its access keys. Harden EC2 instances by disabling unnecessary services and applications and by installing only the necessary software and tools. Grant least privilege by opening up only the permissions required to perform a specific task and not more than that; additional permissions can be granted later as required. Define and review the security group rules on a regular basis, and have a well-defined, strong password policy for all users. Finally, you can deploy antivirus software on the AWS network to protect it from trojans, viruses, and so on. These are some of the best practices for security in Amazon EC2.
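A minimal sketch of the individual-user, least-privilege idea from the CLI; the user name is an example and the managed policy is only illustrative (in practice you would attach your own narrowly scoped policy or, better, use roles):

```sh
# Create a separate IAM user instead of sharing the root account.
aws iam create-user --user-name deploy-operator

# Grant only the permissions the user actually needs (example: read-only EC2 access).
aws iam attach-user-policy --user-name deploy-operator --policy-arn arn:aws:iam::aws:policy/AmazonEC2ReadOnlyAccess

# Issue credentials for that user only.
aws iam create-access-key --user-name deploy-operator
```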
Now, suppose there is a distributed application that processes huge amounts of data across various EC2 instances, and the application is designed so that it can recover gracefully from EC2 instance failures; how will you accomplish this in a cost-effective manner? On-demand or reserved instances will not be ideal in this case, because the task is not continuous; moreover, it does not make sense to launch an on-demand instance whenever work comes up, because on-demand instances are expensive. In this case the ideal choice is a spot instance, owing to its cost-effectiveness and lack of long-term commitment.

The next question asks again about the important features of a classic load balancer in EC2, which we have already covered: high availability across single or multiple Availability Zones, routing decisions based on health checks, secure load balancing through security groups in a VPC, and sticky sessions so that a user's traffic is always routed to the same instance for a seamless experience.

Moving on: what are the possible connection issues you might encounter when connecting to an EC2 instance? There are many, but some of the most common are: an unprotected private key file; server refused key; connection timed out; no supported authentication method available; host key not found, permission denied; and user key not recognized by the server, permission denied. These are some of the common issues encountered when connecting to an EC2 instance.

Moving on: mention the time span in which an AWS Lambda function will execute. All AWS Lambda execution takes place within 300 seconds of placing the call to AWS Lambda. The default timeout is 3 seconds, and beyond that you can set any value between 1 and 300 seconds, so 300 seconds is the maximum limit.

Next: mention the role of SQS in Lambda. SQS is an approach used for sharing and passing information among different hosts and connectors; communication can be established between functional components even if they are different. There are many advantages to using SQS, and several classes of failure are eliminated with the help of SQS in Lambda.

The next question is: what are final variables? Once assigned, final variables cannot be changed. Variables that are never reassigned after their value is set, even without being declared final, are known as effectively final variables. They play an important role in testing and in lambda expressions, and most local variables used there are final or effectively final.

Moving on to the next question: a company needs to monitor the read and write IOPS for its AWS MySQL RDS instance and send real-time alerts to its operations team; which AWS service can accomplish this? Among the candidate services, Amazon Simple Email Service, CloudWatch, Amazon Simple Queue Service, and Route 53, we should go for Amazon CloudWatch: it is the cloud monitoring tool and hence the right service for this use case, while the other services are used for other purposes, for example Route 53 for DNS services. So for this particular case you should opt for the CloudWatch service, which is the apt choice.
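A minimal sketch of such an alert, assuming an SNS topic the operations team subscribes to already exists; the instance identifier, threshold, and topic ARN are placeholders, and a matching alarm on WriteIOPS would look the same:

```sh
# Alarm when ReadIOPS on the MySQL RDS instance averages above 1000 for one minute.
aws cloudwatch put-metric-alarm \
  --alarm-name rds-read-iops-high \
  --namespace AWS/RDS \
  --metric-name ReadIOPS \
  --dimensions Name=DBInstanceIdentifier,Value=mydb-instance \
  --statistic Average --period 60 --evaluation-periods 1 \
  --threshold 1000 --comparison-operator GreaterThanThreshold \
  --alarm-actions arn:aws:sns:us-east-1:123456789012:ops-alerts
```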
The next question is: what is configuration management, and why would you want to use it with cloud provisioning of resources? Configuration management has been around for a long time in web operations and systems administration, yet its adoption has been limited: most systems administrators still configure machines the way software was developed before version control, that is, by manually performing modifications on servers, so every server ends up slightly and customarily different. Troubleshooting is then tedious, as you log into the box and work on it directly. Configuration management brings heavy automation tooling into the picture, managing servers like the strings of a puppet. This enforces consistency, good practices, and reproducibility, as all configurations are maintained and versioned; it also proposes a distinct way of operating, which is the biggest barrier to its adoption. Move to the cloud and configuration management becomes even more critical, because virtual servers such as Amazon's EC2 instances are far less durable than physical ones, and you absolutely need a tool to rebuild them at any moment; this promotes practices like automation, reproducibility, and disaster recovery into the core of how you work.

Next up, the final question: what is AWS Certificate Manager? AWS Certificate Manager, or ACM, manages the complexity of provisioning, deploying, and renewing the certificates issued through ACM for your AWS-based websites and applications. You use ACM to request and maintain a certificate and then use other AWS services to provision the ACM certificate for your website or application. ACM certificates are currently available for use only with Elastic Load Balancing and Amazon CloudFront; you cannot use ACM certificates outside of AWS.
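A minimal sketch of requesting such a certificate from the CLI, with an example domain, validated through DNS and later attached to a load balancer or CloudFront distribution:

```sh
# Request a public certificate for a domain (example values).
aws acm request-certificate --domain-name www.example.com --validation-method DNS

# List certificates managed by ACM in this account and region.
aws acm list-certificates
```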
So these were some of the most frequently asked questions in AWS Solutions Architect interviews, all of them advanced-level questions, and with this we have come to the end of today's session. If you have come across other questions that are frequently asked in interviews, or if you have attended interviews where you found new questions, put them in the comment section below and also share your opinion on today's session. Till then, thank you and happy learning. I hope you have enjoyed listening to this video; please be kind enough to like it, and you can comment any of your doubts and queries and we will reply to them at the earliest. Do look out for more videos in our playlist and subscribe to the edureka channel to learn more. Happy learning.