Transcript for:

hey this is andrew brown your cloud instructor from exam pro bringing you another complete study course and this time it's the aws certified cloud practitioner made available to you here on free code camp and if you think you've seen this course before that's because this is a major update from the very popular 2019-20 course that had over 2 million views and this time around we have three times more content so this course is designed to help you pass and achieve the aws certification and the way we're going to do that is by going through lecture content doing labs in our own account utilizing a practice exam downloading the cheat sheets on the day of the exam and then once you pass you can prove on your resume and linkedin that you have that aws knowledge to get that cloud job or to get that promotion to tell you a bit about me i was previously the cto of multiple edtech companies with 15 years industry experience five years specializing in the cloud i'm an aws community hero i publish multiple free cloud courses i love star trek and coconut water and i just want to take a moment to thank people like you because it's you that make these free courses possible and if you want to know how to support more free courses like this one the best way is to buy our extra study materials and so for this course it's at exampro.co/clf-c01 this is where you'll get study notes flash cards quizlets downloadable lecture slides downloadable cheat sheets practice exams you can ask questions and get support and i also just want to tell you if you do sign up you're going to get additional stuff right away so you'll get the free practice exam and cheat sheet there's no credit card required and there's no trial limit so there's no reason not to sign up and if there are course updates check the description on youtube to see if there are any updates okay so there might be corrections additions modifications and this is just going to ensure that 
you're utilizing the latest version of this course and so to keep up to date with upcoming courses follow me on twitter at andrew brown and if you are over there i'd love to hear if you have passed your exam and what you'd like to see next so there you go [Music] hey this is andrew brown from exam pro and we're at the start of our journey asking the most important question first which is what is the aws certified cloud practitioner so the cloud practitioner is the entry level aws certification teaching cloud fundamentals such as cloud concepts architecture deployment models it will take a close look at the aws core services a quick look at the vast amount of aws services and will cover topics like identity security governance billing pricing and support of aws services the course code for this exam is the clf-c01 but it's commonly referred to as the ccp and aws is the leading cloud service provider in the world and that makes the certified cloud practitioner the most common starting point for people breaking into the cloud industry no matter what their path is so who is this certification for well you should be considering the aws cloud practitioner if you are new to cloud and need to learn the fundamentals if you are at the executive management or sales level and you need to acquire strategic information about cloud for adoption or migration or you are a senior cloud engineer or solutions architect who needs to reset or refresh their aws knowledge after working for multiple years and just seeing how the landscape has changed so what value does this certification bring well the aws certified cloud practitioner provides the most expansive view possible of cloud architectures and aws and when we're talking about that expansive view what you should be thinking about is it being a bird's eye view or a 50 000 foot view looking onto a panoramic landscape where you can see everything and the idea of this expansive view is to promote big picture thinking so the idea 
here is you're zooming out and assessing the aws cloud landscape for changes trends opportunities and being strategic about the approach and process for your cloud journey the aws cloud practitioner is not a difficult exam it will not validate that you can build cloud workloads for technical implementation roles like a developer engineer devops role it will not be enough to obtain a cloud role but it can help shortlist your resume for interviews the exam covers content not found in other certifications and it is recommended as an essential study for your aws journey so now let's take a look at the aws certification roadmap to see where we would go after the cloud practitioner and what kind of cloud roles would be associated with those certifications so at the start you get your cloud practitioner which is at the fundamental level after that we have the associate level such as the sysops administrator the developer and the solutions architect followed by the professional level the devops engineer the solutions architect professional and then the specialties such as security advanced networking database machine learning data analytics and sap on aws which just is not on here yet because it's such a new certification so after the cloud practitioner generally people will go for an associate and it's up to you to choose one of the three because they're all great routes but the most common one is the solutions architect associate because the most common role in the industry is a cloud engineer so even though it's called solutions architect they really should have named it cloud engineer because that is really what it is if you were to go the developer route you're basically becoming a cloud developer and then if you are going the sysops admin route you are becoming a junior devops engineer and it's not uncommon for people to obtain all three associates and a lot of times the order will be the solutions architect first because it's the easiest and has the 
broadest services followed by the developer which adds practical programming skills and life cycle stuff like deployment for apps followed by the sysops administrator which is considered the hardest of the three in the associate tier from there you can go for the solutions architect professional and that would be associated with a solutions architect or cloud architect role that's basically like a harder version of the cloud engineer with a lot more responsibilities if you were going the devops route you'd go for the devops engineer professional and so this would open you up to roles such as the devops engineer or the site reliability engineer an sre and some people like to get both of the professionals and that could be if you want to be a cloud architect or devops engineer because having adjacent skills and the professionals is always very useful now you don't have to go for a professional after the associate a lot of people will jump over to the specialties and so when we're looking at the solutions architect you basically have any pick after that but generally what i see are people going for data analytics or machine learning so for data analytics this would be if you want to be a data analyst or if you're doing machine learning this is where data scientists will go through the solutions architect route okay for the junior devops you could jump over to security and become a cloud security engineer if you want to go into devsecops so the automation of security operations you probably want to get the devops engineer or maybe after the devops engineer you might be transitioning to the advanced networking for roles like in netdevops where you're specializing in migration or hybrid engineer roles for architectures that use both on-premise and the cloud from the devops engineer position you can still go for the database or machine learning certification if you want to become either a data engineer or an ml ops engineer so there's a 
lot of opportunities here and there is no perfect route but these are suggestions for you to decide on your own okay so how long is it going to take to pass this certification well it's going to really depend on your background but if we had to generalize it we can look at it as kind of a scale and so if you are at the beginner level you're looking at 30 hours of studying and when we say beginner we're saying someone that has never used aws or any cloud provider has never written code or held a tech role and when we're looking at the other side of it someone that is experienced we're looking at a six hour study time and when i say that i'm talking about somebody that's watching on two times speed and is able to absorb this information very quickly so they have practical working experience with aws or they have equivalent experience in another cloud service provider like azure or gcp where they can translate that knowledge or they have a very strong background in technology where they've worked in the industry for many years and so their study time is going to be a lot shorter and so on average most people are going to take about 24 hours to study for this course and when we talk about the kind of stuff that you'll be doing it's going to be 50 percent lectures and labs and we call our labs follow alongs where the idea is you follow along in your own account and then 50 percent is the practice exams so if you look at the length of the content which is around 12 hours then you should expect to spend as much time doing practice exams to pass okay and the recommended time to study is one to two hours a day for 14 days okay so what kind of effort are we going to have to put in to pass this exam well you have to watch the lecture videos and memorize key information you'll need to do hands-on labs and follow along with your own account and you will need paid online practice exams that simulate the real exam and the last two here were things that 
i used to never suggest because you could literally just watch the videos and pass however aws has made this exam a lot more difficult and so for these last two points you do have to do these two things for the paid online practice exams that can be hard for some people so i've made it easier for you by providing you a full free practice exam on exam pro at exampro.co/clf-c01 and so you just have to sign up no credit card required and you'll get a full set of 65 questions that simulate the real exam okay so for the contents of the exam it is composed of four domains and each domain has its own weighting which determines how many questions from that domain will appear so for domain one which is cloud concepts we're looking at 26 percent for domain 2 security and compliance we should expect to see 25 percent of the questions from there for domain 3 which is technology and where we will see the most amount of questions we're sitting at 33 percent for domain four billing and pricing we have 16 percent of the exam there so just to emphasize for domain 3 you need to know a wide range of services but you also need to know in-depth the core services so where do you take the exam well at an in-person test center or online from the convenience of your own home aws is partnered with two different test center networks the first being psi and the second being pearson vue and they both offer in-person or online and these exams are proctored meaning there is somebody watching you to ensure that you are not cheating okay in order to pass this exam you have to score 700 points out of a thousand and so 700 generally equates to 70 percent but it's around 70 percent because aws uses scaled scoring meaning that they could adjust it based on how many people are passing or failing so always aim to get higher than 70 percent the exam contains 65 questions 50 scored and 15 unscored and you can afford to get about 15 questions wrong there is no penalty for wrong questions so you 
should always choose an answer and the questions come in two formats multiple choice and multiple answers for these unscored questions there are 15 on the exam they will not count towards your final score why are there unscored questions on the exam well unscored questions are used to evaluate the introduction of new questions they can determine if the exam is too easy and the passing score or question difficulty needs to be increased and they can discover users who are attempting to cheat the exam or steal and dump exam questions so if you encounter questions you've never studied for that seem really hard keep your cool and remember they may be unscored questions the duration of this exam is 1.5 hours so you have a little under 1.5 minutes per question the exam time is 90 minutes but the seat time is 120 minutes seat time refers to the amount of time you should allocate for the exam so that means including things like time to review instructions show the online proctor your workspace read and accept the nda and complete the exam and provide feedback and when you do pass this exam is valid for 36 months and that equates to three years before re-certification [Music] hey this is andrew brown from exam pro and i'm on the aws certified cloud practitioner page because what i want to show you here is the exam guide if you're wondering how to book your exam you go to schedule exam there and that's the way you can do it but if you scroll on down there's this download exam guide and this will download a pdf that will tell you everything about the exam and so just make note of the course code this is the clf-c01 because if this exam has a major change they'll call it the c02 okay and then you'll know that this exam might not fit the new exam guide okay so if we scroll on down there is a basic introduction they'll say you have to have six months of experience which is totally not true you can get in the cloud with no experience and be passing this exam within two to three weeks so 
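the scoring and timing math just described is simple enough to sketch out in a few lines of python so here is a rough calculation of how many scored questions you need right and how much time you have per question and keep in mind that scaled scoring means the real cutoff can shift so treat these numbers as approximations only:

```python
# Back-of-envelope math for the exam format described above. The figures
# come from the transcript; scaled scoring means the actual cutoff can
# move, so this is an approximation, not a guarantee.

TOTAL_QUESTIONS = 65
SCORED = 50           # questions that count toward your result
UNSCORED = 15         # evaluation questions, not counted
PASSING_SCALED = 700  # out of 1000, roughly 70 percent
EXAM_MINUTES = 90     # seat time is 120 minutes including check-in, nda, etc.

# roughly how many scored questions you must answer correctly
min_correct = -(-SCORED * PASSING_SCALED // 1000)  # ceiling division -> 35
max_wrong_scored = SCORED - min_correct            # -> about 15 wrong allowed

minutes_per_question = EXAM_MINUTES / TOTAL_QUESTIONS  # a little under 1.5

print(min_correct, max_wrong_scored, round(minutes_per_question, 2))
```

so with 50 scored questions and a rough 70 percent bar you can miss about 15 scored questions and you have just under a minute and a half per question which is why pacing with practice exams matters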
you can just kind of ignore that so it will just state that there is multiple choice and multiple response also known as multiple answer there are 50 scored questions on the exam with 15 unscored questions so you'll get 65 questions in total it's scored between 100 to 1000 the passing grade is 700 it explains about scaled scoring there then it goes on to the course content outline where we have the four domains and it has a big breakdown of all the things that could appear on the exam and the thing about this is that there's only 65 questions but if you break down all these points there's like three times more information than could possibly show up on the exam so just understand that you are going to be studying a lot of information but only one third of it's going to show up on your exam so what i did is i went through every single one of these things and i made sure that we are covering them some stuff i just never saw on an exam and one thing i never saw was the design principles i mean they are generally covered in the well architected framework but it's unusual because some of the things in here i just feel they aren't actually on the exam and they just kind of crammed this exam guide together but i was very thorough to make sure to add everything here so for security and compliance it's just knowing a collection of aws security services and some security concepts for technology this is our largest section you need to know so much stuff but we spent a lot of time in the course just covering technology then you have your billing and pricing and you could also say support and so that covers a lot of interesting things a lot of stuff around ec2 pricing and then they just have a big list of stuff so this is a bunch of random technologies and concepts that might be covered and then they talk about services and so again we cover basically everything just in case for you but yeah there you go [Music] hey this is andrew brown from 
exam pro and what we're looking at here is a free practice exam that i provide to you for this course and all you have to do is sign up on exam pro you don't even need a credit card and you can redeem the free available content here and this is really up to date and very well simulates what you will see on the actual exam and it's a full set of 65 questions so you're getting a real simulation here but what i'm going to do is just start it off here we're not going to do the whole thing i'm just going to click through and show you a couple of them so you have an idea of the level of difficulty of these questions so the first question we got presented with here is which support plans provide access to the seven core trusted advisor checks and so that is a question that you might need to answer i don't want to spoil this for you so i'm not going to tell you the answer i will go to the next one so a large accounting firm wants to utilize aws to store customer accounting information in archive storage and must store this information for seven years due to regulatory compliance which aws service meets this requirement so the first one you'll notice is multiple choice or sorry multiple answer so you have to select multiple answers before you can submit your answer and the next one here is just a single choice so those are the two types of questions you will see on the exam they're not going to ask you anything about coding you're not going to see any kind of code in terms of length that's pretty much what you'll see in terms of the questions i think in many cases i wrote them a little bit more in the style of the solutions architect associate to make them slightly more difficult just so that you're a little bit over prepared so if you do well on these practice exams you're going to do well on the real exam okay so i just wanted to kind of get you that exposure there okay [Music] hey this is andrew brown from exam pro and we are at the start of our 
journey asking the most important question first which is what is cloud computing so cloud computing is the practice of using a network of remote servers hosted on the internet to store manage and process data rather than a local server or personal computer and so when we're talking about on-premise you own the servers you hire the it people you pay or rent the real estate you take all the risks but with a cloud provider someone else owns the servers someone else hires the it people someone else pays or rents the real estate and you are responsible for configuring cloud services and code and someone else takes care of the rest of it for you okay [Music] so to understand cloud computing we need to look at the evolution of cloud hosting going all the way back to 1995 where if you wanted to host your website or web app you'd have to get a dedicated server so that would be one physical machine dedicated to a single business running a single project a site or an app and as you can imagine these are expensive because you have to buy the hardware outright have a place to store it pay for the network connection and have a person to maintain it but it did give you a guarantee of high security and they still do as of today so this model hasn't gone away but it's been specialized for a particular use case then came along the virtual private server so the idea is we still had one physical machine but now we were able to subdivide our machine into submachines via virtualization and so essentially you're running a machine within a machine and so you had better utilization of that machine running multiple web apps as opposed to having a physical machine per project so you got better utilization and isolation of resources and so these two options still required you to purchase a dedicated machine and so that was still kind of expensive but then came along shared hosting and so if you remember the mid-2000s like with godaddy or hostgator or any of those sites where you had really 
cheap hosting the idea is that you had this one physical machine shared by hundreds of businesses and the way this worked it relied on tenants under utilizing their resources so you wouldn't have a sub machine in there but you'd have a folder with permissions that you could use and so you would really share the cost and this was very very cheap but you were limited to whatever that machine could do and you were very restricted in terms of the functionality you had and there was this poor isolation meaning that if one person decided to utilize the server more they could hang up all the websites on that single server then came along cloud hosting and the idea is that you have multiple physical machines that act as one system so this is distributed computing and so the system is abstracted into multiple cloud services and the idea is that you basically get the advantages of a lot of the things above so it's flexible you can just add more servers it's scalable it's very secure because you get that virtualized isolation you get it at an extremely low cost because you're sharing that cost with other users where in shared hosting it might be hundreds of businesses here we're looking at thousands of businesses and it was also highly configurable because it was a full virtual machine now cloud actually still includes all of these types of hosting they haven't gone away but it's just the idea that you now have more of a selection for your use case but hopefully that gives you an idea of what cloud hosting looks like and it really comes down to distributed computing okay [Music] hey this is andrew brown from exam pro and before we talk about aws we need to know what is amazon so amazon is an american multinational computer technology corporation headquartered in seattle washington and so this is the seattle skyline with the space needle and amazon was founded in 1994 by jeff bezos and the company started as an online store for books and 
expanded to other products so as you can see this is jeff bezos a long time ago and he has this interesting spray painted sign and his desk is held up by cinder blocks and it looks like his desk is an old table or something and he's working really late and he was already a millionaire at this time and he would be driving into work in his honda accord because his motivation was always to put all the money back into the company so it really shows that he worked really hard and it did pay off because amazon has expanded beyond just an online commerce store into a lot of different things such as cloud computing which is amazon web services digital streaming such as amazon prime video prime music they bought twitch.tv they own the whole foods market grocery store they have all this artificial intelligence they own low orbit satellites and a lot more stuff it's hard to list it all and so jeff bezos today is not the ceo it's actually andy jassy who is the current ceo of amazon he was previously the ceo of aws so jeff bezos can focus on space travel so there you go [Music] hey this is andrew brown from exam pro and we are taking a look at amazon web services and this is the name that amazon calls their cloud provider service and it's commonly referred to just as aws so here is the old logo where we see the full name and here is the new logo but i like showing the old logo because it has these cubes which best represent what aws is and it is a collection of cloud services that can be used together under a single unified api to build a lot of different kinds of workloads so aws was launched in 2006 and is the leading cloud service provider in the world i put an asterisk there because technically aws existed before 2006 and a cloud service provider which is what aws is is often initialized as csp so if you hear me saying csp i'm just saying cloud service provider okay so just trying to look at the timeline of when services rolled out the first 
one came out in 2004 and was simple queue service sqs and this service still exists as of today but at the time it was the only service that was publicly available so it wasn't exactly a cloud service provider at this time and it wasn't even called aws it was just sqs but then a couple years later we had simple storage service also known as s3 which was launched in march of 2006 and then a couple months later we had elastic compute cloud also known as ec2 and ec2 is still like the most used service within aws and is like the backbone for pretty much everything there then in 2010 it was reported that all of amazon.com's retail sites had migrated to aws so even amazon was using aws full steam and to support industry-wide training and skill standardization aws began offering a certification program for computer engineers in april 2013 and this is the type of certification that we are doing as we speak so i just want you to know that aws was the one leading cloud certifications and we just want to take a look here at the executive level as of today the ceo is adam selipsky he's the former ceo of tableau and he spent a decade with aws as a vp of marketing sales and support so he was there he had left for a bit and now he is back then we have werner vogels and he's the cto of amazon he's been the cto for pretty much the entire time aws has existed with the exception of some time in the first year he's famous for the quote everything fails all the time and then there's jeff barr who's the chief evangelist so if you're ever wondering who is writing all the blog posts and talking about aws it's always jeff barr okay [Music] all right so what i want to do here is expand on what is a cloud service provider also known as a csp just because there's a lot of things out in the market there that might look like a csp but they actually are not so let's go through this list and see what makes a csp so this is a company which provides multiple cloud services ranging from tens to 
hundreds of services those cloud services can be chained together to create cloud architectures those cloud services are accessible via a single unified api so in aws's case that is the aws api and from that you can access the cli the sdk the management console those cloud services utilize metered billing based on usage so this could be per second per hour vcpus memory storage things like that those cloud services have rich monitoring built in so every api action is tracked and you have access to that so in aws's case it's aws cloudtrail those cloud services have an infrastructure as a service offering so iaas that means they have networking compute storage databases things like that those cloud services offer automation via infrastructure as code so you can write code to set everything up and so here's just kind of an example of an architecture where we have a very simple web application running on ec2 behind a load balancer with a domain with route 53 but the idea is just to show you that you're chaining these things together if a company offers multiple cloud services under a single ui but does not meet most or all of these requirements it would just be referred to as a cloud platform so when you hear about twilio or hashicorp or databricks those are cloud platforms and aws azure gcp are cloud service providers okay [Music] let's take a look here at the landscape of cloud service providers and the industry likes to break these down into three tiers so we have tier one so this is top tier these were early to market they have a wide service offering they have strong synergies between services and they're well recognized in the industry and in the leading spot is amazon web services and there's no surprise to this because they were the first to develop the technology and so they pretty much dominated the market for multiple years before anyone entered and so it's going to be very hard for anyone to catch up or even 
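going back to that csp checklist for a second the metered billing point is worth making concrete because it's the biggest mental shift from traditional hosting so here's a tiny python sketch of what paying per second of usage looks like and note the rate below is a made-up assumption for illustration not a real aws price since real prices vary by service instance type and region:

```python
# Toy illustration of metered billing, one of the CSP traits listed above.
# The hourly rate is a hypothetical example figure, NOT a real AWS price;
# always check the service's pricing page for actual rates.

HOURLY_RATE = 0.0116     # hypothetical on-demand rate for a small vm, usd/hour
SECONDS_BILLED = 93_600  # 26 hours of actual usage this month

per_second_rate = HOURLY_RATE / 3600
cost = SECONDS_BILLED * per_second_rate

# you pay for what you used, not a flat monthly fee
print(f"${cost:.4f}")
```

so running that hypothetical machine for only 26 hours in a month costs about 30 cents instead of a full month's flat fee which is the whole point of metered billing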
overtake them but right behind them is microsoft azure then we have google cloud platform and these three are known as the big three because they're the most used around the world and we actually have a fourth one that's in the tier one and that's alibaba cloud you might not know about it just because it really is based in mainland china and in the asia region so it is really big but it's just the fact that there's that divide between mainland china and the rest of the world okay you have tier two so these are the mid-tiers so at one point they could have been top tier but they were just slow to innovate and so they had to turn to specialization but they're all backed by well-known tech companies that have been around for a long time well before aws existed so we have ibm cloud oracle cloud and rackspace and so rackspace's offering is actually their software called openstack which allows you to run a cloud service provider-like environment on your on-premise okay and so these are still in use so oracle cloud what they usually do is they try to fight on price and ibm cloud they fight on ai and ml solutions against the top tier then you have the tier three the light tier and so these were virtual private server providers that turned to offer core iaas infrastructure as a service offerings and so they're simple and cost effective and a lot of people that are getting into cloud or even just trying to deploy apps are probably using these and not realizing they're cloud service providers so we have vultr digitalocean and linode so they started with a single offering just virtual machines then they added a load balancer and so they're starting to get more so like digitalocean i think is getting a serverless offering and then linode or sorry vultr is getting a managed kubernetes service and so they kind of live in this realm of are they csps and i would classify them as they are i would say they are a tier three they're just 
a light tier and i'm sure they'll expand their services to have more of the core but they're just going to stay i think very small in general okay [Music] so how do we know who is the leader in the market well it all comes down to the magic quadrant and this is a series of market research reports published by it consulting firm gartner that rely on proprietary qualitative data analysis methods to demonstrate market trends such as direction maturity and participants people take these graphs very seriously and so this is what it looks like and as you can see amazon web services is marked as the leader and the closer you are to this top corner here the better off you are as you can see microsoft is not too far behind followed by google then followed by alibaba cloud then by oracle ibm tencent which we don't ever talk about and then there's the other ones that just don't show up because they're so small like digitalocean and linode there so generally that gives you kind of an idea how the market is growing and stuff like that but as you can see there's still a lot for the other ones to do to catch up to aws okay [Music] so a cloud service provider can have hundreds of cloud services that are grouped into various types of services but the four most common types of cloud services for infrastructure as a service and i call these the four core would be compute so imagine having a virtual computer that can run applications programs and code networking so imagine having a virtual network defining internet connections or network isolation between services or outbound to the internet storage so imagine having a virtual hard drive that can store files databases so imagine a virtual database for storing reporting data or a database for general purpose web applications and aws in particular has 200 plus cloud services and i want to clarify what cloud computing means because notice that we have cloud computing cloud networking cloud storage cloud databases but 
the industry often just says cloud computing to refer to all categories even though it has compute in the name so just understand when someone says cloud computing they don't generally mean just the subcategory they're talking about all of cloud okay [Music] so aws has a lot of different cloud services and i just want to kind of go quickly over the types of categories that we can encounter here and just mention the four core so any csp that has iaas will always have these four core service offerings we have compute so in aws this would be ec2 vms storage this could be something like ebs virtual hard drives database so that could be rds sql databases networking and content delivery but really it's networking and this would be vpc so a virtual private cloud network okay so let's just look at all the categories that are outside the four core so there could be analytics application integration ar vr aws cost management blockchain business applications containers customer engagement developer tools end user computing game tech iot machine learning management and governance media services migration and transfer mobile quantum technologies robotics satellites security identity and compliance if there were more i would not be surprised but you can see there's a lot of stuff that's going on here [Music] so let's take a look at all the services that are available to us so if you're on the marketing website which is aws.amazon.com what you'll see in the top left corner is products and so these are all the categories and for whatever we want if it's like ec2 we can go into here and we can read all about it so usually we'll have our overview all right and that's not very useful and then we'll go over to features and so this can be kind of useful to get some basic information and pricing which is something you'll do a lot in aws is you're always going to be going to a service and looking up its price and so you'll make your way over here every single one is different a very 
important page would be getting started so this will give you basic information but what i like to do is go all the way down to the bottom here and find my way over to the documentation so i'll go here to documentation to get that deeper knowledge about the service and as you can see things get pretty deep with aws in terms of the information they have so hopefully that gives you an idea of the scope also when you're logged into aws and this will be when we create our account you can explore all the services this way as well so these are all the aws services but just notice that there are two ways to explore them where this is you actually utilizing the services and the marketing website is you reading about them and learning all about them okay

hey this is andrew brown from exam pro and we are looking at the evolution of computing your cloud service provider has all of these offerings and the idea is that you need to choose the one that meets your use case a lot of times this comes down to the utilization of space that's what we're trying to illustrate in this section and the trade-offs of why you might want to use some of these offerings okay for dedicated we're talking about a physical server wholly utilized by a single customer that's considered single tenant and for google cloud we're talking about single node clusters and bare metal machines where you have control of the virtualization so you can install any kind of hypervisor or virtualization you want on the system the trade-off here though is that you have to guess up front what your capacity is going to be and you're never going to 100 percent utilize that machine because it's going to have to be a bit under in case the utilization goes up that's you choosing the cpus and the memory you're going to end up overpaying because you'll have an underutilized server it's not going to be easy to vertically scale it's not like you
can just resize it because the machine you have is what you have right you can't add more i mean i suppose they can insert more memory for you but that's a manual migration so it's very difficult and replacing the server is also very difficult okay so you're limited by the host operating system it's not virtualized so whatever is on there is on there and that's what your apps are going to have access to if you decide to run more than one app which is not a good practice for these kinds of machines you're going to end up with resource sharing where one app might utilize more than the others technically with a dedicated machine you have a guarantee of security privacy and full utility of the underlying resources i put an asterisk there because yes it's more secure but it's up to you to make sure that it's more secure so it's up to your skills at security right whereas if you had a virtual machine or anything above that there's more responsibility on the cloud service provider to provide a secure machine and they can do a better job than you so why would you use a dedicated machine well maybe you're doing high performance computing where you need these machines very close together and you have to choose what kind of virtualization you need to have okay so then we're looking at virtual machines the idea here is you can run a machine within a machine the way that works is we have a hypervisor this is a software layer that lets you run the virtual machines the idea here is now it's multi-tenant you can share the cost with multiple customers you're paying for a fraction of the server you'll still end up overpaying for the underutilized virtual machine because with a virtual machine you still have to say how many vcpus and how much memory and you don't want an app that uses 100 percent right you want to use exactly the amount you need but you can see here there's still going to be some
underutilization you're limited by the guest operating system now but now it's virtualized so at least it's very easy to possibly migrate away if you choose to run more than one app on a virtual machine it can still run into resource sharing conflicts it's easier to export or import images for migration it's easier to vertically or horizontally scale okay and virtual machines are the most common and popular offering for compute because people are just very comfortable with them then you have containers and the idea is you have a virtual machine running these things called containers the way they do that is similar to a hypervisor but instead you have something like the docker daemon so it's a container software layer to run those containers there are different kinds docker is the most popular and the great thing is you can maximize the capacity because you can easily add new containers resize those containers and use up the rest of the space it's a lot more flexible okay your containers will share the same underlying os but they are more efficient than multiple vms multiple apps can run side by side without being limited by the same os requirements and without causing conflicts during resource sharing so containers are really good but the trade-off is they're a lot more work to maintain then you have functions functions go even a step further and the idea is that the containers we talked about are a lot of work to maintain so now the cloud service provider is taking care of those containers generally sometimes not it depends if it's serverless or not but the idea is that you don't even think about the os or anything this is called serverless compute you just know what your runtime is you run ruby or python or node and you just upload your code and you say i want this to be able to run for this long and use this amount of memory okay you're only responsible for your code and
data nothing else it's very cost effective you only pay for the time the code is running and the vms only run when there is code to be executed but because of that there is this concept of cold starts and this is where the virtual machine has to spin up and so sometimes requests can be a bit slow so there's a bit of a trade-off there but functions or serverless compute is generally one of the best offerings as of today but most people are still getting comfortable with that paradigm okay

hey this is andrew brown from exam pro and we are taking a look at the types of cloud computing and the best way to represent this is a stacked pyramid and we'll start our way at the top with saas also known as software as a service so this is a product that is run and managed by the cloud service provider you don't have to worry about how the service is maintained it just works and remains available so examples of this and actually the first company to coin this was salesforce then there are things like gmail and office 365 so i think microsoft word excel things like that and they run in the cloud okay and saas is generally designed with customers in mind then came along platform as a service also known as paas and these focus on the deployment and management of your apps so you don't worry about provisioning configuring or understanding the hardware or operating system and so here we'd have things like elastic beanstalk heroku which is very popular among developers that just want to launch their code or google app engine and that is the old logo but that's the logo i like to use because i think it looks cool and so these are intended for developers the idea is that you just deploy your code and the platform does the rest then there is infrastructure as a service there's no easy way to say that like it's easy to say saas or paas but there's no easy way to say iaas so this is the basic building blocks for cloud it and it provides access to networking
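to make the serverless functions idea concrete here is a minimal sketch in python the handler signature follows the convention used by aws lambda's python runtime but treat this as an illustration of the model not the actual service you only write the function the provider handles the os runtime and scaling

```python
import json

# minimal sketch of a serverless function: you upload only this
# code, and the provider supplies the event and context when a
# request arrives (signature follows the aws lambda python
# runtime convention)
def lambda_handler(event, context):
    # you're only responsible for your code and data
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello {name}"}),
    }

# locally, invoking it is just a function call with an event dict
response = lambda_handler({"name": "andrew"}, None)
print(response["statusCode"])  # 200
```

the cold-start trade-off mentioned above happens before the provider can even call this function which is why the first request after idle time can be slow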
features computers and data storage space and the idea here is you don't worry about the it staff data centers and hardware and so that would be like microsoft azure aws oracle cloud things like that and these are for administrators okay so there you go

hey this is andrew brown from exam pro and we are taking a look at cloud computing deployment models starting with public cloud and the idea here is that everything when i say everything i'm talking about the workloads the projects the code is built on the cloud service provider so here is a diagram where we have an ec2 instance a virtual machine running our application and then we have our database in rds and we have the internet coming into our aws account and so everything is contained all of our infrastructure is within aws all right and so this is known as being cloud native or cloud first and i put an asterisk beside cloud native because that was a term used prior to cloud service providers to refer to containers or open source models being deployed and being portable to other places so just understand that it has two meanings but in this context cloud native just means being native to the cloud like using cloud to begin with okay then we have private cloud so everything is built on a company's data center and being built on a data center is known as being on premise because that is where the data center resides near where you work and so here you could be using cloud but you'd be using openstack which would be a private cloud so here we have our on-premise data center and the internet coming into our data center and we're running openstack where we can launch virtual machines and a database okay then there's the concept of a hybrid cloud so using both on-premise and a cloud service provider together and so the idea here is we have our on-premise data center and then we have an established connection maybe it's a vpn connection maybe it is a direct connection but the idea is that
we're bridging that connection and utilizing both our private and our public infrastructure to create a cloud workload then there is a fourth one called cross-cloud sometimes it's known as multi-cloud and sometimes it's erroneously referred to as hybrid cloud but it generally is not hybrid cloud okay the idea here is when you're using multiple cloud providers and so one example here could be using services like azure arc so azure arc allows you to extend your control plane so that you can deploy containers for kubernetes in azure within amazon eks and within gcp kubernetes engine but being cross-cloud doesn't necessarily mean that you're using a service that works across the clouds and manages them it could just mean using multiple providers at the same time another service that is similar to azure arc but is for google cloud platform is known as anthos aws has traditionally not been cross-cloud friendly and so we haven't seen any kind of developments there whereas we see these other cloud service providers behind aws trying to promote it to grab more of the market share okay

so let's talk about the different deployment models and what kind of companies or organizations are utilizing these particular categories so for cloud again this is where we're fully utilizing cloud computing hybrid is a combination of public cloud and on-prem or private cloud and then on-prem is deploying resources on-premise using virtualization and resource management tools sometimes called private cloud or it could be utilizing something like openstack so for companies that are starting out today or are small enough to make the leap from a virtual private server to a cloud service provider this is where we're looking at cloud so we're looking at startups saas offerings new projects and companies so maybe this would be like basecamp dropbox squarespace then for hybrid these are organizations that started with their own
data center but can't fully move to cloud due to the effort of migration or security compliance so we're talking about banks fintech investment management large professional service providers legacy on-prem so maybe cibc which is a bank deloitte the cpp investment board and then for on-premise these are organizations that cannot run on cloud due to strict regulatory compliance or the sheer size of the organization or they just have an outdated idea of what cloud is so they have a lot of difficulties in terms of politics adopting cloud so this would be public sector like government super sensitive data like hospitals large enterprise with heavy regulation insurance companies so again hospitals maybe aig the government of canada and i shouldn't say that they aren't using cloud because aws and all the cloud providers have public sector offerings i'm just trying to stage these as examples of things that could still be using on-premise so i know the government of canada definitely uses cloud in a lot of ways same with aig and hospitals but generally these are the last holdouts of on-prem because there really isn't a good reason to be fully on premise anymore but again there are some that are still doing that okay

hey this is andrew brown from exam pro and we are at the start of our journey creating ourselves an aws account so what you need to do is go to aws.amazon.com if you don't have a lot of confidence how to get there just type aws into google and then click the link where it says aws.amazon.com it'll take you to the same place now notice we have a big orange button in the top right corner that says sign in to the aws console that's if it's not the first time you've ever been to this website so if i go to aws.amazon.com incognito it will have the create an aws account button i don't know why they don't keep this consistent across the board but i wish
they did but if you are on that screen you can click here or there but if you see something that doesn't say create an account etc you can just sign in okay and then down below you can hit create a new aws account so that's the way you're going to get in there and so you're going to put in an email a password and create an aws account name i've created this so many times and it's so hard to set up new emails i'm not going to do this again it's not complicated but one thing i need to tell you is that you do need to have a credit card you cannot create an account without a credit card and for those who are in places where maybe you don't have a traditional credit card maybe you can get a prepaid one so up here in canada we have a company called koho and koho is a visa debit card so it's basically a virtual prepaid credit card and these do work on the platform as well so if you have a traditional credit card or can possibly find one of these you still have to load it up with money but it does give you a bit more flexibility to create that account so what i want you to do is go through that process yourself it's not complicated and i'll see you on the other end okay

so once you've finished creating your account you should be within the aws management console and this is the page you're always going to see when you log in it's always going to show the most recent services here and you'll notice in the top right corner that i have my account called exam pro if you're wondering how to change that name what you do is go to my account here and once there you'll have your account settings up here if you go to edit you can change that name okay so sometimes when you create your account you don't like the account name that you gave it and this is your opportunity to fix it but once we're in our account what i want you to do is immediately log out because i want you to get familiar with the way you
log into aws because it is a bit different than other providers and i don't want you to get hung up later on with your account so i've logged out i'm going to go ahead and log back in so you can click the orange button or what i like to do is drop down my account and go to aws management console it's a lot more clear and you'll notice we're going to have two options root user and iam user so this is what i'm talking about for the confusion so when you log in as your root user you are always using an email and when you're logging in as an iam user you're actually going to be entering the account id or account alias but what we'll do is go to the root user and this is the email you used to sign up for the account so for me i called this one andrew plus sandbox at exam pro dot co i'm gonna go to next sometimes you get this captcha box it's very annoying but it happens from time to time and so what i'm gonna do is just go ahead and type that in okay and hopefully it likes it and then i'm just going to enter my password all right and i'll be back into my account and so notice it takes me back to the aws management console so the root account is not something we want to be generally using except for very particular use cases and we do cover that in the course but what i want you to do is go set yourself up with a proper account and so what we'll do is go to the top here and type in iam and this stands for identity and access management and we'll click on iam here and on the left hand side we're going to see a bunch of options and notice right away we get to the iam dashboard where it's going to start to make some recommendations for us the first one is always to add mfa multi-factor authentication another thing you can do is set an account alias so you can see that i've set one here prior so if i just go ahead and remove it the way we'd have to log in is via the account id and i don't really like that so i can
just rename it to deep space nine and these are unique so you have to pick something that is unique to you so it could be your company name or things like that it's gonna make it a lot easier to log in when we create our additional user here so we'll come back to mfa at some point what i want you to do is go over to users and go ahead and make yourself a new user and so i'm going to call this one andrew brown and i'm going to enable programmatic access and aws management console access so the first is going to allow me to use the apis to programmatically work with aws and the second is going to allow me to log into the console which is pretty fair here so now that i have this we can auto generate a password or give it a custom password i'm just going to auto generate it for the time being and here it says you must create a new password at the next sign in which sounds fair to me and we go ahead and create ourselves a new group so it's pretty common to create a group called admin and notice here this is where we're going to have a bunch of different policies so the first one here which is administratoraccess provides full access to aws services and resources and this pretty much gives you nearly the same capabilities as the aws root user account and that's going to be okay because we are an admin in our account so i'll checkbox that on but i just want to show you here if you drop down filter policies and go to aws managed job functions these are a bunch of pre-made aws policies that you could apply to different users so what's really popular after administrator access is to give the power user access and this one allows a user to do basically anything they want with the exception of management of users and groups so it could be that that's something you'd want to do for some of your users i just don't want to have any trouble so i'm going to give us admin access here and we're going to go ahead and
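as a side note that administratoraccess managed policy is under the hood just a small json policy document the sketch below builds what i believe that document looks like in python treat it as illustrative rather than authoritative

```python
import json

# sketch of the json behind the administratoraccess managed policy:
# allow every action on every resource, which is what "provides
# full access to aws services and resources" means under the hood
admin_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "*",    # every api action
            "Resource": "*",  # on every resource
        }
    ],
}

print(json.dumps(admin_policy, indent=2))
```

by contrast a policy like power user access would deny the iam user and group management actions while allowing most everything else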
create this group and so here is the group that we are creating we're going to go next we can apply our tags if we want i'm not going to bother we hit next review and then hit create user all right and so now what it's doing is it's showing us the access key id and the secret access key that we can use to programmatically access aws and then there's a password here so i'm going to go ahead and show it and what i'm going to do is just copy this to a clipboard and so i'm just copying that off screen here because i'm going to need it to log in and i'm just going to remember my username as well all right and so what we'll do is go ahead and hit close so what i'll do is go back to my dashboard here and remember i set my account alias as deep space nine but we could also use the account id to log in i'm just going to grab my account id off screen here and what i want to do now is go ahead and log out and then log in as this iam user and this is the one that you should always be using within your aws account you shouldn't be using your root user account so what i'll do is go over to iam user here and notice now that it says account id so 12 digits or the account alias so here i can enter in those numbers or i can enter in my alias which is deep space nine and again you'll have to come up with your own creative one there for yourself and we'll go ahead and hit next and so notice what it's going to do is now ask me what my iam user name is so i defined mine as andrew brown and then we had an auto-generated password there that we saw and so i'm going to place that in there we'll go ahead and hit sign in and so now right away it's going to ask me to reset the password so i'm going to put the old password in there and so now i need a new password i strongly recommend that you generate your passwords to be very strong i like to go to a password generator and i'll drop this down and i'll do something really long like 48 characters and if you don't like weird
characters you can take those out there sometimes it loads here so you gotta try it twice and we're gonna go down to whoops 48 there we go and so that's pretty darn long so i'm going to copy that off screen here so i do not forget and you probably would want to put this in a password manager something like dashlane or some sort of thing like that and we'll go ahead and we will paste that in and we'll see whoops i don't want google to save it and we'll see if it takes it and so there we go so what i'll do is now log out and i'll make sure my new password works because you really don't want to have problems later so we'll type in deep space nine andrew brown again this is going to be based on what you have set and we'll go ahead and log in and there i am and so now notice that it doesn't say exam pro it says andrew brown at deep space nine so it's using the account alias and showing the name and that's how i'm going to know whether i'm the root account user or whether i'm logged in as an iam user all right so there we go

okay so now that we have the proper user account to log in i just want to point out regions so in the top right corner you'll notice it says north virginia here it possibly will say something completely different for you but what you'll do is click and drop that down and you'll see a big list of regions and so sometimes when i log into aws it likes to default me to us east ohio but i honestly like to launch all my stuff in us east north virginia even though i'm in canada i probably should be using the canada central region down here but the default region is going to be based on your locality okay so just understand that it might be different i strongly recommend for all of our follow alongs you run in us-east-1 because us-east-1 is the original region and it also has the most access to aws services and some aws services such as billing and cost and things like that are only
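the long random password step a moment ago can also be done locally here is a minimal sketch using python's standard library secrets module generating a 48 character password restricted to letters and digits if you want to avoid the weird characters

```python
import secrets
import string

# generate a 48-character cryptographically random password,
# restricted to letters and digits (no "weird" characters)
alphabet = string.ascii_letters + string.digits
password = "".join(secrets.choice(alphabet) for _ in range(48))

print(len(password))  # 48
```

secrets is preferred over the random module for passwords because it draws from the operating system's cryptographically secure source of randomness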
going to show up in us east north virginia so just to make our lives a lot easier we're going to set it there but i want you to understand that some services are global services meaning that it doesn't matter what region you're in it's going to default to global and one example could be cloudfront so if i jump over to cloudfront here for a moment and we do seem to have some cloudfront distributions here from a prior follow along but notice up here that it now says global so cloudfront does not require a region selection let's make our way over to s3 all right and this one's also global so again this one does not require a region selection but if you go over to something like ec2 okay this has a region dependency so just be really careful about that because a lot of times you'll be doing a follow along and you'll be like why aren't these resources here or whatever and it's because this got switched on you and it can happen at any time so just be cautious or aware of that okay

so one of the major advantages of using aws or any cloud service provider is that it utilizes metered billing so that is different from a fixed cost where you'd say okay i want a server for x amount of dollars every month but the way metered billing works is that it's going to bill you by the hour or by the second based on a bunch of factors and so you're going to be able to get services at a lower cost however if you choose an expensive service and you forget about it or if there's a misconfiguration where you thought you were launching something that was cost effective but it turned out to be very expensive you could end up with a very large bill very very quickly and so that is a major concern for a lot of people utilizing cloud but there's a lot of great tooling built into aws to allow you to catch yourself if you happen to make that mistake and before we go ahead and learn how to do that i want to show you some places where you could end up having excessive spend without knowing it so one example
and this actually happened to me when i first started using aws before i even knew about all the billing tools is i wanted to launch a redis instance and so you just have to watch you don't have to do this but elasticache is a service that allows you to launch either a memcached or redis database and i just wanted to store a single value and so i went here and i scrolled down it looked all good and i hit create but i wasn't paying attention because apparently it defaulted the node type here to cache.r6g.large all right and you might think that aws has your best interest in mind and most services are pretty good they make sure that they're either free or very low spend but some of these and elasticache is an older service just have these weird defaults so if we were to go look up the r6g large and look at its spend and we would go over here whoops i think i went to the china one but if we were to go over here and look for that instance i'm just trying to find it here for cost this one down below here it is so this one costs about twenty cents per hour it doesn't sound like a lot but if we go here and we do the math we say 730 which is the amount of hours in a month that is about 150 dollars okay so if you don't know about that and forget about it that's gonna be 150 dollars and i'm going to tell you that it used to be a lot higher i'm pretty sure they used to have it defaulted to something larger because i remember i did this and i had a bill that came in that was like 3000 usd and i'm in canada so 3000 usd is like a million dollars up here and so i remember it was a big concern and i freaked out but that was okay because all i had to do was go to support and what i had done is i went to the support center and i had opened a support case and i just said hey i have this really big bill so you go here right and
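the back-of-envelope math above is just an hourly on-demand rate times roughly 730 hours in a month here is a sketch the 0.206 dollars per hour figure for cache.r6g.large is an assumed example rate always check the current pricing page for real numbers

```python
# monthly cost estimate: hourly on-demand rate times the roughly
# 730 hours in a month; the rate below is an assumed example for
# cache.r6g.large, not a quoted price
hourly_rate = 0.206      # usd per hour (assumed example rate)
hours_per_month = 730    # roughly 24 * 365 / 12
monthly_cost = hourly_rate * hours_per_month

print(round(monthly_cost, 2))  # 150.38
```

this is why a default that looks like pennies per hour quietly becomes a 150 dollar line item if you launch it and forget about it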
you look for billing and you look for something like charging query or misspend and you say you know um you know like help my bill's too high and you just say like you explain the problem saying hey you know i was using elastic cash and it was set to a large default and i wasn't aware about it can you please give me back the money and the great thing is that aws is going to give you a free pass if it's your first time where you've had a misspending they generally will say okay you know don't do it again and if it happens again you will get billed but go ahead and learn how to set up billing alerts or things like that okay so just so you know don't freak out if you do have a really high bill you're going to get a single free pass but now that we know that let's go learn how to set up a budget okay [Music] all right so now that we've had a bit of a story about over span for misconfiguration let's learn how to protect ourselves against it and we're going to go ahead and set up a budget so go to the top here and type in budget and what that will do is bring us over to the billing dashboard another way to get here is to go click at the top here and go to my billing dashboard and then you'll see the left-hand menu here and so the great thing about budgets is that the first two are free it says there is no additional charge for any of his budgets you pay for configured use usage but i'm pretty sure that that's not true because it used to be abs budget reports okay so that costs something it used to be that aws budgets um after subscription enabled will occur 10 cents daily so in addition to budget monitor you can add actions to your budgets the first two action-enabled budgets are free okay so just be aware that just because it says there's no additional charge read into it because sometimes the fine line will tell you it does cost something but i know that the first two are free what we'll do is go ahead and create a budget i'm going to close these other tabs here since 
we have no need for them and we're going to be presented with a bunch of budget types we're concerned about cost today so we're going to go with a cost budget and notice we can change the period from monthly to daily to quarterly to annually if you change it to daily um you won't get forecasting so i don't want that today but a monthly is pretty good you can have a reoccurring which is strongly recommended and then you can put a fixed cost notice that i already have some spend on this account so it was like 25 bucks last month i'm going to set it my budget here to a hundred dollars and you can add filters here to um filter that cost out so if you want to say only for this region or things like that you could do that uh notice that this is my spend over here um so this is my budget and that's the actual cost notice my cost has been going up the last few months because i've been doing things with this account and so i'll do is say simple budget here we'll hit next and so now it's asking us if we want to configure alerts we probably do so you'd hit add alert and then you'd set a threshold like 80 percent or you could say an absolute value and then you put in your emails like andrew exam pro dot co and i want to point out that this is using um it was sns or it should be anyway so amazon sns has no upfront cost based on your stuff here so even though you're filling out an email you know and maybe it doesn't show it but i'm pretty sure that this would create an sns topic but what we'll do is hit next here we have an alert so we're just reviewing actually this is for attaching any action so maybe we want some kind of follow-up thing to happen here so we say add action and uh requires specific i am permissions on your behalf okay sure so i guess you could follow up actions that's no different than um on a building alarm but we're not really worried about that right now i'm not going to bother with an action we'll go ahead and create a budget and so here it's going to say 
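the 80 percent alert threshold just described is simple math a sketch with a 100 dollar monthly budget the alert fires once actual spend crosses 80 dollars

```python
# sketch of the budget alert threshold: a percentage of the
# monthly budget at which the alert email fires
budget = 100.00          # monthly budget in usd
threshold_percent = 80   # alert at 80% of budget

alert_at = budget * threshold_percent / 100

def should_alert(actual_spend):
    # true once spend reaches the configured threshold
    return actual_spend >= alert_at

print(alert_at)             # 80.0
print(should_alert(25.00))  # False
print(should_alert(85.00))  # True
```

the absolute value option mentioned above just skips the percentage step and sets alert_at directly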
that our budget is 100 it's going to show us the amount used forecasted amount current versus budget sometimes this takes time to show up so i'm going to hit refresh and see if it shows up yet there we go so notice we have forecasted amount current versus budget forecasted versus budget etc so it's pretty straightforward how that works i'm just curious if it actually created an sns topic so i'm going to go over here because a lot of services utilize sns so if i go over here default cloudwatch alarm i think this is something i had created before so i'm gonna go ahead and just delete it so default cloudwatch alarms actually i'm going to just click into here and see what i have confirmed so i think it might have used this when we created it but the reason i'm bringing up sns is that there are a lot of services that allow you to email yourself for alerts and they always integrate with this service and so i just kind of want to point that out so that you remember what sns is for but yeah so setting up a budget is not too hard so there you go

all right so now that we've set a budget what i want to talk to you about is the free tier and the free tier is something that is available to you for the first 12 months of a new aws account and allows you to utilize services without incurring any cost to you and so it's to your advantage to utilize this free tier as you are experimenting and learning cloud so if you want to learn about all the offerings what you do is go to google type in aws free tier and you'll get this page that explains all the sorts of things here so you can get 750 hours on ec2 rds things like that there are stipulations in terms of what it would be so here this is a t2 or t3 micro running linux red hat or other types of os okay so there are details you have to read the fine print some services are only available for the first two months things like that so it's going to highly vary based on the service but it's
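a quick way to see why 750 free-tier hours matters even the longest month has only 744 hours so one eligible micro instance can run nonstop all month and stay inside the free tier a sketch of that arithmetic

```python
# why 750 free-tier hours covers one instance running nonstop:
# even a 31-day month has fewer than 750 hours
free_tier_hours = 750
longest_month_hours = 24 * 31  # 744 hours in a 31-day month

print(longest_month_hours)                     # 744
print(free_tier_hours >= longest_month_hours)  # True
```

running two such instances at once would burn through the 750 hours in about half a month which is one of the fine-print details worth knowing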
But it's worth giving the areas you're interested in a read. Now, how do you know whether you're still inside the free tier or have gone outside it? That's what I want to talk about right now. I'm actually in another AWS account; notice in the top right corner it says brown-laptop@exampro.co. Sometimes I'll switch into different AWS accounts during these follow-alongs so I can best show you these settings. Make your way over to Billing, and actually I should show you up here: go to My Billing Dashboard, just to be consistent, and then on the left-hand side go to Billing Preferences. There you can enable "Receive Free Tier Usage Alerts" and put your email in and save. Turning on this feature means you receive email alerts when your AWS service usage is approaching or has exceeded the AWS free tier usage limits. While you're there, I also want you to checkbox "Receive Billing Alerts", so I can show you how to set a billing alert; AWS says budgets are the new thing, but billing alerts are still something we use as of today. With that checked on, if we go back to the dashboard it should show your free-tier usage in the alerts; it's just that I'm out of the free tier on this account, so it's not showing for me. As an example, if we scroll down in the documentation on tracking your free tier usage, you'd see a box like this saying, hey, here is your free tier usage limit and you're over it. That would generally show up on this panel, but again, I'm outside the free tier, so I'm not seeing it here today. Hopefully that's clear. [Music] All right, so we've created a budget and we're monitoring our free tier, but there's another way we can monitor our spend, and that is through billing alerts, or alarms.
This was the old way: before AWS Budgets, it was the only way you could do it. But I still recommend it, because there is a bit more flexibility with this service, so I wanted to teach it early on so you know what's available to you, or if you want to play around with it in the future. What you'll do is go to the top here and type in CloudWatch. CloudWatch is one of those services that's actually a collection of services: CloudWatch Alarms, CloudWatch Logs, CloudWatch Metrics are all individual services. AWS loves to update their interface, so sometimes you'll be presented with the option to switch to the latest interface; I'm going to try the new one here. That's one challenge with AWS: you always have to expect that they're going to change the UI on you, and you're going to work through it. I try to keep my videos up to date as best I can, but part of the challenge is getting used to that. This is what they have today; I don't know if they're going to stick with it, but make your way over to Alarms on the left-hand side, and notice that we actually have a section just for billing, which is interesting. I don't remember them having that before, so it's new. Here it says CloudWatch can help you monitor the estimated charges on your AWS bill, and remember that we had to turn that on. You get 10 free alarms and 1,000 free email notifications each month as part of the free tier, so understand that billing alarms do cost money if you go over that limit, but 10 free alarms is quite a bit. We'll go ahead and create our billing alarm: we're going to choose a metric, and from the options we want Billing. You can go by service or by total estimated charge; we'll do total estimated charge. We can only select USD.
I've never seen any other currency there. Here we get a little graph where we can see things, and this is where it's more powerful than Budgets, because you can do anomaly detection: instead of comparing against a single value, it checks against a band, a range. But what I'll do is just set a value here, like fifty dollars; notice it draws the line up here, and this is my current spend. Back to anomaly detection, which is a lot smarter: the idea is that if something falls outside that band by a certain amount, then it alerts. But I'm going to go back here and just set this to 50, and that looks okay to me. You can change the period; six hours is fine, and the additional configuration is fine as well, so we'll go ahead and hit Next. The idea is that if spend passes that red line, the alarm goes into an In Alarm state, and then we want it to trigger an SNS topic. I would generally just create a new one here, and we'll call it "my billing alarm", then set the email, andrew@exampro.co, and go ahead and create that topic. So that's now set, but notice it says Pending Confirmation: it has sent me an email and wants me to click the link to confirm that I want to subscribe to it. I'll just do that off screen; I'm going to pull up my email here, just give me a moment. Okay, so this is the email that came in, I'll confirm the subscription, it says I'm confirmed, and if I refresh this page we can now see that it is confirmed. All right, scrolling down here, we can also trigger an Auto Scaling action; maybe if you have too many servers you say, hey, the cost is too much, shut down those servers. There are EC2 actions, things like that.
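CloudWatch's anomaly detection is a trained model, but the core idea, alarming when a value falls outside an expected band rather than past a fixed line, can be sketched very roughly. The band here is a simple mean plus or minus three standard deviations over recent spend, which is my simplification for illustration, not the actual CloudWatch algorithm:

```python
from statistics import mean, stdev

def anomaly_band(history: list[float], k: float = 3.0) -> tuple[float, float]:
    """Expected band around recent values: mean +/- k standard deviations."""
    m, s = mean(history), stdev(history)
    return (m - k * s, m + k * s)

def is_anomalous(value: float, history: list[float]) -> bool:
    """True if the new value falls outside the expected band."""
    lo, hi = anomaly_band(history)
    return not (lo <= value <= hi)
```

With five days of spend around $19 to $23, a sudden $50 day falls well outside the band and would alert, while a $22 day would not; that's the advantage over a fixed $50 threshold, which would miss a jump from $20 to $45.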
These are kind of similar to Budgets; there are Systems Manager actions, and I imagine all of these things are available in Budgets as well, Budgets just makes it a little easier to look at. I'll name it "my simple billing alarm", hit Next, and hit Create Alarm, and there you go. Billing alarms don't have forecasting, things like that, but they do have their own kind of special utility, so I utilize both. Let's go back to our management console and move on to the next one. [Music] One of the strongest recommendations AWS gives you is to set MFA on your AWS root user account, so that's something we're going to do right now. Make sure you're logged into the root user account: I'm going to log out of my IAM user, go back to sign in, and log in as my root user. Sometimes the sign-in page will be expanded for the IAM user, so click "Sign in using root user email". I'll go ahead and enter the email that I used; if you switch accounts frequently they will ask you these silly captchas, which drive me crazy, but you probably won't encounter them as much as I do. I'll grab my password and paste it in. Now that I'm in, I'll make my way over to IAM, and actually, right there on the dashboard it says "Add MFA for root user", so we'll go ahead and hit Add MFA. That brings us to this screen where we can activate our MFA, and we have a few options: a virtual MFA device, a U2F security key, or other hardware like a Gemalto token. I generally use a security key, and I want to show you what I'm talking about. This is how I log into my machine and my AWS account: a security key, a YubiKey, that sits on my desk. I tape it so it doesn't fall off the cord.
The idea is that when I log in, I have to press this little button to double-confirm before I get into my account. But if you don't have a security key, you can just use a virtual MFA, and all that means is you're going to use something on your phone to log in. We'll click Continue, and it says to install a compatible app on your mobile phone or device. If you click through, it will tell you about some things you can use; scroll down to the virtual MFA section and it suggests, for Android and iPhone: Authy, Duo Mobile, LastPass Authenticator, Microsoft Authenticator, Google Authenticator. I have Google Authenticator, Microsoft Authenticator, and Authy all installed; honestly, Authy has the nicest, simplest UI, but I've been using Microsoft Authenticator quite a bit. Whichever one you want to use is fine. Back here, it says to use your virtual MFA app and your device's camera to scan the QR code. So once you have one of those apps installed, like Authy or whichever one you want, open up the application. I can't tell you exactly where it is in your app, but you'll hit Add Account, and from there it will ask you to scan the QR code. Once you're ready, hit Show QR code, then scan it with your phone; I'm holding my phone up to my computer screen here, and it finds it. I'll just take a moment to rename the account so I can tell what it is; I'm naming it "aws sandbox", because that's what I call this account, and I'll go ahead and save that. Now what I can do is enter two consecutive MFA codes. This always confused me, but the idea is that you see one code, whatever's on the screen right now, which says 734051, type it in, and then wait until the new code shows up; there's a timer in all these apps.
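Those rotating six-digit codes are standard TOTP (RFC 6238): an HMAC computed over a counter derived from the current 30-second window, then truncated to six digits. Here is a minimal sketch using only the Python standard library:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # low nibble of last byte picks a 4-byte window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP where the counter is the current 30-second window."""
    return hotp(secret, int(time.time()) // step)
```

This also explains the "two consecutive codes" AWS asks for: they are just the codes for two adjacent time windows, counter n and counter n+1, which lets AWS verify that your app's secret and clock both line up before it turns MFA on.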
The timer sweeps across the screen or counts down, so you have to wait for that to happen. I'll just wait here a little bit, and once I get the new number, this one is 071530, I'll hit Assign MFA, and there we go. I can't tell you how many times I messed that up because I didn't understand the consecutive numbers, but you're just waiting for the number on the screen, entering it in, and then entering the next one in, to turn on MFA. So now your account is protected, and every time you log in you're going to have to enter an MFA code. Let's log out and see what that looks like. We'll go ahead and sign in, put in our root user email, type in the captcha, 74m32t, and submit, and I need to go grab my password from my password manager, so just give me a moment here. Now it wants the MFA code, which is on my phone, so I'm going to enter it in; this one says 475841, and we'll hit submit. There we go. That's going to happen every single time we want to log in, and I'll tell you, if you get one of these security keys, they're so much easier to use because you just press the button. That's why I have this, because I cannot stand entering the code in time and time again. But those are your options there. [Music] Hey, this is Andrew Brown from ExamPro, and we're looking at the concept of innovation waves. When we're talking about innovation waves, we're talking about Kondratiev waves, or K-waves, which are hypothesized cycle-like phenomena in the global world economy, closely connected with technology life cycles. Here is an example where each wave irreversibly changes society on a global scale, and if you look across the top, we can see what they're talking about: the steam engine, cotton, railway and steel, electrical engineering and chemistry, petrochemicals and automobiles, and information technology.
The idea is that cloud technology is the latest wave, and I'm not sure if you'd fit Web3 in there as well, or ML and AI; maybe they're all part of the same wave, or separate waves. But generally the waves are broken up based on this P, R, D, I pattern across the bottom: prosperity, recession, depression, and improvement. This is the common pattern of a wave, where we see a change of supply and demand, and if we're seeing this, we know where we are within a wave. [Music] Hey, this is Andrew Brown from ExamPro, and we're looking at the concept of a burning platform. A burning platform is a term used when a company abandons old technology for new technology with uncertainty of success, and it can be motivated by fear: the organization's future survival hinges on digital transformation. To give you a visualization, here is a literal burning platform; imagine you have to jump from it to make a change. So a burning platform could be, say, stop using on-prem and start using cloud, or maybe going from cloud to Web3. That's generally the idea when we talk about a burning platform. [Music] So I just want to quickly show you that digital transformation checklist I mentioned. The way you can get to it is by typing in "digital transformation aws", which should bring you to the AWS public sector page, and here it is; all it is is a PDF. It's from 2017, but that doesn't mean it's not valid anymore; it's just that that's when it was made. If we scroll on down, we can see "Transforming Vision", and we have a checklist there. If we click into this, we can see things like: communicate a vision of what success looks like; define a clear governance strategy, including the framework for achieving goals; build a cross-functional team; identify technical partners. They talk about shifting the culture.
And then down below, I assume this item is related to that one; it's a bit unusual, because they have a checklist and then a sub-checklist tied to it: reorganize staff into smaller teams, things like that. So it's not super complicated; each category, like "go cloud native", has its own checklist. If you are at the executive level, or on the sales side, or trying to convince your VPs and so on, give this a read; it might give you something useful to help better communicate that transformation. [Music] Hey, this is Andrew Brown from ExamPro, and we're looking at the evolution of computing power. What is computing power? It's the throughput, the rate measured at which a computer can complete computational tasks. What we're pretty much used to these days is general computing; a good example would be a Xeon CPU processor. That's a high-end processor, not something you'd find in your home computer, but when we're talking about data centers specifically, Xeon CPU processors are what you're going to come across. Then along came a new tier of compute, GPU computing; in Google Cloud's case they have tensor computing, and this is where I get the "50 times faster" figure. I didn't have an exact metric for AWS's solution in this mid tier of computing power, so I just borrowed that 50x. But the idea is that GPU computing, or tensor computing, is roughly 50 times faster than a traditional CPU, and generally it's used for very specialized tasks like machine learning and AI, not something you'd use for your regular web workloads. Just understand that all these tiers coexist: we're not getting rid of general computing, we're adding new levels of compute. Then there's the latest, which is quantum computing, and here we have an example, the Rigetti 16Q Aspen-4; it literally looks like something out of science fiction.
This thing is something like a hundred million times faster. It's super cutting edge; we don't even fully know how it works yet, and there isn't much that's practically applicable that we can use it for, but the idea is that we're not done with the evolution of computing power. Things are going to get a lot faster once we solve this last one. The AWS service offerings here: for general computing you're looking at Elastic Compute Cloud, EC2, where we have a variety of instance types, all with different types of hardware for general computing. For GPU-class computing there's a specialized chip AWS has produced called AWS Inferentia, which we'll abbreviate to Inf, and this was designed as a direct competitor to GCP's Tensor Processing Unit, the TPU. It's intended for AI and ML workloads, but it works with any machine learning framework, not just TensorFlow, and that is one advantage it has over TPUs. The last one is Amazon Braket: you can actually use quantum computing as a service on AWS, even as of today. The way AWS is able to do this is by working with Caltech, the California Institute of Technology, so it's not exactly AWS producing this, but AWS is doing it as a partnership to make quantum computing accessible to you. [Music] So I'm here in the AWS console because I just want to prove to you that you can use quantum computing on AWS; it's that accessible. All you have to do is go to the top here, type in Braket, and make your way over to Amazon Braket. Here you can set up quantum tasks; the first time you set it up, you have to go through this onboarding process to be able to show you the next step, so I'm going to go ahead and enable Braket in this AWS account.
I'm not going to launch anything; I'm just going to try to show you a little of what is accessible to you, because while it's not super exciting, the fact that you can do it is kind of interesting. So here I am on the inside, and we have all these different quantum computing providers: D-Wave, IonQ, Rigetti, things like that. Down below are the quantum processing units, the QPUs, and below that you have the simulators, so you can simulate these things. In terms of cost, if you scroll down: Braket is part of the AWS free tier, which gives you one free hour of quantum circuit simulation time per month during the first 12 months. So it's free to do a circuit simulation, but if you actually want to run on the real hardware, you can see the cost; there's a per-task price, a per-shot price, things like that. What could you do with this? I don't know; there are things called qubits, and I can't imagine you're going to be doing anything immediately useful. My understanding is that you're sending out qubits and observing them, but what you can do with them, I have no idea. It's just exciting that you can. And I didn't incur any spend just by activating that; I'm just showing you it's there. [Music] Hey, this is Andrew Brown from ExamPro, and we're looking at the benefits of cloud. This is a summary of the reasons why an organization would consider adopting, or migrating to, public cloud, and we'll quickly go through the list here, because the follow-up slides go into them in more detail: agility, pay-as-you-go, economy of scale, global reach, security, reliability, high availability, scalability, and elasticity. The thing is, AWS had this before; it was called the six advantages of cloud, but they have reworked it to include additional items.
Where you see these sub-bullets, one, two, three, four, five, six, those are the original six, and I've placed them where they fall under the new categories. You'll notice that AWS has included high availability, elasticity, reliability, and security as new ones here. The thing is, even my original cloud practitioner course had cloud architecture as a separate section that included all these things, so it's great to see AWS include them; but in terms of how I organize this course, we're not going to cover them in this section, because I have a cloud architecture section. Just understand that we will come to those eventually. And I would say AWS is still missing something on this list, which is fault tolerance; my list looks like this, except I would add fault tolerance to it, and disaster recovery, so you have everything there. So the benefits of cloud is a reworking and expansion of the six advantages of cloud, and we will look at the original six advantages, and then look at another, more generalized list that I've used across my courses, so that we fully understand the benefits. [Music] All right, let's take a look here at the six advantages of cloud as defined by AWS. These are still part of AWS's marketing pages, but it's interesting: you can't find "the benefits of cloud" on a single page anywhere on AWS, at least at the time of making this, so there's a bit of a disconnect between the exam guide and the actual marketing material. But that's okay, I fill it all in for you; I'm just noting again that the six advantages of cloud was the original description of cloud's benefits, and we'll go through them. The first is "trade capital expense for variable expense": you can pay on demand, meaning there is no upfront cost.
You pay for only what you consume, by the hour, minute, or second, instead of paying the upfront costs of data centers and servers. The next is "benefit from massive economies of scale": you are sharing the cost with other customers, hundreds of thousands of them, to get unbeatable savings. "Stop guessing capacity": scale up or down to meet current needs, launch and destroy services whenever, instead of paying for idle or underutilized servers. "Increase speed and agility": launch resources within a few clicks, in minutes, instead of waiting days or weeks for your IT to implement the solution on-premise. "Stop spending money on running and maintaining data centers": focus on your customers, developing and configuring applications, instead of operations such as racking, stacking, and powering servers. The last is "go global in minutes": deploy your app in multiple regions around the world with a few clicks, providing low latency and a better experience for your customers at minimal cost. The six advantages of cloud still apply, and I like to include them here because they have a different kind of lens or angle on this material. So we've looked at the six advantages of cloud; next we'll take a look at my reworking of them into something more generalized. [Music] All right, I just wanted to show you where the six advantages of cloud computing come from. It's part of the AWS documentation; I typed it in here, and you can see that it is still around. It's unusual, because this used to be part of the marketing website, with those nice little graphics, but for whatever reason it's now over here in the "Overview of Amazon Web Services" whitepaper. By the way, if you're starting out with AWS, this is a very light read, but it is a good read to get started with; we obviously cover all this material in the course.
Maybe you'll get something different out of it, but the idea is that AWS has definitely expanded on this, and for whatever reason this documentation hasn't changed, so just understand that I've polyfilled that for you in this course. [Music] All right, so this is the seven advantages of cloud; I said six earlier, but I meant to say seven. Since I've created fundamentals courses for all the major cloud service providers, I started to notice a trend, so I normalized it into my own seven advantages, and it actually maps up really well to the new benefits of cloud; it looks like AWS was thinking the same as I was, with the exception of the cloud architecture topics, which I keep in a separate section. Let's go through it and see what is here. The first is cost-effective: you pay for what you consume, no upfront costs, on-demand pricing, pay-as-you-go (PAYG), with thousands of customers sharing the cost of resources. AWS always used to refer to this as on-demand pricing, and Azure always said pay-as-you-go, and it looks like AWS now uses both terms to describe it, which is great. Then we have global: launch workloads anywhere in the world, just choose a region. Secure: the cloud provider takes care of physical security, and cloud services can be secured by default, or you have the ability to configure access down to a granular level. Reliable: data backup, disaster recovery, data replication, fault tolerance. Scalable: increase or decrease resources and services based on demand. Elastic: automate scaling during spikes and drops in demand. And current: the underlying hardware and managed software is patched, upgraded, and replaced by the cloud provider without interruption to you; I think this is one that isn't on the benefits-of-cloud list, and it's a really good one. So that's the seven. [Music] Hey, this is Andrew Brown from ExamPro.
We're taking a look at AWS global infrastructure. Global infrastructure is globally distributed hardware and data centers that are physically networked together to act as one large resource for end customers. If you see here on the right-hand side, we have a picture of a globe, and the idea is that we have a bunch of regions, those regions contain a bunch of data centers, and the lines going between them represent the network. The global infrastructure is made up of the following resources: regions, availability zones, Direct Connect locations, points of presence (PoPs), Local Zones, and Wavelength Zones, and we're going to cover all of these in this section. One thing I want to note is that AWS has millions of active customers and tens of thousands of global partners constantly using this infrastructure, so you know it is rock solid. [Music] All right, so I'm over here on the global infrastructure page; if you type in "aws global infrastructure" you'll make your way here. I just wanted to point out that AWS is always updating their global infrastructure, so these numbers are increasing all the time, but while you're here, what you probably want to do is make your way to Regions and AZs so you can see what's in your area. I'm in Canada, and we have the Canada Central region here, which has three availability zones and launched in 2016.
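Region and AZ identifiers follow a simple naming convention: a region code like ca-central-1, and availability zones written as that code plus a letter (ca-central-1a, ca-central-1b, and so on). A tiny sketch of pulling those pieces apart:

```python
def split_az(az_name: str) -> tuple[str, str]:
    """Split an AZ name like 'ca-central-1a' into (region, zone letter)."""
    return az_name[:-1], az_name[-1]
```

So split_az("ca-central-1a") gives you the region "ca-central-1" and the zone letter "a", which is exactly how the AZ names you'll see in the console and in architecture diagrams are built up.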
You'll notice it has a couple of asterisks; if you scroll on down, they explain that it's in the Montreal metropolitan area, so it's in the city, in the downtown, and that could matter to you for whatever reason. I'm just pointing out where that information is; you can read about all of this here, but of course we cover it all in the course. [Music] Hey, this is Andrew Brown from ExamPro, and we're taking a look at AWS regions. Regions are geographically distinct locations consisting of one or more availability zones. Here is a world map showing all the regions AWS has in the world: the blue ones represent regions that are already available to you, and the orange ones represent regions AWS is planning to open. AWS is always expanding their infrastructure in the world, so always expect there to be more upcoming ones. Every region is physically isolated from, and independent of, every other region in terms of location, power, and water supply. The most important region you should give attention to is us-east-1 in particular; this is Northern Virginia. It was AWS's first region, where we saw the launch of SQS and S3, and there are a lot of special cases where things only work in us-east-1, as we'll find out in a moment. What I do want to show you is what a region looks like in an architectural diagram: notice we have this little flag here that says us-east-1 or us-west-1, and inside of it we have an EC2 instance. That is how a region is represented in our architectural diagrams. But let's look at some of the facts and understand why us-east-1 is so important. Each region generally has three availability zones, and that is by intention; we'll talk about why when we get to the availability zones section. Some newer regions are limited to two, but generally there are always three.
New services almost always become available first in US East, specifically us-east-1. Not all services are available in all regions. All your billing information appears in us-east-1, so that's a us-east-1 particularity. And the cost of AWS services varies per region. If you're on the marketing website for global infrastructure, you can see here for North America when each region launched and how many availability zones it has, and there might be some conditions; you'll notice asterisks beside some entries, and in one in particular there's an asterisk saying, hey, there are three zones but generally you're limited to two. When you choose a region, there are four factors you need to consider: what regulatory compliance does this region meet? What is the cost of AWS services in this region? What AWS services are available in this region? And what is the distance, or latency, to my end users? Those are the four factors you should remember. [Music] All right, so we just talked about AWS regions; now let's talk about how that affects our services: regional versus global services. Regional services are scoped based on what is set in the AWS Management Console via the selected region: you have this dropdown, and that's what you'll do, you'll say, okay, I want to have resources in Canada or in Europe. This determines where an AWS service will be launched and what will be seen within that service's console. You generally don't explicitly set the region for a service at the time of creation; I explicitly mention this because when you use something like GCP or Azure, you select the region when you create the resource, but AWS has this console-wide region setting, which is unique to their platform. Then there's the concept of global services: some AWS services operate across multiple regions, and the region dropdown will be fixed to the word "Global".
These are services like S3, CloudFront, Route 53, and IAM. The idea is that if you were to go over to the CloudFront console, you'll notice it will just say Global, and you can't switch out of that. For these global services, the time of creation is a bit different: we were saying up here for regional services that you don't select the region, but when you are creating global services it varies. If you're using something like IAM, there is no concept of region, because it's just globally available, so you don't have to pick one. An S3 bucket has to live in one region, so you actually do select a region at the time of creation. And then there's something like CloudFront distributions, where you choose a group of regions: you either say all of the world, or only North America, which is more like geographic distribution, so you don't name a particular region. Hopefully that gives you the distinction between regional services and global services. [Music] Hey, this is Andrew Brown from ExamPro, and we're taking a look at availability zones. Availability zones, commonly abbreviated as AZ, and I'll frequently use the term AZ, are physical locations made up of one or more data centers. A data center is a secured building that contains hundreds or thousands of computers. This is one of my favorite graphics to show; of course, AWS would never have a dog in their data center, I just thought that would be fun. A region will generally contain three availability zones, and I say generally because there are some cases where we will see fewer, maybe two. Data centers within a region are isolated from each other, in different buildings, but close enough to provide very low latency, within 10 milliseconds or less. It's common practice to run workloads in at least three AZs to ensure services remain available.
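That "run in at least three AZs" practice comes down to spreading copies of your workload round-robin across zones, which is essentially what an Auto Scaling group does for you behind the scenes. Here is a toy sketch of just the placement logic; the instance ids and zone names are made up for illustration:

```python
from itertools import cycle

def spread_across_azs(instances: list[str], azs: list[str]) -> dict[str, str]:
    """Assign each instance to an AZ round-robin, so losing any one AZ
    takes out roughly 1/len(azs) of the fleet instead of all of it."""
    zone = cycle(azs)
    return {instance: next(zone) for instance in instances}
```

With four instances across three zones you'd get one instance each in 1a, 1b, and 1c, and the fourth wraps back to 1a; if the 1a data center goes down, two of your four instances are still serving traffic.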
fail, and this is known as high availability. this is generally driven by regulatory compliance: a lot of companies have to be running in at least three azs, and that's why aws tries to always have at least three azs within a region. azs are represented by a region code followed by a letter, so here you'd have us-east-1, which is the region, and then the a represents the particular availability zone in that region. a subnet, which is related to availability zones, is associated with an availability zone — you never choose an az when launching resources, you always choose a subnet, which is then associated to an az. a lot of services don't even require you to choose a subnet because they're fully managed by aws, but in the case of things like virtual machines you're always choosing a subnet. so here is a diagram representing two availability zones: we have the regions us-east-1 and us-west-2, and then we have our two azs, 1a and 1b, and these are effectively the subnets. within those subnets, or availability zones, you can see that we have two virtual machines. the us-east-1 region has six azs, and i thought that's just a fun fact because it is the most of every single region — i don't think anyone comes close to us-east-1, but of course it is the most popular, and it is the first region, so it's not a surprise that it has that many azs. [Music] okay, so we just covered regions and availability zones, but i really want to make it clear what they look like, so i have a visual representation. let's say we have our aws region, and in this particular one it's canada central, which is in montreal, so ca-central-1, and the idea here is that a region has multiple availability zones — here you can see we have 1a, 1b, and 1d; for some reason aws decided not to launch 1c, maybe it's haunted, who knows. then your availability zones are made up of one or more data centers, so just understand that an az is not a single data center but could be a collection of buildings, and these azs are interconnected with high-bandwidth, low-latency networking over fully redundant, dedicated metro fiber, providing high-throughput, low-latency networking between them — very fast connections — and all traffic between azs is encrypted. these azs are within a hundred kilometers, so about 60 miles, of each other. [Music] so what i want to do here is show you how regions and availability zones work with some different aws services, so you have a general idea of when you are selecting a region or az and when you're not. within aws, when you want to select a region you go up here and change it, and this applies to regional services. a very famous example of a regional service is ec2, which is elastic compute cloud. we go over to ec2, go to instances, and i'm going to launch an instance — i'm not going to complete the process, i just want to show you what happens when you select some things here. i'm going to go with amazon linux 2, go next, and here is where we select our availability zone. up here we have north virginia, that's our region, and when i say we're selecting our availability zone, we're actually selecting the subnet. so here we are choosing a subnet, and a subnet is associated to an availability zone. every single region has a default vpc, that vpc has subnets set up, and the subnets are defaulted to each of the availability zones available, so us-east-1 has six of them. this server is going to launch in us-east-1b, so this is a regional service. then we have global services like s3, so we go over to s3 and it says it's global, and so we're going to go
ahead and create our bucket, and here we choose the region — we go down and say the region we want to be in — but we don't choose the availability zone, because there's nothing to choose: aws is going to run this in multiple azs, and it doesn't matter to you what it's doing there. then there's something like cloudfront, which is a little bit different. we go over to cloudfront and create ourselves a distribution — if you don't have that option, because sometimes aws has a splash screen, just click on the left-hand side then go to distributions. they're always changing this ui on me, but if we scroll on down to the price class, notice the options: use all edge locations for best performance; north america and europe; north america, europe, asia, middle east, and africa. so we're not choosing a particular region, we're picking a geographical area. those are pretty much the major examples. then there's of course something like iam, where you don't even say where it is: if i go to iam and create something like a user group, i'm not saying, oh, this is for this particular region. so hopefully that makes sense. [Music] hey, this is andrew brown from exam pro, and let's take a look here at fault tolerance, specifically for global infrastructure. before we jump into that, let's define some fault terminology. a fault domain is a section of a network that is vulnerable to damage if a critical device or system fails, and the purpose of a fault domain is that if a failure occurs, it will not cascade outside that domain, limiting the possible damage. there's this very popular meme called 'this is fine,' where there's obviously a serious problem but the person isn't freaking out, and i gave it some context: the reason they're not freaking out is that they know there's a fault domain, and nothing outside this room is going to be affected. you can have fault domains nested inside other fault domains, but generally they're grouped in something called a fault level, which is a collection of fault domains. the scoping of a fault domain could be something like specific servers in a rack, an entire rack in a data center, an entire room in a data center, or the entire data center building, and it's really up to the cloud service provider to define those boundaries. aws abstracts it all away so you don't have to think about it, but just to compare it against something else: when you're using azure, you actually define your fault domain, so you might say, okay, make sure this workload is never running on the same rack, and you might like to have that level of control, but i really like the fact that aws just abstracts it away. i'm not sure how they segment their fault domains, but there definitely are some broader ones, which we'll describe right now. when we're looking at an aws region, that would be considered a fault level, and within that fault level you have your availability zones, which would be considered fault domains. of course, those data centers can have fault domains within them — maybe they have everything in a particular room, and that room is secured, so if there's a fire in that room it's not going to affect the other rooms. each amazon region is designed to be
completely isolated from the other amazon regions, and they achieve this for the greatest possible fault tolerance and stability. each availability zone is also isolated, but the availability zones in a region are connected through low-latency links. each availability zone is designed as an independent failure zone — and here we have some different language that aws uses; i've never seen this terminology with any other cloud service provider, so i feel like it's something they made up, but a failure zone is basically just a fault domain. let's expand on their failure zone terminology: availability zones are physically separated within a typical metropolitan region and are located in lower-risk flood plains, with discrete uninterruptible power supplies (ups) and on-site backup generation facilities. data centers located in different azs are designed to be supplied by independent substations, to reduce the risk of an event on the power grid impacting more than one availability zone. availability zones are all redundantly connected to multiple tier-1 transit providers, and we'll talk about what those are in an upcoming slide. one thing i want to note here is that when you adopt multi-az you get high availability: if an application is partitioned across azs, companies are better isolated and protected from issues such as power outages, lightning strikes, tornadoes, earthquakes, and more. that's the idea behind why we want to run multi-az — because of these fault domains. [Music] hey, this is andrew brown from exam pro, and we're talking about the global network. the global network represents the interconnections between aws global infrastructure, and it's commonly referred to as the backbone of aws — that term can be used in more than one way, but think of it as a private expressway where things can move fast between data centers. one thing that is utilized a lot to get data in and out of aws very quickly is edge locations: they can act as on- and off-ramps to the aws global network. you can also get to the network through pops, which we'll talk about in the upcoming slides, but let's just talk about edge locations and what services use them. when we're talking about things getting onto the aws network, we're looking at aws global accelerator and aws s3 transfer acceleration, and these use edge locations as an on-ramp to quickly reach aws resources in other regions by traversing the fast aws global network — notice the names: accelerator, acceleration; the idea is that they're moving really fast. on the other side, when we talk about an off-ramp, we're looking at amazon cloudfront, which is a content distribution network: it uses edge locations as an off-ramp to provide storage and compute at the edge, near the end user. one other thing that is always utilizing the global network is vpc endpoints. these aren't using edge locations, but the idea is that they ensure your resources stay within the aws network and do not traverse the public internet. so if you have a resource running in us-east-1 and one in the eu, they never have to go over the public internet — it makes sense to enforce staying within the aws network, because it's going to be a lot faster. [Music] hey, this is andrew brown from exam pro, and we are taking a look at point of presence, also known as pop. this is an intermediate location between an aws region and the end user, and this location could be a data center or a collection of hardware. for aws, a point of presence is a data center owned by aws or a trusted partner that is utilized by aws services for content delivery or expedited upload. a pop resource could be something like an edge location or a
regional edge cache. as an example, over here we see an s3 bucket, and content goes through a regional edge cache and then to an edge location — let's define what those are. edge locations are data centers that hold cached copies of the most popular files (web pages, images, videos), so that the delivery distance to end users is reduced. then you have regional edge caches, which are data centers that hold much larger caches of less popular files, to reduce a full round trip and also to reduce the cost of transfer fees. [Music] to help put pops more in perspective, here is a diagram i got from wikipedia that shows a bunch of different networks, and notice where the pop is: it's on the edge, or the intersection, of two networks. here we have tier 3, then there's tier 2, and there's this pop in between them. a tier 1 network is a network that can reach every other network on the internet without purchasing ip transit or paying for peering, and aws availability zones are all redundantly connected to multiple tier 1 transit providers. [Music] all right, let's take a look at some of the aws services that utilize pops or edge locations for content delivery or expedited uploads. amazon cloudfront is a content delivery network service: you point your website at cloudfront so it will route requests to the nearest edge location cache; it allows you to choose an origin (a web server or storage) that will be the source of the cache, and it caches the content the origin returns at various edge locations around the world. then you have amazon s3 transfer acceleration: this allows you to generate a special url that end users can use to upload files to a nearby edge location, and once a file is uploaded to an edge location, it can move much faster within the aws network to reach s3. then at the end here you have aws global accelerator, which can find the optimal path from the end user to your web servers. global accelerators are deployed within edge locations, so you send user traffic to an edge location instead of directly to your web application. this service is really great if, say, you're running a web server in us-east-1 and you just don't have the time to set up infrastructure in other regions — you turn this on and you basically get a boost. [Music] hey, this is andrew brown from exam pro, and let's take a look at aws direct connect. this is a private, or dedicated, connection between your data center, office, or co-location and aws. imagine you had a fiber-optic cable running from your data center all the way to aws, so that using your stuff in your data center, like your local virtual machines, feels like there's next to no latency. direct connect has two very fast network connection options: the lower bandwidth, at 50 to 500 megabits per second, and the higher bandwidth, at 1 gigabit to 10 gigabits per second. using direct connect helps reduce network costs and increase bandwidth throughput, so it's great for high-traffic networks, and it provides a more consistent network experience than a typical internet-based connection, so it's reliable and secure. i do want to point out the term co-location, if you've never heard it before: a co-location, or carrier hotel, is a data center where equipment, space, and bandwidth are available for rental to retail customers. i also want to point out that even though it says private up here — and this is the language aws uses; i usually just say dedicated — the connection being private doesn't necessarily mean it's secure. we'll talk about that when we reach vpns and how we can use a vpn with direct connect to make sure our connections are secure. [Music] all right so let's take a look
at what a direct connect location is. direct connect locations are trusted partner data centers where you can establish a dedicated high-speed, low-latency connection from your on-premises environment to aws. an example of a partner data center would be one here in toronto, the allied data center — right in downtown toronto — and you would use this as part of the direct connect service to order and establish a connection. [Music] hey, this is andrew brown from exam pro, and we're taking a look at local zones, which are data centers located very close to densely populated areas to provide single-digit-millisecond low-latency performance — think something like seven milliseconds — for that area. here is a map of local zones that exist and ones that are coming out; i believe the orange ones are the ones on their way. to use a local zone you need to opt in, so you've got to go talk to aws, probably open a support ticket, to get access. the first one ever launched was the los angeles one, and when you go to see it, it looks just like an availability zone: it shows up under whatever region it's tied to, because these are always tied to existing regions. the la one is tied to the us-west-2 region, and the az looks like us-west-2-lax-1a. only specific aws services have been made available — particular ec2 types, ebs, amazon fsx, application load balancer, amazon vpc — and they've probably extended it to more services since. do you need to know that for the exam? no, but the point is that there's a limited subset of things available. the purpose of local zones is to support highly demanding applications sensitive to latency: media and entertainment, electronic design automation, ad tech, machine learning. it kind of makes sense — look at la, they're in media and entertainment, dealing with lots of media content, so latency has to be really low for them
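to make the latency reasoning above concrete, here's a minimal sketch of the decision between a parent region and a local zone. the latency numbers and the helper function are illustrative assumptions, not real aws measurements:

```python
# illustrative sketch: deciding between a parent region and a local zone
# based on a latency budget. the latency figures below are made-up
# examples, not real aws measurements.

def pick_target(latency_budget_ms, options):
    """return the first deployment option that fits the latency budget;
    options are ordered so the plain region (cheaper, more services)
    is tried first."""
    for name, latency_ms in options:
        if latency_ms <= latency_budget_ms:
            return name
    return None  # no option satisfies the budget

# hypothetical round-trip latencies seen by an la-based end user
options = [
    ("us-west-2", 25),          # parent region, preferred when it fits
    ("us-west-2-lax-1a", 7),    # local zone in los angeles
]

print(pick_target(60, options))  # a relaxed 60 ms budget: the region is fine
print(pick_target(10, options))  # a 10 ms budget: only the local zone fits
```

the point of the sketch is the ordering: you only reach for a local zone when the workload's latency budget is tighter than what the parent region can deliver.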
okay. [Music] hey, this is andrew brown from exam pro, and we are taking a look at aws wavelength zones. these allow for edge computing on 5g networks, so applications can have ultra-low latency by being as close as possible to the users. aws has partnered with various telecom companies to utilize their 5g networks — verizon, vodafone, kddi, sk telecom — and the idea is that you create a subnet tied to a wavelength zone (just think of it like an availability zone, but it's a wavelength zone), and then you can launch your vms to the edge of the targeted 5g network. you're using aws to deploy an ec2 instance, and when users connect to those cell towers, they're routed to nearby hardware running those virtual machines. that's all it is — just ec2 instances — but the advantage is super low latency. [Music] hey, this is andrew brown from exam pro, and we are taking a look at data residency. this is the physical or geographical location of where an organization's data or cloud resources reside. then you have the concept of compliance boundaries: a regulatory compliance — a legal requirement by a government or organization — that describes where data and cloud resources are allowed to reside. then you have the idea of data sovereignty: the jurisdictional control or legal authority that can be asserted over data because its physical location is within a jurisdictional boundary. the reason we care about this is that if we want to work with the canadian government or the us government, they might say, hey, if you want to work with us, all the data has to stay in canada, and you need to give them that guarantee. data residency is not a guarantee — it just says where your data is — compliance boundaries are the controls in place to make sure data stays where we want it to be, and data sovereignty is the scope of the legal authority that ties in with compliance boundaries. so how do we do that on aws? there are a few different ways to meet those compliance boundaries. one, which is very expensive but also very cool, is aws outposts: a physical rack of servers that you can put in your data center, so you know exactly where the data resides — if it's physically in your data center and you're in canada, that's where it's going to be. i believe only a subset of aws services is available there, but it is one option. another is using services for governance, like aws config: a policy-as-code service, so you can create rules to continuously check aws resource configurations, and if they deviate from your expectations, you are alerted — or aws config can in some cases auto-remediate. so if you had an aws account and said this account is only to be used for canadian resources, and somebody launches something in another region, you could get an alert, or tell aws config to go delete that resource. now, if you want to prevent people from doing it altogether, that's where iam policies come into play: these can be written to explicitly deny access to specific aws regions, and this is great if you're applying it to users or roles, but if you want it organization-wide across all of your aws accounts, you can use something called a service control policy — an iam-style policy used within aws organizations that makes it organization-wide. okay. [Music] hey, this is andrew brown from exam pro, and we are looking at aws for government. to answer that, we first have to
understand what the public sector is. the public sector includes public goods and government services such as the military, law enforcement, infrastructure, public transit, public education, health care, and the government itself. aws can be utilized by the public sector, or by organizations developing cloud workloads for the public sector, and aws achieves this by meeting regulatory compliance programs along with specific governance and security controls — meeting the requirements of hipaa, fedramp, cjis, and fips. aws has special regions for us regulation called govcloud, which we'll talk about next. [Music] hey, this is andrew brown from exam pro, and we are taking a look at govcloud, and to understand what govcloud is, we need to know what fedramp is. fedramp stands for the federal risk and authorization management program: a us government-wide program that provides a standardized approach to security assessment, authorization, and continuous monitoring for cloud products and services. now that we know what fedramp is, what is govcloud? it's not particular to aws — azure has a govcloud as well — but a cloud service provider like aws or azure will generally offer an isolated region to run fedramp workloads, and in aws it's called govcloud. these are specialized regions that allow customers to host sensitive controlled unclassified information and other types of regulated workloads. govcloud regions are only operated by us citizens on us soil, and they are only accessible to us entities and root account holders who pass a screening process. customers can architect secure cloud solutions that comply with fedramp, the doj's criminal justice information systems (cjis) security policy, the us international traffic in arms regulations (itar), the export administration regulations (ear), and the department of defense cloud computing security requirements guide. so if you want to work with the us government, you want to engineer on and use govcloud
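tying back to the compliance-boundary controls from the data residency discussion, here's a minimal sketch of what a region-restriction deny policy could look like, built as a python dict. the allowed-region list is a hypothetical example (canada-only workloads); `aws:RequestedRegion` is the iam global condition key used to scope requests by region:

```python
import json

# illustrative sketch: an iam-style policy document that denies actions
# outside an approved region list. the region list is a hypothetical
# example for a canada-only workload.
ALLOWED_REGIONS = ["ca-central-1"]

region_deny_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyOutsideApprovedRegions",
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            # aws:RequestedRegion scopes the deny to requests made
            # against any region not in the allowed list
            "Condition": {
                "StringNotEquals": {"aws:RequestedRegion": ALLOWED_REGIONS}
            },
        }
    ],
}

print(json.dumps(region_deny_policy, indent=2))
```

the same document shape can be attached as a service control policy in aws organizations to make the boundary organization-wide; in practice you'd likely need carve-outs for global services like iam, which this sketch doesn't include.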
okay. [Music] hey, this is andrew brown from exam pro, and we're taking a look at running aws in china. aws china is the aws cloud offering in mainland china, and it is intentionally completely isolated from aws global to meet regulatory compliance requirements for mainland china. that means if you have a workload on aws global, you can't interact with it from within aws china — it's basically treated like a completely separate service, like aws has its own chinese version. aws china is on its own domain, amazonaws.cn, and for everybody else there's what's considered aws global: when i'm using aws from canada, the us, india, europe, or wherever, that's aws global. in order to operate in the aws china regions you need to have a chinese business license, an icp license, and not all services are available in china — you will not have the use of route 53, for example. you might say, well, why not just run in singapore on aws global? you could do that, but the advantage of running in mainland china is that you would not have to traverse the great firewall — all your traffic is already within china, so you don't have to deal with that. aws has two regions in mainland china: the northwest (ningxia) region, operated by nwcd, and the beijing (cn-north-1) region, operated by sinnet. aws just could not meet the compliance requirements on its own, so they had to partner with local providers and data centers, and that is how that works. [Music] all right, i want to show you how you get over to the chinese aws management console. aws.amazon.com is the global one for everyone outside of mainland china, but if you want to run resources on data centers within mainland china, that's at amazonaws.cn. it looks very similar: if you go to create a free account, you're going to fill in this stuff, but notice that you need your business registration certificate and additional information in order to run on the data centers aws has partnered with. also notice that the logo doesn't say aws in it, and there's a good reason for that: if i type in 'aws trademark china,' amazon is actually banned from using the aws logo in china — it's a weird story if you ever want to read about it, but that's why you don't see aws here. [Music] hey, this is andrew brown from exam pro, and we are looking at sustainability for aws global infrastructure, and before we talk about that, let's talk about the climate pledge. amazon co-founded the climate pledge to achieve net-zero carbon emissions by 2040 across all of amazon's businesses, which includes aws. if you want more information, go to sustainability.aboutamazon.com — there's a lot of great information there, and you'll learn exactly how aws is achieving this, particularly in their data centers; it's very interesting. aws cloud sustainability goals are composed of three parts. the first is renewable energy: aws is working towards having the aws global infrastructure powered by 100% renewable energy by 2025, and aws purchases and retires environmental attributes to cover the non-renewable energy for the aws global infrastructure — things like renewable energy credits (recs) and guarantees of origin (gos). the second is cloud efficiency: aws infrastructure is 3.6 times more energy efficient than the median of surveyed us enterprise data centers — that claim really relies on the survey, and surveys are not always that great, so take it with a grain of salt. the third is water stewardship: direct evaporative technology to cool their data centers; use of non-potable water for cooling purposes; recycling water, where on-site water treatment allows them to remove scale-forming minerals and reuse water for more cycles; and water efficiency metrics to determine and monitor optimal water use for each aws region. you'll find that water plays a large part in making these data centers very efficient. [Music] i just wanted to show you where you get that sustainability information: i went to aws global infrastructure, you click sustainability, and that brings us over to the 'sustainability in the cloud' page. you can read a bunch here about what aws is up to — how they are progressing with renewable energy, cloud efficiency up here — and it's worth the read to really understand that there's a lot of water involved, like reducing water use in data centers; i thought that was really interesting. they have podcasts, but i don't think there's much to it — a bi-weekly podcast of bite-sized stories about how tech makes the world better, which isn't really a sustainability podcast, just an aws podcast in general. there's a download center with amazon's 2020 sustainability report, so you can download the reports and see what they've been up to — a bunch of numbers, very short reports, but at least you can download them, in case you're very interested in sustainability. [Music] hey, this is andrew brown from exam pro, and we are taking a look at aws ground station. this is a fully managed service that lets you control satellite communications, process data, and scale your operations without having to worry about building or managing your own ground station infrastructure. a really good way to cement what the service is, is to just think of a big antenna dish pointing at the sky, trying to
communicate with satellites, because that's essentially what the service is doing. the use cases could be weather forecasting, surface imaging, communications, and video broadcasts. to use ground station, you schedule a contact — selecting a satellite, a start and end time, and the ground location — and then you use an aws ground station ec2 ami (amazon machine image) to launch ec2 instances that will uplink and downlink data during the contact, or receive downlink data in an amazon s3 bucket. a use case could be: you're a company that has reached an agreement with a satellite image provider to use their satellites to take photos of a specific region at a specific time, and you use aws ground station to communicate with that company's satellite and download the image data to your s3 bucket. [Music] hey, this is andrew brown, and we are looking at aws outposts. this is a fully managed service that offers the same aws infrastructure, services, apis, and tools to virtually any data center, co-location space, or on-premises facility, for a truly consistent hybrid experience. to summarize: it's a rack of servers running aws stuff at your physical location. before we jump into the service itself, let's talk about what a rack server, or just a rack, is: a frame designed to hold and organize it equipment. here's an example of a 42u rack, and there's a concept of rack heights — the u stands for rack units, or u spaces, with 1u equal to 1.75 inches. the industry-standard rack is the 48u, which is a seven-foot rack, and a full-size rack cage is commonly 42u high. in it you might have equipment of different sizes — 1u, 2u, 3u, or 4u high — so here's an example of the interior of a rack, and notice that the 1u, 2u, and 4u units are all shaped a little differently, but this gives you an idea of what those are. aws outposts comes in three form factors: the 42u, the 1u, and the 2u. the first, the 42u, is basically a full rack of servers provided by aws — you're not just getting the frame, it actually comes with servers — and aws delivers it to your preferred physical site fully assembled and ready to be rolled into final position. it is installed by aws, and the rack simply needs to be plugged into power and network; there are a lot of details about the specs on the aws website, so i'm not going to go through them all here. then there are servers you can place into your existing racks: the 1u, suitable for 19-inch-wide, 24-inch-deep cabinets, using aws graviton2 cpus, with up to 64 vcpus, 128 gigabytes of memory, and 4 terabytes of local nvme storage; and the 2u, suitable for 19-inch-wide, 36-inch-deep cabinets, with intel processors, up to 128 vcpus, 256 gigabytes of memory, and 8 terabytes of local nvme storage. so there you go. [Music] let's take a look at cloud architecture terminologies, but before we do, let's talk about some of the roles around doing cloud architecture. the first is the solutions architect: a role in a technical organization that architects a technical solution using multiple systems, via research, documentation, and experimentation. then you have the cloud architect: a solutions architect focused solely on architecting technical solutions using cloud services. understand that in the actual marketplace, 'solutions architect' is often used to describe both a cloud architect and a solutions architect, and these terms will vary highly based on your locality and how companies want to use them — this is just me broadly defining them, so don't take them as a
perfect word on what they represent. A cloud architect needs to understand the following terms and factor them into their designed architecture based on the business requirements: availability, your ability to ensure a service remains available; scalability, your ability to grow rapidly or unimpeded; elasticity, your ability to shrink and grow to meet demand; fault tolerance, your ability to prevent a failure; and disaster recovery, your ability to recover from a failure. There are a couple of other things that should be considered too; they're not terminologies, but they're definitely important to a solutions architect or cloud architect, and this is just from me talking to my solutions architect friends: they'll always ask me two questions after a presentation, "how secure is this solution?" and "how much is this going to cost?" For the terminologies up here, we're going to define them right away and figure them out throughout the course; we have two giant sections just on cost and security alone. So there we go: the first term we're looking at is high availability, and this is the ability for your service to remain available by ensuring there is no single point of failure, and/or by ensuring a certain level of performance. The way we're going to do that on AWS is to run your workload across multiple availability zones, to ensure that if one or two availability zones become unavailable, your servers or applications remain available, because those other servers are still there. The way we would accomplish that is via an Elastic Load Balancer. A load balancer allows you to evenly distribute traffic to multiple servers in one or more data centers; if a data center or server becomes unavailable or unhealthy, the load balancer will route traffic only to the remaining available servers.
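To make that routing idea concrete, here's a toy sketch in Python: round-robin distribution over targets, skipping any that are marked unhealthy. The class, target names, and health flags are purely illustrative assumptions of mine; this is the concept only, not how Elastic Load Balancing is actually implemented.

```python
from itertools import cycle

class ToyLoadBalancer:
    """Minimal illustration of health-aware round-robin routing."""

    def __init__(self, targets):
        # each target starts out healthy
        self.health = {t: True for t in targets}
        self._ring = cycle(targets)

    def mark_unhealthy(self, target):
        self.health[target] = False

    def route(self):
        # advance the ring, skipping unhealthy targets;
        # give up after one full pass if nothing is healthy
        for _ in range(len(self.health)):
            t = next(self._ring)
            if self.health[t]:
                return t
        raise RuntimeError("no healthy targets available")

lb = ToyLoadBalancer(["server-a", "server-b", "server-c"])
lb.mark_unhealthy("server-b")
picks = [lb.route() for _ in range(4)]
print(picks)  # only server-a and server-c are ever chosen
```

The point of the sketch is just the behavior described above: traffic keeps flowing, but unhealthy servers stop receiving it.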
And understand that just because you have additional servers doesn't mean that you are highly available; you might need to meet a particular threshold of availability, like having at least two servers always running to meet the demand. So it's based on the demand of traffic, okay.
[Music] Let's take a look here at high scalability. This is your ability to increase your capacity based on the increasing demand for traffic, memory, and computing power. We have the terms vertical scaling, or scaling up, which is where you upgrade to a bigger server, and horizontal scaling, or scaling out, which is where you add more servers of the same size. The great thing about scaling out, adding additional servers, is that you also get high availability, so if you do need two servers, it's usually better to add an additional server as opposed to moving to a larger server, but it's going to be very dependent on a lot of factors, okay.
[Music] So, scalability and elasticity seem very similar, but there is a crucial difference: elasticity is your ability to automatically increase or decrease your capacity based on the current demand for traffic, memory, and computing power. The key points are that it happens automatically, and that you can go both ways, increase or decrease. For horizontal scaling we have the concepts of scaling out, adding more servers of the same size, and scaling in, removing underutilized servers of the same size. Vertical scaling is generally hard for traditional architectures, so you'll usually only see horizontal scaling described with elasticity. The way we would accomplish being highly elastic is using Auto Scaling Groups (ASGs): this is an AWS feature that will automatically add or remove servers based on scaling rules you define against metrics, okay.
[Music] Let's talk about being highly fault tolerant. This is the ability for your service to ensure there is no single point of failure
preventing the chance of failure. The way we could do that is with failovers: this is when you have a plan to shift traffic to a redundant system in case the primary system fails. A very common example is having a secondary copy of your database, where all ongoing changes are synced; the secondary system is not in use until a failover occurs and it becomes the primary database. When we're talking about databases on AWS, this is the concept of RDS Multi-AZ: running a duplicate standby database in another availability zone in case your primary database fails. [Music] And last here is high durability, which is your ability to recover from a disaster and to prevent the loss of data. Solutions that recover from a disaster are known as disaster recovery. Do you have a backup? How fast can you restore the backup? Does your backup still work? How do you ensure current live data is not corrupt? A solution on AWS would be CloudEndure, a disaster recovery service that continuously replicates your machines into a low-cost staging area in your target AWS account and preferred region, enabling fast and reliable recovery in the case of an IT data center failure, okay.
[Music] To understand disaster recovery, we need to know more about the things around it, like business continuity plans (BCPs), RTOs, and RPOs. A BCP is a document that outlines how a business will continue operating during an unplanned disruption in services; it's basically the plan you're going to execute if that happens. So here we have a disaster, and you can see there's a window of data loss and a window of downtime, and the two factors RPO and RTO define the length of those durations. Recovery point objective (RPO) is the maximum acceptable amount of data loss after an unplanned data-loss incident, expressed as an amount of time; in other words, how much data are you willing to lose? And then recovery time objective (RTO) is the maximum amount of downtime
your business can tolerate without incurring a significant financial loss; in other words, how much time are you willing to be down, okay? So those are the two, and now let's go take a look at the disaster recovery options we can choose from when defining our BCP. [Music] So let's take a look at our disaster recovery options. Based on what you choose, there's going to be a trade-off of cost versus time to recover, based on your RPOs and RTOs of course. Sometimes this is represented vertically, like a thermometer, or you can do it horizontally; both are valid ways of displaying the information, but I just have it horizontally here today. We go from low to high, or you could say, even though I don't have it written here, from cold to hot, okay. On the left-hand side we've got backup and restore, pilot light, warm standby, and multi-site active-active; notice we're using words like pilot light and warm, things relating to temperature, so again, cold and hot. All right, let's just walk through what each of these conceptually does in terms of architecture. When you're doing backup and restore, you basically back up your data, and at the time of disaster recovery you're just going to restore it to new infrastructure. For a pilot light, the data is replicated to another region with minimal services running to keep replicating that data, so you might have some core services running. A warm standby is a scaled-down copy of your infrastructure: you basically have everything you would absolutely need to run the application, but the idea is it's not at scale, and at any time there's an incident you're going to scale up to the capacity you need. And then you have multi-site active-active, where you have a scaled-up copy of your infrastructure in other regions, so basically everything you have, identically, in another region. In terms of the RPOs and RTOs, for backup and restore you're looking at hours
with a pilot light you're looking at tens of minutes, with a warm standby you're looking at minutes, and with multi-site active-active you're looking at real time. Hopefully that gives you an idea of the difference in scale, but let's look in more detail. Backup and restore is for low-priority use cases: you restore data after the event and deploy resources after the event, and it's very cost effective. Pilot light is where you have less stringent RTOs and RPOs: you're running just your core services, you start and scale resources after the event, and this is a little bit more expensive. Warm standby is good for business-critical services: you scale resources after the event, and it's costly, but not as expensive as multi-site active-active. With multi-site active-active you get near-zero downtime and near-zero data loss; it's great for mission-critical services, and it's just as expensive as your original infrastructure, so you're basically doubling the cost there, okay.
[Music] So we already defined RTO, but let's redefine it based on how AWS describes it in their white paper, and look at how it maps against the disaster recovery options. Recovery time objective is the maximum acceptable delay between the interruption of service and restoration of service. This objective determines what is considered an acceptable time window when service is unavailable, and it is defined by the organization. This is the diagram found in the white paper: on one axis we have cost and complexity, and on the other the length of service interruption. What you can see is that the cost and complexity for multi-site active-active is very high, but the length of service interruption is near zero; then as we go down we have warm standby, at maybe half that complexity, then our pilot light, and then backup and restore.
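The trade-off being described here can be sketched in code: pick the cheapest disaster recovery strategy that still meets the business's RPO and RTO. The minute values below are my own ballpark stand-ins for the hours / tens-of-minutes / minutes / real-time ordering above, not official AWS figures, and the relative cost numbers are illustrative only.

```python
# (name, approx RPO in minutes, approx RTO in minutes, relative cost)
DR_OPTIONS = [
    ("backup-and-restore",      1440, 1440, 1),
    ("pilot-light",               60,   60, 2),
    ("warm-standby",               5,   10, 3),
    ("multi-site-active-active",   0,    0, 4),
]

def cheapest_dr(max_rpo_min, max_rto_min):
    """Return the lowest-cost strategy meeting both objectives, or None."""
    viable = [
        opt for opt in DR_OPTIONS
        if opt[1] <= max_rpo_min and opt[2] <= max_rto_min
    ]
    return min(viable, key=lambda opt: opt[3])[0] if viable else None

print(cheapest_dr(120, 120))  # pilot-light
print(cheapest_dr(0, 0))      # multi-site-active-active
```

This mirrors what a BCP exercise does on paper: the stricter the objectives, the further right (and more expensive) you're pushed along the options.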
Notice that backup and restore takes the longest amount of time, and notice here we have a recovery time objective: in your BCP you define where that sits based on the cost of business impact. You might have to calculate that, saying, okay, what is our cost over time based on the length of service interruption, where do we want our RTO to be, what is the acceptable recovery cost? This is where you decide what you want to do. Here we have pilot light and backup and restore, so this company has to decide whether they're going to do a pilot light or a backup and restore, but it sounds like pilot light is where they're going to land for what's acceptable in their business use case, okay. Let's do the same for RPO. Recovery point objective is the maximum acceptable amount of time since the last data recovery point. This objective determines what is considered an acceptable loss of data between the last recovery point and the interruption of service, and it is defined by the organization. Again, we pulled this from the AWS disaster recovery white paper. We still have cost and complexity, but this time the other axis is data loss before service interruption. For multi-site, again, it's going to be very expensive and sit high up here; notice it's not a perfect curve, it looks a bit different. Here we have warm standby and pilot light, and you'll see that the data loss is not a big deal for those, but for backup and restore it really juts out. So you can see that you can get pretty good results just with the pilot light, and the cost and complexity is very low. Again, we have to look at our cost and business impact, so we follow that line and see where our acceptable recovery cost is, and you'll notice we have a bit of an intersection here. So we need to determine, are we going to be doing a warm standby? It looks like we have
the cost to do it, but it really depends: do we want to be down here or down there, okay? So hopefully that helps visualize that information for you.
[Music] Hey, this is Andrew Brown from ExamPro, and what I want to show you here is a real-world architectural diagram. I created this a while ago; it's a previous version of the ExamPro, or technically TeacherSeat, platform that powers the learning experience for my cloud certifications, and I'm hoping that by giving you some exposure, you'll absorb some information here that will carry through and really help cement what these services do and how they work together. Now, you might be asking how I made this. Well, I'm in Adobe XD; it's by Adobe, the Photoshop company, and it's free to download, but there are a lot of options out there. The first thing you'll need is the AWS Architecture Icons: these are free from AWS, you can download them as PowerPoint files or download the assets as SVGs and PNGs, which is what I have done, and start using them in whatever software you like. There are also third-party providers out there, like Lucidchart; I love Lucidchart, though I don't use it to make architectural diagrams for AWS, but you can drag and drop, they already have the icon library there, and there are a bunch you can choose from. So that's interesting, but let's take a look at one we can download; maybe everyone's familiar with PowerPoint. Here are the AWS Architecture Icons, and the reason I'm showing you this is not just that it contains icons, but that it also suggests how you should build your diagrams. If I go through here, they give you a definition of the system elements and how they should look: our group icons, our layer groups, our service icons, our resource icons, and where they should go. Then they have some interesting do's-and-don'ts guidelines: here's a simple example of a GET to an S3 bucket, here's an example of
using VPC subnets and things like that on the inside. You can also see all the groups we have, and they show the arrows; it's considered a big faux pas to draw diagonal arrows, that's just something that was defined, though you'll see a lot of people do them anyway. And then you'll see all the icons. Do you have to make your diagrams the way AWS suggests? No, but if you like the way they look, that's fine; everyone just does whatever they want, honestly. So anyway, now that we've seen how to get the resources to make our own, I have Adobe XD opened up here, and I want to walk you through what's going on. Again, I said this is a traditional architecture, meaning it's powered by virtual machines, so what we need to look for is EC2, because that's where it starts: that's our virtual machine. You'll notice we have one here, a t2 running over here, and over here we have another t2, so we have a blue and a green environment, okay. This is our running environment, so I'm just going to zoom in. The web app runs on this, and on the outside we have an Auto Scaling Group; Auto Scaling Groups allow us to manage a group of EC2 instances, and they will automatically scale if demand increases or declines, so if this machine can't handle it, it will just automatically provision a new one. I've contained it in this environment here because I'm representing a blue/green deploy, meaning that when I deploy, this will be the environment that replaces things. And you can see I have a lot of lines drawn around here: over here we have Parameter Store, which is a place where we can store our environment variables, or application configuration variables, and this line going here is just saying we're going to take those environment variables and put them into the application.
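To illustrate the Parameter Store idea, here's a hedged Python sketch where a plain dict stands in for the parameter store and the app pulls its configuration by a path prefix at boot instead of hard-coding it. The paths and parameter names here are made up for illustration; with the real service you'd fetch these via an AWS SDK call such as `get_parameters_by_path` rather than a local dict.

```python
# A local dict standing in for SSM Parameter Store (illustrative only).
FAKE_PARAMETER_STORE = {
    "/myapp/prod/RAILS_ENV": "production",
    "/myapp/prod/LOG_LEVEL": "info",
    "/otherapp/prod/TOKEN":  "not-ours",  # a different app's parameter
}

def load_config(store, prefix):
    """Collect all parameters under a path prefix into env-var-style names."""
    return {
        key[len(prefix):]: value
        for key, value in store.items()
        if key.startswith(prefix)
    }

config = load_config(FAKE_PARAMETER_STORE, "/myapp/prod/")
print(config)  # {'RAILS_ENV': 'production', 'LOG_LEVEL': 'info'}
```

The design point is the one in the diagram: configuration lives in one central, pathed store, and each application only pulls the slice under its own prefix.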
Then there are also the database credentials: we're using Postgres over here, so we need database credentials, and we grab those from Secrets Manager, where they're stored, and give them to the application so the app knows how to connect to the database, and this one knows how to configure it, okay. Then we have a bunch of S3 buckets here for different purposes; S3 is for storage, so this is how we store a variety of things, like user data, assets, artifacts, and CloudFormation templates. Some of this is for the app, and some of it is for the infrastructure, so that's one thing there, okay. Then over here we have a CI/CD pipeline: we have CodePipeline, and CodePipeline is triggered by GitHub. We put our code in GitHub, and when that happens it's going to run a CodeBuild job, which builds out a server, and then it runs another CodeBuild job, and from there it uses CodeDeploy. CodeDeploy triggers a deploy: what it will do is create a new environment, a copy of the currently running environment, and that will be our new environment, right. And when the deploy is done, it will swap, and that new environment becomes the live one. So again, this is actually the running server; it's kind of easy to get hung up on this one, but the idea here is that that's how deployment works. But let's say we want to get traffic to this actual instance. It's going to come through the internet, and the internet is probably going to go to Route 53; Route 53 is used for domain names, so this would be something like the teacherseat.com domain. We pass that over to our Elastic Load Balancer, which in this case is an Application Load Balancer, that's why it's labeled ALB, and that's going to distribute the traffic
there. If we wanted to run the server in another availability zone, so that we make it highly available, the Elastic Load Balancer, the Application Load Balancer, would have some traffic go here and some traffic go there. So this is just the blue environment, or whichever the current environment is, over here. Now, when we want to deploy new versions, we're going to use launch templates; launch templates are necessary when using Auto Scaling Groups, and a launch template just defines the shape of the instance: what instance type, what family, what it should be. Then we need an Amazon Machine Image. Our Amazon Machine Image is custom built, because we install all the stuff we want on it, and in order to automate that process we use SSM Automation documents; SSM stands for Systems Manager, and Automation lets you automate those steps. What it does is launch an instance, install Ruby, install Postgres, download the code base, then create the AMI, and then it does a bunch of other stuff here as well. This runs weekly, or actually at the time it was running nightly; we were doing nightly builds so we would always get the latest updates to our server, because with a virtual machine there could always be new updates for that Linux version, the Amazon Linux version we were using. And then there's a bunch of other stuff here, so hopefully that gives you an idea of the complexity of it. This is how I like to make my architectural diagrams, very detailed, so that we can really look at them. If that was too much, that's fine, but that's just the complexity of it; if you build your own, you'll start to really grasp this stuff pretty well, okay. So what I want to do now is show you how high availability is built into some AWS services, where in other cases you have to explicitly choose
that you want something to be highly available. So what I'm going to do is make my way over to S3, where you can create S3 buckets that let you store things. The great thing about S3 is that it's basically serverless storage: the idea is that you just choose your region, and by default it's going to replicate your data across multiple data centers, or AZs, so this one is already highly available by default with the standard tier, which is something that's really nice. But for other services, like EC2, the idea is that you launch yourself an EC2 instance, and the problem is that if you launch a single EC2 instance, that is not highly available, because it's a single server running in a single AZ. Here we would choose our subnet, and our subnet maps to an availability zone; you'd have to launch at least two additional servers, and then you'd need something to balance the traffic across the three, which is a load balancer. So in this case you have to construct your own high availability. Then you have services like Elastic Beanstalk, which is a platform as a service. We'll go to environments here, and the idea with Elastic Beanstalk, if I just click on the main service, is that you go ahead and create your application, or create your environment; you probably want to create the environment first, okay. I would choose a web server, and then the idea is I'll just name my application and my environment, and then down below you go to configure more options; we're not going to actually create it, because we don't want to create one, but the idea is that you can choose whether you want this to be highly available or not. See, it's a single instance of
free tier, and if you choose the high availability option instead, it's going to set up a bunch of stuff for you: an Application Load Balancer, Auto Scaling Groups to make it highly available, running between one and four instances. So it does everything that with EC2 you'd have to set up manually, which is really nice, okay. So some options have that. If we make our way over to RDS, and again we're not creating anything, we're just looking at the options it gives us when we start things up, we go ahead and create ourselves a new database and look at something like a Postgres database. Notice that we have a production option and a dev/test option; usually it shows the price down here, and even dev/test is $118, which isn't quite accurate, because it can get cheaper than that. But the idea is that when you choose between these options, it's going to set up Multi-AZ, meaning it will run an additional database in another availability zone and replicate the data over so that it stays highly available, and it will have auto scaling as part of it. So with some services you just choose it abstractly, and you have to understand what high availability is going to mean underneath. Hopefully that gives you a picture of high availability on AWS.
[Music] Hey, this is Andrew Brown from ExamPro, and we are looking at the AWS application programming interface, also known as the AWS API. Before we talk about that API, let's describe what an application programming interface is: an API is software that allows two applications or services to talk to each other, and the most common type of API works via HTTP requests. The AWS API is actually an HTTP API, and you can interact with it by sending HTTPS requests, using an application for interacting with APIs, like Postman. So here's kind of an
example of what a request that would be sent out looks like. The way it works is that each AWS service generally has a service endpoint; see where it says monitoring, that's CloudWatch, so sometimes endpoints are named after the service and sometimes the name is a bit obscure. And of course you can't just make an API request without authenticating and authorizing, so you have to sign your request; that's a process of making a separate request with your IAM credentials to get back a temporary token in order to authorize it. I don't have room to show it, but along with those requests you would also provide an action: when you look at the AWS API, it will show you a bunch of actions you can call, basically the same ones you'll see in IAM policies, so it could be something like describing EC2 instances or listing buckets, and they can also be accompanied by parameters, okay. We're probably not going to show you how to make an API request directly, because that's not something you would generally do; what you would do instead is use the AWS Management Console, which is powered by the API, use the AWS SDK, which is powered by the API, or use the AWS CLI. We'll cover all three, okay.
[Music] All right, so what I want to do is point you to where you'd find the resources to use the API programmatically. We're not going to actually use the API, because there's a lot more to it than what I'll show you here, but at least you'll be familiar with how it works. I'm on the aws.amazon.com website; if you type in docs at the top, it brings you to the main documentation, and what we're looking for, if we scroll down, is the general reference area where we have service endpoints. If we click in here, it talks about how a service endpoint is structured, and if we go down to the AWS API we can see some additional
information. Of course, to use the API you're going to have to sign API requests first, which is not a super simple process: you have to use an authorization header and send along credentials and things like that. If you want to know what service endpoints are available to you, search for the service endpoints list for AWS; this is the big list, and if I go down and look for EC2, which might be a common example, it tells us what the endpoints are, and as you can see they are region-based. The idea is that I could take an endpoint like this, grab it, and using something like Postman go and create a new request; it's probably a POST, I'm not sure exactly what it's supposed to be, but probably a POST, and then you'd set your authorization header. There might even be one in here for AWS; see where it says AWS Signature, so you can go there and put in your access key and secret. That's something nice about Postman: it does the request signing for you, which makes your life a lot easier. From there you'd go to your body and enter JSON; to do JSON you'd probably choose raw, drop down the format to JSON, and then send your payload, whatever it is. Again, I haven't done this in a while, because it's not a very common thing I have to do, like describing EC2 instances, but there's probably an action and some additional information you would send along. So hopefully that gives you an idea of how the API works, but in practice you should never really have to work with the API this way directly, okay.
[Music] Hey, this is Andrew Brown from ExamPro, and we are looking at the AWS Management Console. The AWS Management Console is a web-based unified console to build, manage, and monitor everything from simple web apps to complex cloud deployments. When you create your AWS account and you log in, that is what
you're using: the AWS Management Console. I would not be surprised if, by the time you're watching this video, they've already changed the default page here, since AWS loves to change the UI on us all the time, but the way you access it is via console.aws.amazon.com; when you click sign in or go to the console, that's the link it goes to. The idea here is that you can point and click to manually launch and configure AWS resources with limited programming knowledge; this is known as ClickOps, since you can perform all your system operations via clicks, okay.
[Music] Let's talk about the AWS Management Console in brief here. Of course, when you're on the home page, you go to the AWS Management Console and end up logging in, and from there we make our way into the console itself. When I say AWS Management Console, I'm referring to this homepage, but I'm also referring to anything I'm doing in this web UI, whether it's a sub-service or not. A lot of times people just call this the dashboard or the home page, but technically everything here is the AWS Management Console. You can drop down services here; if there are some you like, you can favorite them on the left-hand side, though I don't find that particularly useful. You can see the most recent ones here, and they'll also show as recently visited up here. We have the search at the top; notice there's a hotkey, Alt-S, though I don't think I ever use it. If I type in a service like EC2, it shows the service and, down below, its sub-features, and if I just click into it, this is the service console, so I would call this the EC2 console, or the EC2 service console; if you ever hear me say go to the EC2 console, that's what I'm saying. And you'll notice there's stuff on the left-hand side. If I come back here: EC2 Image Builder, EC2 Global View,
these are considered services, but if you drop down it says top features, or you go down here and it says dashboard, limits, AMIs; if you go over to the EC2 dashboard, limits and AMIs are here, and limits are right there, so those map over pretty well. Polls and documentation, knowledge base articles, marketplace, I don't think I've ever touched those in my life. This here is CloudShell; if you click it, it will launch a CloudShell, and we'll cover that when we get to that section. Here we have this little bell that tells us about open issues; I think this is for the Personal Health Dashboard, yeah, it says PHD in the bottom-left corner, so if I open that up it brings up the PHD, the Personal Health Dashboard. All right, then our region selector and our support menu, so nothing super exciting here, just giving you a bit of a tour so you know there are some things you can do. Can you change the look of this? I don't think there's any way as of yet; I'm sure they're thinking about it, because it's been a highly requested feature, but this is what it looks like as of today, okay.
[Music] All right, so I just want to describe what a service console is. Each AWS service has its own customized console, and you can access these consoles by searching the service name. You would go ahead and type in EC2, and what we refer to this screen as is the EC2 console. The reason I'm telling you this is that when you're going through a lot of labs or follow-alongs, you'll hear the instructor say go to the EC2 console, go to the SageMaker console, go to the RDS console; what they're telling you is to type the name of the service and go to that particular service's console, okay. Some service consoles will act as an umbrella console containing many AWS services: the VPC console, EC2 console, Systems Manager console, SageMaker console, CloudWatch console, these all contain multiple services. So, for example, with EC2 you
might say, okay, well, I need a security group; there's no security group console, it's under the EC2 console, okay, so just be aware of that.
[Music] So now I want to show you some of these service consoles, to distinguish how they might vary per service, okay. If we look up EC2, and we just did look at this, the interesting thing is that some consoles, like the EC2 console, are the home for other AWS services, and you just have to learn this over time. For instance, Elastic Block Store is its own service, but it's tightly linked to EC2 instances, so that's why they have it here; same thing with AMIs and security groups. These are interesting because they're basically part of virtual networking, so you'd think they'd be under the VPC console, but they're actually under EC2; load balancing and Auto Scaling Groups are also tightly coupled to EC2. If we make our way over to VPC, here it's going to contain all the new stuff; I guess this is the newest version, it looks a bit old and a little bit new, but we have a lot of different things here, like firewalls, VPNs, transit gateways, and traffic mirroring. If we make our way over to CloudWatch, CloudWatch has very focused services; they're all actually named, and it feels more like a single service with very focused sub-services: alarms, logs, metrics, events, insights, right. But you're going to notice that the UI varies highly: we looked at CloudWatch, then VPC looks like this, then EC2 looked like that. There are inconsistencies because each service team that works on a service has full control over their UI, and some of them are in different states of updating, so some may have updated the left-hand column while this part is old, or you might click around under something
else — like the EC2 dashboard, or maybe a better example is AMIs. I remember something looked old in here... yeah, see, these are the old buttons, and that's just how it is. Everything is very modular and gets updated over time, so the challenge you're dealing with is that each UI can feel like three different versions cobbled together.

One thing I found really interesting is that VPC has its own management console, but look it up in the SDK — say the AWS SDK for Ruby, which I'm using as an example because it's what I know. If you want to programmatically work with VPCs, you'd think there would be a top-level VPC namespace, since VPC has its own console. But VPC is actually tightly coupled to EC2, so when you want to use VPC programmatically, you go through EC2, because that's how it was built. What I'm getting at is that the APIs don't match one-to-one with the consoles. It's not a big deal — I'm just saying keep your mind open when you look at this stuff.

Every AWS account has a unique account ID, and it can be easily found by dropping down the current user in the global navigation — I'll pull up my pen tool here and show you; it's right there. The AWS account ID is composed of 12 digits, so it could look like this, or this, or this. The account ID is used when logging in with a non-root user, though a lot of people like to set an account alias instead, because it's tiring to remember the account ID. You also use it when creating cross-account roles — you specify the account ID of the other account that should be allowed to use the role, to gain access to resources in another account. When you're
dealing with support cases, AWS will commonly ask for your account ID so they can identify the account they need to look at. It's generally good to keep your account ID private, since it's one of several components a malicious actor could use to identify an account for attack — you don't have to be overly sensitive about it, but hide it when it's easy to do so.

All right, so let's talk about the account ID, which appears up here in the top right corner. It also appears in IAM — if we go over to IAM and look on the right-hand side, it should show it. The console keeps trying to take us to the old dashboard, but that's fine; you'll notice it's over here. I don't have MFA turned on because I'm in my IAM user account — it should be turned on for everything, that's a given — but I just want to show you where the ID is and where you might use it. One example where you'd need your account ID is creating a cross-account role. So in IAM — and sorry, we want roles, not policies — we go here and say we want to access something in another AWS account, and what we have to do is specify the account ID of the account that can assume this role. That's one place you'd use it. Another place is when creating policies. If I go back to Policies, create a policy, and choose something like S3 with the List actions, then under request conditions I should be able to limit based on account ID — the principal's account. So I'll look up the aws:PrincipalAccount condition key — you just have to get used to Googling these things.
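For reference, the kind of policy being assembled in this walkthrough ends up looking something like the JSON below. This is a sketch based on the IAM policy language — the 12-digit account ID is a placeholder, and I've picked a single List action for illustration:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:PrincipalAccount": "111122223333"
        }
      }
    }
  ]
}
```

The condition means the statement only applies when the calling principal belongs to that specific account.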
That's always what you end up doing anyway. And sure enough, we can specify an account ID — yeah, like that. So that would be the principal's account; it doesn't matter what the value is, we just put it in: StringEquals on that condition key with the account ID. Now I should be able to go over here and see the full statement... nope — sometimes that happens when it isn't fully filled out. If I go ahead and hit next: "the policy contains an error — you are required to choose a resource." What do you mean? The resource is... oh, down here, sorry. We'll just say all resources, flip over, and now it's valid, and we can see our condition saying the action is only allowed from this account ID. Other places you'll see account IDs are in ARNs. If we had an EC2 instance — we don't have one launched right now, but maybe we have some prior ones... yeah. If I checkbox this one — you might not have any prior instances, so there may be nothing for you to see — and look for the ARN... sometimes the console doesn't show the ARN for a service, and I wish AWS always showed it to make our lives easier. But even though we don't see the ARN here, it does show us the owner ID, and that's the account ID — you can tell because it's 12 digits. Hopefully that gives you a tour of the account ID and its purpose in the account.

All right, let's take a look at AWS Tools for PowerShell. First, what is PowerShell? PowerShell is a task automation and configuration management framework — a command-line shell and a scripting language. Here it is over here; if you're a Windows user, you're used to seeing it with its big blue window. Unlike most shells, which accept and return
text, PowerShell is built on top of the .NET Common Language Runtime (CLR) and accepts and returns .NET objects. AWS has AWS Tools for PowerShell, which lets you interact with the AWS API via PowerShell cmdlets. A cmdlet is a special type of command in PowerShell, written as a capitalized verb-noun pair — in this case, New-S3Bucket. We looked at the AWS CLI, which is generally used from Bash-style shells; PowerShell is just another very popular type of shell, and I wanted to highlight it for people who are used to Microsoft or Azure workloads — this exists for you.

All right, let's try the PowerShell tools — I actually haven't used this one yet, so I'm kind of curious. I am on a Windows machine, so if I open PowerShell — you probably can't see this, but if I type powershell on my computer, you'll notice I have it. That's how you'd launch it, and it looks like a blue window. If you're on a Mac you won't have that, but that's totally fine — we don't need a Windows machine, because we can use CloudShell. Make sure you're in a region that supports CloudShell; I'll switch back to North Virginia. This isn't important for the exam — it's just fun for me to go through this with you if you want to watch. So I want to change this shell over to PowerShell — how do we do that? Let's scroll down in the docs: "the following shells are pre-installed — Bash, PowerShell, Z shell... to switch to a new shell, enter the shell's program name at the command line prompt." Oh wow, that's easy. So if we want pwsh, do we
just type pwsh? Let's find out — give it a moment to think... oh, there we go; now we're using PowerShell. I would have thought AWS would give us the AWS modules pre-installed, so let's go over to the instructions and scroll down. I don't use PowerShell a lot; it's very easy to install modules — I've done it before, but I never remember how — so let's see what we can find. I want the documentation for AWS Tools for PowerShell, and I'll go to the cmdlet reference, because I just want to see some examples. We'll look for S3 — again, I've never done this before, but I'm always happy to jump into these things. All I want to do is list out the buckets, so I'll search for the word "list"... "calls the ListBuckets API operation" — I think that's what we want, so I'll click into that. From there, I'll copy the example command and paste it in here — I like that we get this little shell so we can tweak it. It shows a BucketName parameter, but I don't want just one bucket, I want a list of all the buckets owned by the caller, and it's marked required: false, so we can remove it. The next one, Select, is also required: false — "use the Select parameter to control the cmdlet output" — so let's take that out as well. I don't think we need any of these, actually; let's just run the bare command. I think there must be something we need to put in front of it, but let's see what happens... "the term is not recognized as the name of a cmdlet, function, script file, or operable program." So I think we're missing
something in front of it. Let's go to the user guide quickly, to the getting started section — I just want a super simple example. New bucket, get bucket... let's try this one, since they have it here, and it should just work, right? I'll change the region to us-east-1... "the term New-S3Bucket is not recognized as the name of a cmdlet" — so I'm guessing the cmdlet isn't installed. I would have thought they'd install it by default, so I guess we'll look at how to install it. Installing on Linux, I suppose: you can install the modularized version of AWS Tools for PowerShell — run pwsh to start a PowerShell Core session, then install the module. Like I said, it's easy to install these things. We'll hit enter, cross our fingers, and hope this is fast. Peeking ahead in the docs: "if you're notified that the repository is untrusted and asked if you want to install anyway, just hit Y." So we're waiting for that... "you are installing this module from an untrusted repository" — it's funny that it's untrusted even though it's from AWS; maybe that's some drama with Microsoft not letting AWS have an official trusted module. It looks like it's installed now, so if I type Get-S3Bucket... unless I typed it wrong, that still doesn't seem to work. If I go up and try to create a new bucket, it still doesn't recognize the cmdlet, so there must be more going on here. Back in the docs: "you can now install the module for each service." Hmm, what did we do? "You are installing the modules from an untrusted repository; if you trust it, change its installation policy by running Set-PSRepository... are you sure you want to install this module from the PSGallery?" I said yes — I gave it a capital Y — and it didn't do anything else.
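To save you the trial and error, here's roughly the sequence this walkthrough ends up at, sketched from the AWS Tools for PowerShell documentation — the module names are the documented ones, and the bucket name is a placeholder:

```powershell
# Install the installer module from the PowerShell Gallery
# (answer Y to the untrusted-repository prompt)
Install-Module -Name AWS.Tools.Installer

# Use the installer to pull in the per-service module(s) you actually need
Install-AWSToolsModule AWS.Tools.S3

# Now the S3 cmdlets are available
Get-S3Bucket                # calls the ListBuckets API operation
New-S3Bucket -BucketName my-example-bucket-123456 -Region us-east-1
```

The key point is the two-step install: the Installer module first, then the per-service module through it.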
So — oh, hold on. This one is the installer, and this other one is the actual tool we want. We installed the installer, and now we use it to install the S3 module. Okay, great — not hard. We'll say yes to all, and that's going to install... oh, we said EC2 and S3; we didn't need both, but that's fine. Now if I run Get-S3Bucket, it's recognized and lists out the buckets, and we can go and create ourselves a new bucket, so we'll do that. Then we'll make our way back over to the AWS Management Console and go to S3 — I don't need all these buckets lying around, so I'm going to delete some of them. We'll say "delete my bucket" — great — and the same for this one. Excellent. So now we have an idea of how to use PowerShell. PowerShell is really popular because the way you provide inputs and receive outputs is very standardized — it's a very powerful scripting tool, and it's a CLI tool as well. Hopefully that was interesting for you; we'll just close these off and go back to our home page by clicking the logo, and there we go.

Amazon Resource Names (ARNs) uniquely identify AWS resources. ARNs are required when you need to specify a resource unambiguously across all of AWS. An ARN has a few format variations — just notice that sometimes it has a resource ID, sometimes a path with a resource type, and the resource type can be separated by a colon or a slash. The partition can be aws, aws-cn (China), or aws-us-gov (GovCloud), because these are AWS partitions that are completely separated from each other, as we talked about earlier in the course. Then there's the service identifier — ec2, s3, iam; pretty much every service has its own identifier there. The region would
be pretty obvious: us-east-1, ca-central-1, and so on. The account ID is the 12 digits, and the resource ID can be a name or a path — for IAM users we might have user/Bob; for an EC2 instance it's the instance ID. Most ARNs are accessible via the AWS Management Console, and you can usually click the ARN to copy it to your clipboard. Here's one for an S3 bucket — notice it's a bit different, because bucket names are globally unique, so there's no need to specify the region, the account ID, or even a resource type; we can just say my_bucket. That one's really short, but in other cases it's really long — here's one for a load balancer, which has all the information: the resource path is loadbalancer/app/my-server and then an ID. Paths in ARNs can also include a wildcard asterisk — we'll see these with IAM policies, where they're really useful when you need to refer to a whole group of things.

All right, now let's take a look at Amazon Resource Names, or ARNs, hands-on. ARNs are used to reference objects, and they're very commonly used when you're working with the CLI or the SDK and need to point at something. The easiest example is S3: if we go over to S3 and create ourselves a new bucket — I'll say "my new bucket" and just put a bunch of numbers on the end, it doesn't matter — we hit create bucket, and if we click into it, the ARN should be under Properties... and there it is. There are many cases where you might want to use the ARN, and a lot of times you'll just copy it. A very common example is, again, an IAM policy, so we go over to IAM, into Policies to save myself some trouble, and we create a policy.
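To make the ARN format concrete before we fill one in, here's a small Ruby sketch — plain string handling, no AWS gems required — that splits an ARN into the components described above (the sample ARNs are made-up examples):

```ruby
# Split an ARN of the form arn:partition:service:region:account-id:resource
# into its named parts. The resource portion may itself contain colons or
# slashes (e.g. "user/Bob" or "loadbalancer/app/my-server/50dc6c495c0c9188"),
# so we limit the split to 6 fields.
def parse_arn(arn)
  parts = arn.split(":", 6)
  raise ArgumentError, "not an ARN: #{arn}" unless parts[0] == "arn" && parts.size == 6
  {
    partition: parts[1],   # aws, aws-cn, or aws-us-gov
    service:   parts[2],   # ec2, s3, iam, ...
    region:    parts[3],   # empty for global services like IAM
    account:   parts[4],   # 12-digit account ID; empty for S3 buckets
    resource:  parts[5]    # name, ID, or type/path
  }
end

# An IAM user ARN: global service, so the region is empty.
puts parse_arn("arn:aws:iam::111122223333:user/Bob")
# An S3 bucket ARN: no region and no account ID, just the bucket name.
puts parse_arn("arn:aws:s3:::my_bucket")
```

Running it shows why the bucket ARN is so short: the region and account fields come back empty for S3.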
You know, I might want to restrict someone to using only that bucket. So let's choose S3, and say I want to write to a particular bucket. We drop down the access levels, and there are a lot of options here — I'll get rid of the read actions and expand Write, because it's creating too much work for me otherwise, and I just want PutObject; that's what we use to put something into a bucket. Then we expand the resource section, and notice it says "Add ARN". We could type the bucket name in, or just paste the ARN in at the top — it's often easier to grab it, but if you don't know an ARN, a lot of times you can expand this form and fill it in, and that's how you build one. Put that there... oh, you can also do it that way, which is easier too. Now if I go to the JSON tab — is it valid? There we go. This policy allows somebody to put an object into this particular bucket, and that's an example of where we'd use an ARN. Another one: if you're using AWS Support, you might have to give an ARN so the cloud support engineer can look at exactly the right resource to help you.

Hey, this is Andrew Brown from ExamPro, and we are looking at the AWS Command Line Interface. Before we do that, we've got to define some terms. What is a CLI? A command line interface processes commands to a computer program in the form of lines of text, and operating systems implement a command line interface in a shell. Then you have a terminal — a text-only interface with input and output. Then a console — the physical computer used to physically input information into a terminal. Then the shell — the command line program that users interact with to input commands. Popular shell programs are
Bash, Zsh, PowerShell, and — you might remember this one — the MS-DOS prompt, which has obviously been around a very long time. Hopefully that primes your mind for what a shell is. Just so you know, people commonly (and erroneously) use "terminal", "shell", and "console" interchangeably to describe interacting with the shell — if we say shell, console, or terminal, we mean the same thing. There is technically a difference between the three, but most people don't care, and I wouldn't worry about it too much. So now let's look at the AWS Command Line Interface, which allows you to programmatically interact with the AWS API by entering single or multi-line commands into a shell (here I say "or terminal", but really it's just the shell). Here's an example: we're describing EC2 instances, and we're getting the output back in a table-like view because we asked for it. The AWS CLI is a Python executable program, so Python is required to install it. The AWS CLI can be installed on Windows, macOS, Linux, and Unix, and the name of the CLI program is aws — you'll notice it in the top left corner. There's a lot more to this, but this is all we need for now.

Hey, this is Andrew Brown from ExamPro, and we're taking a look at the AWS CLI. The easiest way to get started is via CloudShell — you'll notice this little icon in the top right corner; that's CloudShell, and it's going to allow us to do things programmatically without having to set up our own environment. I'll click it, say "do not show again", and close. By the way, if you don't see CloudShell, it could be your region — if I go to Canada Central, it isn't there, and if I search for CloudShell it will say it's only supported in certain regions, which is a bit annoying. Once CloudShell loads, it already has our credentials loaded within our
account, so this is going to save us a lot of time getting set up — with the exception that you have to wait for the environment to be created. It takes a little bit of time, but it's not that bad. While that's waiting, I'll show you how you'd install the CLI yourself. If we type "aws cli install" and go to the version 2 instructions for Linux, they'll have the steps: you curl the zip down, unzip it, and run the installer, and once it's installed you'll have the aws CLI commands. CloudShell is still spinning up, so maybe I can show you what it's like to install the CLI by hand. One easy way is to go to GitHub — it doesn't matter which repository, I'm just looking for anything here — and open it in Gitpod. So we go to the address bar and type gitpod... maybe it's gitpods... oh, you know what, it's gitpod.io — that's why. If we go back and use gitpod.io, it will launch me a temporary environment, and this is outside of AWS, so I'd actually have to install the CLI there — a great opportunity to show you how. I'm doing it this way because Gitpod is free to use, and it sets up an environment that lets us simulate installing the CLI. Meanwhile, our CloudShell environment is ready, so here is the CLI — let me bump the font up as large as it goes, and between light and dark, dark sounds good to me. If we type aws and give it a moment, we can see we have the command. So if I say aws s3 ls — whoops — that lists out the buckets; this is what's currently there. If you're wondering how I know what these commands are, I can just type "aws cli commands" into a search, and
we go to the CLI reference, where we can find anything we want. I'll go down to S3 and scroll through — it shows commands like cp, mv, rm, sync, mb, rb, and ls. If you're looking for a particular command, say ls, you click into it and it explains all the little options, and it always gives examples, so I can see usage like that. Now, say I want to create a new S3 bucket. If we type aws s3 and just hit enter, it should tell us the subcommands — or maybe if I add help like this... scrolling down, I guess it just pulls up the documentation with a tiny summary. So, because I want to create a bucket, I'll search "aws s3 cli create bucket", go there, and then do what I always do — jump straight to the examples. We have aws s3api create-bucket — and I know it's unusual that there's both an s3 and an s3api; I don't know why that is, but it's always been that way and I just don't question it anymore. So I'll paste that command in, but I do want to change the bucket name, because bucket names have to be unique — to make sure I get what I want, I'm putting random numbers in it — and we'll choose us-east-1 as the region. If I wanted to do other things, I could scroll up and look at some flags, but it all looks fine to me, so I'll go back and hit paste... and it created that bucket for me. If I go over to S3 and wait a moment, we can see the bucket now exists. If I want to place something in that bucket, I can just touch a file — touch is a Linux command that makes an empty file, so
we'll call it hello.txt. Then it's aws s3 cp — cp to copy — and I give it the local path, hello.txt, and then the bucket address, which is s3:// followed by the bucket name. We named it that long thing, so I'm not even going to try typing it by hand. Then I say where I want to put the file — hello.txt — and if I'm right, that should work as expected... and it says it uploaded the file. I make my way back over to S3, refresh, and there's the file. If I want to copy the file back locally, I'll first delete the original hello.txt — ls shows there's nothing there — and then, whoops, just reverse the arguments: the s3:// address first, then hello.txt as the destination. If I do ls, there's the file. If you don't know the address of a bucket, a lot of times you can find it in the console — they're always changing this UI on me, but under Properties there's the ARN, and a good trick is to go into an actual object, which gives you the full S3 URI, so I could have grabbed and pasted that. You learn it over time anyway — it's not hard to remember: s3:// plus the unique bucket name. Now I do want to show you how to install the CLI by hand. Here I'm in Gitpod — let me see how to change this to a dark theme, because the light one is hard on my eyes; we'll go down to color theme and pick a dark one, there we go. This is a temporary workspace, so when I close it, it'll be gone, which is totally fine. I type aws to confirm it's not installed, then go over to the install instructions — Gitpod runs Linux by default, so I already know I'll use the Linux steps, and we want version 2. For the latest version, use this command; for a
specific version, use the other one — no, we just want the latest. I'll copy this — whoops, yes, allow pasting — paste it in and hit enter; then the next command, enter; then the next, enter. "You can now run aws" — so we type aws, and there's the command. The only thing is, if we do aws s3 ls it's not going to work, because we don't have any credentials set. Give it a moment to think... it says "unable to locate credentials — you can configure credentials by running aws configure." So we type aws configure — and by the way, if this font is too small, I believe I can bump it up like this; not a great way to do it, but it works. It asks for the AWS access key ID, so let's go over to IAM and find my particular user. If you remember, when we first created our account it generated an access key, so I go to Security Credentials. We have a key here, but I need the secret, so this key is useless to me — I'm going to deactivate it, because I don't even want it, and create myself a new one. Now I have an access key ID and a secret. Whenever you generate these, never, ever, ever show anyone what they are — they are yours and yours alone. (This is a demo account, so we're fine; we'll just close that for now.) I'll go back over to Gitpod and hit enter: that's the ID; then I grab the secret, enter, paste; and I want the region to be us-east-1 to save myself some trouble. You can change the output from JSON to tables, but I'll leave the default. And now if I type aws s3 ls, I get a list. So if I want to grab that file, I grab its S3 URI and type aws s3api — or sorry, it's just ls — sorry, cp — paste the link in, and say hello.txt... and I must have done the command wrong. It's because we're missing s3. I just hit up
on the keyboard to get that command back, fix it, and type ls for list — I have some other code here from the repo (again, any repo on GitHub works, it doesn't really matter), but you'll see the file is there. I probably shouldn't have used this repo, because it makes a bit of a mess, but it's pretty straightforward. One more thing to show you is where those credentials are stored. By default, they live in a hidden directory in your home directory called .aws. If I ls in there, you can see there's a config file and a credentials file, and cat lets me print out the contents. The config file says the default region is us-east-1 — this is a TOML-style file, even though it doesn't have .toml on the end; I just know that by looking at it. config lets you set defaults that apply to all of your credentials, and the credentials file holds the actual keys, so if you wanted to set them by hand, you could edit them right in there. You can also set up multiple credentials: I'll open the file in vi, and if you wanted multiple accounts, you'd add a profile like "exampro" and repeat the entries with different keys; then you can pick which one a CLI command uses. By the way, I'm using vi — if you've never used Vim, it's a bit tricky, so you might want nano instead if you're new to this, because it uses regular key shortcuts and shows them at the bottom, like Ctrl-X to exit. Anyway, if I go into the file and delete the original default profile, and then try that same command again — even though we already have that file — it should either hang or complain.
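For reference, the two files look something like this — the layout follows the AWS CLI configuration docs, the keys shown are AWS's documented placeholder examples, and only the "exampro" profile name comes from this walkthrough:

```ini
# ~/.aws/config — defaults that apply to your profiles
[default]
region = us-east-1

# ~/.aws/credentials — the actual keys, one section per profile
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

[exampro]
aws_access_key_id = AKIAI44QH8DHBEXAMPLE
aws_secret_access_key = je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY
```

You'd then select the second set of keys with the --profile flag, e.g. aws s3 ls --profile exampro.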
Kill it with Ctrl-C if it hangs. If I do aws s3 ls now, notice that it just hangs — "unable to locate credentials" — because there's no default profile anymore. But if I add --profile and say exampro, it will use that profile. So that's the way we do it, and hopefully that gives you a crash course on the CLI. I'm going to close these off — you can delete the bucket if you don't want it; it's probably a good idea, so I'll say permanently delete. Very good. Close that off, and that's the introduction to the CLI — there you go.

Hey, this is Andrew Brown from ExamPro, and we're taking a look at software development kits. A software development kit, or SDK, is a collection of software development tools in one installable package. You can use the AWS SDK to programmatically create, modify, delete, or interact with AWS resources. The AWS SDK is offered in a variety of programming languages: Java, Python, Node.js, Ruby, Go, .NET, PHP, JavaScript, and C++. Here's an example of some Ruby code where we create ourselves an S3 bucket and upload a file to it.

Okay, so now I'm going to show you how to use the AWS SDK, and to do that we need some kind of IDE — basically a code editor. We looked at Gitpod, which is a third-party service, and that's fine, but let's take a look at Cloud9, because it's built into AWS. I'll type cloud9, go to the IDE, and launch myself a new environment: hit create and name it "my sdk environment". We have some options — create an EC2 instance for direct access, create it via Systems Manager, or connect to a remote server with SSH — and I'll leave the default. Then we choose the instance size; I'll leave it on t2.micro, because that is the
free tier. We scroll on down — for the AMI I'll stick with Amazon Linux 2 — and we can have the environment turn itself off after 30 minutes of inactivity, which is a great cost-saving option for us. We'll hit next and create environment, and we'll have to wait a few minutes for it to launch. While that's going, let's go to Google and type "aws sdk" to get to the main page. The idea here is that there are a bunch of different languages you can use: C++, Go, Java, JavaScript, .NET, Node.js, PHP, Python, and Ruby. I'm a really big fan of Ruby — I've been using it since 2005 — so that's what we'll use; it's really easy to work with and a great language. Down below, it shows all the different resources, and if we go to the Ruby SDK, we have the developer guide and the API reference. The docs tell you how to get started — they even say "hey, get started with Cloud9", which is great too — and they show you how to install the SDK. When we open up the API reference, this is what it looks like. A lot of times when I want to do something — say with S3 — I scroll down and look for S3, and then I just kind of scroll around and explore; sometimes you have to expand a section and go into the client. Every API reference is organized slightly differently, so you have to figure out how to navigate it. I'm under S3 right now looking for the client — I just know from memory that this is where it is. First you create yourself a client, and then you can call API operations. So if I want to list buckets, I search the word "list", scroll down, and there it is — I click into it and have an example of how to list buckets. Let's go back to Cloud9 — it is ready, and it started
in dark mode. if yours is not in dark mode — which, honestly, why wouldn't you want dark mode — go up to, i think it's file... where is it... preferences. you've got to click the cloud9 option; i'm just seeing if it remembers my settings. i really like 2 soft tabs here, but there should be something for themes down below... that doesn't seem to be it... it used to be... oh, here it is. if you go here, just choose whatever you want — i'm on jett dark — so if it's on classic light or something you don't like, you can fix that there. i'm just going to fiddle with my settings, because i really like to use vim keys. i don't recommend changing this if you are not a programmer, but i'm going to change it so i can type here efficiently. i'm just looking for the option — they moved it on me; where did they move it? it'd probably be under key bindings... ah, vim mode, there we go. again, don't do that; this is just for me, so i can move around in a different way. and by the way, it looks like on this default screen we could have just changed it here — i clicked through all that for nothing; it was here the entire time. but what we need is to make sure we have our credentials. so type in `aws s3 ls` — that's the sanity check i always like to do to make sure i have credentials. notice that we didn't have to set up any credentials; they were already on this machine, which is really nice. so i'm going to create a new file here, and it's okay if you don't know anything about ruby — we're just going to have fun and follow along. i'm going to do example.rb. i'm going to make sure ruby's installed by doing `ruby -v` — it is installed, which is great. you need a gemfile, so create a new gemfile here, and if we go back to the installation guide, we need the aws sdk gem. actually, i'm going to look at how to generate a gemfile, because there's some stuff
that goes at the top of those files, like this here. i think we just need this line, so i'm just going to grab that — whoops — and paste it in. you can do `gem 'aws-sdk'`, and that will install everything, but we only want to work with s3 — and this is going to vary based on each language — but i know that if we type in aws-sdk-s3 we'll get just s3, and that's all we really need. once we have that, what we'll need to do is a bundle install. we're going to make sure we're in the correct directory — i'm going to type `ls` down below; notice the gemfile is there. and by the way, if the fonts are too small, i should probably bump those up. let's see how we can do that: editor font size... user settings... good luck trying to find it today... project... no... you'd think it'd have to be under user settings, right? ah, here it is — this is probably for the editor, so we'll go to 18 here for the code editor. i'm trying to find the one for the terminal... probably over here... there we go, much easier. okay, so notice we have example.rb and gemfile, so we're in the correct directory. i'll make sure i save that, and i'm going to type in `bundle install`, and that's going to install the gems — give it a moment, it's going to fetch — notice that it installed aws-sdk-s3 and everything it was dependent on. so now if we go over to our example.rb file: really, when you're coding for the cloud, you can pretty much copy-paste everything. over here we found this code for s3 list buckets, so i'm going to go ahead and paste that in. i know it looks really complicated, but we can quickly simplify it: i know this part is just the output, so i don't need that, and in ruby you don't need parentheses or curlies if you don't have anything in them. so all i need to do is define a client. if i go to the top of this file — i think we're in the client right now — all the way at the top, that's what we need, and so i'm going to paste that in.
we can set the region here, so i'm going to say us-east-1. then you'd have your credentials — but because the credentials are on the machine in the credentials file, they're going to auto-load here, i believe, so i don't think i need to set them; i'm just going to take that out for a second. and i can do this if i want — it's just slightly different syntax; it might be easier for you to read this way — and i don't need double client there. so we have the client — i like to name it s3 so i know what it is — and i put `puts` for the response; i'm going to do `.inspect`. `puts` is like print. and so now i type `bundle exec` — just to make sure it runs in the context of our bundler file — `ruby example.rb`... we have a syntax error on this line, an unexpected thing here. oh, it's because of this — it's because i commented it out. so i'm just going to fix the curly braces and the comment here. actually, to make it a bit easier, i'm just going to bring this down like this, paste that there, and try again. uninitialized constant aws — oh yeah, we have to require it, so we have to `require 'aws-sdk-s3'`. we'll hit up, and... we got a struct back, so it is working; we are getting an object back. if we want to play around with this a bit more, i'm going to install another gem called pry — pry allows us to interactively inspect code. so we do `bundle install`, i go back to the ruby file and put a `binding.pry` in there, and then i hit up and do `bundle exec ruby example.rb`... i installed it, right? `bundle install`, yes... undefined method pry — oh, because i have to require it; bad habit here. we'll hit up, and now i have an interactive shell and i can analyze that object. we have a response: if i type `resp`, i have the struct object; i can type `buckets`, and it's showing me a bucket; i can get its name... oh, i think it's an array, so i'd index with zero, or i could say `first`.
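pieced together, the little script assembled in this demo would look roughly like this — a sketch, not the course's exact file; it assumes the aws-sdk-s3 gem from the gemfile and credentials already on the machine, so it won't run without those:

```ruby
# example.rb — list the buckets in the account (sketch)
require 'aws-sdk-s3'
require 'pry'

# region set explicitly; the access key and secret auto-load from the
# machine's credentials file or from environment variables
s3 = Aws::S3::Client.new(region: 'us-east-1')

resp = s3.list_buckets
# binding.pry   # uncomment to drop into an interactive shell and poke at resp

# each entry in resp.buckets is a struct with .name and .creation_date
resp.buckets.each { |bucket| puts bucket.name }
puts resp.inspect
```

run it in the context of the gemfile with `bundle exec ruby example.rb`, just like in the video.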
this is just how the ruby language works: we say `name`, and i get the name, the creation date — so you get the idea. whatever you want to do, you search for it. you say, i want to delete a bucket, i want to create a bucket, and you look for it. so i say create bucket here, i click on this, and i can see the options. they are always really good about giving an example, and then down below they always tell you all the parameters you have. so that's how the sdk works. the credentials were soft-loaded here, but you could easily provide them yourself — i should show you that before anything else, just because there are some variations there; i'm just trying to look for it, because it is separate code. so you could do this — this is one way of doing it, separate from the code, if you only wanted to configure it once — because you could have a lot of clients, and you wouldn't want to put the region in every single client. so i could take this and put it right here. and this is the file where we have the credentials — this would be our access key id and secret. you never want to put your credentials directly into your code, so if i go and `cat` the credentials file — you would never want to do this, but i'm just going to show it as an example — oops, i've got to get out of this; exit... cat the credentials... oh, did they not even show it on this machine? which would be smart — we wouldn't really want to see our credentials here... hit up, say `ls`... oh no, it's there. cat — whoops — credentials... there it is. so if we look here, we can see that there are credentials set. it's a little bit different — we have this session token; i guess it's to make sure the credentials expire over time. but if i was to take these and just paste them in here, that's one way you would do it — but you never, ever want to do this.
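to make the shape of that credentials file concrete, here's a tiny pure-ruby sketch that parses the same ini-style format the video just cat'd out — the profile names and key values below are made-up placeholders, not real credentials:

```ruby
# the shared credentials file is plain ini text: a [profile] header
# followed by key = value pairs. these values are fake placeholders.
sample = <<~INI
  [default]
  aws_access_key_id = AKIAEXAMPLEKEY
  aws_secret_access_key = examplesecret123

  [exampro]
  aws_access_key_id = AKIAOTHERKEY
  aws_secret_access_key = othersecret456
INI

# a minimal ini parser: returns { "profile" => { "key" => "value" } }
def parse_credentials(text)
  profiles = {}
  current = nil
  text.each_line do |line|
    line = line.strip
    next if line.empty? || line.start_with?("#")
    if line =~ /\A\[(.+)\]\z/
      current = Regexp.last_match(1)
      profiles[current] = {}
    elsif current && line.include?("=")
      key, value = line.split("=", 2).map(&:strip)
      profiles[current][key] = value
    end
  end
  profiles
end

creds = parse_credentials(sample)
puts creds["exampro"]["aws_access_key_id"]  # → AKIAOTHERKEY
```

a real file may also carry an `aws_session_token` line for temporary credentials, like the one seen on the cloud9 machine.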
you never want to hardcode credentials, because you'll end up committing them to your code. this is really dirty to do, so i don't ever recommend it. if you wanted to have this applied to everything, you could put it up here, so that when we call the clients we don't have to do it each time — and of course, if the credentials are loaded on the machine, you don't have to do it at all. the other thing is, if you want, you can load them in via environment variables; that's usually what you want to do. so you'd say AWS_ACCESS_KEY_ID, and then AWS_SECRET_ACCESS_KEY, and you'd set those by doing — i think it's export; that's how environment variables are set in linux. you'd think i'd know after 15 years of doing this, but i never remember. so you type in export, and you say something like — i'll show an example to see if it works — hello equals world, and if i do echo on hello like that, see, it prints it out. so that's how you set them — but there are actually very specific variable names that aws uses for the api, these ones here, so you always want to use those. you put that in there — and of course, if they're already set on your machine, you don't have to specify them, because the sdk would auto-load those environment variables. i don't think they're set right now — if we type in echo, just take a look, are they going to get auto-loaded? no. but anyway, just as an example — and actually, they show them right here, so you see your access key — we go and type export, i paste the key in, go to the front of it, and type AWS_ACCESS_KEY_ID equals, enter. and now if i do echo on AWS_ACCESS_KEY_ID, it shows up. i just wanted to show you how it can vary, and the conditions around it. so yeah, that is the aws sdk. a lot of times you're just copying and pasting code.
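as one more view of the environment-variable option above, the same export/echo idea can be shown from ruby, since the sdk reads these exact variable names out of the process environment — the values here are fake placeholders, not real keys:

```ruby
# these are the exact environment variable names aws tools read;
# the values below are fake placeholders, not real credentials
ENV["AWS_ACCESS_KEY_ID"]     = "AKIAEXAMPLEKEY"
ENV["AWS_SECRET_ACCESS_KEY"] = "examplesecret123"

# reading ENV from ruby is the same idea as `echo $AWS_ACCESS_KEY_ID` in bash
puts ENV["AWS_ACCESS_KEY_ID"]   # → AKIAEXAMPLEKEY
```

with temporary credentials, like the ones on the cloud9 machine, there's a third variable, AWS_SESSION_TOKEN, alongside these two.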
you're just tweaking it — you're not really writing real programming. so hopefully that is less intimidating. i'm just going to close these off, and i want to close down this cloud9 environment — i might have to reopen this in another tab, go to the management console, then go over to cloud9, close this tab, and then delete the environment; i'll just type delete here. even if you didn't, it would turn off after 30 minutes, and you have that free tier, so it's not that big of a deal. it's up to you whether you want to use cloud9 or gitpod. cloud9 is really good because it runs on a virtual machine, so you have a container runtime there, and it's very easy to run containers on it, whereas i've had some issues with gitpod — but those are the two. [Music] well, let's take a look at aws cloudshell, which is a browser-based shell built into the aws management console. cloudshell is scoped per region, it has the same credentials as the logged-in user, and it's a free service. this is what it looks like, and the great thing about it is that if you have a hard time setting up your own shell or terminal on your computer — or maybe you just don't have access or privilege to do so — it's great that aws makes this available to you. what you do is click the shell icon up at the top, and that will expand this here. some things to note about cloudshell: it has some pre-installed tools — the aws cli, python, node.js, git, make, pip, sudo, tar, tmux, wget, vim, and more. it includes one gigabyte of free storage per aws region, it will save the files in your home directory for future sessions in the same region, and it supports more than a single shell environment: it has bash, powershell, and zsh. aws cloudshell is available in select regions, so when i was in my canada region i was like, where's the little
shell icon — but i realized it's limited to some areas. [Music] hey, this is andrew brown from exam pro, and we're taking a look at infrastructure as code, also known as iac. this allows you to write a configuration script to automate creating, updating, or destroying your cloud infrastructure. the way you can think of iac: it's a blueprint of your infrastructure, and it allows you to easily share, version, or inventory your cloud infrastructure. aws has two different offerings for iac. the first is cloudformation, commonly abbreviated to cfn, and this is a declarative iac tool; then you have the aws cloud development kit, commonly known as cdk, which is an imperative iac tool. so let's just talk about the difference between declarative and imperative, and then we'll look at each of these tools a little more closely. declarative means what you see is what you get: it's explicit, it's more verbose, but there's zero chance of misconfiguration — unless the file is so big that you're missing something. commonly, declarative files are written in things like json, yaml, or xml; for cloudformation it's just json and yaml. so that's that side. for imperative, you say what you want and the rest is filled in, so it's implicit: it's less verbose, you could end up with some misconfiguration — that's totally possible — but it does more than declarative, and you get to use your favorite programming language, maybe python or javascript. actually, cdk does not support ruby right now, but i just have that in there as a general description of what imperative is. [Music] all right, so just a quick look at cloudformation. cloudformation allows you to write infrastructure as code as either json or yaml. the reason why: aws started with json, and then everybody got sick of writing json, so they introduced yaml, which is a lot more concise — which you see on the right-hand side. cloudformation is simple, but it can lead to large files, or it's limited in some regards to creating
dynamic or repeatable infrastructure compared to cdk. cloudformation can be easier for devops engineers who do not have a background in web programming languages — a lot of the time they just know scripting, and this basically is scripting. since cdk generates out cloudformation, it's still important to be able to read and understand cloudformation in order to debug iac stacks. knowing cloudformation is kind of a cloud essential: when you go into the other tiers of aws, like solutions architect associate or professional, or any of the associates, you need to know cloudformation inside and out. [Music] okay, so what i want to do now is introduce you to infrastructure as code, and we're going to take a look at cloudformation. we were just using cloud9 for the sdk, so we're going to go back and create ourselves a new cloud9 environment, because we do have to write some code. i'll go ahead and hit create here, and i'm going to just say cfn — that's short for cloudformation — example, and we'll hit next step and create ourselves a new environment: t2 micro, amazon linux 2 is totally fine, we'll hit next; it'll turn off after 30 minutes; we'll be fine, we're within the free tier. we're going to give it a moment to load up — and remember, you can set your theme, your keyboard mode, whatever you want, as that loads. as that's going, we're going to look up cloudformation. cloudformation is very intimidating at first, but once you get through the motions of it, it's not too bad. we'll go to the user guide, as we always do. getting started is going to tell us some things, to read about yaml files — i don't think i really need to read much about this here, so i think we'll just start looking up some code. something that might be interesting to launch via cloudformation is an ec2 instance, so that's what i'll do: i'll type in what i want — an ec2 instance — and i'll just start pasting in code. so if we scroll on down below here, i'm going to
go to examples, because i want a small example here — this is something that i might want to do... and we're going to give that a moment, it's almost done... you could do a database... come on... as that is going, i'm going to open a new tab and make my way over to cloudformation. you can see i have some older stacks here — notice that when we create a cloud9 environment, it actually creates a cloudformation stack, which is kind of interesting. but if we go here, we can create a stack, and we can create a file and upload it. okay, this is good. i'm going to go ahead and make a new file; we'll call it template.yaml. just so you know, the yaml extension can be .yml or .yaml — there's a big debate as to which one you use; i think aws likes it when you use the full version, so i stick with .yaml. i'm going to double-click into that, and from the ec2 example i'm just going to copy this and paste it in here, and i'm going to type in resources — oops, capital. so that's a resource i want to create. when you write cloudformation, you always have a template format version, so i just need a basic example at the top. i guess a simple one is a hello world bucket — maybe we should do a bucket, because it'll be a lot easier; we don't have to make our lives super hard here. but what i'm looking for is the version, because that's the first thing you specify — i'm just trying to find it within an example here... oh, for frick's sakes... cloudformation format version... so they don't have the format version, and it's going to complain... there it is. so we'll copy that, go back over here, and paste it in. it might be fun to do an output here, so i'm going to do outputs — and maybe instead of this, we'll type in aws s3 cloudformation, because what i'm looking for is what we can set as an output. so we'll say return values here. maybe we just want it to return the domain name, so we'll say value, ref that — that's going to get the
reference for it — and we have to say hello bucket, type string. i'll say outputs, cloudformation example. even though i've written tons of cloudformation, if you're not doing it day in, day out, you start to forget what it is. so here, for outputs, we need a logical id, description, value, and export. that is what i want, so i'm going to go ahead and copy that back here — this is just so that when we run it, we can observe an output from the cloudformation file. the logical id is whatever we want, so hello bucket domain — it's funny, because that would be the format for terraform; i was getting that mixed up. so: the domain of the bucket; the value here is going to be a ref to the hello bucket's domain name; the export is the value to export... can i get an example here... oh, you know what, export is for cross-stack references — we don't need to do that. okay, so that's fine. what we'll do is set that and take out our old one, and this should create us an s3 bucket. with cloudformation you can provide a template by url, or you can upload a file directly — i'm just trying to decide how i want to do this. you can also use a sample file or create a template in the designer. i'm going to go over to the designer, because then we can just paste in what we want. so if i go over to yaml here, and we go back over here, i copy this and paste it in, and we hit the refresh button. nobody ever uses the designer, but this is just an easy way for me to show this... it's not really working... maybe i go to template... there... refresh... there we go, there's our bucket. it's nice to have a little visualization, and i believe this is going to work as expected. so now that we have our designer template, i think if we hit close — what's this button say, validate template? probably a good idea. validating the template... template contains errors: unresolved resource
dependency in the output block of the template — hello bucket domain. seems like it should be fine... let's go — oops — let's go back over here. that's what i did: i said ref that value. oh, maybe it's get att... okay, it's GetAtt — get attribute... cloudformation... i can't remember if there's an r on the end of it... oh, it's just att. this is for when you're trying to get a returned intrinsic value: ref is the default one, but any time we want an attribute of a logical resource, GetAtt with the logical name and the attribute is how we get it. so what i'm going to do is hit refresh and validate one more time — now it's valid. if i hover over this, is it going to upload it and create the stack? we could save this — save it — and it saves it into an s3 bucket, so we'll say hello bucket, and now we have this url, so i'm going to copy it. honestly, i never use this editor, so it's kind of interesting. i'm going to leave, and we're probably going to hit create stack — i just find it a bit easier to do it through here. so go back, create the stack, paste in the url, say next, and we'll say my new stack. and i didn't see what the name of the bucket was — oh, there's no name, so it's going to randomize it; that's perfect. so we'll go next; we have a bunch of options here; we'll hit next; give it a moment... i guess we have to review it... create the stack. and this is the part where we watch: it says create in progress, and we wait, and we hit refresh, and we can see what's happening — it's trying to create a bucket. if we go to resources, this is a lot easier to track, because you can see all the resources being created. notice that when you're creating an s3 bucket in the aws management console it's instantaneous, but with cloudformation there's a bit of delay, because there's some communication going on behind the scenes. but here it is — and notice, if we go to our outputs, this is the value of the bucket's domain name.
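for reference, the finished template from this walkthrough would look something like this — a sketch reconstructed from the steps above, with logical ids that are just the names used here, not anything aws requires:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Resources:
  HelloBucket:
    Type: AWS::S3::Bucket
Outputs:
  HelloBucketDomainName:
    Description: the domain name of the bucket
    Value: !GetAtt HelloBucket.DomainName
```

`!GetAtt HelloBucket.DomainName` is the attribute lookup the validator was asking for, whereas `!Ref HelloBucket` alone would only return the bucket's name.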
if we were using static website hosting — which is not what we're doing with it — we could also have an export name, which would be used for cross-referencing stacks; that's not something we care to do. but yeah, that's how you create a stack that way. we can also do it via the cli, so what i can do is look up the aws cli cloudformation commands, because cloudformation has its own set. if i go here, there's a new one and an old one. so if we go to create-stack — yeah, there are things like create-stack and update-stack... so if we wanted to do it this way... i copied this; i'm just going to put it in my readme for a second. here what you do is say my new stack, and you can provide the template url, or you can specify the local path — we have a template body. so i'm going to grab that — this would be the yaml — and i need to specify this file, template.yaml. i'm just going to run pwd here to get the full path, paste that in — oops — and do ls. so that gives us the full path of the file; you can also specify the template url. this should work as well, if i take this and paste it in as a command... it's unable to locate the parameter file — there are three slashes there; we'll just fix that... paste... unable to load param file, no such file or directory — and there's a t missing. don't be like me: make sure you don't make any spelling mistakes. i can type clear down here so i can see what i'm doing; we'll hit enter — whoops — unable to load the parameter file, no such file or directory... it didn't want the forward slash. another thing we can try: i think it will take a relative path, so if i do this it should work — i don't ever remember having to specify the entire path. an error occurred when calling create stack: the name my new stack already exists. if i go back over here and give this a refresh — oh, that's what we named our
stack, the one that we did. so i'm going to say stack2. format: unsupported structure when calling the create stack operation — are you kidding me? i do this all the time... template body, yaml file, cloudformation, unsupported structure... take a look here... oh, you know what, i think this one's out of date — that's why. so what we can do is go to our old stack here, where we can actually see the template. i'll copy it — whoops — and paste that in there, and now — so you know, that's the reason it wasn't working... we'll hit enter... unsupported structure. it should be supported. let's see if cloudformation can help us out... apparently there have been very unhelpful error messages around this, so try the validate-template option. i wonder if we could just do this; maybe that would help. i'm just hitting up to try running it again — nope. i guess we can try to validate it here... i'm not having much luck today... so we'll just say this... maybe it's not even loading the file from where it is... so there are no errors. i'm just going to make this one line... created! so for whatever reason i must have had a bug there, and sometimes putting it on one line helps, because i must have made an obvious mistake. now we can see the stack is creating — it's doing the exact same thing, though it's creating a different bucket. if we go over to our s3 here — again, you don't need to be able to do this yourself to pass the exam; i'm just trying to show you what it is, so you absorb some knowledge of what's going on — notice down below that it uses the stack name followed by the logical name of the resource. what we'll do is wait for that to create; once it's created, we can delete these stacks. we could also use the aws cloudformation cli to say delete-stack, but i don't want to bore you with that today. so we'll hit refresh here and wait for the stacks to vanish.
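for the record, the working one-liner that all that fumbling was aiming at looks roughly like this — a sketch; the stack name and file name are just the ones used in this demo, and `file://` takes a path relative to where you run the command:

```shell
aws cloudformation create-stack \
  --stack-name stack2 \
  --template-body file://template.yaml
```

the matching cleanup is `aws cloudformation delete-stack --stack-name stack2`, the same delete-stack command mentioned above.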
okay, those are gone. what i'm going to do is kill this cloud9 environment — if there's a way to do it from here, i have never known how... go back to your dashboard — well, that's nice to know. we'll go ahead and just delete this, close that tab, and now we are all in good shape. that was our introduction to cloudformation. [Music] let's take a look here at cdk. cdk allows you to use your favorite programming language to write infrastructure as code — and technically that's not true, because they don't have ruby, and that's my favorite — but some of the languages include node.js, typescript, python, java, and .net. here's an example in typescript: typescript was the first language introduced for cdk, and it's usually the most up to date. cdk doesn't always reflect exactly what's in cloudformation, but i think they're getting better at that. cdk is powered by cloudformation — it generates out cloudformation templates, so there is an intermediate step. it does sometimes feel a bit slow, so i don't really like that, but it's up to you. cdk has a large library of reusable cloud components called cdk constructs, at constructs.dev — this is kind of like the concept of terraform modules. it is really, really useful: they're well written, and they can reduce a lot of your effort. cdk comes with its own cli — and i didn't mention this before, but cloudformation also has its own cli. cdk pipelines allow you to quickly set up ci/cd pipelines for cdk projects; that is a big pain point for cloudformation, where you have to write a lot of code to do this, whereas cdk has it off the bat, making it really easy for you. cdk also has a testing framework for unit and integration testing — i think this might be limited to typescript, because i didn't see any for the rest of the languages, but i wasn't 100% sure there. one thing about cdk is that it can be easily confused with the sdk,
because they both allow you to programmatically work with aws using your favorite language. but the key difference is that cdk ensures idempotency of your infrastructure — such a hard word to say. what that means is that if you use cdk to say, give me a virtual machine, you'll always have a single virtual machine, because it's managing the state of your infrastructure, whereas with the sdk, if you run it every time, you'll end up with more and more servers — it's not managing state. so hopefully the difference there is clear. [Music] okay, so we looked at cloudformation, but now let's take a look at cdk, the cloud development kit. it's just like cloudformation, but you use a programming language in order to implement your infrastructure as code. i don't use it very often — i don't particularly like it — but if you are a developer and you don't like writing cloudformation files and you want something more programmatic, you might be used to it. this one, i think, should be deleting, because we were deleting the last one — notice how it's grayed out; i can't select it, so don't worry about that. create a new one; we'll say example; we'll hit next: t2 micro ec2 instance, amazon linux 2 — you know the drill, it's all fine here. we'll go ahead and create ourselves a new environment and let it spin up. as that's going, we're going to look up the aws cdk, and we probably want to go to github for this, because it is open source. i want to go to getting started — i have used this before, but i can never remember how to use it. probably the easiest way to use it is with typescript, so here's an example: initialize a project, make a directory... oh, first we've got to install it, right? so give that a moment... so this is node — you know how we did bundle install? this is the same thing, but for node: install or update the
aws cdk cli from npm — we recommend using this version, etc., etc. so again, we're just waiting for that to launch, but as we wait, it's very simple: we install it, create a directory, go into that directory, and initialize the example. it's setting up an sqs queue, which is quite a complex example, but you can see it's code, right? and then we run cdk deploy, it deploys, and hopefully we'll have that resource. so again, we're just waiting for cloud9... there we go, cloud9 is more or less ready; the terminal seems like it's still thinking... and we have a javascript one, which i do not care about... there we go, there's our environment. we're going to make sure we have npm, so type in npm — great, it says version 8.1.0 — and this is asking for 10... i don't know if this machine has nvm installed... it does. so what we can do is nvm list — that stands for node version manager; ruby has one as well — and it tells us what version we're on. i want to update... looks like we have a pretty new version, but what i want is the latest version of — oh, but that's the node version; that's not necessarily npm. so we'll check the node version — oh, 17, okay, we're well within range of the new stuff. so what i'm going to do is scroll on down, grab this code here, hit enter, and that's going to install the aws cdk. it says file already exists — oh, so maybe it's already installed on the machine... let's type in cdk, because of course aws wants to make it very easy for us... this software has not been tested with — what was that warning — node 17, you may encounter runtime issues. great, aws, you're the one that installed this stuff here. so we get a bunch of the commands, which is great, and what we'll do is follow their simple instructions: we'll say hello-cdk, we cd into it, and now we can run cdk init with this language here. that's going to do a bunch of stuff — it creates tons of files.
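the sample stack that `cdk init` scaffolds here — an sqs queue with a visibility timeout, plus an sns topic — looks roughly like this in modern cdk v2 typescript; this is a sketch of the standard sample-app output, not copied from the video, so treat the class and construct names as illustrative:

```typescript
import { Duration, Stack, StackProps } from 'aws-cdk-lib';
import * as sns from 'aws-cdk-lib/aws-sns';
import * as subs from 'aws-cdk-lib/aws-sns-subscriptions';
import * as sqs from 'aws-cdk-lib/aws-sqs';
import { Construct } from 'constructs';

export class HelloCdkStack extends Stack {
  constructor(scope: Construct, id: string, props?: StackProps) {
    super(scope, id, props);

    // an sqs queue with a visibility timeout, as in the generated lib/ file
    const queue = new sqs.Queue(this, 'HelloCdkQueue', {
      visibilityTimeout: Duration.seconds(300),
    });

    // an sns topic whose messages fan out into the queue
    const topic = new sns.Topic(this, 'HelloCdkTopic');
    topic.addSubscription(new subs.SqsSubscription(queue));
  }
}
```

running `cdk deploy` against a stack like this is what produces the cloudformation change set seen in the console.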
it's going to vary based on which language you're using, because cdk comes in a variety of languages. if you type in aws cdk documentation, notice up here: python, java, .net — i think it has more than just those three, but i wish it supported more... yeah, i see c# and java here, but i really wish there was ruby. so we'll give this a moment to get installed, and i will see you back here when it is done. okay — it turns out i only had to wait about a second. it says there's a newer version of the cdk and you probably should install it, but i just want to get going; as long as i don't run into any issues, i do not care. anyway, looking at this — and i rarely ever look at this, but i'm a developer, so it's not too hard for me to figure out — under lib, this is the stack we're creating. here it is loading in sqs, loading in sns, and then the core library; it's creating an sqs queue and setting the visibility timeout on it, and it's also creating an sns topic. so those are the two resources we expect to be created. if we scroll on down to the getting started, it just says cdk deploy, so we'll go ahead and hit enter and let it do whatever it wants to do. it is thinking... there we go. so here we have iam statement changes: it's saying this deployment will potentially make security-sensitive changes according to your current security approval options, and there may be security-related changes not in this list — do you want to deploy? sure, we'll hit y. deploying, creating cloudformation change set — so cdk is using cloudformation underneath; it's not complicated. as that is going, we'll make our way over to the aws console, and if we go over to cloudformation, we'll see if anything shows up yet. so it's creating a stack; we can click into it; we can go over to our events and see that things are being created. this is always confusing, so i always go to
resources to see what is individually being created. And they're all done, so we go over here and they exist. Here it says that we have a queue called this, right? Sometimes they have links you can click through — notice here I can click on the topic and get to that resource in SNS, which is nice. For SQS, I'm just going to type in sqs, hit enter, and there it is, okay. So we don't really need those anymore. We could delete the stack this way, but there's probably a CDK way to delete the stack — cdk destroy, I assume that's what it is. So we'll type in cdk destroy, give it a moment, and say yes. Okay, the delete is in progress; we can even go back here and double-check. Still thinking. Again, if we deleted these by hand it would take like a second, but sometimes they're just slow, and sometimes it's because a resource gets hung, but I don't think anything is a problem here. Here we can see what the holdup is — not necessarily a problem, it's just that the SQS queue is taking a longer time to delete, whereas the SNS subscription is a lot faster. So I'll just see you back here in a moment. Okay, so after a short little wait it finally finished — I just kept hitting refresh until I saw it deleted — and it's out of there. So we'll get rid of our Cloud9 environment since we are done with it: type in cloud9 up at the top, and we'll go ahead and delete this here, thank you. And we'll go back to the AWS console, just so we can get our bearings straight, and there we go [Music] All right, let's take a look at the AWS Toolkit for VS Code. The AWS Toolkit is an open-source plugin for VS Code to create, debug, and deploy AWS resources. Since VS Code is such a popular editor these days (I use Vim, but it's very popular), I figured I should make sure you're aware of this plugin. It can do four things. You get the AWS Explorer: this allows you to explore a wide range of AWS resources
linked to your AWS account — sometimes you can view them, sometimes you can delete them; it's going to vary per service and what's available there. Then you have the AWS CDK Explorer: this allows you to explore your stacks defined by CDK. Then you have Amazon Elastic Container Service (ECS): this provides IntelliSense for ECS task definition files. IntelliSense means that when you type, you'll get autocompletion, but you'll also get a description of what it is that you're typing out. Then there are serverless applications, and this is pretty much the main reason to have the AWS Toolkit: it allows you to create, debug, and deploy serverless applications via SAM and CFN. And there you can see the command palette, and you can access stuff there, okay [Music] Let's take a look at access keys. An access key is a key and secret required to have programmatic access to AWS resources when interacting with the AWS API outside of the AWS Management Console. An access key is commonly referred to as "AWS credentials" — so if someone says AWS credentials, they're generally talking about the access key, not necessarily your username and password to log in. A user must be granted access to use access keys: when you're creating a user, you can just check the access key box. You can always do this after the fact, but it's good to do it as you're creating the user, and then you can generate an access key and secret. You should never share your access keys with anyone — they are yours; if you give them to someone else, it's like giving them the keys to your house. It's dangerous. Never commit access keys to a codebase, because that is a good place for them to get leaked at some point. You can have two active keys at any given time; you can deactivate access keys, and obviously delete them as well. Access keys have whatever access a user has to AWS resources — so whatever you can do in the AWS Management Console, so can the key. Access keys are to be stored in the ~/.aws
credentials file. If you're not familiar with Linux, this tilde represents your home folder — whether you're on Windows or Linux, that's going to be your home folder — and the period in .aws means it's a hidden folder, but you can obviously access it. It's an INI-style file (it looks a lot like TOML). What you'll have here is your default profile, and this is what any of your tools — the CLI or anything else — would automatically use if you did not specify a profile. You can of course store multiple access keys and give each one a profile name. If you are doing this for the first time, you might just want to type in aws configure; it'll prompt you and you'll just enter them in there as well — I think that sets the default one. When you're using the SDK, you would probably rather use environment variables, because this is the safest way to access them when you are writing code. All right, there you go [Music] All right, let's talk about access keys. Access keys are very important to your account, so what we'll do is go to IAM. If you are the root user, you can go in and generate access keys for people, but generally you're doing it for your own account. So I go to Users, I'm going to click into mine here, and we'll go over to Security credentials, and here you're going to notice access keys. One thing that is interesting is that you can only ever have two access keys at a time. Hit create — I'm just going to close that — and notice that the button is grayed out. I can deactivate keys if I feel that I haven't used them in a while, and I can make them active again to bring them back into use, or I can make them inactive and then delete them. And so here's what I recommend, even if you do not want to programmatically be using your account
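A sketch of what that credentials file can look like — the key values below are placeholders, and "work" is a made-up profile name for illustration:

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAEXAMPLEKEYID
aws_secret_access_key = wJalrEXAMPLESECRETKEY

# a named profile, selected with e.g. `aws s3 ls --profile work`
[work]
aws_access_key_id = AKIAEXAMPLEKEYID2
aws_secret_access_key = wJalrEXAMPLESECRETKEY2
```

The CLI and SDKs read the [default] section unless you pass a profile, and the AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY environment variables take precedence over the file when they are set.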
for anything: you always want to fill up both of these slots. The reason why — and this is for security reasons — is that if somebody wanted to get into your account, what they would do is try to find a user they have access to and then try to generate a key. So if both of these key slots are taken up — you generate both keys, keep the one you want to use, and deactivate the other one — then there's no way for them to fill up that other slot. That is my strong recommendation to you, but again, there are only ever two. Here I'm just going to delete both of these, so that when we want to do whatever's next in a tutorial, we'll generate them then. So go ahead and clear that out. Hopefully that is enough for you to understand what to do with these access keys. So I'm going to go back here, there you go [Music] Let's take a look at the AWS documentation, which is a large collection of technical documentation on how to use AWS services, and which we can find at docs.aws.amazon.com. This is kind of like the landing page, where you can see all the guides and API references. If you expand into EC2 and click on the user guide, you can see HTML, PDF, and Kindle formats, and you'll notice there's a link to GitHub — that's because all of these docs are open source, and you can contribute to them if you choose to do so. I've done so multiple times in the past; it's quite fun. AWS is very good about providing detailed information about every individual service, and the basis of this course — and of any AWS certification — derives mostly from the AWS documentation. So I like to say that I'm not really coming up with new information; I'm just taking what's in the docs and trying to make it more digestible. And I think that's the thing: the docs are really good, and you can read them end to end, but they are very dense, so it can be a bit hard to figure out what
you should read and what you should not. But they are a really great resource, and you should spend some time in there, okay [Music] So I just want to quickly show you the AWS documentation and give you a bit of a tour of it. If we go to docs.aws.amazon.com — I'm sure you might have seen this through other tutorials — the idea is that you have documentation for basically any possible service that you want. A lot of times you'll click into a service and what you'll get are these little boxes showing you different guides. It's going to vary based on the service, but a lot of times there's a user guide and an API reference — those are the two that you'll see. Maybe go to something simpler like S3; that might be a simple example: yeah, user guide, API reference. And all of these are on GitHub, right? If you open these up, the documentation is here; if you find something you don't like, you can submit issues and correct things, and you can even submit your own examples. I have committed example code to the docs, specifically for AI services, so you might be looking at examples that I implemented, or even Ruby examples, since I really like to promote Ruby on AWS. You can download it as a PDF, or you can take it as HTML. A lot of times you're going to the user guide, and the way I build the courses here is that I actually go through and read these end to end. So if you wanted to do that — if you want to be like me — you can do that, or you can just watch my courses and save yourself the trouble of worrying about everything that is in here. But generally the documentation is extremely, extremely good. There are some exceptions, like Amazon Cognito, where the content is good but it's just not well organized, but I would say AWS, out of every other provider, has the most complete documentation. They generally don't keep their examples or tutorials in here — it's usually pretty light; they'll have some examples — but they like to keep
the labs separate. So if you search for "awslabs github", you go here, and a lot of stuff is in here instead — you have a lot of great tutorials and examples over there, okay. But yeah, pretty much that's all there is to it. Is there consistency between the different services' documentation? No, they kind of vary, but it's all there, is my point, and they're always keeping it up to date. So that's all you need to know about the AWS documentation [Music] Hey, this is Andrew Brown from ExamPro, and we are taking a look at the shared responsibility model, which is a cloud security framework that defines the security obligations of the customer versus the cloud service provider — in this case we're talking about AWS, and they have their own shared responsibility model; it's this big ugly blob here. The thing is, every single CSP has their own variant on the model. They're generally all the same, but some visualizations make it a little bit easier to understand, or they include a little bit more information at different parts. And so, just to make sure that you have well-rounded knowledge, I'm going to go beyond AWS's shared responsibility model and show you some variants. There are also variants not just per CSP but also per type of cloud deployment model, and sometimes these are also scoped to a cloud service category, like compute or machine learning, and these can result in specialized shared responsibility models. So that's what we'll look at in this section, okay [Music] All right, so let's take a look at the AWS shared responsibility model. I've reworked the graphic because it is a bit hard to digest, and I'm hoping this way will be a little bit easier for you. I could not include the "in" and "of" here just because we're limited for space, but don't worry, we'll follow that up on the next slide. So there are two parties that are responsible — two organizations: the customer and AWS. On AWS's side, they're
going to be responsible for anything that is physical. So we're talking about hardware, global infrastructure — the regions, the availability zones, the edge locations — and physical security; think of all that hardware that's there, those data centers, everything like that. Then there's also software: the services that they're offering. This extends to all their services, but generally it breaks down to the four core categories: compute, storage, database, and networking, okay. And when we say networking, we're talking about physically setting up the wires, and also the software to set up the routing and all that kind of stuff. Now, looking at the customer side, they're responsible for the configuration of managed services or third-party software: the platforms they use, such as whether they choose a particular type of OS; the applications, so if they want to use something like Ruby on Rails; and IAM, identity and access management — if you create a user and grant them permissions, and you give them things they're not supposed to have access to, that's on you, right? Then there's configuration of virtual infrastructure and systems: that would be choosing your OS, and the networking — there could be networking on the virtual machines themselves, or we could be talking about cloud networking. Then there are firewalls — we're talking about virtual firewalls; again, they could be on the virtual machine, or it could be configuring things like NACLs or security groups on AWS. Then there's the security configuration of data: there is client-side data encryption, so if you're moving something from your local machine to S3, you might need to encrypt it first before you send it over; then there's server-side encryption, which might be turning on server-side encryption within S3, or turning on encryption on your EBS volume; then there's network traffic protection — so that's turning on VPC Flow Logs
so you can monitor them, or turning on Amazon GuardDuty so it can detect anomalies in your traffic or activities within your AWS account. And then there's customer data: that's the data that you upload on behalf of your customers or yourself, and what you decide about the levels of sensitivity you want to lock it down to — do you want to use Amazon Macie to see if there's any public-facing personally identifiable information? That's up to you. So there's a lot here, but honestly, it's a lot easier than you think. Instead of thinking about this big diagram, what I do is break it down into this, and so we have the "in" and the "of" — that's what I said I could not fit on the previous slide. The idea is: customers are responsible for security IN the cloud — that's your data and configuration; if it's data residing on there, or something you can configure, you are responsible for it. On the AWS side, they are responsible for security OF the cloud: if it's anything physical or hardware, the operation of managed services, or global infrastructure, that's going to be on them. And this "in" and "of" distinction is very important for the exam, so you should absolutely know the difference between the two. This is kind of an AWS concept — I don't see any other cloud service provider talking about "in" and "of" — so you definitely need to know it, okay [Music] So one variant we might see of the shared responsibility model would be based on the types of cloud computing — this could also be applicable to the types of deployment models, but we're doing types of cloud computing here. We have the customer's responsibility and then the cloud service provider's responsibility, and we're seeing on-premise, infrastructure as a service, platform as a service, and software as a service. So when you are on-prem, you're basically responsible for everything: apps, data, runtime, middleware, OS, virtualization, servers, storage, networking — basically everything. And just
by adopting the cloud, you're almost cutting your responsibilities in half. Now the cloud service provider is going to be responsible for the physical networking, the physical storage, and those physical servers, and because they're offering virtual machines to you, they're setting up a hypervisor on your behalf, so virtualization is taken care of for you. And so if you launch an EC2 instance, you're going to have to choose the OS — that's why you're responsible for it — along with whatever middleware is there, the runtime, whatever kind of programs you install on it, the data that resides on it, and any kind of major applications, okay. Then we have platform as a service, where the cloud service provider is going to take on even more responsibility. When we're talking about this, think of something like AWS Elastic Beanstalk, right? You just choose what you want and it's all managed — so you might say "I want a Ruby on Rails server", but you're not saying what OS you need. You might say what version of Ruby you want, but you don't have to manage it if it breaks, and there might be managed updates and things like that. The last one here is software as a service, and this is where the CSP is responsible for everything. If you're thinking of software as a service, think of something like Microsoft Word in the browser, where you're just writing stuff in there. You are responsible for where you might choose to store your data, but the data is still handled by the cloud service provider, because it's on the cloud — on their servers, right? So yeah, hopefully that gives you an idea of responsibilities across the types of cloud computing [Music] All right, so what I want to do here is just shift the lens a bit and look at the shared responsibility model if we were just observing a subset of cloud services, such as compute. So we're going to see infrastructure as a service, platform as a service,
software as a service, and now we have function as a service. That's what I mean when we shift the lens: we get new information, and you can see that you really don't want to look at this from only one perspective, okay. Starting at the top, we have bare metal. AWS's offering is called the EC2 bare metal instance, and this is where you basically get the whole machine — you can configure the entire machine, with the exception of the physical machine itself. So as the customer, you can install the host OS (the operating system that runs on the physical machine), and then you can install your own hypervisor, and AWS is going to be responsible for the rest: the physical machine. Now, normally the next step up would be dedicated, but dedicated doesn't exactly give you more responsibility — it gives you more assurance, because it's a single-tenant virtual machine — and that's why I kind of left it out here, but we'll see in the next slide that it is on the model and shares the same spot as EC2. EC2 is a virtual machine, and here the customer is responsible for the guest OS: that means you can choose what OS you want, whether it's Ubuntu or Debian or Windows, but that's not the actual OS running on the physical machine, and you're not going to have control of that — AWS is going to take care of it. Then there's the container runtime: you can install Docker on this, or any kind of container layer that you want, so that's another thing you can do. AWS is going to be responsible for the hypervisor, the physical machine, and the host OS. All right, then looking at containers: AWS has more than one offering for containers, but we'll just look at ECS here. This is where you don't install the guest OS — the guest OS is already there for you. What you are going to do is choose your configuration of containers, deploy your
containers, and determine where you need to access storage for your containers, or attach storage to them. AWS is going to be responsible for the host OS, the guest OS (there might not even be a guest OS), the hypervisor, and the container runtime, and you're just responsible for your containers, okay. Then, going to the next level, we have platform as a service, and this one is also a little bit odd in terms of where it fits, because it could be using anything underneath — it could be using containers, it could be using virtual machines — and that's why it doesn't exactly fit well on a linear graph. But let's just take a look at some things here. This is where you're just uploading your code; you have some configuration of the environment, options for deployment strategies, and configuration of the associated services, and AWS is going to be responsible for the servers, the OS, the networking, the storage, and the security — so it is taking on more responsibility than with infrastructure as a service. If it's a virtual machine being used underneath, AWS is going to be responsible for that lower-level stuff; if it's containers, AWS is going to be responsible for those — it just depends on how that platform as a service is set up. Actually, the way Elastic Beanstalk is set up, you have access to all that infrastructure and you can fiddle with it, and so in that case — whereas if you were to use Heroku, a third-party provider, they would take care of all this stuff up here and you would not have to worry about it — on AWS you actually are responsible for the underlying infrastructure, because you can configure it, you can touch it. So that's where, again, these do not fit perfectly, and you can't look at platform as a service as meaning that
you're not responsible for certain things — it really comes down to the service offering, okay. Then we're taking a look at software as a service. On AWS this is going to be something like Amazon WorkDocs, which is, I believe, a competitor — not a very popular competitor, but a competitor — to Microsoft SharePoint, and it's for content collaboration. As the customer, you're responsible for the contents of the documents, management of the files, and configuration of sharing and access controls, and AWS is responsible for the servers, the OS, the networking, the storage, the security, and everything else. So if you use a word doc and you type stuff in it and say where to save it, that's what you're responsible for, okay. The last one on the list is functions. AWS's offering is AWS Lambda, and as the customer, all you're doing is uploading your code — AWS is going to take care of the rest: deployment, container runtime, networking, storage, security, the physical machine, basically everything — and so you're really just left to develop, okay. So hopefully that gives you an idea, and again, we could have thrown in a few other services: what we could not fit on this slide was AWS Fargate, which is serverless containers as a service, so that has its own unique properties in the model as well, okay. So let's just have a visualization on a linear graph here: we have the customer's responsibility on the left-hand side and AWS's responsibility on the right, and we'll look at our broad categories — bare metal, dedicated, virtual machines, containers, and functions. No matter which type of compute you're using, you're always responsible for your code. For containers — well, when you're using functions, there are pre-built containers, so you'd say "I want to use Ruby"
and there's a Ruby container, and you don't have to configure it; but obviously, when you're using a container service, you are configuring that container, so you are responsible for it. For virtual machines, you're responsible for the runtime — you can install a container runtime on there, or install a bunch of different packages like Ruby and stuff like that — and you have control over the operating system in virtual machines and dedicated. And we saw that with bare metal you have control of both the host OS and the guest OS, and only bare metal allows you to have control of the virtualization, where you can install that hypervisor. So hopefully that gives you an idea of compute and AWS's offerings there, and also of how there are a lot of little caveats when we're looking at the shared responsibility model, okay [Music] All right, so I have one more variant of the shared responsibility model, and this one is actually what is used by Google. We're going to apply it to AWS and see how it works. So let's just redefine the shared responsibility model in a slightly different way so we fully understand it: the shared responsibility model is a simple visualization that helps determine what the customer is responsible for and what the CSP is responsible for, related to AWS. Across the top we have infrastructure as a service, platform as a service, and software as a service — but remember there are other ones out there, like function as a service; it's just not going to fit on here, okay. Then along the side here we have: content; access policies; usage; deployment; web application security; identity; operations; access and authentication; network security (remember, that's cloud networking security); the guest OS, data, and content; audit logging; and now the actual traditional or physical networking; storage and encryption (here we're probably talking about the physical storage); hardened kernel and IPC; the boot; the hardware. And then here we have
our bars: the CSP's responsibility and the customer's responsibility. When we're looking at SaaS, software as a service, the customer is going to be responsible for the content — again, think of a word processor, where you're writing the content — the access policies, like saying "I want to share this document with someone", and the usage: how you utilize it, whether you can upgrade your plan, things like that. Next on our list is platform as a service. Generally, platform as a service is for developers to develop and deploy applications, and they will generally have more than one deploy strategy, and there might be some cost-saving measures to choose — you might have to pay additional for security, or it's up to you to configure it in a particular way, or you might have to integrate it with other services. And we saw that PaaS is not a perfect definition or fit, because when we look at Elastic Beanstalk, if you have access to those resources and you can change them underneath, then you might have more responsibility there than you think you would, okay. The next one is infrastructure as a service, and this extends to identity — who's allowed to log into your AWS account; operations — the things they're allowed to do in the account; access and authentication — do they have to use MFA, things like that; and network security — obviously you can configure the security of your cloud infrastructure or cloud network, so do you isolate everything in a single VPC, how do you set up your security groups, things like that. We know that with virtual machines you can set up the guest OS, and there's data and content; but remember that bare metal is part of the infrastructure as a service offering, and so that's where you'd have the host OS or the virtualization. So this, again, is not a perfect representation, but it generally works, okay. And then last on the
list, looking at what AWS is responsible for: audit logging — of course AWS has CloudTrail, which is for logging API events, but audit logging could also be things that are internally happening with those physical servers; then the networking; the physical storage; hardening the kernel — AWS has what's called the Nitro System, where they have a security chip installed on their servers; then the boot; and then the hardware itself, okay. So just remember: the customer is responsible for the data, and the configuration of access controls, that reside in AWS — if you can configure it, or you can put data on it, you're responsible for it, okay? The customer is responsible for the configuration of cloud services and for granting access to users via permissions, right? So if you give one of your employees access to do something, even if it's their fault, it's your fault — so remember that. Again, the CSP is generally responsible for the underlying infrastructure; we say "generally" because there are edge cases like bare metal. And coming back to AWS's "in the cloud" and "of the cloud": in the cloud — if you configure it or store it, then you, the customer, are responsible for it; of the cloud — if you cannot configure it, then the CSP is probably responsible for it, okay [Music] Hey, this is Andrew Brown from ExamPro, and we are looking at the shared responsibility model from the perspective of architecture — and if you're getting sick of the shared responsibility model, don't worry, I think this will be the last slide in this section. But let's take a look here: we have less responsibility at the top and more responsibility at the bottom. What we have down here is traditional or virtual machine architecture — the global workforce is most familiar with this kind of architecture, and there's lots of documentation, frameworks, and support — so maybe this would be using Elastic Beanstalk as platform as a service, or using EC2 instances alongside auto scaling
groups, CodeDeploy, load balancers, things like that. The next level is microservices, or containers — this is where you can mix and match languages and get better utilization of resources — so maybe you're using Fargate, which is serverless containers, or Elastic Container Service, or Elastic Kubernetes Service for containers. At the top here we have serverless, commonly associated with functions as a service: there are no more servers, you just worry about the data and the code, right — literally just functions of code — so you could be using Amplify, the Serverless Framework, or maybe AWS Lambda for creating serverless architecture. So there you go [Music] Hey, this is Andrew Brown from ExamPro, and we're looking at computing services. Before we jump into the entire suite of computing services AWS has, let's just talk about EC2 for a moment, which allows you to launch virtual machines. So what is a virtual machine? Well, a virtual machine, or VM, is an emulation of a physical computer using software. Server virtualization allows you to easily create, copy, resize, or migrate your server, and multiple VMs can run on the same physical server, so you can share the cost with other customers. Imagine if your server or computer were an executable file on your computer — that's the way you want to think about it. When we launch a VM, we call it an instance, and EC2 is a highly configurable server where you can choose the AMI — the Amazon Machine Image — which affects options such as the number of vCPUs (virtual CPUs), the amount of memory (RAM), the amount of network bandwidth, the operating system (whether it's Windows, Ubuntu, or Amazon Linux 2), and the ability to attach multiple virtual hard drives for storage (Elastic Block Store). The Amazon Machine Image is a predefined configuration for a VM, so just remember that. EC2 is also considered the backbone of AWS, because the majority of services use EC2 as the underlying servers — whether it's S3, RDS, DynamoDB, or Lambda, that is what they're
using. What I'll also say is that when we talk about the AWS network, that is the backbone for the global infrastructure and the networking at large, and EC2 is the backbone for the services, okay [Music] Hey, this is Andrew Brown from ExamPro. We just looked at what EC2 is, but let's look at the broader set of services for computing — these are the more common ones that you'll come across; there are definitely more than what we're going to see on this single slide. We'll break this down into virtual machines, containers, and then serverless. For virtual machines — remember, that's an emulation of a physical computer using software — EC2 is the main one, but in our VM category we also have Amazon Lightsail. This is a managed virtual server service; it is the friendly version of EC2 virtual machines, for when you need to launch a Linux or Windows server but you don't have much AWS knowledge. You could launch WordPress here and hook up your domain and things like that, so this is a very good option for beginners. Then we have containers: virtualizing an operating system (OS) to run multiple workloads on a single OS instance. Containers are generally used in microservice architectures, where you divide your application into smaller applications that talk to each other. Here we have ECS, Elastic Container Service: this is a container orchestration service that supports Docker containers; it launches a cluster of servers on EC2 instances with Docker installed, for when you need Docker as a service or you need to run containers. We have Elastic Container Registry (ECR): this is a repository of container images — in order to launch a container you need an image; an image just means a saved copy, and a repository just means storage that has version control. We have ECS Fargate, or just Fargate — people are kind of forgetting that it runs on ECS these days, which is why I have it in there. It is a serverless orchestration container service, the same as ECS, except you pay on
demand per running container so with ecs you have to keep a ec2 server running even if you have no containers running so it is manages the underlying server so you don't have to scale or upgrade the ec2 server so there's the advantage over ecs okay then we have elastic kubernetes service eks this is a fully managed community service criminal or so kubernetes commonly abbreviated to k8 is an open source orchestration software that was created by google as generally the standard for managing microservices so when you need to run kubernetes as a service then we have serverless categories so when the underlying servers are managed by device you don't worry or configure servers soybes lambda is a servless function service you can run code without provisioning or managing servers you upload small pieces of code choose much uh how much memory how long you want the function to run is allowed to run before timing out and you are charged based on the runtime of the service function rounded to the nearest 100 milliseconds so there you go [Music] hey this is andrew brown from exam pro and what i want to do is just show you a variety of different computing services on aws so i'm going to try to launch them and we're not going to do anything with them i'm just going to simply launch them okay so the first i want to show you is ec2 and by the way we will go more in depth and ec2 later on in this course here but what i'm going to do is go ahead and launch the instance don't worry about all this stuff but just choose the amazon linux 2 so it's in the free tier all right we're going to choose an instance type of a t2 micro so that's part of the free tier it's going to be set as one all these options are fine i want you to go ahead and review and launch we're going to launch and i don't want to generate any key pair i'm going to proceed without a key pair i'm going to acknowledge that because i don't want it and that's all there is to launching an ec2 instance and so i can go here 
and view my instances and what you'll see is it's pending okay and usually it has a little spinning icon maybe they've updated it since then so i go here it's hard to see because there's all these terminated ones but i don't need to do anything with it i just wanted to show you the actions that you'd have to do to launch it actually we'll leave it alone maybe we'll see it when it's launched the next one i want to show you is elastic container service and wow this console is old let's get the new experience please okay check the box for that and we'll hit get started and we'll say create a cluster and we have some options here networking only or ec2 linux plus networking the networking only option is for use with aws fargate or external windows so this is if you're doing fargate which we're not doing right now fargate is part of elastic container service well it used to be called ecs fargate but aws markets it as a separate service we'll go to next we'll say my ecs cluster we could create an empty cluster but that would make it a fargate cluster which we don't want there's an on-demand server here look it's an m6i large if you're very afraid of a lot of spend here you don't have to do this you can just watch me do it and just learn but what i'm going to do is try to find something super cheap so i want a t2 micro or a t3 micro t2 micro is part of the free tier i don't know if we get to choose t2 anymore in here they might not let you there it is but you know t3 micro is great too whatever says it's free that's what i'm going to go for number of instances one the amazon linux version is fine i don't care about a key pair use an existing vpc i don't want to have to make a new one select the existing ones okay let it create a new security group that's totally fine allow those subnets that's fine create a new role that's fine create okay and so that's going to create ourselves a cluster i'm going to just make a new tab here let's just check on our ec2
instance and so if we look at our ec2 instance it is running okay great so it has a private ip address it has a public ip address all right there's not much we can do with it i can't even log into it because we didn't generate a key pair a lot of times you want to name these things so let's go here and name it my server okay go back to our ecs console and the cluster is ready so we'll go here and oh nice we got a new ui and so if we wanted to deploy something as a service or a task we would need to create a template like a task definition file they don't have a new ui for this you're being redirected to the previous version of the console because this isn't available in the new experience yet of course it isn't so we can create a new task definition file that's what's used to run it it's basically like a docker compose file whatever you want we have fargate or ec2 we are doing ecs so we're going to have to do ec2 so we'll say my ecs task def file task role is optional that's an iam role i don't need one network mode i don't care and then the idea here is that because a container only gets to use a particular amount of the machine we don't have to use all of the memory so we should look up what a t2 micro is because i don't even remember what size it is okay t2 micro aws so we go here we look at the instance types and we're gonna flip over to t2 and it says that it's one vcpu and one gigabyte of memory so that's fine and since this field is in megabytes we'll say 500 megabytes and for cpu i'm going to do one here the task cpu must be an integer greater than or equal to 128 okay fine 128.
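by the way all of these console fields just end up in a task definition json document, so here's a rough sketch of what we're assembling; the family and container names mirror this demo, and the cpu/memory values are illustrative, not canonical:

```python
import json

# Minimal sketch of the ECS task definition being built in the console here.
# Values mirror the demo (EC2 launch type, 512 CPU units, 512 MiB of memory,
# the public docker hub hello-world image); treat names as illustrative.
task_definition = {
    "family": "my-ecs-task-def",
    "requiresCompatibilities": ["EC2"],   # EC2 launch type, not Fargate
    "cpu": "512",     # 1024 units = 1 vCPU, so 512 is half the t2.micro's vCPU
    "memory": "512",  # MiB; the t2.micro only has 1 GiB total
    "containerDefinitions": [
        {
            "name": "my-container",
            # registry-host/repository:tag for the official hello-world image
            "image": "docker.io/library/hello-world:latest",
            "memory": 512,     # hard memory limit for this container, in MiB
            "essential": True,
        }
    ],
}

# With the AWS CLI you could register this with:
#   aws ecs register-task-definition --cli-input-json file://task-def.json
print(json.dumps(task_definition, indent=2))
```

writing the json by hand and registering it with the cli is often less painful than clicking through the console form like we're doing here.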
oh i guess 1024 would utilize the whole vcpu so i could say 512 okay and this is where we would add our container so i don't do this every day so i don't remember how to do this we'll say my container and i need a repository here so i need the docker hub hello world okay i don't care what it is i just need an image that's simple and i'm looking for the address here i'm hoping it's just this docker hub url so it'd be something like docker io probably docker io and the docker image let me search docker hub url in ecs okay it goes to show how often i'm launching these things so repository url docker image so i think that what we're going to do here hmm i would really just like the url please reviews tags where is it it's somewhere here right well let's just try it we'll go and we'll type in where it says image and tag so docker dot io hello world i really need an image url hello world docker hub they're not making my life easy here today i just want to see a single example docker io url examples ecs this is what it's like you know this is what you're going to be doing if you are a cloud engineer you're going to be googling a lot and just trying to find examples so here it says docker io is the hostname okay so we'll just try it okay so i think that the name here is underscore and then it's hello world and that's what's throwing me off here right docker io just hold on here repository url and then there's the tag i don't know if the tag is gonna be like latest view available tags latest okay so that's what i'll do here and that's the thing you have to have a lot of confidence too so hard limit soft limit do i have to set any of these things can i just go to the bottom and hit add looks like i can okay so we'll scroll on down create we create our task definition file which is fine we're going to go back to our cluster it's going to bring us back to the new experience we're
going to click into this cluster holy smokes uh we're going to hit deploy and we're going to choose service that means it's going to continuously run a task means that when it's done running it ends we're going to choose our family or version that's the task definition file there it's not compatible with the selected compute strategy my task file what if i just choose task take that okay so maybe for a service you have to code it so that it continuously runs i don't care we don't need to run a service here the selected task definition is not compatible with the selected compute strategy okay let's see why can you double check if you're using the fargate strategy instead of one designed for the ec2 strategy so probably what it's suggesting is that the task definition file i made is not for the right launch type here task definitions go back over here well what's wrong with it task role none my container so what i'm going to do because i don't trust this is go ahead and delete it can i delete this how do i delete this oh boy actions deregister deregister we'll create a new one and so aws has tools like the copilot cli to make this a lot easier because you can see this is very frustrating but i chose this so my task def requires compatibility of ec2 default 512 512 add container we're going to say was it docker dot io underscore hello world i will just say hello world here and we'll just say 512 which is fine i don't care about any port mappings i'm just reading it carefully here to see what it wants we'll say 512 maybe because i didn't specify them it's complaining this looks fine we'll hit add okay constraints type this all looks fine so we'll try this again and so we now have our file let's see we can just run this task from here ec2 this is just another way to do it so we just choose the cluster this is actually a lot easier to do it this way this is the old old console eh it's ugly and so now it launches so you know if you have trouble one way then just do it
another way and sometimes it'll work so i don't expect this task to really work in any particular way if it's pending that's fine if it fails it's fine if it's successful that's fine i don't care i just want to go through the motions so it was successful it ran and then it stopped i don't know if we can see the output anywhere probably what it would do is log out something somewhere and so i don't know if there's logging turned on for this if i go over to cloudwatch logs maybe i could see something a lot of these services will automatically create cloudwatch logs so sometimes you can just go look at them there so we'll drop down we'll go to log groups here there is some stuff here there's a couple that i created from before i'll just go ahead and delete those and so what i'm looking for is something like ecs so no there's no logging happening here which is totally fine so that is ecs for fargate it's pretty much the same the difference is that fargate has to start up and run so it's a lot slower to watch okay and now let's go take a look at lambda okay so this is our serverless compute so go ahead and create ourselves a function we can start from a blueprint that doesn't sound too bad and i personally like ruby but no i'm not getting much here so what we can do is look for something like hello do we have a hello world there we go hello world and we'll click that we'll say my hello world it's going to create those permissions that's fine it's showing us the code it's very simple okay it's going to console log out these values not a very good hello world function it doesn't even say hello world how can you call it a hello world function if it doesn't say hello world i don't understand so i'm going to go ahead and create this function usually it doesn't take this long okay so here is our function here is our code notice that this is cloud9 okay and you can even move that over to cloud9 they didn't have this button here before
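for reference the blueprint's code is roughly like this (a sketch of the hello-world blueprint, not a verbatim copy; the key1/key2/key3 field names come from the default test event lambda generates for you):

```python
# Sketch of the "hello-world" Lambda blueprint shown in the console: it just
# logs a few keys from the incoming test event and returns one of them,
# which is why it never actually says "hello world".
def lambda_handler(event, context):
    print("value1 =", event.get("key1"))
    print("value2 =", event.get("key2"))
    print("value3 =", event.get("key3"))
    return event.get("key1")  # echo back the first value

# Local usage example: invoke with a payload shaped like the console's
# default test event (context is unused here, so None is fine locally).
result = lambda_handler(
    {"key1": "value1", "key2": "value2", "key3": "value3"}, None
)
print(result)
```

so that's all a lambda function is: a single handler that receives an event payload and returns a value, with no server for you to manage.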
that's kind of cool they used to have the test button up here but i guess they wanted to make it more obvious so they moved it down here which is nice so what i can do is hit this oops my test it's going to send a payload to the actual function and it's going to tell us if it worked okay so can i run my test go over here to test it's changed a bit but there it succeeded so i have my logs okay so it's going to output those values there so there are the three values which basically is nothing maybe you were supposed to set those as environment variables but you can see you're just uploading some code right it's just a bit of code it's not like a full app or anything so we launched an ec2 instance a container and a serverless function there's other things like eks but that is really really hard to set up okay because you'd have to use kubernetes commands and stuff like that and my kubernetes knowledge is always very poor i'm just taking a peek here to see if they've updated it so yeah you create the cluster but deploying to it forget it i'm just trying to think if there's anything else i want to show you no those are the main three i would say so i'm pretty happy with that what i'm gonna do is go and kill all these things so we're gonna go over to lambda okay and i'm going to go ahead and delete this as you saw ecs was the hardest and no matter how many times i've built things in ecs and i've deployed full things on ecs i can't remember it i always have so much trouble with task definition files it's unbelievable we'll go over to our cluster here the ecs cluster up here make sure you're not in the fargate cluster i know i'm clicking really fast but there's just so many things to click and i'm going to click into this cluster we're going to hit edit because this is running an ec2 instance right i need to destroy it it just took me back to the old one here i want to delete no i want
to delete the cluster click back here where do i delete it up here i can't checkbox anything how do i delete this do i have to delete the task first maybe so we'll go here i mean it's already stopped there's nothing to do edit account settings wow this is confusing okay how to delete an ecs cluster you gotta be kidding me i have to actually look this up so open the ecs console in the navigation choose clusters and turn off the new ecs experience and choose the old console the delete cluster workflow is not supported in the new ecs console are you serious then why why even let people use the new experience if it doesn't have all the functionality there oh i was gonna give it feedback but it didn't let me here it says i need to delete an ecs cluster okay so i'm here there's my big ugly cluster delete cluster okay so yeah it's a struggle okay things are always changing on me but you just have to have confidence and if you've done it a few times you know that you can do it right and that's one of the biggest hang-ups with cloud i would say so it's going to take a few minutes apparently to delete the cluster as that is going let's go over to ec2 i didn't close it i kept this tab open and there's our ec2 instance we can go ahead and terminate that instance terminate okay and if this says it's terminating then we're in good shape terminating or shutting down that's fine and notice here that's the ecs instance just make sure you shut down my server and not the ecs instance because that one is going to be stopped by the cluster deletion and so this has already terminated but if we go back here notice that it says that it's not done but clearly it has shut down okay so i'm going to wait here for a bit even though i know it's been deleted maybe it's deleting things like the auto scaling group so we go down below here right so that's probably what it's doing it's probably trying to destroy the auto scaling group but
it doesn't show any here so it must have already destroyed it yeah so tasks services delete so i'll be back here in a bit but i know it's safe it's already deleted but i'll see you back here in a bit okay so i waited literally a second and it's now deleted so we deleted our lambda we deleted our oh did we delete our lambda good question i'm not really worried about the lambda because i guess we did but i'm not really worried about it because you know when it rests at idle it's not costing us anything whereas the ecs cluster and our server are backed by ec2 instances so we do have to shut those down okay and again remember to make sure you're in the correct region sometimes that gets flipped over and then you think those resources are gone but they're actually not they're just running in another region so there you go hey this is andrew brown from exam pro and we're taking a look at high performance computing services on aws so before we do we've got to talk about the nitro system this is a combination of dedicated hardware and a lightweight hypervisor enabling faster innovation and enhanced security all new ec2 instance types use the nitro system and the nitro system is designed by aws okay so this is made up of a few things we have nitro cards these are specialized cards for vpc ebs instance storage and controller cards we have nitro security chips these are integrated into the motherboard and protect hardware resources and we have the nitro hypervisor this is a lightweight hypervisor that handles memory and cpu allocation with bare metal like performance there's also nitro enclaves but that's a bit out of scope here that has to do with ec2 isolation okay then we have bare metal instances so you can launch ec2 instances that have no hypervisor so you can run workloads directly on the hardware for maximum performance and control we have the m5 and the r5 ec2 instances that can run bare metal there's other ones i believe i've seen as well but you know if you are running
bare metal you can just go investigate them at the time okay we have bottlerocket this is a linux based open source operating system that is purpose built by aws for running containers on vms or bare metal hosts then let's just define what hpc is it's a cluster of hundreds or thousands of servers with fast connections between each of them with the purpose of boosting computing capacity so when you need a supercomputer to perform computational problems too large to run on a standard computer or that would take too long this is where hpc comes into play one solution here is aws parallelcluster which is an aws supported open source cluster management tool that makes it easy for you to deploy and manage high performance computing hpc clusters on aws so hopefully that gives you an idea of this stuff okay all right so let's take a look at hpc or high performance computing on aws so hpc is for running large complex simulations and deep learning workloads in the cloud with a complete suite of high performance computing products and services gain insight faster and quickly move from idea to market blah blah blah it's for ml or very complex scientific computing stuff these generally run on c5n instances okay and the way it works is that you use this cli called pcluster which comes from aws parallelcluster and so let's see if we can get this installed very easily so what i'm going to do is see how hard it is to install now i don't recommend you running this because i don't know what it's going to cost me and if i make a misconfiguration i don't want you to have that spend here but i don't think it's that dangerous so i'm going to go back over to us-east-1 here i'm going to open up cloudshell and i'm going to give it a moment to load and so as that is loading let's take a look at how we would go ahead and install this so install aws parallelcluster i think we just copy that line okay and so we have to wait for the
environment to spin up all right so once it has spun up we will install it and then we will jump over to this tutorial here okay so we'll give this a moment and after waiting a little while here it looks like our shell is ready it looks like it's in bash i'm just going to type in aws s3 ls as a sanity check okay and it works that's great so go back over here and i'm going to go back up to install for linux and what i need is that single command where is it so i'm certain that we already have linux and python installed but i just want the command to install it we saw it a moment ago here i'm just going to back out until i can find it one more there it is so it's under oh it's this link here and that's what i talk about with the documentation being tricky sometimes you have to click these headings here to find stuff so this is the first time installing it so we'll grab that usually you're supposed to create a virtual environment with python i don't care this is my cloudshell it doesn't matter to me so we're going to go ahead and download that and hopefully it is fast and it was super fast which was really nice and so what we'll do is go check out the pcluster version okay and that looks fine to me i'm going to go down below here to run our first job i don't think we need to configure the cli because we already have our credentials so what i'm going to do is go ahead and create ourselves a new cluster beginning cluster creation configuration file config not found so i guess we do have to configure this pcluster configure and it's asking what region do we want to be in i have us-east-1 so i would choose it it's all the way down at number 13 that is not a lucky number but i'm going to choose it anyway no key pair found in us-east-1 region please create one of the following so create an ec2 key pair no options found for ec2 key pairs that's fine so what i'll do is go over here and we'll go over to ec2 and we will go over to key pairs
key pairs we'll create ourselves a new one here so say hpc key pair or just my hpc so we know what it is for we have putty or pem we're going to do pem because we're on linux we'll create that and notice that it downloaded the pem down here and we're going to need that for later and so what i'll do is i'll type in pcluster configure here again we'll choose 13 we'll choose number one here allowed values for the scheduler i have no idea what these are let's choose number one allowed values for the operating system amazon linux 2 i know what that is minimum cluster size one maximum cluster size two head node instance type oh t2 micro you can do that yeah let's do it i didn't know we could do that enter compute instance type t2 micro sure so i thought that we'd have to use a c5n but apparently not automate vpc creation yes of course network configuration so allowed values for the network configuration a head node in a public subnet and compute fleet in a private subnet or a head node and compute fleet in the same public subnet we'll do both in the public subnet just to make our lives easier even though the first one sounds more secure of course and so oh it's creating a cloudformation stack wow this is easy i thought this was going to be super painful okay so we'll go over here we'll go take a look at what cloudformation's doing all right now i don't care if we actually run a task on here but it was just interesting to go through the process to see how hard it was and we will go look at what resources are being created so it's creating an internet gateway so it's literally creating an isolated vpc for it which is totally fine i guess it's creating a subnet it's creating a route table refresh here i'm not sure how much it wants to create here it just looks like vpc stuff that's all it's creating i thought maybe the ec2 instances would show up here but maybe it's going to launch those on an as-needed basis okay so that's all created oh now it's doing a vpc gateway i think vpc
gateways cost money let's go take a look here vpc gateway pricing yeah there's a transfer fee so just be careful about that you know again you can just watch along here you don't have to do it default route depends on public so now it's creating an ec2 route i don't know what an aws ec2 route is i've never seen that before sometimes what we can do is go into ec2 and then take a look on the left hand side do you see anything in here when we don't know what a component is we can just type in ec2 route cloudformation sometimes cloudformation is great for figuring out what a component is not all components are represented in the management console so specifies a route in the route table oh it's just a route okay and we'll go back here we'll refresh so that is done is the stack done create complete good we'll go back to our cloudshell it says you can edit your configuration file or simply do etc so now let's see if we can create the cluster i assume this would create ec2 instances so the job scheduler you are using is sge this is deprecated in future versions of parallelcluster well it should have told me okay there is a new version 3.0.1 of parallelcluster available i don't understand because i just installed it right we'll go back to cloudformation it's probably going to create nested stacks which is what i thought it would do nested stacks means that there's one main stack and then there are children stacks so go here see what resources it's creating a whole bunch of stuff wow so many things an sqs queue an sns topic a network interface a dynamodb table yeah you probably don't want to run this you just want to watch me do it and then we go into here it's creating an ec2 volume so that's going to be ebs and then here we have a log group i don't know why they separated those out it seemed a bit unnecessary we are waiting on the elastic ip that always takes forever creating elastic ip root instance profile that is the iam role for it that didn't take too long but usually these take a long time
i never know why creating a role is really easy but when you're attaching an iam policy you're always waiting for those so i'm gonna just stop it here i'll be back in a second because i don't want to make you watch me stare at the screen here okay all right so after a really really long wait and it always takes some time there it finally created i'm not sure what it's made i mean we generally saw it over here in the outputs but usually the cost that i'm worried about is whatever it's launching under ec2 it might not even have launched any servers here we're going to take a look here and see if there's anything so we have a master and a compute and they're t2 micros so it seems pretty safe here this compute is not running yet so i'm assuming that this is the machine that does the computing and maybe if you had multiple machines here that would be the cluster like it would manage multiple computes i'm not particularly sure but let's just keep going through the tutorial and see what we can do the next step is we need to get this pem key into our cloudshell here so i don't know where this downloaded to but what i'm going to do is move it to my desktop i'm doing this off screen by the way so i'm moving it to my desktop and then i'm just going to go and upload the file okay and there it is so we'll say open and we'll say upload and it's going to upload it here onto this machine and i believe this is on like an efs instance if you're wondering where the storage for cloudshell is if we go over here i think it's efs is it uh i don't know where it is okay maybe it's just somewhere else i can't remember where it is but anyway so now it's created the cluster can i hit enter here okay can i create a tab like if i quit is this going to kill it it exited which i think is fine i don't think it stopped running and so now if i do an ls there's my key and so we can go back to our instructions we just have too
many tabs open here drag this all the way to the left here and so we can try to use our key here to log in so what i'm going to do is go here and we'll say my hpc pem and see if that works we'll say yes and permission denied it says your private key is not accessible that's because we have to chmod it i never remember the command anymore because i rarely ssh into machines but if we go to connect and we go to ssh client it will tell us that we need to run chmod 400 okay so that's what we need to do is a chmod 400 i just wanted to grab that command there okay and now if we hit up we should ssh into the machine there we are we are in the instance we'll type exit and so now we want to run our job on this machine and if we go back over to here i guess we can go create our first job so i'm just doing this in vi and i'm gonna paste that in yep and i don't want the first line oh okay that's perfect oh great write quit oh there's no file name hold on here so i need to name this file something so i'm going to say job.sh and we're going to paste that again here we'll say paste and i don't know if that's cut off yeah it is okay great is that one okay i don't trust that the first line is there so what i'm gonna do is go back to our tutorial here it's the shebang line forward slash bin forward slash bash just double check it looks good to me we're going to quit that i'm just going to make sure that it is what we said it is so cat job.sh looks correct to me good and so we'll try to run our job here so i'm going to say qsub job.sh and i guess it really depends on what we decided to use when we set up that thing i can't remember what we chose as our queue we do qstat okay okay so i think the thing is you see how we have sge i think that that's what we use to queue up jobs and so we have to have that installed probably so install and configure sun grid engine sge on linux oh boy that looks like a lot
of work so i don't think we need to do anything further here but as far as i understand the idea is that you're choosing some kind of scheduler to manage these jobs and i'm not sure what qsub is let's just go look at what that is what is qsub oh that is the sun grid engine okay so how do we install that i'm just gonna see if we can install it so i think this is using yum so if i do clear here clear yum install qsub let's see if i can do it sudo yum install qsub no package available amazon linux 2 qsub because that's probably what we're running in cloudshell and it doesn't tell us how to install it that's great so that's probably what it is and so in order to use this we would have to install that sun grid engine and then we would go through we'd do qsub to queue up the job you could do qstat cat the hello output and destroy it that's pretty much all we really need to know to understand this it would have been nice to queue up a job and see it work but you know we're getting into hairy territory here and i think that we fundamentally understand how this does work so what i'm going to do is i'm going to go here i'm going to remove the job.sh and i want to destroy this cluster so i'm going to look at the pcluster commands to figure out what all the commands are and there's probably a delete command so we'll go back up here pcluster where is our cluster so we'll say delete okay and so what that's going to do is just tear down all the stuff now so if we go over to cloudformation okay it looks like it's destroying so yeah i'll see you here back in a bit when it's all destroyed okay all right so after a short little wait there it has destroyed it has been so long that my connection vanished but just make sure if you did follow along for whatever reason you know make sure that the stuff is deleted and it looks like it did not destroy this one so i'm going to go ahead and delete that that's just vpc stuff so i'm not too worried about it i know
that's going to roll back no problem and so i'm going to consider this done so i'm going to make my way back to the management console close this stuff up and we are good to go for our next thing hey this is andrew brown from exam pro and we're taking a look at edge and hybrid computing services so what is edge computing it's when you push your computing workloads outside of your network to run close to the destination location so an example would be pushing computing to run on phones iot devices or external servers not within your cloud network what is hybrid computing it's when you're able to run workloads on both your on-premise data center and the aws vpc okay so we have a few services here starting with aws outposts this is a physical rack of servers that you can put into your data center aws outposts allows you to use aws apis and services such as ec2 right in your data center then we have aws wavelength this allows you to build and launch your applications in a telecom data center by doing this your applications will have ultra low latency since they will be pushed over the 5g network and be as close as possible to the end user so they've partnered with telecoms like verizon and vodafone business and a few others but those are the two noticeable ones okay we have vmware cloud on aws so this allows you to manage on-premise virtual machines using vmware within ec2 instances your data center must be using vmware for virtualization for this to work okay then we have aws local zones which are edge data centers located outside of an aws region so you can use them to get closer to the edge destination when you need faster computing storage and databases in populated areas that are outside of an aws region there's some other edge offerings on aws that aren't listed here like sagemaker has something called sagemaker neo that lets you do edge computing with ml but i mean this is good enough okay all right so i wanted just to show an example of edge computing
because we didn't cover it in our generic compute section, and there's a variety of services that allow you to do edge computing, like Wavelength. I've never actually launched Wavelength before, and I think that you have to request it. So if I go over to Support here... again, I've never done this before, but I'm sure we can figure it out pretty easily. I feel that if we create a case, maybe it's like a service limit... we type in Wavelength here... nope, not there. So how do we get Wavelength? "Wavelength request", that's what I'm looking for here. "How do I use Wavelength AWS"... whoops... and sometimes what I'll do is go to the docs here: "Opt in to Wavelength Zones. Before you specify a Wavelength Zone for a resource or service, you must opt in to it. To opt in, go to the AWS console." Okay, so we'll go to EC2, and then it says to use the region selector in the navigation bar to select a region which supports your Wavelength Zone. I know that there's stuff in us-west because of Los Angeles, not Las Vegas, right? So if we go over here, there's definitely that over there. "On the navigation pane of the EC2 dashboard, under Account attributes, select Zones." Do we see Zones here? Zones... oh, EC2 Dashboard, Zones, let's go check here again. On the navigation pane, choose EC2 Dashboard; we are there, right? And under Account attributes... Settings... Account attributes... oh, over here, okay, it's here: Zones. And so there we have two zones, and we see "switch regions to manage zones in a different region". Okay, so under Zone groups, turn on Wavelength Zone groups... nothing there, so I'm just going to switch over to another region here... maybe Oregon, maybe us-west-2.
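The console opt-in we're hunting for can also be done from the CLI. This is just a sketch; the zone group name `us-west-2-wl1` is an example, and your region's Wavelength Zone group name may differ:

```shell
# List Wavelength Zones, including ones you haven't opted in to yet
aws ec2 describe-availability-zones --all-availability-zones \
  --filters Name=zone-type,Values=wavelength-zone --region us-west-2

# Opt in to a Wavelength Zone group (example group name)
aws ec2 modify-availability-zone-group \
  --group-name us-west-2-wl1 --opt-in-status opted-in --region us-west-2
```

Once opted in, the zone shows up as an option when choosing subnets and launching instances in that region.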
Oh, look at all the stuff we have here; I've never seen these before. Okay, so here is the Wavelength one, that is the Los Angeles one, and we can go ahead and enable this. But before I do, the zone group... I'm not sure what zone groups cost, so "wavelength zone pricing"... again, you might just want to watch me do this, because it might cost money and you might not want to spend for that. Pricing: "provides mobile networks... Wavelength Zones are available across..." whatever; "learn about the data transfers"... something about EC2 instances. Okay, so what's the price? All right, so what I'm going to suggest to you is: don't do this, but I'm going to do it, and we're just going to see what the experience is like. So I'm going to update my zone, so now I have this one, and we'll say enable; I'm going to assume the cost has to do with data transfer. And we're going to go over to EC2, and over to Instances here; we're going to launch an instance and see if we have that zone available now. I don't know if we're restricted to particular instance types; I assume we can launch a Linux machine, it'd be really weird if we couldn't. We'll go over to configuration, and what we want to do is choose the zone. So how do we do it? "Once it's turned on, confirm it... configure your network: create a VPC, create a carrier gateway so you can connect the resources in the VPC to the telecommunication network." Holy smokes, this is complicated, but it's just kind of interesting to see the process, right? It's not for our use case, but... a carrier gateway, right. And as I do this, I always check on all the costs, so I'll search "carrier gateway pricing aws", because maybe that's where the price is. If you don't get a pricing page, then usually it's hard to say... "logically isolated virtual networks"... again, it's not telling me. "To use a carrier gateway, you need to opt in to at least one Wavelength Zone", but I did, right? And sometimes what
happens is that it just takes time for the opt-in to go through. So go here, manage the zone settings... that was a lot easier; we have one, and we're opted in right here. Okay, and we'll go here again. If that one didn't work, we can try... I guess these are all the regions, Denver, things like that; can I opt in to this one? Opt in. It's not super exciting; all we're going to do is launch an EC2 instance, but we'll go through the process here a bit. And I don't know why I can't create one, so we'll go back over to the instructions here... "create it so you can connect... create a route table using the VPC"... so I think that's as far as we're going to get here, because I'm not seeing any options. But the idea was that we would have to create a carrier gateway, we'd update our route tables, and all we would be doing is launching an EC2 instance. So it's no different than launching one normally; you just choose a different subnet. I think you'd have to create a subnet for that zone and launch it in there, and that would be edge computing. Another example of edge computing would be something like CloudFront, where we have these Functions here, and these are functions that are deployed to CloudFront, so "my cloudfront function", and these get deployed to edge locations. All you can use here is JavaScript, so here's an example of one, and I'm fine with this development one... this function is not published, so we'll go to Test, test the function, it's good, then Publish, publish that function. The advantage of this is that if you have functions running in Lambda, there's a chance of a cold start, whereas if they're deployed on the edge here, there's still probably a cold start, but it's going to be a lot faster because it's a lot closer to the edge location. So, just different use cases. But yeah, that was one where we were launching an EC2 workload into Wavelength,
which we couldn't complete, which is totally fine, and then we have these functions on the edge. There are other edge computing services: within SageMaker you can deploy with what I think is called SageMaker Neo, and then IoT devices are obviously on the edge, so you can deploy to those as well. But generally, that gives you an idea of edge computing. Hey, it's Andrew Brown from ExamPro, and we're looking at cost and capacity management computing services. Before we talk about them, let's define them: cost management is "how do we save money?", and capacity management is "how do we meet the demand of traffic and usage through adding or upgrading servers?". So let's get to it. First are the different types of EC2 pricing models: you've got Spot Instances, Reserved Instances, and Savings Plans. These are ways to save on computing by paying up front in full or partially, by committing to a yearly or multi-year contract, or by being flexible about the availability and interruption of computing services. We have AWS Batch: this plans, schedules, and executes your batch compute workloads across the full range of AWS compute services, and it can utilize Spot Instances to save money. We have AWS Compute Optimizer, which suggests how to reduce costs and improve performance by using machine learning to analyze your previous usage history. We have EC2 Auto Scaling Groups (ASGs): these automatically add or remove EC2 servers to meet the current demand of traffic, and they will save you money and meet capacity, since you only run the amount of servers you need. Then we have ELB, the Elastic Load Balancer: this distributes traffic to multiple instances, can reroute traffic from unhealthy instances to healthy instances, and can route traffic to EC2 instances running in different Availability Zones. And then we have Elastic Beanstalk, which makes it easy to deploy web applications without developers having to worry about setting up and understanding the underlying AWS services; similar to Heroku, it's a platform as a service. Not all of these are about cost; some of them are about capacity management, like ELB, but there you go. Hey, this is Andrew Brown from ExamPro, and we are looking at the types of storage services. No matter what cloud service provider you're using, they're usually broken down into these three: block, file, and object. So let's take a look at the first. This is block storage; for AWS, this is called Elastic Block Store. Data is split into evenly sized blocks, directly accessed by the operating system, and it supports only a single write volume. So imagine you have an application over here, and that application is using a virtual machine with a specific operating system, and then it has a drive mounted to it; it could be using FC or SCSI here. The idea is that when you need a virtual drive attached to your VM, that's when you're going to be using block storage. The next one here is file storage, basically a file system; for AWS this is Elastic File System. A file is stored with data and metadata, there are multiple connections via a network share, and it supports multiple reads and writes, and locks the file. Over here we could have an application, but it doesn't necessarily have to be an application, and it's using NAS exports as the means to communicate; the protocols here can be NFS or SMB, which are very common file share protocols. The idea here is when you need a file share where multiple users or VMs need to access the same drive. This is pretty common where you might have multiple virtual machines and you just want them to act against one drive. One example: let's say you're running a Minecraft server. You're only allowed to have one world on a particular single drive, but you want to have multiple virtual machines to maximize that compute; that'd be a case for it. So there you go. Then the last one here is object storage, and so
for AWS this is called Amazon Simple Storage Service, also known as S3. An object is stored with data, metadata, and a unique ID, and it scales with no real file limit or storage limit; there's very little limit here, it basically just scales up. It supports multiple reads and writes, so there are no locks, and the protocol is HTTPS via an API. So use it when you just want to upload files and not have to worry about the underlying infrastructure; it's not intended for high IOPS (input/output operations per second). Depending on how fast you have to do your reads and writes, that's going to determine whether you're going in this direction or the other, along with how many clients need to connect at the same time and whether it has to be connected as a mounted drive to the virtual machine. Hey, it's Andrew Brown from ExamPro, and we're going to do a short introduction to S3, because on the Certified Cloud Practitioner they ask you a little bit more about it than they used to, so we need to be a bit familiar with S3. I think AWS considers it its flagship storage service, and it really is one of the earliest services, the second one ever launched. So what is object storage (or object-based storage)? It's a data storage architecture that manages data as objects, as opposed to other storage architectures: file systems, which manage data as files in a hierarchy, and block storage, which manages data as blocks within sectors and tracks on an actual drive. The idea here is that S3 provides basically unlimited storage; you don't need to think about the underlying infrastructure, and the S3 console provides an interface for you to upload and access your data. We have the concept of an S3 object: objects contain your data, they are like files, and an object may consist of a key, which is the name of the object; a value, the data itself
made up of a sequence of bytes; a version ID, the version of the object when versioning is enabled; and metadata, additional information attached to the object. Then you have your S3 buckets: buckets hold objects, and buckets can also have folders, which in turn hold objects. S3 is a universal namespace, so bucket names must be unique; it's like having a domain name. One other interesting thing is that an individual object can be between zero bytes and five terabytes, so you have unlimited storage, but you can't have files of unlimited size. Five terabytes is a lot, but nothing beyond that for a single file. Just understand that you can actually have a zero-byte file; for the associate certifications, that can be an actual exam question, which is why it's here. All right, let's take a look at S3 storage classes. For the Certified Cloud Practitioner, we need to know generally what these are; for the associate levels, we need more detail than we have here, but let's get through it. AWS offers a range of S3 storage classes that trade retrieval time, accessibility, and durability for cheaper storage, and the farther down the list we go, the more cost-effective it should get, pending certain conditions. So when you put something into S3, it's going to go into the Standard tier, the default tier, and this is incredibly fast: it has 99.99% availability, eleven nines of durability, and it's replicated across three AZs. We have this "cheaper" meter here on the left-hand side, and Standard is at the expensive end; it's not actually expensive, but it is expensive at scale, when you could better optimize with these other tiers. Then you have S3 Intelligent-Tiering: this uses ML to analyze object usage and determine the appropriate storage class, and data is moved to the most cost-effective access tier without any performance impact or added overhead. Then you have S3 Standard-IA, which stands for
Infrequent Access. This is just as fast as S3 Standard, but it's cheaper if you access the files less than once a month; there's an additional retrieval fee applied, so if you retrieve data as frequently as you would on S3 Standard, it's actually going to end up costing you more, and you don't want that. Then you have S3 One Zone-IA: as the name says, it's running in a single zone, so it's as fast as S3 Standard, but it's going to have lower availability, and you're going to save money. There is one caveat, though: your data could get destroyed, because it remains in a single AZ, so if that AZ or data center suffers a catastrophe, you're not going to have a duplicate of your data to retrieve. Then you have S3 Glacier, for long-term cold storage: retrieval of data can take minutes to hours, but it's very, very cheap. And then you have S3 Glacier Deep Archive, which is the lowest-cost storage class, but data retrieval takes about 12 hours. All of these from here to here are going to be in the same Amazon S3 console; S3 Glacier is basically like its own service, but it's part of S3, so it kind of lives in this weird state. There's one that we don't have on this list, which is S3 on Outposts, because it has its own storage class and doesn't exactly fit well into this leaner, cheaper lineup. Hey, it's Andrew Brown from ExamPro, and we're taking a look at the AWS Snow family. These are storage and compute devices used to physically move data in or out of the cloud when moving data over the internet or a private connection is too slow, difficult, or costly. We have Snowcone, Snowball Edge, and Snowmobile. There originally was just Snowball, and then they came out with Snowball Edge; Edge introduced edge computing, which is why there's "Edge" in the name, but pretty much all of these devices now have edge computing, and they individually come with some variants. So with the
Snowcone, it comes in two sizes: one with eight terabytes of usable storage, and one with 14 terabytes of usable storage. Snowball Edge technically has like four versions, but I'm going to break it down to two for you: we have Storage Optimized, with 80 terabytes of usable storage, and Compute Optimized, with 39.5 terabytes, and even though it's not shown here, you get a lot of vCPUs and increased memory, which could be very important if you need to do edge computing before you send the data over to AWS. And last here we have Snowmobile, which can store up to 100 petabytes. In the associate courses I cover these in a lot more detail, because there's so much more about them: their security, how they're tamper-proof, how they have networking built in, how you connect to them. For this exam, that's just too much information; you just need to know that there are three devices in the family, generally what the sizes are, and that the data all gets placed into Amazon S3. What's interesting is that even though Snowmobile only does a hundred petabytes per trailer, AWS markets it as being able to move exabytes of content, because you can order more than one of these devices. So they'll market it saying Snowball Edge is for when you want to move petabytes of data and Snowmobile is for when you want to move exabytes, but you can see that a single unit isn't in the exabytes, just the petabytes. Hey, this is Andrew Brown from ExamPro, and we are taking a look at all the AWS storage services in brief, so let's get to it. First is Simple Storage Service, S3: this is a serverless object storage service; you can upload very large files and an unlimited number of files; you pay for what you store; and you don't worry about the underlying file system or upgrading the disk size. You have S3 Glacier: this is a cold storage service, designed as a low-cost storage solution for
archiving and long-term backup; it uses previous-generation HDD drives to get that low cost, and it's highly secure and durable. We have Elastic Block Store, EBS: a persistent block storage service; it is a virtual hard drive in the cloud that you attach to EC2 instances, and you can choose different kinds of drives: SSD, Provisioned IOPS SSD, Throughput Optimized HDD, and Cold HDD. We have Elastic File System, EFS: a cloud-native NFS file system service, so file storage you can mount to multiple EC2 instances at the same time, for when you need to share files between multiple servers. We have Storage Gateway: a hybrid cloud storage service that extends your on-premise storage to the cloud, and there are three offerings here: File Gateway extends your local storage to Amazon S3; Volume Gateway caches your local drive to S3, so you have a continuous backup of the local files in the cloud; and Tape Gateway stores files onto virtual tapes, for backing up your files on very cost-effective long-term storage. We've got one more page here, because there are a lot of services. We have the AWS Snow family: storage devices used to physically migrate large amounts of data to the cloud. We have Snowball and Snowball Edge, briefcase-sized data storage devices between 50 and 80 terabytes; I don't believe the original Snowball is available anymore, it's just Snowball Edge, but it's good to have all of them in here so we can see what's going on. We have Snowmobile: a cargo container filled with racks of storage and compute that is transported via a semi-trailer tractor truck, to transfer up to 100 petabytes of data per trailer; I don't think we're going to be ordering that anytime soon, because it's pretty darn expensive, but it's cool. We have Snowcone, a very small version of Snowball that can transfer eight terabytes of data. We have AWS Backup: a fully managed backup service that makes it easy to centralize and automate the backup of data across multiple services, so EC2, EBS,
RDS, DynamoDB, EFS, and Storage Gateway; you create the backup plans. We have CloudEndure Disaster Recovery: it continuously replicates your machines into a low-cost staging area in your target AWS account and preferred region, enabling fast and reliable recovery in case of IT or data center failures. We have Amazon FSx: a feature-rich and highly performant file system that can be used for Windows, which would use SMB, or Linux, which uses Lustre. So there we have Amazon FSx for Windows File Server, which uses the SMB protocol and allows you to mount FSx to Windows servers, and then the Lustre one, which uses the Linux Lustre file system and allows you to mount FSx to Linux servers. Are there any storage services missing here? Not really; you could count Elastic Container Registry as one, but that's kind of something else, or maybe CodeCommit, but I put those in separate categories: developer tools and containers. All right, so what I want to do is show you around S3, so we'll make our way up here, type in S3, and let it load, and what we're going to do is create a new bucket. If you do not see this screen, just click on the side here, go to Buckets, and we'll create ourselves a new bucket. Bucket names are unique, so let's say "my-bucket" and we'll just pound in a bunch of numbers; I'm sure you're getting used to making buckets in this course by now. If we scroll on down, notice that it says "Block Public Access settings for this bucket", and the blocking is turned on by default, because open S3 buckets are the number one point of entry for malicious actors, where people leave their buckets open. If we wanted to grant access to this bucket for people to see it publicly, we'd have to turn this off, but for now we're going to leave it on. You can version things in buckets, which is pretty cool, and you can turn on encryption, which you should turn on.
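As an aside, the same bucket setup we're clicking through can be sketched with the AWS CLI. The bucket name and image file here are made-up placeholders (bucket names are globally unique), and SSE-S3 with AES256 is what the console calls the Amazon S3 managed key:

```shell
# Create a bucket (the name must be globally unique)
aws s3 mb s3://my-bucket-12345678

# Turn on default server-side encryption with the S3-managed key (SSE-S3)
aws s3api put-bucket-encryption --bucket my-bucket-12345678 \
  --server-side-encryption-configuration \
  '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'

# Upload a file and list the bucket's contents
aws s3 cp picard.jpg s3://my-bucket-12345678/
aws s3 ls s3://my-bucket-12345678/
```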
By default you'd use the Amazon S3 managed key. On the Certified Cloud Practitioner exam, it's going to ask you about client-side encryption and server-side encryption, so you definitely want to know what those are. I'm going to turn it off for the time being, so we can explore here by ourselves. Then there's Object Lock, so we can lock files so that people aren't overwriting them. Go ahead and create the bucket; it's very quick. So here's the new bucket we made, and you'll notice we have nothing in it, which is totally fine. If I go to Properties, we can see that we can turn on bucket versioning and turn on encryption, but what I'm going to do is go grab some files. I saved some files recently; I'm just going to make a new folder called Star Trek. I just have some graphics; you can pull anything off the internet if you want to do this yourself, but I'm just going to prepare a folder here, it'll take me a moment. Okay, great, so now I have my folder prepared, and what I want to do is upload my first file. I can go here and upload, and actually I can upload multiple files, and you can add a folder, which is nice. So in here, if I want to upload these files, whoops, I'll just select multiples, hit Open, and it'll queue them up, which is really nice. We can see the destination details here; if we wanted to turn versioning on, we could, and we could apply permissions for outside access, but we have things turned off. What's really important is the Properties section, where we have these different storage tiers, and based on the tier that you use, the lower you go, the cheaper it should get, but with some trade-offs; we'll cover that through the course. Then there's that server-side encryption, and I'm going to hit Upload; we'll just individually turn it on later. You're going to see the progress go across the top; these have all been uploaded. I'm going to click on my
destination bucket, and what we can do is open these; if they're images, they'll show right here in the browser, and we can download them if we need to get them again. We can create a folder here and just say Star Trek, or Enterprise D, entered here. But it's not really easy to move things; it's not like I can drag this into there. There's no move option, so you'd actually have to copy it into the destination and then delete the old one. It's not like using a file system; there's a lot more work involved, but it's a great storage solution. So let's look at encryption. I have this object selected; if I click into it, I can go to Permissions, I can go to Versions... I'm looking for encryption... here we go. If I turn it on, I can enable encryption, and I can choose whether I want to use an Amazon S3 key, so SSE-S3, an encryption key that Amazon S3 creates, manages, and uses for you. Then you have AWS SSE-KMS; I believe the first one uses AES-256, which is totally fine, and then you have KMS down here. It's interesting, because one says Amazon S3 will manage the key for you, and the other says AWS KMS will manage the key for you; it's just slightly different, and the first one, of course, is a lot simpler. There are not many reasons not to turn on encryption, so I'm going to turn it on for this one so that it is encrypted. And just because it's encrypted doesn't mean we can't access the file: I can still download it, I can still view it, because AWS is going to decrypt it for us. So if I click on this one and say Open, even though it's encrypted, I can still view it; it just means that it's encrypted at rest on the storage. If somebody were to steal whatever hard drive it's sitting on at AWS, they're not going to be able to open up the file, right? So that is the logic there, but through here, I can still get it. Something that's really interesting with S3 is the ability to have lifecycle events. I'm just looking for where that is; it's usually on the bucket, so if I go to Management up here, I can set up a lifecycle rule, and what I can do is say, move this to deep storage. Then I can say what it is that I want to filter, maybe it's like data.jpg, or I can apply it to all objects in the bucket; I acknowledge that, and we say "move current versions of objects between storage classes", I checkbox that on, and I can say move them to Glacier after 30 days; I think if I go lower, it'll complain, probably when I save. So the idea is that we can move things into colder storage; it's showing you here: a file is uploaded, and then after 30 days it moves into Glacier, so we save money. That's a big advantage of S3. There's a lot going on in S3; for example, you can turn on web hosting, so you can turn this into a website, down below here, and there's a whole bunch of other things you can do. We're not going to get into that, because that's just too much work, but we learned the basics of S3. So what I want to do now is delete this, and I have to empty it first; watch, it'll say you cannot delete the bucket, you need to empty it first. So go ahead and empty it... I guess I have to type in "permanently delete"... they used to... oh, I can copy it, okay, great. Once the bucket is emptied, I can go back to the bucket list, go back one layer, and go ahead and delete my bucket. And you can only have so many buckets; I think it's like a hundred, you get like 100 buckets. "How many buckets can you have in AWS?" 100 buckets; yeah, I was right. And if you wanted to know how many you have, there's probably a service limits page... service limits... Service Quotas, so you go here, you say AWS services, S3, how many buckets: 100, right there. Okay, so that gives you kind of an
idea of what's going on there, but there you go, that's S3. All right, so let's go take a look at Elastic Block Store, which provides virtual hard drives for EC2. What I'm going to do is make my way over to the EC2 console, because that is where it lives, and on the left-hand side, if we scroll on down, you'll see Elastic Block Store, Volumes. We can go here, and the idea is we can go ahead and create ourselves a volume, and you'll notice that we have a few different options: General Purpose, Provisioned IOPS, Cold HDD, Throughput Optimized, and Magnetic; Magnetic being basically like physical tape that you'd use for backups, the old-school stuff. You have all these options, and you can choose the size, and when you change these options, you're going to notice that some things change, like the throughput or IOPS; notice that General Purpose is fixed at between 300 and 3,000, and notice that the size goes from one gigabyte up to however many that is, that's a lot there. So it's not too complicated, but in practice, I don't really create volumes this way. What I do is launch an EC2 instance, so I'll say launch EC2 instance, and we'll choose Amazon Linux 2; again, if we haven't done the EC2 follow-along yet, we'll cover all this stuff in more detail, so don't worry about it. We go to Configure Instance, then Add Storage, and this is what you'll be doing when adding EBS volumes to your EC2 instances. You'll notice we always have a root volume attached to the EC2 instance that we cannot remove; we can change the size up here, and it shows us right here that we have up to 30 gigabytes, so sometimes you might want to max that out to take advantage of the free tier. You'll notice we can also change the type, though there might be some limitations for the root volume: we can't have a Cold HDD or Throughput Optimized HDD as our root volume. Notice
we have Delete on Termination. An EBS volume persists independently from the running life of the instance, and you can choose to automatically delete the EBS volume when the associated instance is terminated; if you take this off and the EC2 instance is deleted, the volume will still remain, which could be important to you. For encryption here, you might want to turn it on; AWS always has a KMS-managed default key, which is free, so you checkbox that on and the volume will be encrypted. You can turn it on later, but you can never turn encryption off, so you should always turn encryption on; just be aware of that. You can also add file systems down below here, but maybe we'll talk about that later, because I think that gets into EFS, which is a different type of file storage. That's pretty much all there is to it: you just go ahead and create your volume, and then it shows up under EBS Volumes, and we could take snapshots of them to back them up, which go to S3, but that's all we really need to know here. All right, let's take a look at Elastic File System, or EFS, managed file storage. What does EFS stand for? Elastic File System, okay, sorry. And so what we can do is go ahead and create a file system here, so I'm going to say "my-efs", and the great thing is that it's basically serverless, so it's only going to be what you consume; so what you store and what you consume, I think that's what it's going to be based on. We have to choose a VPC; I want to launch it in my default VPC, and we have the choice of Regional or One Zone. One Zone is probably more cost-effective, but I'm going to choose Regional, and that's a new option I never noticed before. I just opened it up to see a few more things here: we have General Purpose, Max I/O, Bursting, Provisioned, things like that. We'll hit Next, we'll choose our AZs, and then you might have to set up a policy, so I'm going to hit
Next here, and you'll go ahead and hit Create. So this is really interesting, but the trick to it is really mounting it to an EC2 instance, and that's kind of the pain. If we go into this, you have to mount it, and there are commands for it, so let's search "efs mounting linux commands". I've done this in my Solutions Architect Associate course, but again, I'm not doing it on a regular basis, so I don't remember. If we go here, I'm just trying to see if we can find some code that tells us how to mount it: "Mounting on an Amazon EC2 Linux instance using the EFS mount helper". I don't know if they had that before, but that sounds interesting: sudo mount -t efs, then the file system, then the EFS mount point. Yeah, this looks a lot easier than what we had before; before, I had to enter a bunch of weird commands, but now it looks like they've boiled it down to a single command. But once you have your EFS instance... I'm going to assume that there is an entry point here... just clicking around, seeing what we can see... I would imagine we have to create an access point, so "my access point", sure; I didn't know if it was going to let me just do that, but it did. So I would imagine that you'd probably use an access point... let's go back here... is that the mount point? I think the mount point and the access point... you create access points, and that's what you use. We can go here and we can Attach it, and oh yeah, here are the commands: mount via DNS or mount via IP address. It doesn't look too hard, so we can give it a go; I haven't done it in a while, and it looks like they've made it easier, so maybe we'll try it out. Okay, so go to EC2 here, and I'm going to launch an instance; I'm going to choose Amazon Linux 2, we're going to go and choose that, and then we want to choose a file system, so it's going to mount to here. Storage is fine, all this is fine, and I'm going to go ahead and launch this, and I need a new key pair, so create a new key pair; this
will be for efs example okay we're going to download that key pair there we're going to launch this instance okay and then we're going to go view this and as that is launching what i'm going to do is open up my cloud shell and i'm going to want to upload this pen so again like before i'm going to drag it to my desktop off screen and then what i'm going to do is upload this file so i have it efs example okay we're going to upload it i just want to see if we can access that efs volume and so if i do ls that's our old one which i can delete by the way i'm never going to use that anytime soon yes ls and i'm just delete the hello text there so it's a bit cleaner for what we're doing and so we need to mod that 400 uh efs example and we saw that's how like if you want to try to connect to a server remotely that's what you do right so i believe that the drive is mounted if i go to storage does it show up here it doesn't show up under here but what we're waiting for are these two status checks to pass and then we can ssh into this machine and i'm just going to go back here and take a look here so using the efs mount helper so sudo mount hyphen t efs tls this volume to efs and so i imagine it's going to mount it to efs here using the nfs client so i guess it just depends on what we're going to have available to us even if the sas checks haven't passed i'm going to try to get into this anyway so what we can do is click on this grab the public ip address we'll type in ssh ec2 hyphen user at sign paste this in hyphen i efs example pem i usually don't log in via ssh um but you know just for this example i will and so i want to see if this drive exists it usually be under mount right there it is okay so it already mounted for us so i can do touch hello world dot text say sudo here i can say sudo vi i'm going to open up the file and say hello from another computer okay and so i've saved that file and what i want to do now oops oh okay sorry i'm in the cloud shell here but what i 
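as a quick aside, the chmod 400 step above is just tightening the key file to owner-read-only so ssh will accept it. here is a minimal local sketch of that step; the file name is the placeholder from this walkthrough, and the actual ssh and mount commands (shown as comments) need a live instance and a real efs target:

```python
import os
import stat

# stand-in for the downloaded key pair file (hypothetical name from this walkthrough)
open("efs-example.pem", "w").close()

# ssh refuses private keys that are readable by the group or others,
# so restrict the file to owner read-only (mode 400)
os.chmod("efs-example.pem", 0o400)

# confirm the permission bits
mode = stat.S_IMODE(os.stat("efs-example.pem").st_mode)
print(oct(mode))  # 0o400

os.remove("efs-example.pem")  # clean up the stand-in file

# the remote steps from the video, for reference only (need real aws resources):
#   ssh -i efs-example.pem ec2-user@<public-ip>
#   sudo mount -t efs <file-system-id>:/ /mnt/efs
```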
but what i want to do now is kill this machine, okay, and what i'm going to do is spin up another ec2 instance. i'm going to see, when i mount that, if that file is there, if it actually worked. but wow, that is so much easier than before; i can't tell you how hard it was to attach an efs volume the last time i did it. so we'll go ahead, we'll add that file system, and the storage is fine. we're gonna go to review here, we're gonna say launch, and i'm just gonna stick with the same key pair there. we're going to give that a moment to launch, and we're going to go to view instances. so now this one is launching, and as that's launching, let's just go peek around and see what we can see. so i imagine if we didn't add that file system during the boot, and we were adding it after the fact, we probably could have just run that mount line and added it really easily. i'm not going to bother testing that, because i just don't want to go through the trouble. i still can't remember what these access points are for, but that's okay; it's kind of out of scope for the certified cloud practitioner. i'm just curious, so, we have some nice monitoring here, right, so that's kind of nice. i guess they're trying to suggest here things like aws backup, datasync, and transfer: backup would just be backing up; datasync simplifies, automates, and accelerates moving data, okay, that's pretty straightforward; and transfer family is fully managed sftp, okay, so nothing exciting there. and we're going to refresh that there, and this is initializing, so let's go see if we can connect to this one. i'm going to go ahead and grab that public ip address, hit up, okay, swap out that ip address, and we're going to see if we can connect to that machine yet. so we'll say yes, and we got into it, so that's great. and so what i'm going to do is go again into the mount directory, efs fs1, then ls, and there it is. i'm going to do cat hello-world, and it works. and so that's the cool thing about efs: you have a file system that you can share among other ec2 instances. i'm sure users could connect to it using the nfs protocol; i'm not the best at networking or storage networking, so i'm not going to show that to you here today, but that gives you a general idea of how efs works. again, you only pay for what you store; it is serverless. so we'll go here and type delete, because i'm done with this. i'll probably destroy the instance first so it doesn't get mixed up, and just so we clean up a little bit better here, i'm going to delete these keys, delete, okay, and we'll go ahead and delete this one as well, delete. so i'm done with that. we'll make sure that that is tearing down, that is good, and we'll make our way back over here, and it says to enter the file system's id in, so we'll enter that in and hit confirm, and we'll see, is it deleting? i'm not confident with it, so i'm going to do it one more time and confirm that by entering the file system's id, so we'll put it in again. is it destroying? i cannot tell. there we go, so it's destroying, we are in good shape. it is gone, our data is gone, but yeah, that is efs.

all right, let's take a look at the snow family in aws. so if we type in snow up here and we click into the snow family, this is where we can probably order ourselves a device. i might not be able to order them; at least when i originally looked at this way back in the day, it wasn't available in canada, so i'm kind of curious to see what there is. but the idea is that you're going to go here and order, and you have some options: you can import into s3, or export from s3, and then down below we have local compute and storage, so, perform local compute and storage workloads without transferring data, and you can order multiple devices and clusters for increased durability and storage capacity. so it sounds like you're not transferring data, you're just using it locally; it's basically renting temporary computers, which is kind of interesting, i never saw that option before, but
we're going to choose import into aws s3, and we're just going to read through this stuff. it's not my expectation that we're going to be able to submit a job here, and you probably don't want to, because it's going to cost money, but i just want to show you the process so we can see what there is here. so, snow job assistance: if you're new to the snow family, run a pilot of one to two devices; batch files smaller than one megabyte; benchmark and optimize; deploy staging workstations; discover and remediate environmental issues early; file and folder names must conform to amazon s3; prepare your ami; once the pilot is completed, confirm the number of snow family devices that you can copy to simultaneously; follow the best practices. use the following resources to manage your snow devices: we have aws opshub, and then there's the snowball edge client cli. opshub is a graphical user interface you can use to manage snow devices, so that's kind of cool, and then we have the cli, which i imagine is something that's very useful to use. i'll just close those off here, and then we have other things. so i'm going to say i acknowledge that i know what i'm doing, which i don't really, but that's okay, and then here we are going to enter in our address. so we say andrew brown, and i'm not gonna enter this in for real, just whatever, so it'll be toronto, exam pro, canada. oh, see, so there's the thing: you can only ship it to the us, and so that's as far as i can get, okay. and that's the thing: if you really want to know aws inside and out, you've got to be in the us. but let's pretend that we do have an address in the states. what's a very famous address? what is the address of the white house? okay, there it is, so i'm just going to copy that in, because again, we're not going to submit this for real, i just want to see what's farther down the line here. what's nw, is that the state it's in? washington, right? is this part of it, nw, northwest, is that a thing? i'm from canada, so i couldn't tell you. so we'll go down here, and we have washington. do we have a second address line? it doesn't look like it. we have a zip code, i believe this is the zip code, and do we need a phone number? looks like we do: four one six, one one one, one one one one, okay. we have one-day or two-day shipping; why not just have one-day, right? and so then we can choose our type of device: we have snowcone, snowcone ssd, storage optimized (i'm surprised i never took a screenshot of this earlier), compute optimized, things like that. so you can choose which one you want. it looks like we're going to see some different options, but we'll go with snowcone, my snowcone, and snowcones do not ship with a power supply or ethernet cable: snowcone devices are powered by a 45 watt usb-c power supply. i'll provide my own power supply and cable; do not ship with a power supply or ethernet cable, that's fine. snowcone wireless? no, we can skip the wireless connection. connect the buckets you want, and there's a bucket we created earlier. compute using ec2 instances: use a device as a mobile data center by loading an ec2 ami, so here's an ami that i might want to use. aws iot greengrass validated ami, not interested. remote device management: you can use opshub or the cli to monitor and reboot your device, that's fine. and so then we need to choose our security key; i don't know if i'll have to set the service role, we'll see what happens here, and we'll let it update, that's fine. and so then i guess we just hit create job. i don't really want to order one, so i'm not going to hit that button; also, it's going to go to the white house, and they're going to be like, andrew brown, why did you do that? so that's not something i feel like doing today, but at least that gives you an idea of the process there. and i imagine that if you go the other way, it's gonna be pretty similar. yeah, it's just the same stuff, i think, and it saved that address, which is not a real address, and the options are a little bit limited here; it's nfs-based or s3-based, so it's slightly different, but it's basically the same process. just curious, we'll take a look at the last one there, since there are three options. just curious, okay, similar thing, okay. so yeah, that's pretty much all i want to know about the snow family, and that's about it, okay.

hey, this is andrew brown from exam pro, and we are taking a look at what is a database. so a database is a data store that stores semi-structured and structured data. and just to emphasize a bit more: a database is a more complex data store, because it requires using formal design and modeling techniques. databases can generally be categorized as either being relational, so, structured data that strongly represents tabular data (we're talking about tables, rows, and columns, and there's a concept of row-oriented or column-oriented), or non-relational, so, semi-structured data that may or may not distinctly resemble tabular data. so here is a very simple example: the idea is that you might use some kind of language like sql, put it into your database, and you'll get back out tables, for relational databases. let's just talk about some of the functionality that these databases have: they can use a specialized language to query, so, retrieve, data, in this case sql; specialized modeling strategies to optimize retrieval for different use cases; and more fine-tuned control over the transformation of the data into useful data structures or reports. and normally "a database" infers someone is using a relational, row-oriented data store, so just understand that when people say database, that's usually what they're talking about, like postgres or mysql. a relational row store is usually the default, but obviously it's a much broader term, okay.

hey, this is andrew brown from exam pro, and we are taking a look at what is a data warehouse. so it's a relational data store designed for analytical
workloads, and is generally a column-oriented data store, okay. so companies will have terabytes and millions of rows of data, and they'll need a fast way to be able to produce analytics reports. data warehouses generally perform aggregation; aggregation is the idea of grouping data together to find a total or an average, and data warehouses are optimized around columns, since they need to quickly aggregate column data. and so here's kind of a diagram of a data warehouse; the idea is that it could be ingesting data from a regular database (i'm just getting out my pen tool), so it could be a regular database, or it could be coming from a different data source that isn't compatible in terms of the schema, and you use something like etl or elt to get that data into the data warehouse. data warehouses are generally designed to be hot; hot means that they can return queries very fast, even though they have vast amounts of data. data warehouses are infrequently accessed, meaning they aren't intended for real-time reporting, but maybe once or twice a day, or once a week, to generate business and user reports; of course, it's going to vary based on the service that is offering the data warehouse. a data warehouse needs to consume data from a relational database on a regular basis, and again, it can consume it from other places, but you'll have to transform it to get it in there, okay.

hey, this is andrew brown from exam pro, and we're taking a look at a key value store. so a key value store, or key value database, is a type of non-relational database, or nosql, that uses a simple key value method to store data. key value stores are dumb and fast, but they generally lack features like relationships, indexes, and aggregation; of course, there are going to be providers out there with managed solutions that might polyfill some of those gaps, but i want to show you the underlying way that key value stores work, to kind of distinguish them from document stores. so a key value store is literally a unique key alongside a value, and the reason i'm representing the value as zeros and ones is because i want you to understand that that's what it is: it's basically just some kind of data there, and how the key value store interprets it is going to determine what it is. so when you look at a document database, that is just a key value store that interprets the value as being documents, right? and key value stores can and do commonly store an associative array as the value; that's pretty common, and even for dynamodb, that's how it does it. and so that's why, when you look at a key value store, it looks like a table, but it's not actually a table; it's schema-less, because underneath it's really just that associative array, and that's why you can have rows that have different amounts of columns, okay. so due to the design, they are able to scale very well beyond a relational database, and they can kind of work like a relational database without all the bells and whistles, so hopefully that makes sense, okay.

all right, let's take a look at document stores. so a document store is a nosql database that stores documents as its primary data structure. a document could be an xml type of structure, but it also could be something like json, or json-like. document stores are sub-classes of key value stores, and the components of a document store are very comparable to relational databases, so, just as an example: what would be called tables in a relational database are now collections; what were called rows are now called documents; where you had columns, they have fields; they may have indexes; and joins might be called embedding and linking. so you can translate that knowledge over. they don't have the same kind of feature set as a relational database, but you get better scalability, and honestly, document stores are just key value stores with some additional features built on top.
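the associative-array idea above can be sketched in a few lines of python; the data is made up, but it shows why a key value store looks like a table without actually being one:

```python
# a key value store is literally unique keys mapped to values; here each
# value happens to be an associative array (a dict), like a dynamodb item
store = {
    "user#1": {"email": "a@example.com", "food": ["banana", "pizza"]},
    "user#2": {"email": "b@example.com"},  # no "food" attribute at all
}

# schema-less: each "row" can carry different "columns", because underneath
# there is no table, just keys pointing at whatever value was stored
columns_per_row = {key: sorted(value) for key, value in store.items()}
print(columns_per_row)  # {'user#1': ['email', 'food'], 'user#2': ['email']}
```

a document store is the same picture, except the store promises the values are documents (json-like here) and layers features such as field indexes on top.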
okay.

hey, it's andrew brown from exam pro, and we're going to take a look at the nosql database services that are available on aws. so we have dynamodb, which is a serverless nosql key value and document database. it is designed to scale to billions of records, with guaranteed consistent data returned in under a second, and you do not have to worry about managing shards. dynamodb is aws's flagship database service, meaning whenever we think of a database service that just scales, is cost effective, and is very fast, we should think of dynamodb. and in 2019, amazon, the online shopping retailer, shut down their last oracle database and completed their migration to dynamodb. they had 7,500 oracle databases with 75 petabytes of data, and with dynamodb they reduced their cost by 60 percent and reduced latency by 40 percent, so that's kind of a testimonial between a relational and a nosql database. so when we want a massively scalable database, that is what we want dynamodb for, and i really just want to put that there, because if you remember that, you're always going to get those questions right on the exam, okay. then we have documentdb, a nosql document database that is mongodb compatible. mongodb is very popular among developers, but there were open source licensing issues around using open source mongodb, so aws got around it by basically building their own mongodb-compatible database. so when you want a mongodb-like database, you're going to be using documentdb. we have amazon keyspaces; this is a fully managed apache cassandra database. cassandra is an open source nosql key value database, similar to dynamodb in that it is a wide-column store database, but it has some additional functionality. so when you want to use apache cassandra, you're using amazon keyspaces.

hey, this is andrew brown from exam pro, and we are taking a look at relational database services, starting with relational database service, rds. this is a relational database service that supports multiple sql engines. relational is synonymous with sql and online transaction processing, oltp, and relational databases are the most commonly used type of database among tech companies and startups, just because they're so easy to use; i use them, i love them. rds supports the following sql engines. we first have mysql: this is the most popular open source sql database, and it was purchased and is now owned by oracle. there's an interesting story there, because when oracle purchased it, it wasn't supposed to go to them: mysql was sold to sun microsystems, and then oracle purchased sun, and the original creators never wanted it to go to oracle, just because of the way they do licensing and things like that. and so the original creators came back and decided to fork mysql and maintain it as mariadb, just so that oracle could never push aside the most popular database so that everyone had to go to a paid solution. then you have postgres: psql, as it's commonly known, is the most popular open source sql database among developers. this is the one i like to use, because it has so many rich features over mysql, but it does come with added complexity. then oracle has its own proprietary sql database, which is well used by enterprise companies, but you have to buy a license to use it. then you have microsoft sql server, microsoft's proprietary sql database, and with this one you also have to buy a license to use it. then you have aurora: this is a fully managed database, and there's a lot more going on here with aurora, so we'll talk about it; it almost acts as a separate service, but it is powered by rds. so aurora is a fully managed database for either mysql (five times faster) or postgres (three times faster). so when you want a highly available, durable, scalable, and secure relational database for postgres or mysql, you want to use aurora. then you have aurora serverless: this is a serverless on-demand version of aurora. so when you want most of the benefits of aurora but you can trade off with cold starts, or you don't have lots of traffic or demand, this is a way you can use aurora in a serverless fashion. then you have rds on vmware: this allows you to deploy rds-supported engines to on-premise data centers; the data center must be using vmware for server virtualization. so that's when you want databases managed by rds in your own data center, and yeah, i realize there's a small spelling mistake on the slide, it should just say "on" here, but there you go.

hey, this is andrew brown from exam pro, and we're looking at the other database services that aws has, because there are just a few loose ones here. so let's talk about redshift: it is a petabyte-size data warehouse, and data warehouses are for online analytical processing, olap. data warehouses can be expensive, because they are keeping data hot, meaning that they can run a very complex query on a large amount of data and get that data back very fast. so when you need to quickly generate analytics or reports from a large amount of data, you're going to be using redshift. then you have elasticache: this is a managed database service for in-memory and caching open source databases such as redis or memcached. so when you need to improve the performance of an application by adding a caching layer in front of your web servers or database, you're going to be using elasticache. then you have neptune: this is a managed graph database, where the data is represented as interconnected nodes. i believe that it uses gremlin as the way to interface with it, which is no surprise, because that's what it looks like most graph database providers are using. so when you need to understand the connections between data, like mapping fraud rings or social media relationships, very relationship-heavy information, you're gonna want to use neptune. we have amazon timestream; it's a fully managed time series
database, so think of devices that send lots of data that are time sensitive, such as iot devices; when you need to measure how things change over time, that's timestream. we have amazon quantum ledger database: this is a fully managed ledger database that provides a transparent, immutable, cryptographically verifiable transaction log. so when you need to record a history of financial activities that can be trusted, that's the one. and the last one here is database migration service, dms; it's not a database per se, but a migration service. so you can migrate from an on-premise database to aws, between two databases in different or the same aws accounts using different sql engines, and from a sql to a nosql database. and i'm pretty sure we cover this in a bit greater detail later in this course, okay.

all right, let's go take a look at dynamodb, which is aws's nosql database. so we'll go over to dynamodb, and what we'll do is create ourselves a new table, and we'll just say my dynamodb table. you always have to choose a partition key (you don't necessarily have to have a sort key), and you want it to be really unique, so it could be something like email, and the sort key could be created at, right. and so we have string, binary; notice that the types are very simple. then for settings we have default settings or customized settings: the default is to use provisioned capacity mode, read and write capacity of five, etc.; custom lets you set no secondary indexes, use kms, and so on. so i'm gonna just expand that to see what i'm looking at. we have two options here: on-demand, which simplifies billing by paying for the actual reads and writes that you use, or provisioned, which is where you get a guarantee of performance. so if you want to be able to do, whatever it is, a thousand (i don't know what it goes up to, but say a thousand) reads and writes per second, then that's what you're paying for, okay: you're paying for having a guarantee of that capacity. i'm not going to create any secondary indexes, but that's just like another way to look at data. notice down below that we have a cost of two dollars and ninety-one cents. then we have encryption at rest, so you can do owned by amazon dynamodb, which is pretty much the same idea as what s3 has with sse-s3, or you could use, actually, i guess most of these are probably kms, i would imagine. we'll go ahead and create the table here, and that's going to create the table; this is usually really, really fast. we'll go here, and what we can do is insert some data. so as it's just starting up here, we can go over to our tables (they recently changed this ui, so that's why i look a bit confused), view items up here, okay, and then from here we can create an item. so i can add something, say andrew at exam pro dot co, and for created at, well, we'll just do the future, so let's say 2025-05-05; i don't want to have to think too hard here. but we can add additional information, so i can say, like, today: true, and we could make a list, you know, food, and then i could go here and then add a string. it is not working, oh, there we go, there we are. so we could say, like, banana, and then we could say pizza, right. we can go ahead and create that item, so now that item is in our database. we can do a scan, which will return all items; we can query, where we actually have some limitations on what we're choosing; and there's the partiql editor, so we can use sql to select it. i have not used this before, so, partiql aws, or partiql dynamodb examples; i'm hoping i can just find an example of the language. getting started here: i don't need an explanation, just show me an example query, and i'll get to it, okay, so here's some examples, right. so maybe we can give this a go. we have our table here, my dynamodb table, and i just want the email back, we don't need a where. we'll run this, see if it works; there we go. i'm not sure if we can select additional data there; i know that we had some other things, like food; there it is, okay.
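the scan, query, and partiql projection we just tried can be mimicked locally to make the semantics concrete. this is only an illustration using the item typed in during the walkthrough, not the real dynamodb api:

```python
# items keyed by partition key (email) and sort key (created at)
items = [
    {"email": "andrew@exampro.co", "created_at": "2025-05-05",
     "today": True, "food": ["banana", "pizza"]},
    {"email": "someone@exampro.co", "created_at": "2024-01-01"},
]

# scan: return every item in the table
scan_result = list(items)

# query: narrow results by the partition key
query_result = [i for i in items if i["email"] == "andrew@exampro.co"]

# partiql-style projection: SELECT email FROM "my dynamodb table"
emails = [{"email": i["email"]} for i in items]

print(len(scan_result), len(query_result), emails[0])
# 2 1 {'email': 'andrew@exampro.co'}
```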
so that's really nice. another addition to it: dynamodb can stream changes into a dynamodb stream to go to kinesis and do a lot of fun things, so there are all sorts of things you can do with dynamodb, but i'm pretty much done with this, so i'm going to go ahead and delete this table. notice that it also created some cloudwatch alarms, so we want to delete those as well. create a backup? no, we do not care. go ahead and delete that, and that is dynamodb.

okay, so now i want to show you rds, the relational database service. so go to the top here, type in rds, and we'll make our way over there. rds is great because it allows us to launch relational databases. sometimes the ui is slow; i'm not sure why it's taking so long to load today, but every day is a bit different. and so what we're going to do is go ahead and create a new database. you're going to notice that we have the option between standard create and easy create; i stick with standard, just because i don't like how easy create hides a lot of stuff from us. even here, it says two cents per hour, but it's not giving us the full cost, so i really don't trust it, because if you go down here and you chose their dev/test, look, it's like a hundred dollars. it's not showing the cost preview right now, maybe because we didn't choose the database type. sorry, i wanted to choose postgres, but before we do that, let's look at the engine types: we have amazon aurora, where we choose between mysql and postgres, then mysql, mariadb, postgres, oracle, and microsoft sql server. notice that microsoft sql server comes with a license included, so you don't have to do anything with that; it might change based on the edition here, nope, it comes with a license for all of them, which is great. if you want to bring your own license, that's where you need a dedicated host running microsoft sql server, right. for oracle, you have to bring your own license, and that's going to be based on importing it with aws license manager. but we'll go over to postgres, which is what i like to use. we're going to set it to dev/test to try to get the cheapest cost. scroll down, look, 118 dollars; we can get it cheaper, we can get it super cheap. so here the password is going to be testing one two three, capital on the t, with an exclamation mark on the end, okay, because it has a bunch of requirements for what it wants here. i want a t2 micro, so i'm just going to scroll down here. what is going on here? standard, oh look, m classes. i don't want an m class, i want a burstable class, those are the cheap ones. and so we go here: can we still do a t2 micro, or is it now t3? i don't see t2, so i imagine a t3 micro must be the new one. the t2 micro was free tier, so let's check the free tier page: if i go to databases, rds on the t2 micro is 750 hours, but i can't select it, so i'm going to assume that the t3 micro must be the new free tier if the t2 isn't there, right. let's just say include previous generations, and then maybe i can see it. okay, so i don't see it there. i really don't like how they've changed this on me, okay. so the oldest i can choose is a t3 micro, which is fine; i just know t2 as being the free tier, that's all. this is fine. we don't want auto scaling turned on for our example here. we do not want multi-az, so do not create a standby; that's going to really jump up our cost. we don't need public access. it will create a vpc, that's fine. password authentication is fine. we have to go in here, which i don't know why they just don't keep expanded, because you always have to come in here, and name your database, so my database. we choose our postgres version here. i'm going to turn backups off, because if we don't, it's going to take forever to launch this thing. encryption is turned on; you can turn it off, but generally it's not recommended. we can have performance insights turned on; i will leave the retention at seven days, because we can't turn that any lower. we don't need enhanced monitoring, so i'm just going to turn that off, and that's fine. we're not going to enable delete protection here, and so we are good: we can now go ahead and create our database, and what we'll do here is wait for that database to be created. so the thing is, if we were doing the solutions architect or the developer associate stuff, i'd actually show you how to connect to the database; it's not that hard to do, you just have to grab all the database information (it's going to have an endpoint, a port, stuff like that), and you'd use something like tableplus to connect to the database. but that's out of scope for the certified cloud practitioner; i'm just going through the motions to show you that you can create an rds database very easily, but not how to connect to it and actually utilize it, okay. and so that would spin up, and we would have a server, and after that we can just go ahead and delete the server here, so just say delete me, okay, and that's all there really is to it. there are special types of databases, like aurora, which doesn't have its own console page; it's part of rds, so if you want to spin up aurora, you just choose the compatibility you want, and you can choose between provisioned or serverless. the serverless one is supposed to be really good for scaling to zero cost, so that's something there. so you'd fill that all out, but the initial cost is a lot more expensive; you can't choose a t2 micro here, unless it lets you now, and for provisioned, a t3 medium is the smallest you can go, okay. so if you've reached the point where you're using a medium-sized database, then you might consider moving over to aurora, just because it's going to be highly scalable, et cetera, so that's a consideration there. there's also something called babelfish, which was announced last year as of when i'm shooting this, and the idea is to make aurora postgresql compatible with microsoft sql server, so you can migrate over to aurora postgresql, which is kind of interesting. but that's about it. so our database is destroying, i think; it is just going to go back over here
to RDS. It's taken a long time to load today, and I think it's already deleted — maybe we go to Databases here. It's deleting, so I'm confident it's going to delete. There we go.

All right, let's take a look at Redshift. Redshift is a data warehouse, and it's generally really expensive, so it's not something you're going to want to launch day to day, but let's see how far we can get just by running through it. What we'll do is go ahead and create a cluster — and again, you can just watch me do this, you don't have to create one yourself. Free trial, configure for learning — that sounds good to me. It's free for a limited time if your organization has never created a cluster; well, I rarely ever create these. "When the trial ends, delete your cluster to avoid on-demand charges" — okay, that sounds fair. Here it's going to launch a couple of nodes — a dc2.large — so let's look up the pricing. I think it's loading right here... okay, I don't know how much it is, but I know it is not cheap. Down below we have "sample data is loaded into your Redshift cluster" — that sounds good to me. TICKIT is the sample data, so let's search "tickit sample data redshift" — I imagine they probably have a tutorial for it, and they do, right here — because I want to know what we need to do to query it, if we can even query it via the interface here. The admin user is awsuser, and the password is going to be capital T, testing, one two three four five six, exclamation mark, and we'll hit Create cluster. Oh cool, we can query the data right in here — that's what I wasn't sure about, whether we'd be able to just query it inline, because before you'd have to use Java with a JDBC or ODBC driver and download the JAR, and it's not as fun as it sounds. But it looks like we can query the data once it's loaded, so that looks really good. I guess we can also pull data in from the Marketplace, so that looks pretty nice too, and I guess we could probably integrate it into other things like QuickSight, because you'd probably want to chart your data over there. Again, I usually don't spend a lot of time in Redshift, but it looks like it's a lot easier to use now — I'm very impressed with this. I don't know how long it takes to launch a Redshift cluster — I mean, it is 160 gigabytes of storage; even at the smallest it's pretty large — so what I'm going to do is stop the video, and I'll be back when this is done.

Okay, so after a short little wait — it was a lot faster than I was expecting — it's available, and it says to query the sample data, use Query Editor v2. So I'm going to click that; I'm sure there are tons of ways to get here. It'd be great if it just populated the query for me — it doesn't — but this looks really nice, a really nice UI. I wonder if it has some existing queries... no, that's okay. So what I'm going to do is pull out the query from the tutorial and see if we can get it to work here. (Never found out what those prices were, though.) We'll hit Run — I like how there's a limit of 100 here — and see what data we get. "Relation sales does not exist" — okay, so what's going on here? Most of the examples in the Redshift documentation use a sample database called TICKIT; the small database consists of seven tables, and you can load the TICKIT data set by following the steps here. Okay, so: load the sample data from Amazon S3. I would have thought it already had the data in there — I could have sworn it would. Dev, public, tables: zero tables. So I don't think there's any data in here, and we're going to have to load it ourselves. I really thought it would have added it for us. Let's go ahead and create these tables and see if this is as easy as we think. So run that, create that table — cool, okay, we got it
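Before going further, it can help to see the shape of the kind of query the TICKIT tutorial runs: a join-and-aggregate over the sales and users tables. Here's a minimal stdlib sqlite3 sketch of that same pattern — the table and column names mirror the TICKIT sample schema, but the rows are made up, and this is obviously sqlite standing in for Redshift, not Redshift itself:

```python
import sqlite3

# Minimal in-memory sketch of the TICKIT-style query: total tickets
# sold per buyer, joined to the users table. Rows are invented.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE users (userid INTEGER, firstname TEXT);
    CREATE TABLE sales (buyerid INTEGER, qtysold INTEGER);
    INSERT INTO users VALUES (1, 'Ada'), (2, 'Grace');
    INSERT INTO sales VALUES (1, 2), (1, 3), (2, 1);
""")
rows = db.execute("""
    SELECT u.firstname, SUM(s.qtysold) AS total
    FROM sales s JOIN users u ON s.buyerid = u.userid
    GROUP BY u.firstname ORDER BY total DESC
""").fetchall()
print(rows)  # [('Ada', 5), ('Grace', 1)]
```

The real tutorial loads the seven TICKIT tables from S3 with the COPY command, but the SQL you run afterwards has this same join/group/order shape.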
down here. We'll run each one at a time — I think there are seven of them. "Date already exists"... okay, that's fine. "Event already exists" — it's saying all these tables exist. Maybe I just wasn't patient. Hmm, interesting. All right, so maybe we'll go back and run that query — maybe we just had to wait a little while for that data to load. Run. Okay, so you know what, I think it was doing this for us all along. If it had not created them for us, we would have had to go through all these steps — which is fine, because we're learning a little bit about Redshift — but it looks like we just had to wait. So it looks like you would run those CREATE TABLE statements, then use the COPY command to bring the data over, and it looks like you can do all of that via this interface. And we've done a query, so that's kind of cool. I imagine you could probably save it or export it — what if we chart it, what happens? Okay, you can chart it, that's kind of fun. Can we export it out? We can save it. I thought maybe it could export out to QuickSight, but I suppose you'd rebuild it in QuickSight. But yeah, I guess that's it right there, and that's pretty darn simple. So what I'm going to do is make my way back over to Redshift, because we are done with this example, and we'll go over to Clusters, and I'm going to go ahead and delete my cluster. Delete — create final snapshot? Nope — delete, delete the cluster. There we go. I'm pretty sure that will succeed no problem, and we are done with Redshift. And Redshift is super expensive, so just make sure that thing deletes, okay?

Hey, this is Andrew Brown from ExamPro, and we are taking a look here at cloud native networking services. I have this architectural diagram I created which has a lot of networking components — when people create networking diagrams for AWS they don't always include all these things, even though they're there, so we're just being a little bit verbose. Okay, the first thing is our VPC, our virtual private
cloud. This is a logically isolated section of the AWS cloud where you can launch AWS resources — that's where your resources are going to reside. Not all services require you to select a VPC, because they're managed by AWS, but I wouldn't be surprised if under the hood they are in their own VPC. Then, if you want the internet to reach your services, you're going to need an internet gateway. Then you need to figure out a way to route things to your various subnets, and that's where route tables come in. Then we need to define the region it's going to be in, which is the geographical location of your network. Then you have your availability zones, which are basically your data centers, where your resources are going to reside. Then you have subnets, which are a logical partition of an IP network into multiple smaller network segments — and these pretty much map to your availability zones, if you're making one per AZ. Then we have NACLs, which act as a firewall at the subnet level, and then we have security groups, which act as a firewall at the instance level. So hopefully that gives you a good overview, okay?

All right, so now let's take a look at enterprise or hybrid networking. We have our on-premise environment — your private cloud — and then we have our AWS account — our public cloud — and there are a couple of services we can use to bridge them together. The first is AWS Virtual Private Network (VPN): a secure connection between on-premise, remote offices, and mobile employees. Then you have Direct Connect: a dedicated gigabit connection from an on-premise data center to AWS, so a very fast connection. A lot of times we say Direct Connect is a private connection, but that doesn't necessarily mean secure — it's not encrypting the data in transit — so very commonly these services are used together, not just singularly. And then we have PrivateLink, and this is where you're already using AWS, but you want to keep everything within AWS, never going out to the internet. These are generally called VPC interface endpoints, and then the marketing pages call them PrivateLink, which is a bit confusing, but it just keeps traffic within the AWS network so it does not traverse out to the internet, okay?

Hey, this is Andrew Brown from ExamPro, and we are taking a look at VPCs and subnets. A VPC is a logically isolated section of the AWS network where you launch your AWS resources, and you choose a range of IPs using a CIDR range. A CIDR range is an IP address followed by a netmask — a subnet mask — and that's going to determine how many IP addresses there are; there's a bunch of math behind that which we're not going to get into. Anyway, here is an architectural diagram just showing a VPC with a couple of subnets. A subnet is a logical partition of an IP network into multiple smaller network segments, so you're essentially breaking the IP range of your VPC up into smaller networks — just think about cutting up a pie. Subnets need to have a smaller CIDR range than the VPC, to represent their portion of it. So a /24 is actually smaller than a /16, which is interesting — the higher the number gets, the smaller the network gets — and a /24 would allocate 256 IP addresses, which is well smaller than a /16. We have the concept of a public subnet — one that can reach the internet — and a private subnet — one that cannot reach the internet — and these are not strictly enforced by AWS. The idea is that on a subnet you can say "don't assign publicly routable IP addresses by default", but it's totally possible to launch an EC2 instance into your private subnet and then turn on a public IP address, so you've got to do other things to ensure that they stay private or public, okay?

Hey, it's Andrew Brown from ExamPro, and we are comparing security groups versus NACLs. I have this nice architectural diagram that has both NACLs and security groups in it, and
we'll just kind of talk about these two. NACLs — network access control lists — act as a virtual firewall at the subnet level, and here you can create both allow and deny rules. This is really useful if you want to block a specific IP address known for abuse, and I'm going to compare that against security groups, because that's going to be a very important difference. Security groups act as a firewall at the instance level, and they implicitly deny all traffic, so you create only allow rules. You can allow an EC2 instance access on port 22 for SSH, but you cannot block a single IP address — and the reason I say that is because in order to block a single IP address with a security group, you would literally have to allow everything except that IP address, and that's just not feasible. If you can remember that one particular example, you'll always be able to remember the difference between these two. One other thing AWS likes to do is ask which one is stateless and which one is stateful, but at the Cloud Practitioner level they're not going to be asking you that, okay?

All right, let's learn a bit about networking with AWS. What I want you to do is go to the top and type in VPC, which stands for Virtual Private Cloud, and we'll set up our own VPC. It's not so important that you remember all the little details — the point is to get through this so you remember the major components. So I'll create a new VPC, call it "my vpc tutorial", and here I'm going to enter 10.0.0.0/16. If you're wondering why, go to cidr.xyz — this tells you the size of the network. If I put /16, you can see we have a lot of room; if we do /24, it's smaller — see, the empty blocks over here are basically the size of it. So we're going to have a lot of room with 10.0.0.0/16. We don't need
IPv6, so we're going to go ahead and create that. Once we have that, we can create a subnet, which we will need. We'll choose our VPC, go down here and name it "my subnet tutorial", and choose the first AZ — you can leave it blank and it'll choose one at random. Then we need to choose a CIDR block that is no bigger than the VPC's — /16 is the size we have now, so we could match that size, but 10.0.0.0/24 would be absolutely smaller. So go ahead and create that subnet, and that's all set up. Now let's see if our route table is hooked up. The route table says where traffic is routed, and it only says "local", so it's not going anywhere outside the VPC — that's because we need to attach an internet gateway, which allows us to reach the internet. So if we go over here and create a new internet gateway, we'll call it "my igw", create that, and then associate it with the VPC we created, okay? Now that we have the internet gateway attached, we want that subnet to make its way out to the internet. If we go to the route table, we can edit the route table association here — I like how it keeps showing me this as if I don't know what I'm doing, but I do — and that would change that particular association, but I want to add to the route table. I thought when I clicked that it would let me add more, but apparently I've got to go to Route Tables over here and look for the one that's ours — we can see it's over here. You could even name it if you wanted, like "my route table" — notice that when we apply names it's actually just applying a tag; see over here, that's all it is. So go over to Routes, Edit routes, Add route, and we want this to go to 0.0.0.0/0, and we're going to choose the internet gateway. We'll hit Save changes, and what that's going to allow us to do is reach the internet. And what I want to do now is go back to Subnets. I was
just curious about this — I've never used this before. It looks like we could just choose some options here; I'm not too concerned about that, but I assume it's used for debugging — Azure has had those kinds of services for a long time, and AWS has been starting to add them so you can easily debug your network, which is nice. So: we have a subnet, and the subnet can reach the internet, because there's an internet gateway and it's hooked up via the route table. One thing that matters is whether it will assign a public IP address, so that's something we might want to look into. It's not the default subnet, which is totally fine, but it says auto-assign is "No", so that's something we might want to change. It used to be part of the setup instructions — you'd just checkbox it — but they moved it. So modify auto-assign and we'll say Enable; that means it's always going to give instances a public IP address on launch. And while we're here, I'm just going to double-check whether I have any Elastic IPs I did not release — okay, just double-checking. So this is all set up, and we should be able to launch an EC2 instance now within our new VPC. I'll go over to EC2, okay, and launch a new instance — let's say Amazon Linux 2 — we're going to choose the free tier size here, and now we should be able to select our VPC, and that is our subnet there, okay? Go ahead and launch that — I don't care if we use a key whatsoever — so I'm going to go ahead and launch it. We'll go back, and there you go, it is launching. So we created our VPC and we launched an instance in it, no problem whatsoever — hopefully that is pretty darn clear. What I'm going to do is let that launch, because I want to show you security groups. Within AWS you can set security groups and NACLs, and those allow or deny access based on rules you define. When we launched this EC2 instance, it has
a default security group that was assigned — we could have created a new one — but what I might want to do is create myself a new security group here, okay? And you can end up with a lot of these really fast — like, here's a bunch, and I can't even tell what's what; this bunch is for load balancers and things like that — so I might just go ahead and delete a bunch of them, because I cannot tell what's going on here. We'll delete these security groups — and sometimes it won't let you delete them, because they're associated with something like a network interface. All right, but we need to find out which one we're using right now: the one we are using is launch-wizard-4, so we'll go in here. I don't know if you can rename them after they've been created — I don't think so, which is kind of frustrating, because if you want to rename it, well, I don't want that to be the name. What's interesting is you can go here and edit the rules — the inbound rules and the outbound rules. Here it's open on port 22, which allows us to SSH in. We could drop this down and choose different things — if we want people to access a website, we add port 80 and allow from anywhere, IPv4 and IPv6, so now anyone can access it. You might want to do something like give access to Postgres, which runs on port 5432, or maybe you need to connect to Redshift on its port — you can go ahead and save those rules. We're just going to say "from anywhere"; it can even say "My IP", so maybe only I'm allowed to connect to it, right? So you add inbound rules — you don't really ever have to touch outbound rules; they're set to all traffic, which is the stuff that's leaving. One interesting thing to note about security groups is that you don't have a deny option. So let's say there's one particular IP address you want to keep out — let's search "what's my IP"... so that is my IP address, and let's say I wanted to
block it. So I go here and say, okay, I want to block this address on all TCP — but I can't do that; all I can say is I allow this address. In order to do it, I would have to enter everything but this address in here — and you can enter ranges with the slash notation and so on — but you can imagine that would be really hard, because you'd basically have to enumerate every other IP range in the world to carve this one out, and that's almost impossible. That's the key thing I want you to remember about security groups. So that's security groups, and there are also NACLs. NACLs are associated with subnets, so they show up under VPC. I rarely touch NACLs — rarely ever have to — I mean, they're great tools, but for me I just don't ever need them. So, NACLs are associated with subnets: we can go here and find my subnet tutorial — when we created our subnet, we got a NACL for free — and we can set inbound and outbound rules. Here is where we could say, okay, I want to add a new rule, and I want to make the rule number 150 — you always do these in the hundreds, or powers of ten, so that you can move them around easily — and I can say: all traffic that comes from this IP address, with a /32 on the end (which means a single IP address), deny. And so now, from my address, I can't access that EC2 instance, okay? There's nothing running on the server, but if I were to try to reach it, I wouldn't be able to — and this applies to anything in that subnet; it's not for a particular instance, it's for anything in that subnet. So hopefully that is pretty clear, and that's pretty much all you really need to know. I mean, there's lots of other stuff — network firewalls, all these other things — it gets pretty complicated, well beyond what we need to learn here. But what we'll do now is tear down that EC2 instance, okay?
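To cement the allow/deny distinction, here's a hypothetical Python sketch using the stdlib `ipaddress` module — a toy model, not how AWS actually evaluates rules: a NACL walks numbered rules in ascending order and the first match wins, while a security group can only ever allow, with everything else implicitly denied.

```python
import ipaddress

def nacl_allows(rules, ip):
    """rules: (rule_number, cidr, action) tuples; lowest-numbered
    matching rule wins, like a real NACL's ordered evaluation."""
    addr = ipaddress.ip_address(ip)
    for _, cidr, action in sorted(rules):
        if addr in ipaddress.ip_network(cidr):
            return action == "allow"
    return False  # the implicit final '*' rule denies anything unmatched

def sg_allows(allow_cidrs, ip):
    """Security groups are allow-only: no deny rules exist, so traffic
    passes only if some allow rule matches."""
    addr = ipaddress.ip_address(ip)
    return any(addr in ipaddress.ip_network(c) for c in allow_cidrs)

# NACL: rule 150 denies one address (/32 = a single IP), rule 200 allows all
nacl = [(200, "0.0.0.0/0", "allow"), (150, "203.0.113.7/32", "deny")]
print(nacl_allows(nacl, "203.0.113.7"))   # False: denied by rule 150
print(nacl_allows(nacl, "198.51.100.1"))  # True: allowed by rule 200

# Security group: to "block" one IP you'd have to enumerate allow rules
# for every other range on earth — which is exactly why it isn't feasible
sg = ["198.51.100.0/24"]
print(sg_allows(sg, "203.0.113.7"))       # False, but only because it was never allowed
```

The IP addresses above are from the documentation-reserved ranges, so they're placeholders rather than anything real.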
We'll terminate that, and once that instance is destroyed we can get rid of our security group and a bunch of other stuff — and there's always a bunch of these darn things. It says "one security group associated", so we go here — this is the one we're using, but I want to get rid of all these other ones, okay? If I go here, it could be because of inbound rules — see this one — because you can reference another security group from within a security group. So I'm just going to save that there... see, "My IP" there... oops, it's set to NFS, so that might have been set up for our EFS access point. I could just delete it — that would probably be easier. Okay, so that one's kind of a pain, so I'm just looking for rules that might be referencing other security groups, to get rid of them. Let's try this again — we'll go ahead and delete. I'm leaving the defaults alone, because those come with your VPCs and you don't want to get rid of those. It won't let me delete this one, so I'm going to edit that rule, delete it, save it. You might not have this kind of cleanup to do — it might just be me here. Outbound, inbound... let's try this again: delete. And I'll open this one up — it must be this one that's referencing the other one — I'm just going to delete the rule. This is something that's just kind of frustrating with AWS, but it's how it is: sometimes it's hard to get rid of resources, because you have to click through stuff, so it's not always clean — you might have lingering resources. This isn't going to cost us anything, but it just makes it harder to see what you're doing, you know? This last one really doesn't want to go away, so I'm just trying to delete all the rules out of it to get rid of it. Can I delete this one now? "One group associated" — it will not show me what it's talking about. Okay, here it is... ah, okay, this is referencing it — it was an old one, I don't know what this is. We'll
go down here and delete that. And now that I've cleaned all these up, we can go over to our instance and make sure that it's terminated — it is, good, because if our instance is not terminated, we cannot destroy the VPC. Previously, the VPC could not be destroyed unless you detached the internet gateway — I wonder if it's still going to complain about that. We'll say yes — it actually looks like it includes that in the cleanup now. Type "delete" here... there we go. So we're all good, we're all cleaned up there.

Hey, this is Andrew Brown from ExamPro, and in this video I just want to show you CloudFront. So let's make our way over to CloudFront. CloudFront is a content delivery network, and it's used to cache your data all over the place — as you can see, I have some older distributions here. If you get a splash screen, look on the left-hand side: there might be a hamburger menu; open that up and click on Distributions. What we're going to do is create a new distribution — though if you don't want to create one, because these take forever to create, you can just watch along. I don't even feel like I'm going to hit the Create distribution button, because I just hate waiting that long, but the idea is that you have to choose an origin. The origin could be something like an S3 bucket, a load balancer, or MediaStore — this is where the content delivery network is going to source its content from. So if I pick this bucket here, it'll probably default to the root path, and the idea is that CloudFront will be able to pull content from there and cache it everywhere. Down below you can set the protocol behavior — redirect to HTTPS — and you can set up caching rules, like how often you want it to cache: cache a lot, don't cache a lot. And the great thing is you have these Lambda@Edge functions, so you can read and modify the requests and responses going through the CDN, which is very powerful. But what I'm
going to do is just look at what we already have, because, again, they take forever to spin up and we're not going to see too much if we create one. So once it's spun up, this is what it looks like: you'll have an origin that says where it's pointing to — you can create multiple origins and group them — and you can modify your behaviors, which is basically what we were looking at before; as you can see, we have our behavior there, nothing super exciting. We can set up error pages, and you can restrict based on geographical location. So if, for whatever reason, you're not allowed to serve content in the UK, you could exclude that geographical region — you have an allow list or a block list. Say you just don't want to deal with GDPR for whatever reason — Britain... England... it's the United Kingdom, there we go — you just say, okay, forget the United Kingdom, I don't have to do GDPR now. For invalidations, the idea is that it is a cache, so things can get stale or just persist, and here you can type in, say, "image.jpg", create that invalidation, and it will go delete it out of the cache, so the next time someone requests it, they'll get the fresh content. This usually doesn't take that long. But that's pretty much CloudFront in a nutshell, okay?

Hey, this is Andrew Brown from ExamPro, and we are taking a look at EC2, also known as Elastic Compute Cloud. This is a highly configurable virtual server — also known as a virtual machine, which is what we're generally going to call it. EC2 is resizable compute capacity; it takes minutes to launch new instances, and anything and everything on AWS uses EC2 instances underneath — that's why we generally call it the backbone of all the AWS services. You're just going to have to choose a few options
here. The first thing you'll need to do is choose your OS via your Amazon Machine Image — that's where you get Red Hat, Ubuntu, Windows, Amazon Linux, SUSE — and it might also come with pre-installed libraries and things like that. Then you choose your instance type, and that's going to determine things like your vCPUs and your memory — here you can see how many there are, you'll have a monthly cost, and that's the name of the instance type. Then you have to add storage: very commonly you're attaching Elastic Block Store or Elastic File System, and if you do choose EBS, you're going to have to determine what type it is — a solid state drive, a hard disk drive, virtual magnetic tape — or even attach multiple volumes, not just a single one. And the last thing is configuring your instance: security groups, key pairs, user data, IAM roles, placement groups, all sorts of things. We'll get experience with that, because we'll show you how to launch an EC2 instance, and it'll make a lot of sense then if it does not make sense right now, okay?

All right, let's take a look here at EC2 instance families. So what are instance families? Instance families are different combinations of CPU, memory, storage, and networking capacity, and they allow you to choose the appropriate combination of capacity to meet your application's unique requirements. Different instance families differ because of the varying hardware used to give them their unique properties — and we do talk later about capacity reservations, where AWS can actually run out of a particular instance family because they just don't have enough hardware in that data center, so you have to reserve it. But let's go through the different types of instance families. The first is general purpose, and these are the names of the different families — a very popular one is the t2, and one that's really interesting is the Mac, which
actually allows you to run a macOS server. General purpose gives a great balance of compute, memory, and network resources, so you're going to be using these most of the time — the use cases here would be web servers, code repositories, things like that. Then you have compute optimized — they all start with C, no surprise there — and they're ideal for compute-bound applications that benefit from high-performance processors; the use cases here are scientific modeling, dedicated gaming servers, ad serving engines, things like that. Then you have memory optimized — there's a variety here — which give fast performance for workloads that process large data sets in memory; they're great for in-memory caches, in-memory databases, and real-time big data analytics. Then you have accelerated computing — your p2, p3, p4, things like that — which use hardware accelerators, or co-processors; these are great for machine learning, computational finance, seismic analysis, speech recognition. If you're doing ML on AWS, you'll start coming across these types — AWS technically has a separate page on SageMaker ML machines, but they're all pulling from these instance families, okay? Then we have storage optimized — i3, i3en, things like that — which give high sequential read and write access to very large data sets on local storage; the use cases here would be NoSQL, in-memory, or transactional databases, and data warehousing. For the Certified Cloud Practitioner, you just need to generally know these five categories, not the names of the instance families; if you're doing the associate level or above, you definitely want to know these in a bit more detail. And I want to say that instance families are commonly called instance types, but an instance type is a combination of size and family. Even AWS documentation doesn't make this family distinction clear — but Azure and GCP make it very clear, so I'm bringing that language over here to normalize it for you, okay? Let's
take a look at what EC2 instance types are. An instance type is a particular instance size plus an instance family, and a common pattern for instance sizes you'll see is: nano, micro, small, medium, large, xlarge, 2xlarge, 4xlarge, 8xlarge. Generally they go up in powers of two, but sometimes it'll be 12, 14, or 16, where it's just an even number. When you go to launch your EC2 instance, you're going to have to choose that instance type, and here you can see our t2.micro, and then we have the small, the medium, the large, the xlarge, okay? But there are exceptions to this pattern for sizes — there's one called .metal, which indicates a bare metal machine, and sometimes you get these oddball ones like 9xlarge — so the rule of powers of two or even numbers is not always the case, but generally it holds at the start of the range, okay? Just talking about instance sizes: EC2 instance sizes generally double in price and attributes. Bringing these numbers up a little closer, starting at the small here, you're going to notice one, two — maybe it doesn't quite double there — but then four, and here we see twelve, twenty-four — almost doubles there, almost doubles there. And I want to show you that the price generally almost doubles too: 16, 33, 67, 135. So a lot of times you have the option to say: do I want to go to the next instance size up, or have an additional instance of the same size? Sometimes it's a better approach to get an additional instance, because then you can distribute across another AZ — but then you're also adding additional capacity. So there you go.

So we talked about dedicated instances and hosts a little bit, but let's make that distinction very clear. Dedicated hosts are single-tenant EC2 servers designed to let you bring your own license — BYOL — based on machine characteristics, and we'll compare the dedicated
instance to the dedicated host across: isolation, billing, visibility of physical characteristics, affinity between a host and instance, targeted instance placement, automatic instance placement, and adding capacity using an allocation request. For isolation, a dedicated instance gives you instance isolation — you can have other customers on the same physical machine, but there's virtualization between you, and that's guaranteed; a dedicated host gives you physical server isolation, so you get the whole server. For billing, a dedicated instance is per-instance billing, with an additional fee of $2 per hour per region; a dedicated host is per-host billing, so it's a lot more expensive, but you get the whole machine. For visibility of physical characteristics, you don't get any of that information with a dedicated instance; with a dedicated host you do — sockets, cores, host ID — and this is really important when you bring your own license and it's licensed for x amount of cores or x amount of sockets. Then we have affinity: there's no affinity for a dedicated instance, but with a dedicated host you'll have consistent deploys to the same physical server. There's no control of targeted instance placement for a dedicated instance; you do have control with a dedicated host. Automatic instance placement you have for both. And adding capacity using an allocation request is a no for dedicated instances and a yes for dedicated hosts. So I want to come back to the main point, which is what's highlighted here: on a dedicated host you have visibility of sockets, cores, and host ID, and this is really, really important when you're bringing your own license — BYOL — such as Microsoft SQL Server, where you have to specify the number of cores and things like that, okay?

So we've been talking about tenancy, and I just want to make very clear the difference between the different
levels of tenancy on AWS — we have three, okay? We've got dedicated hosts: your server lives here and you have control of the physical attributes — basically the whole server. Then we have dedicated instances: your instance is on the same physical machine as other customers, but the slot that your dedicated instance occupies will always be the same. And then we have the default: your instance will live somewhere on a server, and when you reboot, it may be somewhere else — there's no guarantee it's going to be in the same place every single time, okay?

Hey, this is Andrew Brown from ExamPro, and in this follow-along we're going to be looking at EC2 and also services that are adjacent to it — auto scaling groups, load balancers, Elastic IPs, things like that — so we fully understand EC2. You don't have to know tons for the exam, but you should go through the motions of this with me so you can cement that knowledge for some of the deeper concepts, like working with key pairs. So let's make our way over to the EC2 console and learn what we can learn. Generally, when you go to the EC2 console, it brings you to the dashboard, and the idea is that on the left-hand side we can make our way over to Instances, okay, and this is where we can launch our first instance. So we go here and launch our instance. The first thing we're presented with is choosing our AMI, or Amazon Machine Image, and that's a template that contains the software configuration — the operating system, applications, and other binaries that would be installed on that OS by default, all right? We have a variety we can choose from in the quick start, and generally the ones you'll see first are the ones AWS supports — there are AMIs, or operating systems, that AWS will support when you contact them, and then there are ones outside that, where
they'll still help you with, but they might not have the knowledge on, so just understand that if you pick from these core ones you're going to be in good shape. the most popular is amazon linux 2, because it's part of the free tier and it is very minimal and well hardened by aws, so it's a very good choice there. but you can see you can install a bunch of things, so like if you want to launch a macos server you can absolutely do that; red hat, suse, ubuntu, a windows server, you name it, they have it. if you wanted something farther out there, you can go to the marketplace and subscribe to one that is managed by a company, and basically everything under the sun exists here, or you can get a community ami, so these are ones that are contributed by the community for free. but we're going to go back to quickstart here, and what i want you to notice is that there is this ami id; that's how we can uniquely identify what we're using. if we were to change region, even with the same amazon linux 2 image, this id will change, so just understand that it is region based. and it comes in a 64-bit x86 variant and an arm variant, and so we're going to be using x86 here; you can notice here you can change it on the right hand side, but we're going to stick with x86, and i'm going to go ahead and hit next. so now we're going to choose our instance type, and this is going to greatly decide how much we're going to be spending, because the larger it is the more we're going to spend. so see this t2 micro: if we wander into the pricing for that, we go to ec2 pricing on aws, and once we get to ec2 pricing we want to go to on demand, and from here this will load, and so down below we can go find our price; it should show us the list. ah, here it is, okay. so i can see a t2 micro, and we can see the on demand price is this. so it seems really cheap, but what you've got to do is do the math: if you multiply by 730, that's how many hours there are in a month. if we launch a t2 micro, and let's say we didn't have the free
tier (you do if you first made your account; you're going to have 750 hours for free on the free tier), but if you didn't, it would only cost you eight dollars and 46 cents usd okay. so just be aware of that: if you ever need to figure something out, go there, copy the price, and do the math, times 730, it's pretty easy. so here we have a t2 micro in the t2 family. it's going to have one vcpu, and notice it has a v for virtual, so there could be more than a single cpu on the underlying hardware, but we're only going to have access to one virtual cpu. we have one gigabyte of memory, and it's low to moderate network performance, so that's a factor that can change if you need like gigabit stuff, really fast connections for on-prem hybrid connections, and you have specialized instance types for that, but for this this is fine; the t2 micro is great. if you want, you can also search this way to see all the instance families and things like that, and you can filter for current generation or all generations, so this is fine okay. so from there we're going to go to configure our instance details. you can say let's launch multiples of these instances, let's turn on spot to save money and try to bid for a particular price. we can change our vpc, and it's going to default to the default vpc; if you pick no subnet, it's just going to pick one at random here, which is fine. then there's whether to auto assign a public ip address: if you do not have a public ip address you cannot reach the internet, so generally you want this to be enabled. it's dependent on the subnet whether it will default to enabled, but it doesn't matter if you have an ec2 instance in a private or public subnet, you can always override this and give it a public ip address. you have placement groups, which allow you to place servers closely together, not something for the certified cloud practitioner. there's capacity reservations, so if you're worried about aws running out of capacity you can reserve capacity, so that's kind of interesting. domain join directory, this isn't something that
i've done much with, but i imagine it has something to do with active directory or something like that, joining a directory. then you need to have an iam role, and we absolutely do need an iam role here. so what i want you to do is create a new role. i'm going to close off these other tabs here, and we will go, wait a moment, create a new role here, and we want to do this for ec2, so we say ec2 is what we're creating the role for, and we'll hit next. and i don't know if i have a policy, but i don't need to make a new policy, i just want ssm, and the reason i want ssm is so that i can use session manager to log in, so we don't have to use key pairs. we will use key pairs, but if we didn't want to, that's what we could do. and this used to be the old policy; it'll tell you hey, go use this new one here, so i just want to make sure i know which one it is, and so we'll just checkbox that on and we'll hit next. we can add tags right here, well actually we don't need to add any tags here, so that's fine, we'll hit next, and then i'll just say my ssm ec2 role okay, and we'll create that role. and now that we have created that role, we can go back to our first tab here, give this a refresh, drop down, and it should show up here. if we go down here a little bit we could turn on extra monitoring; there is monitoring built in, but if you wanted to monitor more frequently you could do that as well. we want shared tenancy, right; this is where you change to dedicated instance or dedicated host, and obviously these cost more, but we're going to stick with shared. elastic inference, so this is for attaching a fractional gpu, great for ml, not something that we want. there's credit specification, i don't remember seeing this before; selecting unlimited for credit specification allows it to burst beyond the baseline, so it's for bursting. here you can attach an efs, so if you need a file system that you want to mount or
attach. then there's the enclave option, so nitro enclaves enable you to create isolated compute environments to further protect and securely process your highly sensitive data, so it might be something you might want to checkbox on based on your use case. and then down below we have the ability to enter our user data, and this is something we want to do, because we want to install apache so that we have something to work with here. so what i'm going to do is make a shebang, that is a pound and an exclamation mark (i know that's really small, so i'll try to bump up my font here so you can see what i'm doing), and we're going to do a forward slash bin and a forward slash bash. on the next line here we're going to do yum install hyphen y httpd, and that's going to install apache. why it's not called apache i don't know, but they call it httpd; there's no apache in the name there. and so we'll say systemctl start httpd, systemctl enable httpd, so we're saying start up apache and then make sure that it stays running if we restart our machine. very simple. so from there we will go to our storage, we'll add our storage, and this is 8 gigabytes by default. we could turn that up to 30 if we like, so you can go all the way up to 30, and you might want to do that, but i'm going to leave it at 8.
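the user data we just dictated can be written out as a small script. here's a minimal sketch, assuming amazon linux 2 (which is why the package is called httpd); it just saves the script to a local file and syntax-checks it, so you can paste the contents straight into the user data box in the console:

```shell
# write the user-data script described above to a local file so it can be
# pasted into the console (or passed with --user-data via the aws cli)
cat > user-data.sh <<'EOF'
#!/bin/bash
# install apache (packaged as httpd on amazon linux 2)
yum install -y httpd
# start apache now and enable it so it survives a reboot
systemctl start httpd
systemctl enable httpd
EOF

# sanity-check the script's syntax without executing it
bash -n user-data.sh && echo "user-data OK"
```

the `bash -n` check only parses the script; the actual install runs as root on the instance at first boot, which is why no sudo is needed inside user data.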
we could change our volume type; i'm fine with gp2 because that's very cost effective. and if we want to turn on encryption, and you should always turn on encryption, there's no reason not to, so we'll turn that on. it's not like it's going to cost you more, it's going to be the same cost, it's just your choice there. if we want to add a tag, yes, we're going to add a name, and we're going to say my ec2 instance okay, and so that's going to give us a name, which is something we would really like to have. then we have a security group. i'm going to just create a new security group called my ec2 sg here, so we'll say my ec2 sg. something you cannot do is rename a security group once you've made it, so make sure you don't make a spelling mistake up here. and we want to be accessing over http, because it's going to launch a website, so in order to do that we need to make sure we have http as the type with port 80 open, and we want it from anywhere, so we'll say anywhere, and that will be 0.0.0.0/0 for ipv4 and ::/0 for ipv6 okay, so we'll just say internet. and this one is for ssh, right, and for this i would probably suggest saying my ip, but since we might be using cloud shell to do that, we're going to leave it as anywhere so that we don't have any issues connecting. so from here we'll review and launch, and you can review what's going on here. it's going to say here, hey, you have an open port; that's okay, we want the internet to see our website, because that's the whole point there. and we'll go ahead and launch it. it's going to ask for a key pair. we could go down and say proceed without a key pair, but what i'm going to do is create a new key pair, because i want to show you how those work, and i'm sure we've already done this in the course once, but we'll do it again. so i'm going to just name this my ec2 instance here, and then we're going to go download that key pair; it's going to download a pem file there. and so now we can go ahead and launch that instance
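a quick sketch of what we'll do with that pem file in a moment: ssh refuses private keys that anyone other than the owner can read, so the downloaded file has to be locked down to owner-read-only first. here a placeholder file stands in for the real key (the filename just mirrors what we named the key pair, it's not the actual download):

```shell
# placeholder standing in for the downloaded key pair file
touch my-ec2-instance.pem

# owner-read-only; ssh rejects private keys with looser permissions
chmod 400 my-ec2-instance.pem

# the mode string should show a single r and nothing else: -r--------
ls -l my-ec2-instance.pem | cut -c1-10
```

mode 400 means readable by the owner only, not writable or executable, which is exactly the "only one r" we'll look for with ls later on.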
and while that is launching (i'm going to just close this other tab here), we're going to click on view instances, and so here is that instance; that's why we put the tag, so we can have a name there. we're going to wait for that to start, but as that's going, i'm going to make a new tab by just right clicking here on the logo, and once we do that we'll click on cloudshell. and as that is going, what i want to do is take this pem down below; i'm going to move it to my desktop to make it easier for me to upload, and i'm doing this off screen okay. and once this environment's running i'm going to go ahead and upload that, so we'll just give it a moment to do that. we're also waiting for the server to spin up. as you'll notice, there is a public ip address here and it says it's running, so if we want we can copy it, but we're looking for those two checks to pass. the server could be available, but generally you want to wait for those two status checks, because one says hey, the hardware is fine, the network's fine, things like that okay. but if i take that ip address and paste it up here, we have the web page, so that is working, no problem there, so that's great. and we'll go over to cloudshell, and that is still starting; it's not the fastest, but that's just how it is, and we'll get going here in a second as soon as this decides to load. there we go, it's loaded. i can type clear here just to clear that screen out, and so what i want to do is upload that pem file. so i'm going to go and upload that file, we're going to go ahead and select it, i'm going to go to my desktop here, whoops, my desktop, and we are going to choose my ec2 instance pem, all right, and from there we'll hit upload, and that's going to upload that pem file. once that is uploaded we're going to do ls okay, and so this one is from a previous tutorial, so i'm going to go ahead and just delete that other one there; we'll say rm efs-example.pem, yes okay. we'll type clear, and then what we
can do here is type in chmod, and i believe it's 400, and what do we call this, my ec2 instance pem; if you hit tab it will auto complete, which is nice. and if you do ls hyphen la we can take a look at that file, and it should look like this: it should have only one r here. so the idea is you're locking it down so it's not writable or executable, it's just readable, because that's what you have to have if you want to ssh. and so if we want to ssh, what we'll do is hit the connect button here, and we have four options (they just give you too many options; there's going to be a fifth one for sure soon), but right now we're talking about ssh. so for ssh, we had to chmod our file, which we did, and then we need to use this dns name to connect to it, and so this is the full line here. if you click on this, copy that over and paste it in, that should be everything, and notice we're doing ec2 user followed by this. you could put the ip address in here instead if you preferred, so if you were over here you could go and take that ip address, which is i think shorter and nicer, but if you just click that one button it works, that's fine. you always have to accept the fingerprint, and then you'll be inside the instance. you can type whoami to see which user you are; you're the ec2 user, that's the user that aws creates for their amazon linux instances. it's going to vary per ami, so not all amis have an ec2 user, it might be something else, but that's generally the one that aws uses for their supported amis. and so if we do an ls again, we're in the server right now, we can tell because it says so right here, or if we do a pwd we can kind of just look around. so i think it's going to be at /var/www, that's where httpd or apache always puts its files. so i go in here, whoops, i'm just looking for the index file; i thought the index file was in /var/www/html. hmm, well where the heck is it? so i'm going to just touch a file here and see if it overrides it. oh, i don't care, i'll just type sudo, and what we can
do is just try to restart this with systemctl (there's a very similar command that's like service, and i always forget the order of it, so i'm just checking), probably restart httpd. and so it says failed to restart httpd.service, authentication required. maybe sudo? there we go. and so if we go back here, i'm going to see if it changed, because it will take whatever is in the index.html file; if there's no file there, it's going to show the default test page there. and so what i can do is edit this file. i'm going to type vi index.html and hit i for insert mode. oh, it says it's read only, so what we have to do is colon q to quit, oops, clear, ls. and so what we need to do is sudo vi index.html. and so in vim, every single key is a hot key okay; i'm not teaching vim here, but i'm going to teach you the basics. the idea is that when you're here, notice that the cursor is blinking; when i hit i it enters insert mode, and now i can type normally, so i'll say hello cloud okay. and i'm going to hit escape to go back to normal mode, navigation mode, whatever you want to call it, i'm going to hit colon so it brings up the command line, i'm going to type in wq, write and quit okay, and hit enter. and so i'll type clear, and we'll hit up until we get that command, sudo systemctl restart httpd, we'll hit enter okay, and it should restart pretty fast. there it is, it says hello cloud; i probably didn't even have to restart it to do that, but anyway, so now you can see how we're updating that instance. so what i want to do is just do a sanity check and make sure that if we restart this instance, apache will still be running. that's something you should always do if you have an app or anything you install: restart your server and make sure that everything works. so what i'm going to do is just hit exit here so we go back to the top level cloudshell, type clear, and i'm going to go back over to my ec2 instance. i'm going to have to click around to
find it here, and what i want to do is reboot it okay. and if i reboot the machine, the ip address is going to stay the same, and the reboot's going to happen really fast. if we want to observe that reboot, we could go over here on the right hand side to the system log, and it would show us that it had rebooted, i think; yeah, it does cloud-init there, so i think it rebooted, not sure. but anyway, if it's rebooted, then we can go ahead and connect and make sure everything's fine. so let's just go here and hit enter, and let's see what the webpage is here. notice that it's hanging, right, so it's probably because it's still restarting even though it doesn't look like it is, and that's something that you have to understand about the cloud: you have to think about what you're doing and have confidence that it is happening, and also just double check it. that's something that can be kind of frustrating, because these are globally available services, they're massively scalable, and so one of the trade-offs is that you don't always have the most responsive uis. aws has one of the most responsive uis out of all the major providers, but even still, sometimes i have to second-guess myself. but the page right now is not working... now it is, so it's fine; it just took time for that to reboot. and so what i want to do is connect a different way, so we're going to go here, we're going to checkbox that on, we're going to hit connect, and instead of using the ssh client we're just going to go to session manager and hit connect. and this is the preferred way of connecting, because you don't have to have this ssh key, and that's a lot more secure, because if someone has that key, you know, you hand it to someone, they could hand it to somebody else, and then you have a big problem on your hands. so here this looks very similar, but if you type whoami it actually logs you in as
the ssm user, which is kind of annoying, so i type in sudo su, i have to do this hyphen here, and then i'm going to say the user i want to be, which is ec2 user, and then if i type whoami, we are the correct user. you can't do anything as that ssm-user, so you've got to switch that over. and i can bump this up to make it a bit larger, so this is obviously not as nice as working over here or even in your own terminal, but it's a lot more secure and it's tracked and all these other things, so we really should be using it okay. and i really don't like having to bump this up, so i'll just go back to the default there; there's probably a way to configure that. but anyway, let's just go and take a look at our file. i'm going to type vi again, and we're going to do /var/www/html/index.html, and i could put sudo in front of there. and again, remember you have to hit i to go into insert mode, and what i'm going to do is just capitalize that hello cloud and give it an exclamation mark, colon wq, write quit. i'm going to go back here and refresh okay, so we don't have to restart our server, which is nice. all right, so that's that, that's pretty clear, so i'll hit terminate here, and i don't think we need cloudshell for anything, so i'm just going to close that. and so that's pretty much it when it comes to working with an ec2 instance, and so the next thing i want to show you is elastic ip okay. okay, so now i want to show you elastic ip, commonly abbreviated to eip, and all that is is just a static ip, an ip that does not change. because this ec2 instance here, notice that it's 54.163.4.104, and what would happen if we were to stop this instance, not reboot it but stop it, because for whatever reason we had to? if we were to stop this instance and we were to start it again (and we have to wait for it to stop), that ip address is going to change okay. so 54.163.4.104, hopefully we can observe that; i'm just going to write that down so we do not forget, so i
can prove to you that it does change. and it's still stopping here, so as that's stopping, we're just going to go ahead and get our elastic ip, and i will prove that as we go here. so i'm going to go over here, and what i want to do is reserve or allocate an elastic ip address, so i'm going to say us east 1, and it's going to say from the amazon pool of ipv4 addresses; so aws has a bunch of ip addresses they're holding on to, and you can just allocate one, and once you've allocated it, that's your ip address. so coming back to here, okay, this has stopped; notice there is no public ip address. we're going to start it again okay, and then we'll just checkbox it on, and we just have to wait a little while to see what the ip address is going to be. i'm going to tell you it's going to be something else. so if i go back here, this is 54.235.12.110, and our original one was 54.163.4.104. so the reason why it's important to have the same address is that if you have a domain pointing to your server and you stop and start it, then you have a dangling record, a path where route 53 is going to be pointing to nothing. aws does have things to mitigate that, like aliases and things like that, but in general there are cases where you just have to have a static ip address. and so we had allocated one over here, and if we want to assign it, we're going to associate that elastic ip address; we're going to drop it down, choose the ec2 instance, and i suppose the private ip as well, and then we're going to go ahead and hit associate, and once it's associated it should now have 34.199.121.116.
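the same allocate-and-associate flow we just clicked through can also be scripted with the aws cli. a minimal sketch, not run here: the instance id is a placeholder, and it assumes your credentials and region (us-east-1 in this follow along) are already configured, so the block only saves and syntax-checks the script:

```shell
# save the allocate/associate steps as a script; the instance id below
# is a placeholder, not a real instance
cat > eip-demo.sh <<'EOF'
#!/bin/bash
set -euo pipefail

# allocate an elastic ip from amazon's ipv4 pool and capture its allocation id
ALLOC_ID=$(aws ec2 allocate-address --domain vpc \
  --query AllocationId --output text)

# associate the elastic ip with the instance
aws ec2 associate-address \
  --instance-id i-0123456789abcdef0 \
  --allocation-id "$ALLOC_ID"

# later, release it with: aws ec2 release-address --allocation-id "$ALLOC_ID"
EOF

# syntax-check only; actually running it requires aws credentials
bash -n eip-demo.sh && echo "eip-demo OK"
```

remembering that release step matters, because an allocated elastic ip that isn't attached to a running instance keeps billing month over month.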
so we go over here and we're going to take a look here, and that's its ip address; we can pull it up okay, and that's that. so yeah, that's elastic ip. okay, so now that we have our elastic ip and our ec2 instance running, let's say we lose the server, we terminate it, so we would lose all of our configuration. so if we wanted to bake this into an ami to save it for later, what we'd have to do is go and create an image. to do that, we go to the top here, and we go to images and templates, and we can create an image, or we can create a launch template, which is a lot better, but for the time being we're going to go ahead and create an image. and when you create an image you're basically creating an ami, and so here i'm just going to say my ec2 and append 000 to kind of number it; that's a very common numbering, just do three zeros and then increment by one. and so here i'm going to say my apache server, and it's going to save some settings, like the fact that there is a volume, and you could save some tags there, so i might go ahead and add a tag, and it'll say name, and we'll just say my ec2 server, so that it remembers that okay. and then what we'll do is go ahead and create our image, and this can take a little bit of time; if we go over to images here, it's going to be spinning for a while, and we'll just wait until it's done okay. all right, so after waiting a little while here, our ami is ready, so we're just waiting for it to go available. if you do not see it, just make sure you hit refresh, because sometimes aws will just spin forever, and that's just something you'll have to do. so, hopefully that makes sense. what we'll do is make our way back over to instances here, and we can launch one this way, well actually we can do it from the ami page. so what i'm going to do is just terminate this instance, we're all done with it okay, and we'll hit terminate, it's totally fine. and it had a message about elastic ips, about releasing them, so when it
does that, the elastic ip is still over here, so it did not release it. so what we're going to do is go ahead and disassociate the elastic ip okay, and then we're also going to release the ip address, because if we don't, we're going to have this ip address sticking around that we're not using, and that's going to charge us month over month, so just be aware of those, because that's just kind of a hidden cost there. but what we're going to do is go over to amis, and we're going to select it here, we're going to go to actions, and we're going to go ahead and launch, and what it's going to do is make us fill out all this other stuff again. so if you had made a launch template, we wouldn't have to fill out all this stuff, it would be part of it, and that's what i'm trying to show you with this ami stuff. so instead of filling out all this, what i'm going to do is now go create a launch template, just to show you that that would be a much easier way to work. so we go over to ec2 instances, and then on the left-hand side we're looking for launch templates (launch configurations is the old thing), launch templates, here we go. so what we'll do is create ourselves a launch template, we'll just say my apache server, and then down below we need to choose our ami, so we're going to go here and we need to type it in. so what did we call it, my ec2? i really don't like this search here, it's very slow and frustrating, but once we find it, whoops, that's why i don't like it, because a lot of times it'll be loading and you'll end up clicking the wrong thing okay. so we'll type in my, give it a second, there it is, and just wait because it will keep loading, and then once it's loaded hit enter, and so it has that image selected. and then from there, there's a don't include in the launch template option, but here we could be explicit: i would say i want this to be t2 micro, but we could exclude it if we wanted to. we could specify the key pair here, not that we really want to use key pairs,
we'll say my ec2 instance. then down here for the networking, we can specify that security group we created, so we created one here called my ec2 sg. storage is fine, it's going to be encrypted, network interface is fine. in advanced details, what i want to do is set the iam instance profile; that's really important, because we don't want to have to figure out that role every single time, so put that there, and that should be everything. we could put user data in there too, but it's already baked into our ami, so we don't have to worry about anything. so what i'm going to do here is go ahead and create this launch template, and then we're going to view this launch template, and so now what we can do is use it to launch an instance okay. so we're going to look here, and it's very similar to the ec2 launch wizard except it's vertical. we're going to have one instance, it's going to use that ami and that instance type, and you can see how you can override them, which is nice. we're going to check the advanced details and make sure that iam profile is set, and we'll go ahead and launch this from the template. from there, we can go ahead and click the instance link there, and just be aware that when you do click through links like that, you'll end up with a search filter, so i'll just checkbox that off so i can see what i'm doing. and so we're just waiting for this instance to show up, and the only thing i noticed is it didn't set our darn tags; i wanted the name in there, and i think it's because we set it in the ami but it didn't carry over to the launch template, so i'd have to go back to the launch template and update it. so if i go into the launch template, we can modify it, create a new version, and then add tags there, so we say name, my apache server (i realize i'm changing between names), and that should allow us to have a version two. so we'll create that, and that will be for the next time we launch it okay. and so this instance is running, i'm going to go grab the ip address,
the server may or may not be ready; we'll take a look here, and so it's just spinning. if it's spinning, it's either the server is not ready or our port's not open. it was just getting ready to work there, so it is working now. so that is our launch template, so now we don't have to worry about losing our stuff, and if we need to make new versions, we can just bake new amis, increment them, and attach them as new versions of the launch template okay. all right, so what i want to show you in this follow along is how to set up an auto scaling group for our ec2 instance, and the idea behind this is that we'll be able to always ensure that a single server is running, or increase the capacity if the demand requires it. so in order to create an auto scaling group, we can go all the way down below to here, and you know, i really don't like the auto scaling group form, but it's okay, we'll work our way through it. so the first thing is we'll have to name our auto scaling group, so let's just say my asg, and then we'll have to select a launch template, which is great because we already have one, and then we'll have to select the version. i'm going to select version two so that it applies that name tag, and we'll go to next. and so here it's going to need to select a vpc, and then we need some subnets, so we're going to choose three, because to have high availability you should be running in at least three different availability zones, so that's why we have three different subnets. and then down below we have the instance type requirements: t2 micro, launch template, looks good to me, so we'll go ahead and hit next. and then from here we can choose to attach a load balancer, and i want to do the load balancer separately, so we won't do it as of yet, but very often if you're going to have an auto scaling group, you're usually going to have a load balancer, but we'll talk about that when we get to that point. so we'll just go to the bottom here and hit next, and so this is
what's important: how many do you want to be always running? so we always want to have one, and maybe the maximum capacity is two, and you want the desired capacity to be around a particular number. so if you had three and you said the desired is two, there are things that will work to always make sure there's two, but we just want to have one for this example. we can set up a scaling policy, so i'll do a target tracking scaling policy, and here we could do it based on a bunch of different things: so if the cpu utilization went over 50 percent, it would launch another server, so that might be something we might want to set. we're not going to try to trigger the scaling policy, but we might as well apply it because it's not too hard. and you can also do scale-in protection, so if you want to make sure it does not reduce the number of servers, that's something you could do. we could add a notification to say hey, there's a scaling event happening here, which is fine, we don't have to worry about that. and there's tags: add tags to help you search, filter, etc. so i was going to put a tag here, say name, but i'm just wondering if this is going to attach to the ec2 instance or if this is for the auto scaling group. you can optionally choose to add tags to instances by specifying tags in your launch template, and we already did that, so i don't need to put a tag here. and so we can review our auto scaling group and go ahead and create that auto scaling group okay. and so that auto scaling group expects there to be a single instance, so what it's going to do is start launching an instance, and what i'm going to do is just get rid of this old server, because we don't need it anymore, this old one here okay. and you can already see that the auto scaling group is launching this new one here, and remember we updated our version two to have that name, so that's how we know which one it is. so if we go back over to our auto scaling group okay, it's now saying
there's an instance; we don't have a status as of yet, and so there are ways of doing status checks for it to determine whether or not the server is working, because if the server is unhealthy, what it would do is kill it and then start up a new one, right? so if i go down below, it's right now doing the ec2 health check, and the ec2 health check just means: is the server working, is it running? it doesn't necessarily mean hey, can i load this web app, so it's very simple. so we'll give it a moment here to start up and just make sure that it's working okay. and i think it's ready, so if i take that public ip address here and paste it in, there it is okay. so if we were to tell it to increase the capacity to three, then what it would do is launch three, and it should evenly launch them into all those availability zones, and then we'll have something that is highly available okay. so that's pretty much it for this, and then we'll move on to load balancers. all right, so we have our ec2 instance now managed by an auto scaling group, and the great thing is that if we terminate this instance, the auto scaling group will launch another instance to meet our particular capacity. the only thing though is that if we were to have multiple ec2 instances running, like three of them, how would you distribute traffic to them all, right? so you have an ip address coming in from the internet, but let's say you want to evenly distribute that traffic, and that's where a load balancer comes into play. and even if you have a single server, you should always have a load balancer, because it just makes it a lot easier for you to scale when you need to, and it acts as an intermediate layer where you can attach a web application firewall, you can attach an ssl certificate for free, so there are a lot of reasons to have a load balancer. so what we'll do is go down below on the
left-hand side and make our way over to Load Balancers, where we're going to create a new load balancer. Hit Create load balancer and you'll see there are a lot of options: Application Load Balancer, Network Load Balancer, Gateway Load Balancer, and the Classic Load Balancer. We're running an application, so I'm going to create an Application Load Balancer and name it my-alb. It's going to be internet-facing and IPv4, we'll let it launch in the default subnets, and we're going to choose the same AZs that are in our auto scaling group, which is really important. Then we need a security group, and I'll select the same one as before, since that should work no problem, and we want to make sure it listens on port 80 and forwards to a target group. It looks like I have a target group left over from before; just to reduce confusion (you won't have this problem), I'll double-check whether that's true. Yes, I do have one; I'm not sure where it came from, it might have been created by Elastic Beanstalk and never deleted. So I'll go back and, instead of selecting that one, create a new target group. Here you can choose the target type: instances, IP addresses, a Lambda function, or another Application Load Balancer. You could point it to a specific IP address, which would make sense if it were a static IP, and apparently you can point it directly to instances, which I don't remember seeing before. At first that confused me, but it makes sense: with auto scaling groups you point the load balancer at the group, you're not
pointing them to instances, so that's why that option is confusing. I'll name it my-target-group, use port 80, HTTP/1.1 is fine, and we want to be in the same VPC, so that's fine as well. Down below we have the health check: the forward slash means it's going to hit the index.html page, and if it gets back something healthy on port 80, it's considered good. Then we can set the check thresholds; I'm going to reduce them so it's not so aggressive, say a healthy threshold of 3, an unhealthy threshold of 2, and a 10-second interval, and it expects back a 200 status code, which is what we should get. Go ahead and hit Next. Now we have our target group, and it should register instances; it's saying, hey, we detected instances that fit the requirements, so they're now in this target group. We can go back, hit the refresh button, and choose our target group. I'm not seeing it, so let me go back; oh, we didn't actually create it. Okay, create it, go back, hit refresh, and there it is; that all looks good, so we'll hit Create load balancer. We can view the load balancers, and these create really fast. If we scroll up, we can now access our server through this DNS name, so copy that and paste it in. Does it work? Not yet. Since it's not picking up those instances, another way is to directly associate your auto scaling group with the load balancer: go into the auto scaling group, hit Edit, and aha, there's a load balancer setting, so we'll associate it with that target group. While we're here, we might as well set the health check type to ELB, so the auto scaling group uses the load balancer's health check, which is a lot more sophisticated, when deciding whether to replace a server. Then we'll hit Update.
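The healthy and unhealthy thresholds configured above can be modelled as a small state machine over consecutive check results. This is a simplification of the real ELB health checker (which also has timeouts, intervals, and registration states), but it captures why a single failed check doesn't immediately mark a target unhealthy:

```python
def health_status(check_results, healthy_threshold=3, unhealthy_threshold=2):
    """Fold a stream of True/False health check results into a target state.
    A target becomes healthy after `healthy_threshold` consecutive passes
    and unhealthy after `unhealthy_threshold` consecutive failures."""
    state = "initial"
    streak, last = 0, None
    for ok in check_results:
        streak = streak + 1 if ok == last else 1   # count consecutive same results
        last = ok
        if ok and streak >= healthy_threshold:
            state = "healthy"
        elif not ok and streak >= unhealthy_threshold:
            state = "unhealthy"
    return state

print(health_status([True, True, True]))                 # three passes -> healthy
print(health_status([True, True, True, False, False]))   # two fails -> unhealthy
```

With the lecture's settings (3 / 2 / 10 seconds), a freshly registered target needs roughly 30 seconds of passing checks before the load balancer starts sending it traffic.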
Now if we go back over to our load balancer (I'm going to close some of these tabs so it's a little less confusing), I think we should be able to see whether it's registering targets. Listeners, Monitoring, Integrated services... no, it's going to be through the target group. It already had the target there, so maybe it just hadn't finished the check, because over here the health status is now showing healthy. If it's healthy in the target group and the load balancer is pointing at it, then it should work, so we'll copy the DNS name again, make a new tab, paste it in, and there it is. That's how you access the instances in your auto scaling group: you always go through the DNS name, and if you had a Route 53 domain, your domain managed by AWS, you'd just point it at the load balancer, and that's how you hook it up. So that's pretty much it. All right, we've learned everything we wanted to know about EC2, so the last thing to do is tear everything down. We have a load balancer and an auto scaling group, and those are the two things we'll have to pull down. The first is the auto scaling group, because when you delete an auto scaling group, it deletes all of its EC2 instances; if you tried to delete the EC2 instances directly, the group would just keep spinning up replacements, so you have to delete the group first. As that's deleting, we'll be able to delete our load balancer; I'm going to try deleting it at the same time to see if I can, and actually it worked no problem. I'll make sure I don't have any Elastic IPs, and also make sure I don't have any key pairs; you can keep your key pairs around, but I just want to clean this up. Okay, and
that instance should be terminating. Back in the auto scaling group, if we click into it, we can check its activity: it's saying successful, but it's waiting on ELB connection draining, which is kind of annoying because we deleted the ELB, so there's nothing to drain. Draining just makes sure there are no interruptions when terminating servers, so it's trying to be smart about it. All I want to see is it saying terminating over here, and then I think we're done, so we'll just have to wait a little while; I'll see you back in a moment. All right, after waiting a very long time it did get destroyed, and if I go over to my load balancers, we see ours no longer exists. That connection-draining wait was probably because I deleted the load balancer first and the auto scaling group second, with connection draining turned on, but it's not a big deal; we just waited and it eventually deleted. So we're pretty much all done here. Hey, this is Andrew Brown from ExamPro, and we are taking a look at EC2 pricing models. There are five different ways to pay for EC2 (remember, EC2 instances are virtual machines): on-demand, spot, reserved, dedicated, and AWS savings plans. We'll look at these in summary and then dive deep into each pricing model. With on-demand, you pay a low cost and have a lot of flexibility: you pay per hour, so it's a pay-as-you-go model, or even down to the second, and we'll talk about the caveats when we get to the on-demand section. It's suitable for workloads that are short-term, spiky, or unpredictable and cannot be interrupted, it's great for first-time applications, and the on-demand pricing model is great when you need the least amount of commitment.
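The per-hour versus per-second billing just mentioned can be sketched in a few lines. The $0.10/hour rate is hypothetical, and this is a toy model of the rules described in this section, not AWS's actual billing engine:

```python
import math

def on_demand_charge(seconds_used, hourly_rate, per_second_billed=True):
    """Toy on-demand billing: per-second instances bill to the second with a
    60-second minimum; other platforms bill in whole hours. Either way, the
    published price is always quoted as an hourly rate."""
    if per_second_billed:
        billable = max(seconds_used, 60)             # 60-second minimum
        return billable * hourly_rate / 3600
    return math.ceil(seconds_used / 3600) * hourly_rate  # round up to whole hours

# hypothetical $0.10/hour instance
print(on_demand_charge(30, 0.10))    # 30s of use still bills the 60s minimum
print(on_demand_charge(7200, 0.10))  # two full hours
```

Running an instance for 30 seconds therefore costs the same as running it for 60, and an hourly-billed platform running for one second past the hour pays for the whole next hour.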
For spot pricing, you can save up to 90 percent, which is the greatest savings of all these models. The idea is you're requesting spare compute capacity that AWS is not currently using, and that's where the savings come from. You get flexible start and end times, but your workloads have to be able to handle interruptions, because these servers can be stopped at any time to be given to higher-priority on-demand customers. It's great for non-critical background jobs, and very common for things like scientific computing, where jobs can be started and stopped at any given time; this has the greatest amount of savings. Then you have reserved instances, which let you save up to 75 percent. These are great for steady-state or predictable usage: you're committing to AWS for EC2 usage over a one-year or three-year term, and you can resell unused reserved instances, so you're not totally stuck if you buy them; this is the best option for long-term savings. Then you have dedicated. These are just dedicated servers, and technically not a pricing model so much as something that can be combined with the pricing models: it can be used with on-demand, reserved, or even spot. It's great when you need a guarantee of isolated hardware for enterprise requirements, and it's going to be the most expensive. So there you go, and now we'll dive deep into each. The on-demand pricing model is a pay-as-you-go model where you consume compute and then pay later, and when you launch an EC2 instance, by default you're using on-demand pricing. On-demand has no upfront payment and no long-term commitment; you're charged by the second, with a minimum of 60 seconds (so technically a minute), or by the hour. Let's talk about the difference between per-second and per-hour billing: per-second billing applies to Linux, Windows, Windows with SQL Enterprise, Windows with SQL Standard, and Windows with
SQL Web instances that do not have a separate hourly charge, and everything else is billed per hour. When I'm launching an EC2 instance, I can't even tell whether something is per-second or per-hour; you just have to know whether it has a separate hourly charge, but generally, if you're just launching things, it's probably per-second billing. When you look up pricing, it's always shown as an hourly rate, so even for per-second billing you'll see an hourly price, but on your bill you'll see it down to the second, subject to that first-60-seconds minimum. On-demand is great for workloads that are short-term, spiky, or unpredictable, and for new app development where you want to experiment; when you're ready to start saving, because you know exactly what that workload will be over the span of a year or three, that's where reserved instances come in, which we'll cover next. Hey, this is Andrew Brown from ExamPro, and we are taking a look at reserved instances, also known as RIs. This is a bit of a complex topic, but if we get through it, it will serve you well across multiple AWS certifications, so let's give it some attention. RIs are designed for applications that have steady-state, predictable usage, or that require reserved capacity. The idea is that you're making a guaranteed commitment to AWS, saying this is what I'm going to use, and you get savings because AWS knows you're going to be spending that money. The reduced pricing is based on a kind of formula of term, class offering, RI attributes, and payment options; technically the RI attributes don't exactly factor into the price beyond things like the instance type and size, but I'm going to keep them in the formula because they're an important component.
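That formula can be sketched as a lookup table. The discount percentages below use the figures quoted in this section (75 percent for three-year standard, 54 percent for convertible) plus a made-up one-year rate, so treat this as illustrative rather than AWS's actual price list:

```python
# Hypothetical discount table: (term, class offering, payment option) -> discount.
# Longer term, less flexible class, and more money up front all raise the discount.
DISCOUNT = {
    ("3yr", "standard",    "all_upfront"): 0.75,  # figure quoted in this section
    ("3yr", "convertible", "all_upfront"): 0.54,  # figure quoted in this section
    ("1yr", "standard",    "no_upfront"):  0.30,  # made-up illustrative rate
}

def ri_hourly_rate(on_demand_hourly, term, cls, payment):
    """Effective hourly rate after applying the RI discount."""
    return on_demand_hourly * (1 - DISCOUNT[(term, cls, payment)])

# a $0.10/hour on-demand instance at 3yr standard, all upfront:
print(round(ri_hourly_rate(0.10, "3yr", "standard", "all_upfront"), 4))
```

The RI attributes (instance type, region, tenancy, platform) then determine *which* usage the discounted rate applies to, which is why the lecture keeps them in the formula even though they aren't a multiplier themselves.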
So let's take a look at each component of the formula to understand how we're going to save. First is the term: the longer the term, the greater the savings. You're committing to a one-year or three-year contract with AWS, and one thing you need to know is that these do not auto-renew; at the end of the term you have to purchase again, and when they expire, your instances just flip back to on-demand pricing with no interruption to service. Then you have class offerings, where the less flexible the offering, the greater the savings. Standard gives up to a 75 percent reduction in price compared to on-demand, and you can modify some RI attributes, which we'll talk about when we get to the RI attributes section. Convertible gives up to a 54 percent reduction compared to on-demand, and you can exchange RIs based on their RI attributes, as long as the new configuration is greater than or equal in value. There used to be a third class called scheduled, but it no longer exists; if you do come across it, just know that AWS is not planning to offer it again, and I'm not sure why. Then there are the payment options: the greater the upfront payment, the greater the savings. All upfront means full payment is made at the start of the term; partial upfront means a portion of the cost is paid up front and the remaining hours in the term are billed at a discounted rate; and no upfront means you're billed at a discounted hourly rate for every hour within the term, regardless of whether the reservation is being used. That last option is really great, because you're basically saying to AWS, I'll pay my bill as usual, but I'm telling you in advance what it's going to be, and I'm going to save money. So if you know you're going to be using a t2.medium
for the next year, you can do that and just save money. RIs can be shared between multiple accounts within an organization, and unused RIs can be sold in the Reserved Instance Marketplace, though we'll talk about the limitations around that when we get a bit deeper in. Just to show you what it looks like in the console (they updated it, and I love the new UI): you filter based on your requirements, which shows you the RIs available, then choose your desired quantity, see the pricing there, add it to the cart, and check out; that's how you purchase. Another factor in that formula is the RI attributes; sometimes the documentation calls them instance attributes. These are limited based on the class offering and can affect the final price of the RI, and there are four of them. First is the instance type, for example m4.large, which is composed of an instance family (the m4) and an instance size (large). Then there's the region, where the reserved instance is purchased; then the tenancy, whether your instance runs on shared hardware (the default, multi-tenant) or single-tenant dedicated hardware; and then the platform, whether you're using Windows or Linux. Even on-demand pricing is affected by these same factors, of course, but there are some limitations for RIs which we'll get into as we dive a bit deeper. All right, let's compare regional and zonal RIs. When you purchase an RI, you have to determine its scope; this does not affect the price, but it does affect the flexibility of the instance, so it's something you have to decide. A regional RI is when you purchase
it for a region, and a zonal RI is when you purchase it for a specific availability zone. A regional RI does not reserve capacity, meaning there's no guarantee those servers will be available; if AWS runs out of that instance type, you're just not going to get it. A zonal RI does reserve capacity, so there's a guarantee those servers will be there when you need them. In terms of AZ flexibility, you can use a regional RI in any AZ within that region, but a zonal RI applies only to that particular availability zone. For instance flexibility, a regional RI's discount applies to any instance in the family regardless of size, but a zonal RI has no instance flexibility; you use it for exactly what you defined. You can queue purchases for regional RIs; you cannot queue purchases for zonal RIs. Now let's talk about some RI limits. There's a limit to the number of reserved instances you can purchase per month: 20 regional reserved instances per region, plus 20 zonal reserved instances per AZ, so if you have a region with three AZs, you can have 60 zonal reserved instances in that region. There are some other limitations. For regional limits, you cannot exceed your running on-demand instance limit by purchasing regional reserved instances; the default on-demand limit is 20, so before purchasing, ensure your on-demand limit is equal to or greater than the RIs you intend to purchase (you might even want to open a service limit increase just to make sure you don't hit that wall). For zonal limits, you can exceed your running on-demand instance limit by purchasing zonal reserved instances: if you already have 20 on-demand instances and you purchase 20 zonal reserved instances, you can launch a further 20 on-demand instances that match the specification of your zonal reserved instances.
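The monthly purchase limits above reduce to simple arithmetic; 20 and 20 are the defaults stated in this section:

```python
def ri_purchase_limits(num_azs, regional_limit=20, zonal_limit_per_az=20):
    """Monthly RI purchase limits as described in the lecture:
    20 regional RIs per region, plus 20 zonal RIs per availability zone."""
    return {
        "regional": regional_limit,
        "zonal": zonal_limit_per_az * num_azs,
    }

# a region with three availability zones:
print(ri_purchase_limits(3))  # 20 regional, 60 zonal
```

The zonal side scales with the number of AZs in the region, which is why the three-AZ example in the lecture works out to 60.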
So there you go. Now let's talk about capacity reservations. EC2 instances are backed by different kinds of hardware, and there's a finite number of servers available within an availability zone per instance type or family; remember, an availability zone is just a data center or a collection of data centers, and they only have so many servers in them. If they run out because demand is too great, you simply cannot spin anything up: you go to launch a specific EC2 instance type, and AWS says, sorry, we don't have any right now. The solution is capacity reservations, a feature of EC2 that allows you to reserve an EC2 instance type in a specific region and AZ. You select the instance type, platform, AZ, tenancy, and quantity, and you can either reserve for a specific time period or indicate some flexibility about roughly what you want. The reserved capacity is charged at the selected instance type's on-demand rate whether an instance is running in it or not, and you can also use regional reserved instances with your capacity reservations to benefit from billing discounts. Now, there are some key differences between standard and convertible RIs, so let's take a look. With a standard RI, you can modify attributes: you can change the AZ within the same region, change the scope from zonal to regional or vice versa, change the instance size (as long as it's Linux with default tenancy), and change the network from EC2-Classic to VPC and vice versa. With convertible, you don't modify RI attributes; you perform an exchange. Standard RIs cannot do exchanges, whereas a convertible RI can be exchanged during the term for another convertible RI with new RI attributes, and this
includes the instance family, instance type, platform, scope, and tenancy. In terms of the marketplace, standard RIs can be bought there, or you can sell your RIs if you no longer need them, but convertible RIs cannot be bought or sold in the marketplace; there you're dealing with AWS directly. Hey, this is Andrew Brown from ExamPro, and we're taking a look at the Reserved Instance Marketplace we mentioned earlier, so let's give it a little more attention. It allows you to sell your unused standard RIs to recoup spend on RIs you do not intend to, or cannot, use. Reserved instances can be sold after they have been active for at least 30 days and once AWS has received the upfront payment; you must have a U.S. bank account to sell RIs on the RI Marketplace, and there must be at least one month remaining in the term of the RI you are listing. You retain the pricing and capacity benefit of your reservation until it's sold and the transaction is complete. Your company name and, upon request, address will be shared with the buyer for tax purposes. A seller can set only the upfront price of an RI; the usage price and other configuration, such as instance type, availability zone, and platform, remain the same as when the RI was initially purchased. The term length is rounded down to the nearest month: for example, a reservation with 9 months and 15 days remaining will appear as 9 months on the marketplace. You can sell up to 20,000 USD in reserved instances per year, and reserved instances in the GovCloud region cannot be sold on the RI Marketplace. So there you go. Hey, it's Andrew Brown from ExamPro, and we're looking at spot instances. AWS has unused compute capacity, and they want to maximize the utility of their idle servers; the idea is just like a hotel offering booking discounts to fill vacant suites, or airlines offering discounts to fill vacant seats.
Spot instances provide a discount of up to 90 percent compared to on-demand pricing. Spot instances can be terminated if the compute capacity is needed by other on-demand customers, though from what I hear, spot instances rarely actually get terminated. They're designed for applications that have flexible start and end times, or that are only feasible at very low compute costs, so you'll see options like load-balancing workloads, flexible workloads, and big data workloads. There's another service called AWS Batch, which is for doing batch processing and is very commonly used with spot; if you find the spot interface too complicated and you're doing batch processing, you may want to use that service instead. There are some termination conditions: instances can be terminated by AWS at any time, and if your instance is terminated by AWS, you don't get charged for the partial hour of usage, but if you terminate an instance yourself, you will still be charged for the hour it ran. So there you go. Hey, this is Andrew Brown from ExamPro, and we're taking a look at dedicated instances. Dedicated instances are designed to help meet regulatory requirements. AWS also has a concept called dedicated hosts, which is more for when you have strict server-bound licensing that won't support multi-tenancy or cloud deployments; we'll definitely distinguish the two in this course, just not in this slide in particular. To understand dedicated instances or hosts, we need to understand the difference between multi-tenancy and single tenancy. Multi-tenancy you can think of as everyone living in the same apartment building, and single tenancy as everyone having their own house. With multi-tenancy you have a server with multiple customers running workloads on the same hardware, and the idea is that they're separated via virtual isolation.
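The apartment-versus-house distinction can be written down as a toy rule. The account IDs are hypothetical, and real EC2 placement is of course far more involved than this; it just makes the isolation boundary concrete:

```python
def can_share_host(tenancy, account_a, account_b):
    """Toy tenancy model: with shared (multi-tenant) tenancy, instances from
    different AWS customers can land on the same physical server, separated
    only by virtual isolation; with dedicated (single-tenant) tenancy, the
    hardware is reserved for a single customer."""
    if tenancy == "shared":
        return True                    # any customer can be a neighbour
    return account_a == account_b      # dedicated: same customer only

print(can_share_host("shared", "acct-111", "acct-222"))     # neighbours allowed
print(can_share_host("dedicated", "acct-111", "acct-222"))  # different customer: no
print(can_share_host("dedicated", "acct-111", "acct-111"))  # same customer: yes
```

In other words, with dedicated tenancy the thing separating customers is physical placement, not the hypervisor.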
They're using the same server, and it's just software separating them. Then we have single tenancy: a single customer with dedicated hardware, where the physical location is what separates customers. Dedicated can be offered via on-demand, reserved, and even spot, which is why we're covering it with the pricing models; even though dedicated is a lot more expensive than regular on-demand, you can still save by using reserved and also spot, which I was very surprised about. When you want dedicated, you just launch your EC2 instance and there's a tenancy dropdown with shared (the default), dedicated instance, and dedicated host; again, we'll talk about dedicated hosts later when we need to. The reason enterprises or large organizations may want to use dedicated instances is that they have a security concern or obligation against sharing the same hardware with other AWS customers. Hey, this is Andrew Brown from ExamPro, and we're taking a look at AWS savings plans. This is similar to reserved instances but simplifies the purchasing process, so it'll look a lot like RIs at the start, but I'll tell you how it's a bit different. There are three types of savings plans: Compute savings plans, EC2 Instance savings plans, and SageMaker savings plans. You choose one of those, then choose a term (one year or three years; it's as simple as that), then choose a payment option (all upfront, partial upfront, or no upfront), and then you choose your hourly commitment. You're not having to think about standard versus convertible, regional versus zonal, or RI attributes; it's a lot simpler. Let's talk about the three savings plan types in a bit more
detail. Compute savings plans provide the most flexibility and help reduce your cost by up to 66 percent; these plans automatically apply to EC2 instance usage, AWS Fargate, and AWS Lambda usage, regardless of instance family, size, AZ, region, OS, or tenancy. Then you have EC2 Instance savings plans, which provide the lowest prices, offering savings of up to 72 percent in exchange for a commitment to usage of individual instance families in a region; they automatically reduce your cost on the selected instance family in that region regardless of AZ, size, OS, or tenancy, and give you the flexibility to change your usage between instances within a family in that region. The last is SageMaker savings plans, which help reduce SageMaker costs by up to 64 percent, automatically applying to SageMaker usage regardless of instance family, size, component, or AWS region. If you don't know what SageMaker is, it's AWS's machine learning service, and it uses EC2 instances (specifically ML EC2 instances), so everything here is basically using EC2. So there you go. All right, let's take a look at the zero trust model. The zero trust model is a security model that operates on the principle of trust no one, verify everything. What I mean by that is that malicious actors being able to bypass conventional access controls demonstrates that traditional security measures are no longer sufficient, and that's where the zero trust model comes into play. With the zero trust model, identity becomes the primary security perimeter. You might be asking what we mean by primary security perimeter: the primary, or new, security perimeter defines the first line of defense and its security controls, which protect a company's cloud resources and assets. If this still doesn't make sense, we do cover it as part of defense in depth, where you see the layers of defense from data all the way out to physical, so you can see what we're talking about in that model. But the old way we used
to do things was network-centric: traditional security focused on firewalls and VPNs, since there were few employees or workstations outside the office, or they were in specific remote offices, so we treated the network as the boundary; if you were in the office, there was nothing to worry about. We don't think like that anymore, because everything is identity-centric: bring-your-own-device and remote workstations are becoming more common, and we can't always trust that the employee is in a secure location, so we have identity-based security controls like MFA, or provisional access based on the level of risk from where, when, and what a user wants to access. Identity-centric security does not replace network-centric security; it augments it, as an additional layer of consideration for security when we're thinking about our AWS cloud workloads. All right, so we just loosely defined the zero trust model; now let's talk about how we would do zero trust on AWS, and since zero trust has a lot to do with identity security controls, let's talk about what's at our disposal. On AWS we have Identity and Access Management (IAM), where we create our users, groups, and policies; an IAM policy is a set of permissions that says this user is allowed to use these services with these particular actions. Then you have the concept of permission boundaries, which say: these aren't the permissions the user currently has, but these are the boundaries within which any permissions must fall, so, for example, they should never have access to ML services, and if someone applies permissions to them, the effective permissions will always stay within those boundaries. Then you have service control policies, which are organization-wide policies: if you have a policy that no one should run anything in the Canada region, you can apply that policy at the organizational level and it will be enforced.
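An IAM policy as a "set of permissions" is just a JSON document. Here's a minimal hand-rolled example allowing two S3 read actions; the policy grammar and the MFA condition key are real IAM features, but the specific action list and the wide-open `"Resource": "*"` are illustrative choices, not a recommendation:

```python
import json

# A hypothetical identity policy: allow S3 reads, but only when the caller
# signed in with MFA. "Version": "2012-10-17" is the standard policy version.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": "*",
            "Condition": {
                "Bool": {"aws:MultiFactorAuthPresent": "true"}
            },
        }
    ],
}

print(json.dumps(policy, indent=2))
```

Attached to a user or group, a document like this is exactly the "this user can use these services with these actions" statement described above, and the `Condition` block is where the knobs discussed next get turned.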
Then, within an IAM policy, there's the concept of conditions: all the little knobs you can tweak to control access based on a bunch of different factors. There's the source IP, to restrict where the request's IP address is coming from; the requested region, to restrict based on region, as in the example just mentioned; multi-factor auth presence, to restrict access if MFA is turned off; and the current time, to restrict access based on the time of day. Maybe your employees should never really be using things at night, so nighttime access could be an indicator that someone is doing something malicious, and you only grant access during certain hours. Based on all these security controls over our AWS resources, we can more or less enforce the zero trust model. However, AWS itself does not have ready-to-use identity controls that are intelligent, which is why AWS is considered not to have a zero trust offering for customers, and third-party services need to be used. What I'm saying is that technically, yes, you can kind of do zero trust on AWS, but it takes a lot of manual work: if I say I don't want anyone using this at night, AWS isn't going to detect on its own that a particular access time looks suspicious or malicious and then restrict access to only a core set of services; it just can't do that without a lot of work on your part. There is a collection of services that can be set up in an intelligent-ish detection pipeline for identity concerns, but it requires expert knowledge. The way you might do it on AWS is that all the API calls go through AWS CloudTrail, and you can feed those into Amazon GuardDuty, and GuardDuty is an intrusion
detection and protection system, so it can detect suspicious or malicious activity in those CloudTrail logs. You can follow that up with remediation, or pass it on to Amazon Detective, which can analyze, investigate, and quickly identify security issues that it ingests from GuardDuty. But I'm telling you, this stuff is not that easy for the consumer, so of course you can do the zero trust model on AWS, but it's going to take a lot of work, and there are some limitations, which we'll talk about next. So now let's see how we would do zero trust on AWS with third parties. AWS does technically implement a zero trust model, but it does not provide intelligent identity security controls; you can do it, but it's a lot of work, so let's compare it against a third party where we get controls we would not necessarily get with AWS. For example, Azure Active Directory has real-time, calculated risk detection based on more data points than AWS: device and application, time of day, location, whether MFA is turned on, and what is being accessed, and its security controls for verification and logic restriction are much more robust. As one particular example, device and application is not something AWS factors in with its existing controls, at least not in a consumer-friendly way. I can't easily say on AWS, when you think this is the type of threat, only allow access to these things, or if you think the user is in a risky location, only give them access to things where there's no sensitive data; you can't exactly do that on AWS very easily. That's where third-party solutions come into play: you have Azure Active Directory, Google BeyondCorp, and JumpCloud, and all of these have more intelligent security controls for real-time detection, and the way you would use these is
these would be your primary directories. For Google, BeyondCorp is just a zero trust framework, so I guess you'd use Google's Cloud Identity directory, but the idea anyway is that you use single sign-on to connect those directories to your AWS account, and that's how you'd access those AWS resources and get this more robust functionality, okay? Hey, it's Andrew Brown from ExamPro, and we're looking at identity. We need to know a bunch of concepts before we talk about identity on AWS, so let's jump into it. The first is a directory service. So what is a directory service? Well, it's a service that maps the names of network resources to network addresses. A directory service is a shared information infrastructure for locating, managing, administering and organizing resources such as volumes, folders, files, printers, users, groups, devices, telephone numbers and other objects. A directory service is a critical component of a network operating system, and a directory server (or name server) is a server which provides a directory service. Each resource on the network is considered an object by the directory server, and information about a particular resource is stored as a collection of attributes associated with that resource or object. Well-known directory services would be the Domain Name System (the directory service for the internet), Microsoft Active Directory (and they have a cloud-hosted one called Azure Active Directory), Apache Directory Server, Oracle Internet Directory (OID), OpenLDAP, Cloud Identity, and JumpCloud, okay? Hey, this is Andrew Brown from ExamPro, and we're taking a look at Active Directory. Now you might say, well, we're doing AWS, why are we looking at this? Well, no matter what cloud provider you're using, you should know what Active Directory is, especially when it comes to identity, because you can use it with AWS. So let's talk about it. Microsoft introduced Active Directory Domain Services in Windows 2000 to give
organizations the ability to manage multiple on-premise infrastructure components and systems using a single identity per user, and since then it has evolved, obviously; it runs well beyond Windows 2000 as of today, and they even have a managed one called Azure AD, which is on Microsoft Azure. Just to give you an architectural diagram here, the idea is that you would have your domain servers, and they might have child domains, and you'd have these running on multiple machines so that you have redundancy and the ability to log in from various places. When you have a bunch of domains, it's called a forest, and within a domain you have organizational units, and within organizational units you have all your objects, like your users, your printers, your computers, your servers, all things like that, okay? Hey, this is Andrew Brown from ExamPro, and we are talking about identity providers, also known as IdPs. An identity provider is a system entity that creates, maintains and manages identity information for principals and also provides authentication services to applications within a federation or distributed network; a trusted provider of your user identity that lets you authenticate to access other services. Identity providers could be, like, Facebook, Amazon, Google, Twitter, GitHub, LinkedIn. Federated identity is a method of linking a user's identity across multiple separate identity management systems, and some things we can use for that are: OpenID, an open standard and decentralized authentication protocol that allows you to log in to different platforms using a Google or Facebook account; OpenID is about providing who you are. Then we have OAuth 2.0; this is an industry-standard protocol for authorization. OAuth doesn't share password data but instead uses authorization
tokens to prove an identity between consumers and service providers; OAuth is about granting access to functionality. And then we have SAML, Security Assertion Markup Language, which is an open standard for exchanging authentication and authorization data between an identity provider and a service provider, and an important use of SAML is single sign-on via the web browser, okay? Hey, this is Andrew Brown from ExamPro, and we're looking at the concept of single sign-on. SSO is an authentication scheme that allows the user to log in with a single ID and password to different systems and software. SSO allows IT departments to administer a single identity that can access many machines and cloud services. So the idea is you have Azure Active Directory (this is just an example of a very popular one), you'd use SAML to do SSO, and you can connect to things like Slack, AWS, Google Workspace, Salesforce, or your computer. The idea here is, once you log in, you don't have to log in multiple times; you log in to your primary directory, and after that you're not going to be presented with a login screen. Some services might show an intermediate screen, but the idea is you're not entering your credentials multiple times, so it's seamless. All right, let's talk about LDAP. Lightweight Directory Access Protocol is an open, vendor-neutral, industry-standard application protocol for accessing and maintaining distributed directory information services over an IP network. A common use of LDAP is to provide a central place to store usernames and passwords. LDAP enables same sign-on, which allows users to use a single ID and password, but they have to enter it every single time they want to log in. So maybe you have your on-premise Active Directory storing credentials in that LDAP directory, and the idea is that all these services, like Google, Kubernetes, Jenkins, talk to that LDAP server. So why would you use LDAP
over SSO, which is more convenient or seamless? Most SSO systems are using LDAP under the hood, but LDAP was not designed natively to work with web applications, so some systems only support integration with LDAP and not SSO; you've got to take what you can get, okay? Let's take a look here at multi-factor authentication, also known as MFA. This is a security control where, after you fill in your username or email and password, you have to use a second device, such as a phone, to confirm that it's you that is logging in. MFA protects against people who have stolen your password, and MFA is an option in most cloud providers and even social media websites such as Facebook. So the idea is: I have my username or email and password, and I'm going to try to log in; this is the first factor. The second factor is a secondary device, maybe my phone, where I'll enter a code, or maybe it's passwordless, so I just have to press a button to confirm that it's me, and then I'll get access. In the context of AWS, it's strongly recommended that you turn on MFA for all your accounts, especially the AWS root account; we'll see that when we do the follow-alongs. Let's take a look at security keys. A security key is a second device used as a second step in the authentication process to gain access to a device, workstation or application. A security key can resemble a memory stick, and when your finger makes contact with a button of exposed metal on the device, it will generate and autofill a security token. A popular brand of security keys is the YubiKey, and this is the one I use; it's exactly like the one that's right beside my desk. It works out of the box with Gmail, Facebook and hundreds more, supports FIDO2, WebAuthn and U2F, it's waterproof and crush-resistant, and it has variations like USB-A, USB-C, NFC, and dual connectors on a single key, so it can do a variety of things. When you turn on MFA on your AWS account, you'll have the option of a virtual MFA device.
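Virtual MFA apps are typically built on time-based one-time passwords (TOTP, RFC 6238). As a rough sketch of what's happening under the hood (this is the standard algorithm, not an AWS API):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, for_time: int, step: int = 30, digits: int = 6) -> str:
    """Derive a time-based one-time password (RFC 6238 style)."""
    counter = int(for_time // step)               # index of the 30-second window
    msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                    # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: this secret at time 59 yields "94287082" (8 digits)
print(totp(b"12345678901234567890", 59, digits=8))
```

Both your phone and the server share the secret and the clock, which is why the same six-digit code appears on both sides without any network round trip.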
That's when you're using something like a phone, or software on your phone, to do it. Then there's the U2F security key, which is what we were just talking about, and there are even other kinds of hardware MFA devices, which we're not really going to get into. Just know that security keys tie into MFA, and this is a better way than using a phone, because you can have it on your desk and press it, and you don't have to worry about your phone not being charged, okay? Hey, this is Andrew Brown from ExamPro, and we are taking a look at AWS Identity and Access Management, also known as IAM. You can use this service to create and manage AWS users and groups, and use permissions to allow and deny their access to AWS resources. There are quite a few components here, so let's get to it. The first is IAM policies: these are JSON documents which grant permissions for a specific user, group or role to access services, and policies are attached to IAM identities. Then you have IAM permissions: an API action that can or cannot be performed, represented in the IAM policy document. Then there are the IAM identities. We have IAM users, end users who log in to the console or interact with AWS resources programmatically or via clicking UI interfaces. You have IAM groups, which group up your users so they all share the same permission levels; maybe that's admins, developers or auditors. Then you have IAM roles: roles grant AWS resources permissions to specific AWS API actions; you associate policies to a role and then assign the role to an AWS resource. Just understand that roles are what you attach to resources, so if you have an EC2 instance and you say it has to access S3, you're going to be attaching a role, not a policy directly, okay? Hey, this is Andrew Brown from ExamPro, and we are looking at IAM policies a little bit closer here. They are written in JSON and
contain the permissions which determine the API actions that are allowed or denied. Rarely do I write these out by hand, because there's a little wizard you can use to write out the code for you, but if you want to, you absolutely can write them by hand, and we should know the contents and how these JSON files work. The first thing is the Version, which is the policy language version; it's been 2012-10-17 for a very long time, and I don't see that changing anytime soon, though it would change if they ever change the structure of the JSON. Then you have the Statements; these are the policy elements, and you're allowed to have multiples of them, so this is the policies, or permissions we should say, that you plan on applying. Then you have the Sid; this is a way of labeling your statements, useful for visualization or for referencing later on, but a lot of the time you don't need a Sid. Then there's the Effect, which is either Allow or Deny. Then you have the Action; here we're saying give access to S3 for all actions under it, and there's another action down below where (let me get my pen tool out here) it's saying give access to create a service-linked role, so it's a cross-account role there. Then there's the Principal, which is the account, user, role or federated user to which you would like to allow or deny access; here we're specifically saying this user named barkley in our AWS account. Then there are the Resources, the resources to which the action applies; in the one up here we're specifying a specific S3 bucket, and here we're saying all possible resources in the account. And then the Condition: there are all sorts of different kinds of conditions, and this is a StringLike one; it's saying look at the service name, and if it starts with this or that, then they'll have access. So this person, even though it says all resources, is really only going to have access to RDS.
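Putting those elements together, a full policy document looks roughly like this. The account ID, user name, bucket name, and condition value are placeholders made up for illustration, and note that Principal appears in resource-based policies rather than in the identity-based policies you attach to users and roles.

```python
import json

# Illustrative policy document showing each element described above.
# The account ID (111122223333), user (barkley), and bucket are placeholders.
policy = {
    "Version": "2012-10-17",                  # policy language version
    "Statement": [{
        "Sid": "AllowBarkleyBucketRead",      # optional label for the statement
        "Effect": "Allow",                    # Allow or Deny
        "Principal": {"AWS": "arn:aws:iam::111122223333:user/barkley"},
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-bucket",    # the bucket itself (for listing)
            "arn:aws:s3:::example-bucket/*",  # the objects inside it
        ],
    }],
}

print(json.dumps(policy, indent=2))
```

Note the bucket ARN and the `/*` object ARN are separate resources; listing a bucket and reading its objects are checked against different ARNs.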
In this follow-along we're going to take a closer look at IAM policies, so go to the top, type in IAM, and make your way all the way over to Policies. What I want to do is create a new policy that only has access to limited resources; let's say we want to create an Amazon EC2 instance, and that EC2 instance has access to one very particular S3 bucket. So make your way over to S3, and we're going to create ourselves a new bucket. I'm going to go ahead and create a bucket here; we're going to call it policytutorial and I'm going to put a bunch of numbers after it (you'll have to randomize it for your use case). Now that we have our bucket, we're going to go ahead and create the policy. The policy is going to choose a service, we're going to say S3, and I only want listing, so I'm going to expand the actions; I don't want everything, we're just going to say ListBucket, okay? Then we'll expand this here, and I want to say for a particular bucket, so we'll go back over, click into our bucket, and set those permissions by finding that ARN; we're going to paste that ARN up there (sometimes it's a bit tricky, it vanishes on you). We could set other conditions if we wanted to, but this is pretty simple as it is, and so that's our rule here: we're saying this policy allows us to list that bucket. So we'll go ahead and hit next, we'll hit review, we'll name it MyBucketPolicy, and we'll create that policy. There are a few other things I'd like to do with this policy, so I'm going to pull it back up. If we want to find it: you used to be able to filter these based on the ones you created, but now they show a little icon, so these are the ones I've created up here, and there's MyBucketPolicy. I feel like I want to update this policy to have a
bit of extra information here, so I'm going to go edit this policy... no, you know what, I think this is fine. What I want to do now is create a role, so we're going to create a new role, and before I name it I need to choose what it's for: it's going to be for EC2. We'll hit next and choose our policy, MyBucketPolicy, there it is, and I want to add another one, because I want to be able to use Session Manager (I really don't want to use an SSH key to check that this works). For this I need SSM, so I'm going to type in SSM here and make sure I pick the new one; one policy says it will soon be deprecated and to use AmazonSSMManagedInstanceCore instead (you should always open these up and read them to see what they do), and that's the one that's going to give us access to Systems Manager so we can use Session Manager, okay? We're going to name it my EC2 role for S3, and we go ahead and create ourselves a role. Now that we have our role, I'm going to go over to EC2 and launch myself a new instance. We're going to choose Amazon Linux 2, we're going to stick with t2.micro, I'll go over to configuration here, everything is fine, storage is fine, we'll go to the security group, and I don't want any ports open because I'm not going to be using SSH. We're going to launch this instance, and I don't even want a key pair, okay? Then, while we're waiting for this instance to launch, I want to go over to my S3 bucket and place something in it. I do have some files here, so what I'm going to do is create a new folder (whoops, I'm going to go back and just create the folder first): create folder, enterprise-d. I'm going to click into it and then upload all my images. You'll have to find your own images off the internet; this
is just what I have, and we'll go ahead and upload those and give that a moment, okay. We don't currently have access to read those files; we'll adjust our policy as we go so that we can, okay. So this instance should be running; it doesn't have the two status checks passed yet, but we should be able to connect to it, so click on Connect here. We have options like EC2 Instance Connect, Session Manager, SSH client; I want you to go to Session Manager. It says we weren't able to connect to your instance, and the common reasons are: the SSM Agent isn't installed (we absolutely have that installed) or the required IAM instance profile isn't attached... oh right, we were supposed to attach an instance profile, and I forgot to do it. An instance profile holds the role that's going to give the permissions to that instance, and since we didn't add it, we've got to go add it retroactively after the fact. So I'm going to modify the IAM role, choose my EC2 role for S3, and save that. When that happens, you have to reboot the machine; you only have to do that if you had no profile attached before and you're attaching one for the first time, but after that you never have to reboot the machine, this is the only case where you'd have to. That's why, when I launch an EC2 instance, I always at least attach the SSM role (the managed one that gets Session Manager), so that I never have to do a reboot in case I have to update the policy. So we'll give that a moment; it says initializing, so I'm going to try again to connect to it, okay, and we still don't have that option there. I'm going to go back to my instances and check whether the role, or the profile I should say, is attached; I'm just looking for it here, there it is, and if I click into the role we can see that we have AmazonSSMManagedInstanceCore there, so that's set up, and MyBucketPolicy, so this has
everything that it should need to work, no problem, okay. So I'm going to try that again, and now the connection option shows up; AWS is finicky like that, you just have to have confidence in knowing what you're doing is correct, okay. We'll go ahead and hit Connect. I didn't have to use SSH keys or anything, and this is a much more secure way to connect to your instances. When it logs us in, it's going to set us as the ssm-user, but we want to be the ec2-user; AWS always builds their AMIs, their Linux versions, around the ec2-user, and that's what you're supposed to use, so you have to type sudo su - ec2-user, just once. If you type whoami, that's who you are; if you type exit, you'll go back to the other user, so if I type exit and whoami, now I'm that other user again; I'm going to hit up, go back in, and type clear. Now I want to see if I have access to S3, so I'll do aws s3 ls and see if I can list buckets. It says Access Denied, and that kind of makes sense, because we granted ListBucket and scoped it to only that bucket, which might not make a whole lot of sense for listing all buckets. So I'm going to go back to my policy (I might have just written a crummy policy); we'll go to IAM, click on this policy here, and I'm going to edit it. What I'm going to do is just change it to all resources, review the policy, save changes, and we'll see how fast that propagates, okay? I'm pretty sure I don't have to do anything else; it should now give me full access to S3. I'm just going to keep hitting up here, and I'm going to take a three, four minute break, get a drink, come back, and see if this propagates; I'm pretty sure I don't have to do anything for that to propagate, and I think I've attached everything correctly. Okay, so I haven't had much luck here; it's
still having the same issue, so if that's happening, I'm just going to reboot it, because maybe I didn't give it a good opportunity to reboot. Again, I don't think we should have to reboot every time we change things, but we'll give it another go and see if that fixes the problem. That Session Manager session is going to time out, which is totally fine, it's going to kill that session, so we'll close it out because there's not much we can do with it, and we'll go back to Connect. We're waiting for this button to appear because it is rebooting; if we want to monitor that stuff, there's usually an option to monitor where it will show us the system logs of what it's doing. Here it's just restarting the machine, and I'm not sure if we expect to see something after this; it's so easy to get turned around. I can connect to it again now, so we'll type sudo su - ec2-user, then aws s3 ls, and we still have Access Denied for listing buckets. If that's the case, it could be that sometimes you need other permissions when listing buckets, so we're going to do a sanity check: I'm just going to say all permissions here, okay, and this way there's no way I've set it incorrectly; it just has to work now. Type it in... there we go, okay. So there has to be something more to it; just because you say ListBucket doesn't mean that covers everything, right? So if I go here and I say ListBucket, copy, paste, here it's saying maybe I need GetObject as well. I just know from using this for a long time that it can take more than one permission, so that was in the back of my mind as something that could be happening, and I guess it is. But notice I didn't have to reboot my server to get those permissions to work.
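One detail worth knowing behind that Access Denied: running aws s3 ls with no arguments calls s3:ListAllMyBuckets, which can only be granted with Resource "*" (it isn't scoped to one bucket), while object reads can stay locked to a specific bucket. A policy along those lines might look like this (the bucket name is a placeholder):

```python
import json

# Two statements: account-wide bucket listing, plus reads scoped to one bucket.
# "policytutorial-34141" is a placeholder bucket name.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListAllMyBuckets"],  # needed by `aws s3 ls`
            "Resource": "*",                    # cannot be bucket-scoped
        },
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket", "s3:GetObject"],
            "Resource": [
                "arn:aws:s3:::policytutorial-34141",
                "arn:aws:s3:::policytutorial-34141/*",
            ],
        },
    ],
}

print(json.dumps(policy, indent=2))
```

Splitting the statements this way keeps the broad "*" grant limited to the one action that genuinely needs it.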
Anyway, let's go lock that down and see if we can make this more focused. So instead of all resources, I might want to say particular buckets, so we'll say Specific; when you checkbox everything, you have to fill all of these in. Some of these are fine as Any; for others, like any multi-region access point or any bucket, that could be something, but what I'm going to say is I only want to allow access to things in a particular bucket. So if I go to the ARN here: what is our bucket name? Our bucket name is policytutorial-34141 or whatever, and we can give it a wildcard, or we can say enterprise-d; we learned in the course that you can provide ARNs with wildcarded parts like that. I don't know if I spelled it wrong over here, so I should really double-check; I should probably just copy it (oops, I still want to type it wrong). So this means we should only be able to get stuff from there. I'm going to review the policy, see if it takes, and save the changes, and if I view the JSON, notice it says: allow S3 anything, as long as it's within here, and then it also broke it up into sub-statements down here, okay. Anyway, what I want to see is what happens if I upload something into the loose area of the bucket, so I'm going to say upload, add a file, and we're just going to grab data.jpg here and upload it. Go back to our bucket; there's our file, we have that stuff in there. So if I go back over to my EC2 instance, which I'm still connected to: whoami, okay, great, clear. I'm going to say aws s3 ls and see if that still works; it does, good. Now I want to see if I can copy a file locally, so I'm going to do aws s3 cp (I think it was s3api... no, it's just s3 cp) s3://policytutorial-34141/enterprise-d/data.jpg, and I
think it's a .jpg; let's go double-check, yeah it is, okay. Then I just say data.jpg as the destination, and it downloaded it, right? So I'm going to remove that one, and now I'm going to see if my policy is working, or whether my permissions aren't exactly what I think they are... and I was able to download it. These policies can get kind of tricky, because this one says allow all actions for these resources, but then these say all actions, and that makes it hard when all I want is GetObject. So another thing we can do, if that one doesn't work well, is just write one by hand; it's not that scary to write these by hand, you just get used to it. I'm going to say Effect... is it disallow, or maybe it's deny? Deny. Action: s3:GetObject, I believe that's what it is. Resource: and then I'm going to specify exactly the resource I don't want to allow, so we'll say arn:aws:s3:::policytutorial-34141/data.jpg. If this is not valid, it's going to complain and say, hey, you didn't write this right... and it's fine, okay. So we'll save those changes, and that should deny access to that file, hopefully I got the policy right. Okay, so that one doesn't work now, which is what we wanted, and the other one is still fine, so that worked: we were able to deny that. You can see there's a little bit of an art to creating these policies; as you make more of them, it becomes a lot easier, so hopefully it's not too scary, but that's all there really is that I want to show you today. So what we're going to do is clear out this bucket; we're done with it, so we'll say delete (whoops, we've got to empty it first), and we'll say permanently delete, okay, and exit that out. We're going to go ahead and delete that bucket, grab its name here, and go back over here. I think I forgot to delete a bucket from earlier; I'm just going to delete that too because I don't need it.
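That deny statement worked because of a core IAM evaluation rule: an explicit Deny always overrides any Allow, and with no matching statement at all the default is an implicit deny. Here's a toy model of just that rule (greatly simplified; real evaluation also involves SCPs, permission boundaries, and resource policies):

```python
# Toy model of IAM's allow/deny resolution. Statements here are simplified
# dicts with exact-match Action and Resource lists (no wildcards or ARNs).
def evaluate(statements, action, resource):
    decision = "ImplicitDeny"                # default when nothing matches
    for stmt in statements:
        if action in stmt["Action"] and resource in stmt["Resource"]:
            if stmt["Effect"] == "Deny":
                return "ExplicitDeny"        # an explicit Deny always wins
            decision = "Allow"
    return decision

stmts = [
    {"Effect": "Allow", "Action": ["s3:GetObject"],
     "Resource": ["bucket/a.jpg", "bucket/b.jpg"]},
    {"Effect": "Deny", "Action": ["s3:GetObject"],
     "Resource": ["bucket/a.jpg"]},
]

print(evaluate(stmts, "s3:GetObject", "bucket/a.jpg"))  # ExplicitDeny
print(evaluate(stmts, "s3:GetObject", "bucket/b.jpg"))  # Allow
```

This is why a hand-written Deny on one object can "punch a hole" in a broad Allow, exactly as in the lab above.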
We still have that EC2 instance running, so we want to stop it; we'll go ahead and terminate it, yes please, and then we'll go to IAM and do some cleanup. I have some custom roles I've been creating from prior things; there's usually a way... oh, they've redesigned it, okay, where's the redesign... that can't be it, because those would be roles that AWS makes. I think these are all roles that I've made; I don't want to delete service roles, but I want to get rid of some of these because I just have too many, it's getting out of hand. I'm going to see if it will let me delete all of these; delete those, there we go, just cleaning up a bit. I still have a lot here, but there are service roles that AWS creates once, and you really don't want to delete those. Then I have a bunch of these I'm never going to use, so I might as well detach and delete them; you really don't want to keep roles around that you're never going to use, things like that, although this one we'll be using again. There's that bucket we just created... anyway, you get the idea, so yeah, that's IAM, okay. The principle of least privilege (PoLP) is the computer security concept of providing a user, role or application the least amount of permissions needed to perform an operation or action. One way to look at it is that we have just enough access (JEA): permitting only the exact actions the identity needs to perform its task. Then we have just in time (JIT): permitting the smallest length of duration for which an identity can use a permission. Usually when we talk about PoLP the focus is on JEA, but these days there's a larger focus on JIT as well, and JIT is the difference between having long-lived permissions or access keys versus short-lived ones. The most progressive thing in PoLP now is risk-based adaptive policies.
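As a toy illustration of JEA and JIT working together: a grant carries only the permissions needed plus an expiry, and every check re-validates both. This is not an AWS API (STS temporary credentials work on a similar expiry idea); the function names here are made up.

```python
from datetime import datetime, timedelta, timezone

# Toy just-in-time grant: minimal permissions (JEA) with a short lifetime (JIT).
def grant(permissions, lifetime_minutes=15):
    return {
        "permissions": set(permissions),
        "expires_at": datetime.now(timezone.utc)
                      + timedelta(minutes=lifetime_minutes),
    }

def is_allowed(g, action):
    if datetime.now(timezone.utc) >= g["expires_at"]:
        return False                        # the grant has lapsed (JIT)
    return action in g["permissions"]       # only what was granted (JEA)

g = grant(["s3:GetObject"], lifetime_minutes=15)
print(is_allowed(g, "s3:GetObject"))   # True, within the window
print(is_allowed(g, "s3:PutObject"))   # False, never granted
```

The point is simply that expiry is checked on every call, so nothing needs to go around revoking access later.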
Each attempt to access a resource generates a risk score of how likely the request is to be from a compromised source, and the risk score could be based on many factors, such as device, user location, IP address, what service is being accessed and when, whether they used MFA, whether they used biometrics, things like that. Right now, AWS does not have risk-based adaptive policies built into IAM; you'd have to roll your own. What's interesting is that Cognito has risk-based adaptive policies (they call it adaptive authentication), but that's for user pools and not identity pools; user pools in Cognito are for getting access to an app that you built through an IdP, whereas identity pools are about getting access to AWS resources. So I'm sure AWS will get it eventually, but they just don't have it right now, and you have to rely on third-party identity solutions to get risk-based adaptive policies. Now, talking about just enough access and just in time: you might think, how would you do JIT with AWS, just add and remove permissions manually? One thing you could do is use something like ConsoleMe. This is an open-source Netflix project to self-serve short-lived IAM policies, so an end user can access AWS resources while enforcing JEA and JIT, and there's a repo for it as well. The idea is they have this self-serve wizard where you say, I want these things, and then the machine decides, okay, you can have them, or you don't need them, and it frees you up from asking people and worrying about the length of access and things like that, okay. Hey, this is Andrew Brown from ExamPro, and we are taking a look at the AWS root user, and this gets confusing because there's the AWS account, the root user, and regular users, so let's distinguish what those three things are. Here we have an AWS account, the account which holds all the AWS resources, including the different types of users. Then you have the root user; this is a special user with full access that cannot be deleted, and
then you have just a user: a user for common tasks that is assigned permissions. Just understand that sometimes when people say AWS account, they're actually referring to the root user, and sometimes they're referring to the AWS account that holds the users; I know it's confusing, it's just based on the context people decide on when they're speaking. The AWS root user is a special user created at the time of AWS account creation, and there are a lot of conditions around it. The root user uses an email and password to log in, as opposed to a regular user, who provides their account ID or alias, username and password. The root user cannot be deleted. The root user has full permissions to the account, and its permissions cannot be limited; by cannot be limited, we mean that using an IAM policy to explicitly deny the root user access to resources is not something you can do. However, you can do it in the case of AWS Organizations with service control policies, because a service control policy applies to a whole set of accounts; it sits one level above, so that is a way of limiting root users, but generally you can't limit them within their own account. There can only be one root user per AWS account. The root user is instead intended for very specific and specialized tasks that are infrequently or rarely performed, and there's a big list, which we'll get into in a moment. The AWS root account should not be used for daily or common tasks. It's strongly recommended to never use the root user's access keys (you can generate and use them, but shouldn't), and it's strongly recommended to turn on MFA for the root user; AWS will bug you to no end to tell you to turn it on. So let's talk about the tasks that you should be performing with the root user, and that only the root user can perform: changing your account settings
(this includes account name, email address, root user password and root user access keys; other account settings, such as contact information, payment currency preference and regions, do not require the root user credentials, so not everything); restoring IAM user permissions (if an IAM admin, just a user that has admin access, revokes their own permissions, you can sign in as the root user to edit policies and restore those permissions); activating IAM access to the Billing and Cost Management console; viewing certain tax invoices; closing your AWS account; changing or canceling your AWS Support plan; registering as a seller in the Reserved Instance Marketplace; enabling MFA Delete on S3 buckets; editing or deleting an Amazon S3 bucket policy that includes an invalid VPC ID or VPC endpoint ID; and signing up for GovCloud. And something that's not in here (I took this list from the documentation): you can use the root user to create an AWS organization; you can't create that with any other user. The ones I highlighted in red are very likely to show up on your exam, and that's why I highlighted them for you, but there you go. Hey, this is Andrew Brown from ExamPro, and we are taking a look at AWS Single Sign-On, also known as AWS SSO. This is where you create or connect your workforce identities in AWS once and manage access centrally across your AWS organization. The idea here is you're going to choose your identity source, whether it's SSO itself, Active Directory, or a SAML 2.0 IdP; you're going to manage user permissions centrally to AWS accounts, applications and SAML applications; and users get single-click access to all these things. Just to zoom in on this graphic: you have your on-premise Active Directory, it's establishing an AD Trust connection over to AWS Single Sign-On, and you're going to be able to apply permissions to access resources within your AWS accounts
via AWS Organizations and your organizational units, down to your resources. You can also use AWS SSO to access custom SAML-based applications, so if I built a web app like the ExamPro platform and wanted to use SAML-based connections for single sign-on there, I could do that as well, and you can even connect SSO access to your business cloud applications such as Office 365, Dropbox, and Slack. So there you go.

Let's take a look here at application integration. This is the process of letting two independent applications communicate and work with each other, commonly facilitated by an intermediate system. Cloud workloads strongly encourage systems and services to be loosely coupled, and so AWS has many services for the specific purpose of application integration. These are based around common systems or design patterns that utilize application integration: things like queueing, streaming, pub/sub, API gateways, state machines, and event buses. I'm sure there are more, but those are the most common ones I could think of, okay.

To understand queueing we first need to know what a messaging system is. A messaging system is used to provide asynchronous communication and decouple processes via messages and events, from a sender and a receiver, or a producer and a consumer. A queueing system is a messaging system that generally will delete messages once they are consumed. It's for simple communication, it's not real-time, you have to poll for the data, so it's not reactive. A good analogy is people queueing in a line to go do something. For AWS, the service is called Simple Queue Service, SQS: a fully managed queueing service that enables you to decouple and scale microservices, distributed systems, and serverless applications. A very common use case in a web application would be to queue up transactional emails to be sent, like sign-up and reset-password emails. The reason we use queueing to decouple those kinds of actions is that if you had a long-running task and too many of them, it could hang your application; by decoupling them and letting a separate compute service take care of that work, you avoid the problem, okay.

Let's take a look here at streaming. This is a different kind of messaging system, and the idea here is that you have multiple consumers that can react to events. In streaming we call messages "events," whereas in a queueing system we just call them messages. Events live in the stream for long periods of time, so complex operations can be applied, and generally streaming is used for real-time work, whereas queueing is not necessarily real-time. AWS's solution here is Amazon Kinesis; you could also use Kafka, but we'll focus on Kinesis. Amazon Kinesis is the AWS fully managed solution for collecting, processing, and analyzing streaming data in the cloud. The idea is that you have producers that are producing events (they could be EC2 instances, mobile devices, a computer or traditional server), and those events go into the data stream, which is made up of a bunch of shards that scale, and there are consumers on the other side: maybe Redshift wants that data, or DynamoDB, S3, or EMR. The thing you have to remember is that streaming is for real-time data, and as you can imagine, because it's real-time and doing a lot more work than a queueing system, it's going to cost more, okay.

We have another type of messaging system known as pub/sub, which stands for the publish-subscribe pattern, commonly implemented in messaging systems. In a pub/sub system the senders of messages, the publishers, do not send their messages directly to receivers. They instead send their messages to an event bus; the event bus categorizes the messages into groups, and the receivers of messages, the subscribers, subscribe to these groups. Whenever new messages appear within their subscriptions, the messages are immediately delivered to them.
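To make the queue-versus-stream distinction above concrete, here is a minimal sketch in plain Python. This is illustrative only, not AWS code: the class and method names are invented, but the behavior mirrors what was just described, where a queue deletes a message once one consumer takes it, while a stream retains events so multiple consumers can each read the full history.

```python
from collections import deque

class Queue:
    """SQS-style: one consumer takes the message, then it is gone."""
    def __init__(self):
        self._messages = deque()
    def send(self, msg):
        self._messages.append(msg)
    def poll(self):
        # Consumers must poll; a consumed message is deleted.
        return self._messages.popleft() if self._messages else None

class Stream:
    """Kinesis-style: events are retained; each consumer tracks its own position."""
    def __init__(self):
        self._events = []
        self._positions = {}
    def put(self, event):
        self._events.append(event)
    def read(self, consumer):
        # Each consumer reads from where it left off; events are not deleted.
        pos = self._positions.get(consumer, 0)
        new = self._events[pos:]
        self._positions[consumer] = len(self._events)
        return new

q = Queue()
q.send("reset-password email")
assert q.poll() == "reset-password email"
assert q.poll() is None  # consumed messages are gone

s = Stream()
s.put("click")
s.put("page-view")
assert s.read("redshift") == ["click", "page-view"]
assert s.read("s3") == ["click", "page-view"]  # a second consumer still sees everything
```

Notice that the second stream consumer still receives every event, which is exactly why streams suit multiple reactive consumers and queues suit one-shot background work.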
So the idea is you have publishers, an event bus, and subscribers. Event buses appear more than once: they appear in streaming, they appear in this pub/sub model, and they can appear in other variations, so you're going to hear the term "event bus" again. The key points here are that the publisher has no knowledge of who the subscribers are, the subscribers do not poll for messages, messages are immediately and automatically pushed to the subscribers, and "messages" and "events" are interchangeable terms in pub/sub. For the publisher-subscriber idea, just imagine getting a magazine subscription; if you think of that, you get a sense of the mechanics going on here. In terms of practicality, it's very common to use pub/sub for a real-time chat system or a webhook system, so hopefully that gives you an idea. In terms of AWS's solution, we're using Simple Notification Service, SNS. This is a highly available, durable, secure, fully managed pub/sub messaging service that enables you to decouple microservices, distributed systems, and serverless applications. Here we have a variety of publishers like the SDK, the CLI, CloudWatch, and other AWS services; you have your SNS topic, where you can filter messages and fan them out; and then you have your subscribers: Lambda, SQS, email, HTTPS endpoints. It looks very similar to streaming, but there's not a lot of communication going back and forth; it's just publishers and subscribers, and it's limited to these integrations, so it's a very managed service, whereas with Kinesis you can do a lot more, okay.

So what is an API gateway? It is a program that sits between a single entry point and multiple backends. An API gateway allows for throttling, logging, routing logic, or formatting of the request and response, and when we say request and response, we're talking about HTTPS requests and responses. The service for AWS is called Amazon API Gateway.
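The push-based pub/sub behavior just described can be sketched in a few lines of Python. This is a hypothetical, in-memory model, not the SNS API: the publisher only knows the topic, and the topic immediately pushes each message to every subscriber with no polling.

```python
class Topic:
    """Minimal pub/sub topic: publishers send here, never to subscribers directly."""
    def __init__(self):
        self._subscribers = []
    def subscribe(self, callback):
        self._subscribers.append(callback)
    def publish(self, message):
        for deliver in self._subscribers:  # pushed immediately, no polling
            deliver(message)

inbox, queue = [], []
topic = Topic()
topic.subscribe(inbox.append)   # e.g. an email endpoint
topic.subscribe(queue.append)   # e.g. fan-out into an SQS queue
topic.publish("order-created")
assert inbox == ["order-created"] and queue == ["order-created"]
```

The fan-out at the end is the same idea as subscribing both a Lambda and an SQS queue to one SNS topic: one publish, every subscriber receives it.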
API Gateway is just a type of pattern, and this is one of the few cases where AWS has named the service after exactly what it is. So we have Amazon API Gateway, which is a solution for creating secure APIs in your cloud environment at any scale: you create APIs that act as a front door for applications to access data, business logic, or functionality from backend services. The idea is that you have requests coming in from mobile apps, web apps, and IoT devices; you define the API calls, and then you say where you want them to go. Maybe your tasks routes go to your Lambdas, and other routes go to RDS, Kinesis, EC2, or your web application. These are really great for being able to define your API routes, change them on the fly, and always route them to the right place, okay.

So what is a state machine? It is an abstract model which decides how one state moves to another based on a series of conditions; think of a state machine like a flow chart. For AWS, the solution here is AWS Step Functions: it coordinates multiple AWS services into a serverless workflow, with a graphical console to visualize the components of your application as a series of steps. It automatically triggers and tracks each step and retries when there are errors, so your application executes in order, as expected, every time. It logs the state of each step, so when things go wrong you can diagnose and debug problems quickly. Here's an example of chaining a bunch of steps together in the AWS Step Functions service. This is generally applied to serverless workflows, but it is something that is very useful in application integration, okay.

So what is an event bus? An event bus receives events from a source and routes events to a target based on rules. I'll get my pen tool out here: we have an event, it enters the event bus, rules tell it to go to the target, and it's that simple. We have been seeing event buses in other patterns, like streaming and pub/sub.
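The state-machine idea just described (a flow chart of states, with automatic retries on error, the way Step Functions behaves) can be sketched in Python. This is an illustrative model, not the Step Functions API; the state names and handlers are invented.

```python
def run(state_machine, state, data, max_retries=2):
    """Walk a flow chart of states; each step may be retried on error."""
    while state is not None:
        handler, next_state = state_machine[state]
        for attempt in range(max_retries + 1):
            try:
                data = handler(data)  # each step transforms the data
                break
            except Exception:
                if attempt == max_retries:
                    raise  # retries exhausted, surface the failure
        state = next_state  # move to the next state in the chart
    return data

# A two-step workflow: validate the input, then "save" it.
machine = {
    "validate": (lambda d: d.strip(), "save"),
    "save":     (lambda d: {"saved": d}, None),  # None marks the terminal state
}
assert run(machine, "validate", "  hello ") == {"saved": "hello"}
```

Each step's output feeds the next step, which is the same "share data between Lambdas, in order, every time" behavior the service provides.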
But AWS has a higher-level event bus offering called EventBridge. It is a serverless event bus service used for application integration by streaming real-time data to your applications. The service was formerly known as Amazon CloudWatch Events; AWS renamed it to give users a better chance of knowing it's there to use, and they also extended its capabilities. The thing is, a lot of AWS services are always emitting events, and those events are already going into this bus, so if you utilize this service it's a lot easier than having to roll your own solution with other services. With Amazon EventBridge you just define an event bus. So: there is an event bus, which holds event data; you define rules on an event bus to react to events; you always get a default event bus for every single AWS account; you can create custom event buses scoped to multiple accounts or other AWS accounts; you have a SaaS event bus scoped to third-party SaaS providers; you have producers, which are AWS services that emit events; you have events, which are data emitted by services, JSON objects that travel through the event bus; you have partner sources, which are third-party apps that can emit events to event buses; you have rules, which determine what events to capture and pass to targets; and then targets, which are AWS services that consume events. So there's all this great built-in machinery going on here, and there may well be cases where you can use EventBridge and save yourself a lot of time and effort doing application integration, okay.

Hey, this is Andrew Brown from ExamPro, and we are taking a look at application integration services at a glance, so let's get through them. The first is Simple Notification Service, SNS. This is a pub/sub messaging system: it sends notifications via various formats such as plain-text email, HTTPS webhooks, SMS text messages, SQS, and Lambda, and it pushes messages out to subscribers.
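The event-bus flow described a moment ago (events in, rules that match, targets that consume) can be sketched in Python. This is a toy model in the EventBridge style, not the EventBridge API: events are plain dicts, a rule is a pattern of key/value pairs, and any matching event is routed to the rule's target.

```python
class EventBus:
    """Tiny event bus: rules decide which events reach which targets."""
    def __init__(self):
        self._rules = []  # list of (pattern, target) pairs
    def add_rule(self, pattern, target):
        self._rules.append((pattern, target))
    def put_event(self, event):
        for pattern, target in self._rules:
            # A rule matches when every key in the pattern matches the event.
            if all(event.get(k) == v for k, v in pattern.items()):
                target(event)

stopped = []
bus = EventBus()
bus.add_rule({"source": "ec2", "detail": "stopped"}, stopped.append)
bus.put_event({"source": "ec2", "detail": "stopped", "id": "i-123"})
bus.put_event({"source": "s3", "detail": "created"})  # no rule matches, dropped
assert stopped == [{"source": "ec2", "detail": "stopped", "id": "i-123"}]
```

Real EventBridge rules use a richer JSON pattern syntax, but the shape is the same: producers put events, rules filter, targets consume.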
Then you have SQS: this is a queueing messaging service that sends messages to a queue; other applications poll the queue for messages, and it's commonly used for background jobs. We have Step Functions: a state machine service that coordinates multiple AWS services into a serverless workflow; you can easily share data among Lambdas, have a group of Lambdas wait for each other, and create logical steps, and it also works with Fargate tasks. We have EventBridge, formerly known as CloudWatch Events: a serverless event bus that makes it easy to connect applications together, whether from your own applications, third-party services, or AWS services. Then there's Kinesis: a real-time streaming data service where you create producers that send data to a stream, multiple consumers can consume data within a stream, and it's used for real-time analytics, click streams, and ingesting data from a fleet of IoT devices. You have Amazon MQ: a managed message broker service that uses Apache ActiveMQ, so if you want to use Apache ActiveMQ, there it is. There's Managed Streaming for Kafka, and this gets me every time because it's abbreviated MSK, which is the proper initialism, though you'd think it would be MKS. It is a fully managed Apache Kafka service; Kafka is an open-source platform for building real-time streaming data pipelines and applications, similar to Kinesis but more robust, and very popular by the way. We have API Gateway: a fully managed service for developers to create, publish, maintain, monitor, and secure APIs; you can create API endpoints and route them to AWS services. We have AppSync: a fully managed GraphQL service, and GraphQL is an open-source, data-source-agnostic query language that allows you to query data from many different data sources. So there you go.

Hey, this is Andrew Brown from ExamPro, and we are comparing virtual machines to containers. I know we covered this before, but I just want to do it one more time to make sure we fundamentally understand the difference before we jump into containers.
So the idea is that if you were to request an EC2 instance, it runs on a host operating system that we don't really know much about, and we don't really need to. You have a hypervisor, which allows you to deploy virtual machines, so when you launch an EC2 instance you're actually launching a VM on top of a hypervisor on a server within the AWS data centers. You choose an operating system, like Ubuntu, and it might come with some pre-installed packages, or you can install your own libraries, packages, and binaries. Then you decide what kind of workloads you want to run on there: it could be Django, MongoDB for your database, and some kind of queueing system like RabbitMQ. The difficulty with virtual machines is that you always end up with some unused space, because you want headroom to make sure that if Django needs more memory, or MongoDB needs more storage, you have room to grow into. But you're always paying for that headroom even when you're not utilizing it, and that can be less cost-effective than you'd like. When we look at doing this again with containers, instead of the hypervisor we have container virtualization (a very common one being the Docker daemon, for Docker of course), and now you're launching containers. Maybe you have Alpine for your web app, and you install exactly the libraries, packages, and binaries you need for that; for MongoDB you want a different OS and different packages; and same thing with RabbitMQ, where maybe you want to run it on FreeBSD. The idea is that you don't have that waste, because containers are flexible: they can expand or shrink based on what they actually need, and if you use particular services like AWS Fargate, you're paying for running the containers, not for over-provisioning, okay. So: VMs do not make the best use of space, and apps are not isolated, which can cause config conflicts, security problems, or resource hogging. Containers allow you to run multiple apps that are virtually isolated from each other: you launch new containers and configure OS dependencies per container, okay.

Hey, this is Andrew Brown from ExamPro, and we are taking a look at the concept of microservices. To understand microservices we first need to understand monoliths, or monolithic architecture. The idea here is that we have one app which is responsible for everything, and the functionality is tightly coupled. I'm going to get my pen tool out here just to highlight: notice that there is a server, and everything is running on that single server, whether it's load balancing, caching, the database, maybe the marketing website, the front-end JavaScript framework, the backend with its API, the ORM, connected background tasks, things like that. That's the idea of a monolith, and that's what a lot of people are used to doing. The idea with microservice architecture is that you have multiple apps, each responsible for one thing, and the functionality is isolated and stateless. Just by leveraging various cloud services, or bolting them onto your service, you are technically using microservice architecture: maybe your web app is hosted in containers (your APIs, your ORM, your reports), maybe you've abstracted out particular functions into Lambda functions, you have your marketing website hosted on S3, your front-end JavaScript hosted on S3 as well, and you're now using Elastic Load Balancer, ElastiCache, RDS, and SQS. That's the difference between monoliths and microservices, okay.

Let's take a look here at Kubernetes, which is an open-source container orchestration system for automating the deployment, scaling, and management
of containers. It was originally created by Google and is now maintained by the Cloud Native Computing Foundation, the CNCF. Kubernetes is commonly called K8s, where the 8 represents the remaining letters of the word, which is odd because everyone still says "Kubernetes" with the s on the end, but that's just what it is. The advantage of Kubernetes over Docker alone is the ability to run containers distributed across multiple VMs. A unique component of Kubernetes is the pod: a pod is a group of one or more containers with shared storage, network resources, and other shared settings. Here's an example: you have your Kubernetes master, which has a scheduler, a controller, and etcd; it uses an API server to run nodes; within the nodes we have pods, and within the pods we have containers. Kubernetes is ideal for microservice architectures where a company has tens to hundreds of services they need to manage, and I need to really emphasize that: tens to hundreds of services. Kubernetes is great, but understand that it is really designed for massive numbers of microservices; if you don't have that need, you might want to look at something easier to use, okay.

All right, let's take a look here at Docker, which is a set of platform-as-a-service products that use OS-level virtualization to deliver software in packages called containers. Docker was the earliest popularized open-source container platform, meaning there are lots of tutorials and a lot of services that integrate with Docker or make it really easy to use, so when people think of containers they generally think of Docker. There are of course many more options out there for running containers, but this is what people think of. We said it's a suite of tools, so the idea is you have the Docker CLI, which gives you CLI commands to download, upload, build, run, and debug containers; a Dockerfile, a configuration file describing how to provision a container; Docker Compose, which is a tool and configuration file for working with multiple containers; Docker Swarm, an orchestration tool for managing deployed multi-container architectures; and Docker Hub, a public online repository for containers published by the community for download. One really interesting thing that came out of Docker is the Open Container Initiative, OCI, which is an open governance structure for creating open industry standards around container formats and runtimes. Docker established the OCI and it is now maintained by the Linux Foundation. The idea is that you can write a Dockerfile, or do things very similarly, with different technologies that run containers: as long as they're OCI-compatible, you can use them. Docker has been losing favor with developers due to their handling of introducing a paid open-source model, and alternatives like Podman are growing, which is why we're going to talk about Podman next, okay.

So let's take a quick look here at Podman, which is a container engine that is OCI-compliant and is a drop-in replacement for Docker. I just want to give you some exposure here, because I want you to know that you can use it instead of Docker. There are a few differences, or advantages, that Podman has: Podman is daemonless, where Docker uses the containerd daemon; Podman allows you to create pods like Kubernetes, where Docker does not have pods; and Podman only replaces one part of Docker, since it is meant to be used alongside Buildah and Skopeo. Docker is an all-in-one tool: everything is done via a single CLI and everything is there, but the Podman project wanted to make it more modular, hence these other tools. Any time you say Podman, it usually means we're talking about Podman, Buildah, and Skopeo: Buildah is a tool used to build OCI images, and Skopeo is a tool for moving container images between different types of container storage. Podman is not going to show up on your exam, but it's worth knowing for your own practical benefit, okay.
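Since Dockerfiles and OCI compatibility came up above, here is what a Dockerfile actually looks like. This is a hypothetical example for a small Python web app (the `app.py` and `requirements.txt` file names are assumptions), and because both tools build OCI-compliant images, `podman build` accepts the same file as `docker build`.

```dockerfile
# Hypothetical image for a small Python web app.
FROM python:3.12-alpine
WORKDIR /app
# Install only the exact dependencies this container needs.
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
# Copy the application code and define how the container starts.
COPY . .
CMD ["python", "app.py"]
```

You would build it with `docker build -t myapp .` or, as a drop-in replacement, `podman build -t myapp .`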
Let's take a look here at the container services offered on AWS. We have primary services that actually run containers, tooling around provisioning and deployment, and supporting services. The first here is Elastic Container Service, ECS. The advantage of this service is that it has no cold starts, but it is self-managed EC2, which means you're always paying for the resource while it's running. Then you have AWS Fargate: this is more robust than using AWS Lambda, it can scale to zero cost, and the underlying EC2 is managed by AWS; however, it does have cold starts, so if you need containers launching really fast, you might want to use ECS. Then you have Elastic Kubernetes Service, EKS: this runs open-source Kubernetes, and it's really useful if you want to avoid vendor lock-in (which is not really a problem), or maybe you just want to run Kubernetes. Then you have AWS Lambda, where you only think about the code. It's designed for short-running tasks; if you need something serverless that runs longer, you'd use AWS Fargate, which is serverless containers. You can also deploy custom containers on Lambda: previously AWS Lambda just had pre-built runtimes, which were themselves containers, but now you can create any kind of container and use it on AWS Lambda. For provisioning and deployment you can use Elastic Beanstalk, which can deploy Elastic Container Service for you, which is very useful. Now there's also App Runner, which somewhat overlaps with what Elastic Beanstalk does, but it specializes in containers. I don't know exactly what it uses underneath, because it is a managed service: Elastic Beanstalk is open, so you can see what is running underneath, but with App Runner I don't believe you can see what is running underneath; it's just taken care of by AWS. Then there's the AWS Copilot CLI, and this
allows you to build, release, and operate production-ready containerized applications on App Runner, ECS, and Fargate. For supporting services you have Elastic Container Registry, a repository for your containers, and not necessarily just Docker containers but containers in general, presumably OCI-compliant containers; X-Ray, to analyze and debug between microservices, so distributed tracing; and then Step Functions, to stitch together Lambdas and ECS tasks to create a state machine. The only thing I don't have on here is launching an EC2 instance from the Marketplace that has a container runtime like Docker installed. I just don't feel that's very relevant for the exam, but it is another option for containers, though not something people do very often. There you go.

Hey, this is Andrew Brown from ExamPro, and we're taking a look here at organizations and accounts. AWS Organizations allows the creation of new AWS accounts and lets you centrally manage billing, control access, compliance, and security, and share resources across your AWS accounts. Here's a bit of the structure of an AWS Organizations architecture, and we'll walk through the components. The first thing you have is a root account user: this is a single sign-in identity that has complete access to all AWS services and resources in an account, and each account has a root account user. Generally you will have a master, or root, account, and within that you'll have a root account user; and for every additional account that you have, you'll notice over here, there's also a root account user. Then there's the concept of organizational units, commonly abbreviated to OUs. These are groups of AWS accounts within an organization which can contain other organizational units, creating a hierarchy. Here is one called Starfleet and one called Federation Planets, and underneath we have multiple AWS accounts within each organizational unit; and even though it's not shown here, you can create an organizational unit within an organizational unit. Then we have service control policies, SCPs. These give central control over the allowed permissions for all AWS accounts in your organization, helping to ensure your accounts stay within your organizational guidelines. What they're trying to say here is: there's this concept of AWS IAM policies, and all you're doing is creating a policy that's going to be organizational-unit-wide, organization-wide, or for select accounts. It's just a way of applying IAM policies across multiple accounts. AWS Organizations must be turned on, and once it's turned on it cannot be turned off. It's generally recommended that you do turn it on, because if you're going to run any kind of serious workload, you're going to be using AWS Organizations to isolate your AWS accounts based on workloads. You can create as many AWS accounts as you like; one account will be the master, or root, account. I say root account here because this is the new language; some of the documentation still calls it the master account, so understand that this is the root account, not to be confused with the root account user. Another clarification I want to make: an AWS account is not the same as a user account, which is another thing that is confusing. When you sign up for AWS you get an AWS account, and then it creates you a user account, which happens to be a root user account. Hopefully that is clear.

AWS Control Tower helps enterprises quickly set up a secure AWS multi-account environment. It provides you with a baseline environment to get started with a multi-account architecture, and it does this in a few different ways. The first thing it provides is a landing zone: a baseline environment following well-architected best practices, ready for launching production workloads.
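Stepping back to the SCPs mentioned above for a second: an SCP is just an IAM-style JSON policy document applied across accounts. A minimal example (based on the documented policy syntax, with the `Sid` chosen here purely for illustration) that denies member accounts the ability to remove themselves from the organization might look like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyLeavingOrganization",
      "Effect": "Deny",
      "Action": "organizations:LeaveOrganization",
      "Resource": "*"
    }
  ]
}
```

Attached at the root or at an OU, a deny like this applies to every account underneath it, which is exactly the "one policy across multiple accounts" idea.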
Imagine you want an environment that you know is secure, correctly configured, and has good logging in place: that's what a landing zone is. AWS's landing zone for Control Tower is going to have SSO enabled by default, so it's very easy to move between AWS accounts; it will have centralized logging for AWS CloudTrail, so your logs are tamper-evident and kept away from your workloads where they can't be affected; and it will have cross-account security auditing. So yeah, landing zones are really great to have. Then there's the Account Factory. They used to call this a vending machine, but they changed it to Account Factory. The idea is that it automates the provisioning of new accounts in your organization and standardizes that provisioning with pre-approved account configurations: you can configure Account Factory with pre-approved network configurations and Region selections, and enable self-service for your builders to configure and provision accounts using AWS Service Catalog. AWS Service Catalog is just pre-approved workloads via CloudFormation templates; you create it to say, okay, you're allowed to launch this server or these resources. The third and most important thing that AWS Control Tower comes with is guardrails. These are pre-packaged governance rules for security, operations, and compliance that customers can select and apply enterprise-wide or to specific groups of accounts. AWS Control Tower is the replacement for the retired AWS Landing Zone. If you remember AWS Landing Zones, it was never a self-serve, easy thing to sign up for; it required a lot of money and setup, and they just don't really offer it anymore. AWS Control Tower is the new offering there, okay.

Hey, this is Andrew Brown from ExamPro, and we are taking a look at AWS Config. To understand AWS Config we need to know what compliance as code is, and to understand compliance as code we need to understand what change management is. Change management, in the context of cloud infrastructure, is when we have a formal process to monitor changes, enforce changes, and remediate changes. Compliance as code, also known as CaC, is when we utilize programming to automate the monitoring, enforcing, and remediating of changes in order to stay compliant with a compliance program or an expected configuration. So what is AWS Config? It's a compliance-as-code framework that allows us to manage change in our AWS accounts on a per-region basis, meaning you have to turn it on for every region you need it in. Here is a very simple example: let's say we create a Config rule, and we have an EC2 instance that we expect to be in a particular state; in the other case we have an RDS instance that's in a state we do not like, so the idea is that we try to remediate it and put it into the state we want it to be in. Those Config rules are just powered by Lambdas, as you can see from the Lambda icon there. So when should you use AWS Config? When you want a resource to stay configured a specific way for compliance; when you want to keep track of configuration changes to resources; when you want a list of all resources within a region; when you want to analyze potential security weaknesses; or when you need detailed historical information. So there you go.

Hey, this is Andrew Brown from ExamPro, and in this follow-along we're going to take a look at AWS Config. AWS Config is a tool that allows you to ensure that your services are configured as expected. I've already activated it in my North Virginia region, so what I'm going to do is go over to Ohio here, because it is activated per region, then go over to Config, and then we'll have to set it up. There is a one-click setup, and it skipped me to the review step because it's piggybacking on the configuration of my original region, but the idea is that you'll just say: record all resources in this region.
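Since Config rules are "just powered by Lambdas," the decision logic inside such a Lambda is easy to picture. Here is a sketch of only the evaluation part; the surrounding Config event handling and the call that reports results back to the service are omitted, the function name is invented, and the resource dictionary shape is illustrative rather than the exact shape Config delivers.

```python
def evaluate_s3_bucket(configuration):
    """Mark a bucket NON_COMPLIANT unless public access is fully blocked."""
    block = configuration.get("publicAccessBlockConfiguration", {})
    compliant = all(
        block.get(flag) is True
        for flag in ("blockPublicAcls", "blockPublicPolicy",
                     "ignorePublicAcls", "restrictPublicBuckets")
    )
    return "COMPLIANT" if compliant else "NON_COMPLIANT"

# A bucket with all four public-access blocks enabled passes:
locked_down = {"publicAccessBlockConfiguration": {
    "blockPublicAcls": True, "blockPublicPolicy": True,
    "ignorePublicAcls": True, "restrictPublicBuckets": True}}
assert evaluate_s3_bucket(locked_down) == "COMPLIANT"
# A bucket with no block configuration at all fails:
assert evaluate_s3_bucket({}) == "NON_COMPLIANT"
```

Whatever check you can express in code like this, Config can run on every configuration change or on a schedule, which is the compliance-as-code idea in practice.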
You'll also have to create a service-linked role if you have not done so (this will look a little different for you; here it's using my existing one), and you'll have to choose or create a bucket. It's not super complicated: you get through there, you hit confirm, and basically you end up with this. The inventory lets you see most of the resources (not quite all of them) that are in your AWS account in this particular region. It will not populate right away, so you will have to wait a little bit of time for that to appear. One really nice thing is conformance packs; I really love these. When AWS first brought these out there were only a couple, but now they have tons and tons of conformance packs. You can deploy a conformance pack and open up the templates, and I just want to show you how many they have: there are some you might recognize, like NIST, CIS, and Well-Architected ones. All of these are on GitHub, and if we open them up, they're just CloudFormation templates that set up Config rules. There's a variety of suggested rules, like ones around IAM best practices, that we can load in. The idea is that you're just going to create rules: you go here, you add a rule, and they have a bunch of managed rules we can look at, but I think it might be fun to actually run a conformance pack. I'll just show you what it looks like to add a rule first. Let's say we wanted to do something for S3, like making sure that we are blocking public access. We go next here; generally you'll have a trigger type, where you can choose whether it evaluates when a configuration change happens or periodically (that's disabled in this case), and you just scroll on down. Once you've added the rule, you can also manage remediation: if this rule says, hey, this thing is non-compliant, we want you to take a particular action, and you have all these AWS actions that you can perform, so you can notify the right people to correct it, or have it auto-correct if you choose to do so. For rules, you can also make your own custom ones: that's just you providing your own Lambda function (you provide the Lambda ARN), so basically you can have it check anything; whatever you can put in a Lambda, you can make AWS Config check for, okay. So it's not super complicated, and this rule here is just going to go ahead and check; if we re-evaluate, it might take some time before it shows compliant or non-compliant, okay, and it should be compliant. While we're waiting for that to happen, let's see how hard it is to deploy a conformance pack, because I feel like that's something that's really important. Oh, you just drop them down and choose one; that's great. So we might want to go to IAM here (oops, Identity and Access Management), hit next, and name it "my IAM best practices." You might not want to do this yourself, because it does have spend: not instantly, but if you turn this on and forget to remove it, you will see some charges over time, because it does check based on the rules. It's not super expensive, but it is something to consider. Anyway, it looks like we created that conformance pack, and if I refresh, it's in progress. I wonder if that sets up a CloudFormation template; I'm kind of curious, so let's make our way over to CloudFormation, and it does. That's really nice, because once it's done, we can tear it all down just by deleting the stack. So I'm going to go back over to our conformance pack here; it still says it's in progress, but actually it's completed, and we can click into it and see everything it's doing.
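Since the conformance packs we just deployed turned out to be CloudFormation templates of Config rules, here is roughly what one looks like inside. This is a minimal sketch: the logical resource name is ours to choose, and it enables a single AWS managed rule by its managed-rule identifier.

```yaml
# Minimal conformance-pack-style template: one managed Config rule.
Resources:
  S3PublicReadProhibited:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-public-read-prohibited
      Source:
        Owner: AWS                 # AWS managed rule, no custom Lambda needed
        SourceIdentifier: S3_BUCKET_PUBLIC_READ_PROHIBITED
```

The real packs on GitHub are just longer lists of resources like this one, which is why deleting the CloudFormation stack tears the whole pack down cleanly.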
says iam group has users check for the conformance pack and so it looks like there's a bunch of cool rules here so what we'll do is we'll just wait a little while and we'll come back here and then just see if this updates and see how compliant we are from a basic account okay all right so after waiting a little while there it looks like some of them are being set so i just gave it a hard refresh here and here you can see that it's saying is root account oops we'll give it a moment here to refresh but is the root account mfa applied yes have we done a password policy no and actually i never did a password policy which is something i forgot to do but here they're just talking about the minimums and maximums of things that you can do okay so that's a conformance pack but if we go to rules actually i guess it's all the rules here i can't really tell the difference between the conformance pack rules and our plain rules it's kind of all mixed together here i think yeah so it's a bit hard to see what's going on there if we go to the conformance pack and click in again it might show the rules yeah there we go so here's the rules there we're seeing a little bit more information so use a hardware mfa so you know how they're talking about using a security key like what i showed you that i had earlier in the course things like that iam password policy things like that so you know not too complicated but i think i'm all done here so what i'm going to do is i'm going to go over to cloudformation and tear that on down but you get the idea well i might want to show you drift so there used to be a way to see history over time and that was something that they used to show but they keep changing things on me here and i'm just trying to like find where they put it because it is like somewhere else resources maybe ah resource timeline okay so they moved it over into the resource inventory and so if we were to take a look
at something anything maybe this here resource timeline and there might not be much here but the idea is it will show you over time how things have changed so the idea is that not only can you say with config is something compliant but when was it compliant and that is something that is really important to know okay so very simple example maybe not the best but the idea is that we can see when it was and was not compliant based on changes to our stuff but anyway that looks all good to me here so i'm going to make my way over to cloudformation actually i already have it open over here we can go ahead and delete that stack termination protection is enabled you must first disable it so we'll edit it disable it whatever okay we'll hit delete there and as that's deleting i'm going to go look for my original rule in aws config there again i'm not really worried about it i don't think it's going to really cost me anything but i'm also just kind of clearing house here just so you're okay as well and so if we go over to our rules the one that i spun up that was custom i think was this one here because these are all grayed out right so i can go ahead there delete that rule type in delete and we are good so there you go that is it all right aws quick starts are pre-built templates by aws and aws partners to help deploy a wide range of stacks it reduces hundreds of manual procedures into just a few steps a quick start is composed of three parts it has a reference architecture for the deployment aws cloudformation templates that automate and configure the deployment and a deployment guide explaining the architecture and implementation in detail so here's an example of one that you might want to launch like the aws q and a bot and then you will get an architectural diagram a lot of information about it and from there you can just go press the button and launch this infrastructure most quick start reference deployments enable you to spin
up a fully functional architecture in less than an hour and there is a lot as we will see here when we take a look for ourselves all right so here is aws quick starts where we have a bunch of cloudformation templates built by aws or amazon partner network apn partners and there's a variety of different things here so i'm just going to try to find something like q and a bot q and a bot just type in bot here and i don't know why it was here the other day now it's not showing up which is totally fine but you know i just want anything to deploy just to kind of show you what we can do with it so you scroll on down we have this graphic here that's representing what will get deployed so we have cloudfront s3 dynamodb systems manager lex polly all this kind of fun stuff and there's some information about how it is architected and the idea is you can go ahead and launch in the console or view the implementation guide let's go take a look here and there's a bunch of stuff so we have solutions and things like that conversational things like that but what i'm going to do is go ahead and see how far i can get to launching with this it doesn't really matter if we do launch it but it's just the fact that i wanted to show you what you can do with it so if we go to the designer it's always fun to look at it in there because then we can kind of visualize all the resources that are available and i thought that that would populate over there but maybe we did the wrong thing i'm just going to go back and click i'm just going to click out of this oops cancel let's close that leave yes and we will launch that again and so this oh view in the designer hit the wrong button okay so now this should show us the template it might just be loading there we go so this is what it's going to launch and you can see there's a lot going on here i'm just going to shrink that there and i don't know if you can make any sense of it but clearly it's doing a
lot and so if we were happy with this and we wanted to launch it i know i keep backing out of this but we're going to go back into it one more time we can go here and we go next and then we would just fill in what we want so you name it put the language in and this is stuff that they set up so maybe you want a male voice set the admin and stuff like that and it's that simple really and every stack is going to be different so they're all going to have different configuration options but hopefully that gives you kind of an idea of what you can do with quick starts okay let's take a look at the concept of tagging within aws so a tag is a key and value pair that you can assign to an aws resource so as you are creating a resource it's going to prompt you to say hey what tags do you want to add you're going to give a key you're going to give a value and so some examples could be something like based on department the status the team the environment the project as we have the example here the location and so tags allow you to organize your resources in the following ways for resource management so specific workloads so you can say you know developer environments cost management and optimization so cost tracking budgets and alerts operations management so business commitments sla operations mission critical services security so classification of data security impact governance and regulatory compliance automation so workload automation and so it's important to understand that tagging can be used in conjunction with iam policies so that you can restrict access or things like that based on those tags okay all right i just want to show you one interesting thing about tags and it's just the fact that it's used as the name for some services so when you go to ec2 and you launch an instance the way you set the name is by giving it a tag called name and i just want to prove that to you just like one of those little exceptions here so we choose an instance
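before the demo continues, here is roughly what the tag-plus-iam restriction mentioned above could look like — a hypothetical policy sketch with made-up tag values that would only allow starting and stopping ec2 instances tagged project = rg:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:StartInstances", "ec2:StopInstances"],
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "StringEquals": { "ec2:ResourceTag/Project": "rg" }
      }
    }
  ]
}
```

so access follows the tag rather than a hard-coded list of instance ids.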
here we go to configure storage and then what we do is we add a tag and we say name and my server name okay and then we go ahead and review and launch we're going to launch this i don't need a key pair so we'll just say proceed without key pair i acknowledge okay and we will go view the instances and you'll see that is the name so that's just like one of those exceptions or things that you can do with tags if there's other things with tags i have no idea that's just like a basic one that everybody should know and that's why i'm showing it to you with the tags but there you go so we just looked at tags now let's see what we can do with resource groups which are a collection of resources that share one or more tags or another way to look at it it's a way for you to take multiple tags and organize them into resource groups so it helps you organize and consolidate information based on your project and the resources that you use resource groups can display details about a group of resources based on metrics alarms configuration settings and at any time you can modify the settings of your resource groups to change what resources appear resource groups appear in the global console header which is over here and under systems manager so technically it's part of aws simple systems manager or the systems manager interface but it's also part of the global interface so sometimes that's a bit confusing but that's where you can find it okay all right so what i want to do is explore resource groups and also tagging so what i want you to do is type in resource groups at the top here and it used to be accessible not sure where they put it but it used to be accessible here at the top but they might have moved it over to systems manager so i'm going to go to ssm here not sure why i can't seem to find it today and on the left hand side we're going to look for resource groups all right so what i want to do is take a look at resource groups and i'm really surprised
because it used to be somewhere in the global nav but i think they might have changed it and what's also frustrating is if i go over to systems manager it was over here as well and so on the left-hand side i'm looking for resource groups it's not showing up so i don't really know aws you keep moving things around on me and i can only update things so quickly in my courses but if you type in resource groups and tag editor it's actually over here i guess it's its own standalone service now why they keep changing things i don't know but the idea is we want to create a resource group so you can create unlimited single region groups in your aws account use the group to view related insights things like that so i'm going to go ahead and create a resource group you can see it can be tag based or cloudformation based but i don't have any tags i don't really have anything tagged so what i'm going to do is make my way over to s3 we're just going to create some resources or a couple resources here with some tags so that we can do some filtration so i can go ahead and create a bucket i'm going to say my bucket something like that whoops and then down below i'm going to go down to tags and we're going to say project and we're going to say rg for resource group okay and then i can go back over here and then i'm going to just say exactly what type i want i'm going to support all resource types and i'm going to say project rg see how it auto-completes and we'll go down below we'll just say my rg a test rg we'll create that and so now we have a resource group and we can see them all in one place resource groups are probably useful for using in policies so you can have like resource group iam policies that's probably what they're used for okay so before you use iam to manage access to resource groups you should understand iam features things like that and so administrators can use json policies to specify who has access to what and so a policy action a
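for reference, a tag-based group like the one just created boils down to a resource query under the hood — this is a sketch of what that query looks like, mirroring the project = rg tag from the bucket above, and the same json can be passed to aws resource-groups create-group on the cli:

```json
{
  "Type": "TAG_FILTERS_1_0",
  "Query": "{\"ResourceTypeFilters\":[\"AWS::AllSupported\"],\"TagFilters\":[{\"Key\":\"Project\",\"Values\":[\"rg\"]}]}"
}
```

any resource in the region carrying that tag then shows up in the group automatically.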
resource group uses the prefix resource-groups so my thought process there is that if you want to say okay you have access to a resource you can just specify a resource group and it will include all the resources within there and so that might be a better way to apply permissions on a per project basis and that could save you a lot of time writing out iam policies so that's basically all there really is to it also you kind of get an overview of the resources that are there so that can be kind of useful as well there's the tag editor here i can't remember what you use this for you can set up tag policies tag policies help you standardize tags on resources in your accounts you use aws organizations to define tag policies and to attach them to the entire organization we're not in the org account so i'm not going to show you this and it's not that important but just understand that resource groups can be created and they are used within iam policies in order to grant or deny access to stuff we'll go ahead and delete that resource group and really aws stop moving that on me if you move it one more time i'm just never going to talk about resource groups again okay hey this is andrew brown from exam pro and we're taking a look at business centric services and you might say well why because the exam guide explicitly says that these are not covered but the thing is that when you're taking the exam some of the choices might be some of these services as distractors and if you know what they are it's going to help make sure that you guess correctly and the thing is that some of these services are useful you should know about them so that's another reason why i'm talking about them here so the first one is amazon connect this is a virtual call center you can create workflows to route callers you can record phone calls manage a queue of callers based on the same proven system used by amazon customer service teams we have workspaces this is a virtual remote
desktop service secure managed service for provisioning either windows or linux desktops in just a few minutes which quickly scales up to thousands of desktops we have workdocs which is a shared collaboration service a centralized storage to share content and files it is similar to microsoft sharepoint think of it as a shared folder where the company has ownership we have chime which is a video conference service it is similar to zoom or skype you can screen share have multiple people on the same call it is secure by default and can show you a calendar of upcoming calls we have workmail this is a managed business email contacts and calendar service with support for existing desktop and mobile email client applications that can handle things like imap similar to gmail or exchange we have pinpoint this is a marketing campaign management service pinpoint is for sending targeted emails via sms push notifications voice messages so you can perform a/b testing or create journeys so complex email response workflows we have ses this is a transactional email service you can integrate ses into your application to send emails you can create common templates track open rates keep track of your reputation we have quicksight this is a business intelligence service connect multiple data sources and quickly visualize data in the form of graphs with little to no knowledge definitely you want to remember quicksight ses pinpoint because those definitely will show up in the exam the rest probably not but they could show up as distractors okay hey this is andrew brown from exam pro and we are taking a look at provisioning services so let's first define what is provisioning so provisioning is the allocation or creation of resources and services to a customer and aws provisioning services are responsible for setting up and managing those aws services we have a lot of services that do provisioning most of them are just using cloudformation underneath which we'll mention
here but let's get to it the first is elastic beanstalk this is a platform as a service to easily deploy web apps eb will provision various aws services like ec2 s3 sns cloudwatch ec2 auto scaling groups load balancers and you can think of it as the heroku equivalent on aws then you have opsworks this is a configuration management service that also provides managed instances of open source configuration management software such as chef and puppet so you'll say i want to have a load balancer or i want to have servers and it will provision those for you indirectly then you have cloudformation itself this is an infrastructure modeling and provisioning service it automates the provisioning of aws services by writing cloudformation templates in either json or yaml and this is known as iac or infrastructure as code you have quick starts these are pre-made packages that can be launched and configure your aws compute network storage and other services required to deploy a workload on aws we do cover this in this course but quick starts is basically just cloudformation templates that are authored by the community or by the amazon partner network okay then we have aws marketplace this is a digital catalog for thousands of software listings of independent software vendors that you can use to find buy test and deploy software so the idea is that you know you can go there and provision whatever kind of resource you want we have aws amplify this is a mobile web app framework that will provision multiple aws services as your backend it's specifically for serverless services i don't know i didn't write that in there but you know like dynamodb things like whatever the graphql service is called api gateway things like that then we have aws app runner this is a fully managed service that makes it easy for developers to quickly deploy containerized web apps and apis at scale with no prior infrastructure experience required it's basically a platform as a service
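since cloudformation templates keep coming up, here is a minimal hypothetical example of what one looks like in yaml — a template that just declares a tagged s3 bucket, where the logical name MyBucket and the tag values are made up:

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: minimal infrastructure-as-code example - one tagged s3 bucket
Resources:
  MyBucket:
    Type: AWS::S3::Bucket
    Properties:
      Tags:
        - Key: Project
          Value: rg
```

you describe the end state in the template and cloudformation figures out the api calls to create change or delete the resources.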
but for containers we have aws copilot this is a command line interface that enables customers to quickly launch and manage containerized applications on aws it basically is a cli tool that sets up a bunch of scripts to set up pipelines for you makes things super easy we have aws codestar this provides a unified user interface enabling you to manage your software development activities in one place easily launch common types of stacks like lamp then we have cdk and so this is an infrastructure as code tool that allows you to use your favorite programming language and generates those cloudformation templates as a means of iac so there you go hey this is andrew brown from exam pro and we're taking a look at aws elastic beanstalk before we do let's just define what paas is so platform as a service allows customers to develop run and manage applications without the complexity of building and maintaining the infrastructure typically associated with developing and launching an app and so elastic beanstalk is a paas for deploying web apps with little to no knowledge of the underlying infrastructure so you can focus on writing application code instead of setting up an automated deployment pipeline or devops tasks the idea here is you choose a platform upload your code and it runs with little knowledge of the infrastructure and aws will say that it's generally not recommended for production apps but just understand that they are saying this for enterprises and large companies if you're a small to medium company you can run elastic beanstalk for quite a long time it'll work out great elastic beanstalk is powered by cloudformation templates and it sets up for you elastic load balancers asgs rds ec2 instances pre-configured for particular platforms monitoring integration with cloudwatch sns deployment strategies like in-place and blue-green deployment it has security built in so it can rotate out your passwords for your databases and it can run dockerized environments and so
when we talk about platforms you can see we have docker multi-container docker go .net java node.js ruby php python tomcat a bunch of stuff and just to kind of give you that architectural diagram to show you that it can launch multiple things okay hey it's andrew brown from exam pro and in this follow along we're going to learn all about elastic beanstalk maybe not everything but we're going to definitely know how to at least use the service so elastic beanstalk is a platform as a service and what it does is it allows you to deploy web applications very easily so here i've made my way over to elastic beanstalk where we create an environment and app and then we set up our application we have two tiers a web server environment and a worker environment worker environments are great for long running workloads performing background jobs and things like that and then you have your web server which is your web server and you can have both and it's generally recommended to do so but anyway what we'll do is create a new application so let's say my app here and there's some tags we can do and then it will name based on the environment then we need to choose an environment name so let's say my environment and just put a bunch of numbers in there hit the check availability scroll on down and we have two options managed platform custom platform and i'm not sure why custom is grayed out but it would allow you to i think use your own containers so i'm a big fan of ruby so i'm gonna drop down to ruby and here we have a bunch of different versions and so 2.7 is pretty new which is pretty good and then there's the platform version which is fine and the great thing is it comes with a sample application now you could hit create environment but you'd be missing out on a lot if you don't hit this configure more options i don't know why they put it there it's not a very good ui but if you click here you actually get to see everything possible and so up here
we have some presets where we can have a single instance so this is where it's literally running a single ec2 instance so it's very cost effective you can have it with spot pricing so you save money there's high availability so you know if you want it set up with a load balancer an auto scaling group it will scale very well or you can do custom configuration we scroll on down here you can enable amazon x-ray you can rotate out logs you can do log streaming there's a lot of stuff here and basically it sets up most of it for you but you can pretty much configure what you want as well if we have the load balancer set if i go here go to high availability now we'll be able to change our load balancer options you have different ways of deploying so you can go here and then change it from all at once rolling immutable traffic splitting depends on what your use case is we can set up a key pair to be able to log into the machine there's a whole variety of things you can connect your database as well so it can create the database alongside with it and then it can actually rotate out the key so you don't have to worry about it which is really nice what i'm going to do is go to the top here and just choose a single instance because i want this to be very cost effective we're going to go ahead and hit create environment and so we're just going to wait for that to start up and i'll see you back when it's done okay okay so it's been quite a while here and it says a few minutes so if it does do this what you can do is just give it a hard refresh i have a feeling that it's already done is it done yeah it's already done and here it says something about september 2020 so i can just use the defaults i don't care but anyway so this application i guess it's in a pending state i'm not sure why let's go take a look here causes instance has not sent any data since launch none of the instances are sending data so that's kind of interesting because i shouldn't have
any problems you know what i mean so what i'm going to do is just reboot this machine and see if that fixes the issue there but usually it's not that difficult because it's the sample application it's not up to me as to how to fix this you know what i mean so i'm not sure but what we'll do is we will let the machine reboot and see if that makes any difference okay all right so after rebooting that machine now it looks like the server is healthy so it's not all that bad right if you do run into issues that is something that you can do and so let's go see if this is actually working so at the top here we have a link and so i can just right click here it says congratulations your first aws elastic beanstalk ruby application is now running so it's all in good shape there's a lot of stuff that's going on here in elastic beanstalk that we can do we can go back to our configuration and change any of our options here so there's a lot of stuff as you can see we get logging so we can request logs so if we click on this and say last 100 lines we should be able to get logging data we have to actually download it i wish it was kind of inline but here you can kind of see what's going on so we have access logs error logs puma logs the elastic beanstalk engine log so you could use that to debug very common to take that over to support if you do have issues for health it monitors the health of the instances which is great then we have some monitoring data here so it gives you like a built-in dashboard so that's kind of nice you can set up alarms it says you have not defined any alarms you can add them via the monitoring dashboard so i guess you'd have to somehow add them i don't think i've ever added alarms for elastic beanstalk but it's nice to know that they have them you can set up schedules for managed events then this is event data it's kind of like logs it just tells you about things that have changed so there's stuff like that what
i'm looking for is to see how i can download the existing application because there's a version uploaded here oh the source is over here okay so i think it's probably over here the one that's running so that's it if it was easy to find what i probably would do is just modify it and oh yeah it's over here so if we go here and download the zip i wonder if it'd even be worth playing with this so i'm just going to see if we can go over to cloud9 and give this a go quickly so if we go over and launch a cloud9 environment maybe we can tweak it and upload a revised version so we'll say create new we'll say eb environment for elastic beanstalk we'll leave all the defaults that's all fine it's all within the free tier we'll create that environment what i'm going to do is just take this ruby zip file and move it to my desktop and as that is loading we'll give it a moment here i'm just going to go back and i was just curious does it let you download it directly from here no the only thing is that you know if you download that application elastic beanstalk usually has a configuration file with it and so i don't know if they would have given that to us but if it did that would be really great but we just have to wait for that to launch there as well i guess you can save configurations and roll back on those as well but we will just wait a moment here while it's going i might just peek inside of this file to see what this zip contains just going to go to my desktop here open up that zip so it looks pretty simple it doesn't even look like a rails app it looks like maybe it's a sinatra app i thought before that it would have deployed a ruby on rails application but maybe they keep it really simple usually it's like yaml files they use for configuration i don't see that there so it might be that the default settings will work fine there's a config.ru and stuff like that but once cloud9 is up here we will upload this and see what we can do
with it okay so there we go cloud9 is ready to go and so if we right click here whoops right click here we should be able to upload a file if not we can go up here to the top or it's here or there where is the upload i've uploaded things in here so i absolutely know we can i just gotta find it let me search upload files cloud9 oh boy that's not helpful that's not helpful at all so let me just click around a little bit here i mean worst case i can always just bring it in via a curl oh upload local files there it is i was just not being patient okay so we'll drag that on in there and did it upload yep it's right there okay great and so we need to unzip it so what i'll do is just drag this on up here i'll do an ls and we'll say unzip ruby.zip and so that unzipped the contents there i think the readme was part of cloud9 so i'm going to go ahead and delete that out not that it's going to hurt anything and so now we'll delete the original zip there and let's see if we can make a change here so i'm just going to open it up and see what it is so yeah it's running sinatra so that's pretty clear there we have a procfile to see how it runs we have a worker sample so that just tells how the requests go you don't need to know any of this i'm just kind of clicking through it because i know ruby very well we have a cron yaml file so that could be something that gets loaded in here so i think basically a sinatra app probably just works off the bat here but if we want to make a change we can probably just make a change over here so i'll go down here and make it say this is your second aws elastic beanstalk application so the next thing we need to do is actually zip the contents here i don't know if it would let us zip it within here so let's google zip the contents of a directory linux just goes to show google is everything so the easiest way to zip a folder zip everything in the current directory linux okay that's easy so we'll go
back over here and we will type in zip and it wants hyphen r for recursive which makes sense and then the name of the zip so ruby2.zip and we'll do period zip warning hmm what is up with zip oh yum install zip maybe we have to install zip maybe it's not installed sudo yum install zip since amazon likes to use yum and so package already installed so i'm gonna type zip again so zip is there now great oops don't need to install it twice zip warning ruby2.zip not found or empty okay so install zip and use zip hyphen r you can use the flag to compress so if that's not working what i'm going to do is just go up a directory why is it saying not found or empty hmm maybe i need to use okay so i think the problem was i was using the wrong flag so i put f instead of r i don't know why i did that so i probably should have done this okay and so that should have copied all the contents of that folder so what i'm going to do is go ahead whoops make sure i have that selected and download that file and once i have downloaded that file i'm going to just open the contents to make sure it is what i expect it to be so we're going to open that up and oops get out of here winrar and it looks like everything i want so what i'm going to do is go back over here i'm going to make sure i have my ruby 2 on my desktop and we're going to see if we can upload another version here so upload deploy choose the file we're gonna go all the way to my desktop here and we're gonna choose ruby two and ruby two will be the version name or we'll just say two and we'll deploy and we'll see if that works okay but there are elastic beanstalk configuration files like yaml files that can sit in the root directory and so generally you're used to seeing them there but you know i imagine that aws probably engineered these examples so that they use all the default settings but once this is deployed i'll see you back here in a moment okay after a short little wait it looks like it
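as an aside if the zip cli ever gives you trouble like it does here, python's standard zipfile module can do the same job as zip -r ruby2.zip . — this is just an alternative sketch and the function name is made up:

```python
# Equivalent of `zip -r ruby2.zip .` run from inside src_dir,
# using only the Python standard library.
import os
import zipfile

def zip_directory(src_dir, zip_path):
    """Recursively archive every file under src_dir into zip_path."""
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for root, _dirs, files in os.walk(src_dir):
            for name in files:
                full = os.path.join(root, name)
                # store paths relative to src_dir, like zipping from inside it
                zf.write(full, os.path.relpath(full, src_dir))
```

the hyphen r recursive behavior comes for free from os.walk so there's no flag to get wrong.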
has deployed so what i'm going to do is just close my other tabs here and open this up and see if it's worked it says your second aws elastic beanstalk ruby application so we were successful deploying that out which is really great so what we can do now is just close that tab there and since we have that cloud9 environment it will shut down on its own but you know just for your benefit i think that we should shut it off right now so go ahead and delete that i'm going to go back over to elastic beanstalk here and i just want to destroy all of it so we'll see if we can just do that from here terminate the application enter the name so i think we probably have to enter that in there and so i think that oh a problem occurred rate exceeded what that's aws for you so it's not a big deal i would just go and check it again and maybe what we'll do is just delete the application first okay so that one is possibly deleting let's go in here is anything changing can't even tell we'll go ahead oh can't take that one out delete application again if it takes a couple times it's not a big deal it's aws for you so there's a lot of moving parts so it looks like it is terminating the instance and so we just have to wait for that to complete once that is done we might have to just tear down the environment so i'll see you back here when it has finished tearing this down okay all right so after a short little wait here i think it's been destroyed we'll just double check by going to the applications going to the environments yeah and it's all gone probably because i initially deleted that environment and then it took the application with it so i probably didn't have to delete the app separately but yeah so there you go just make sure your cloud9 environment's gone and you are a-okay there'll probably be some lingering s3 buckets so if you do want to get rid of those you can it's not going to hurt anything having those around but they do tend to stack up after a while
That's kind of annoying, so if you don't like them, you can just empty them out as I am doing here. Whoops. I'll just permanently delete: copy that text there, then go back here and take out that bucket. Let's delete that there. Oh, if you get this, it's kind of annoying, but Elastic Beanstalk likes to put a permission policy in here. If you go down there's a bucket policy; you just have to delete it out, because it prevents the bucket from being deleted. Then we'll go back over here and we will delete it. Okay, and there we go, that's it. [Music] So let's take a look at several services on AWS, and this is not including all of them, because we're looking at the most purely serverless services; if we tried to include all the serverless services it would just be too long of a list. Before we do, let's just redefine what serverless is: it's when the underlying servers, infrastructure, and operating system are taken care of by the CSP. Serverless is generally by default highly available, scalable, and cost effective: you pay for what you use. The first one is DynamoDB, which is a serverless NoSQL key-value and document database. It's designed to scale to billions of records with guaranteed consistent data returned at single-digit-millisecond latency, and you do not have to worry about managing the underlying servers. You have Simple Storage Service, S3, which is a serverless object storage service: you can upload very large files and unlimited amounts of files, you pay for what you store, and you don't worry about the underlying file system or upgrading the disk size. We have ECS Fargate, which is a serverless orchestration container service. It's the same as ECS except you pay on demand per running container; with ECS you have to keep an EC2 server running even if you have no containers running. With Fargate, AWS manages the underlying server, so you don't have to scale or upgrade the EC2 server. We have AWS Lambda, which is a serverless function service: you can run code without provisioning or managing servers.
You upload a small piece of code, choose how much memory you want and how long the function is allowed to run before timing out, and you're charged based on the runtime of the function, rounded up to the nearest 100 milliseconds. We have Step Functions: this is a state machine service. It coordinates multiple services into serverless workflows, so you can easily share data among Lambdas, have a group of Lambdas wait for each other, and create logical steps; it also works with Fargate tasks. We have Aurora Serverless: this is a serverless on-demand version of Aurora, for when you want most of the benefits of Aurora but can trade off cold starts, or you don't have lots of traffic or demand. Some serverless services that we could have put in here as well are API Gateway, AppSync, and AWS Amplify; the first two you could call application integrations. SQS and SNS are also serverless services, but again, we'd be here all day if I listed them all, right? [Music] All right, let's take a look at what serverless is. We did look at it from a server perspective earlier in the course, but let's try to abstractly define it and talk about the architecture. Serverless architecture generally describes fully managed cloud services, and the classification of a cloud service as serverless is not a boolean answer; it's not a yes or no, but an answer on a scale, where a cloud service has a degree of serverlessness. And I do have to point out that this definition might not be accepted by everybody, because serverless is one of those terms where we've had a bunch of different cloud service providers define it differently, and then we have thought leaders that have a particular concept of what it is. So I just do my best to try to make this practical for you. A serverless service could have all or most of the following characteristics: it could be highly elastic and scalable, highly available, highly durable, and secure by default.
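To make the Lambda pricing model from a moment ago concrete (you pick the memory, and the runtime is rounded up to the nearest 100 milliseconds before you're charged), here's a minimal back-of-envelope sketch. The per-GB-second price below is an invented placeholder for the example, not a real or current AWS rate.

```python
import math

# Illustrative placeholder only, NOT a real AWS price.
PRICE_PER_GB_SECOND = 0.0000166667

def billed_duration_ms(actual_ms: int) -> int:
    """Round the runtime up to the nearest 100 ms, as Lambda billing
    worked at the time of this course."""
    return math.ceil(actual_ms / 100) * 100

def invocation_cost(actual_ms: int, memory_mb: int) -> float:
    """Cost of one invocation: billed GB-seconds times the unit price."""
    gb_seconds = (memory_mb / 1024) * (billed_duration_ms(actual_ms) / 1000)
    return gb_seconds * PRICE_PER_GB_SECOND

# A 130 ms run is billed as 200 ms.
print(billed_duration_ms(130))  # 200
```

The point is just that you pay per invocation for memory multiplied by (rounded-up) duration, rather than for an idle server.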
It abstracts away the underlying infrastructure and is billed based on the execution of your business tasks. A lot of times that cost is not represented as something like "I'm paying X for compute"; it could be abstracted into some kind of credit that doesn't necessarily map to something physical. Then we have the idea that serverless can scale to zero, meaning when it's not in use, the serverless resources cost nothing. These last two points basically roll into pay-for-value: you don't pay for idle servers, you're paying for the value that your service provides. My friend Daniel, who runs the Serverless Toronto group, likes to describe serverless as being similar to energy efficiency ratings, so an analogy for serverless could be energy rating labels, which allow consumers to compare the energy efficiency of a product. Some services are more serverless than others, and again, some people might not agree with that and would say there's a definitive yes or no answer, but I think that's the best way to look at it, okay? [Music] Hey, it's Andrew Brown from ExamPro, and we're taking a look at Windows on AWS. AWS has multiple cloud services and tools to make it easy for you to run Windows workloads on AWS, so let's get to it. The first is Windows Server on EC2: you can select from a number of Windows Server versions, including the latest ones like Windows Server 2019. For databases we have SQL Server on RDS: you can select from a number of SQL Server database versions. Then we have AWS Directory Service, which lets you run Microsoft Active Directory (AD) as a managed service. We have AWS License Manager, which makes it easier to manage your software licenses from software vendors such as Microsoft. We have Amazon FSx for Windows File Server, which is fully managed, scalable storage built for Windows. We have the AWS SDK, which allows you to write code in your favorite language to interact with AWS APIs, and it specifically has
support for .NET, a language favorite for Windows developers. We have Amazon WorkSpaces, which allows you to run a virtual desktop: you can launch a Windows 10 desktop to provide secure and durable workstations that are accessible from wherever you have an internet connection. AWS Lambda supports PowerShell as a programming language to write your serverless functions. And we have the AWS Migration Acceleration Program (MAP) for Windows, which is a migration methodology for moving large enterprises; AWS has partners that specialize in providing professional services for MAP. This is not everything for Windows on AWS; for example, if you want to move your SQL Server over to RDS Postgres, I believe they've created an adapter to do that. But hopefully that gives you an idea of what you can do with Windows on AWS, okay? [Music] Hey, this is Andrew Brown from ExamPro, and I want to show you how you can launch a Windows server on AWS. What you're going to do is go to the top here and type in EC2, and from here we'll go ahead and launch ourselves a new EC2 instance. We're going to have a selection of instances that we can launch, and we're looking for the Microsoft Windows Server. And this is interesting: there's actually a free tier eligible one. That is crazy, because if you go over to Azure, they don't have a free tier Windows Server like AWS does, so that's pretty crazy. And it runs on a t2.micro? No, that can't be right; there's no way it can run on a t2.micro, that seems too small. Let's try it. I just don't believe it, because when you use Azure you have to choose a particular size of instance by default, and it's a lot more expensive, and there is no free tier. Well, there are free tiers, just not really for Windows in particular. So we'll go here, this looks good. Security groups: this opens up RDP so we can get into that machine. We're going to go next here and launch this machine. It says if you plan to use an AMI with the benefits of
Microsoft License Mobility, check out this form. That's not something we're worried about today. And I guess we can create a key pair; I'm not sure what we would use a key pair for here... "for Windows AMIs, the private key file is required to obtain the password used to log into the instance". Okay, so I guess we're going to need it. So, "windows-key", great. We'll launch that instance, and I'll see you back here when it launches, but I just don't believe that it would launch that fast, you know? All right, so after a short little wait here the server is ready, so let's see if we can actually go ahead and connect to it. I'm going to hit connect here, and we'll go over to the RDP client tab: you connect to your Windows instance using your remote desktop client of choice, by downloading and running the RDP shortcut below. So I'm going to go ahead and download this, and you're going to have to be on a Windows machine to be able to do this, or have an RDP client installed; I think there's one for Mac that you can get from the App Store. All I'm going to do is double-click the file. You probably can't see it here; I'm just going to expand this... my computer is being silly, but anyway, there we go, we moved it over there. I'm just going to drag it over here and double-click this image so you can see that I'm doing it, and I'm saying connect, okay? And that's going to ask for a password, so I'm going to hope that I can just click that and get the password. To decrypt the password you will need your key pair for the instance; you'll have to upload that. I don't remember having to do that before, but it's a great security measure, so I'm fine with it. I'm going to drag my key to my desktop so I can see what's going on there as well, and we're going to go grab that and decrypt the password. And so now, where's our password? Oh, it's right here, okay. So we're going to grab that password there, we will paste that in, say okay, say yes, and see if we can connect to this
instance. If this is running on a t2.micro, I'm going to lose it, because that is just cheap; it just doesn't seem possible to me, because again, on Azure you have to launch an instance with a lot of stuff, and it just seems crazy. What's also interesting is how fast Windows on AWS launches; it's unbelievable how fast these servers spin up, and it's just very unusual. But we are in here. It's not asking me to activate or anything, so I guess there's already a Windows license here. And I'm not sure if there are any games installed, like, do we have Minesweeper? Can I play Minesweeper on here? It's a data center server, so I'm assuming not. But this is a Windows server, and it's pretty impressive that this works. I'm not sure if this is going to have an outbound connection here, just because we probably would have to configure it. Let's just say okay; I really don't think it's going to go out to the internet by default... oh no, there we go. So we got to the internet, so it's totally possible. But that's about it; that's all I really wanted to show you. What I'm going to do is go back to EC2 and shut down the server here: expand that there, we will go here, and we will terminate that instance. Good. We'll give that a refresh; it's shutting down, and we are done. [Music] Hey, this is Andrew Brown from ExamPro, and we are taking a look at AWS License Manager. Before we do, let's talk about what BYOL, or bring your own license, means. This is the process of reusing an existing software license to run vendor software on a cloud vendor's computing service. BYOL allows companies to save money, since they may have purchased the license in bulk or at a time that provided a greater discount than if purchased again. An example of this could be the License Mobility provided by Microsoft's Volume Licensing to customers with eligible server applications
covered by the Microsoft Software Assurance program. And I don't know what I was trying to do there on the slide; I guess maybe it was just "SA" and I missed the parenthesis on the end. No big deal. AWS License Manager is a service that makes it easier for you to manage your software licenses from software vendors centrally across AWS and your on-premise environments. AWS License Manager handles software that is licensed based on virtual cores, physical cores, sockets, or number of machines; this includes a variety of software products from Microsoft, IBM, SAP, Oracle, and other vendors. So that's the idea: you say what your license type is, and it's bound to that amount of CPUs. AWS License Manager works with EC2, with Dedicated Instances, Dedicated Hosts, and even Spot Instances; for RDS it's only for Oracle databases, so you can import that license for your Oracle server. Just understand that if you're doing Microsoft Windows Server or Microsoft SQL Server licenses, you're generally going to need a Dedicated Host because of the assurance program, and this can really show up on your exam. So even though AWS License Manager works on Dedicated Instances and Spot Instances, just gravitate towards Dedicated Hosts on the exam, okay? [Music] All right, let's take a look at the logging services that we have available in AWS. The first one here is CloudTrail, and this logs all API calls, whether it's the SDK or the CLI; if it's making a call to the API, including between AWS services, it's going to get tracked. This is really useful to say "who can we blame": who was the person that did this, who created this bucket, who spun up that expensive EC2 instance, who launched the SageMaker notebook. The idea here is you can detect developer misconfigurations, detect malicious actors, or automate responses through the system. Then you have CloudWatch, which is a collection of multiple services; I commonly say this is like an umbrella service, because it has so many things underneath it.
We have CloudWatch Logs, which is a centralized place to store your cloud services' log data and application logs; Metrics, which represent a time-ordered set of data points, a variable to monitor; EventBridge, previously known as CloudWatch Events, which triggers an event based on a condition, so for example, every hour, take a snapshot of the server; Alarms, which trigger notifications based on metrics; and Dashboards, which create visualizations based on metrics. That's not all of the things that are under CloudWatch, but those are the core five you should always know. Then we have AWS X-Ray: this is for distributed tracing, so you can use it to pinpoint issues within your services. You see how data moves from one app to another, how long it took to move, and whether it failed to move forward, okay? [Music] Let's take a closer look here at AWS CloudTrail, because it's a very important service. It's a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account, and the idea is that every time you make an API call, it's going to show up as structured data that you can interact with or read through. So CloudTrail is used to monitor API calls and actions made on the AWS account, and to easily identify which users and accounts made the call to AWS. You have the where (the source IP address), the when (the event time), the who (the user agent), and the what (the region, resource, and action). I'm just going to get my pen tool out here for a moment: notice you have the event time, so when it happened; the source; the name; the region; the source IP address; the user agent, so who was doing it (here it was "laforge"); and the response elements. So it's very clear what is going on here. CloudTrail is already logging by default and will collect logs for the last 90 days via event history. If you need more than 90 days, you need to create a trail, which is very common; you'll go into AWS and make one.
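To make the "who, when, where, what" idea concrete, here's a small sketch that pulls those fields out of a CloudTrail-style event record. The event shown is a hand-made example shaped like the real record schema (only the fields discussed above), not actual log data.

```python
import json

# Hand-made example event, shaped like a CloudTrail record (assumption:
# only the fields discussed in the lecture are included here).
event_json = """
{
  "eventTime": "2021-06-01T12:34:56Z",
  "eventSource": "ec2.amazonaws.com",
  "eventName": "TerminateInstances",
  "awsRegion": "us-east-1",
  "sourceIPAddress": "203.0.113.10",
  "userAgent": "aws-cli/2.2.0",
  "userIdentity": {"userName": "laforge"}
}
"""

def summarize(event: dict) -> str:
    """Answer 'who did what, when, and from where' for one API call."""
    who = event["userIdentity"]["userName"]
    return (f"{who} called {event['eventName']} on {event['eventSource']} "
            f"in {event['awsRegion']} at {event['eventTime']} "
            f"from {event['sourceIPAddress']}")

event = json.loads(event_json)
print(summarize(event))
```

Because every record is structured JSON like this, tools such as Athena can query trails the same way you'd query a table.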
Trails are outputted to S3 and do not have a GUI like event history; to analyze a trail you have to use Amazon Athena (and I'm sure there are other ways to analyze it within AWS). Here's what the event history looks like. Right off the bat you can already see that there is information there. I'm not sure if they've updated the UI; even as I'm recording this, I kind of feel like when we go into the follow-along, which we will, they might have updated it. The idea here is that you can browse the last 90 days, but for anything outside of that you're going to have to do a little bit of work yourself, okay? [Music] We're not going to cover all the CloudWatch services, there are just too many, but let's look at the most important ones, and one of those important ones is CloudWatch Alarms. A CloudWatch Alarm monitors a CloudWatch metric based on a defined threshold. Here you can see there's a condition being set: if NetworkIn is greater than 300 for one data point within five minutes, it's going to breach the alarm; that's when it goes outside its defined threshold. The state is going to be one of: OK, the metric or expression is within the defined threshold, so do nothing; ALARM, the metric or expression is outside of the defined threshold, so do something; or INSUFFICIENT_DATA, the alarm has just started, the metric is not available, or not enough data is available. When the state has changed, you can define actions that it should take, and that could be sending a notification, an Auto Scaling action, or an EC2 action. CloudWatch Alarms are really useful for a variety of reasons; the one that we will come across right away will be setting up a billing alarm. [Music] So let's take a look here at the anatomy of an alarm, and I have this nice graphic here to explain it. The first thing is that we have our threshold condition.
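The three alarm states just described can be sketched as a tiny state function. This is a local simulation of the logic, assuming a simple "greater than" threshold; it is not a CloudWatch API call.

```python
def alarm_state(latest_value, threshold):
    """Map a metric reading against a threshold to a CloudWatch-style state.
    None stands in for 'no data reported yet'."""
    if latest_value is None:
        return "INSUFFICIENT_DATA"   # not enough data is available
    if latest_value > threshold:
        return "ALARM"               # outside the defined threshold: do something
    return "OK"                      # within the defined threshold: do nothing

print(alarm_state(350, 300))  # a NetworkIn reading of 350 breaches a threshold of 300
```

The "do something" part is where the notification, Auto Scaling, or EC2 action would hang off the state change.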
You can set a value and say, okay, the value is a thousand or a hundred, whatever you want it to be, and this is going to be for a particular metric, the actual data we are measuring. Maybe in this case we're measuring NetworkIn, the volume of incoming network traffic measured in bytes; when using five-minute monitoring, divide by 300 and we get bytes per second, if you're trying to figure out that calculation. You have data points, which represent the metric's measurement at a given point. Then you have the period, how often it checks to evaluate the alarm, so we could say every five minutes. You have the evaluation period, the number of previous periods, and the datapoints to alarm, so you can say: alarm if one data point is breached in the evaluation period, going back four periods. This is what triggers the alarm. The thing I just want you to know is that you can set a value, that it's based on a particular metric, and that there is a bit of logic here in terms of the alarm: it's not as simple as "it's breached", but there's this period thing happening, okay? [Music] Well, let's take a look at CloudWatch Logs. To understand it, we have log streams and log groups. A log stream represents a sequence of events from an application or instance being monitored. Imagine you have an EC2 instance running a web application and you want those logs to be streamed to CloudWatch Logs; that's what we're talking about here. You can create log streams manually, but generally this is automatically done by the service you are using, unless you are collecting application logs on an EC2 instance as I just described. Here is a log group for a Lambda function; you can see the log streams are named after the running instance. Lambdas frequently run on new instances, so the stream names contain timestamps. What I'm trying to say here is that there is a variety of different services, Lambda, RDS, what have you, that already send their logs to CloudWatch Logs, and they're going to vary.
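Before moving on, the alarm-anatomy math described just above ("M datapoints to alarm out of N evaluation periods", plus the NetworkIn-divided-by-300 conversion) can be sketched locally. The sample numbers are invented for illustration.

```python
def should_alarm(datapoints, threshold, datapoints_to_alarm, evaluation_periods):
    """Alarm if at least M of the last N period datapoints breach the threshold."""
    window = datapoints[-evaluation_periods:]
    breaches = sum(1 for value in window if value > threshold)
    return breaches >= datapoints_to_alarm

def bytes_per_second(network_in_sum, period_seconds=300):
    """NetworkIn is a total over the period; with 5-minute monitoring,
    dividing by 300 gives average bytes per second."""
    return network_in_sum / period_seconds

# Invented sample: the last four 5-minute NetworkIn sums, threshold 1000 bytes.
samples = [200, 1500, 900, 1200]
print(should_alarm(samples, 1000, 2, 4))   # 2 of the last 4 breach -> True
print(bytes_per_second(150000))            # 500.0 bytes/sec
```

So a single spike doesn't necessarily fire the alarm; the evaluation window decides.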
Okay, so here's a log group for an application log running on EC2; you can see the log streams are named after the running instance ID. Here is the log group for AWS Glue; you can see the log streams are named after the Glue jobs. So we have the streams, but let's talk about the actual data they're made up of: the log events. A log event represents a single event in a log file. Log events can be seen within the log stream, and here's an example: you would open this up in CloudWatch Logs and you can actually see what was being reported back by your server. You can filter these events based on simple or pattern-matching syntax; here I'm just typing in, say, give me all the DEBUG stuff. This is fairly robust, but AWS does have a better way of analyzing your logs, which is Logs Insights, which we'll look at here in a moment. [Music] So we were just looking at CloudWatch log events and how those are collected, but there's an easier way to analyze them, and that's with Logs Insights. You can interactively search and analyze your CloudWatch log data, and it has the following advantages: more robust filtering than using the simple filter in a log stream; less burdensome than having to export logs to S3 and analyze them via Athena; and CloudWatch Logs Insights supports all types of logs. CloudWatch Logs Insights is commonly used via the console to do ad-hoc queries against log groups; that's just an example of someone writing a query. CloudWatch Logs Insights uses a query syntax: a single request can query up to 20 log groups, queries time out after 15 minutes if not completed, and query results are available for seven days. AWS provides sample queries that you can use to get started with common tasks and ease the learning of the query syntax; a good example is filtering VPC Flow Logs, so you go there, you click it, and you start getting some data. You can create and save your own queries to make future repetitive tasks easier.
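You won't write code like this on the exam, but to demystify what a Logs Insights-style query is doing, here's a local simulation of filter/sort/limit semantics over in-memory log events. The events and the query behavior are stand-ins for illustration, not the real service or its full query language.

```python
# Invented in-memory log events standing in for a log group.
events = [
    {"timestamp": 3, "message": "DEBUG cache miss"},
    {"timestamp": 1, "message": "INFO request started"},
    {"timestamp": 2, "message": "DEBUG retrying connection"},
    {"timestamp": 4, "message": "ERROR upstream timeout"},
]

def query(events, contains, limit=20):
    """Mimic: filter @message like /DEBUG/ | sort @timestamp desc | limit N."""
    matched = [e for e in events if contains in e["message"]]
    matched.sort(key=lambda e: e["timestamp"], reverse=True)
    return matched[:limit]

for event in query(events, "DEBUG"):
    print(event["message"])
```

Conceptually that's the whole pattern: filter the events, order them, and cap the result set, all expressed in one query instead of scrolling streams by hand.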
On the Certified Cloud Practitioner exam they're not going to ask you all these details about this stuff, but I just conceptually want you to understand that you can use Logs Insights to robustly filter your logs based on this query syntax language; you get this kind of visual, and it's really, really useful. Let's take a look here at CloudWatch Metrics. A metric represents a time-ordered set of data points; it's a variable that is monitored over time. CloudWatch comes with many predefined metrics that are generally namespaced by AWS service. The idea is that, for example, EC2 has these particular metrics: CPU utilization, disk read ops, disk write ops, disk read bytes, disk write bytes, network in, network out, network packets in, and network packets out, and you can just click there into EC2 and get that data. CloudWatch Metrics are leveraged by other things like CloudWatch Events, CloudWatch Alarms, and CloudWatch Dashboards, so just understand that, okay? [Music] All right, so what I want to do in this follow-along is show you a bit about CloudTrail. We're going to go to the top here and type in CloudTrail. The great thing about CloudTrail is it's already turned on by default, so it's already collecting some information. And it says here "now use IAM Access Analyzer on CloudTrail trails"; that sounds pretty cool to me, but we shouldn't have to create a trail right off the bat, because we'll have some event history, and the event history allows us to see things that are happening within our account in the last 90 days. The thing is, if you want something beyond 90 days, you're going to have to create a trail. But if we just take a look here, we can see, as we've been doing a lot of things, all the kinds of actions that have been happening. Here we have an instance that I terminated, so if I go in here and look at it, I can see more information about it: we can see when it terminated, who had
done that, what access key they had used, the event source, the request ID, the source IP, whether it was read-only, what event type was called, and the resource there; and this is the actual raw record. This is generally how I would look at it, or rather how you had to look at it back in the day, but the idea is that you would have the user identity described, the event time, the source, the event name, the region, the source IP, the user agent, all the information there, okay? And so this is a great way to find stuff: you can go through here and try to debug things this way. You can go to the event name, and if you go here you can see a bit of stuff; if I was trying to find something that I know I've been doing, like "create access keys", I can see the access keys that have been created within this sandbox account for the user, and things like that. So it's a great way to find things, but generally you're always going to want to create your own trail. So if you go here and hit create trail, say "my-new-trail", you're going to need an S3 bucket for that. You'll probably want encryption turned on, which sounds good to me, and you'll absolutely want log file validation. Generally you don't want to store your CloudTrail logs within the existing account; you want to have an isolated, hardened account that is infrequently accessed, or accessed only by your cloud security engineers, away from here, because you don't want people tampering with the logs, deleting them, or changing stuff. But let's take an existing one here. I don't want a customer-managed key; don't I have one that is managed by AWS here? New, custom... let's choose that one, I don't know which one that is, we'll just hit next. Usually AWS gives you a managed key there, so I was kind of surprised. You can also include additional data: if you enable data events, this would collect information from S3, but the thing is you
might not want to track everything, because if you track everything it can get very expensive very quickly; if you just leave on management events, it'll save you more money. There are Insights events; this is new, I haven't seen this yet. It identifies unusual activity, errors, or user behavior. That sounds really good, but these could also come at additional charges. I'm going to hit next anyway for fun, and I'm going to create that trail, okay? And... "the key policy does not grant sufficient access", etc., etc. So I'm going to go turn that off, even though I should really have it turned on, but I just want to be able to show you this, okay? So we have this new trail, and this trail is being dumped to S3, so we might not be able to see anything in here as of yet, but I'm just going to pop over here and just see. I probably have one in my other account, but it's not that important; we basically saw what the data would look like. So we go into here: there's a digest. I don't remember there being a digest, so that's nice. There's no data yet, but when there is, it will pop into there. I'm not sure if we're going to be able to do anything with Insights here, at least not in this account. Insights are events that show unusual API activity and things like that, so that's kind of cool; I don't know what CloudTrail Insights looks like. "Insights events are shown in the table for 90 days", okay. I'm just curious if we can see a screenshot of what that looks like... whoops... well, at least in the article here. I guess you could get some kind of graphs or something saying, hey, this looks unusual, and they might select it. So it's not entirely clear what that looks like, but it sounds like a cool feature, and when I'm working on my security certification course I will definitely include it there. But that's pretty much all there is to it. I'm going to go ahead and delete that trail, because I just don't really need it in this
account, but generally you always want to go in and create a trail. And what you can do, if you're in your root account (I'm not; this is actually an account that's part of an organization), is at that organization level you can create a trail that spans all the regions and all the member accounts within the organization, and that's what you should be doing, okay? But that's about it. [Music] Hey, this is Andrew Brown from ExamPro. We're looking at ML and AI services on AWS, but let's first define what AI, ML, and deep learning are. AI, also known as artificial intelligence, is when machines perform jobs that mimic human behavior. ML, or machine learning, is machines that get better at a task without explicit programming. And deep learning, or DL, is machines that have artificial neural networks, inspired by the human brain, to solve complex problems. A lot of times you'll see this kind of onion diagram showing you that AI can be using ML or deep learning, and deep learning is definitely using machine learning, but it's using neural networks. For AWS, their flagship product here is Amazon SageMaker: it is a fully managed service to build, train, and deploy machine learning models at scale. There are a bunch of different open-source frameworks you can use with it, like Apache MXNet, an open-source deep learning framework, which is the one AWS has decided to back, so you'll see a lot of example code for that one. There's TensorFlow that you can use, PyTorch, Hugging Face, and other things as well. There are a lot of services underneath; some that might be of interest to mention right away are Amazon SageMaker Ground Truth, which is a data labeling service where you have humans label a data set that will be used to train machine learning models, or maybe something like Amazon Augmented AI, a human-intervention review service: when SageMaker uses
machine learning to make a prediction and is not confident it has the right answer, the prediction is queued up for a human review. These are all about labeling data, you know, when you're using supervised learning. But there are a lot of services under SageMaker itself, and AI services in general, so we'll look at that next, okay? [Music] All right, let's take a look at all the ML and AI services, and there are a lot on AWS. The first is Amazon CodeGuru: this is a machine-learning code analysis service. CodeGuru performs code reviews and will suggest improvements to the quality of your code; it can show visual code profiles to show the internals of your code to pinpoint performance issues. Next we have Amazon Lex: this is a conversational interface service; with Lex you can build voice and text chatbots. We have Amazon Personalize: this is a real-time recommendation service; it's the same technology used to make product recommendations to customers shopping on the Amazon platform. Then we have Amazon Polly: this is a text-to-speech service; upload your text, and an audio file spoken by a synthesized voice will be generated. You have Amazon Rekognition: this is an image and video recognition service; analyze images and videos to detect and label objects, people, and celebrities. Then we have Amazon Transcribe: this is a speech-to-text service; you upload your audio and it'll be converted into text. We have Amazon Textract: this is an OCR tool; it extracts text from scanned documents, for when you have paper forms and you want to digitally extract that data. You have Amazon Translate: this is a neural machine translation service; it uses deep learning models to deliver more accurate and natural-sounding translations. We have Amazon Comprehend: this is an NLP, natural language processing, service; it finds relationships between text to produce insights, looks at data such as customer emails, support tickets, and social media, and makes predictions.
Then we have Amazon Forecast: this is a time-series forecasting service, and technically I guess it's a bit of a database, but the idea here is that it can forecast business outcomes such as product demand, resource needs, or financial performance, and it's powered by ML, or AI if you want to call it that. We have AWS Deep Learning AMIs: these are Amazon EC2 machine images pre-installed with popular deep learning frameworks and interfaces such as TensorFlow, PyTorch, Apache MXNet, Chainer, Gluon, Horovod, and Keras. We have AWS Deep Learning Containers: Docker images pre-installed with popular deep learning frameworks and interfaces such as TensorFlow, PyTorch, and Apache MXNet. We have AWS DeepComposer: this is a machine-learning-enabled musical keyboard. I don't know many people using this, but it sounds like fun. AWS DeepLens is a video camera that uses deep learning; it's more of a learning tool, so again, we don't see many people using this. AWS DeepRacer is a toy race car that can be powered with machine learning to perform autonomous driving; again, this is another learning tool for learning ML, and they like to hold racing competitions with these at re:Invent. Amazon Elastic Inference allows you to attach low-cost GPU-powered acceleration to EC2 instances to reduce the cost of running deep learning inference by up to 75 percent. We have Amazon Fraud Detector: this is fully managed fraud detection as a service; it identifies potentially fraudulent online activities such as online payment fraud and the creation of fake accounts. Amazon Kendra: this is an enterprise machine-learning search engine service; it uses natural language to suggest answers to questions instead of just simple keyword matching. So there you go. [Music] Hey, it's Andrew Brown from ExamPro, and we're going to do a quick review of the big data and analytics services that are on AWS. But before we do, let's just define what big data is: it's a term
used to describe massive volumes of structured or unstructured data that is so large it is difficult to move and process using traditional database and software techniques. So the first here is Amazon Athena. This is a serverless interactive query service: it can take a bunch of CSV or JSON files in an S3 bucket, load them into a temporary SQL table, and let you run SQL queries against them. It's the one you want when you need to query CSV or JSON files; if you've ever heard of Apache Presto, it's basically that, okay. Then we have Amazon CloudSearch, a fully managed full-text search service, for when you want to add search to your website. We have Amazon Elasticsearch Service, commonly abbreviated to ES. This is a managed Elasticsearch cluster; Elasticsearch is an open-source full-text search engine. It is more robust than CloudSearch but requires more server and operational maintenance. Then we have Amazon Elastic MapReduce, commonly known as EMR. This is for data processing and analysis. It can be used for creating reports just like Redshift, but it is better suited when you need to transform unstructured data into structured data on the fly, and it leverages open-source technology like Spark, Hive, and Pig. Then we have Kinesis Data Streams, a real-time streaming data service. You create producers, which send data to a stream, and multiple consumers can consume data within the stream and use it for real-time analytics, clickstreams, or ingesting data from a fleet of IoT devices. Then we have Kinesis Data Firehose, a serverless, simpler version of a data stream: you pay on demand based on how much data is consumed through the stream, and you don't worry about the underlying servers. Then you have Amazon Kinesis Data Analytics, which allows you to run queries against data flowing through your real-time stream, so you can create reports and analysis on emerging data.
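The producer/consumer model behind Kinesis Data Streams can be sketched locally with plain Python. `ToyStream` here is a made-up stand-in for a stream, not the AWS SDK; the real service adds shards, retention windows, and managed scaling on top of this basic shape:

```python
class ToyStream:
    """Local stand-in for a Kinesis data stream: producers put records on
    the stream, and multiple independent consumers each read every record."""
    def __init__(self):
        self.records = []

    def put_record(self, data):
        # What a producer does: append a record to the stream.
        self.records.append(data)

    def read_all(self):
        # What each consumer does: read the records in the stream.
        # Each consumer gets its own copy, so consumers don't interfere.
        return list(self.records)

stream = ToyStream()
for click in ["/home", "/pricing", "/signup"]:  # clickstream producer
    stream.put_record(click)

# Two independent consumers, e.g. real-time analytics and archival
analytics = stream.read_all()
archive = stream.read_all()
print(analytics)  # ['/home', '/pricing', '/signup']
```

The point to take into the exam is the fan-out: one stream, many consumers, each seeing the same records.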
Last on the Kinesis side, we have Amazon Kinesis Video Streams, which allows you to analyze or apply processing to real-time streaming video. On the second page here, we have Amazon Managed Streaming for Apache Kafka, or MSK: a fully managed Apache Kafka service. Kafka is an open-source platform for building real-time streaming data pipelines and applications. It is similar to Kinesis, but with more robust functionality. Then we have Redshift, which was the flagship big data tool: a petabyte-scale data warehouse. Data warehouses are for online analytical processing (OLAP). They can be expensive because they keep data hot, meaning we can run a very complex query over a large amount of data and get the results back very fast; this is great when you need to quickly generate analytics or reports from a large amount of data. We have Amazon QuickSight, a business intelligence tool or dashboard (BI for short). You can use it to create business dashboards to power business decisions; it requires little to no programming and connects to many different types of databases. Have you ever heard of Tableau or Power BI? This is the AWS equivalent. We have AWS Data Pipeline, which automates the movement of data: you can reliably move data between compute and storage services. We have AWS Glue, an ETL service: it allows you to move data from one location to another when you need to perform transformations before the final destination. It's similar to DMS, but more robust. We have AWS Lake Formation: a centralized, curated, and secured repository that stores all your data, in other words a data lake, a storage repository that holds a vast amount of raw data in its native format until it is needed. And last on here we have AWS Data Exchange, a catalog of third-party data sets you can download for free, subscribe to, or purchase.
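To make the Athena idea above concrete (load flat files into a temporary table, then query them with SQL), here's a local stand-in using Python's built-in sqlite3. Athena itself runs Presto SQL over files sitting in S3; the table and numbers here are made up:

```python
import sqlite3

# Pretend these rows came from CSV files in an S3 bucket; Athena would
# expose them as a temporary table and let you query them with SQL.
rows = [("2024-01-01", "pricing", 120),
        ("2024-01-01", "signup", 45),
        ("2024-01-02", "pricing", 98)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE pageviews (day TEXT, page TEXT, views INTEGER)")
conn.executemany("INSERT INTO pageviews VALUES (?, ?, ?)", rows)

# The kind of aggregate query you'd run in Athena against the bucket.
total = conn.execute(
    "SELECT page, SUM(views) FROM pageviews GROUP BY page ORDER BY page"
).fetchall()
print(total)  # [('pricing', 218), ('signup', 45)]
```

The serverless part is the difference: with Athena you never create the database, you just point it at the bucket and pay per query.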
For example, they might have COVID-19 foot traffic data, IMDb TV and movie data, or historical weather data, and sometimes this is really great if you're just trying to learn how to work with these tools. Okay. [Music] Hey, this is Andrew Brown from ExamPro, and we are taking a look at Amazon QuickSight, which is a business intelligence dashboard, or BI dashboard, that allows you to ingest data from various database or storage services to quickly visualize business data with minimal programming or data formula knowledge. So here's an example of a QuickSight dashboard. The way QuickSight is able to make these dashboards super fast is via SPICE, the Super-fast Parallel In-memory Calculation Engine. You don't have to use SPICE, but generally it is good to use it. There are some caveats when getting your data into QuickSight: sometimes it can't ingest directly from a particular data store, so you might have to dump it to S3 first, but that's not too bad, because you can use Glue to transform the data on the way over. There are additional features, sometimes marketed as separate services. We have QuickSight ML Insights, which detects anomalies, performs accurate forecasting, and can generate natural-language narratives, basically describing your data as if it were being read out as a business report. Then there's Amazon QuickSight Q, which allows you to ask questions using natural language on all your data and receive answers in seconds. So there you go. [Music] Hey, this is Andrew Brown from ExamPro, and let's go take a look at Amazon QuickSight, which is a business intelligence tool. When you go here, you have to sign up, because it's part of AWS but its own separate thing, and then you have to choose what you want: we have Enterprise and Standard. I do not want to pay that much, so I'm going to go with Standard. I'm not really sure what the difference is; it's not really telling me the difference between Standard and Enterprise, but I'm
going to assume Standard is more cost-effective. Here it says use IAM federated identities, which is fine, or use IAM federated identities only; we can stick with the top one there, that seems fine to me. We need to enter a name, so we'll just say "my quicksight account", and we probably have to fill something in there, so let's say andrew at example co. These are the services that will integrate with it: Athena, S3, RDS, things like that. I guess we could select some of those buckets; I'm not too worried about doing that right now. "The provided account name is not available": that is a terrible UI, but that's AWS for you, so I'm just going to add some numbers there, and I'm going to put my email in here again. We probably want some S3 buckets, so I'm going to make a new bucket, because I think that's how we're going to do this. We'll make a bucket here and call it "quicksight data", and we're going to create ourselves a bucket. I'll go back and hopefully that shows up; it does not, so what I'll do is back out, give it a hard refresh, hit sign up for QuickSight again, choose Standard, and enter "my quicksight account", a bunch of numbers there, andrew at example.co. I don't really care about ingesting data from everywhere else, I just want it from S3. There's my data. Sure, we'll give it write permissions, even though I don't plan to do anything with Athena here today, and we'll give it a moment to load. So what I'm thinking is making an Excel spreadsheet here and just filling in some data. Oh, it says our account is set up, so we'll go to QuickSight, because I bet I can import a CSV or something. I'm more of a Tableau or Power BI kind of person, but for the purpose of the Cloud Practitioner I am going to show you this. "Amazon QuickSight lets you easily visualize data", et cetera; that sounds great. Next, next, next, I know what I'm doing. Oh, do we have some
examples? Great, so I don't even have to make a spreadsheet. So what we'll do is just click on that, and we have stuff. It looks like they've really improved this since the last time I've seen it, which is quite nice. But I could try and make my own; I'm just trying to think how we do this again. Yeah, we have SPICE there, so it's a lot easier than starting from scratch. I'm just going to say close, and [Music] these are analyses; we want data sets. Oh, we already have some data sets, and these are coming from S3. I think that's the old S3 logo, I'm not sure why they're using that one. We can go here and create a new data set. Oh, we can upload directly, so I don't even have to use S3, that's great. So what I'm going to do is have some values in here, type and value: we'll say banana 125, apple 11, orange... nobody likes oranges. I shouldn't say that, I'm sure lots of people like oranges. Oh, we've got to put pears on there. I actually really like pears; people think I like bananas, which is not true, I actually like pears. So I'm going to go ahead and save this, save as, and save it to my desktop (just give me a moment, I'm doing this off-screen) as a quicksight CSV. It can even take an XLS, so I'll just save it as an XLS instead. And so we're going to upload that. There's the data set; it's going to scan the file and see the sheet. You can even preview it; there's the information, so we're going to add that data and get it added as a data set. Well, where do I... it says add the data, but I just want to add it as a data set. Maybe save and visualize up here. And is it auto-graphing it? Maybe if I drag this in... is it working, is it thinking? Okay, it's at 100 percent, so I'm going to just drag that onto there, and it says pear, orange, banana. I'm trying to make sense of this: is it taking the value into account?
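What the chart is doing with that sheet is just grouping value by type. Here's the same aggregation in plain Python, using the banana and apple values from the demo and made-up numbers for orange and pear:

```python
import csv
import io

# A sheet like the one built in the demo; orange and pear values are made up.
sheet = """type,value
banana,125
apple,11
orange,7
pear,42
"""

reader = csv.DictReader(io.StringIO(sheet))
totals = {row["type"]: int(row["value"]) for row in reader}

# The bar chart is essentially this grouping of value by type;
# the tallest bar is the type with the largest value.
print(max(totals, key=totals.get))  # banana
```

QuickSight (via SPICE) does this kind of grouping and summing for you once you drag the field wells into place.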
Maybe if I put the value down there... wow, that's so much easier. I haven't used this for about a year, and I'm going to tell you, it has gotten a lot easier to use, so I'm quite impressed. But yeah, this is pretty much what QuickSight is. If you want to visualize things in different ways, you can drag them out; you can probably click on the wheel here and change it. Again, I'm not sure exactly how all the dials and knobs work, but another thing we could do is drag out another object and do the same thing. Maybe I'd want a pie chart, so add a visual. Yeah, it's not as nice as Power BI, but it's still great that it's here. Type, value: so we've got a nice pie chart there. Let's try something weird, let's give this one a go. It doesn't color it, which is not very nice; there's probably some way to color it, or to focus on banana only. I don't know the point of that, but anyway, that's QuickSight. So I really don't want to pay for this, so what I'm going to do is go up here. You have to deactivate; I'm just trying to remember how, because they've changed the interface again, they change everything on you. There we go, it looks like I'm on a 29-day trial here. So if I want to get out of this trial, what do I do? I don't want to use it anymore. "How to delete AWS QuickSight", "cancelling your subscription": before you can unsubscribe, you're signed in with the IAM account that is your QuickSight administrator, you're the root or IAM administrator, sure, and you've deleted any secondary namespaces, et cetera. So: choose your username in the application bar, go to QuickSight account settings, unsubscribe. I was almost there, I thought I was in the right place. This one? No, I was just there. Manage QuickSight, your subscriptions, edit: there's no unsubscribe option, so I'm not sure. Can I cancel? "Unsubscribe button does not appear in QuickSight": okay, that's just because we're on a trial, so maybe after
the end of the trial it will vanish. They are not making this easy for me. Account settings... ah, delete account, this is what we probably want to do. Permanently delete the account? Yes; that has to get rid of the subscription, because it gets rid of everything. There we go, we'll confirm: delete account, "unless you're using them in the services", blah blah blah... successful. Okay, great. So now I should go back to aws.amazon.com and, just to confirm that it's gone, I'm going to go to QuickSight again and see if it asks me to sign up again. It does, so I've gotten rid of my account, we're all in good shape, and that is QuickSight. [Music] Hey, this is Andrew Brown from ExamPro, and we are taking a look at the AWS Well-Architected Framework. This is a whitepaper created by AWS to help customers build using best practices defined by AWS. You can find it at aws.amazon.com/architecture/well-architected. The idea is not unique to AWS; the other providers have it too, but I believe AWS was the first one to define it, and they have a really good approach. This is pretty much essential knowledge for certifications: for the Cloud Practitioner, the Solutions Architect Associate, and Professional, because there are a lot of principles and best practices here that AWS uses themselves to architect their infrastructure. Okay, so the framework is divided into five sections called pillars, which address different aspects or lenses that can be applied to a cloud workload. Imagine you have your cloud workload and you want it to adopt the Well-Architected Framework. Some things people don't consider outside the five pillars: you also need to know the general definitions, the general design principles, and the review process. And then from there you have your five pillars: operational excellence, security, reliability, performance efficiency,
and cost optimization. All of these have major sections in this whitepaper, but beyond the main whitepaper, each of them also has its own whitepaper that goes into even further detail. So if you really want to focus on security and get a lot more information, they have that as well. Okay. [Music] Let's take a look at the general definitions for the Well-Architected Framework, starting with the pillars. The operational excellence pillar is there to run and monitor systems. The security pillar is to protect data and systems and mitigate risk. The reliability pillar is to mitigate and recover from disruptions. The performance efficiency pillar is about using computing resources efficiently and effectively. And the cost optimization pillar is about getting the lowest price; this is where you're going to find all the business value. I put an asterisk there because you might obsess, saying "we need to meet the requirements for all these pillars", and that's not the case: you can trade off pillars based on the business context. So don't take it literally and implement every single thing; consider that you might have to adapt it based on your workloads. Then we have some general definitions we will come across. A component: code, configuration, and AWS resources against a requirement. A workload: a set of components that work together to deliver business value. Milestones: key changes of your architecture through the product lifecycle. Then there's architecture itself: how components work together in a workload. And a technology portfolio: the collection of workloads required for the business to operate. Okay. [Music] So the Well-Architected Framework is designed around a different kind of team structure. When you look at enterprises, they generally have a centralized team with specific roles, whereas AWS structures its teams as distributed, with flexible roles. And so this new kind of
methodology of distributed teams has some major advantages, but it does come with some risks, and AWS has baked in some practices to mitigate those issues. Okay, so let's compare on-premise enterprise to what AWS is proposing for your team structure. On-premise, what we'd see is a centralized team consisting of technical architects, solution architects, data architects, network architects, and security architects; you can see they all have a specialized vertical, and they are usually managed under either TOGAF or the Zachman Framework. Those are just very popular ways of structuring your teams. What AWS is proposing here is that you have a distributed team. Obviously, just thinking about a distributed team, they're going to be a lot more agile, but to make sure they work effectively you have practices like team experts who "raise the bar", making sure that in any area we can always ask: how can we do this better? Then there are mechanisms in place for automated checks for standards; that's the great thing about cloud, it can all be automated, to say, hey, does this meet our regulatory compliance or what have you. And then there's the concept of the Amazon Leadership Principles, which we will cover in the next slide in detail. AWS is obviously not using those other frameworks, because it has its own, which is this one here; but the mechanism by which they stay organized and up to date is that they are supported by a virtual community of subject matter experts and principal engineers. What they'll do is run things like lunchtime talks and then recycle that material into their onboarding or into this framework itself. Okay. [Music] So we're taking a look here at Amazon's Leadership Principles: a set of principles used during the company's decision-making, problem-solving, brainstorming, and hiring.
And so, I can't say that I like all of these, but some of them really stand out as being great, especially the first one, customer obsession: instead of worrying about what your competitors are doing, think about what the customer wants and work your way back, really focusing on the customer's needs. Then there's ownership: if you're going to go do something, try to be your own mini boss and take responsibility for whatever it is you're building. Invent and simplify: always look for the simplest solution; don't try to engineer something super complicated if it's not necessary. Are right, a lot: so, you know, try to be right. Learn and be curious; that's pretty self-explanatory. Hire and develop the best. Insist on the highest standards; AWS always refers to this as raising the bar. Think big. Bias for action. Frugality: AWS is really frugal, if you didn't know, and not just for themselves but also for their customers; they want customers to spend the least amount of money possible when using their infrastructure. Earn trust. Dive deep. Have backbone; disagree and commit. Deliver results. Strive to be Earth's best employer. Success and scale bring broad responsibility. If you want to read these in detail (they have a big block of text for each), you can go to amazon.jobs/en/principles and read all about it. Okay. [Music] All right, let's talk about some general design principles that you should consider when designing your infrastructure, no matter what pillar you are looking to adopt. The first is stop guessing your capacity needs: the great thing with cloud computing is you use as little or as much as you need based on demand, whereas on-premise you would have to purchase a machine and make sure you had additional capacity to grow into. With cloud you do not have to worry about that. Next, test systems at production scale: be able to
clone your production environment for testing, and tear the testing environment down while not in use to save money. A lot of people have a staging server that runs all the time, but the great thing with cloud is that you can spin it up, have it right away, then tear it down and save money. There's automate to make architectural experimentation easier: this is talking about using infrastructure as code. For AWS, that would be using CloudFormation: creating change sets, which say exactly what is going to change during stack updates, and drift detection, to see if your stacks are being changed over time by developers through manual configuration, things like that. Then we have allow for evolutionary architectures: this is about adopting CI/CD and doing nightly releases; or, if you're using serverless and have adopted Lambdas, runtimes deprecate over time, forcing you onto the latest version, and that is evolutionary architecture. Then we have drive architectures using data: when you're using cloud, there's a lot of tooling that automatically starts collecting data; CloudWatch will be collecting some things by default, and CloudTrail will as well. And then improve through game days: this is about simulating traffic on production, or purposely killing EC2 instances or otherwise messing with your services, to see how well they recover. All right. [Music] Before we jump into each of the pillars, let's open them up and look at the structure we should expect to see. We have design principles, definition, best practices, and resources; all the pillars follow this to a T. So let's talk about what these are. The design principles are a list of principles that need to be considered during implementation, and that's where we're going to focus a lot of our energy. Then you have the definition, an overview of the best practice categories. Then you have the best practices themselves; these are detailed
information about each practice involving various AWS services. And then you have resources: additional documentation, whitepapers, and videos to help implement the pillar. I just want to tell you that for the Certified Cloud Practitioner we're really just going to cover the design principles, but for the Solutions Architect Associate, or anything associate-level or above, we actually dive deep into the implementation of the best practices, because there is a lot of stuff there. So yeah, there we go. [Music] Let's take a look at the design principles for operational excellence. The first here is perform operations as code: apply the same engineering discipline you would use for application code to your infrastructure. By treating your operations as code, you can limit human error and enable consistent responses to events. Generally we're talking about infrastructure as code here, so things like CloudFormation, but there are others, like policy as code. Then we have make frequent, small, reversible changes: design your workloads to allow components to be updated regularly. This could mean rollbacks, incremental changes, blue/green deployments, or having a CI/CD pipeline. Refine operations procedures frequently: look for continuous opportunities to improve your operations; use game days to simulate traffic or event failure on your production workloads. Anticipate failure: perform post-mortems on system failures so you can improve, write test code, and kill production servers to test recovery. Learn from all operational failures: share lessons learned in a knowledge base for operational events and failures across your entire organization. If you can just remember these headings and categorize what falls under operational excellence, you'll be okay.
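"Perform operations as code" and "anticipate failure" boil down to scripting your response to events instead of a human eyeballing a dashboard. Here's a minimal sketch of that idea; the function, the health map, and the statuses are all made up for illustration, not an AWS API:

```python
def instances_to_replace(health):
    """Given {instance_id: status}, return the ids an automated runbook
    would terminate and replace. Encoding the decision in code gives a
    consistent, repeatable response to failure, the same way every time."""
    return sorted(i for i, status in health.items() if status != "healthy")

# A fleet health snapshot, e.g. what a monitoring check might report.
fleet = {"i-aaa": "healthy", "i-bbb": "impaired", "i-ccc": "healthy"}
print(instances_to_replace(fleet))  # ['i-bbb']
```

On AWS the same pattern shows up as a CloudWatch alarm triggering an automated action, rather than a script you run by hand, but the principle is identical: the response to failure is written down, reviewed, and versioned like any other code.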
Okay. [Music] All right, let's take a look at the design principles for the security pillar. The first here is implement a strong identity foundation: implement the principle of least privilege (PoLP), a very popular concept meaning you give people only the permissions they need; use centralized identity, which would be AWS IAM; and avoid long-lived credentials. Then we have enable traceability: monitor, alert, and audit actions and changes to your environment in real time; integrate log and metric collection, and automate investigation and remediation. Then we have apply security at all layers: take a defense-in-depth approach with multiple security controls for everything, from edge networks, VPCs, and load balancers to instances, operating systems, and application code. We might have a slide in this course on defense in depth where you see a ring of layers going from outward to inward; that's what they're talking about when they list all these things. Automate security best practices. Protect your data in transit and at rest. Keep people away from your data. The reason I don't have descriptions for those is that they're pretty self-evident. Prepare for security events: have incident management systems, investigation policies and processes, and tools to detect, investigate, and recover from incidents. There are a lot of security tools out there, and they all have funny initialisms, but there you go for security. [Music] All right, let's take a look at the design principles for reliability. The first here is automatically recover from failure: monitor KPIs and trigger automation when a threshold is breached. Test recovery procedures: test how your workload fails and validate your recovery procedures; you can use automation to simulate different failures or to recreate scenarios that led to failures before. Scale
horizontally to increase aggregate system availability: replace one large resource with multiple small resources to reduce the impact of a single failure on the overall workload, and distribute requests across multiple smaller resources to ensure they don't share a common point of failure; we're talking about multi-AZ high availability here, okay. Stop guessing capacity; we've seen this multiple times: on-premise it takes a lot of guesswork to determine the elasticity of your workload demands, but with cloud you don't need to guess how much you need, because you can request the right size of resources on demand, and that's going to give you better reliability. Manage change with automation: making changes via infrastructure as code allows for a formal process to track and review infrastructure changes. You're going to see IaC show up a lot in this framework. Okay. [Music] Let's take a look at the design principles for performance efficiency. The first here is democratize advanced technologies: focus on product development rather than the procurement, provisioning, and management of services, because on-premise you'd have to order those machines and set them up. Take advantage of advanced technologies, and specialize and optimize for your use case with on-demand cloud services: on-premise you might not have the option of something like SageMaker (it would just be a VM and you'd do all the work yourself), whereas AWS has all these specialized services, so you can move quickly. Go global in minutes: deploying your workload in multiple AWS Regions around the world allows you to provide lower latency and a better experience for your customers at minimal cost. Use serverless architectures: serverless removes the need for you to run and maintain physical servers for traditional compute activities, removes the operational burden of managing servers, and can lower transactional costs
because managed services operate at cloud scale, and AWS can be a lot better at running them efficiently. Then, experiment more often: with virtual and automatable resources, you can quickly carry out comparative testing using different types of instances, storage, or configurations to make the best choice; we call this right-sizing, choosing the right size. Consider mechanical sympathy: understand how cloud services are consumed, and always use the technology approach that aligns best with your workload goals; for example, consider data access patterns when you select database or storage approaches. [Music] Let's take a look at the design principles for cost optimization. The first one is implement cloud financial management: dedicate time and resources to build capability via cloud financial management and cost optimization tooling. This is saying: hey, take advantage of all the AWS tooling that makes it easy to know exactly what you're spending. Adopt a consumption model: pay only for the computing resources you require, and increase or decrease usage depending on business requirements; we're talking about on-demand pricing. Measure overall efficiency: measure the business output of the workload and the costs associated with delivering it, and use this measure to know the gains you make from increasing output and reducing costs. Stop spending money on undifferentiated heavy lifting (that's a hard phrase to say): AWS does the heavy lifting of data center operations, like racking, stacking, and powering servers, and it also removes the operational burden of managing operating systems and applications with managed services. This allows you to focus on your customers and business projects rather than your IT infrastructure. And the last one is analyze and attribute expenditure: the cloud makes it easier to accurately identify the usage and cost of systems, which then allows transparent attribution of IT costs to individual
workload owners. This helps measure return on investment and gives workload owners an opportunity to optimize their resources and reduce costs. So there you go. [Music] Hey, this is Andrew Brown from ExamPro, and we are taking a look at the AWS Well-Architected Tool. This is an auditing tool used to assess your cloud workloads for alignment with the AWS Well-Architected Framework. It's essentially a checklist, but it also has nearby references, so as you read through it, it shows you information and resources to help you complete the checklist. The idea is that when you're done you can generate a report, and then provide that report to your executives and key stakeholders to show how well-architected your workload is on AWS. Okay. [Music] Hey, this is Andrew Brown from ExamPro, and in this video I want to show you two things: the Well-Architected Framework and the Well-Architected Tool. First let's go look for the Well-Architected Framework. We're going to look up "whitepapers AWS", and if we go to the aws.amazon.com whitepapers page, we have a bunch of pages here, so I'm going to check the box for "whitepaper" to reduce the amount, and check the box for "well-architected framework". If we scroll to the top (you'd think it'd be right at the top), one of these is the Well-Architected Framework, and here it is. If we open it up... it used to open directly as a PDF, and I'm sure you can still download it as one, but generally it opens as this HTML page, and you can read through it, see all the stuff, and see the multiple pillars. We can click in here, see the design principles, read the definitions, and start reading about the best practices, and they have these resources at the bottom of each one. Very boring, very, very boring, but when you get to the Solutions Architect and things like that, you're going to need
to know this stuff inside and out; it's going to really help you out. For the Cloud Practitioner we only need surface-level information. So that's the Well-Architected Framework; let's take a look at the Well-Architected Tool. We'll type in "well" here to get the Well-Architected Tool, and if we go here, you can see that I've created a couple before, probably demos for our videos. So I'm going to define a new workload. I'll say "my workload"... whoops, it's messing up because I probably have Grammarly installed; it does not like Grammarly, so I'm going to turn it off for now. "My workload"... it's still not typing correctly, so I have to kill Grammarly, which is kind of frustrating. That's a bug that's not Grammarly's fault, that's AWS's fault for not playing well with Grammarly, and that's something I will definitely report to them, because it's very annoying. So I'm going to refresh this page: "my workload", and this is Andrew Brown. Production or pre-production, doesn't matter. Pick your Region, US East 1 or 2, sure, I'm selecting it, there we go. Optional, optional, optional, optional. You go to next, and then you can choose your lens: the serverless lens, the FTR lens (that's the Foundational Technical Review), the SaaS lens; we'll go with the Well-Architected Framework. Once that's there, we can start reviewing, okay, and then we get this big checklist. We can go through it and read each one; it says OPS 1, "how do you determine what your priorities are", and all these things like OPS and so on. These are all summaries of each of the Well-Architected Framework sections, so you pretty much don't need to read the whole doc, just go through this. "Everyone needs to understand their part in enabling business success. Have shared goals in order to set priorities for resources. This will maximize the benefit of your efforts." So, select from the following: evaluate the
customer's external needs evaluate internal customer needs and if you click info it's going to highlight each one here so involvement of all key stakeholders including business development and operations teams this will ensure etc and so you just go through this and once you have that you save and exit okay you'll have the questions that are answered it'll say what's high risk what's not things like that very simplistic it's really just a way of making a very organized report or checklist and proving that you went through it to the executive level or to the management level there so hopefully that makes sense to you it's not too complicated but there you go [Music] hey it's andrew brown from exam pro and we are looking at the aws architecture center so the architecture center is a web portal that contains best practices and reference architectures for a variety of different workloads and you can find this at aws.amazon.com/architecture so if you're looking for best practices in terms of security they have a huge section on that and they have it for pretty much every kind of category on aws or if you're looking for practical examples you can view the large library of reference architectures so here's one to make an aws q&a bot and it will have an architectural diagram but you can also deploy via cloudformation or possibly cdk and this way you can get a working example and then tweak it for your use case so this is a really great tool when you are done the aws well-architected framework and you're saying okay how do we apply it can we get more concrete examples and i wouldn't be surprised if a lot of the resources within the well-architected framework whitepaper are just pointing to the architecture center okay [Music] hey this is andrew brown from exam pro and we are taking a look at the concept of total cost of ownership also known as tco so what is tco well it is a financial estimate intended to help buyers and owners determine the
direct and indirect costs of a product or service so here is an example of tco for maybe a data center so we have hardware monitoring installation it personnel training software security licensing and taxes but that's not the limit of it those are just the examples we show here the idea of creating a tco is useful when your company is looking to migrate from on-premise to cloud and we will have a better visual here to understand how you would contrast on-premise against cloud but let's just talk about how it actually works in practice which i think gets kind of overlooked when cloud service providers are selling you on tco so the idea is that gartner wrote an article based on research where an organization had moved 2,500 virtual machines over to amazon ec2 and so what you're seeing here is that there is an additional cost that we're not considering which is the migration cost see this bar up here so the idea is that the company was paying around 400,000 and so they started to move over and as you see their costs initially went up for a short period of time here but then once that migration cost was over you can notice that they had a 55 percent reduction so it's totally possible to save money and clearly there are great savings now is it exactly what aws promises probably not and that could be the reason why they updated their tco calculator but let's now just do that contrast between the two so we have on-premise on the left and aws on the right or any cloud service provider and what i want to do is help you think about what costs people generally think about because if we have an iceberg the idea here is that these are the costs that we always think about above the iceberg and then there are these hidden costs that we just don't consider when factoring in our move and that's the idea of tco to consider all the costs not just the superficial ones and so
people say these look like teeth and that's why i added penguins and a whale here and so when we're talking about on-premise what we generally think of are software license fees and subscription fees but when you compare those against each other they might look the same aws might just look slightly cheaper or even more and so the idea is you need to then factor in everything so on-premise there's implementation configuration training physical security hardware it personnel and maintenance and on the aws side you don't have to do as much of that stuff so you just have implementation configuration and training and so aws with their old tco calculator used to make a promise of 75 percent in savings again this is going to really vary based on what your migration strategy looks like but it's totally possible you could save 75 percent or you could save 50 percent over a three-year period and there's an initial spike so that's just something you have to consider but the nice thing is that once you've moved over all the stuff here on the left-hand side will be aws's responsibility okay [Music] all right so let's take a look at capital versus operational expenditure so there's capex and opex so on the capex side the idea here is you're spending money up front on physical infrastructure and deducting that expense from your tax bill over time a lot of companies that are running their own data centers or have a lot of on-premise stuff understand what capex is because it's something that a lot of times they get tax breaks on and that's why we see a lot of people that have a hard time moving over to the cloud because they keep thinking about that money they save from the government but capex costs would be things like server costs storage network costs backups and archives disaster recovery costs data center costs and technical personnel so the idea is with capital expenses you have
to guess up front what you plan to spend okay with operational expenditure the idea here is the costs associated with an on-premise data center have shifted over to the service provider the customer only has to be concerned with non-physical costs so leasing software and customizing features training employees in cloud services paying for cloud support and billing based on cloud metrics so compute usage and storage usage and so the idea here is with operational expenses you can try a product or service without investing in equipment so basically capex is what we think about when we think of on-premise and then opex is what we think about when we're thinking about cloud or aws okay [Music] all right let's ask a very important question about cloud migration so does cloud make it personnel redundant so a company is considering migrating their workloads from on-premise to the cloud to take advantage of the savings and there is a concern among the staff that there will be mass layoffs does cloud make it personnel redundant and that's a very important question to have an answer to and this all talks about shifting your it team into different responsibilities so a company needs it personnel during the migration phase as we saw with that gartner research report there was a period of at least a year where they needed that depending on the size of your company so you're still going to need those people around a company can transition some roles to new cloud roles so a very traditional example would be you have your traditional networking roles where people have their ccna and now they're moving over to cloud networking they have a reduced workload but there are other things that they could be doing in the cloud a company may decide to take a hybrid approach so they'll always need to have a traditional it team and a cloud it team and the last one and this one you'd actually see on the exam is that a company can change employee activities from
managing infrastructure to revenue generating activities okay so the idea is that if you're a company why would you get rid of all your staff when you could just put them into revenue generation i suppose you could lay them off and some companies might do that or you could just retrain them because if that it personnel team has technical expertise i'm sure they can translate that to the cloud [Music] let's talk about the aws pricing calculator and this is a free cost estimate tool that can be used within your web browser without the need of an aws account to estimate the cost of various services and this is available at calculator.aws and the reason we're bringing this up is because there used to be a tco calculator but now this is the calculator that you use so the aws pricing calculator contains 100 plus services that you can configure for cost estimates and so you can just click through a bunch of knobs and boxes to figure out a very accurate cost so the idea here is that to calculate your tco an organization needs to compare their existing costs against their aws costs and so the aws pricing calculator can be used to determine the aws costs and obviously the organization knows its own costs so it can compare against that and the way you can get data out of this is you can export your final estimate to a csv okay [Music] hey this is andrew brown from exam pro and we are taking a look at the aws pricing calculator so to get there it's calculator.aws what you're going to do is hit create estimate and then here you have a bunch of services so you just choose what you want so you type in ec2 we're going to configure that and from there we can do a quick estimate or an advanced estimate so choose the first option for a fast and easy route to a ballpark estimate or choose the other option for a detailed estimate for complex workloads so notice down below very simplistic we hit
advanced and we get all sorts of stuff okay so it's really up to you i'm very comfortable with the advanced options so i might be running a linux machine what is my usage it's going to have daily spikes of traffic because of the use case you could say it's not busy on saturday and sunday that it has a baseline of one and a peak of two things like that then you can choose what you're using t4g i don't even know what that is but let's just say a t3.micro which is not that big and you could say we're doing on-demand because a lot of people would be doing that and you see around seven dollars a month it's not a lot of money then you're looking at your storage data in and data out okay so we can add that another thing that we might see is something like rds so we go to rds and we add postgres and not all of them have the simple and complex options sometimes they're simple so a production database we'll have one here and we're just going to say a db.t3.micro there we go a hundred that's fine we're not going to have multi-az we'll have single-az on-demand show the calculation 13 dollars a month add that to our estimate so you're kind of getting the idea there right and so we have our summary that's 391 dollars oh sorry that's over 12 months so our monthly cost is 32 dollars okay you can go back there clone the service edit it stuff like that you can export the estimate i think it goes out as a csv you can also hit share and then hit agree and so then you have a public link and if i have that link we can just see what happens if i paste it okay and it just brings them to the same estimate so there you go [Music] hey this is andrew brown from exam pro and we are taking a look at migration evaluator so it was formerly known as tso logic and then aws acquired the company and it is an estimate tool used to determine an organization's existing on-premise costs so it can compare them against its aws costs for planned cloud
migration so the idea is that you can get very very detailed information and the way it collects information is via an agentless collector that gathers data from your on-premise infrastructure to extract your on-premise costs i don't know if you can see it there but you can see that it works with a lot of different kinds of on-premise technology like vmware microsoft sql server all sorts of things okay [Music] one migration tool that we can use with aws is vm import export and this allows us to import virtual machines into ec2 so aws has import instructions for vmware citrix microsoft hyper-v windows vhd from azure and also linux vhd from azure and so the way this works is that you prepare your virtual machine image for upload and aws has a bunch of instructions for that once it is ready you're going to upload it to an s3 bucket and once it's uploaded to an s3 bucket then what you can do is use the aws cli to import your image and so that is the cli command down below and once it is processed it will generate an amazon machine image and from the ami you can then go launch your ec2 instance okay [Music] hey this is andrew brown from exam pro and we are taking a look at the database migration service which allows you to quickly and securely migrate one database to another dms can be used to migrate your on-premise database to aws and that's why we're talking about it and so here's a general diagram where you have your source database which connects to a source endpoint goes through a replication instance so that's an ec2 instance that's going to replicate the data to the target endpoint onto the target database and so we have a bunch of possible sources we have oracle database microsoft sql server mysql mariadb postgresql mongodb sap ase ibm db2 azure sql database amazon rds amazon s3 and i'm assuming these are database dumps amazon aurora and amazon documentdb and for possible targets it's very similar we've got oracle database microsoft sql server mysql mariadb postgresql redis sap ase amazon redshift amazon rds amazon dynamodb amazon s3 amazon aurora amazon opensearch service amazon elasticache for redis amazon documentdb amazon neptune and apache kafka i'm just showing you the list to give you an idea of how flexible this service is but you can tell that these are very different databases so how can it move them over right and not in all cases can it easily do it it's very easy to go from mysql to postgres but for ones that are relational to nosql this is where the aws schema conversion tool comes into play it's used in many cases to automatically convert a source database schema to a target database schema or semi-automate it so that you can figure out how to map the new schema each migration path requires a bit of research since not all combinations of sources and targets are possible and it really comes down to even the versions of these things but i just want you to know that it's an option with the database migration service and i've migrated a very large database before and it's super fast and not that hard to use so something you definitely want to remember when you're [Music] migrating hey this is andrew brown from exam pro and we are taking a look at the cloud adoption framework so this is a whitepaper to help you plan your migration from on-premise to aws at the highest level the aws caf organizes guidance into six focus areas we've got business people governance platform security and operations and this whitepaper is pretty high level so it doesn't get into granular details on how that migration should work but gives you a holistic approach and i believe that through the aws partner network there are people that specialize in using this particular framework to help organizations move over and i believe aws also has professional services through the apn but let's just kind of break down what these six
categories are we're not going to go too deep into this but let's do it so the first is the business perspective so these are business managers finance managers budget owners and strategy stakeholders so it's how to update staff skills and organizational processes to optimize business value as they move operations to the cloud you have the people perspective so human resources staffing and people managers so how to update staff skills and organizational processes to optimize and maintain the workforce and ensure competencies are in place at the appropriate time you have the governance perspective so cios program managers project managers enterprise architects and business analysts so how to update staff skills and organizational processes that are necessary to ensure business governance in the cloud and manage and measure cloud investments to evaluate the business outcomes we have the platform perspective so ctos it managers and solutions architects so how to update staff skills and organizational processes that are necessary to deliver and optimize cloud solutions and services the security perspective so cisos it security managers and it security analysts so how to update staff skills and organizational processes that are necessary to ensure that the architecture deployed in the cloud aligns to the organization's security control requirements resilience requirements and compliance requirements and we have the operations perspective so it operations managers and it support managers so how to update staff skills and organizational processes that are necessary to ensure system health and reliability during the move of operations to the cloud and then to operate using agile ongoing cloud computing best practices so this just scratches the surface of what the caf is and i think for each of these they actually have a more detailed breakdown so business is going to break down into even more fine-grained things there okay [Music] so aws has services that are free forever unlike
the free tier which is only free up to a point of usage or time and so there are a lot here this is not even the full list there's definitely more and we have iam amazon vpc auto scaling cloudformation elastic beanstalk opsworks amplify appsync codestar organizations consolidated billing aws cost explorer sagemaker systems manager there's a lot of them okay but the thing is that these services are free but some of them can spin up other resources so the services are free themselves however the resources they provision may cost you money so cloudformation which is an infrastructure as code tool could launch virtual machines and those virtual machines will cost money right opsworks can launch servers that can cost money amplify can launch lambdas that can cost money so that's something you just have to consider but yeah there you go [Music] hey this is andrew brown from exam pro and we are taking a look at the aws support plans so we've got basic developer business and enterprise and you absolutely need to know this stuff inside and out for exams they will ask you questions on this okay so basic is for email support only such as billing and account so if you think you got overbilled and that's something you should do if you've misconfigured something and you end up with a big bill just go open up a support ticket under basic for billing and they're likely to refund you but if you do have questions about billing and accounts that's what we're going to be using for everything else there is tech support and so for developer business and enterprise you're going to get email support which they'll roughly reply to within 24 hours i believe this is business hours so if you message them on saturday you might be waiting till monday for a reply okay in terms of third-party support the only one that doesn't have third-party support is developer so if you are using something like ruby on rails or azure or something that has interoperability between
aws and something else business and enterprise will absolutely help you out with it but the developer one not so much if you like to use the phone or you like to chat with people that's available at the business and enterprise tiers and this is the way i end up talking to people if you're in north america and you're calling between nine to five on a monday to friday you're likely to get somebody that is within north america if not it'll be one of the support folks from some other area so just be aware of that that can also affect the time they pick up sometimes it's five minutes sometimes it's 30 minutes to an hour it just depends on what service you're asking about and what time of day okay in terms of responsiveness for general guidance everything is 24 hours or less for developer business and enterprise if your system is impaired it's within 12 hours or less with a production system impaired it's four hours or less with a production system down it's one hour or less and for enterprise a business critical system down is less than 15 minutes so just notice who has what for these things i've definitely waited like three days on general guidance before so just take these with a grain of salt they don't really stick to these or maybe i'm just not paying enough for them to care okay in terms of getting actual people assigned to you this only happens at the enterprise level where they have their concierge team so they help your organization learn how to use aws and you can ask them questions personally and then you have a tam a technical account manager that is somebody that knows aws inside and out and they'll help you architect things and make correct choices or they'll check your bill and help you try to reduce that bill things like that okay in terms of trusted advisor checks at basic and developer you get seven checks once
you're paying for business you get all the checks the cost here for basic is zero for developer it's starting at 29 dollars a month for business it's starting at 100 dollars a month and then for enterprise it's 15,000 dollars a month and i said starting because it's dependent on your usage okay so let's just look at developer business and enterprise here because basic is not going to be applicable here so for developer it's 29 usd a month or three percent of the monthly aws usage whichever is greater on the exam they're only going to ask you like is it 29 or 100 like generally do you know the tier of expensiveness but they're not going to ask you the percentage of usage okay there are not going to be formulas here when you get into business it's a little bit different where they have it in different brackets so it's going to be 10 percent of the first 10,000 and then 7 percent of the next bracket stuff like that similar for enterprise as well so let's just do some math so that we understand how this works so if you had a monthly spend of 500
at the developer tier three percent of five hundred is fifteen dollars so they ask what is greater twenty nine dollars or fifteen dollars so you're paying twenty nine dollars if your spend is a thousand dollars that comes up to thirty dollars so you're going to end up paying thirty dollars because that's greater than twenty nine okay for business if your monthly spend is a thousand then ten percent of a thousand is a hundred dollars if your spend is five thousand then you're going to be paying five hundred if your monthly spend is twelve thousand then ten percent of the first ten thousand is a thousand and seven percent of the next two thousand is a hundred and forty so your total bill is 1,140 usd we're not going to do a calculation for enterprise because it works the same as business but hopefully that gives you an idea there okay [Music] hey it's andrew brown from exam pro and we are taking a look at the technical account manager also known as a tam and these provide both proactive guidance and reactive support to help you succeed with your aws journey so what does a tam do and this is straight from an aws job posting what they would do is build solutions provide technical guidance and advocate for the customer ensure aws environments remain operationally healthy while reducing costs and complexity develop trusting relationships with customers understanding their business needs and technical challenges using your technical acumen and customer obsession you'll drive technical discussions regarding incidents trade-offs and risk management consult with a range of partners from developers through to c-suite executives collaborate with aws solutions architects business developers professional services consultants and sales account managers proactively find opportunities for customers to gain additional value from aws provide detailed reviews of service disruptions metrics and detailed pre-launch planning being a part of a wider enterprise support team providing consultative expertise
solve a variety of problems across different customers as they migrate their workloads to the cloud and uplift customer capabilities by running workshops and brown bag sessions brown bag sessions being sessions that occur at lunchtime something you can learn in 30 minutes to an hour and so one thing that's really important to understand is that tams follow the amazon leadership principles especially about being customer obsessed and we do cover the amazon leadership principles somewhere in this course and tams are only available at the enterprise support tier so hopefully that gives you an idea of what a tam does [Music] hey this is andrew brown from exam pro in this follow along i'm going to show you aws support and in order to use aws support or to change your level of support you're going to need to be logged into the root account i should say you can use support with iam users but if you want to change the support plan you're going to have to be the root user so in the top right corner i'm going to support and notice here on the left-hand side right now i have a basic plan and so before we look at changing our plan i'm just going to go create a case and we're going to take a look at some of the options that are open to us so we have account and billing support service limit increase and technical support notice this is grayed out so we cannot select anything here i can go here and increase our service limits and this is something that you might have to do pretty early on in your account you might say hey i need more of something like ec2 or a very common thing is ses so for ses you might say hey i need to send this amount of emails etc okay so if we go over to account and billing support we can go here and ask anything we want so if it's about the free tier i could ask a general question about getting started saying i want to know what is free on aws and you can attach three attachments there and you can choose via web
and phone which is really nice but today i'm just going to do web here and submit that just to show you that as an example and so what that is going to do is open a case and then aws will probably respond in 24 to 48 hours it just depends on whether it's the weekend or not because it's based on business hours of course so now that we have an understanding of basic let's go take a look at what the other tiers look like so we have basic developer business and enterprise enterprise being extremely expensive developer being affordable and then business being affordable for businesses so i would say developer is okay it gives you better support but it's all via email and so if you really want good support you're going to have to pay for the business one and that's the one that i use quite a bit so if i change my plan i'm going to go over to business and this is going to cost me 93 bucks just to show you here today so i'm going to go ahead and click that and so it's now processing and what's going to happen is i'm going to have to wait for this basic to switch to business because if i go to the case here it hasn't happened as of yet so notice i cannot select this so i'm going to see you back here in maybe four or five minutes or however long it takes and we'll take a look then okay great so after a few minutes it says my plan is now business and what i can do is go ahead and create a new case and so i can go over to technical support and ask a question so if i was having issues with anything it doesn't matter what i could go over to ec2 linux and then i could choose my category so i could say i'm having an issue with systems manager and a lot of times they like you to provide the instance id it's going to change based on what service you choose here but you'll get asked for different information i'll just say i need help with [Music] logging into my ec2 instance managed by ssm so i could say i created an ec2 instance
and i am attempting to access the instance via session manager but it is not working i think i have a role issue and then i'm just going to go down here and say this is not a real question i am filming a demo video for a tutorial on how to use support okay and so once we do that we have the option of web chat and phone so if you use phone you're going to enter your phone number and they're going to call you back usually you'll be on hold for anywhere from five minutes to an hour it just depends usually it's within 15 minutes so it's very good of course it depends on the time of day and your location things like that and the service because there are different support engineers for different types of services and the balance of those is different but generally chat is pretty good so i can go here and i'm just going to hit submit and it's going to open a chat box and so you just wait okay and sometimes it's super fast and sometimes it takes minutes okay so we are going to just sit here for a bit and i'll just pop back here when there is somebody to talk to okay okay so after waiting a little while it looks like we've been connected here so it took a bit of time so we're just going to say hello hi umair this is andrew brown i am recording a video to teach people how to use aws and i wanted to show them how aws support works so i'm just showing them how the chat system works say hello and hopefully they'll appreciate it or they won't it just doesn't really matter we'll give them a moment there we go that's it thanks for your help okay so that's pretty much it so there's nothing really special about that but the idea is when you are typing with them it will appear in the correspondence there so i'm just going to end the chat okay and then i'm just going to mark that case as resolved sometimes they will ask you to resolve it if i go to cases i probably have some previous ones here and i have a lot but i don't know why they don't
all show up here so you can see this one is pending and this one is resolved i go back to this one and you can see that the history of the conversation is kept and you can go back and forth with the people there yeah that's pretty much it you can also do screen sharing so they might send you a request to go on zoom or download a piece of software that shares your screen and so that is another option as well so they can get pretty hands-on to help you with your problems there but that's pretty much all i wanted to show you with support i'm going to downgrade this and i'm not sure if they're going to give me back my money sometimes they'll prorate it for you but i'm going here to go back to basic it says we will also refund your credit card directly for the remaining fees on your old plan which you previously paid but you're obligated to pay a minimum of 30 days of support each time you register so i'm not going to get any money back which is totally fine because i just wanted to show you how that works but business support is definitely worth it and that's it [Music] so the aws marketplace is a curated digital catalog with thousands of software listings from independent software vendors you can easily find buy test and deploy software that already runs on aws the product can be free to use or can have an associated charge the charge becomes part of your aws bill and once you pay aws marketplace pays the provider the sales channel for isvs and consulting partners allows you to sell your solutions to other aws customers products can be offered such as amis cloudformation templates software as a service offerings web acls and aws waf rules so it sounds great if you want to sell here i think you need like a us bank account to do it and sometimes the aws marketplace is just part of aws so like when you're using the ec2 marketplace you are technically using the aws marketplace but they also have a dedicated page for it so it's
integrated with some services and it's also standalone okay [Music] hey this is andrew brown from exam pro in this follow along we're going to take a look at the adabus marketplace so what i want you to do is go to the top and type in marketplace and that will bring us over to here the marketplace can be found in a variety of different places on the platform here you can see that uh previously it was using something called guacamole bastion host to launch a server but the idea is that um you can discover products and subscriptions that you might want to utilize so if i go over here there's a variety of different things and so it could be like i want to have something like a firewall that might be something that we might be interested in so we can search there and there's like bring your own license firewall so maybe you have a license with this and you want to run it on an ec2 instance something like that again it's not like super complicated uh what's going on here but a lot of times you know when you're using services you're accessing the marketplace anyway so like when i'm launching an ec2 instance noticeable on the left-hand side is 8-bit marketplace and so i don't have to go to the marketplace there i can just kind of like check out the thing i want and that's pretty much all there really is to it okay so you know hopefully that makes sense [Music] well let's take a look here at consolidated billing so this is a feature of abuse organizations that allows you to pay for multiple accounts via one bill so the idea here is we have a master account and we have member accounts and i'm pretty sure that we probably call this root account now i don't think account might be a dated term but it's still showing up in the documentation the idea is that if you have member accounts within your organization they're all going to be consolidated under the single account if you have an account outside of your organization you know this is not going to give you this is going to 
be basically a separate bill as if it's like a standalone organization or what have you okay so for billing aws treats all accounts in an organization as if they were one account you can designate one master or root account that pays the charges for all the other member accounts consolidate billing is offered at no additional cost you can use cost explorer to visualize usage for consolidated billing which we can see i have the icon here you can combine the usage across all accounts in the organization to to share the volume pricing discount which we did cover in this course separately if you want an account to be able to leave the organization you do have to attach it to a new payment method so if let's say you had an account and you want to give it to your friend or whatever they have to hook up their cred their credit card but you can totally have an account leave an organization but you have to deal with that billing aspect okay [Music] all right so there's a really cool way to save an aws and that's through volume discounts and it's available for many services the more you use the more you save is the idea behind it um and so consolidating billing lets you take advantage of volume discounts this is a particular feature of database organization so if you do not have the orgs turned on you're not going to be able to take advantage of that okay so one example would be something like data transfer where it is billed for the first 10 terabytes at 17 cents or sorry point 17 cents and then the next 40 terabytes it will be at point 13 cents okay so if we had two accounts um such as odo and dax and they're not with an ableist organization we can calculate those and see what they are unconsolidated and just so you know one terabyte equals 1.024 gigabytes and that's what we're going to see in these calculations so for odo uh you know if he has four terabytes and that is uh we calculate the gigabytes there we times it by uh the um set value there we're going to get 696 
dollars okay for dax we're going to end up with uh about 13.92 there and so if we were to add those up the bill would come out to 2088 okay so the idea is that there's an organization and they like your company and they created two accounts but they're just not within an organization by having them in the organization you're gonna save um about almost eighty dollars there so um that is a reason why you'd want to use volume discounts okay [Music] hey this is andrew brown from example and we're taking a look at abyss trusted advisor so trusted advisor is a recommendation tool which automatically and actively monitors your aws accounts to provide actual recommendations across a series of categories so this is what it looks like i personally prefer the older dashboard but this is what they have now and you can see along the side we have a bunch of categories and then we have some checks here saying uh you know what are we meeting what are we not and you can go in and read each one and they'll tell you so much information they'll even show you like what things are not meeting that requirements in some case you can easily remediate by pressing a button not in all cases but the thing with the ambush trust advisor is think of its trusted advisor like an automated checklist of best practices on aws and they kind of map to the pillars of the well-architected framework not exactly but pretty close but there are five categories of aws trusted advisor so we have cost up to imagine station how much money can we save performance so how can we improve performance security how can we improve security fault tolerance how we can we prevent a disaster or data loss and service limits so are we going to hit the maximum limit for a service and so the next thing we need to discuss is um there is a variation of the amount of checks that are available to you based on your support plan so you know if you're using basic or developer you have seven trusted advisor checks and if you have 
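The volume-discount arithmetic above is easy to sanity-check in code. Here is a minimal sketch in Python, using the tier boundaries and rates from the example ($0.17/GB for the first 10 TB, $0.13/GB for the next 40 TB); the 4 TB and 8 TB account sizes are the ones from the slide.

```python
# Tiered data-transfer pricing from the volume-discount example:
# first 10 TB at $0.17/GB, next 40 TB at $0.13/GB.
TIERS = [
    (10 * 1024, 0.17),  # tier size in GB, price per GB
    (40 * 1024, 0.13),
]

def transfer_cost(gb: float) -> float:
    """Cost of transferring `gb` gigabytes under the tiered schedule."""
    cost = 0.0
    for tier_size, rate in TIERS:
        billed = min(gb, tier_size)
        cost += billed * rate
        gb -= billed
        if gb <= 0:
            break
    return round(cost, 2)

# Unconsolidated: each account starts the tiers from zero.
odo = transfer_cost(4 * 1024)        # Odo's 4 TB
dax = transfer_cost(8 * 1024)        # Dax's 8 TB
separate = odo + dax

# Consolidated billing: usage is combined, so the last 2 TB of the
# combined 12 TB falls into the cheaper $0.13 tier.
combined = transfer_cost(12 * 1024)
savings = round(separate - combined, 2)
```

Running it reproduces the figures above: $696.32 and $1,392.64 separately ($2,088.96 total), versus $2,007.04 consolidated, for a saving of $81.92.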
Hey, this is Andrew Brown from Exam Pro, and we're taking a look at AWS Trusted Advisor. Trusted Advisor is a recommendation tool which automatically and actively monitors your AWS account to provide actionable recommendations across a series of categories. This is what it looks like; I personally prefer the older dashboard, but this is what they have now. You can see a bunch of categories along the side, and then some checks saying what we are and aren't meeting. You can go in and read each one, and they'll give you a lot of information; they'll even show you which resources are not meeting the requirement, and in some cases you can remediate by pressing a button (not in all cases). Think of Trusted Advisor as an automated checklist of best practices on AWS; the checks roughly map to the pillars of the Well-Architected Framework, not exactly, but pretty close. There are five categories of AWS Trusted Advisor checks: cost optimization (how much money can we save?), performance (how can we improve performance?), security (how can we improve security?), fault tolerance (how can we prevent a disaster or data loss?), and service limits (are we going to hit the maximum limit for a service?).

The next thing to discuss is that the number of checks available to you varies based on your support plan. With Basic or Developer you have seven Trusted Advisor checks, and with Business or Enterprise you have all the Trusted Advisor checks. The ones that come for free are: MFA on root account; security groups, specific ports unrestricted; Amazon S3 bucket permissions; Amazon EBS public snapshots; Amazon RDS public snapshots; IAM use (this is just about discouraging use of the root account); and service limits, where all the service-limit checks are free. It's a bit odd that they call it "seven checks", because if you counted all the service-limit checks it would obviously be a much larger number, but notice that one through six are all security checks, so at the free tiers you're really only getting the security category plus service limits.

I also want to go over a bunch of the available checks out there. This is probably not the full list, but it'll give you a general idea of what to expect under each category. For cost optimization it could be things like idle load balancers (if you have load balancers you're not using, you're paying for them, so get rid of them) or unassociated Elastic IP addresses, since every IP that's not associated is also costing you money. Under performance there's high utilization of Amazon EC2 instances, which might mean you need to resize them. Under security we saw MFA on root account, a very popular one, and making sure you turn on key rotation. Under fault tolerance it could be making sure you have backups enabled on your Amazon RDS database; maybe that's turned off. For service limits there are just a ton of them, and ones that might be pertinent to you are the VPC or EC2 limits.

Hey, this is Andrew Brown from Exam Pro, and we're going to take a look at Trusted Advisor in the console. Go to the top and type in "Trusted Advisor", and once you're there you'll notice on the left-hand side we have cost optimization, performance, security, fault tolerance, and service limits. Right now there are no recommended actions because there's not much going on in this account, and since this account has the free Basic level of support, we're not going to have all the checks, but if we go in we can still see what they do; we have performance, security, things like that, and they generally all work the same way. If you expand here, it says "Amazon EBS Public Snapshots: checks the permission settings for your EBS volume snapshots and alerts you if any snapshots are marked as public," and if you scroll down, any snapshots with an issue would be listed right here. Down below we see a check for buckets in Amazon S3 that have open access permissions or allow access to any authenticated AWS user: it's yellow if the ACL allows list access for everyone, if a bucket policy allows any kind of open access, or if bucket policy statements grant public access. So maybe we can get this one to trigger.

What I'm going to do is go over to S3 and make a bucket that has full public access. I'll create a new bucket called "my-exposed-bucket", scroll down, uncheck the block-public-access box, acknowledge the warning (that's totally fine), and create the bucket. Now I have a bucket that is 100% exposed. If we go back to Trusted Advisor and refresh (I'm not sure how fast it will show up), the check covers "the bucket ACL allows upload/delete for everyone", "Trusted Advisor does not have permission to check the policy", and "bucket policy statements that grant public access". So what we can try to do is write a bucket policy that grants public access. I'm not writing these every single day, but I'm sure we can figure it out; search for "s3 bucket policy public access public read". I'm going to copy the example for granting read-only permission to anonymous users. I don't recommend you doing this; I'm only doing it to see if we can get Trusted Advisor to trigger, because I don't want you to do this, forget about it, and then have a serious issue. The principal is set to anybody, so anyone can read; the action is s3:GetObject; and then the resource is the particular bucket in question, my-exposed-bucket. Scroll down and save the changes. Now this bucket is publicly accessible.

Go back to Trusted Advisor and refresh to see what we can see. It could be that it just takes some time to populate, so I'm going to hang tight for a little bit. Oh, there we go, it's showing up, and we have a yellow warning symbol saying, hey, there's a problem here. If we go back to the dashboard, this one shows up under "investigation recommended". In some cases you can do remediation right from here, or at least check a box to ignore a finding; I could swear there's remediation for some of these. I believe you can also set up alerts with recipients for particular things, so if there's a security issue it could email a particular person on your team and they could deal with it. But that's pretty much it, so I'm going to go ahead and delete this bucket since I'm all done with it: delete, type "my-exposed-bucket" to confirm, and that is it, okay.
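The policy pasted during the demo follows the standard public-read pattern. Here is a sketch of what it looks like built up in Python; the bucket name is the throwaway one from the demo, and the boto3 call is left as a comment since actually applying it would expose the bucket.

```python
import json

# Public-read bucket policy like the one used in the demo. The bucket
# name "my-exposed-bucket" is just the demo example; don't apply a
# policy like this to a real bucket unless you truly mean to.
bucket = "my-exposed-bucket"
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",                      # anybody, even anonymous users
        "Action": ["s3:GetObject"],            # read-only
        "Resource": [f"arn:aws:s3:::{bucket}/*"],
    }],
}
policy_json = json.dumps(policy)

# With boto3 you would apply it with something like:
#   boto3.client("s3").put_bucket_policy(Bucket=bucket, Policy=policy_json)
```

This is exactly the combination the Trusted Advisor check flags: a `Principal` of `"*"` on a bucket policy statement grants public access.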
Let's cover the concepts of service level agreements, also known as SLAs. An SLA is a formal commitment about the expected level of service between a customer and a provider. When a service level is not met, and the customer has met its obligations under the SLA, the customer is eligible to receive compensation, such as financial or service credits. When we talk about SLAs, we also talk about SLIs: a service level indicator is a metric or measurement that indicates what level of performance a customer is receiving at a given time. An SLI metric could be uptime, performance, availability, throughput, latency, error rate, durability, or correctness. And if we're talking about SLIs, then we're also talking about SLOs, service level objectives: the objective the provider has agreed to meet. SLOs are represented as a specific target percentage over a period of time; an example would be "availability of 99.99% over a period of three months."

Let's talk about how target percentages are commonly represented. Very common ones are 99.95% and 99.99%, and then there's 99.999999999%, a 99 followed by nine more nines, which we commonly call "eleven nines". So if somebody says they guarantee eleven nines, it means 99.999999999%.

Now let's take a look at AWS service level agreements; there are a lot of them, and I just want to show you a few services to give you an idea of how they work. On the exam they're not going to ask you something like "what's DynamoDB's SLA for global tables?", but we should go through this because it's good practice. Let's take the DynamoDB SLA: AWS will use commercially reasonable efforts to make DynamoDB available with a monthly uptime percentage, for each AWS region during any monthly billing cycle, of at least 99.999% if the Global Tables SLA applies, or 99.99% if the Standard SLA applies. In the event DynamoDB does not meet the service commitment, you'll be eligible to receive the service credits described below. So we have a table of monthly uptime percentage against service credit percentage, for global tables and standard tables. If uptime is less than 99.999% but equal to or greater than 99.0%, you get 10% of what you spent back as service credits; if it drops below 99.0% but stays at or above 95.0%, you get 25% back; and if it's less than 95%, you get 100% back. You get the general idea; each SLA's tiers are slightly different.

Now let's look at compute. The compute SLA applies across a bunch of compute services, probably because they all use EC2 underneath: EC2, EBS, ECS, and EKS. AWS makes two SLA commitments for the included services: a region-level SLA that governs included services deployed across multiple AZs or regions, and an instance-level SLA that governs individual Amazon EC2 instances. Again we have monthly uptime percentage against service credit percentage, at the region and instance levels; it's the same idea, changing based on what was met. Then we'll look at one more, RDS, the Relational Database Service: AWS will use commercially reasonable efforts to make Multi-AZ instances available with a monthly uptime percentage of at least 99.95% during any monthly billing cycle, and again, if they don't meet that, you get service credits back, which basically equal USD on the platform. Notice that with compute the SLA covers a bunch of services, while with DynamoDB it's based on particular features like the global versus standard tables SLAs. We didn't do S3 because it's just too complicated, but my point is that SLAs vary, so you have to look them up per service.
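To make the SLA percentages concrete, here is a small sketch that converts a monthly uptime target into allowed downtime, and encodes the DynamoDB-style credit tiers described above. The 30-day month is a simplifying assumption to keep the numbers round.

```python
# SLA arithmetic: allowed downtime for an uptime target, and the
# DynamoDB-style service-credit tiers from the slide.
# Assumes a 30-day month for simplicity.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

def allowed_downtime_minutes(sla_percent: float) -> float:
    """Downtime a monthly uptime target still permits, in minutes."""
    return round((1 - sla_percent / 100) * MINUTES_PER_MONTH, 2)

def dynamodb_credit(uptime_percent: float, global_tables: bool = True) -> int:
    """Service-credit %, per the tiers shown for the DynamoDB SLA."""
    commitment = 99.999 if global_tables else 99.99
    if uptime_percent >= commitment:
        return 0     # commitment met, no credit
    if uptime_percent >= 99.0:
        return 10
    if uptime_percent >= 95.0:
        return 25
    return 100
```

For example, a 99.99% monthly target allows only about 4.32 minutes of downtime in a 30-day month, and an actual uptime of 97% would land in the 25%-credit tier.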
Hey, this is Andrew Brown from Exam Pro, and we're taking a look at Amazon's service level agreements in practice. The way you find SLAs is by pretty much just searching "sla" plus whatever it is: if you're looking for compute, type "compute sla aws", or for a particular service, something like "sagemaker sla aws". I don't think there's a generic SLA page, or at least I don't know where it is; I always just search "sla" to find it, and from there you can read through and find the things that matter to your business.

Let's take a look at the Service Health Dashboard. The Service Health Dashboard shows the general status of AWS services. It's really simple: you can check by geographic area (North America, Europe, etc.), and what you'll see is an icon that says whether each service is in good standing, with details like "service is operating normally". Notice they also have an RSS feed. The reason I'm covering the Service Health Dashboard is that I want to talk about the Personal Health Dashboard, and because they're both called health dashboards it's confusing, so I wanted to tell you about this one first.

So we saw the Service Health Dashboard; now let's jump into the AWS Personal Health Dashboard. This is what it looks like, and it provides alerts and guidance for AWS events that might affect your environment. All AWS customers can access the Personal Health Dashboard. It shows recent events to help you manage active events, and shows proactive notifications so you can plan for scheduled activities. You can use these alerts to get notified about changes that can affect your AWS resources, and then follow the guidance to diagnose and resolve the issue. It's very similar to the Service Health Dashboard, but personalized for you. I don't see it crop up very often, but if you need to create alerts or be reactive to things happening within your AWS account, this is where you'd do it.

There's a team called AWS Trust & Safety that specifically deals with abuse occurring on the AWS platform, and I'm going to list the cases where you'd want to contact them as opposed to support. First is spam: you're receiving unwanted emails from an AWS-owned IP address, or AWS resources are being used to spam websites or forums. Port scanning: your logs show that one or more AWS-owned IP addresses are sending packets to multiple ports on your server, and you believe this is an attempt to discover unsecured ports. DoS attacks: your logs show that one or more AWS-owned IP addresses are being used to flood ports on your resources with packets, and you believe this is an attempt to overwhelm or crash your server or the software running on it. Intrusion attempts: your logs show that one or more AWS-owned IP addresses are being used to attempt to log in to your resources. Hosting prohibited content: you have evidence that AWS resources are being used to host or distribute prohibited content, such as illegal content, or copyrighted content without the consent of the copyright holder. Distributing malware: you have evidence that AWS resources are being used to distribute software that was knowingly created to compromise or cause harm to the machines it's installed on. In any of these cases you're not going to go to support; you're going to open an abuse ticket by contacting abuse@amazonaws.com or filling out the AWS abuse form. And this applies whether the abuse is coming from an outside AWS account or even your own: if you think someone has compromised your account and it's being used in any of these ways, this is what you're going to do.
Hey, this is Andrew Brown from Exam Pro, and we're looking at reporting abuse to AWS. As we were saying, AWS has the AWS Trust & Safety team, and if you find there's an issue, you report it via the email abuse@amazonaws.com or via the "Report Amazon AWS abuse" form. You go down, sign in, and put in your email, first name, last name, organization, phone number, the source IP, and the details; you can even select the type of abuse, whether it's this kind or that kind. It's very straightforward, and that's pretty much it.

Hey, this is Andrew Brown from Exam Pro, and we are taking a look at the AWS Free Tier, which allows you to use AWS at no cost. When we say "free tier", there's the idea of special offerings for the first 12 months after sign-up, there's free usage up to a certain monthly limit forever, and then there are services that are inherently free, which we have a totally separate slide on. Let's talk about the free-tier stuff; this is absolutely not the full list, but it gives you a good overview of what's free. For EC2, which you'd use for a web server, you get a t2.micro for 750 hours per month for one year. There are about 730 hours in an average month, so that means you could have a server running the entire month for free, plus an additional server for a bit as well. For RDS, the Relational Database Service, with either MySQL or Postgres, you get a db.t2.micro for 750 hours per month for free, so there's your free database, and you'd be surprised how far you can get with a db.t2.micro; even a medium-sized startup can run on one with no problems. Then you have your Elastic Load Balancer at 750 hours per month for one year. These pretty much all cost around $15 a month, so that's roughly $15, $30, $45 saved month over month, free for a year. Then there's Amazon CloudFront, where you'd cache your home page, your videos, things like that: 50 GB of data transfer out for the year. Amazon Connect gives you a toll-free number and 90 minutes of call time per month for one year. With Amazon ElastiCache you could launch a Redis or Memcached server: 750 hours on a cache.t3.micro for a year. The Elasticsearch Service, for full-text search: again, 750 hours per month for one year. Pinpoint, for campaign marketing email: you can send to 5,000 targeted users per month for one year. SES, the Simple Email Service, for the transactional emails you send out from your web app: 62,000 emails per month, forever. AWS CodePipeline: one pipeline free. AWS CodeBuild, for building out projects: 100 build minutes per month, forever. AWS Lambda, serverless compute: 1 million free requests and 3.2 million seconds of compute time per month, for free. I like to highlight these ones because for a traditional architecture you're always going to have a web server, a database, and a load balancer, and you might have CloudFront in there as well. Again, there's a huge list, and this doesn't even scratch the surface of what's free on AWS.
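A quick way to convince yourself that a 750-hour monthly allowance really covers an always-on instance is to count the hours in each month. This uses only the Python standard library; 2023 is an arbitrary non-leap year.

```python
# Sanity check on the 750-hour free-tier allowances: even the longest
# month has fewer than 750 hours, so one instance can run non-stop.
from calendar import monthrange

# monthrange(year, month) returns (first_weekday, days_in_month)
hours_in_month = {m: monthrange(2023, m)[1] * 24 for m in range(1, 13)}

longest = max(hours_in_month.values())        # 744 hours in a 31-day month
average = sum(hours_in_month.values()) / 12   # 730 hours, the figure quoted

assert longest <= 750  # the allowance covers a full month, with hours to spare
```

The 6 leftover hours in a 31-day month are what lets you briefly run that "additional server" mentioned above without leaving the free tier.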
Hey, this is Andrew Brown from Exam Pro, and we are taking a look at AWS promotional credits, which are the equivalent of USD on the AWS platform. AWS credits can be earned several ways: joining the AWS Activate startup program, winning a hackathon, participating in surveys, or any other reason AWS wants to give credits out. Once you have a promotional code, you click the "Redeem credit" button in the billing console, enter it in, and your credits will be shown there. You can monitor them via AWS Budgets or Cost Explorer, and probably even billing alarms. AWS credits generally have an expiry date attached, anywhere from a few months to a year. Credits can be used for most services, but there are exceptions where AWS credits cannot be used, like purchasing a domain via Route 53, because the domain costs money outside of AWS's own infrastructure and services, so for things like that you're not going to be able to use credits.

The AWS Partner Network, also known as the APN, is the global partner program for AWS. Joining the APN opens your organization up to business opportunities and allows exclusive training and marketing events. When joining the APN you can either be a consulting partner, where you help companies utilize AWS, or a technology partner, where you build technology on top of AWS as a service offering. A partner belongs to a specific tier: Select, Advanced, or Premier. It's free to sign up, but you're not going to be able to do much until you start committing to an annual fee, a certain amount of money to be part of a tier, and it starts in the thousands; I think the first tier is something like one or two thousand dollars, and it gets more expensive as you go up the tiers. You also have to meet particular knowledge requirements. This could be holding particular AWS certifications at the foundational or associate level, or it could be APN-exclusive training that's only available to partners, covering things like how to talk to customers, communication, things like that. You can get back promotional AWS credits, so if you say "oh man, I spent two thousand dollars just getting into the APN," the idea is that you can generally get that spend back on AWS; it's a commitment, so if you pay two thousand dollars, you're committing to keep using AWS. I'm not showing the annual fee commitments and the promotional credits you get back here, just because they've changed them a couple of times on me and I don't want this slide to go stale if they change again, so you'll have to look up the current numbers. You also get unique speaking opportunities in the official AWS marketing channels like the blogs or webinars, and being part of the APN is a requirement to be a sponsor with a vendor booth at AWS events. When you go to re:Invent or any AWS event, all the vendors are part of the APN: they've paid their fee, and then an additional fee to get their booth. The AWS Partner Network is very good for helping you find new business and connecting with other people building workloads on AWS, so hopefully that gives you an idea of how it works.

Hey, this is Andrew Brown from Exam Pro, and we are taking a look at AWS Budgets. AWS Budgets gives you the ability to set up alerts if you exceed, or are approaching, your defined budget. You can create cost, usage, or reservation budgets, tracked at the monthly, quarterly, or yearly level with customizable start and end dates. Reservation alerts support EC2, RDS, Redshift, and ElastiCache reservations. The idea is you choose your budget amount, say a hundred dollars, and it'll even show you the last amount if you're resetting an existing budget. You can also base a budget on a different kind of unit; if you wanted it based on running hours of EC2, you could totally do that. AWS Budgets can be used to forecast costs, but it's limited compared to Cost Explorer or doing your own analysis with AWS Cost and Usage Reports along with business intelligence tools.
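As a sketch of the API side of this, here is roughly how a monthly $100 cost budget with an 80% alert could be described for the AWS Budgets `CreateBudget` call. The account ID and email address are placeholders, and the call itself is left commented out because it needs real credentials.

```python
# Parameters for a monthly $100 cost budget that emails you at 80% of
# actual spend. Account id and email address are placeholder values.
budget_params = {
    "AccountId": "111111111111",
    "Budget": {
        "BudgetName": "monthly-cost-budget",
        "BudgetLimit": {"Amount": "100", "Unit": "USD"},
        "TimeUnit": "MONTHLY",          # could also be QUARTERLY or ANNUALLY
        "BudgetType": "COST",           # or USAGE, RI_UTILIZATION, etc.
    },
    "NotificationsWithSubscribers": [{
        "Notification": {
            "NotificationType": "ACTUAL",          # or FORECASTED
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,                     # percent of the budget
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [
            {"SubscriptionType": "EMAIL", "Address": "you@example.com"},
        ],
    }],
}

# With credentials configured, you would create it like this:
# import boto3
# boto3.client("budgets").create_budget(**budget_params)
```

The same structure is what the console builds for you when you click through the budget-creation form.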
Budgets can be based on a fixed cost, or you can plan your cost upfront based on your chosen level. They can be easily managed from the AWS Budgets dashboard or via the AWS Budgets API. You get notified by providing an email or a chatbot, plus a threshold for how close to the current or forecasted budget you want the alert. In the console you'd see a list of budgets, current versus forecasted, the amount used, things like that; you can see your budget history, download a CSV, and it shows the cost history right inline there (which is hard to see on this slide). Your first two budgets are free, so there's no reason not to set a budget when you first get into AWS, and each budget after that costs about $0.02 per day, so roughly $0.60 USD per month per budget; they're very cheap to use. And with a limit of 20,000 budgets, you're going to be in good shape.

Let's take a look at AWS Budget Reports, which are used alongside AWS Budgets to create and send daily, weekly, or monthly reports monitoring the performance of your AWS budgets, emailed to specific addresses. It's not too complicated: you create the report, choose your frequency, and choose the emails you want. An emailed report is a more convenient way of staying on top of budgets, since it's delivered to your inbox instead of you logging in to the management console; it's for those people who just can't be bothered to log in.

Let's take a look at AWS Cost and Usage Reports, which generate a detailed spreadsheet enabling you to better analyze and understand your AWS costs. This is what it looks like; when you turn this feature on, it places the report in an S3 bucket. You could use something like Athena to turn the report into a queryable database, since it's very easy to consume S3 CSVs into Athena, and you could use QuickSight to visualize your billing data as graphs (QuickSight is a business intelligence tool similar to Tableau or Power BI). You could also ingest it into Redshift. The idea is that when you turn it on, you choose how granular you want the data to be: hourly, daily, or monthly. If you turn on daily, you'll even be able to see spikes in cost for EC2 instances, which is kind of nice. The report will contain cost allocation tags, which we have a separate slide on, and the data is stored as either a zipped CSV or in Parquet format, depending on how you want it.

Let's talk about cost allocation tags. These are optional metadata that can be attached to AWS resources, so that when you generate a Cost and Usage Report you can better analyze your data. What you'd have to do is make your way over to the cost allocation tags page and activate the tags you want to show up. There are two types of tags: user-defined, where whatever you've previously tagged will show up for activation (so if you tagged resources with "project", you turn on "project"), and a large list of AWS-generated tags that you can turn on. One note: if documentation mentions "cost allocation reports", that's just what Cost and Usage Reports used to be called; some of the documentation is a bit old.

You can also create your own alarms in CloudWatch to monitor spend; these are commonly called billing alarms. A billing alarm is just a regular CloudWatch alarm focused on spend, but in order to use one you have to turn on billing alerts first; then you go to CloudWatch alarms, choose Billing as your metric, and set the alarm however you'd want. Billing alarms are much more flexible than AWS Budgets and are ideal for more complex use cases for monitoring spend and usage, so you just have to decide what you want to do.
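As a rough sketch, here is the shape of the parameters a billing alarm passes to CloudWatch's `PutMetricAlarm`. The SNS topic ARN and the $100 threshold are placeholder values, and remember the billing metric only exists in us-east-1 and only after billing alerts are enabled in the billing preferences.

```python
# Parameters for a CloudWatch billing alarm that fires when estimated
# monthly charges pass $100. The SNS topic ARN is a placeholder.
alarm_params = {
    "AlarmName": "monthly-spend-over-100-usd",
    "Namespace": "AWS/Billing",              # billing metrics live here
    "MetricName": "EstimatedCharges",
    "Dimensions": [{"Name": "Currency", "Value": "USD"}],
    "Statistic": "Maximum",
    "Period": 21600,                         # 6 hours; the metric updates a few times a day
    "EvaluationPeriods": 1,
    "Threshold": 100.0,
    "ComparisonOperator": "GreaterThanThreshold",
    "AlarmActions": ["arn:aws:sns:us-east-1:111111111111:billing-alerts"],
}

# With credentials configured (and billing alerts enabled), roughly:
# import boto3
# boto3.client("cloudwatch", region_name="us-east-1").put_metric_alarm(**alarm_params)
```

Because this is an ordinary CloudWatch alarm, you get all the usual alarm machinery (SNS fan-out, composite alarms, and so on), which is what makes billing alarms more flexible than AWS Budgets.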
the only way to do it and so this is the way i'm used to doing it and i still do it this way today but you know both options are valid and just have to decide what is your use case okay [Music] let's take a look at about cost explorer which lets you visualize understand and manage your aws costs and usage over time so here's a big graphic of aws cost explorer and you can specify time and range and aggregation it has a lot of robust filtering what's really nice is that they have a bunch of default reports for you so i'm just gonna get my pen tool just to show you where that button is it's over uh here uh if you can see my marker there but but you know you can look at things like monthly cost by service monthly cost by linked account daily cost savings marketplace r utilization so there's a bunch there you could also notice that you can create your own report so if you do find something that you like you can save it for later um you can you could have access to forecasting here so you get an idea of the future costs and whether it's been it's gone up or down just to kind of zoom in on some of those filtration options you can choose um either monthly or daily level of of how you want the data to be grouped together and you have a lot of filter control so if i want to just have ec2 instances for a particular region then i can get that filtered information over here and you can see you have a breakdown of the different types so it's very detailed and cost explorer shows up in us east one i'm pretty sure if you click on class explorer we'll just switch you over to that region but just understand that's where it lives okay [Music] hey this is andrew brown from exam pro and in this video i want to show you aws cost explorer so what we'll do is go to the top here and actually on the right hand side we're going to click on the right and go to my billing dashboard and from there on the left hand side we're going to look for cost explorer and then click launch cost explorer 
and this is where we get to the AWS Cost Management dashboard, which is where we find Savings Plans, Reservations, things like that. On the left-hand side, click on Cost Explorer and you get this nice chart, and the idea is you can change it from monthly to daily if you prefer, okay. You can change the scope here; maybe we don't need six months, we can just go back three months so there's less data. It is a bit delayed when I'm clicking here, which could just be because I'm doing daily instead of monthly, so you have to be a little patient when using this interface. You can change it to a stacked or line graph and kind of see the details there; it's not always clear what "Others" is, things like that. You can drill down, and there are ways of applying filters; I always forget how to do this because it brings everything in, so you have to hit Clear All first, I think, and then click into it. So if you wanted to pick a particular service, we could go here, type in EC2, select EC2 Instances, and apply that filter, and now we can see exactly that cost; or we could choose maybe just RDS, okay. So that could be useful for you, but sometimes it's not always clear, and what I recommend is to just go back to your billing dashboard and from there go to Bills, okay. Bills is really, really useful, because it shows you exactly every single little service that you're being billed for; you can expand it and see exactly where the spend is, and if you have other accounts, you can go into this side here as well and find spend that way. But Cost Explorer is very useful too; it's just useful in a different way, okay. So there you go. [Music] Hey, this is Andrew Brown from ExamPro, and we are taking a look at the AWS Pricing API. With AWS you can programmatically access pricing information to get the latest pricing offerings for services. This makes sense, because AWS can change prices at any time, and so you might want to know exactly what the current price is. There are two versions of this API: there's the Query API, known as the Price List Service API, which you access via JSON, and then there's the Bulk API, also known as the Price List API, via HTML. What's odd is that the Bulk API returns JSON, but you're accessing it via HTML, so you can literally paste those links in your browser. For the Query API you're actually sending an application/json request, so you'd have to use something like Postman. You can also subscribe to SNS notifications to get alerts when pricing for services changes; AWS prices change periodically, such as when AWS cuts prices, when new instance types are launched, or when new services are introduced. So there you go. [Music] Hey, this is Andrew Brown from ExamPro, and what I want to do here is show you Savings Plans. Savings Plans are found under AWS Cost Explorer, so just type in Cost Explorer at the top here, or if you want, you can type in Savings Plans as well. Once we're here, on the left-hand side we have a Savings Plans option, so we'll go to the Overview, and it just describes what Savings Plans are if you want to read through it. Down below, if you already have some spend happening, it's going to make some suggestions, and in this particular account it's saying I could save some money on compute. Before we take a look at that, I'm just going to go to the form here and see what we can see. Up here we can set the commitment term (one or three years); by the way, you have Compute Savings Plans, which apply to EC2, Fargate, or Lambda, then you have the EC2 Instance Savings Plans, where we can select a very particular instance family, and then there are the SageMaker Savings Plans. But if we go here and just enter in, like, two dollars all upfront, I don't really understand it from here, because it doesn't make clear what the savings are. What does make it very easy is if we go over here and click down on the compute recommendation, because it kind of auto-fills it in for you. And so here it's filled in for me, and it's saying that with a one-year plan, all upfront, based on the past 30 days, I'm going to see a monthly savings of $25.36, and then I can add it to the cart that way. I kind of feel like that is the easiest way to figure it out, whereas with that form I just couldn't figure out myself what the savings were. There are some utilization reports and coverage reports; honestly, I've never really looked at these before, but I'm just curious what we're looking at: monthly, daily, let's go back a few months here. I've been running stuff in this account for a while, so there should be something; nothing of interest, though, and I imagine you have to have a savings plan before you can see these reports, so that's probably the reason why. But yeah, hopefully that gives you a clear idea that you can just go down to those recommendations and see exactly what you can save; you add it to your cart, and once you want to pay for it, you just choose to submit that order, and you're all good to go. All right, so that's Savings Plans. [Music] Let's take a look here at defense in depth, to understand the layers of security AWS has to consider for their data centers and their virtual workloads, and that you also have to consider when you're thinking about security for your cloud resources. At the most interior we have data: access to business and customer data, and encryption to protect your data. Then we have applications: applications are secure and free of security vulnerabilities. Then you have
compute: access to virtual machines and ports, on-premises and in the cloud. You have the network layer, which limits communication between resources using segmentation and access controls. You have the perimeter itself: distributed denial of service protection to filter large-scale attacks before they can cause denial of service for users. You could say that's part of the network layer, and that's why I say there are variants on this model, but we're separating it out explicitly here. We have identity and access: controlling access to infrastructure and change control. And then there's the physical layer: limiting access to data centers to only authorized personnel. You'll notice I highlighted identity and access in yellow; that's because it's considered the new primary perimeter from the customer's perspective. Of course AWS is concerned about the physical perimeter and things like that, but as a customer that's what you're going to be thinking about, especially with the zero trust model. And when you see these depths, the idea is that in order to get to the middle, you have to pass through all the layers, so if the outermost layer is protected pretty well, then you generally don't have to worry as much about the interiors, but of course you still should. But yeah, there you go. [Music] Let's take a look here at confidentiality, integrity, and availability, also known as the CIA triad: a model describing foundational security principles and their trade-off relationships. So here is our triad. We have confidentiality: a component of privacy implemented to protect our data from unauthorized viewers; in practice this can mean using cryptographic keys to encrypt our data, and using keys to encrypt our keys, which is envelope encryption. Then we have integrity: maintaining and ensuring the accuracy and completeness of data over its entire lifecycle; in practice, utilizing ACID-compliant databases for valid transactions, and utilizing tamper-evident or tamper-proof hardware security modules (HSMs). And availability: information needs to be available when needed; in practice, that means high availability and mitigating DDoS to preserve access. The CIA triad was first mentioned in a publication in 1977, and there have been efforts to expand, modernize, or suggest alternatives to it: one was in 1998 with the six atomic elements of information, and in 2004 we have the engineering principles for information technology security, which has 33 security principles. But this is still a very popular model for security, and it's just to tell you that you don't always get all three; sometimes you have to trade off in your scenario. Hopefully some of the terminology here will resonate as we go through more security content. [Music] What I want to do here is just define the term vulnerability. A vulnerability is a hole or weakness in an application, which can be a design flaw or an implementation bug, that allows an attacker to cause harm to the stakeholders of an application. There are a lot of great definitions of vulnerabilities, and OWASP has a ton of them; we talk about OWASP when we talk about AWS WAF, but it's an organization that creates security projects that help you know what you should protect, or gives you working examples so you can understand how to get better at security. They have a lot of them here, and you might notice some, like using a broken or risky cryptographic algorithm, maybe a memory leak, a least privilege violation (least privilege is something you're always worried about in security), improper data validation, and buffer overflows. So that's just to set the tone of what a vulnerability is and the things you should be thinking about, okay. [Music] Let's understand what encryption is, but before we do, we need to understand what cryptography is: the practice and study of techniques for secure communication in the presence of third parties called adversaries. Encryption is the process of encoding or scrambling information, using a key and a cipher, to store sensitive data in an unintelligible format as a means of protection; encryption takes in plaintext and produces ciphertext. Here's an example of a very old encryption machine: this is the Enigma machine, used during World War II. It had a different key for each day, which was used to set the position of the rotors, and it relied on simple substitution ciphers. So you might be asking, what is a cipher? That's what we're going to look at next. [Music] So what is a cipher? It's an algorithm that performs encryption or decryption; cipher is synonymous with code, and the idea is that you use the code to either lock up or unlock the information that you have. So what is a ciphertext? A ciphertext is the result of encryption performed on plaintext via an algorithm: you lock it up, you scramble it so it doesn't make sense, and you need that code to unlock it and get the information back. A good practical example from back in the day is a codebook, a type of document used for gathering and storing cryptographic codes or ciphers. The idea is, if we zoomed in here, notice where we have "cannot": it maps to zero zero, and then there'd be another entry like "give them authority". So if you had the word cannot, it would translate to zero zero, and the receiver uses zero zero to match it back up and see what it actually means. So that's a very practical example of ciphers in action. We just took a look at encryption, but what are cryptographic keys? An easy way to think of a cryptographic key is as a variable used in conjunction with an encryption algorithm in order to encrypt or decrypt data, and there are different kinds. We have symmetric encryption, where the same key is used for encoding and decoding, and a very
popular one, and the one you'll see on AWS, is called the Advanced Encryption Standard, AES. Take a look at that graphic closely: we have one key, and it's used to encrypt, producing the ciphertext, and then the same key will decrypt and we get our plaintext back; one single key. Then we have asymmetric encryption, where two keys are used: one to encode and one to decode. A very popular one here is RSA; if you're wondering what those letters are, it's the names of the three people who helped invent this type of algorithm, put together. So here we have one key for encryption and one key for decryption, and they are two different keys, all right. [Music] All right, let's look at the concepts of hashing and salting. For hashing, we have a hashing function: this accepts arbitrarily sized values and maps them to a fixed-size data structure. Hashing can reduce the size of a stored value, and hashing is a one-way process and is deterministic; a deterministic function always returns the same output for the same input. So if we have something like "john smith" and we pass it to the hash function, it's going to create something that is not human readable, like "02fae..." and so on, and it will always produce the same thing if the same value is inputted. The reason we use hashing functions is to hash passwords: hash functions are used to store passwords in a database so that the password does not reside in a plaintext format. You've heard about all these data breaches where they've stored the passwords in plaintext; this is the thing that helps us avoid that issue, and again, because it's one-way, you can't take that hash and un-hash it (well, there are some conditions to that). To authenticate a user, when the user inputs their password, it is hashed at login time, and then that hash is compared to the stored hash in the database; if they match, the user is successfully logged in. In that case, we never had to know what the original password looked like. Popular hashing functions are MD5, SHA-256, and bcrypt. If an attacker knows the function you are using and stole your database, they could enumerate a dictionary of passwords to determine a password; they'll never see the original, but they can just keep guessing through that dictionary. That's why we salt our passwords: a salt is a random string, not known to the attacker, that the hash function accepts in order to mitigate the deterministic nature of a hashing function. So there you go. [Music] Let's take a look here at digital signatures and signing. What is a digital signature? It's a mathematical scheme for verifying the authenticity of digital messages or documents, and a digital signature gives us tamper evidence: did someone modify the data? Is this data from someone we did not expect it to be from? Is it from the actual sender? We have this diagram where a person is going to send a message, so they sign it, and then Bob verifies that it really came from the person it's supposed to be from. There are three algorithms to a digital signature: key generation, which generates a public and private key; signing, the process of generating a digital signature with the private key and the inputted value (which is what's happening up here); and verification, which verifies the authenticity of the message with the public key. So remember: the private key is used for signing, and the public key is used for verifying. SSH uses a public and private key to authorize remote access into a remote machine, such as a virtual machine; it's common to use RSA, and we saw earlier that RSA is a type of algorithm. ssh-keygen is a well-known command to generate a public and private key pair on Linux; I know this one off the top of my head, I always know to do this. And so, what is code signing? It's when you use a digital signature to
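The hash-compare-and-salt flow described above can be sketched with Python's standard library. This is just an illustration, not a production recipe: real systems should use a dedicated password-hashing function like bcrypt rather than a single SHA-256 pass, and the example password is made up.

```python
import hashlib
import hmac
import secrets

def hash_password(password, salt=None):
    """Hash a password with a random salt; store salt + digest, never the plaintext."""
    if salt is None:
        salt = secrets.token_hex(16)  # random salt, unknown to an attacker
    digest = hashlib.sha256((salt + password).encode()).hexdigest()
    return salt, digest

def verify_password(password, salt, stored_digest):
    """Re-hash the login attempt with the stored salt and compare the digests."""
    _, candidate = hash_password(password, salt)
    # constant-time comparison avoids leaking information through timing
    return hmac.compare_digest(candidate, stored_digest)

salt, digest = hash_password("hunter2")           # what ends up in the database
print(verify_password("hunter2", salt, digest))   # True: same input, same hash
print(verify_password("wrong", salt, digest))     # False: different input
```

Note how the stored digest is deterministic for a given salt, which is exactly why the salt has to be random per user: without it, two users with the same password would have identical hashes, and an attacker could precompute a dictionary.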
ensure computer code has not been tampered with; it's just a subset of digital signing. So you can use a key pair as a means to get into a virtual machine, or you can use signing as a means to make sure that the code being committed to your repository is from who you expect it to be from. So there you go. [Music] Let's talk about in-transit versus at-rest encryption. Encryption in transit: this is data that is secured when moving between locations, and the algorithms here are TLS and SSL. Then you have encryption at rest: this is data that is secured when residing on storage or within a database, and here we're looking at AES or RSA, which we both covered previously. The ones we did not cover were TLS and SSL, so we'll cover them now. TLS, Transport Layer Security, is an encryption protocol for data integrity between two or more communicating computer applications; TLS 1.0 and 1.1 are no longer used, while TLS 1.2 and 1.3 are the current best practice. Then we have SSL, Secure Sockets Layer, an encryption protocol for data integrity between two or more communicating computer applications; SSL 1.0, 2.0, and 3.0 are deprecated. Honestly, I always get these two mixed up and get confused about which is being used; they're always changing on us, but just understand generally what these concepts are, and be familiar with the terms, okay. [Music] Hey, this is Andrew Brown from ExamPro, and we are taking a look at common compliance programs. These are a set of internal policies and procedures for a company to comply with laws, rules, and regulations, or to uphold business reputation. Here we have a bunch of different compliance programs, and some popular ones are HIPAA and PCI DSS. The question is, should you know these? Yes, you should generally know the most popular ones, because you're going to see them throughout your cloud career, so getting familiar now is a good idea. Let's jump into it, okay. The first ones I want to introduce you to are from ISO and IEC, and they have a bunch of different standards. ISO is the International Organization for Standardization, and the other one, IEC, is the International Electrotechnical Commission; one deals with virtual things, the other deals with hardware things, but they have a lot of overlapping compliance programs, okay. The absolutely most popular one that I know of is ISO 27001; I know a lot of organizations that are going for their 27001, and this is for control implementation guidance. You have 27017, which is an enhanced focus on cloud security; 27018, which is protection of personal data in the cloud; then you have 27701, which is for a privacy information management system (PIMS) framework: it outlines controls and processes to manage data privacy and protect PII, that is, personally identifiable information. Then you have System and Organization Controls, SOC, and this is a very popular thing that organizations go for, especially SOC 2. SOC 1 (under the SSAE 18 standard) reports on the effectiveness of internal controls at a service organization relevant to the client's internal control over financial reporting; SOC 2 evaluates the internal controls, policies, and procedures that directly relate to the security of a system at a service organization; and SOC 3 is a report, based on the Trust Services Criteria, that can be freely distributed. Then we have PCI DSS, a set of security standards designed to ensure that all companies that accept, process, store, and transmit credit card information maintain a secure environment. Then we have the Federal Information Processing Standards, or FIPS, specifically FIPS 140-2: this is a U.S. and Canadian government standard that specifies the security requirements for cryptographic modules that protect sensitive information. Then we have PHIPA; this one is more relevant to me because I'm actually in Ontario, Canada, but it's also a very well-known one outside of HIPAA: it regulates patients' personal health information. Then you have HIPAA, the U.S. federal law that regulates patients' protected health information. Then we have the Cloud Security Alliance (CSA) STAR certification, an independent third-party assessment of a cloud provider's security posture; if you've never heard of CSA, they have a very well-known fundamental security certification called the CCSK. Then we have FedRAMP, which we covered earlier in this course (or will in the future, depending on where we put it); FedRAMP stands for Federal Risk and Authorization Management Program, and it's a U.S. government standardized approach to security authorizations for cloud service offerings. If you want to work with the U.S. government, or with places that sell to the U.S. government, you need FedRAMP. Similar to that is Criminal Justice Information Services: any U.S. state or local agency that wants to access the FBI's CJIS database is required to adhere to the CJIS security policy. Then we have GDPR, the General Data Protection Regulation; everyone knows what this is in Europe, maybe not so much in North America or other places. It's a European privacy law that imposes rules on companies, government agencies, nonprofits, and other organizations that offer goods and services to people in the European Union, or that collect and analyze data tied to EU residents. There are a lot of compliance programs out there; one that's also very popular is FIPS, but we'll get to that when we talk about KMS. But yeah, there you go. [Music] So I just wanted to quickly show you the AWS compliance programs page, where they list out all the types of compliance programs that AWS is working
with, and the different types of certifications and attestations AWS holds, which we can access using AWS Artifact (or Amazon Artifact, whichever prefix they decide to use for the name) to verify that AWS meets those regulatory compliance programs. So you can see them all there, and if you want to know a little more about any of these, you just go ahead and click them and read the additional information so you have a better idea, okay. [Music] Let's talk about pen testing. Pen testing is an authorized simulated cyberattack on a computer system, performed to evaluate the security of the system, and on AWS you are allowed to perform pen testing, but there are some restrictions. Permitted services are EC2 instances, NAT gateways, ELBs, RDS (that's the Relational Database Service), CloudFront, Aurora, API Gateway, Lambda and Lambda@Edge functions, Lightsail resources, and Elastic Beanstalk environments. Things you cannot do, or should not be doing: DNS zone walking via Route 53 hosted zones, and then there's denial of service, so you should not be doing DoS, DDoS, simulated DoS, or simulated DDoS. That doesn't mean you can't necessarily ever do them; there are a lot of exceptions to the pen testing rules, and they have a whole page on this, including a DDoS simulation testing policy. But generally you're not allowed to do DDoSing, port flooding, protocol flooding, or request flooding; you can't do any of those things. For other simulated events, you need to submit a request to AWS, and a reply could take up to seven days. Again, there are a lot of little intricacies here, so you'd have to really read up on it if you're interested in doing this, okay. [Music] Hey, this is Andrew Brown from ExamPro, and we are taking a look at pen testing on the AWS platform. They have this page here that tells you what you're allowed to do and what you're not allowed to do, and there are some additional things you can read into, like the stress test policy and the DDoS simulation testing policy, which I didn't cover in detail in the course content, but for whatever reason you're interested, I just want you to be aware of that kind of stuff. If you want to simulate events, there is a simulated events form that you have to fill out, so open it up and you can read about it; it gives AWS a heads-up about what you're going to be doing: stress test, phishing, malware analysis, other. That way, if you are doing it, you're not going to get in trouble, because they're aware of what you are doing, okay. So that's pretty much it. [Music] Hey, this is Andrew Brown from ExamPro, and we are taking a look at AWS Artifact, which is a self-serve portal for on-demand access to AWS compliance reports. Here's an example of a bunch of different compliance reports that AWS could be meeting, and the idea is that when you go to this portal within the AWS Management Console, you'll have a huge list of reports that you can go and access. Here I'm searching for Canada to get the Government of Canada partner package, and then I go ahead and download that report as a PDF, and within the PDF we can click a link to get the downloadable Excel file. That's pretty much what it is: it's for when you want to see that AWS is compliant with different programs. [Music] Hey, this is Andrew Brown from ExamPro, and we're going to take a look at AWS Artifact. At the top here we're going to type in "artifact", not to be confused with CodeArtifact, which I guess is a new service; they're just always releasing new services, eh? Here we have a video and some things, but it's not too hard: all we've got to do is go to View Reports, and from here we have all the types of regulatory compliance programs that AWS is meeting, and we can search for something. So we type in Canada, that's the Government of Canada partner package, and I can go ahead and download that report. When you download it, you're really going to want to open this up in Adobe Acrobat, because if you don't open it in Adobe Acrobat, you're not going to be able to access the downloadables within it. I know that's kind of odd to say, but that's just what it is: you do have to install Adobe Acrobat Reader. Once you have it open, and I'm just moving it over here, this is what you're going to see; it says (oops, no, I don't want to do that) "please scroll to the next page to view the artifact download". They say scroll to the next page, but I'm pretty sure we can just go here on the left-hand side, and this is what we're looking for: that Excel spreadsheet. So we're going to save that attachment, or actually we can just open this file, okay, and we'll give it a moment; I have Excel installed, and there we go, there it is, okay. So I know it's a little bit of an odd way to get to those certificates or reports, but that's just how it works. The idea is, if you need to prove that AWS is meeting whatever those standards are, maybe FedRAMP for example, you can just type them in, download those certificates or attestations, and double-check that AWS is meeting those standards, okay. [Music] Hey, this is Andrew Brown from ExamPro, and we are taking a look at AWS Inspector, but before we can answer what it does, let's talk about hardening. Hardening is the act of eliminating as many security risks as possible. Hardening is common for virtual machines, where you run a collection of certain security checks known as a security benchmark. AWS Inspector runs a security benchmark against specific EC2 instances; you can run a variety of security benchmarks, and you can perform network and host assessments. Here's an example of those two checkboxes, where you'd choose which assessments you want to do. The idea is that you install the AWS agent on your EC2 instance, you run an
assessment for your assessment target, you review your findings, and you remediate the security issues. One very popular benchmark you can run is CIS, which has 699 checks. If you don't know what CIS stands for, it's the Center for Internet Security, an organization that publishes a bunch of security controls and checks that they suggest you run against your machine. [Music] Hey, this is Andrew Brown from ExamPro, and we're looking at DDoS. A DDoS is a type of malicious attack that disrupts normal traffic by flooding a website with a large amount of fake traffic. The idea is we have an attacker and a victim; the victim is us, and it could be our virtual machines or our cloud services, some kind of resource that can take in incoming requests over the internet. The attacker is utilizing the internet, and they may control a bunch of virtual machines or servers loaded up with malicious software, and the idea is that the attacker tells them all to send a flood of traffic over the internet at your computing resource. This is where your website either starts to stall or becomes unavailable to your users. So the idea here is that if you want to protect against DDoS, you need some kind of DDoS protection. Traditionally those used to be third-party services that you'd have to pay for, and they would sit in front of your load balancer or your name server, but now the great thing with cloud service providers is that their networks generally have built-in DDoS protection. So just by having your compute or your resources on AWS, you're going to get built-in protection for free via AWS Shield, and we'll talk about that next. [Music] Hey, this is Andrew Brown from ExamPro, and we are taking a look at AWS Shield, which is a managed DDoS protection service that safeguards applications running on AWS. When you route your traffic through Route 53 or CloudFront, you are using AWS Shield Standard. Here's a diagram to show you that it's not just those services, but these are the most common ones where you'll have a point of entry into AWS; we could also include Elastic IPs and AWS Global Accelerator. The idea is that when you go through these services into the AWS network, it has Shield built in, so you get that protection before the traffic reaches your cloud services, and in this case we're showing EC2 instances. AWS Shield protects against layer 3, 4, and 7 attacks; layers 3, 4, and 7 come from the OSI model, which is a fundamental networking concept: 7 is the application layer, 4 is the transport layer, and 3 is the network layer. There are two different plans for AWS Shield: we have Shield Standard, which is free, and then Shield Advanced, which starts at 3,000 USD per year plus some additional costs based on usage, the size of the attack, what services you're using, and how much traffic is moving in and out. Protection against the most common DDoS attacks is what Shield Standard does; you get access to tools and best practices to build DDoS-resilient architecture, and it's automatically available on all AWS services. For additional protection against larger and more sophisticated attacks, that's where Shield Advanced comes into play. It's available for specific AWS services: Route 53, CloudFront, ELB, AWS Global Accelerator, and Elastic IP. Some notable features are: visibility and reporting on layers 3, 4, and 7 (you're only going to get layer 7 if you're using AWS WAF with it); access to a response team for support, so these are DDoS experts, but you only get that if you're paying for Business or Enterprise support in addition to this; DDoS cost protection, just to ensure that your bills don't go crazy; and it comes with an SLA, so you have a guarantee that it's going to work. Both plans integrate with the AWS Web Application Firewall (WAF) to give you that layer 7 application protection, so understand that if you're not using WAF, you're not going to have that layer 7 protection, okay. [Music] Hey, this is Andrew Brown from ExamPro, and we are looking at Amazon GuardDuty. Before we look at that, we need to understand what an IDS/IPS is: an intrusion detection system and intrusion protection system is a device or software application that monitors a network or systems for malicious activity or policy violations. GuardDuty is a threat detection service (an IDS/IPS) that continuously monitors for malicious and suspicious activity and unauthorized behavior. It uses machine learning to analyze the following AWS logs: your CloudTrail logs, your VPC Flow Logs, and your DNS logs, and it will report back to you and say, hey, there's this issue here. This finding is actually one that's very easy to replicate: it's saying somebody is using the root credentials, and it's suggesting that you should not be doing that, because you're never supposed to be invoking API calls with the root credentials, or at least you should be limiting that. You might also notice that if you want to investigate, you can follow that up with Amazon Detective (or AWS Detective, whichever prefix they decided to put on that service). It will alert you of findings, and you can automate an incident response via CloudWatch Events (which, as you know, has been renamed to EventBridge) or third-party services, so you can follow up with a remediation action. And here is a graphic of Amazon GuardDuty a bit closer up, so you can see all the findings, and you have a lot of detailed information there, okay. [Music] Hey, this is Andrew Brown from ExamPro, and we're going to take a look at GuardDuty. GuardDuty is an intrusion protection and detection service, and so what I've
done some bad practices purposely so that i can show you some information in there so i'm going to go over to guardduty okay and you do have to turn guardduty on and once guardduty is on you're going to start getting reports coming in so notice here that we have some anomalous behavior eight days ago and so that's bako he's my co-founder he's also named andrew as well and so we can see some details here about who's accessing what and what they were doing he's not doing anything malicious but we can have an idea where they're from it even shows generally where he is which is near thunder bay and his provider would be tbaytel and you can see that he is making api calls like describeaccountattributes and things like that then the other issue is the root account i actually do have mfa turned on i suppose but here we see root credential usage and so it's saying hey you used it 77 times because sometimes i go in and use the root account for tutorials but it's saying you're using this way too much you've got to stop doing that okay so that's something that is pretty interesting with guardduty and it's really cost effective and easy to turn on so you can turn it on it looks like they have a new thing for s3 i have not looked at that as of yet but that's kind of cool it kind of feels like that would overlap with amazon macie but whatever and here we get a breakdown of cost so we see cloudtrail vpc flow logs dns logs and this is where it would be ingesting data if you want to use that s3 protection you'd probably have to create a custom cloudtrail trail that has data events to consume that information so hopefully that gives you an idea of things you can do and you can also centralize guardduty into one account so you can have one thing that takes care of everything and move all the data across all your accounts into a single place so
that's kind of interesting and you can set up follow-ups it's possible that i don't see it here but generally it would show you a way of triggering into cloudwatch and you probably could do it programmatically this is something interesting like the list management you can add trusted ips or threat lists so if there's people that you know are fine you can just whitelist them or if there's people that you know are bad make sure that they are never allowed to get through so that's pretty much it with guardduty okay [Music] let's take a look here at amazon macie so macie is a fully managed service that continuously monitors s3 data access activity for anomalies and generates detailed alerts when it detects risks of unauthorized access or inadvertent data leaks so macie works by using machine learning to analyze your cloudtrail logs and macie has a variety of alerts so we have anomalous access config compliance credential loss data compliance file hosting identity enumeration information loss location anomaly open permissions privilege escalation ransomware service disruption and suspicious access and macie will identify your most at-risk users which could lead to a compromise so here's just one little tidbit from the app itself where you have the total users and they categorize them into different risks i can't remember which flag means what in here amazon macie is an okay service it's very important if you're storing things in s3 but i don't use it very often to be honest [Music] hey this is andrew brown from exam pro and we are taking a look at aws virtual private network also known as vpn so aws vpn lets you establish a secure and private tunnel from your network or device to the aws global network it's very important to emphasize the word secure here because when you're using direct connect that will establish a private connection but it's not using any kind of protocol to secure that data in transit whereas aws
vpn will be using a secure protocol there are two options here we have aws site-to-site vpn which securely connects your on-premises network or branch office site to a vpc and aws client vpn which securely connects users to aws or on-premises networks one thing that we need to understand alongside vpns is ipsec this stands for internet protocol security and is a secure network protocol suite that authenticates and encrypts the packets of data to provide secure encrypted communication between two computers over an internet protocol network it is used in vpns and aws vpn definitely uses it okay [Music] hey this is andrew brown from exam pro and we are taking a look at aws web application firewall also known as waf which protects your web applications from common web exploits so the idea here is you write your own rules to allow or deny traffic based on the contents of http requests or you use a rule set from a trusted aws security partner in the aws waf rules marketplace waf can be attached to either cloudfront or an application load balancer so here is that diagram the idea is you see cloudfront with the waf or an alb with the waf and what it does is it can protect web applications from attacks covered in the owasp top 10 most dangerous attacks if you don't know owasp they're the open web application security project and they basically have all these security projects which say hey these are things that you should commonly protect against or they might have example applications that serve as a means to learn security so if we look at the top 10 it's injection broken authentication sensitive data exposure xml external entities so xxe broken access control security misconfigurations cross-site scripting so xss insecure deserialization using components with known vulnerabilities and insufficient logging and monitoring so there you go hey this is andrew brown from exam pro and we're going to take a quick look at aws web application
firewall also known as waf and so in this account i happen to have a waf running so we don't have to create one we already have something we can take a look at here so i'm going to go to waf and shield and you'll notice it's a global service but on the left hand side we're going to be looking for our web acls and so the idea is that when you want to use waf you create a web acl and then within that web acl you have the overview which can show you the traffic that's going on then we have our rules and there's a lot of different managed rule groups that you can use so these are ones that are provided by aws and some of these can be paid some of these are free so you see there's these free rule groups where you're like hey i don't want any anonymous ips you checkbox that on or i want to protect against sql injection now the interesting thing is that aws has this concept of capacity units so you can't add all of these you can add a certain amount of capacity before you have to pay for more or something like that it's just a way to cap the amount of stuff that you can put in in terms of rules but there's a lot of other rule groups from third party services like security companies that know what they're doing so if you like fortinet's owasp top 10 you can subscribe to that in the marketplace and be able to use it but yeah so that's how you apply rules there's something called bot control i've never used this before get real-time visibility into bot activity on your resources and control what bots are allowed and blocked from your resources that sounds really cool i cannot stand bots so i might turn that on myself or take a look at the cost there and see what we can find out but that's pretty much it with waf one thing i would say is that you can block specific ip addresses or whitelist specific ip addresses and you might do that through
rules i'm just going to see yeah like maybe the bypass here and so these ip addresses are some of our cloud support engineers where they're using our admin panel and waf is being too aggressive in terms of protection and so sometimes you have to say hey allow this ip address and let my cloud support engineer be able to use the admin panel because they're not malicious okay so that's one little exception there but that's pretty much it okay [Music] hey this is andrew brown from exam pro and we are taking a look at hardware security modules also known as hsms an hsm is a piece of hardware designed to store encryption keys it holds keys in memory and never writes them to disk so the idea is that if the hsm was shut down the key would be gone and that is a guarantee of protection because nobody could take the drive and steal it so here is an example of an hsm these are extremely expensive so you definitely don't want to have to buy them yourself they generally follow fips so fips is the federal information processing standard it's a u.s and canadian government standard that specifies the security requirements for cryptographic modules that protect sensitive information fips is something you definitely want to remember and there are two different levels here there are actually a bunch of different fips versions but we have fips 140-2 level 2 and fips 140-2 level 3 so let's talk about the difference hsms that are multi-tenant are going to be fips 140-2 level 2 compliant where you have multiple customers virtually isolated on the hsm and then there are hsms that are single tenant and they're going to be fips 140-2 level 3 compliant so a single customer on a dedicated hsm and the reason why we have these two levels is that when you have multiple tenants you can say all right this thing has tamper evidence so we can see that somebody was trying to break into it but there's no
guarantee of it being tamper proof whereas level 3 is tamper proof there's also fips 140-3 which is the newer standard but not all cloud services can meet that standard just because of how they offer the service so again fips 140-2 is really good but just understand that there are other ones out there and it's very easy to get fips 140-2 level 3 mixed up with fips 140-3 something that i always had a hard time distinguishing between so for multi-tenant this is where we're using aws key management service and for single tenant we're using aws cloudhsm and the only time you're really using cloudhsm is if you're a large enterprise and you need that regulatory compliance of getting fips 140-2 level 3 okay [Music] hey this is andrew brown from exam pro and we are taking a look at key management service also known as kms it is a managed service that makes it easy for you to create and control the encryption keys you use to encrypt your data so kms is a multi-tenant hsm a hardware security module and many aws services are integrated to use kms to encrypt your data with a simple checkbox and kms uses envelope encryption so here's that example of a simple checkbox in this case it's for rds and what you'll do is choose a master key a lot of times aws will have a default key for you that's managed by them and free to use which is really great so kms uses envelope encryption when you encrypt your data your data is protected but you still have to protect your encryption key so you encrypt your data key with a master key as an additional layer of security so just to make this really clear i have my data i use a key to encrypt the data and now i need to protect that key so i use another key to encrypt it which forms an envelope i store the master key in kms and the key that encrypted the data is considered the data key all right [Music] hey this is andrew brown from
exam pro and we're going to take a look at key management service also known as kms so type in kms at the top here and we'll pop over and kms is a way for you to create your own keys or you can use aws managed keys so up here and not all of these appear right away but as you use services aws will generate managed keys for you and these are free you can also create your own keys and these cost a dollar each so if i go ahead here and create a key i can choose whether it's symmetric or asymmetric which we definitely learned about in the course which is nice for asymmetric you can make it encrypt and decrypt or sign and verify and they're just narrowing down the type of key you would use if i went to symmetric i could see whether i can enter the actual key material here so i'm just going to keep clicking through here my custom key generally you don't really need to do this but if it's interesting you can set up administrators to say who's allowed to administer the key and then you have someone that is allowed to use the key and you usually want to keep those two accounts separate you don't want the same person administering and using the key okay keep those two separate and so we would have a key policy which you can change to say the roles that are allowed to use it and then we can go here and hit finish and so there we now have our own custom key and one thing we can do is rotate these keys when we need to but anyway when we use kms it's built into basically everything and we've seen it multiple times throughout this course when we've gone over to ec2 we'll just take a peek at a few different places here so when we've gone to launch an ec2 instance and we go over to storage so we say select and review or next and we go over to storage notice that this is using encryption right so i can choose the default key or even my custom key if you're in dynamodb or anywhere else
it's always something like a checkbox and you choose your key so that's pretty much all there really is to kms it's very easy to use and there you go [Music] hey this is andrew brown from exam pro and we are going to take a look here at cloudhsm it is a single tenant hsm as a service that automates hardware provisioning software patching high availability and backups so here's the idea you have your aws cloudhsm you have your developers interacting with it your application interacting with it and you have an hsm client installed on your ec2 instance so that it can access the cloudhsm keys so aws cloudhsm enables you to generate and use your encryption keys on fips 140-2 level 3 validated hardware it's built on open hsm industry standards to integrate with things like pkcs#11 java cryptography extensions so jce and microsoft cryptong libraries you can transfer your keys to other commercial hsm solutions to make it easy for you to migrate keys on or off aws and you can configure aws kms to use your aws cloudhsm cluster as a custom key store rather than the default kms key store so cloudhsm is way more expensive than kms kms is like free or a dollar per key whereas cloudhsm is a fixed cost per month because you are getting a dedicated piece of hardware and there's not a lot of stuff around it so other than the aws kms integration a lot of times it can be really hard to use as well so the only time you're really going to be using cloudhsm is if you're an enterprise and you need to meet fips 140-2 level 3 compliance okay [Music] hey this is andrew brown from exam pro and we are taking a look at know your initialisms so a lot of aws services concepts and cloud technologies use initialisms to shorten common things that we need to use on a frequent basis and it's going to really help if you learn these because then what you can do is substitute them when you are seeing a service name or something in particular and that's
going to get you through content a lot faster and in the wild you're going to see these all over the place because people aren't going to say the full name they're going to say the initialism so let's go through them iam is identity and access management s3 is simple storage service swf is simple workflow service sns is simple notification service sqs is simple queue service ses is simple email service ssm is simple systems manager though when we see the name it's usually just systems manager but we still use the initialism ssm then there's rds relational database service vpc virtual private cloud vpn virtual private network cfn cloudformation waf web application firewall and that is a very common initialism not just in aws but outside of it as well mq for amazon mq asg for auto scaling groups tam for technical account manager elb for elastic load balancer alb for the application load balancer nlb for the network load balancer gwlb for the gateway load balancer clb for the classic load balancer ec2 for elastic compute cloud ecs for elastic container service ecr for elastic container registry ebs for elastic block store emr for elastic mapreduce efs for elastic file system eb for elastic beanstalk es for elasticsearch eks for elastic kubernetes service msk for managed streaming for apache kafka then there's resource access manager which is known as ram acm for aws certificate manager polp for principle of least privilege which is a concept not a service iot for internet of things which is not a service but a tech or cloud concept and ri for reserved instances and i'm sure there are more but these are the ones that i know off the top of my head and that i use day to day but a lot of times you'll probably just end up needing to remember asg elb ec2 s3 things like that okay [Music]
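if you want to drill these initialisms the substitution idea above can be sketched as a tiny lookup table — this is purely a study aid i'm sketching here, not anything aws provides, and the handful of entries below is just a sample of the list:

```python
# study-aid sketch: expand aws initialisms found in a sentence
# (small sample of the full list covered in the lecture)
INITIALISMS = {
    "iam": "identity and access management",
    "s3": "simple storage service",
    "sns": "simple notification service",
    "sqs": "simple queue service",
    "ec2": "elastic compute cloud",
    "waf": "web application firewall",
}

def expand(sentence: str) -> str:
    # replace each known initialism word-by-word with its full name
    return " ".join(INITIALISMS.get(word, word) for word in sentence.split())

print(expand("store files in s3 and queue work with sqs"))
# -> store files in simple storage service and queue work with simple queue service
```

you could flip the dictionary around to quiz yourself in the other direction too, going from full names back to initialisms.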
all right let's compare aws config and appconfig which both have config in the name but they are two completely different services so aws config and appconfig aws config is a governance tool for compliance as code you can create rules that will check to see if resources are configured the way you expect them to be if a resource drifts from the expected configuration you are notified or aws config can auto remediate and correct the configuration back to the expected state appconfig is used to automate the process of deploying application configuration variable changes to your web application you can write a validator to ensure the changed variable will not break your web app and you can monitor deployments and automate rollbacks to catch errors so config is for compliance and governance appconfig is for application configuration variables so there you go [Music] well let us take a look at sns versus sqs and these things have something in common they both connect apps via messages so they're for application integration so let's take a look simple notification service and simple queue service sns is intended to pass along messages via a pub sub model whereas sqs queues up messages and has guaranteed delivery the idea with sns is you send notifications to subscribers of topics via multiple protocols so it could be http email sqs or sms and sns is generally used for sending plain text emails which are triggered via other services the best example here is billing alarms i know we mentioned this but i like to repeat it so that you absolutely know it it can retry sending in the case of failure over https so it does have retry attempts but that doesn't mean there's a guarantee of delivery it's really good for webhooks simple internal emails and triggering lambda functions if you had to compare it to third-party services it's similar to pusher or pubnub so for sqs the idea here is that messages are
placed into a queue and applications poll the queue using the aws sdk you can retain a message for up to 14 days you can send them in sequential order or in parallel you can ensure only one message is sent and you can ensure messages are delivered at least once it's really good for delayed tasks and queuing up emails comparable stuff would be something like rabbitmq or ruby on rails sidekiq okay [Music] hey this is andrew brown from exam pro and we're doing a variation study with sns versus ses versus pinpoint versus workmail sns and ses get confused quite often but all of these services have something in common they all send emails though the utility of email is completely different for each one the first one simple notification service is for practical and internal emails so you send notifications to subscribers of topics via multiple protocols so it's not just for email it can handle http it can send to sqs and it can send sms messages so messages to your phone but it does send emails and sns is generally used for sending plain text emails which are triggered via other aws services the best example of this is a billing alarm most exam questions are going to be talking about sns because lots of services can trigger sns for notifications and so the idea is like oh did somebody spin up a server send off an email via sns did we spend too much money here all sorts of things can go through sns to send out emails and you need to know what topics and subscriptions are regarding sns then you have ses simple email service and this is for transactional emails and when i say transactional emails i'm talking about emails that should be triggered based on in-app actions so sign up reset password invoices a cloud-based email service that is similar to this would be like sendgrid ses sends html emails sns cannot so that is the distinction ses can do html and plain text but sns just does plain
text and you would not use sns for transactional emails ses can receive inbound emails ses can create email templates and custom domain name emails so when you use sns it's whatever address amazon gives you it's going to be some weird address but ses is whatever custom domain you want you can also monitor email reputation with ses then you have amazon pinpoint and this is for promotional emails so when we say promotional we're talking about emails for marketing you can create email campaigns you can segment your contacts you can create customer journeys via emails and it can do a/b email testing and so ses and pinpoint get mixed up because a lot of people think well can i just use my transactional emails for promotional emails absolutely you can but it's not recommended because pinpoint has a lot more functionality around promotional emails they're built differently and so just understand that those two have overlapping responsibilities but generally you should use them for what they're for then you have amazon workmail and this is just an email web client so it's similar to gmail or outlook you can create company emails and read write and send emails from a web client within the aws management console so there you go [Music] let us compare amazon inspector versus aws trusted advisor both of these are security tools and they both perform audits but what they do is slightly different amazon inspector audits a single ec2 instance that you've selected or i suppose you could select multiple ec2s and it generates a report from a long list of security checks trusted advisor has checks too but the key difference here is that it doesn't generate a pdf report though i'm sure you could export csv data if you wanted to and then turn that into a report it gives you a holistic view of recommendations across multiple services and best practices so for example if you have an open port on your security groups it can tell you about that or tell you
that you should enable mfa on your root account when using trusted advisor things like that one thing though is that trusted advisor isn't just for security it does checks across five different categories but they both cover security and they both technically do checks okay [Music] so there are a few similarly named services here and you'd think they'd be related in some way but they absolutely are not and they don't even have similar functionality so let's take a look so we know the difference the first is direct connect it is a dedicated fiber optics connection from your data center to aws it's intended for large enterprises with their own data center that need an insanely fast and private connection directly to aws and you'll notice they give private emphasis because if you need a secure connection you need to apply an aws virtual private network connection on top of direct connect then you have amazon connect this is a call center as a service get a toll-free number accept inbound and outbound calls and set up automated phone systems so if you've ever heard of an interactive voice response system an ivr this is basically what amazon connect is then you have elemental mediaconvert which is the new version of elastic transcoder it converts videos to different video formats so if you have let's say a thousand videos you need to transcode into different video formats maybe you need to apply watermarks or insert introduction videos in front of each one this is what you use mediaconvert for okay [Music] just in case you see elastic transcoder as an option i just want you to know what it is compared to mediaconvert both these services are used for transcoding and technically elastic transcoder is the old way and aws elemental mediaconvert or just mediaconvert is the new way so elastic transcoder was the original transcoding service it may still have certain apis or workflows not available in mediaconvert so this could be a reason why we see legacy customers still using
it or you know it's just too much effort for them to upgrade to the new one it transcodes videos to streaming formats mediaconvert is a more robust transcoding service that can perform various operations during transcoding it also transcodes videos to different streaming formats but it can overlay images insert video clips extract captions data and it has a robust ui so generally it's recommended to use mediaconvert in terms of cost they're basically the same so there's no reason not to use mediaconvert okay [Music] so aws artifact versus amazon inspector get commonly mixed up all the time both artifact and inspector compile pdf reports so that's where the confusion comes from but let's talk about what is different about the reports with aws artifact you're answering why should an enterprise trust aws it generates a security report that's based on global compliance frameworks such as soc or pci or a variety of others whereas amazon inspector is all about how do we know this ec2 instance is secure can you prove it so it runs a script that analyzes your ec2 instance then generates a pdf report telling you which security checks passed so the idea here is it's an audit tool for the security of ec2 instances so there you go [Music] so let's compare elb versus alb versus nlb versus gwlb versus clb because you know when i was first learning aws i was getting confused because there was elastic load balancer but there were these other ones so what gives right what's happening here is that there is a main service called elastic load balancer elb and it has four different types of possible load balancers so we'll go through all the types the first is the application load balancer commonly initialized to alb and this operates on layer 7 for http and https this makes sense because that is the application layer and it has some special powers in terms of routing rules so the idea here is you can create rules to change
routing based on information found within the http request so let's say you wanted routes that have a particular subdomain to go to this server and a different subdomain to another one you could do that and because it is an application load balancer you can attach a web application firewall for protection you can't attach one to the nlb or the other ones because they're not application based so that is just a little caveat there then you have the network load balancer commonly abbreviated to nlb this operates on layer 4 so we're talking tcp and udp this is great for when you need extreme performance for tcp and tls traffic it's capable of handling millions of requests per second while maintaining ultra low latency and it's optimized for sudden and volatile traffic patterns while using a single static ip address per availability zone if you're making video games this is what they like to use the network load balancer but it has other utilities outside of that then you have the gateway load balancer gwlb this is for when you need to deploy a fleet of third-party virtual appliances that support the geneve protocol and there's not much we need to know outside of that okay then there is the classic load balancer commonly initialized to clb this operates on layer 4 and layer 7 it's intended for applications that were built within the ec2 classic network it doesn't support target groups albs and nlbs use target groups which are just an easier way of grouping together a bunch of target resources like compute that we're going to load balance to whereas with the classic load balancer you just directly assign ec2 instances and it's going to be retired on august 15th of 2022 so yeah it looks like it can do a lot of stuff but it doesn't have any of the superpowers of the specialized ones and so there's no reason to keep it around and generally you should not be using it and so yeah that's about
it
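to make the alb routing rules idea from the load balancer section concrete, here's a minimal sketch of host- and path-based routing where the first matching rule wins — the rule shapes and target group names are made up for illustration, real alb listener rules are configured through the console or apis, not code like this:

```python
# toy sketch of alb-style layer 7 routing: match on the host header and the
# request path, first matching rule wins, otherwise fall back to a default
# target group (hypothetical rule and target names, for illustration only)
RULES = [
    {"host": "api.example.com", "path_prefix": "/",       "target": "api-servers"},
    {"host": "www.example.com", "path_prefix": "/images", "target": "image-servers"},
    {"host": "www.example.com", "path_prefix": "/",       "target": "web-servers"},
]

def route(host: str, path: str, default: str = "default-group") -> str:
    # walk the rules in priority order and return the first match
    for rule in RULES:
        if host == rule["host"] and path.startswith(rule["path_prefix"]):
            return rule["target"]
    return default

print(route("api.example.com", "/v1/users"))      # -> api-servers
print(route("www.example.com", "/images/a.png"))  # -> image-servers
```

the point of the sketch is just that the alb can look inside the http request (host header, path) to pick a target, which is exactly what the lower-layer load balancers like the nlb cannot do since they never parse http.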