Transcript for:
AWS Solutions Architect Exam Preparation Tips

Hi guys, welcome to Piece of Code and welcome to this video. This is going to be a special one, because I'm going to discuss a lot of previous exam questions for AWS Solutions Architect Associate (SAA-C03). I decided to make this video because a lot of people find it difficult to attempt the questions on the exam: even when they know the concepts, it becomes hard when they are faced with scenario-based questions. So I thought, why not do a little bit of practice together? I hope this video helps you a lot. We are going to discuss scenarios from concepts across AWS and work through some previous Solutions Architect questions. Let us dive in and go to the first question.

First of all, while you are reading a question, mark down all the important or key points in it. That way you will be able to pinpoint the exact service that matches those key points. Let me show you what I mean by pinpointing the key points. Let us read the question: a company collects data for temperature, humidity and atmospheric pressure. What do you understand from this line? It means it is real-time data, collected in cities across multiple continents. The average volume of data that the company collects from each site is 500 GB, so the volume of data is large. Each site has a high-speed internet connection, so there is no problem with connectivity. The company wants to aggregate the data from all these global sites as quickly as possible into a single Amazon S3 bucket, and the solution must minimize operational complexity. So minimizing operational complexity is one key point, and gathering data from different locations into a single place is another. If you have enough experience you can often figure out the answer from the key points alone, but if you still have doubts, the second technique to use is elimination, also known as the option elimination technique. Let us read through the options and see which one best fits all of our key points.

Let us start with the last option: upload the data from each site to an Amazon EC2 instance in the closest Region. Just from reading the first few words I do not consider this an option: why would you use Amazon EC2 here when we do not need any compute capacity? We just need to store the data, so EC2 instances are out. Option C seems more viable: schedule AWS Snowball Edge Storage Optimized device jobs daily to transfer data from each site to the closest Region, then use S3 Cross-Region Replication to copy the objects to a destination S3 bucket. It looks reasonable because the data volume is large, around 500 GB generated every day, so you might think Snowball is the answer. But there is no problem with the internet connection, so why would you keep shipping Snowball devices back and forth? A device takes about seven days to reach its destination, which
takes up a lot of time, and your data is generated every day, so it makes no sense to go with option C. Talking about option B, this is the closest wrong answer I can think of: upload the data from each site to an S3 bucket in the closest Region, use S3 Cross-Region Replication to copy the objects to the destination S3 bucket, then remove the data from the origin S3 bucket. This could look like a right answer, and I am not saying it is completely wrong, but remember the key point that says minimize operational complexity. Doing this is possible, but the operational complexity is high: you have to move data from one bucket to another and then delete it from the source bucket. And if you remember my S3 lecture, even with versioning enabled, Cross-Region Replication copies the delete marker but does not replicate the actual deletion operation from one bucket to the other.

So the deletes have to be handled separately, and the operational complexity is higher. In this sense I have eliminated three options, so I can directly go with option A, which is the most viable one. Let us read it: turn on S3 Transfer Acceleration on the destination S3 bucket and use multipart uploads to directly upload site data to the destination S3 bucket. This is the best answer, and it has the least operational complexity. Why? S3 Transfer Acceleration works roughly like CloudFront in reverse: it uses the edge locations to move your data over the AWS private network rather than the public internet. And multipart upload divides large data, like a 500 GB object, into small parts and uploads them in parallel, which increases efficiency. That is why A is the most viable answer in this case. I took a lot of time on this question because I wanted to explain how to approach a question in the AWS SAA exam; for the remaining questions we will move a bit quicker.
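Just to make option A concrete, here is a minimal boto3 sketch of an accelerated multipart upload. This is only an illustration: the bucket and file names are placeholders, and it assumes Transfer Acceleration has already been enabled on the destination bucket.

```python
# Hedged sketch: multipart upload through the S3 accelerate endpoint.
import boto3
from botocore.config import Config
from boto3.s3.transfer import TransferConfig

# Point the client at the s3-accelerate endpoint instead of the regional one.
s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))

# Split anything above 100 MB into 100 MB parts and upload 10 parts in parallel.
transfer_cfg = TransferConfig(
    multipart_threshold=100 * 1024 * 1024,
    multipart_chunksize=100 * 1024 * 1024,
    max_concurrency=10,
)

# upload_file switches to multipart automatically once the threshold is crossed.
s3.upload_file(
    "site-data-2024-01-01.tar",          # placeholder local file
    "central-aggregation-bucket",         # placeholder destination bucket
    "site-eu-1/data.tar",                 # placeholder object key
    Config=transfer_cfg,
)
```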

Now let us move on to question number two. You can pause the video and try to answer it yourself first. A company needs the ability to analyze the log files of its proprietary application, so analyzing log files is one key point. The logs are stored in JSON format in an S3 bucket. Queries will be simple and run on demand. A solutions architect needs to perform the analysis with minimal changes to the existing architecture and the least amount of operational overhead. Let us read through the options one by one. Option A: use Amazon Redshift to load all the content into one place and then run SQL queries as needed. I will not consider this option, because even though you can load the data into Redshift, the operational overhead is much higher, and why would you use a data warehouse when you just need to run simple queries? So I am going to ignore it. Option B: use Amazon CloudWatch Logs to store the logs and run SQL queries as needed from the CloudWatch console. It is kind of possible, but you cannot directly perform that kind of on-demand querying from the CloudWatch console, and the logs are already being stored in an Amazon S3 bucket. So CloudWatch Logs is not an option at all, and these are the log files of a proprietary application. Then C seems like a very good option.

Because, if you know it, there is a service called Athena which can run queries directly against an S3 bucket, and it does not even need to load that data anywhere else.

And it has the least operational overhead. So in this case I am going to consider C as the answer: using Amazon Athena we can quickly query the data in the Amazon S3 bucket without much operational overhead. I think that makes a lot of sense here.
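To get a feel for what that looks like in practice, here is a hedged boto3 sketch of kicking off an Athena query; the database, table and result-bucket names are made up, and it assumes a table has already been defined over the JSON logs (for example with a CREATE EXTERNAL TABLE statement or a Glue crawler).

```python
# Hypothetical sketch: ad hoc SQL over JSON logs sitting in S3, via Athena.
import boto3

athena = boto3.client("athena")

response = athena.start_query_execution(
    QueryString="SELECT status, count(*) FROM app_logs GROUP BY status",
    QueryExecutionContext={"Database": "logs_db"},              # assumed database
    ResultConfiguration={"OutputLocation": "s3://query-results-bucket/athena/"},
)

# Poll get_query_execution / get_query_results with this ID to fetch the output.
print(response["QueryExecutionId"])
```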

Let us move on to the next question, question number three. A company uses AWS Organizations (remember, Organizations can be a tricky topic) to manage multiple accounts for different departments. The management account has an S3 bucket that contains project reports. The company wants to limit access to this S3 bucket to only users of accounts within the organization in AWS Organizations. When you read that fast it can be a little confusing, so once more: access should be limited to users of accounts within the organization. It means you have a root account, you have created an AWS Organization, and under that organization there can be multiple departments, multiple accounts and multiple users; only those entities should be able to use that S3 bucket. Which solution meets these requirements with the least amount of operational overhead? Let us read through the options. Option B: create an organizational unit (OU) for each department and add the aws:PrincipalOrgPaths global condition key to the S3 bucket policy. This is a viable option, but it requires more operational overhead, and the question asks for the least amount of operational overhead, so I am not going to consider it. Talking about option C: use AWS CloudTrail to monitor the CreateAccount, InviteAccountToOrganization and similar events. Let me mark it as wrong; there is no need to even read the rest, because CloudTrail only generates logs of the API calls made in your account. It cannot control who accesses what and it cannot block access, so it is out of the question entirely. Option D is also incorrect. We could limit access this way, but it would take the most operational overhead, because it says to tag each user that needs access to the S3 bucket and then add the aws:PrincipalTag global condition key to the S3 bucket policy. Modifying the bucket policy and tagging every user in an organization with a lot of users becomes a cumbersome task, so you would not consider that option either. So B and D are out because of the operational overhead, and C is out because CloudTrail simply cannot do it. The remaining option is A, and even if you did not know it, A would be the correct answer once you have eliminated the other three. But A is also genuinely the easiest solution: we just add the aws:PrincipalOrgID global condition key, with a reference to the organization ID, to the S3 bucket policy. When we add it, users that belong to that organization are allowed automatically and everyone else is not. Simple as that; that is why A is the correct answer.
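Here is a rough sketch of what that bucket policy could look like when applied with boto3; the bucket name and organization ID are placeholders, and the single Allow statement is just one way of expressing the idea.

```python
# Sketch of option A: allow access only to principals from one AWS Organization.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowOrgMembersOnly",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::project-reports-bucket/*",   # placeholder bucket
        # The request is allowed only when the caller belongs to this organization.
        "Condition": {"StringEquals": {"aws:PrincipalOrgID": "o-a1b2c3d4e5"}},
    }],
}

boto3.client("s3").put_bucket_policy(
    Bucket="project-reports-bucket",
    Policy=json.dumps(policy),
)
```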
All right, let us move on to question number four, which is actually very simple if you look at it. An application runs on an Amazon EC2 instance in a VPC. The application processes logs that are stored in an Amazon S3 bucket. The EC2 instance needs to access the S3 bucket without connectivity to the internet. Which solution will provide private connectivity to Amazon S3? This one is very easy: whenever you see this kind of question, a gateway endpoint is always the correct answer. You do not even have to look at the options.

Whenever it says S3 or DynamoDB, and you want to access the service from your private subnets without going over the internet, a gateway VPC endpoint is the answer, because it does not even charge anything to connect to those services. A gateway endpoint is always the answer to these questions.
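As a quick illustration, this is roughly what creating such an endpoint looks like with boto3; the VPC ID, route table ID and Region are placeholders.

```python
# Sketch: gateway VPC endpoint so private subnets can reach S3 without internet access.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

ec2.create_vpc_endpoint(
    VpcId="vpc-0123456789abcdef0",
    VpcEndpointType="Gateway",                    # gateway endpoint, not interface
    ServiceName="com.amazonaws.us-east-1.s3",     # use ...dynamodb for DynamoDB
    RouteTableIds=["rtb-0123456789abcdef0"],      # S3 routes get added to these tables
)
```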

Now, talking about question number five, which is also a very simple question. A company is hosting a web application on AWS using a single Amazon EC2 instance that stores user-uploaded documents in an Amazon EBS volume; that is where the data is being stored. For better scalability and availability, the company duplicated the architecture and created a second EC2 instance and EBS volume in another Availability Zone, placing both behind an Application Load Balancer. After completing this change, users reported that each time they refreshed the website they could see one subset of the documents or the other, but never all of the documents at the same time. What should a solutions architect propose so that users can see all of their documents at once? Now this is a very interesting question.

Why? Because you have to think about it as an architect. The thing is, you have your users.

You have your users and you have a couple of EC2 instances. A user can hit either of these EC2 instances, but the data is being stored in EBS volumes: one EBS volume here and another one there. Some documents are stored on one volume and some on the other, so whenever a user tries to access a document the request can land on either instance, each of which only has part of the data. At no moment can a user see all of the data. We need some kind of shared storage between these EC2 instances, so that whichever instance the user lands on can read everything, because the storage is shared across the instances. In other words we need a network file system, NFS. And which service provides that kind of shared storage, like a network drive? We already know that EFS, Elastic File System, gives us exactly that. So you can go directly for the option that mentions EFS, which is C: copy the data from both EBS volumes to Amazon EFS and modify the application to save new documents to Amazon EFS. You can still weigh the other options, but if you already recognize the pattern you can go straight to C without wasting any time.

Right, now let us move on to question number six. A company uses NFS to store large video files on on-premises network attached storage. Each video file ranges in size from 1 MB to 500 GB. The total storage is 70 TB and is no longer growing. This is a very important point, and I will tell you why.

The company decides to migrate the video files to Amazon S3. The company must migrate the video files as soon as possible while using the least possible network bandwidth. Which solution will meet these requirements? "As soon as possible" plus "least possible network bandwidth" is an interesting combination. Let us read the options one by one. When you already know the answer you can go straight for it, but if you have any doubt, always consider all the options. Option A: create an S3 bucket, create an IAM role that has permissions to write to the S3 bucket, and use the AWS CLI to copy all the files locally to the S3 bucket. I will discard this immediately: there are 70 terabytes of data and we need to use the least possible network bandwidth, and this option pushes everything over the network. Option B: create an AWS Snowball Edge job, receive a Snowball Edge device on premises, use the Snowball Edge client to transfer the data to the device, and return the device so that AWS can import the data into Amazon S3. This is a very good option and I will keep it as the answer for now, because it makes sense: you have 70 terabytes of data and you do not want to burn your network connectivity, so the best way to transfer it is a Snowball device. You get the Snowball Edge job, it takes around 7 to 10 days, and AWS imports the data into S3 for you. You can transfer the 70 terabytes, which is not growing at all, without touching your bandwidth; that is a viable option. Option C: deploy an S3 File Gateway on premises, create a public service endpoint to connect to the S3 File Gateway, and create an S3 bucket. I will discard this too: it still requires network connectivity, and it is a complicated setup given that the data is not growing at all, so there is nothing to keep synchronizing between on premises and the cloud. File Gateways are much better when your data keeps growing or changing and you want ongoing synchronization; here we just need to store and transfer the data once. Option D: set up an AWS Direct Connect connection. With just those few words I can discard it: we need the least possible network bandwidth, so why would you go for a Direct Connect link that gives you gigabits per second of throughput, costs a lot more, and takes about one to one and a half months to set up? Not viable. So the correct answer is B in this case. Let us move on to the next question. A company has an application that ingests incoming messages, and dozens of other applications and microservices quickly consume these messages. The number of messages varies drastically and sometimes increases suddenly to 100,000 each second. The company wants to decouple the solution and increase scalability. Which solution meets these requirements? When it comes to microservices and decoupling, only two services should come to mind that really help in this scenario.

One is SNS and one is SQS. SNS stands for Simple Notification Service and SQS stands for Simple Queue Service. Now the thing is SQS follows a queue pattern where the messages come into the queue and the microservices.

you know, pull from the queue. And SNS is just, you know, publishes the messages, right? It's a publish subscribe pattern.

It publishes the messages and all the subscribers just get the message. Now I can go with either of these services in this case. You know?

So the best way to answer this question is to look for either of those services in the options. In this case I can see that option D mentions SNS: publish the messages to an Amazon Simple Notification Service topic with multiple Amazon Simple Queue Service (SQS) subscriptions, and configure the consumer applications to process the messages from those queues. That is a simple fan-out pattern. What is the fan-out pattern? Your messages come in, you have your topic, and your SQS queues are subscribers of that topic, so the topic fans the messages out to the queues; then whatever microservices you have, M1, M2, M3, can pull those messages from the SQS queues. As I have told you, when it comes to SNS and SQS you can use either of them on their own, or you can use them as a combination, like the fan-out pattern in this case.

But when it comes to microservices, always SNS and SQS should be your answer. And in this case, D has that particular property. Now, when it comes to other options, completely discard all these options because it doesn't matter. You just directly go with this option. Let us move on.
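If you want to picture the fan-out wiring, here is a hedged boto3 sketch; the topic and queue names are invented, and the queue access policy that SNS needs is only mentioned in a comment.

```python
# Sketch of the SNS-to-SQS fan-out pattern from option D.
import boto3

sns = boto3.client("sns")
sqs = boto3.client("sqs")

topic_arn = sns.create_topic(Name="incoming-messages")["TopicArn"]

for name in ["orders-service-queue", "billing-service-queue"]:
    queue_url = sqs.create_queue(QueueName=name)["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # Each queue subscribes to the topic, so every published message is copied
    # into every queue; the consuming microservices then poll their own queue.
    sns.subscribe(TopicArn=topic_arn, Protocol="sqs", Endpoint=queue_arn)

# Note: in a real setup each queue also needs an access policy that allows this
# SNS topic to send messages to it; that step is omitted here.
```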

Question number eight. A company is migrating a distributed application to AWS. The application serves variable workloads. The legacy platform consists of a primary server that coordinates jobs across multiple compute nodes. So: multiple compute nodes, legacy platform, variable workloads. The company wants to modernize the application with a solution that maximizes resiliency and scalability. How should a solutions architect design the architecture to meet these requirements? Unlike the previous question, you cannot jump straight to an answer here, because it is one of those slightly ambiguous questions where you can think of multiple designs. This is where you use the option elimination technique the most: consider all the options and then eliminate them one by one. Option A: configure a Simple Queue Service (SQS) queue as a destination for the jobs, which kind of makes sense; implement the compute nodes with Amazon EC2 instances that are managed in an Auto Scaling group, which also makes sense; and configure EC2 Auto Scaling to use scheduled scaling. Hmm.

This is where I stop liking it. Why would you use scheduled scaling for variable workloads? It does not make any sense, so ignore option A. Option B: configure an Amazon SQS queue as the destination for the jobs, implement the compute nodes with Amazon EC2 instances managed in an Auto Scaling group, and configure EC2 Auto Scaling based on the size of the queue. This is where it gets interesting, and I think this is the answer, because it makes sense: messages come into an SQS queue, there is an Auto Scaling group of EC2 instances, and a CloudWatch alarm fires once the queue builds up past a certain number of messages, so more instances are added to the Auto Scaling group to work through the backlog. That is a very good pattern, so let us keep B as the answer for now. Option C: implement the primary server and the compute nodes with Amazon EC2 instances managed in an Auto Scaling group, and configure AWS CloudTrail as a destination for the jobs. That makes no sense at all: CloudTrail is used to log the API requests in your account, who accessed what; it is not a destination for processing jobs, so C is done. Option D: implement the primary server and the compute nodes with Amazon EC2 instances managed in an Auto Scaling group, configure Amazon EventBridge as a destination for the jobs, and configure EC2 Auto Scaling based on the load on the compute nodes. It sounds plausible at first, but it does not hold up. EventBridge is meant for event notifications, and if you scale based on the load on the compute nodes, how would the Auto Scaling group know when the backlog is actually building up? Using the size of the queue makes sense; using the load on the compute nodes does not, and neither does using EventBridge as the job destination. That is why I discard this option, and B stays as the correct answer.
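To make option B's scaling trigger concrete, here is a hedged sketch of a target-tracking policy on the queue depth. The group name, queue name and target value are assumptions, and a production setup would often track a backlog-per-instance metric instead of the raw queue length.

```python
# Sketch: scale the Auto Scaling group on the SQS queue depth.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.put_scaling_policy(
    AutoScalingGroupName="compute-nodes-asg",      # placeholder group
    PolicyName="scale-on-queue-depth",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "CustomizedMetricSpecification": {
            "Namespace": "AWS/SQS",
            "MetricName": "ApproximateNumberOfMessagesVisible",
            "Dimensions": [{"Name": "QueueName", "Value": "jobs-queue"}],
            "Statistic": "Average",
        },
        "TargetValue": 100.0,   # aim for roughly 100 waiting messages
    },
)
```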
Let us move on to the ninth question. A company is running an SMB file server in its data center, and honestly, "SMB file server" is almost all you need to answer the question. The file server stores large files that are accessed frequently for the first few days and are rarely accessed after that. When you see this kind of wording, your mind should go to lifecycle policies. The total size of the data is increasing and is close to the company's total storage capacity. So the size keeps increasing and it is an SMB file server, which makes my mind go to the File Gateway. Remember, in the earlier question where the data was not growing I said you do not need a File Gateway, because there is nothing to synchronize; here the data is growing every day and needs to be synchronized with the cloud, so you do need the File Gateway. A solutions architect must increase the company's available storage space without losing low-latency access to the most recently accessed files, and must also provide file lifecycle management, so the question has practically given you the answer. The data is increasing and needs synchronization, so think File Gateway, and you also need a lifecycle policy. Which option has both of those things? Option B: use an Amazon S3 File Gateway, with an S3 lifecycle policy to transition the data to S3 Glacier Deep Archive after seven days. Right? Kind of makes sense.
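For the lifecycle half of that answer, a minimal sketch of the transition rule might look like this; the bucket name and rule ID are placeholders.

```python
# Sketch: transition objects to S3 Glacier Deep Archive after 7 days.
import boto3

boto3.client("s3").put_bucket_lifecycle_configuration(
    Bucket="file-gateway-backing-bucket",          # placeholder bucket
    LifecycleConfiguration={
        "Rules": [{
            "ID": "deep-archive-after-7-days",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},              # apply to every object
            "Transitions": [{"Days": 7, "StorageClass": "DEEP_ARCHIVE"}],
        }]
    },
)
```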

Okay, let us move on to question number 10. A company is building an e-commerce web application on AWS. The application sends information about new orders to an Amazon API Gateway REST API for processing. The company wants to ensure that orders are processed in the order that they are received. Without even looking at the options, my mind goes straight to an SQS FIFO queue, because when you want ordering and no duplication of messages, a FIFO queue is the answer. So without blinking I go to B, which says: send a message to an Amazon SQS FIFO queue when the application receives an order, and configure the SQS FIFO queue to invoke an AWS Lambda function for processing. Sometimes the answer is right there in your face and you do not even have to spend two seconds on it; these are the relaxing questions in the exam, and they let you spend more time on the more difficult ones. This is an example of that.

All right, let us look at the next question. A company has an application that runs on Amazon EC2 instances and uses an Amazon Aurora database. The EC2 instances connect to the database using usernames and passwords that are stored locally in a file, which is bad. The company wants to minimize the operational overhead of credential management. When you see databases plus credential management, your mind should go to one place: AWS Secrets Manager. That's it; you do not have to look for any other option, just go with AWS Secrets Manager, and in this case I can see that is option A. You do not even need to read the rest of the options and waste your time.
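On the application side, using the secret instead of a local credentials file is roughly this; the secret name and the JSON key names are assumptions.

```python
# Sketch: read database credentials from Secrets Manager at runtime.
import json
import boto3

secret = boto3.client("secretsmanager").get_secret_value(
    SecretId="prod/aurora/app-user"     # placeholder secret name
)
creds = json.loads(secret["SecretString"])

# The application uses these values instead of reading a local password file.
db_user, db_password = creds["username"], creds["password"]
```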

Now let us move on to question number 12. A global company hosts its web application on Amazon EC2 instances behind an Application Load Balancer (ALB). The web application has static data and dynamic data, and the company stores its static data in an Amazon S3 bucket. The company wants to improve performance and reduce latency for the static data and the dynamic data. The company is using its own domain name, registered with Amazon Route 53. What should a solutions architect do to meet these requirements? You can get confused in this kind of scenario, but it says the application has static data and dynamic data and the company wants better performance and lower latency for both, so my mind goes to the edge: edge locations. What runs at edge locations? Two things: CloudFront and Global Accelerator. In this case we do not need Global Accelerator, because we are not dealing with network-layer traffic; we are dealing with the application layer. So CloudFront makes the most sense. And looking at the options, all of them include CloudFront.

How will you differentiate between them? Let us start one by one. The first option create an Amazon CloudFront distribution that has the S3 bucket and ALB as origins.

Configure Route 53 to route traffic to the CloudFront distribution. I think this is the right answer and I am going to keep it, because it is the easiest and has the least operational overhead: just configure the S3 bucket and the ALB as origins. You can configure almost anything as an origin in CloudFront, whether it is an S3 bucket, an HTTP endpoint or an ALB. The second option: create an Amazon CloudFront distribution that has the ALB as an origin, and create an AWS Global Accelerator accelerator. We do not need that.

So I am just going to close that one, because we do not need Global Accelerator in this case. Option C: create an Amazon CloudFront distribution that has the S3 bucket as an origin, plus Global Accelerator again; I am going to say no. Option D: a CloudFront distribution with the ALB as an origin, again combined with Global Accelerator. So all of these other options pair Global Accelerator with CloudFront.

It does not make sense to use both of them together here, and it does not make sense to use Global Accelerator at all. So I am going to stick with the easiest option, the one that simply uses CloudFront with the S3 bucket and the ALB as origins. Sometimes the easiest options are the answers; do not assume the most complex option must be the right one.
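Roughly, that winning option could be wired up like this. Everything here (names, domain values and the managed cache-policy IDs) is an assumption, and Route 53 would then alias the company's domain to the distribution.

```python
# Very rough sketch of option A: one CloudFront distribution, S3 + ALB origins.
import time
import boto3

boto3.client("cloudfront").create_distribution(DistributionConfig={
    "CallerReference": str(time.time()),
    "Comment": "static from S3, dynamic from the ALB",
    "Enabled": True,
    "Origins": {"Quantity": 2, "Items": [
        {"Id": "static-s3",
         "DomainName": "static-assets-bucket.s3.amazonaws.com",
         "S3OriginConfig": {"OriginAccessIdentity": ""}},
        {"Id": "dynamic-alb",
         "DomainName": "my-alb-1234567890.us-east-1.elb.amazonaws.com",
         "CustomOriginConfig": {"HTTPPort": 80, "HTTPSPort": 443,
                                "OriginProtocolPolicy": "https-only"}},
    ]},
    # Dynamic traffic goes to the ALB by default, with caching turned off...
    "DefaultCacheBehavior": {
        "TargetOriginId": "dynamic-alb",
        "ViewerProtocolPolicy": "redirect-to-https",
        "CachePolicyId": "4135ea2d-6df8-44a3-9df3-4b5a84be39ad",  # assumed managed "CachingDisabled" ID
    },
    # ...while /static/* is served and cached from the S3 origin.
    "CacheBehaviors": {"Quantity": 1, "Items": [{
        "PathPattern": "/static/*",
        "TargetOriginId": "static-s3",
        "ViewerProtocolPolicy": "redirect-to-https",
        "CachePolicyId": "658327ea-f89d-4fab-a63d-7e88639e58f6",  # assumed managed "CachingOptimized" ID
    }]},
})
```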

Now, the next question: a company performs monthly maintenance on its AWS infrastructure. During these maintenance activities, the company needs to rotate the credentials for its Amazon RDS for MySQL databases across multiple AWS Regions. Which solution will meet these requirements with the least operational overhead? I have already told you: databases plus credential management means AWS Secrets Manager. Just go with option A, which says store the credentials as secrets in AWS Secrets Manager; ignore all the other options, we do not need them at all.

Question number 14. A company runs an e-commerce application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones, and the Auto Scaling group scales based on CPU utilization metrics. The e-commerce application stores the transaction data in a MySQL 8.0 database that is hosted on large EC2 instances. The database performance degrades quickly as the application load increases. The application handles more read requests than write transactions.

The company wants a solution that will automatically scale the database to meet the demand of unpredictable read workloads while maintaining high availability. Which solution will meet these requirements? If you have studied RDS, you know that to scale read capacity you need read replicas, so your mind should go to the option that mentions them. Option A, use Amazon Redshift: it is a data warehouse, not an option. Option B, use Amazon RDS with a Single-AZ deployment and configure Amazon RDS to add reader instances in a different Availability Zone: the question asks for automatic scaling of unpredictable read workloads while maintaining high availability, and a Single-AZ deployment will not give you that, so B is also out of the question. Option C, use Amazon Aurora with a Multi-AZ deployment, which tackles the availability problem, and configure Aurora Auto Scaling with Aurora Replicas, which tackles the read problem. So C is the correct answer. ElastiCache is not the option here because it needs a lot of code changes, and when the question does not call for code changes you do not go with ElastiCache. So the best option is C in this case.
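As a rough illustration of the Aurora side of option C, replica auto scaling can be registered like this; the cluster name, capacity limits and CPU target are assumptions.

```python
# Sketch: auto scale the number of Aurora Replicas on average reader CPU.
import boto3

aas = boto3.client("application-autoscaling")

aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:ecommerce-aurora-cluster",       # placeholder cluster
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=8,
)

aas.put_scaling_policy(
    PolicyName="reader-cpu-target-tracking",
    ServiceNamespace="rds",
    ResourceId="cluster:ecommerce-aurora-cluster",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
        "TargetValue": 60.0,   # add replicas when average reader CPU stays above ~60%
    },
)
```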

Cool, question number 15. A company recently migrated to AWS and wants to implement a solution to protect the traffic that flows in and out of the production VPC. The company had an inspection server in its on-premises data center, and that inspection server performed specific operations such as traffic flow inspection and traffic filtering.

The company wants to have the same functionality in the AWS Cloud. Traffic flow inspection and traffic filtering are exactly what AWS Network Firewall provides, so without even blinking you can look for the option that says Network Firewall; that is the correct answer. Now, you might ask what to do if you do not know about Network Firewall, because sometimes you forget a service. Take GuardDuty: it is a threat detection service, not a traffic inspection service, so it makes no sense to go with that. Traffic Mirroring is a feature that allows you to replicate and send

a copy of network traffic from VPC to another VPC. It can be in an on-premises location also. It is not a service that performs traffic inspection or filtering. So B is also out of the question.

firewall manager is a security management service that helps you to centrally configure and manage firewalls. It's kind of a, you know, one place you manage everything, but it's not used for traffic inspection and filtering. So D is also out of the picture.

So if you do not know the answer, try to use the elimination technique in this case. So C is the correct answer in this case. If you remember that AWS network firewall provides all these things, then you do not even have to use, you know, these elimination techniques.

Then you can go directly to the answer. Cool, let us go on to question number 16 and read what it says. A company hosts a data lake on AWS. The data lake consists of data in Amazon S3 and Amazon RDS for PostgreSQL. The company needs a reporting solution that provides data visualization and includes all the data sources within the data lake. Only the company's management team should have full access to all the visualizations; the rest should have limited access. When it comes to data visualization, QuickSight is your service, and for permissions, that is, controlling which user can access what, QuickSight has its own concept of users. So QuickSight is your answer, which means you can directly ignore C and D and only think about A and B for this question. Option A: create an analysis in Amazon QuickSight, connect all the data sources, create new datasets, publish dashboards to visualize the data, and share the dashboards with the appropriate IAM roles. IAM roles? Why would you use those when QuickSight has its own users? So A is ignored, and B is the correct answer; once you have eliminated A, C and D you barely even need to read it. B says: publish dashboards to visualize the data and share the dashboards with the appropriate users and groups. That's it: QuickSight has its own users and groups, which are different from IAM users and groups, so you do not have to think about it that much. So B is your answer. Sometimes people ask why we should not first integrate the data using an AWS Glue crawler and then run a federated query across all those datasets; that is a much more complicated task, and QuickSight can already load data from various sources, so why not use the easier method? The Glue crawler route has other, more involved capabilities, and you go for it only when you actually need those requirements.

So we will just ignore the Glue crawler approach for now. Question number 17. This is a relaxing question for you guys. A company is implementing a new business application. The company's application runs on two Amazon EC2 instances and uses an Amazon S3 bucket for document storage.

A solutions architect needs to ensure that the EC2 instances can access the S3 bucket. I mean, what is this question? It is very simple: EC2 instances need to access an S3 bucket.

Simply your mind will go to roles. I am writing it in big letters. Roles.

So, wherever it says role with EC2 instances, just go for that which is A in this case. Right. Cool.

This was a very easy question. Let us go to the next one.
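For completeness, here is a hedged sketch of what "attach a role to the instances" amounts to; the role name, profile name and the choice of managed policy are assumptions.

```python
# Sketch of option A: an IAM role EC2 can assume, attached via an instance profile.
import json
import boto3

iam = boto3.client("iam")

trust = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow",
                   "Principal": {"Service": "ec2.amazonaws.com"},
                   "Action": "sts:AssumeRole"}],
}

iam.create_role(RoleName="app-ec2-s3-role", AssumeRolePolicyDocument=json.dumps(trust))
iam.attach_role_policy(RoleName="app-ec2-s3-role",
                       PolicyArn="arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess")

iam.create_instance_profile(InstanceProfileName="app-ec2-s3-profile")
iam.add_role_to_instance_profile(InstanceProfileName="app-ec2-s3-profile",
                                 RoleName="app-ec2-s3-role")

# The instance profile is then attached to the EC2 instances (at launch or with
# associate_iam_instance_profile); no access keys are stored on the instances.
```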

Question number 18. Now we have a bigger question, and this one can be a little bit tricky. So let us look at it.

An application development team is designing a microservice that will convert large images to smaller, compressed images. When a user uploads an image through the web interface, the microservice should store the image in an Amazon S3 bucket, process and compress the image with an AWS Lambda function, and store the image, in its compressed form, in a different S3 bucket. So it is a typical Lambda-plus-S3 application with a microservice flavor. A solutions architect needs to design a solution that uses durable, stateless components to process the images automatically. The question asks which combination of actions will meet these requirements.

So you need Lambda and S3, and for the microservice part you need something more, but you know for sure that a Lambda function has to be configured. Let us see which options mention the Lambda function: B and C do, so let us see which of those is correct. Option C: configure a Lambda function to monitor the S3 bucket for new uploads; when an uploaded image is detected, write the file name to a text file in memory and use that text file to keep track of images that have been processed. I do not like this option: the in-memory text file just does not click with me, and it is neither durable nor stateless, so I am going to ignore it. Option B: configure the Lambda function to use Amazon Simple Queue Service as an invocation source, and when the SQS message is successfully processed, delete the message from the queue. This makes sense: the thing you were trying to do with a text file, you instead do by writing the image metadata into the SQS queue, and whenever a particular message has been processed you delete it. So B is one of the answers in this case.

Now the next thing is that you need S3. Option D does not involve S3, so ignore it. Option E has S3, but it says to configure Amazon EventBridge to monitor the S3 bucket and, when an image is uploaded, send an alert to Amazon Simple Notification Service with the application owner's email address for further processing. We do not need SNS here: we already have SQS, and there is no publish-and-subscribe pattern in this design, so we will ignore E as well. The remaining option is A, so we are going to select it, but let us read it anyway. It says to create an Amazon Simple Queue Service queue, and obviously we are using a queue with the Lambda function here, so the first step is creating that SQS queue.

Then configure the S3 bucket to send a notification to the queue when an image is uploaded; this is known as S3 event notifications, which makes sense and is easier to set up than wiring an event bus in Amazon EventBridge. If you combine the two steps, creating a queue and a bucket and enabling S3 event notifications, then every uploaded image lands as a message in the queue, the Lambda function is triggered to process and compress it, and at the end the message is deleted from the queue. That makes a lot of sense, so A and B will be the answers.
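Here is a rough sketch of that wiring with boto3; the bucket, queue and function names are invented, and the queue policy that lets S3 deliver messages is left out for brevity.

```python
# Sketch: S3 event notifications into SQS, and SQS as the Lambda invocation source.
import boto3

s3 = boto3.client("s3")
lambda_client = boto3.client("lambda")

# Step A: the bucket sends an event to the queue for every new upload.
s3.put_bucket_notification_configuration(
    Bucket="raw-images-bucket",
    NotificationConfiguration={
        "QueueConfigurations": [{
            "QueueArn": "arn:aws:sqs:us-east-1:123456789012:image-jobs",
            "Events": ["s3:ObjectCreated:*"],
        }]
    },
)

# Step B: the queue invokes the compression function; Lambda deletes the message
# from the queue once it is processed successfully.
lambda_client.create_event_source_mapping(
    EventSourceArn="arn:aws:sqs:us-east-1:123456789012:image-jobs",
    FunctionName="compress-image",
    BatchSize=1,
)
```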

Let us go to the 19th question now. The 19th question is also kind of a relaxing one, but you can still get confused, so let us read it. A company has a three-tier web application that is deployed on AWS. The web servers are deployed in a public subnet in a VPC, and the application servers and database servers are deployed in private subnets in the same VPC. So it is a typical public/private subnet structure.

The company has deployed a third-party virtual firewall appliance from the AWS Marketplace in an inspection VPC. A solutions architect needs to integrate the web application with the appliance so that all traffic to the application is inspected before it reaches the web servers. Which solution will meet these requirements with the least operational overhead? If you have learnt about the three types of load balancer, ALB, NLB and GWLB, that is, Application Load Balancer, Network Load Balancer and Gateway Load Balancer, then you know the ALB is for layer 7 traffic, the NLB is for layer 4 traffic, and the Gateway Load Balancer is used for inspection purposes. If you know that, you can go straight to the Gateway Load Balancer option, which is D; if not, you can use the elimination technique. First, the Network Load Balancer: we have nothing special to do with layer 4 traffic here, so we ignore option A. Next, creating an Application Load Balancer in the public subnet of the application VPC: we are not solving a layer 7 problem either, so B is also ignored. Deploy a transit gateway in the inspection VPC?

We are not connecting multiple networks, and we are not using Direct Connect or a combination of Site-to-Site VPNs or any of that network topology stuff; we are using a simple topology. So C is out of the question and we are left with D, so you can go with D. But if you already know that the Gateway Load Balancer exists for inspection use cases, you can jump straight to option D.

Now on to the 20th question. This is also a very easy question. The company wants to improve its ability to clone large amounts of production data into a test environment in the same AWS Region.

In this case, note "same AWS Region". The data is stored on Amazon EC2 instances on Amazon Elastic Block Store (EBS) volumes, and modifications to the cloned data must not affect the production environment. The software that accesses this data requires consistently high I/O performance.

A solution architect needs to minimize the time that is required to clone the production data into the test environment. Which solution will meet this requirement? Now the thing is, if you remember, we can take EBS snapshots of running EBS volumes.

And there is a feature known as fast snapshot restore, which is, as the name says, fast. You can then restore an EBS volume from a snapshot and attach EBS volumes to running EC2 instances. This is fairly basic EC2 and EBS knowledge, so if you know all of that you can go directly to option D.
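The moving parts of that answer look roughly like this; the IDs and Availability Zone are placeholders, and a real script would wait for the snapshot to finish before enabling fast snapshot restore and creating the volume.

```python
# Sketch: snapshot production data, enable fast snapshot restore, clone for test.
import boto3

ec2 = boto3.client("ec2")

snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0",
                           Description="prod data for test environment")

# Fast snapshot restore removes the first-read latency penalty on restored volumes.
ec2.enable_fast_snapshot_restores(
    AvailabilityZones=["us-east-1a"],
    SourceSnapshotIds=[snap["SnapshotId"]],
)

# Clone: create a new volume from the snapshot and attach it to the test instance.
vol = ec2.create_volume(SnapshotId=snap["SnapshotId"],
                        AvailabilityZone="us-east-1a",
                        VolumeType="gp3")
ec2.attach_volume(VolumeId=vol["VolumeId"],
                  InstanceId="i-0123456789abcdef0",
                  Device="/dev/sdf")
```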

Let us see question number 21. An e-commerce company wants to launch a one-deal-a-day website on AWS. Each day will feature exactly one product on sale for a period of 24 hours. The company wants to be able to handle millions of requests each hour with millisecond latency during peak hours. Which solution will meet these requirements with the least operational overhead? There are a couple of things to keep in mind. First, it is one product on sale for 24 hours, which points to a mostly static website, because nothing changes until the next day's update. Second, handling millions of requests each hour with millisecond latency during peak hours points to the edge location concepts, which basically means CloudFront. So look for the option that combines both key points: S3 and CloudFront. Going through the options: the first says use Amazon S3 to host the full website in different S3 buckets. Why would we want different S3 buckets? Cancelled. The next says deploy the full website on EC2 instances; no to that as well. Then: migrate the full application to run in containers. Why would you run a simple application like this on containers? Cancel. The only option left is D, which says use an Amazon S3 bucket to host the website's static content, which is fine, and use a CloudFront distribution in front of it. That's our answer, guys.

Let us move on to the next question, question number 22. A solutions architect is using Amazon S3 to design the storage architecture of a new digital media application. The media files must be resilient to the loss of an Availability Zone. "Resilient to the loss of an Availability Zone" means S3 One Zone-IA is out of the question, so you can directly discard that option.

Some files are accessed frequently, while others are rarely accessed in an unpredictable pattern. "Unpredictable pattern" points to S3 Intelligent-Tiering: if you knew exactly how the objects were going to be accessed you could set up lifecycle rules, but this wording tells you to go with S3 Intelligent-Tiering, so the answer is S3 Intelligent-Tiering. Sometimes the question itself gives you the answer and you do not have to think about it that much. Talking about question number 23: a company is storing backup files by using Amazon S3 Standard storage.

The files are accessed frequently for one month; however, the files are not accessed after one month. "Not accessed after one month" kind of points to

So intelligent tiering is out of the question because you don't want to use intelligent tiering when you already know the pattern, right? Then coming to B, I think this is the correct answer because create a S3 lifecycle configuration to transition objects from S3 standard to S3 Glacier deep archive after one month. So it makes sense to go with B. For C, as deep archive is already a better solution, solution why will you transfer to s3 standard infrequent access so this is going to go cancel this out and cancel this out too because we don't want one zone access right so b is the correct answer in this case let us move on to the next question question number 24 so company observes an increase in ec2 costs in its most recent bill the billing team notices unwanted vertical scaling unwanted vertical scaling you know vertical scaling increasing ram compute capacity whatever vertical of instance type for a couple of vc2 instances a solutions architect needs to create a graph comparing the last two months ec2 costs and what it is called perform in-depth analysis to identify the root cause analysis of vertical scaling now there is a tool called the cost explorer which is kind of the quick site for your cost estimations if you want to do cost estimations you want to do budget planning or you want to do some kind of forecasts you can use cost cost explorer to identify the root cause and to kind of get the future billing of your particular services now you may think about budgets also but budgets is different from cost explorer budgets is kind of tracks your current expenditure and you can kind of create alarms and budgets right where for example you cross hundred dollar limit and there will be a alarm raised so for those kind of operations you use use budgets but for estimation or for finding a root cause or you know doing predictions about your costs cost explorer is the tool for you so wherever it says cost explorer just go with that particular thing so use cost explorers granular filtering feature to perform in-depth analysis of ec2 costs based on instance types right so don't get confused between budgets and cost explorers all right Question number 25, what does it say? A company is designing an application, right?

The application uses an AWS Lambda function to receive information through an Amazon API Gateway REST API and to store the information in an Amazon Aurora PostgreSQL database. During the proof-of-concept stage, the company had to increase the Lambda quotas significantly to handle the high volumes of data that it needs to load into the database. A solutions architect must recommend a new design to improve scalability and minimize the configuration effort.

Now the thing is, the company has to increase the Lambda quotas significantly to handle high volumes of data. So basically they are facing some kind of latency issues regarding high volumes of data. So what can we do to get rid of it? basically they are receiving more they can handle so let us see one by one options and let us think about which is the correct answer so first of first of all the first option says the refactor the lambda function code to Apache temp Tomcat code that runs on easy instances so this is kindly kind of I rejected because Amazon ec2 instances is out of the question completely change the platform from Aurora to Dynamo DB provision this is also cancelled out because they already have they have think thought about a solution regarding sequel databases why will you you know completely transform it to no sequel databases now C and D may be viable options let us see one by one set up two lambda functions configure one function to receive the information configure the other form function to information to database integrate the lambda function by using simple notification service now the thing is till here it was the correct answer but when it came to this this is actually the wrong answer why because see uh sns uses a publish subscribe pattern it means that it fans out whatever notification it receives to its subscribers right but the thing is we have to control the information flow because the lambda functions are not able to handle what they are actually receiving there are a high volume of requests so we need a some kind of queue to actually put them as a buffer in the queue and one by one the lambda functions is going to take that particular information out of the queue and that points to SQS so the D th option has the correct SQS integrated with two lambda functions so that is the correct answer in this case so talking about question number 26 guys a company needs to review its aws cloud deployment to ensure that its s3 buckets do not have unauthorized configuration changes very easy question so when it comes to configuration in aws you have to remember three things guys one is cloud watch one is cloud trail and another one is aws config And you need to understand the difference between this. This is used for monitoring.

This is used for tracing who access to what like APIs. And this is used for configuration. Right. So when it comes to configuration, your answer is AWS config. So just directly go with turn on AWS config with the appropriate rules.
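For a question like the S3 configuration check (and the tagging question near the end of this video), the hands-on version of "turn on AWS Config with the appropriate rules" is roughly the sketch below. It is hypothetical: it assumes a configuration recorder and delivery channel are already set up, and the rule, tag key and resource types shown are just examples.

```python
# Sketch: enable a managed AWS Config rule that flags resources missing a required tag.
import json
import boto3

boto3.client("config").put_config_rule(ConfigRule={
    "ConfigRuleName": "required-tags-check",
    "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},   # managed rule
    "InputParameters": json.dumps({"tag1Key": "CostCenter"}),          # assumed tag key
    "Scope": {"ComplianceResourceTypes": [
        "AWS::EC2::Instance", "AWS::RDS::DBInstance", "AWS::Redshift::Cluster"
    ]},
})
```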

That should make the most of the sense. Now coming to question number 27. a company is launching a new application and will display application metrics on amazon cloud dashboard so this is the thing the company's product manager needs to access this dashboard periodically periodically right the product manager does not have an aws account does not have an aws account very very important to see this a solution architect must provide access to the product manager by what following the principle of least privilege which solution will meet these requirements now let us see one by one and see the solutions for this question now the eighth option share the dashboard from the cloud watch console and enter the product manager email address and complete the sharing steps provide a shareable link for the dashboard to the product manager now this kind of seems like a you know good solution to me because it has the principle of least privilege and we are sharing it from the cloudwatch console we are entering the email address of the product manager so that the product manager will get kind of a temporary link and sharing steps right provide a shareable link right as I've told you so this is the correct answer in my say in my case but the thing is let us talk about all of these solutions I am user is completely out of the question because the the person doesn't have a AWS account. So this is also out of question. And deploy a bastion server in a public subnet, this is also out of the question.

So the only viable option in this case is option number eight. That should make the lot in the most of sense. So question number 28, it says that a company is migrating applications to AWS, applications are deployed in different accounts, right? And blah, blah, blah, no the account centrally managed by AWS. organizations and the company security team needs a single sign-on solution so this is where the key point lies a single sign-on solution across all companies accounts the company must continue managing the users and groups and it's on-premises self-managed Active Directory now the thing is there is a feature in AWS that is known as active ad which means it's Microsoft Active Directory and you can connect it to your you know on-premises Microsoft server like if you want to establish a connection between this thing and your on-premises server and your cloud server like for example for credential management and everything so when thinking about that the answer lies between these two options right a and b now let us read about this thing so c and d is out of the question because it is not possible to do either of those things because you have to use microsoft active directory with sso right so enable aws single sign on from the aws so console that is fine create a one-way forest trust or a one-way domain trust to con to connect the company's self-managed microsoft active directory with aws so by using aws directory service for microsoft active directory okay this is kind of a little bit tricky let us let us read the bth question now uh sorry option enable this this thing from the aws so console created two-way for forest trust to connect the companies self-managed Active Directory with AWS so by using AWS and directory service from active Microsoft Active Directory so the answer lies between these two particular points two way and one way now in the question what it says that company security team needs the SSO solution the company must continue managing the users and groups on its own premises self-managed Microsoft SSO directory so this is kind of a two-way thing link happening because because if there's one way only one of the particular points for example from cloud to the on-premises server will work not the two way thing.

So in this case, it needs to manage the users and also needs to authenticate with your on-premises Active Directory. So in this case, a two way trust relationship must get established. So with that in mind, A is discarded and B is the correct answer. you know this is a tricky question and people will get confused if you if you get confused in one way and two way but the thing is if you if you forget about that just understand the question what it what it is trying to do it is trying to authenticate so one way second way it also wants to manage the users and groups so it's the two way so two things you need to manage right think about it as that if you do not remember the actual uh you know concept all right question number 29 a company provides a voice over internet protocol VOIP service that uses UDP connections.

All right, question number 29. A company provides a Voice over Internet Protocol (VoIP) service that uses UDP connections. The service consists of Amazon EC2 instances that run in an Auto Scaling group. The company has deployments across multiple AWS Regions. So: VoIP service, UDP connections, EC2 instances, Auto Scaling group. The company has deployments across multiple AWS Regions and needs to route users to the Region with the lowest latency, which kind of points you towards load balancing; just reading the question gives you a ton of hints. The company also needs automated failover between Regions, which is also a kind of load-balancing functionality. But which load balancer you have to choose, you need to keep in mind. Talking about the Application Load Balancer.

So, this is Voice over Internet Protocol over UDP, and the Application Load Balancer is basically a layer 7 (HTTP/HTTPS) load balancer, so it is not going to work; all of the Application Load Balancer options are gone by default. So talking about the two remaining options, A and C: deploy a Network Load Balancer, that is fine, associate a target group with the Auto Scaling group, and use the NLB as an AWS Global Accelerator endpoint in each Region. That kind of makes sense.
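As a rough sketch of what that NLB-plus-Global-Accelerator setup from option A could look like in code (the names, port, and ARNs are made-up placeholders, and it assumes the NLBs already exist in each Region):

    import boto3

    # The Global Accelerator API is served from us-west-2 regardless of where the endpoints live.
    ga = boto3.client("globalaccelerator", region_name="us-west-2")

    accelerator = ga.create_accelerator(Name="voip-accelerator", Enabled=True)
    acc_arn = accelerator["Accelerator"]["AcceleratorArn"]

    # UDP listener for the VoIP traffic (port 5060 is just an example).
    listener = ga.create_listener(
        AcceleratorArn=acc_arn,
        Protocol="UDP",
        PortRanges=[{"FromPort": 5060, "ToPort": 5060}],
    )
    listener_arn = listener["Listener"]["ListenerArn"]

    # One endpoint group per Region, each pointing at that Region's NLB; Global
    # Accelerator then routes users to the lowest-latency healthy Region.
    nlbs = {
        "us-east-1": "arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/net/voip/abc",
        "eu-west-1": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:loadbalancer/net/voip/def",
    }
    for region, nlb_arn in nlbs.items():
        ga.create_endpoint_group(
            ListenerArn=listener_arn,
            EndpointGroupRegion=region,
            EndpointConfigurations=[{"EndpointId": nlb_arn, "Weight": 128}],
        )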

So I'll just consider that as the right answer. Talking about C: deploy a Network Load Balancer, associate the target group with the Auto Scaling group, and create an Amazon Route 53 latency record that points to aliases for each NLB. Now this doesn't make as much sense, because why would you use a Route 53 latency and failover policy when Global Accelerator already gives you latency-based routing and automated failover between Regions? That's why C is ignored, so I'm going to go with A in this case.

All right, question number 30. A development team runs monthly resource-intensive tests on its General Purpose Amazon RDS for MySQL DB instance with Performance Insights enabled. The testing lasts for 48 hours once a month and is the only process that uses the database. The team wants to reduce the cost of running the tests without reducing the compute and memory attributes of the DB instance. Which solution meets these requirements most cost-effectively? It's a very straightforward question. If you go with A, stop the DB instance when the tests are completed and restart it when required: even when a DB instance is stopped you still pay for its storage, so this is out of the question. Use an Auto Scaling policy: we don't need an Auto Scaling policy, so that's out too. Now C is: create a snapshot when the tests are completed, terminate the DB instance, and restore the snapshot when required. This is a viable solution because you only pay for the snapshot storage, and whenever you run the tests you just create the database directly from the snapshot, because that's easy to do, right? The snapshot contains all the data, so you restore straight from it. That, by far, is the most correct answer. And talking about D, modify the DB instance to a low-capacity instance class when the tests are completed and modify it back again: this is kind of a far-fetched idea, it goes against the requirement of not reducing the compute and memory attributes, and you're still paying for the instance, so that is why it is the wrong answer. So the correct answer is C in this case, right? Let us move on.

All right, so let us talk about question number 31. It says a company that hosts its web application on AWS wants to ensure that all Amazon EC2 instances, Amazon RDS DB instances, and Amazon Redshift clusters are configured with tags. The company wants to minimize the effort of configuring and operating this check. Now, when it comes to configuration checks, as I've already told you, you go with AWS Config, right? So anything that talks about compliance issues or configuration checks, always go with AWS Config. Checking whether your services have tags is a compliance kind of thing, and that is configuration stuff happening, so you will go with AWS Config rules: you just create a rule that says, okay, if the required tag is absent on a resource, mark it as non-compliant and raise an alert.
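To make that concrete, here's a minimal sketch of the AWS Config managed rule for required tags; the rule name, tag key, and exact resource types are illustrative choices, not from the question.

    import json
    import boto3

    config = boto3.client("config")

    # The managed rule REQUIRED_TAGS flags resources that are missing the given tag keys.
    config.put_config_rule(
        ConfigRule={
            "ConfigRuleName": "required-tags-check",
            "Scope": {
                "ComplianceResourceTypes": [
                    "AWS::EC2::Instance",
                    "AWS::RDS::DBInstance",
                    "AWS::Redshift::Cluster",
                ]
            },
            "Source": {"Owner": "AWS", "SourceIdentifier": "REQUIRED_TAGS"},
            "InputParameters": json.dumps({"tag1Key": "Environment"}),  # example tag key
        }
    )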
So AWS Config rules is the answer in this case.

Talking about question number 32, right? A development team needs to host a website that will be accessed by other teams. The website content consists of HTML, CSS, some images, and JavaScript. What is the most cost-effective way of hosting the website? It's a completely static website, so that is basically an S3 bucket with static website hosting enabled: host the website there. So S3 static website hosting. You will face these easy questions in the exam as well, so don't worry that all the questions will be difficult; as I've told you, these are the relaxing ones.

So let us talk about the next question, number 33. A company runs an online marketplace web application on AWS. The application serves hundreds of thousands of users during peak hours. The company needs a scalable, near-real-time solution to share the details of millions of financial transactions with several other internal applications. Transactions also need to be processed to remove sensitive data before being stored in a document database. Now here there are a couple of things that point to different services. One is a near-real-time solution to share the details of millions of financial transactions: millions of transactions at millisecond latency points you to DynamoDB, because very few databases can handle that kind of scale at single-digit-millisecond latency. The other thing is the removal of sensitive data before it is stored in a document database; that kind of points to an event, and that event should trigger a Lambda function. So DynamoDB plus a Lambda function is what I think should be in the correct answer. Let us go one by one. A is out of the question over here, because setting up a rule in DynamoDB to remove sensitive data from every transaction is simply not possible. Stream the transactions into Amazon Kinesis Data Firehose: this is the tricky one, because people will pick this answer since it is a near-real-time service and you can stream near-real-time data into Kinesis Data Firehose, right? But Data Firehose cannot deliver data to Amazon DynamoDB; it can only deliver to Amazon Redshift, S3, and a couple of other destinations, so this is out of the question. Now talking about C: stream the transactions into Amazon Kinesis Data Streams, which is real-time, use AWS Lambda to remove sensitive data from every transaction, and then store the transaction data in Amazon DynamoDB. This, I think, is the correct answer. And D is also not correct, because storing the batched transactions into Amazon S3 as files, storing real-time data as files, is not a viable option either, so I'm just going to ignore that as well. So C is the correct answer.
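Here's a small sketch of what that question-33 pattern, Kinesis Data Streams triggering Lambda which scrubs fields and writes to DynamoDB, might look like; the table name and the list of sensitive fields are made up for illustration.

    import base64
    import json
    import boto3

    # Hypothetical DynamoDB table that holds the scrubbed transactions.
    table = boto3.resource("dynamodb").Table("transactions")

    SENSITIVE_FIELDS = ("card_number", "cvv", "ssn")  # illustrative field names

    def handler(event, context):
        # Lambda receives batches of Kinesis records; each payload is base64-encoded.
        for record in event["Records"]:
            transaction = json.loads(base64.b64decode(record["kinesis"]["data"]))
            for field in SENSITIVE_FIELDS:
                transaction.pop(field, None)  # drop sensitive data before storage
            table.put_item(Item=transaction)
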
Question number 34 is a very straightforward question if you read it. A company hosts its multi-tier applications on AWS. For compliance, governance, auditing, and security, the company must track configuration changes, configuration changes, right, that's AWS Config, and also one other thing, record a history of API calls, and that is CloudTrail. So the option that pairs AWS Config with CloudTrail is the correct answer, which is B in this case. So B is the correct answer. Let's go to question number 35. A company is preparing to launch a public-facing web application on the AWS Cloud.

The architecture consists of Amazon EC2 instances within a VPC behind an Elastic Load Balancer (ELB). So: EC2 instances within a VPC behind an Elastic Load Balancer. A third-party service is used for the DNS, right?

The company's solutions architect must recommend a solution to detect and protect against large-scale DDoS attacks. So when it comes to DDoS attacks, guys, there is really only one service you need to think about, and that is AWS Shield, so search for any option that contains AWS Shield. Two options contain AWS Shield: one is enable AWS Shield and assign Amazon Route 53 to it, and the other is enable AWS Shield Advanced and assign the ELB to it. Now, AWS Shield Advanced works with the ELB, right? And Shield Advanced provides you with quite a bit more functionality than standard AWS Shield, because it is a paid service, and the price is really high as well.

Right. So, as the company must protect against large-scale DDoS attacks, D is the correct answer: enable AWS Shield Advanced and assign it to the ELB.
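If you wanted to script that association, it could look roughly like this minimal sketch; it assumes the account already has an active Shield Advanced subscription, and the protection name and ELB ARN are placeholders.

    import boto3

    shield = boto3.client("shield")

    # Shield Advanced protections are attached per resource, here the load balancer.
    shield.create_protection(
        Name="web-app-elb-protection",
        ResourceArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:loadbalancer/app/web/abc123",
    )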

Question number 36. A company is building an application in the AWS Cloud. The application will store data in Amazon S3 buckets in two AWS Regions. The company must use an AWS Key Management Service (AWS KMS) customer managed key to encrypt all data that is stored in the S3 buckets. The data in both S3 buckets must be encrypted and decrypted with the same KMS key, and the data and the key must be stored in each of the two Regions. Which solution will meet these requirements with the least operational overhead? Now, when it comes to a KMS key plus multiple Regions, your mind should point to multi-Region keys. Multi-Region keys let you use the same KMS key in multiple Regions, so that is the answer here; now see which options contain multi-Region keys. One is B, which says create a customer managed multi-Region KMS key and configure the buckets to use it, and I think B should be the answer because it has multi-Region keys in it. A is create an S3 bucket in each Region and configure S3 to use server-side encryption; that is completely discarded, because it doesn't use the customer managed KMS key the question requires. The option that creates a customer managed KMS key and an S3 bucket in each Region, configures the S3 buckets to use server-side encryption with Amazon S3 managed encryption keys, and configures replication is a very big operational overhead and is not going to work. And the option that creates a separate customer managed key in each Region and configures the buckets to use those keys fails because the question literally states that the data should be encrypted and decrypted with the same KMS key across the Regions. So there is no other option than to go with multi-Region keys.
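A minimal sketch of the multi-Region key idea, assuming two example Regions; the bucket name, Regions, and description are placeholders, and the bucket in the second Region would reference the replica key the same way.

    import boto3

    # Create the primary multi-Region customer managed key in the first Region.
    kms_primary = boto3.client("kms", region_name="us-east-1")
    key = kms_primary.create_key(MultiRegion=True, Description="app data key")
    key_id = key["KeyMetadata"]["KeyId"]

    # Replicate the same key material into the second Region.
    kms_primary.replicate_key(KeyId=key_id, ReplicaRegion="eu-west-1")

    # Point the first bucket's default encryption at the key in its own Region.
    s3 = boto3.client("s3")
    s3.put_bucket_encryption(
        Bucket="app-data-us-east-1",  # placeholder bucket name
        ServerSideEncryptionConfiguration={
            "Rules": [
                {
                    "ApplyServerSideEncryptionByDefault": {
                        "SSEAlgorithm": "aws:kms",
                        "KMSMasterKeyID": key_id,
                    }
                }
            ]
        },
    )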

Let's go to question number 37 now. A company recently launched a variety of new workloads on Amazon EC2 instances in its AWS account. The company needs to create a strategy to access and administer the instances remotely and securely. The company needs to implement a repeatable process that works with AWS services and follows the AWS Well-Architected Framework. Which solution will meet these requirements with the least operational overhead? So what does it say? The company recently launched a variety of new workloads and needs a strategy to access and administer the instances remotely and securely. Now, there is something known as AWS Systems Manager Session Manager, which establishes a secure interactive shell session to your EC2 instances.
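As a quick illustration (a sketch, assuming the instances already run the SSM agent and have an instance profile with the Systems Manager permissions), you can check which instances are reachable and then open a session:

    import boto3

    ssm = boto3.client("ssm")

    # Instances registered with Systems Manager show up here once the agent and
    # instance profile are in place; no inbound SSH port needs to be open.
    info = ssm.describe_instance_information()
    for instance in info["InstanceInformationList"]:
        print(instance["InstanceId"], instance["PingStatus"])

    # An interactive shell is then opened through the Session Manager plugin, e.g.:
    #   aws ssm start-session --target i-0123456789abcdef0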

You do not even have to open SSH access in your security group, and it still works, because AWS establishes the session for you in a very secure way; you can't connect to it in the normal way from outside. So wherever an option says Systems Manager Session Manager, just go with that option, so B is the answer here. The other options are discarded automatically: a site-to-site VPN connection is not required, and creating an administrative SSH key pair is also not required, because the question says to access and administer the instances remotely and securely. Always go with the "securely" keyword, and nothing here is more secure than AWS Systems Manager Session Manager, so B is the correct answer in this case.

Okay, going to question number 38. A company is hosting a static website on Amazon S3 and is using Amazon Route 53 for DNS. The website is experiencing increased demand from around the world, and the company must decrease latency for users who access the website. Which solution meets these requirements most cost-effectively? Now, the website is experiencing demand from around the world, so it needs to decrease latency, and for decreasing latency in front of Amazon S3 your mind should automatically point to CloudFront and its edge locations. So directly go with option C. S3 Transfer Acceleration will not work because it is for the opposite direction: you use Transfer Acceleration when you want to upload data faster. Replicating the S3 bucket that contains the website to all AWS Regions with geolocation routing entries is also not required. Provisioning accelerators in AWS Global Accelerator is not required either, because CloudFront works best here and is a very efficient and cost-effective way to set this up, and we are not dealing with network-layer protocols, so Global Accelerator is also out of the question.

So let us talk about this question. A company maintains a searchable repository of items on its website. The data is stored in an Amazon RDS for MySQL database table that contains more than 10 million rows. The database has two TB of general purpose storage. There are millions of updates against this data every day through the company's website.

The company has noticed that some insert operations are taking 10 seconds or longer. So this is completely a database storage performance problem, and if you think about it, if you increase the storage or change the storage type, either of those will get you an increase in IOPS, which is input/output operations per second. So among the options, changing the database storage type is the thing we can do, because otherwise there is no other option. Let us see which storage type is best. If you go with A, change the storage type to Provisioned IOPS SSD: that is made for high levels of I/O operations with consistent and predictable performance, right? So I think this is the correct answer in this case, because it will give you the most IOPS, and database storage performance is directly linked with IOPS. So A, I think, is currently the correct answer. Talking about B, change the DB instance to a memory-optimized instance class: it can improve the performance of insert operations, but this is a storage performance problem rather than a processing power problem, right? So that is out of the question in this case. The next option is to change the DB instance to a burstable performance instance class.

A burstable class means the instance only bursts above a baseline level of performance for limited periods; it does not give you the consistent IOPS this workload needs. We don't want that, so we're going to discard this option as well.

Now, enable Multi-AZ and RDS read replicas: the thing is, read replicas are about read performance, and the question doesn't say there is any problem with read performance, right? It doesn't say that the reads must be segregated from the writes, so I'm just going to ignore this option, which is D. So A is the correct answer.
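A rough sketch of that storage-type change from option A; the instance identifier and the IOPS figure are placeholders, since the real values would come from the workload.

    import boto3

    rds = boto3.client("rds")

    # Switch the instance's storage to Provisioned IOPS SSD (io1) for consistent I/O.
    rds.modify_db_instance(
        DBInstanceIdentifier="catalog-mysql-db",  # placeholder identifier
        StorageType="io1",
        AllocatedStorage=2048,   # 2 TB, as described in the question
        Iops=12000,              # example provisioned IOPS figure
        ApplyImmediately=True,
    )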

Let's go to question number 40. A company has thousands of edge devices that collectively generate 1 TB of status alerts each day. Each alert is approximately 2 KB in size. A solutions architect needs to implement a solution to ingest and store the alerts for future analysis. The company wants a highly available solution; however, the company needs to minimize cost and does not want to manage additional infrastructure. Additionally, the company wants to keep 14 days of data available for immediate analysis and archive any data older than 14 days. So: ingesting plus storing plus lifecycle. I think you should know the combination of services required for each of these purposes: for ingesting, a Kinesis stream; for storing, S3; for lifecycle, S3 Lifecycle policies.

So any option that covers all three of those things, just pick that option. If we go with option A: create an Amazon Kinesis Data Firehose delivery stream, that is good, we need that; configure the Kinesis Data Firehose stream to deliver the alerts to an Amazon S3 bucket; set up an S3 Lifecycle configuration to transition the data to S3 Glacier after 14 days. Now, this seems like a viable option. The thing is, we could go with either Kinesis Data Streams or Kinesis Data Firehose.

If you are given both Kinesis Data Streams and Kinesis Data Firehose as options, you can often go with either of them, but there are certain differences: Kinesis Data Streams works in real time and Kinesis Data Firehose works in near real time. So whenever the question draws a distinction between real time and near real time, choose whichever service matches the question. Another thing is that Kinesis Data Firehose can only deliver to a fixed set of destinations, such as Amazon S3, Amazon Redshift, and Amazon OpenSearch Service, right?

So if the destination is anything outside that list, you go with Kinesis Data Streams instead. But as the destination is S3 over here, we can choose Kinesis Data Firehose in this case, right?
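Here's a minimal sketch of option A's two pieces, the Firehose delivery stream into S3 and the 14-day lifecycle transition; the bucket, stream name, and role ARN are placeholders, and the IAM role is assumed to already exist.

    import boto3

    firehose = boto3.client("firehose")
    s3 = boto3.client("s3")

    # Delivery stream that the devices (or an agent in front of them) write alerts into.
    firehose.create_delivery_stream(
        DeliveryStreamName="device-alerts",
        DeliveryStreamType="DirectPut",
        ExtendedS3DestinationConfiguration={
            "RoleARN": "arn:aws:iam::111122223333:role/firehose-to-s3",  # placeholder
            "BucketARN": "arn:aws:s3:::device-alerts-bucket",
        },
    )

    # Keep 14 days in S3 for immediate analysis, then archive to S3 Glacier.
    s3.put_bucket_lifecycle_configuration(
        Bucket="device-alerts-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "archive-after-14-days",
                    "Status": "Enabled",
                    "Filter": {"Prefix": ""},
                    "Transitions": [{"Days": 14, "StorageClass": "GLACIER"}],
                }
            ]
        },
    )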

Now, let us talk about option B. It launches Amazon EC2 instances, which we do not need; we do not want to set up or manage any additional infrastructure.

So B is completely out of the picture. The next option is a Kinesis Data Firehose delivery stream to ingest the alerts, configured to deliver the alerts to Amazon OpenSearch Service, which is an OpenSearch (Elasticsearch) cluster you have to set up; that is a lot of overhead, so we don't need it, right? And the last option creates an Amazon Simple Queue Service standard queue to ingest the alerts, but SQS on its own cannot ingest these streaming alerts; it would need to be used in combination with Kinesis Data Firehose or Data Streams. So A is the most viable answer in this particular case; we're going to go with A.

Now, talking about question number 41, let us go to question number 41. A company's application integrates with multiple software-as-a-service (SaaS) sources for data collection. The company runs Amazon EC2 instances to receive the data and to upload the data to an Amazon S3 bucket for analysis. The same EC2 instance that receives and uploads the data also sends a notification to the user when an upload is complete. The company has noticed slow application performance and wants to improve the performance as much as possible. Now, you actually need to remember a particular service for this question. It says multiple software-as-a-service sources; the company runs an EC2 instance to receive the data and upload it to S3, the same EC2 instance also sends a notification to the user when an upload is complete, and slow application performance has been noticed. Why? Because they are moving data between software-as-a-service

applications and AWS through that one instance. When it comes to transferring data between software-as-a-service apps and AWS, there is a service known as Amazon AppFlow, right? And you will not get confused here, because if you remember that service for software-as-a-service applications, only one option in the question will contain Amazon AppFlow. So you directly go with the option that contains Amazon AppFlow, so B is the correct answer. Creating an Auto Scaling group and so on is a bit of overhead.

So I'm not going to go with that. EventBridge is for processing events, but here the application's performance is being affected, so it has nothing to do with EventBridge; I'm going to just ignore that. A Docker container instead of an EC2 instance: even though we are switching from an EC2 instance to a Docker container, it still doesn't make any sense, because we still need to move the data, so the Docker container option is out of the picture completely, and using CloudWatch Container Insights to send events to Amazon Simple Notification Service when uploads land in the S3 bucket is a very complicated thing to be doing. So the thing is, if you know about AppFlow, you will directly choose the option that says Amazon AppFlow whenever the question is about software-as-a-service (SaaS) sources or applications.

Question number 42, a pretty straightforward question, so I'll just skip through most of it. It says that the EC2 instances download images from S3 and upload images to Amazon S3 through a NAT gateway, and the company is concerned about data transfer charges. What is the most cost-effective way for the company to avoid Regional data transfer charges? It is a single VPC, so it wants to access S3 privately; what is the option without any transfer charges? That is a gateway VPC endpoint. We have already discussed this in another question, so it should be in your mind already.

Let's go to question number 43 now. A company has an on-premises application that generates a large amount of time-sensitive data that is backed up to Amazon S3. The application has grown and there are user complaints about internet bandwidth limitations, which kind of throws you towards Direct Connect. The solutions architect needs to design a long-term solution, interesting, that allows for both timely backups to Amazon S3 and minimal impact on internet connectivity for internal users. Internet bandwidth limitations: a VPN connection cannot give you that kind of bandwidth, and it can still suffer from internet connectivity problems because the traffic travels over the public internet, so that option is gone in this case. Establish a new AWS Direct Connect connection and direct backup traffic through this connection: this, I think, is the correct answer, because it is a long-term solution and it also gives you dedicated bandwidth, so B is the correct answer, I think. The Snowball device option is out of the question, it is not a permanent solution, and D is also a pretty weak option, so I'm not going to go with D either. So B is the correct answer.

Question number 44. It says a company has an Amazon S3 bucket that contains critical data, and the company must protect the data from accidental deletion. Which combination of steps should a solutions architect take to meet these requirements? Okay, so the thing is, accidental deletion means MFA Delete; MFA Delete is there to protect against accidental deletion, a very simple thing you learn when you learn S3. So MFA Delete is one of the answers, and if you want to enable MFA Delete on a particular bucket, you also need to enable versioning on it; that is a requirement. So A and B are the correct answers in this case.
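A minimal sketch of those two steps; note that MFA Delete can only be enabled by the bucket owner's root user, and the bucket name, MFA device ARN, and code below are placeholders.

    import boto3

    s3 = boto3.client("s3")

    # Enable versioning and MFA Delete together; the MFA parameter is the device
    # serial/ARN followed by the current one-time code.
    s3.put_bucket_versioning(
        Bucket="critical-data-bucket",
        MFA="arn:aws:iam::111122223333:mfa/root-account-mfa-device 123456",
        VersioningConfiguration={"Status": "Enabled", "MFADelete": "Enabled"},
    )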

Just a minute, guys, let me set up my mic. So, a company has a data ingestion workflow that consists of the following: an Amazon Simple Notification Service (SNS) topic for notifications about new data deliveries, and an AWS Lambda function to process the data and record metadata. The company observes that the ingestion workflow fails occasionally because of network connectivity issues; when such a failure occurs, the Lambda function does not ingest the data, which obviously makes sense, unless the company manually reruns the job. Which combination of actions should a solutions architect take to ensure the Lambda function ingests all the data in the future? Now, it's a very simple pattern: the producer produces the data and the consumer consumes the data, but sometimes the consumer is not able to consume the data because there are connectivity issues or the consumer is overloaded or something like that. In that case, what do you do? You introduce a queue in between, and that queue is SQS. So we need to create an Amazon SQS queue, which makes B one of the correct answers. Now, what is the other answer? The Lambda function is the consumer in this case, so the Lambda function must read from the SQS queue to process the data, and where is that particular option: modify the Lambda function to read from the Amazon Simple Queue Service queue. That's it, that's the correct answer.
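A minimal sketch of that wiring, SNS fanning out to an SQS queue that the Lambda function then polls; the topic ARN, queue name, and function name are placeholders, and the queue would also need an access policy that allows the SNS topic to send messages to it.

    import boto3

    sqs = boto3.client("sqs")
    sns = boto3.client("sns")
    lam = boto3.client("lambda")

    # Durable buffer between the notification topic and the Lambda consumer.
    queue_url = sqs.create_queue(QueueName="ingest-buffer")["QueueUrl"]
    queue_arn = sqs.get_queue_attributes(
        QueueUrl=queue_url, AttributeNames=["QueueArn"]
    )["Attributes"]["QueueArn"]

    # Subscribe the queue to the existing SNS topic.
    sns.subscribe(
        TopicArn="arn:aws:sns:us-east-1:111122223333:new-data-topic",  # placeholder
        Protocol="sqs",
        Endpoint=queue_arn,
    )

    # Have Lambda poll the queue instead of being invoked directly by SNS.
    lam.create_event_source_mapping(
        EventSourceArn=queue_arn,
        FunctionName="ingest-and-record-metadata",  # placeholder
        BatchSize=10,
    )
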
Question number 46. A company has an application that provides marketing services to stores. The services are based on previous purchases by store customers. The stores upload transaction data to the company through SFTP, and the data is processed and analyzed to generate new marketing offers. Some of the files can exceed 200 GB in size. Recently the company discovered that some of the stores have uploaded files that contain personally identifiable information (PII) that should not have been included. The company wants administrators to be alerted if PII is shared again, and the company also wants to automate remediation. What should a solutions architect do to meet these requirements? Now, when it comes to identifying personally identifiable information, there is a specific service on AWS, which is Amazon Macie. It is used for discovering PII and alerting on it, so just search for the option that contains Amazon Macie; that is your right answer, and here it is, in option B. You don't really have to read anything else, because if you know the service, you go directly for that option. And if you don't know about Amazon Macie, you can read the other options too: create an S3 bucket as a secure transfer point and use Amazon Inspector, but Amazon Inspector is not used for personally identifiable information at all, it is used for scanning for vulnerabilities, so A is out of the question. Implement custom scanning algorithms in an AWS Lambda function, trigger the function when objects are loaded into the bucket, and if objects contain PII use Simple Notification Service to trigger a notification to administrators to remove the object: the thing is, they want to automate remediation, and there is no automation in C, so it is ignored. The other custom-scanning option with Simple Email Service is also not going to work. So if you eliminate all three of those, we are left with only option B, so you can go with that even if you don't know about Amazon Macie.

Question number 47, a pretty straightforward question, and this is one of those relaxing questions. A company needs guaranteed Amazon EC2 capacity in three specific Availability Zones in a specific AWS Region for an upcoming event that will last one week. It says guaranteed Amazon EC2 capacity, and what is the purchasing option that guarantees capacity in a specific Availability Zone in a specific Region? That is an On-Demand Capacity Reservation, so we're just going to go with D.

Question number 48. A company's website uses Amazon EC2 instance store for its catalog of items. Instance store means that the storage is local to the instance, right? Certain points should strike in your mind right away; they should actually be part of your answer. The company wants to make sure that the catalog is highly available and that the catalog is stored in a durable location. Highly available and durable: what do you think is the answer for that?

Right, the company wants to make sure the catalog is highly available and that the catalog is stored in a durable location. Highly available and durable: what is the option for that? Now, ElastiCache is out of the question, we do not need caching, and a larger EC2 instance with a bigger instance store doesn't make any sense either. Move the catalog from the instance store to Amazon S3 Glacier Deep Archive: the problem there is the availability of the catalog for immediate access, so C is out of the question. Move the catalog to Amazon Elastic File System: now this is not a great solution, but compared to the other options it is the best one in this case. So this is one of those questions where you think, okay, none of the options are ideal, but you still need to choose the option that best resonates with the question, even if it is not the answer you would love.

Okay, let us go to question number 49. A company stores call transcript files on a monthly basis. Users access the files randomly within one year of the call, but access the files infrequently after one year. Clearly there is something to do with lifecycle policies here. The company wants to optimize its solution by giving users the ability to query and retrieve files that are less than one year old as quickly as possible; a delay in retrieving older files is acceptable. Very easy, right? The thing is, users access the files randomly, see this word, randomly, within one year of the call, but access them infrequently after one year. Randomly means the access can be frequent or infrequent, you don't know, and that goes on for one year. What is the storage class that handles this kind of frequent/infrequent access pattern, where you can access the data whenever you want or not at all, up to you? That is S3 Intelligent-Tiering. So that should be in your chosen option, and then, since a delay in retrieving files older than one year is acceptable, after one year the data can move to an archive tier.

The retrieval time can be longer at that point, so you can just archive it to Glacier or Glacier Deep Archive or something like that. So I think B is the correct answer in this case, because it says: store individual files in Amazon S3 Intelligent-Tiering, use S3 Lifecycle policies to move the files to S3 Glacier Flexible Retrieval after one year, query and retrieve the files that are in S3 by using Amazon Athena, and query and retrieve the files that are in S3 Glacier by using S3 Glacier Select. Even if you don't know about Amazon Athena and S3 Glacier Select, two things should strike in your mind: one is S3 Intelligent-Tiering, and the other is moving the files to Flexible Retrieval after one year; those two should point you to B. The other options are out of the question. Store individual files with tags in Amazon S3 Standard storage could have been the answer, but the question says access is random within one year, and that is why you ignore it: S3 Standard gives you the lowest-latency access, but if the access turns infrequent you are still paying the Standard storage price, whereas Intelligent-Tiering will move the object to a cheaper access tier in that case. And the option that stores individual files in Amazon S3 Standard storage with lifecycle policies, since it is still Standard storage for that first year, I'm also going to ignore in this case.
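As a small, hypothetical sketch of the "query the recent files with Athena" part of option B (the database, table, column names, and result bucket are made up, and it assumes the transcripts are already catalogued as an Athena table):

    import boto3

    athena = boto3.client("athena")

    # Ask Athena to scan only transcripts from the last year (all names are placeholders).
    execution = athena.start_query_execution(
        QueryString=(
            "SELECT call_id, transcript_text "
            "FROM call_transcripts "
            "WHERE call_date >= date_add('year', -1, current_date)"
        ),
        QueryExecutionContext={"Database": "calls_db"},
        ResultConfiguration={"OutputLocation": "s3://athena-query-results-example/"},
    )
    print(execution["QueryExecutionId"])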

Question number 50. A company has a production workload that runs on a thousand Amazon EC2 Linux instances. The workload is powered by third-party software. The company needs to patch the third-party software on all EC2 instances as quickly as possible to remediate a critical security vulnerability. Now, this can be a confusing question. You will see that it's a very short and seemingly easy question, but why is it confusing? Because there is a particular service known as Patch Manager, so you would think that B, which uses Patch Manager, is the correct answer, since that is the service used for patching, right? But Patch Manager is geared towards OS-level patching and patch baselines, for patching the operating system and the like, whereas in this case it is a production workload that runs on a thousand Amazon EC2 Linux instances, it is powered by third-party software, and the company needs to patch that third-party software. So what do you think is better: going through a whole OS patching cycle, or just running a simple command on the terminal of every instance that updates the third-party software? The simple command is better: run it once and it updates the software on all of your EC2 instances. So in that case I will not consider B as the answer, because it says Patch Manager, and let us consider the other options: the Lambda function option is not going to be considered, and the Systems Manager maintenance window option is also not good. This one is actually the correct answer, because it uses AWS Systems Manager Run Command. What it does is, with Systems Manager Run Command you can run the same command on multiple EC2 instances at once.

So run the command with AWS Systems Manager Run Command and it will automatically be applied across all of those EC2 instances. Right. Okay.
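A minimal sketch of what that Run Command call could look like; the tag filter and the actual update command are placeholders, since the real command depends on the third-party software being patched.

    import boto3

    ssm = boto3.client("ssm")

    # Run one shell command across every instance matching the tag, in parallel batches.
    response = ssm.send_command(
        Targets=[{"Key": "tag:Workload", "Values": ["production"]}],  # placeholder tag
        DocumentName="AWS-RunShellScript",
        Parameters={"commands": ["sudo /opt/thirdparty/bin/update --apply-security-patch"]},
        MaxConcurrency="10%",
        MaxErrors="1%",
    )
    print(response["Command"]["CommandId"])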