SAP-C02 Amazon AWS Certified Solutions Architect Professional – Exam Preparation Guide Part 3

3. Exam Preparation Part 02 – Domain 2

Hey everyone, and welcome back. In today’s video, we will be continuing our journey through some of the important pointers for the exam for domain two. As we discussed in the earlier video, be aware of the distinction between the use cases where DynamoDB proves useful and those where RDS proves useful. Do remember that in the exam, if a use case calls for a relational database or anything related to ACID transactions, then straight away the answer should be RDS. But if the data is unstructured, then DynamoDB is the right answer. So make sure you remember keywords like relational database, ACID, and unstructured data. Now, as far as DynamoDB is concerned, we already know that it is a NoSQL database.

Now, caching becomes one of the important functionalities, even for DynamoDB, and there are multiple caching options available for it. One is through ElastiCache and the second is through DynamoDB Accelerator, which is also referred to as DAX. In terms of ElastiCache, if you have an application running on an EC2 instance, the hot data can be stored within ElastiCache so that it can be retrieved much faster, while the longer-term persistent data stays within DynamoDB. Storing your hot data within ElastiCache not only improves performance, because it is an in-memory caching solution, but can also save you cost, because remember, DynamoDB charges based on RCUs and WCUs for overall throughput. So, the more data is retrieved from your caching solution, the better in terms of both performance and cost. Now, speaking about the newer solution, DAX.
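To make this cache-aside pattern concrete, here is a minimal sketch, assuming a Redis-based ElastiCache cluster; the endpoint, the "products" table, and the key names are all illustrative, not from the original:

```python
import json
import boto3
import redis

# Hypothetical endpoints/names, for illustration only.
cache = redis.Redis(host="my-cluster.abc123.cache.amazonaws.com", port=6379)
table = boto3.resource("dynamodb").Table("products")

def get_product(product_id: str) -> dict:
    # 1. Try the in-memory cache first (fast, and consumes no RCUs).
    cached = cache.get(f"product:{product_id}")
    if cached is not None:
        return json.loads(cached)

    # 2. Cache miss: read from DynamoDB (this consumes RCUs).
    item = table.get_item(Key={"product_id": product_id}).get("Item", {})

    # 3. Populate the cache with a TTL so hot keys are served from memory.
    cache.setex(f"product:{product_id}", 300, json.dumps(item, default=str))
    return item
```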

So, DAX is a feature that was introduced relatively recently. DAX is a fully managed, clustered, in-memory cache for DynamoDB that can deliver up to a 10x read performance improvement. It is generally suitable when you need response times in microseconds and when you have millions of requests per second. Do remember that DAX can act both as a read-through and as a write-through cache. When you look at the architecture of DAX, you have DynamoDB, you have your DAX cluster, and you have your DAX client, which can be part of the EC2 instance where your application is running. Whatever read request your application makes goes through the DAX cluster, and whatever write request it makes also goes through the DAX cluster. This is the reason why it is both a read-through and a write-through caching solution. Now, speaking about the read-through cache, let’s look at how exactly things work. In a read-through cache, the application first tries to read the data from DAX. If the cache is populated with the data, which is referred to as a cache hit, then the value is returned. If the cache does not have the relevant data, then it goes to step two.

Within step two, DAX forwards the request to the DynamoDB table. So in the first step, the application makes a read request to DAX. If DAX has the data, it immediately sends it back to the application. If it does not have the data, it sends the request to DynamoDB, fetches the response, stores that response within DAX, and also forwards it to the application. All of this intelligence is part of DAX, so as an application you don’t really have to do anything other than send the request to DAX. That’s about it. The same happens with the write-through cache.
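As a rough sketch of what this looks like in code, assuming the amazon-dax-client Python package and a hypothetical cluster endpoint: the DAX client mirrors the low-level DynamoDB client API, so the same calls cover both the read-through path above and the write-through path described next.

```python
from amazon_dax_client import AmazonDaxClient

# Hypothetical DAX cluster endpoint; 8111 is the default DAX port.
dax = AmazonDaxClient(
    endpoints=["my-dax.abc123.dax-clusters.us-east-1.amazonaws.com:8111"]
)

# Reads go to DAX first; on a cache miss, DAX fetches from DynamoDB,
# caches the result, and returns it (read-through).
resp = dax.get_item(
    TableName="products",  # illustrative table name
    Key={"product_id": {"S": "p-1001"}},
)
print(resp.get("Item"))

# Writes also go through DAX, which writes to DynamoDB and then
# populates its own cache with the new value (write-through).
dax.put_item(
    TableName="products",
    Item={"product_id": {"S": "p-1001"}, "price": {"N": "19"}},
)
```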

So for a given key-value pair, the application writes to the DAX endpoint. Do remember that if you’re using DAX, then you send requests to the DAX endpoint only. The application writes to the DAX endpoint, DAX intercepts the write, and then writes that specific key-value pair to DynamoDB. Upon a successful write, DAX populates its cache with the new value so that any subsequent read for that same key-value pair is a cache hit. Upon acknowledgement of the successful write, DAX sends a success acknowledgement back to the application. Now, there are certain cost considerations that you must remember. Basically, the more WCUs and RCUs, the more expensive DynamoDB becomes. So for organizations that put DynamoDB in on-demand mode and have a huge amount of spiky workloads, there will be a great cost at the end of the month.

So cost becomes an important consideration there. We already discussed in the first point that DynamoDB throughput can lead to a huge overall cost. Now, we can make use of SQS to deal with burst workloads while keeping RCUs and WCUs limited. Let’s say you have 5 RCUs and 5 WCUs for a DynamoDB table, and suddenly you get a burst workload. What happens? Many of the requests will fail, because you don’t have enough RCUs and WCUs provisioned. In such cases you can go with auto scaling, or you can go with on-demand mode, but then the cost will increase. If you do not want the cost to increase, that is, you want the RCUs and WCUs to remain static without requests failing, then the approach you can take is SQS.

So what happens is this: your application writes the data to an SQS writer app, the SQS writer app stores the data in your SQS queue, and from the SQS queue the data is written to the DynamoDB table at a controlled rate (a minimal sketch of this pattern follows below). This is one of the ways in which you can manage the WCUs; for the RCUs, a caching solution works well. For WCUs, this type of architecture is good and can save you a lot of cost. But do remember that this is an asynchronous operation and you need to be fine with the extra latency involved, because there are two additional steps in the path. Make sure you remember this architecture; it is pretty important. You should also be aware of DynamoDB auto scaling, which allows us to dynamically scale the overall throughput of your DynamoDB table up and down. You have the user and the DynamoDB table, and the consumed WCUs and RCUs are tracked through CloudWatch. If the incoming request rate is much higher, auto scaling can go ahead and update the table so that it can handle the larger volume of requests. Along with that, an SNS topic can be present to send notifications, so that you can be notified over email or another appropriate endpoint.
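Here is the minimal sketch of the queue-based write buffer promised above, assuming a hypothetical queue URL and table name: the producer enqueues writes and returns immediately, and a separate writer process drains the queue at a pace the table’s provisioned WCUs can absorb.

```python
import json
import boto3

sqs = boto3.client("sqs")
table = boto3.resource("dynamodb").Table("orders")  # illustrative name
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/order-writes"  # hypothetical

def enqueue_write(item: dict) -> None:
    # Producer side: the application returns as soon as the message is queued.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(item))

def writer_loop() -> None:
    # Writer-app side: drain the queue slowly enough to stay within WCUs.
    while True:
        resp = sqs.receive_message(
            QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            table.put_item(Item=json.loads(msg["Body"]))
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```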
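DynamoDB auto scaling itself is configured through Application Auto Scaling. As a hedged sketch (the table name and capacity limits are illustrative), you register the table as a scalable target and attach a target-tracking policy driven by the CloudWatch utilization metric:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Register the table's write capacity as a scalable target.
autoscaling.register_scalable_target(
    ServiceNamespace="dynamodb",
    ResourceId="table/orders",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    MinCapacity=5,
    MaxCapacity=100,
)

# Track ~70% consumed-to-provisioned write utilization via CloudWatch.
autoscaling.put_scaling_policy(
    PolicyName="orders-wcu-target-tracking",
    ServiceNamespace="dynamodb",
    ResourceId="table/orders",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 70.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)
```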

Along with that, you should be aware of DynamoDB Global Tables. A global table is a collection of one or more replica tables, all owned by a single AWS account. Say you have a global app with users across the world. In order to reduce latency and improve performance, you can make use of a DynamoDB global table: there will be multiple tables across regions, all of them replicas, and requests can be made to and served from the nearest endpoint. This improves overall performance and can also help with disaster recovery.
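With the current (2019.11.21) global tables version, adding a replica region is a single API call; a hedged sketch with illustrative table and region names (the table also needs DynamoDB Streams enabled):

```python
import boto3

dynamodb = boto3.client("dynamodb", region_name="us-east-1")

# Add a replica in eu-west-1 to an existing table (names illustrative).
# DynamoDB handles the cross-region replication from here.
dynamodb.update_table(
    TableName="orders",
    ReplicaUpdates=[{"Create": {"RegionName": "eu-west-1"}}],
)
```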

Now, coming to some of the services which help during deployment: at a high-level overview, you should be aware of what CodeCommit, CodeBuild, CodeDeploy, and CodePipeline are. The Solutions Architect Professional exam will not go into too much detail here; if you want to go deep, DevOps Professional is the right certification, where they will test your knowledge of configuring and troubleshooting these services. For this exam, a high-level overview is good enough. CodeCommit is a fully managed Git-based repository, so you can store your code there, with version control and all Git-related features. Then you have CodeBuild, which helps in building the code as well as testing it, with continuous scaling. Many organizations make use of Jenkins, but with Jenkins they have to design an architecture that can scale on their own.

If you use CodeBuild, it is designed to scale continuously, so you can just focus on building and testing your code without having to maintain the infrastructure. CodeDeploy is a deployment service that automates application deployments to EC2 or on-premises instances, serverless Lambda functions, or even ECS services. So once your code is built, you can deploy it with the help of CodeDeploy. And the last one is CodePipeline, a continuous integration and release automation service for applications you want to release in the cloud.

CodePipeline is where you can have multiple stages and a proper CI/CD flow. In terms of deployment services, there is a multitude available, and it helps to group them by stage: for runtime/container, you have EC2, ECS, Lambda, and Elastic Beanstalk; for application deployment, you have CodeDeploy, OpsWorks, and Elastic Beanstalk; for code deployment and management, you have CodeCommit, CodePipeline, and Elastic Beanstalk; and for infrastructure deployment, you have OpsWorks, CloudFormation, and Elastic Beanstalk. So Elastic Beanstalk is the one service which works across all the stages.

This is one important part to remember; again, for that diagram, all credit goes to AWS. In terms of deployment methods, specifically when you talk about Elastic Beanstalk, you have various options like all at once, rolling, and rolling with an additional batch. All of these are tested extensively in the DevOps Professional exam, but the two primary ones you need to remember here are the immutable deployment and the blue/green deployment; these are the two to focus on specifically for Elastic Beanstalk. First is the impact of a failed deployment: for both of them it is minimal. For zero downtime, both immutable and blue/green support it. For no DNS change, immutable is yes and blue/green is no. So just go through the videos we have on immutable and blue/green before you sit for the exam, because this can prove to be important in the deployment-related questions.
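If it helps to anchor this, the deployment policy for an existing Beanstalk environment is just an option setting; a hedged sketch with an illustrative environment name:

```python
import boto3

eb = boto3.client("elasticbeanstalk")

# Switch an environment's deployment policy to Immutable (name illustrative).
# Valid values include AllAtOnce, Rolling, RollingWithAdditionalBatch,
# and Immutable.
eb.update_environment(
    EnvironmentName="my-app-prod",
    OptionSettings=[
        {
            "Namespace": "aws:elasticbeanstalk:command",
            "OptionName": "DeploymentPolicy",
            "Value": "Immutable",
        }
    ],
)
```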

The next important part is AWS OpsWorks, which manages infrastructure deployment for cloud administrators. Do remember that AWS OpsWorks is a global service, but we can only manage resources in the region where the OpsWorks stack is created. Also remember that when you see things like Chef or Puppet in an exam question, AWS OpsWorks would pretty much be the right answer. If you scroll back up a bit: for application deployment you have OpsWorks and Elastic Beanstalk, and for infrastructure deployment you also have OpsWorks and Elastic Beanstalk. So let’s say you have a question where OpsWorks, CloudFormation, and Elastic Beanstalk are the probable answers, but the question mentions Chef or Puppet; then OpsWorks is the straightforward answer there. Do remember that. The next important part is CloudFormation. Do remember that CloudFormation supports most of the AWS services; most is important here, because it does not support all of them, it supports most.

It also allows us to have custom resources, because generally whenever a new service is launched, you will not immediately see CloudFormation support for it; for such cases you can develop a custom resource. There is also the change sets feature in CloudFormation, which provides a summary of the changes that will happen to your stack when you update it; this also proves to be quite important. You also have CloudFormation StackSets, which allow you to deploy stacks across multiple AWS accounts as well as AWS regions from a single location.
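As a quick illustration of the change sets feature mentioned above (the stack name and template URL are hypothetical), you create a change set against a running stack, inspect the proposed changes, and only then execute it:

```python
import boto3

cfn = boto3.client("cloudformation")

# Propose an update against a running stack without applying it yet.
cfn.create_change_set(
    StackName="my-app-stack",  # hypothetical
    ChangeSetName="add-read-replica",
    TemplateURL="https://s3.amazonaws.com/my-bucket/updated-template.yaml",
)

# Review the summary of what would change (in practice, wait for the
# change set to reach CREATE_COMPLETE first).
changes = cfn.describe_change_set(
    StackName="my-app-stack",
    ChangeSetName="add-read-replica",
)
for change in changes["Changes"]:
    rc = change["ResourceChange"]
    print(rc["Action"], rc["LogicalResourceId"], rc["ResourceType"])

# If the summary looks right:
# cfn.execute_change_set(StackName="my-app-stack", ChangeSetName="add-read-replica")
```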

Apart from that, the DeletionPolicy attribute is important. There are two values to remember: Retain and Snapshot. When you have a DeletionPolicy of Retain, CloudFormation keeps the resource without deleting it, even when you delete the CloudFormation stack. With a DeletionPolicy of Snapshot, CloudFormation first creates a snapshot of the resource before deleting it. One of the services where the Snapshot deletion policy is generally used is RDS.
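A hedged sketch of what this looks like, with the template built inline in Python for consistency with the other examples; the resource names and properties are illustrative:

```python
import json
import boto3

# Hypothetical template: an RDS instance whose DeletionPolicy is Snapshot,
# so CloudFormation snapshots it before removing it on stack deletion.
template = {
    "Resources": {
        "MyDatabase": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Snapshot",  # or "Retain" to keep the resource
            "Properties": {
                "DBInstanceClass": "db.t3.micro",
                "Engine": "mysql",
                "AllocatedStorage": "20",
                "MasterUsername": "admin",
                "MasterUserPassword": "change-me",  # use Secrets Manager in practice
            },
        }
    }
}

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="deletion-policy-demo", TemplateBody=json.dumps(template))
```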

The next important topic you need to remember is Memcached versus Redis; this is important, because the exam might give you a use case and you will have to select whether Memcached or Redis is the right solution. So where is Memcached the right solution? When you want the simplest caching model possible. Do remember that within Memcached the data is not persistent, because it is purely an in-memory cache. You can also select Memcached when you need the ability to scale up as well as scale out with horizontal scaling. Memcached does not support backup and restore operations, but it does support multi-threaded operations. As for Redis, use it when you need data persistence, when you need to sort or rank in-memory data sets, or when you need advanced data structures. Redis also supports automatic failover, so you have Multi-AZ functionality, and it supports backup and restore. Redis can scale up, but it cannot scale out, and once Redis is scaled up it cannot scale back down again; it can, however, scale reads with the help of read replicas. You have to understand the difference between Memcached and Redis, because you will probably get multiple questions where you’ll have to select one of them as the right answer.
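The “sort or rank in-memory data sets” use case maps directly onto Redis sorted sets; a small illustrative sketch against a hypothetical Redis endpoint:

```python
import redis

# Hypothetical ElastiCache for Redis endpoint.
r = redis.Redis(host="my-redis.abc123.cache.amazonaws.com", port=6379)

# Sorted sets keep members ordered by score, so ranking is built in.
r.zadd("leaderboard", {"alice": 4200, "bob": 3100, "carol": 5000})
r.zincrby("leaderboard", 150, "bob")  # bump bob's score

# Top three players, highest score first, with their scores.
for player, score in r.zrevrange("leaderboard", 0, 2, withscores=True):
    print(player.decode(), int(score))
```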

So you’ll have to read the question carefully. The next thing you need to understand is the role of the virtual private gateway and the customer gateway, and this is an architecture diagram you should be well versed with. You have a virtual private gateway (VGW) on the AWS side and a customer gateway (CGW) on the customer side, and both of them are linked by the VPN connection. The customer gateway is the customer side of the Amazon VPC VPN connection: it represents the device in your data center and initiates the tunnels back to the virtual private gateway. So the VGW is on the Amazon side and the CGW is on the customer side, and when you create a VPN connection, you can use a VGW directly from Amazon, while you will have to create the customer gateway yourself.

Now, do remember that your customer gateway needs to be highly available. Whenever you create a VPN connection to a VGW, AWS automatically creates two endpoints for high availability; these two endpoints are in different availability zones, and automated failover is taken care of by AWS. But you need to make sure that on the other side, the customer gateway is also highly available, because if the customer gateway goes down, the entire VPN connection will break. This is one important part to remember. Generally, whenever you are setting up a VPN like AWS VPN, you have to make sure you configure both VPN tunnels, so that if one tunnel goes down, the second tunnel can be used.
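As a rough boto3 sketch of wiring these pieces together (the public IP, ASN, and VPC ID are illustrative):

```python
import boto3

ec2 = boto3.client("ec2")

# Customer gateway: represents the on-premises device (values illustrative).
cgw = ec2.create_customer_gateway(
    Type="ipsec.1",
    PublicIp="203.0.113.12",  # your data center's public IP
    BgpAsn=65000,
)

# Virtual private gateway: the AWS side, attached to your VPC.
vgw = ec2.create_vpn_gateway(Type="ipsec.1")
ec2.attach_vpn_gateway(
    VpcId="vpc-0abc1234",
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
)

# The VPN connection between them comes with two tunnels for HA.
vpn = ec2.create_vpn_connection(
    Type="ipsec.1",
    CustomerGatewayId=cgw["CustomerGateway"]["CustomerGatewayId"],
    VpnGatewayId=vgw["VpnGateway"]["VpnGatewayId"],
)
print(vpn["VpnConnection"]["VpnConnectionId"])
```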

The next important service you should remember is the Simple Notification Service. SNS is a fully managed pub/sub messaging service that enables you to decouple microservices; this is important, and a lot of organizations make use of SNS for decoupling. Along with that, SNS also integrates with AWS services like CloudWatch for alarm notifications. When you talk about decoupling, you have a message publisher that just has to send a message to an SNS topic, and that SNS topic can have SQS queues, Lambda functions, HTTP endpoints, and others subscribed to it; whatever endpoints are subscribed to the topic, the data will automatically go there. This is the reason why it helps in decoupling the application. The great advantage here is that if your application has to send a message to SQS, to Lambda, and to an HTTP endpoint, you don’t have to write code that can send to all three of those endpoints; you only have to write code that sends your message from the application to SNS, and that’s about it.
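A minimal fan-out sketch (the topic name, queue and function ARNs, and endpoint URL are all hypothetical):

```python
import boto3

sns = boto3.client("sns")

# One topic; every subscriber below gets a copy of each message.
topic_arn = sns.create_topic(Name="order-events")["TopicArn"]

# Fan out to an SQS queue, a Lambda function, and an HTTPS endpoint
# (the SQS queue and Lambda function also need matching permissions).
sns.subscribe(TopicArn=topic_arn, Protocol="sqs",
              Endpoint="arn:aws:sqs:us-east-1:123456789012:order-queue")
sns.subscribe(TopicArn=topic_arn, Protocol="lambda",
              Endpoint="arn:aws:lambda:us-east-1:123456789012:function:process-order")
sns.subscribe(TopicArn=topic_arn, Protocol="https",
              Endpoint="https://example.com/webhooks/orders")

# The publisher only ever talks to SNS.
sns.publish(TopicArn=topic_arn, Message='{"order_id": "o-1001"}')
```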

SNS will automatically deliver the message depending on the subscriptions you configure, so it really helps with decoupling and also helps you develop and deploy code quickly. Now, the last point for today’s video is AWS Config. AWS Config is a service that enables you to assess, audit, and evaluate the configurations of your AWS resources. It ships with a lot of managed rules, and there are two very important ones that you need to remember before you sit for the exam: one is approved-amis-by-id and the second is s3-bucket-public-read-prohibited. The approved-amis-by-id rule, as the documentation says, checks whether running instances are using specified AMIs.

You can basically specify a list of AMIs, and the rule will check whether the running instances are using those AMIs; if they are, it marks them as compliant, and if not, as non-compliant. The second one, s3-bucket-public-read-prohibited, checks whether your buckets allow public read access; if a bucket allows public access, it will show you those specific details. Do remember these two, as they are quite important use cases: whenever an exam question says an organization wants to see the list of EC2 instances which are not running an approved AMI, or wants a solution which will automatically show the list of S3 buckets which are publicly readable, AWS Config with these managed rules is the answer.
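A hedged sketch of enabling both managed rules with boto3 (the AMI ID is illustrative; the source identifiers are the AWS managed-rule names):

```python
import json
import boto3

config = boto3.client("config")

# Flag any running instance that is not using an approved AMI.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "approved-amis-by-id",
        "Source": {"Owner": "AWS", "SourceIdentifier": "APPROVED_AMIS_BY_ID"},
        "InputParameters": json.dumps({"amiIds": "ami-0123456789abcdef0"}),
        "Scope": {"ComplianceResourceTypes": ["AWS::EC2::Instance"]},
    }
)

# Flag any S3 bucket that allows public read access.
config.put_config_rule(
    ConfigRule={
        "ConfigRuleName": "s3-bucket-public-read-prohibited",
        "Source": {"Owner": "AWS", "SourceIdentifier": "S3_BUCKET_PUBLIC_READ_PROHIBITED"},
        "Scope": {"ComplianceResourceTypes": ["AWS::S3::Bucket"]},
    }
)
```

Once the rules are in place, the Config console (or a call like describe_compliance_by_config_rule) lists exactly which resources are compliant and which are not.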
