SAP-C02 Amazon AWS Certified Solutions Architect Professional – Exam Preparation Guide Part 2


2. Exam Preparation Part 01 – Domain 2

Hey everyone and welcome back. In today’s video on important pointers for the exam, our focus will be on Domain 2. Now Domain 2, Design for New Solutions, is one of the largest domains of this certification, both in terms of the number of topics as well as its percentage of the examination. If you look at Domain 2, it seems to cover each and every service, you might say, because 2.1 covers the security requirements, 2.2 the reliability requirements, 2.3 business continuity, 2.4 performance and 2.5 deployment. And this is the reason why this domain is quite big. Now, in terms of the number of topics which are a part of this domain, there are a lot of them.

So you have Auto Scaling, you have CloudFormation, ElastiCache, ELB, SQS, Kinesis and so many others. So let’s go ahead and start with the important pointers for this domain. Now the first important pointer is that you should know about Auto Scaling. You should know the basics of how to set up an Auto Scaling environment, and there are two important concepts to be aware of: one is the launch configuration and the second is the Auto Scaling group. The launch configuration is where we specify a specific AMI ID, the key pair as well as the security groups, and in the Auto Scaling group we specify the minimum, maximum as well as the desired capacity for a specific ASG. So within this architecture diagram, you see that you have an ELB, and behind the ELB you have an Auto Scaling group, and depending upon the Auto Scaling group size, instances will be launched. So at a high-level overview you should know how you can set up this architecture of Auto Scaling associated with an ELB (see the sketch after this paragraph).
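Just to make the split between the two concepts concrete, here is a minimal boto3 sketch (not from the course; every ID, name and ARN below is a hypothetical placeholder). The AMI, key pair and security group go into the launch configuration, while min/max/desired capacity and the load balancer target group go into the Auto Scaling group.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Launch configuration: AMI ID, key pair and security groups live here.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="web-lc",
    ImageId="ami-0123456789abcdef0",          # placeholder AMI ID
    KeyName="my-keypair",                      # placeholder key pair
    SecurityGroups=["sg-0123456789abcdef0"],   # placeholder security group
    InstanceType="t3.micro",
)

# Auto Scaling group: min / max / desired capacity live here, and the group
# is registered behind the load balancer via a target group ARN.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchConfigurationName="web-lc",
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaa111,subnet-bbb222",  # placeholder subnets
    TargetGroupARNs=[
        "arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/web/abc123"  # placeholder
    ],
)
```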

Now the next important part of Auto Scaling that you should know is lifecycle hooks. Auto Scaling lifecycle hooks basically allow us to perform a certain custom action when an instance is launched or terminated. This is a very important and quite useful feature, so you should be able to understand this specific diagram; we have already discussed it. All right, so basically there are two places for lifecycle hooks: one is during the instance launch phase and the second is during the instance termination phase (a short sketch follows after this paragraph). Now the third important pointer that you should know about is Kinesis. There are multiple Kinesis services available in AWS, and you should know at a high-level overview about each one of them and also the use cases associated with them. Kinesis Data Streams basically captures, processes and stores data streams in real time, and it is specifically useful for use cases where sub-one-second processing latency is required.
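Before moving on to Firehose, here is a rough sketch of the lifecycle hook idea just mentioned, reusing the hypothetical ASG from the previous sketch; the hook name, timeout and instance ID are made-up placeholders.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Pause instances in the Terminating:Wait state for up to 5 minutes so a
# custom action (for example, copying logs off the instance) can run first.
autoscaling.put_lifecycle_hook(
    LifecycleHookName="drain-before-terminate",
    AutoScalingGroupName="web-asg",
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    HeartbeatTimeout=300,
    DefaultResult="CONTINUE",
)

# Once the custom action has finished, signal the ASG to proceed with
# terminating the instance.
autoscaling.complete_lifecycle_action(
    LifecycleHookName="drain-before-terminate",
    AutoScalingGroupName="web-asg",
    LifecycleActionResult="CONTINUE",
    InstanceId="i-0123456789abcdef0",  # placeholder instance ID
)
```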

Coming back to Kinesis, the second service is Kinesis Data Firehose. This basically allows us to capture and deliver data to a specific data store in near real time; here the primary aim is to move data from point A to point B. Do remember that for Kinesis Data Firehose there can be a processing latency of 60 seconds or higher. From an exam perspective you might find a question which is essentially Data Streams versus Firehose, so you should know at a high-level overview the differences between a data stream and a Firehose delivery stream (see the sketch after this paragraph). Now the third important pointer here is Kinesis Data Analytics. This basically allows us to analyze streaming data in real time with SQL and Java code. All right? For Firehose, also remember that it allows basic transformation-related functionality: you can compress the data, you can convert the format, and a few more.
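To see the producer-side difference between the two, here is a small hedged sketch; the stream and delivery stream names are placeholders and are assumed to already exist.

```python
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")
firehose = boto3.client("firehose", region_name="us-east-1")

event = {"user": "u-42", "action": "click"}

# Kinesis Data Streams: you choose a partition key, records land in shards,
# and your own consumers read them with sub-second latency.
kinesis.put_record(
    StreamName="clickstream",                 # placeholder stream name
    Data=json.dumps(event).encode("utf-8"),
    PartitionKey=event["user"],
)

# Kinesis Data Firehose: no partition key or shards to manage; the service
# buffers and delivers to a destination (S3, Redshift, etc.), which is why
# latency is typically 60 seconds or more.
firehose.put_record(
    DeliveryStreamName="clickstream-to-s3",   # placeholder delivery stream
    Record={"Data": json.dumps(event).encode("utf-8")},
)
```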

All right? So that functionality is not really available in a data stream. Then the last one is Kinesis Video Streams. It basically allows us to capture, process and store video streams, and video streams can also be integrated with machine learning or AI services if you want to have some kind of analytics there. Now the next important part here is load balancing. You should have an overview of each of the load balancer types which are available and also the use cases where they can be used. You should also be familiar with the term ELB sandwich; you might see this specific term within the exams.

So the ELB sandwich is a specific type of architecture where there are two elastic load balancers: one is generally in the public subnet and the second is in the private subnet. We already have a video on that, so go ahead and look into it if this is something you want to revise. Now, in terms of load balancer types, there are three: you have Classic, you have Application and the third is Network. The Classic Load Balancer is generally recommended for development and testing purposes. The Application Load Balancer is generally recommended for layer 7 traffic, so for whatever application-layer traffic you might have, you can make use of the Application Load Balancer. And third is the Network Load Balancer, which is very fast in terms of performance, and we can also associate a static IP address with the NLB. Do remember that among all of these, the ALB is the load balancer where you can also attach a web application firewall (a small sketch of creating these load balancer types follows after this paragraph). Now there are certain architecture considerations that you should be aware of. This is a similar architecture, something similar to an ELB sandwich, where you have an ELB. Let’s say that this is the first ELB; you have an auto scaling group, then you have a second ELB, and then another auto scaling group.
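Here is the small sketch of creating those load balancer types, purely as an illustration; subnet, security group and Elastic IP allocation IDs are placeholders.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Application Load Balancer: layer-7 (HTTP/HTTPS) traffic; WAF can be attached to it.
alb = elbv2.create_load_balancer(
    Name="public-alb",
    Type="application",
    Scheme="internet-facing",
    Subnets=["subnet-aaa111", "subnet-bbb222"],  # placeholder public subnets
    SecurityGroups=["sg-0123456789abcdef0"],     # placeholder security group
)

# Network Load Balancer: very high performance, and a static IP can be
# associated by mapping an Elastic IP allocation to each subnet.
nlb = elbv2.create_load_balancer(
    Name="public-nlb",
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        {"SubnetId": "subnet-aaa111", "AllocationId": "eipalloc-0123456789abcdef0"},
    ],
)
```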

Coming back to the architecture: you can now scale the auto scaling groups independently, and each of these auto scaling groups is associated with a single ELB. This basically allows separation of tiers. So you can say that this first ELB can be in the DMZ, which is the public subnet, and the first auto scaling group can be NGINX-based web servers. Then you have one more ELB here; this ELB can be in the private subnet, and then you have the application-based EC2 instances. So all the EC2 instances in the web tier can send traffic to the internal load balancer, and from the internal load balancer it can forward the traffic to the application EC2 instances. This allows separation of tiers, it makes sure that the internal ELB is not publicly accessible, and it also allows independent scaling. Now, along with that, instead of an ELB you can even put an SQS queue here.

So in case you have SQS, your tasks will be asynchronous and they will be unordered. Do remember that it is single direction only. Now if you talk about an ELB, the ELB can also have return traffic: let’s say the ELB sends traffic to an EC2 instance; the EC2 instance can forward the reply back through the same ELB. In the case of SQS that is not really possible, at least with this type of architecture. So you have a single direction only and you have at-least-once delivery, so you need to make sure that your application can sustain and overcome this specific shortcoming. Now, along with that, you can also have a Kinesis stream here. Similar to what we were discussing about SQS, here also the tasks are asynchronous, and you can have ordering at least within a shard. So if you have Kinesis, there can be multiple shards, and the order is maintained within a shard.

Again you will have a single direction only and you have at-least-once delivery semantics. Now, if you want to architect return traffic: we were discussing that a single SQS queue cannot handle return traffic. So in such cases you can have multiple SQS queues (see the sketch after this paragraph). Let’s say this first set of EC2 instances sends data to an SQS queue, which the EC2 instances in the lower tier can fetch. In order for them to reply back to the first tier of EC2 instances, let’s say with a success or failure, they can send a message back to a different SQS queue, which the first tier of EC2 instances can fetch. So if you want return communication or request-response based communication here, then you can use this multiple-SQS-queues type of architecture. Now, in terms of the scalability aspect with AWS S3: generally a lot of organizations tend to store static content, like MP3 files and images, content which remains static, within their web servers.
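Here is the request-response pattern with two SQS queues described above, as a minimal boto3 sketch; queue names and message bodies are just placeholders.

```python
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# Two queues: one carries requests from the web tier to the app tier,
# the other carries replies back. Queue names are placeholders.
request_q = sqs.create_queue(QueueName="orders-requests")["QueueUrl"]
reply_q = sqs.create_queue(QueueName="orders-replies")["QueueUrl"]

# Web tier: send a request (asynchronous, single direction, at-least-once).
sqs.send_message(QueueUrl=request_q, MessageBody='{"order_id": "1001"}')

# App tier: poll the request queue, process, then reply on the second queue.
msgs = sqs.receive_message(QueueUrl=request_q, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for m in msgs.get("Messages", []):
    # ... process the order here ...
    sqs.send_message(QueueUrl=reply_q, MessageBody='{"order_id": "1001", "status": "ok"}')
    sqs.delete_message(QueueUrl=request_q, ReceiptHandle=m["ReceiptHandle"])

# Web tier: later, poll the reply queue for the success/failure message.
replies = sqs.receive_message(QueueUrl=reply_q, MaxNumberOfMessages=1, WaitTimeSeconds=10)
```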

Coming back to static content: instead of keeping it on the web servers, you can store that static content in an S3 bucket and reference it through a CloudFront distribution. When you have an S3 bucket together with a CloudFront distribution, it can lead to better response times because the static content can be cached across multiple edge locations which are near to the users. You could also make use of random key names for the S3 objects, which used to act as an important performance factor. This was a pretty important factor earlier, so you might still see it within the exam; it is no longer the case now, but it used to be an important consideration.

Now, you can also restrict access to an S3 bucket via an origin access identity (OAI), so that a user cannot access the content directly from the S3 bucket; they have to go via the CloudFront distribution. Within the CloudFront distribution you can again have various security controls like WAF. So restricting the S3 bucket and referencing it through the CloudFront distribution can prove to be important both in terms of scalability as well as security.
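As a rough illustration of the bucket-side part of this restriction, the sketch below applies a bucket policy that only grants read access to a CloudFront origin access identity; the bucket name and OAI ID are placeholders, not real values.

```python
import json
import boto3

s3 = boto3.client("s3")

# Policy that only allows reads coming through the CloudFront OAI; since no
# other principal is granted access, direct requests to the bucket are denied.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "AllowCloudFrontOAIReadOnly",
        "Effect": "Allow",
        "Principal": {
            "AWS": "arn:aws:iam::cloudfront:user/CloudFront Origin Access Identity E2EXAMPLEOAI"  # placeholder OAI
        },
        "Action": "s3:GetObject",
        "Resource": "arn:aws:s3:::my-static-assets-bucket/*",   # placeholder bucket
    }],
}

s3.put_bucket_policy(Bucket="my-static-assets-bucket", Policy=json.dumps(policy))
```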

Now, speaking about the scalability aspect, you should also be aware of RDS. There are two important aspects here: one is Multi-AZ and the second is the read replica. Multi-AZ is useful for the high availability aspect, so when you want high availability, Multi-AZ is quite useful. In Multi-AZ there are two instances; the second instance is referred to as the standby instance and it cannot be accessed directly. Automated failover is taken care of by AWS in Multi-AZ, and whenever an automated failover happens, the DNS name does not change, so it remains static. And do remember that Multi-AZ is based on synchronous replication. As opposed to that, you have the read replica. The read replica is based on asynchronous replication, and it is useful for the scalability aspect. You can also have multiple read replicas for a single database (see the sketch after this paragraph).
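The sketch below shows the two options side by side, creating a Multi-AZ instance and then a read replica of it; identifiers, engine, instance class and credentials are all placeholder assumptions.

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Multi-AZ: synchronous replication to a standby for high availability.
# AWS handles failover and the DNS endpoint stays the same.
rds.create_db_instance(
    DBInstanceIdentifier="app-db",
    Engine="mysql",
    DBInstanceClass="db.t3.medium",
    AllocatedStorage=100,
    MasterUsername="admin",
    MasterUserPassword="change-me-please",   # placeholder credential
    MultiAZ=True,
)

# Read replica: asynchronous replication, used to scale read-heavy workloads.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-replica-1",
    SourceDBInstanceIdentifier="app-db",
    DBInstanceClass="db.t3.medium",
)
```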

So you can have one RDS instance and that instance can have multiple read replicas, and that can really help. Specifically, when you have an application which does a lot of read queries, you can direct those read queries to the read replicas, which can help you improve the overall performance. Now, AWS also has multiple data stores, so depending upon the use case, you should select one of them. Let’s say the use case involves a relational database, or there is a requirement for ACID transactions, or something related to OLTP; then RDS is the best choice. If the use case says that a NoSQL database is required, or the data is unstructured, or there are high I/O needs, then DynamoDB can be a good choice. If the use case specifies that the customer needs complete control of the database, then that can be achieved by hosting the database on an EC2 instance, because when you host the database in RDS you do not have complete control there; or for a use case where the database engine is not supported by RDS, again, EC2 might be the right choice.

Now, you also have S3, which is good for storing blobs of data, and Redshift: if the requirements suggest a data warehouse or OLAP, then Redshift can be a good choice there. Now, you should also be aware of the difference between a service control policy and an IAM policy. Do remember that IAM policies are generally attached to IAM users, groups and roles; they cannot be attached to the root account, and the root account by default has complete access, which cannot be restricted via any IAM policy. AWS Organizations basically allows us to create a service control policy to allow or deny access to particular AWS services for individual AWS accounts or for a group of accounts within an organizational unit. So let’s say you want that no one should be able to disable CloudTrail; that specific policy that you write in AWS Organizations is referred to as a service control policy.

Now, the SCP can be attached to an individual AWS account or it can even be attached to an OU, which can have multiple AWS accounts. One important part to remember about the SCP is that whatever action you specify within it, when you attach that SCP to an AWS account, it affects all the IAM users, all the groups, all the roles, including the root identity. So let’s say you create an SCP that denies access to CloudTrail; when you attach that SCP to AWS account one, no one within that AWS account will be able to play around with CloudTrail, not even an administrator user, not even the root account (a sketch of such a policy follows after this paragraph). This is an important part to remember. Now, there are certain miscellaneous points that you should remember here. First is VPN and multicast: do remember that AWS does not support multicast. If you need multicast-like functionality, then that can be made possible through a virtual private network.
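As a sketch of the CloudTrail example above, the following creates and attaches such an SCP with boto3; the policy name, target account ID and the exact set of denied actions are assumptions, not the course’s wording.

```python
import json
import boto3

org = boto3.client("organizations")

# SCP that denies the actions used to disable CloudTrail. Once attached to an
# account or OU, it applies to every IAM user, group and role in that account,
# including the root user.
scp = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Deny",
        "Action": ["cloudtrail:StopLogging", "cloudtrail:DeleteTrail"],
        "Resource": "*",
    }],
}

policy = org.create_policy(
    Name="deny-cloudtrail-tampering",
    Description="Prevent anyone from disabling CloudTrail",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attach to a member account (or to an OU ID such as ou-xxxx-yyyyyyyy).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="111122223333",   # placeholder account ID
)
```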

Now, there’s also the concept of a bastion host. You might see certain questions which involve a bastion host. Again, this is not specific to AWS, but do remember that bastion hosts are generally placed within the public subnet and they generally act as a proxy to connect to internal hosts. So you have private instances here, and you can make use of the bastion host. This is a host which is quite hardened, and it acts as a proxy to connect to the instances in the private subnet. So a user from your office environment, through the internet gateway, can first log into the bastion host, and from the bastion host he can connect to the internal network. So this is one of the architectures, and it is a good architecture, but when you compare it with a VPN, the VPN does provide much more granular functionality. Now, the second important point here is the integration of Lambda and S3. Lambda can be associated with an IAM role.

So do remember that. For example, if you have certain important data in DynamoDB or maybe in S3 which you want to fetch, then you can associate the Lambda function with an IAM role that grants that access. All right? You can even add triggers based on event types to the Lambda function. Now, the next important part here is AWS Route 53. You should be aware of both public as well as private hosted zones; this is important to remember. Private hosted zones can be associated with multiple VPCs for internal DNS resolution. Whenever you create a private hosted zone, that private hosted zone cannot be resolved through the internet; you associate the private hosted zone with a VPC, and whatever EC2 instances are within the VPC can query the DNS records associated with that private hosted zone (see the sketch after this paragraph). Now, at a high-level overview, you should also be aware of Route 53 health checks as well as the Route 53 routing policies, which include failover, weighted, latency, geolocation and multivalue.
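Here is the private hosted zone sketch referenced above: create the zone associated with one VPC, then associate a second VPC so its instances can also resolve the internal names. The domain name and VPC IDs are placeholders.

```python
import time
import boto3

route53 = boto3.client("route53")

# Private hosted zone: resolvable only from the VPCs it is associated with,
# never from the public internet.
zone = route53.create_hosted_zone(
    Name="internal.example.com",
    CallerReference=str(time.time()),
    VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-0123456789abcdef0"},   # placeholder VPC
    HostedZoneConfig={"Comment": "internal DNS", "PrivateZone": True},
)

# Additional VPCs can be associated later so their EC2 instances can also
# resolve records in this zone.
route53.associate_vpc_with_hosted_zone(
    HostedZoneId=zone["HostedZone"]["Id"],
    VPC={"VPCRegion": "us-east-1", "VPCId": "vpc-abcdef0123456789a"},   # placeholder VPC
)
```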

Now, the next important part is CloudFront. For CloudFront, you should know how to create a CloudFront distribution and also know the concept of edge locations. The origin access identity proves to be a very important point within CloudFront, and the OAI is something that you will see in multiple certifications, ranging from associate to professional to the security specialty, even in the advanced networking specialty, so this is one topic that you should be well versed with. Now, CloudFront can integrate with AWS Certificate Manager for certificates. CloudFront also supports Server Name Indication, which is SNI, and it also supports a dedicated IP per edge location for older browsers and clients which do not support SNI. The last two pointers are very important, so make sure you understand both of them. We already have a video, so you can go ahead and revise in case this is something you have a doubt about.
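As a loose illustration of the SNI versus dedicated IP point, here are two viewer-certificate fragments of the kind that sit inside a CloudFront DistributionConfig; the ACM certificate ARN is a placeholder and this is only a partial config, not a complete create_distribution call.

```python
# Fragment of a CloudFront DistributionConfig; this dict would be merged into
# a full distribution config before calling cloudfront.create_distribution().
viewer_certificate_sni = {
    "ACMCertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/abcd-1234",  # placeholder
    "SSLSupportMethod": "sni-only",        # modern clients that support SNI
    "MinimumProtocolVersion": "TLSv1.2_2021",
}

viewer_certificate_dedicated_ip = {
    "ACMCertificateArn": "arn:aws:acm:us-east-1:111122223333:certificate/abcd-1234",  # placeholder
    "SSLSupportMethod": "vip",             # dedicated IP per edge location, for
                                           # older clients without SNI support
    "MinimumProtocolVersion": "TLSv1.2_2021",
}
```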

Now, the next important pointer is EBS and the instance store volume. EBS volumes are generally used for persistent storage. The size and volume type can now be changed while the EBS volume is attached to an EC2 instance; this was something which was not possible earlier, but now it is (see the sketch after this paragraph). In case you need a specific performance increase, you can make use of an EBS-plus-RAID architecture to increase the overall performance. For the instance store volume, do remember that the size cannot be increased and it is not portable: you cannot detach an instance store volume and attach it to a different EC2 instance. Since instance store volumes are basically the disks attached to the physical host where the EC2 instance is running, you cannot really have portability there, nor can you stop and start an instance which is running on instance store, because whenever you stop and start an instance it can migrate to a different physical host, and instance store cannot support that.
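Here is the sketch referenced above for resizing an attached volume in place with boto3; the volume ID, new size, type and IOPS are placeholder assumptions.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Grow an attached EBS volume and change its type in place; no detach needed.
ec2.modify_volume(
    VolumeId="vol-0123456789abcdef0",   # placeholder volume ID
    Size=500,                           # new size in GiB
    VolumeType="io1",
    Iops=10000,
)

# Track the modification until it reaches the "optimizing"/"completed" state;
# the filesystem still has to be extended from inside the instance afterwards.
status = ec2.describe_volumes_modifications(VolumeIds=["vol-0123456789abcdef0"])
print(status["VolumesModifications"][0]["ModificationState"])
```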

Now, in terms of EBS, it is very important to note the distinction between general purpose and provisioned IOPS. General purpose SSD is generally used for workloads which require a proper balance between price and performance, so for any normal kind of workload, general purpose SSD is sufficient. In case you need very fast performance, then you need to go with provisioned IOPS. Provisioned IOPS is basically the highest-performance SSD volume type, designed for mission-critical application workloads. So a general purpose SSD is typically used for test, dev and prod environments which have normal workloads, and provisioned IOPS is generally used, as we discussed, for applications which might require very fast performance.

Typically that can be databases like MySQL, PostgreSQL, et cetera. You also have the volume size limits for general purpose and provisioned IOPS. For general purpose, the performance can be a maximum of 10,000 IOPS and 160 MiB per second of throughput. As opposed to that, the provisioned IOPS performance is a maximum of 20,000 IOPS and 320 MiB per second. General purpose SSD is referred to as gp2 and provisioned IOPS is generally referred to as io1, so do revise the difference between gp2 and io1 before you sit for the exam. These are the SSD types. Now, you also have the HDD types: here you have Throughput Optimized, which is st1, and you have Cold HDD, which is referred to as sc1. Throughput Optimized is basically a low-cost hard disk drive designed for frequently accessed, throughput-intensive workloads.

As opposed to that, Cold HDD is designed for less frequently accessed workloads. So st1 is for frequently accessed workloads and sc1 is for less frequently accessed workloads. Along with that, you also need to remember that Throughput Optimized is referred to as st1 and Cold HDD is referred to as sc1. In terms of cost, Cold HDD has the lowest cost, and Throughput Optimized is a little more expensive when you compare it with Cold HDD. There are also volume size and performance matrices; you will not really see those coming up in the exam, but having a glance at them will prove to be useful (a small sketch covering all four volume types follows below).
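And here is the sketch covering all four volume types, just to tie the names together; sizes and IOPS values are arbitrary placeholder choices.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
az = "us-east-1a"

# gp2: general purpose SSD, balanced price/performance for normal workloads.
ec2.create_volume(AvailabilityZone=az, Size=100, VolumeType="gp2")

# io1: provisioned IOPS SSD for mission-critical, latency-sensitive databases;
# you explicitly provision the IOPS.
ec2.create_volume(AvailabilityZone=az, Size=200, VolumeType="io1", Iops=5000)

# st1: throughput optimized HDD for frequently accessed, throughput-heavy workloads.
ec2.create_volume(AvailabilityZone=az, Size=500, VolumeType="st1")

# sc1: cold HDD, lowest cost, for less frequently accessed workloads.
ec2.create_volume(AvailabilityZone=az, Size=500, VolumeType="sc1")
```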
