An AWS Organization is using Service Control Policies (SCPs) for central control over the maximum available permissions for all accounts in their organization. This allows the organization to ensure that all accounts stay within the organization’s access control guidelines.
Which of the given scenarios are correct regarding the permissions described below? (Select three)
- If a user or role has an IAM permission policy that grants access to an action that is either not allowed or explicitly denied by the applicable SCPs, the user or role can't perform that action
- SCPs affect all users and roles in attached accounts, including the root user
- If a user or role has an IAM permission policy that grants access to an action that is either not allowed or explicitly denied by the applicable SCPs, the user or role can still perform that action
- SCPs affect all users and roles in attached accounts, excluding the root user
- SCPs affect service-linked roles
- SCPs do not affect service-linked roles
Correct options:
- If a user or role has an IAM permission policy that grants access to an action that is either not allowed or explicitly denied by the applicable SCPs, the user or role can't perform that action
- SCPs affect all users and roles in attached accounts, including the root user
- SCPs do not affect service-linked roles
Service control policies (SCPs) are one type of policy that can be used to manage your organization. SCPs offer central control over the maximum available permissions for all accounts in your organization, allowing you to ensure your accounts stay within your organization’s access control guidelines. In SCPs, you can restrict which AWS services, resources, and individual API actions the users and roles in each member account can access. You can also define conditions for when to restrict access to AWS services, resources, and API actions. These restrictions even override the administrators of member accounts in the organization. SCPs affect all users and roles in the attached accounts, including the root user. However, SCPs do not affect service-linked roles, which enable other AWS services to integrate with AWS Organizations and can't be restricted by SCPs.
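To make the mechanics concrete, here is a minimal boto3 sketch of creating and attaching a deny-list SCP; the root ID and the denied action are hypothetical placeholders, and the caller is assumed to be the organization's management account:

```python
import json

import boto3

# Assumption: credentials for the management account; the root ID below
# ("r-examplerootid111") and the denied action are illustrative only.
org = boto3.client("organizations")

scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyLeavingOrganization",
            "Effect": "Deny",
            "Action": "organizations:LeaveOrganization",
            "Resource": "*",
        }
    ],
}

policy = org.create_policy(
    Name="deny-leave-organization",
    Description="Explicitly deny member accounts from leaving the organization",
    Type="SERVICE_CONTROL_POLICY",
    Content=json.dumps(scp),
)

# Attaching at the organization root applies the SCP to every account,
# including each account's root user (but not to service-linked roles).
org.attach_policy(
    PolicyId=policy["Policy"]["PolicySummary"]["Id"],
    TargetId="r-examplerootid111",
)
```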
A company wants to improve its gaming application by adding a leaderboard that uses a complex proprietary algorithm based on the participating user's performance metrics to identify the top users on a real-time basis. The technical requirements mandate high elasticity, low latency, and real-time processing to deliver customizable user data for the community of users. The leaderboard would be accessed by millions of users simultaneously.
Which of the following options support the case for using ElastiCache to meet the given requirements? (Select two)
- Use ElastiCache to improve the performance of compute-intensive workloads
- Use ElastiCache to improve latency and throughput for read-heavy application workloads
- Use ElastiCache to improve the performance of Extract-Transform-Load (ETL) workloads
- Use ElastiCache to run highly complex JOIN queries
- Use ElastiCache to improve latency and throughput for write-heavy application workloads
Correct options:
- Use ElastiCache to improve latency and throughput for read-heavy application workloads
- Use ElastiCache to improve the performance of compute-intensive workloads
Amazon ElastiCache allows you to run in-memory data stores in the AWS cloud. Amazon ElastiCache is a popular choice for real-time use cases like Caching, Session Stores, Gaming, Geospatial Services, Real-Time Analytics, and Queuing. Amazon ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads (such as social networking, gaming, media sharing, leaderboard, and Q&A portals) or compute-intensive workloads (such as a recommendation engine) by allowing you to store the objects that are often read in the cache.
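For instance, the real-time leaderboard in this use case maps naturally onto a Redis sorted set. A minimal sketch using the redis-py client, assuming a hypothetical ElastiCache for Redis endpoint and illustrative user scores:

```python
import redis

# Assumption: a hypothetical ElastiCache for Redis endpoint; the key
# name and user scores are illustrative.
r = redis.Redis(
    host="my-cluster.xxxxxx.ng.0001.use1.cache.amazonaws.com", port=6379
)

# ZADD keeps the set ordered by score, so each update is O(log N).
r.zadd("leaderboard", {"alice": 4200, "bob": 3100, "carol": 5150})

# Top 10 users, highest score first -- a single low-latency read.
top10 = r.zrevrange("leaderboard", 0, 9, withscores=True)

# A user's live rank (0-based; the highest score has rank 0).
rank = r.zrevrank("leaderboard", "alice")
```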
Incorrect options:
- Use ElastiCache to improve latency and throughput for write-heavy application workloads - As mentioned earlier in the explanation, Amazon ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads. Caching is not a good fit for write-heavy applications as the cache goes stale at a very fast rate.
- Use ElastiCache to improve the performance of Extract-Transform-Load (ETL) workloads - ETL workloads involve reading and transforming high-volume data which is not a good fit for caching. You should use AWS Glue or Amazon EMR to facilitate ETL workloads.
- Use ElastiCache to run highly complex JOIN queries - Complex JOIN queries can be run on relational databases such as RDS or Aurora. ElastiCache is not a good fit for this use case.
References:
- https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/elasticache-use-cases.html
- https://aws.amazon.com/elasticache/features/
The DevOps team at an IT company is provisioning a two-tier application in a VPC with a public subnet and a private subnet. The team wants to use either a NAT instance or a NAT gateway in the public subnet to enable instances in the private subnet to initiate outbound IPv4 traffic to the internet but needs some technical assistance in terms of the configuration options available for the NAT instance and the NAT gateway.
As a solutions architect, which of the following options would you identify as CORRECT? (Select three)
- Security Groups can be associated with a NAT instance
- NAT gateway can be used as a bastion server
- NAT gateway supports port forwarding
- NAT instance supports port forwarding
- NAT instance can be used as a bastion server
- Security Groups can be associated with a NAT gateway
Correct options:
- NAT instance can be used as a bastion server
- Security Groups can be associated with a NAT instance
- NAT instance supports port forwarding
A NAT instance or a NAT Gateway can be used in a public subnet in your VPC to enable instances in the private subnet to initiate outbound IPv4 traffic to the Internet.
- How NAT Gateway works: https://docs.aws.amazon.com/vpc/latest/userguide/vpc-nat-gateway.html
- How NAT Instance works: https://docs.aws.amazon.com/vpc/latest/userguide/VPC_NAT_Instance.html
Here is a high-level summary of the differences between NAT instances and NAT gateways relevant to the options described in the question:

| Characteristic | NAT instance | NAT gateway |
| --- | --- | --- |
| Security groups | Can be associated with the NAT instance | Cannot be associated with a NAT gateway; you control traffic using the security groups of the resources behind it |
| Bastion server | Can be used as a bastion server | Not supported |
| Port forwarding | Supported (manually customize the configuration) | Not supported |
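As an illustration of NAT instance configuration, one step unique to a NAT instance is disabling source/destination checking so it can forward traffic on behalf of other instances. A minimal boto3 sketch, with a hypothetical instance ID:

```python
import boto3

ec2 = boto3.client("ec2")

# A NAT instance must send and receive traffic whose source or destination
# is not itself, so source/destination checking must be disabled.
# "i-0123456789abcdef0" is a hypothetical instance ID.
ec2.modify_instance_attribute(
    InstanceId="i-0123456789abcdef0",
    SourceDestCheck={"Value": False},
)
```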
A retail company uses AWS Cloud to manage its IT infrastructure. The company has set up "AWS Organizations" to manage several departments running their AWS accounts and using resources such as EC2 instances and RDS databases. The company wants to provide shared and centrally-managed VPCs to all departments using applications that need a high degree of interconnectivity.
As a solutions architect, which of the following options would you choose to facilitate this use-case?
- Use VPC peering to share a VPC with other AWS accounts belonging to the same parent organization from AWS Organizations
- Use VPC sharing to share one or more subnets with other AWS accounts belonging to the same parent organization from AWS Organizations
- Use VPC peering to share one or more subnets with other AWS accounts belonging to the same parent organization from AWS Organizations
- Use VPC sharing to share a VPC with other AWS accounts belonging to the same parent organization from AWS Organizations
Correct option:
- Use VPC sharing to share one or more subnets with other AWS accounts belonging to the same parent organization from AWS Organizations
VPC sharing (part of Resource Access Manager) allows multiple AWS accounts to create their application resources such as EC2 instances, RDS databases, Redshift clusters, and Lambda functions, into shared and centrally-managed Amazon Virtual Private Clouds (VPCs). To set this up, the account that owns the VPC (owner) shares one or more subnets with other accounts (participants) that belong to the same organization from AWS Organizations. After a subnet is shared, the participants can view, create, modify, and delete their application resources in the subnets shared with them. Participants cannot view, modify, or delete resources that belong to other participants or the VPC owner.
You can share Amazon VPCs to leverage the implicit routing within a VPC for applications that require a high degree of interconnectivity and are within the same trust boundaries. This reduces the number of VPCs that you create and manage while using separate accounts for billing and access control.
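As a sketch of how the owner account shares subnets, the following uses the AWS RAM API via boto3; the subnet ARN and participant account ID are hypothetical placeholders:

```python
import boto3

ram = boto3.client("ram")

# The VPC owner shares one or more subnets with a participant account in
# the same organization; the ARN and account ID below are placeholders.
share = ram.create_resource_share(
    name="shared-app-subnets",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111111111111:subnet/subnet-0123456789abcdef0"
    ],
    principals=["222222222222"],  # participant account in the organization
    # Keep sharing restricted to accounts within the AWS Organization.
    allowExternalPrincipals=False,
)

print(share["resourceShare"]["resourceShareArn"])
```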
References:
- https://docs.aws.amazon.com/vpc/latest/userguide/vpc-sharing.html
- https://docs.aws.amazon.com/vpc/latest/peering/what-is-vpc-peering.html
An IT company is looking to move its on-premises infrastructure to AWS Cloud. The company has a portfolio of applications, a few of which use server-bound licenses that are valid for the next year. To utilize the licenses, the CTO wants to use Dedicated Hosts for a one-year term and then migrate the given instances to default tenancy thereafter.
As a solutions architect, which of the following options would you identify as CORRECT for changing the tenancy of an instance after you have launched it? (Select two)
- You can change the tenancy of an instance from host to dedicated
- You can change the tenancy of an instance from dedicated to host
- You can change the tenancy of an instance from default to dedicated
- You can change the tenancy of an instance from default to host
- You can change the tenancy of an instance from dedicated to default
Correct options:
- You can change the tenancy of an instance from dedicated to host
- You can change the tenancy of an instance from host to dedicated
By default, EC2 instances run on a shared-tenancy basis. Dedicated Instances are Amazon EC2 instances that run in a virtual private cloud (VPC) on hardware that's dedicated to a single customer. Dedicated Instances that belong to different AWS accounts are physically isolated at the hardware level. However, Dedicated Instances may share hardware with other instances from the same AWS account that are not Dedicated Instances. A Dedicated Host is also a physical server that's dedicated to your use. With a Dedicated Host, you have visibility and control over how instances are placed on the server.
You can only change the tenancy of an instance from dedicated to host, or from host to dedicated after you've launched it.
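A minimal boto3 sketch of such a tenancy change, assuming a hypothetical instance ID (the instance must be stopped first, and moving to host tenancy assumes a Dedicated Host with capacity is available):

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # hypothetical instance ID

# The tenancy of a running instance can't be changed; stop it first.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Valid transitions after launch: dedicated -> host and host -> dedicated.
ec2.modify_instance_placement(InstanceId=instance_id, Tenancy="host")

ec2.start_instances(InstanceIds=[instance_id])
```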
A media startup is looking at hosting their web application on AWS Cloud. The application will be accessed by users from different geographic regions of the world to upload and download video files that can reach a maximum size of 10GB. The startup wants the solution to be cost-effective and scalable with the lowest possible latency for a great user experience.
As a Solutions Architect, which of the following will you suggest as an optimal solution to meet the given requirements?
- Use Amazon S3 for hosting the web application and use Amazon CloudFront for faster distribution of content to geographically dispersed users
- Use Amazon EC2 with Global Accelerator for faster distribution of content, while using Amazon S3 as storage service
- Use Amazon S3 for hosting the web application and use S3 Transfer Acceleration to reduce the latency that geographically dispersed users might face
- Use Amazon EC2 with ElastiCache for faster distribution of content, while Amazon S3 can be used as a storage service
Correct option:
- Use Amazon S3 for hosting the web application and use S3 Transfer Acceleration to reduce the latency that geographically dispersed users might face
Amazon S3 Transfer Acceleration can speed up content transfers to and from Amazon S3 by as much as 50-500% for long-distance transfer of larger objects. Customers who have either web or mobile applications with widespread users or applications hosted far away from their S3 bucket can experience long and variable upload and download speeds over the Internet. S3 Transfer Acceleration (S3TA) reduces the variability in Internet routing, congestion, and speeds that can affect transfers, and logically shortens the distance to S3 for remote applications.
S3TA improves transfer performance by routing traffic through Amazon CloudFront’s globally distributed Edge Locations and over AWS backbone networks, and by using network protocol optimizations.
For applications interacting with your S3 buckets through the S3 API from outside of your bucket’s region, S3TA helps avoid the variability in Internet routing and congestion. It does this by routing your uploads and downloads over the AWS global network infrastructure, so you get the benefit of AWS network optimizations.
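A minimal boto3 sketch of enabling Transfer Acceleration on a bucket and then uploading through the accelerate endpoint; the bucket and file names are hypothetical placeholders:

```python
import boto3
from botocore.config import Config

bucket = "my-video-uploads"  # hypothetical bucket name

# Enable Transfer Acceleration on the bucket (a one-time configuration).
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket=bucket,
    AccelerateConfiguration={"Status": "Enabled"},
)

# Subsequent transfers route via the <bucket>.s3-accelerate.amazonaws.com
# endpoint, i.e. through CloudFront edge locations and the AWS backbone.
s3_accelerated = boto3.client(
    "s3", config=Config(s3={"use_accelerate_endpoint": True})
)
s3_accelerated.upload_file("match-highlights.mp4", bucket, "match-highlights.mp4")
```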
Incorrect options:
- Use Amazon S3 for hosting the web application and use Amazon CloudFront for faster distribution of content to geographically dispersed users - Amazon S3 with CloudFront is a very powerful way of distributing static content to geographically dispersed users with low latency speeds. If you have objects that are smaller than 1GB or if the data set is less than 1GB in size, you should consider using Amazon CloudFront's PUT/POST commands for optimal performance. The given use case has data larger than 1GB and hence S3 Transfer Acceleration is a better option.
via - https://aws.amazon.com/s3/faqs/
The database backend for a retail company's website is hosted on Amazon RDS for MySQL, with a primary instance and three read replicas to support read scalability. The company has mandated that the read replicas should lag no more than 1 second behind the primary instance to provide the best possible user experience. The read replicas are falling further behind during periods of peak traffic spikes, resulting in a bad user experience as searches produce inconsistent results.
You have been hired as an AWS Certified Solutions Architect Associate to reduce the replication lag as much as possible with minimal changes to the application code or the effort required to manage the underlying resources.
Which of the following will you recommend?
- Set up database migration from RDS MySQL to Aurora MySQL. Swap out the MySQL read replicas with Aurora Replicas. Configure Aurora Auto Scaling
- Host the MySQL primary database on a memory-optimized EC2 instance. Spin up additional compute-optimized EC2 instances to host the read replicas
- Set up an Amazon ElastiCache for Redis cluster in front of the MySQL database. Update the website to check the cache before querying the read replicas
- Set up database migration from RDS MySQL to DynamoDB. Provision a large number of read capacity units (RCUs) to support the required throughput and enable Auto-Scaling
Correct option:
- Set up database migration from RDS MySQL to Aurora MySQL. Swap out the MySQL read replicas with Aurora Replicas. Configure Aurora Auto Scaling
Aurora features a distributed, fault-tolerant, and self-healing storage system that is decoupled from compute resources and auto-scales up to 128 TiB per database instance. It delivers high performance and availability with up to 15 low-latency read replicas, point-in-time recovery, continuous backup to Amazon Simple Storage Service (Amazon S3), and replication across three Availability Zones (AZs).
Since Amazon Aurora Replicas share the same data volume as the primary instance in the same AWS Region, there is virtually no replication lag. The replica lag times are in the 10s of milliseconds (compared to the replication lag of seconds in the case of MySQL read replicas). Therefore, this is the right option to ensure that the read replicas lag no more than 1 second behind the primary instance.
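Aurora Auto Scaling is configured through Application Auto Scaling on the cluster's read-replica count. A minimal boto3 sketch, with a hypothetical cluster name and an illustrative CPU target:

```python
import boto3

aas = boto3.client("application-autoscaling")
resource_id = "cluster:my-aurora-cluster"  # hypothetical Aurora cluster

# Register the Aurora replica count as a scalable target.
aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId=resource_id,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=3,
    MaxCapacity=15,  # Aurora supports up to 15 low-latency replicas
)

# Track average reader CPU; replicas are added or removed to hold ~60%.
aas.put_scaling_policy(
    PolicyName="aurora-reader-scaling",
    ServiceNamespace="rds",
    ResourceId=resource_id,
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)
```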
via - https://aws.amazon.com/rds/aurora/faqs/
Incorrect options:
- Host the MySQL primary database on a memory-optimized EC2 instance. Spin up additional compute-optimized EC2 instances to host the read replicas - Hosting the MySQL primary database and the read replicas on EC2 instances would result in significant overhead to manage the underlying resources, such as OS patching and database patching, so this option is incorrect.
Your application is hosted by a provider on yourapp.provider.com. You would like to have your users access your application using www.your-domain.com, which you own and manage under Route 53.
What Route 53 record should you create?
- Create a PTR record
- Create an A record
- Create an Alias Record
- Create a CNAME record
Correct option:
- Create a CNAME record
A CNAME record maps DNS queries for the name of the current record, such as acme.example.com, to another domain (example.com or example.net) or subdomain (acme.example.com or zenith.example.org).
CNAME records can be used to map one domain name to another. Keep in mind, however, that the DNS protocol does not allow you to create a CNAME record for the top node of a DNS namespace, also known as the zone apex. For example, if you register the DNS name example.com, the zone apex is example.com. You cannot create a CNAME record for example.com, but you can create CNAME records for www.example.com, newproduct.example.com, and so on.
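A minimal boto3 sketch of creating that CNAME record in Route 53, with a hypothetical hosted zone ID:

```python
import boto3

route53 = boto3.client("route53")

# "Z0123456789EXAMPLE" is a hypothetical hosted zone ID for your-domain.com.
route53.change_resource_record_sets(
    HostedZoneId="Z0123456789EXAMPLE",
    ChangeBatch={
        "Comment": "Point www at the provider-hosted application",
        "Changes": [
            {
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.your-domain.com",
                    "Type": "CNAME",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "yourapp.provider.com"}],
                },
            }
        ],
    },
)
```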
A video conferencing application is hosted on a fleet of EC2 instances which are part of an Auto Scaling group (ASG). The ASG uses a Launch Configuration (LC1) with "dedicated" instance placement tenancy but the VPC (V1) used by the Launch Configuration LC1 has the instance tenancy set to default. Later the DevOps team creates a new Launch Configuration (LC2) with "default" instance placement tenancy but the VPC (V2) used by the Launch Configuration LC2 has the instance tenancy set to dedicated.
Which of the following is correct regarding the instances launched via Launch Configuration LC1 and Launch Configuration LC2?
- The instances launched by both Launch Configuration LC1 and Launch Configuration LC2 will have default instance tenancy
- The instances launched by Launch Configuration LC1 will have default instance tenancy while the instances launched by the Launch Configuration LC2 will have dedicated instance tenancy
- The instances launched by both Launch Configuration LC1 and Launch Configuration LC2 will have dedicated instance tenancy
- The instances launched by Launch Configuration LC1 will have dedicated instance tenancy while the instances launched by the Launch Configuration LC2 will have default instance tenancy
Correct option:
- The instances launched by both Launch Configuration LC1 and Launch Configuration LC2 will have dedicated instance tenancy
A launch configuration is an instance configuration template that an Auto Scaling group uses to launch EC2 instances. When you create a launch configuration, you specify information for the instances, such as the ID of the Amazon Machine Image (AMI), the instance type, a key pair, one or more security groups, and a block device mapping. If you've launched an EC2 instance before, you specified the same information to launch the instance.
When you create a launch configuration, the default value for the instance placement tenancy is null and the instance tenancy is controlled by the tenancy attribute of the VPC. If you set the Launch Configuration Tenancy to default and the VPC Tenancy is set to dedicated, then the instances have dedicated tenancy. If you set the Launch Configuration Tenancy to dedicated and the VPC Tenancy is set to default, then again the instances have dedicated tenancy.
If either Launch Configuration Tenancy or VPC Tenancy is set to dedicated, then the instance tenancy is also dedicated.
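For reference, the placement tenancy is an explicit parameter when creating a launch configuration. A minimal boto3 sketch, with a hypothetical AMI ID:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Hypothetical AMI ID; omitting PlacementTenancy leaves it null, in which
# case the VPC's tenancy attribute decides the instance tenancy.
autoscaling.create_launch_configuration(
    LaunchConfigurationName="LC1",
    ImageId="ami-0123456789abcdef0",
    InstanceType="m5.large",
    PlacementTenancy="dedicated",  # dedicated tenancy even in a default-tenancy VPC
)
```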
References:
- https://docs.aws.amazon.com/autoscaling/ec2/userguide/LaunchConfiguration.html
- https://docs.aws.amazon.com/autoscaling/ec2/userguide/asg-in-vpc.html#as-vpc-tenancy
A media company wants a low-latency way to distribute live sports results, which are delivered via a proprietary application using the UDP protocol.
As a solutions architect, which of the following solutions would you recommend such that it offers the BEST performance for this use case?
- Use CloudFront to provide a low latency way to distribute live sports results
- Use Auto Scaling group to provide a low latency way to distribute live sports results
- Use Elastic Load Balancer to provide a low latency way to distribute live sports results
- Use Global Accelerator to provide a low latency way to distribute live sports results
Correct option:
- Use Global Accelerator to provide a low latency way to distribute live sports results
Please note the differences between the capabilities of Global Accelerator and CloudFront -
AWS Global Accelerator and Amazon CloudFront are separate services that use the AWS global network and its edge locations around the world. CloudFront improves performance for both cacheable content (such as images and videos) and dynamic content (such as API acceleration and dynamic site delivery). Global Accelerator improves performance for a wide range of applications over TCP or UDP by proxying packets at the edge to applications running in one or more AWS Regions.
Global Accelerator is a good fit for non-HTTP use cases, such as gaming (UDP), IoT (MQTT), or Voice over IP, as well as for HTTP use cases that specifically require static IP addresses or deterministic, fast regional failover. Both services integrate with AWS Shield for DDoS protection.
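A minimal boto3 sketch of fronting the UDP application with Global Accelerator; the names, port, and NLB endpoint ARN are hypothetical placeholders (the Global Accelerator API is served from the us-west-2 Region):

```python
import boto3

# The Global Accelerator control-plane API endpoint lives in us-west-2.
ga = boto3.client("globalaccelerator", region_name="us-west-2")

accelerator = ga.create_accelerator(
    Name="live-scores",
    IpAddressType="IPV4",
    Enabled=True,
)

# UDP listener for the proprietary application (port 5000 is illustrative).
listener = ga.create_listener(
    AcceleratorArn=accelerator["Accelerator"]["AcceleratorArn"],
    Protocol="UDP",
    PortRanges=[{"FromPort": 5000, "ToPort": 5000}],
)

# Route traffic to an assumed Network Load Balancer in us-east-1.
ga.create_endpoint_group(
    ListenerArn=listener["Listener"]["ListenerArn"],
    EndpointGroupRegion="us-east-1",
    EndpointConfigurations=[
        {
            "EndpointId": "arn:aws:elasticloadbalancing:us-east-1:111111111111:"
            "loadbalancer/net/live-scores/0123456789abcdef"
        }
    ],
)
```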