
SCS-C02 Exam Dumps - AWS Certified Security - Specialty

Question # 97

A company's on-premises networks are connected to VPCs using an AWS Direct Connect gateway. The company's on-premises application needs to stream data using an existing Amazon Kinesis Data Firehose delivery stream. The company's security policy requires that data be encrypted in transit using a private network.

How should the company meet these requirements?

A.

Create a VPC endpoint for Kinesis Data Firehose. Configure the application to connect to the VPC endpoint.

B.

Configure an IAM policy to restrict access to Kinesis Data Firehose using a source IP condition. Configure the application to connect to the existing Firehose delivery stream.

C.

Create a new TLS certificate in AWS Certificate Manager (ACM). Create a public-facing Network Load Balancer (NLB) and select the newly created TLS certificate. Configure the NLB to forward all traffic to Kinesis Data Firehose. Configure the application to connect to the NLB.

D.

Peer the on-premises network with the Kinesis Data Firehose VPC using Direct Connect. Configure the application to connect to the existing Firehose delivery stream.
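
For illustration, a minimal boto3 sketch of what option A describes: creating an interface VPC endpoint for Kinesis Data Firehose so that traffic from the on-premises network stays on the private path over Direct Connect. The region, VPC, subnet, and security group IDs below are placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",               # placeholder VPC reachable over Direct Connect
    ServiceName="com.amazonaws.us-east-1.kinesis-firehose",
    SubnetIds=["subnet-0123456789abcdef0"],      # placeholder subnet
    SecurityGroupIds=["sg-0123456789abcdef0"],   # placeholder security group allowing TCP 443
    PrivateDnsEnabled=True,
)
print(response["VpcEndpoint"]["VpcEndpointId"])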

Question # 98

A company accidentally deleted the private key for an Amazon Elastic Block Store (Amazon EBS)-backed Amazon EC2 instance. A security engineer needs to regain access to the instance.

Which combination of steps will meet this requirement? (Choose two.)

A.

Stop the instance. Detach the root volume. Generate a new key pair.

B.

Keep the instance running. Detach the root volume. Generate a new key pair.

C.

When the volume is detached from the original instance, attach the volume to another instance as a data volume. Modify the authorized_keys file with a new public key. Move the volume back to the original instance. Start the instance.

D.

When the volume is detached from the original instance, attach the volume to another instance as a data volume. Modify the authorized_keys file with a new private key. Move the volume back to the original instance. Start the instance.

E.

When the volume is detached from the original instance, attach the volume to another instance as a data volume. Modify the authorized_keys file with a new public key. Move the volume back to the original instance that is running.
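
For illustration, a minimal boto3 sketch of the volume-swap recovery flow that several options above describe, assuming a separate rescue instance already exists. The instance IDs, volume ID, and device names are placeholders; editing authorized_keys itself happens inside the rescue instance's operating system.

import boto3

ec2 = boto3.client("ec2")

LOCKED_INSTANCE = "i-0aaaa1111bbbb2222c"   # placeholder: instance whose key pair was lost
RESCUE_INSTANCE = "i-0dddd3333eeee4444f"   # placeholder: helper instance used to edit authorized_keys
ROOT_VOLUME = "vol-0123456789abcdef0"      # placeholder: root volume of the locked instance

# 1. Stop the locked instance and detach its root volume.
ec2.stop_instances(InstanceIds=[LOCKED_INSTANCE])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[LOCKED_INSTANCE])
ec2.detach_volume(VolumeId=ROOT_VOLUME, InstanceId=LOCKED_INSTANCE)
ec2.get_waiter("volume_available").wait(VolumeIds=[ROOT_VOLUME])

# 2. Attach the volume to the rescue instance as a data volume; on that instance,
#    mount it and append the new *public* key to ~/.ssh/authorized_keys.
ec2.attach_volume(VolumeId=ROOT_VOLUME, InstanceId=RESCUE_INSTANCE, Device="/dev/sdf")

# 3. After editing, detach the volume again, reattach it to the original instance as
#    the root device, and start the instance.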

Question # 99

You work at a company that makes use of AWS resources. One of the key security policies is to ensure that all data is encrypted both at rest and in transit. Which of the following is one of the right ways to implement this?

Please select:

A.

Use S3 SSE and use SSL for data in transit

B.

SSL termination on the ELB

C.

Enabling Proxy Protocol

D.

Enabling sticky sessions on your load balancer
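
For illustration, a minimal boto3 sketch of option A: the object is encrypted at rest with S3-managed server-side encryption (SSE-S3), and boto3 sends the request over TLS by default, covering encryption in transit. The bucket name and key are placeholders.

import boto3

s3 = boto3.client("s3")  # requests are sent to the HTTPS endpoint by default

s3.put_object(
    Bucket="example-secure-bucket",       # placeholder bucket
    Key="reports/2024/data.csv",          # placeholder object key
    Body=b"column1,column2\n",
    ServerSideEncryption="AES256",        # SSE-S3 encrypts the object at rest
)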

Question # 100

A security engineer discovers that the Lambda function is failing to create the report. The security engineer must implement a solution that corrects the issue and provides least privilege permissions.

Which solution will meet these requirements?

A.

Create a resource-based policy that allows Security Hub access to the ARN of the Lambda function.

B.

Attach the AWSSecurityHubReadOnlyAccess AWS managed policy to the Lambda function's execution role.

C.

Grant the Lambda function's execution role read-only permissions to access Amazon Inspector and Security Hub.

D.

Create a custom IAM policy that grants the Security Hub Get*, List*, Batch*, and Describe* permissions on the arn:aws:securityhub:us-west-2::product/aws/inspector resource. Attach the policy to the Lambda function's execution role.
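
For illustration, a minimal boto3 sketch of the kind of scoped inline policy that option D describes, attached to the Lambda function's execution role. The role name, policy name, and region are placeholders.

import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "securityhub:Get*",
                "securityhub:List*",
                "securityhub:Batch*",
                "securityhub:Describe*",
            ],
            "Resource": "arn:aws:securityhub:us-west-2::product/aws/inspector",
        }
    ],
}

iam.put_role_policy(
    RoleName="report-lambda-execution-role",      # placeholder execution role name
    PolicyName="SecurityHubInspectorReadAccess",  # placeholder inline policy name
    PolicyDocument=json.dumps(policy_document),
)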

Question # 101

A company has an organization in AWS Organizations. The organization consists of multiple OUs. The company must prevent IAM principals from outside the organization from accessing the organization's Amazon S3 buckets. The solution must not affect the existing access that the OUs have to the S3 buckets.

Which solution will meet these requirements?

A.

Configure S3 Block Public Access for all S3 buckets.

B.

Configure S3 Block Public Access for all AWS accounts.

C.

Deploy an SCP that includes the "aws:ResourceOrgPaths": "${aws:PrincipalOrgPaths}" condition.

D.

Deploy an SCP that includes the "aws:ResourceOrgID": "${aws:PrincipalOrgID}" condition.
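
For illustration, a minimal boto3 sketch of an SCP that uses the aws:ResourceOrgID condition key referenced in option D to deny S3 actions on buckets that do not belong to the caller's organization. The policy name and description are placeholders, and the policy would still need to be attached to the root or the relevant OUs.

import json
import boto3

organizations = boto3.client("organizations")

scp_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyS3OutsideOrganization",
            "Effect": "Deny",
            "Action": "s3:*",
            "Resource": "*",
            "Condition": {
                "StringNotEquals": {"aws:ResourceOrgID": "${aws:PrincipalOrgID}"}
            },
        }
    ],
}

organizations.create_policy(
    Content=json.dumps(scp_document),
    Description="Keep S3 access inside the organization",  # placeholder description
    Name="s3-org-boundary",                                # placeholder policy name
    Type="SERVICE_CONTROL_POLICY",
)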

Question # 102

A company has recently recovered from a security incident that required the restoration of Amazon EC2 instances from snapshots. The company uses an AWS Key Management Service (AWS KMS) customer managed key to encrypt all Amazon Elastic Block Store (Amazon EBS) snapshots.

The company performs a gap analysis of its disaster recovery procedures and backup strategies. A security engineer needs to implement a solution so that the company can recover the EC2 instances if the AWS account is compromised and the EBS snapshots are deleted.

Which solution will meet this requirement?

A.

Create a new Amazon S3 bucket. Use EBS lifecycle policies to move EBS snapshots to the new S3 bucket. Use lifecycle policies to move snapshots to the S3 Glacier Instant Retrieval storage class. Use S3 Object Lock to prevent deletion of the snapshots.

B.

Use AWS Systems Manager to distribute a configuration that backs up all attached disks to Amazon S3.

C.

Create a new AWS account that has limited privileges. Allow the new account to access the KMS key that encrypts the EBS snapshots. Copy the encrypted snapshots to the new account on a recurring basis.

D.

Use AWS Backup to copy EBS snapshots to Amazon S3. Use S3 Object Lock to prevent deletion of the snapshots.
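
For illustration, a minimal boto3 sketch of the recurring cross-account copy idea in option C: sharing an encrypted EBS snapshot with a separate, limited-privilege account that then copies it. The snapshot ID and account ID are placeholders, and the KMS key policy must also grant the target account access to the key.

import boto3

ec2 = boto3.client("ec2")

ec2.modify_snapshot_attribute(
    SnapshotId="snap-0123456789abcdef0",   # placeholder snapshot ID
    Attribute="createVolumePermission",
    OperationType="add",
    UserIds=["111122223333"],              # placeholder backup account ID
)
# In the backup account, ec2.copy_snapshot(...) then creates an independent copy
# that survives snapshot deletion in the source account.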

Question # 103

A company receives a notification from the AWS Abuse team about an AWS account. The notification indicates that a resource in the account is compromised. The company determines that the compromised resource is an Amazon EC2 instance that hosts a web application. The compromised EC2 instance is part of an EC2 Auto Scaling group.

The EC2 instance accesses Amazon S3 and Amazon DynamoDB resources by using an IAM access key and secret key. The IAM access key and secret key are stored inside the AMI that is specified in the Auto Scaling group's launch configuration. The company is concerned that the credentials that are stored in the AMI might also have been exposed.

The company must implement a solution that remediates the security concerns without causing downtime for the application. The solution must comply with security best practices.

Which solution will meet these requirements?

A.

Rotate the potentially compromised access key that the EC2 instance uses. Create a new AMI without the potentially compromised credentials. Perform an EC2 Auto Scaling instance refresh.

B.

Delete or deactivate the potentially compromised access key. Create an EC2 Auto Scaling linked IAM role that includes a custom policy that matches the potentially compromised access key permissions. Associate the new IAM role with the Auto Scaling group. Perform an EC2 Auto Scaling instance refresh.

C.

Delete or deactivate the potentially compromised access key. Create a new AMI without the potentially compromised credentials. Create an IAM role that includes the correct permissions. Create a launch template for the Auto Scaling group to reference the new AMI and IAM role. Perform an EC2 Auto Scaling instance refresh.

D.

Rotate the potentially compromised access key. Create a new AMI without the potentially compromised access key. Use a user data script to supply the new access key as environment variables in the Auto Scaling group's launch configuration. Perform an EC2 Auto Scaling instance refresh.
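
For illustration, a minimal boto3 sketch of the instance refresh step that the options above share, which replaces running instances without downtime by honoring a minimum healthy percentage. The Auto Scaling group name and preference values are placeholders.

import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.start_instance_refresh(
    AutoScalingGroupName="web-app-asg",    # placeholder Auto Scaling group name
    Strategy="Rolling",
    Preferences={
        "MinHealthyPercentage": 90,        # keep most capacity in service during the refresh
        "InstanceWarmup": 300,             # placeholder warmup time in seconds
    },
)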

Question # 104

The Amazon CloudWatch Logs agent is successfully delivering logs to the CloudWatch Logs service. However, logs stop being delivered after the associated log stream has been active for a specific number of hours.

What steps are necessary to identify the cause of this phenomenon? (Select TWO.)

A.

Ensure that file permissions for monitored files that allow the CloudWatch Logs agent to read the file have not been modified.

B.

Verify that the OS log rotation rules are compatible with the configuration requirements for agent streaming.

C.

Configure an Amazon Kinesis producer to first put the logs into Amazon Kinesis Streams.

D.

Create a CloudWatch Logs metric to isolate a value that changes at least once during the period before logging stops.

E.

Use AWS CloudFormation to dynamically create and maintain the configuration file for the CloudWatch Logs agent.
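
For illustration, a minimal boto3 troubleshooting sketch that compares a log stream's last ingestion time with the monitored file's modification time, which can show whether delivery stopped around a log rotation. The log group name, stream prefix, and file path are placeholders.

import os
from datetime import datetime, timezone

import boto3

logs = boto3.client("logs")

streams = logs.describe_log_streams(
    logGroupName="/var/log/app",           # placeholder log group
    logStreamNamePrefix="ip-10-0-0-10",    # placeholder stream prefix (often the instance name)
)
for stream in streams["logStreams"]:
    last_ingest = datetime.fromtimestamp(stream.get("lastIngestionTime", 0) / 1000, tz=timezone.utc)
    print(stream["logStreamName"], "last ingestion:", last_ingest)

# Compare against the modification time of the monitored file on the instance.
print("file mtime:", datetime.fromtimestamp(os.path.getmtime("/var/log/app/app.log"), tz=timezone.utc))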
