
Professional-Cloud-DevOps-Engineer Exam Dumps - Google Cloud Certified - Professional Cloud DevOps Engineer Exam

Question # 9

You are developing reusable infrastructure as code modules. Each module contains integration tests that launch the module in a test project. You are using GitHub for source control. You need to continuously test your feature branch and ensure that all code is tested before changes are accepted. You need to implement a solution to automate the integration tests. What should you do?

A.

Use a Jenkins server for CI/CD pipelines. Periodically run all tests in the feature branch.

B.

Use Cloud Build to run the tests. Trigger all tests to run after a pull request is merged.

C.

Ask the pull request reviewers to run the integration tests before approving the code.

D.

Use Cloud Build to run tests in a specific folder. Trigger Cloud Build for every GitHub pull request.
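
For context on option D: a Cloud Build trigger fired on every GitHub pull request can point at a build configuration that runs only the tests in the module's folder. A minimal sketch, assuming a hypothetical module path and test script (neither is named in the question):

```yaml
# cloudbuild.yaml sketch: run the integration tests that live under one module's
# folder. The module path and the test script are illustrative placeholders.
steps:
  - name: 'gcr.io/cloud-builders/gcloud'
    dir: 'modules/my-module/tests'              # hypothetical module folder
    entrypoint: 'bash'
    args: ['-c', './run_integration_tests.sh']  # hypothetical test runner
timeout: '1800s'
```

The trigger itself can be created in the console or with a command along the lines of gcloud builds triggers create github --pull-request-pattern='^main$' --build-config=cloudbuild.yaml, so the tests run on the feature branch before the pull request is merged.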

Question # 10

You are designing a system with three different environments: development, quality assurance (QA), and production.

Each environment will be deployed with Terraform and has a Google Kubernetes Engine (GKE) cluster created so that application teams can deploy their applications. Anthos Config Management will be used and templated to deploy infrastructure-level resources in each GKE cluster. All users (for example, infrastructure operators and application owners) will use GitOps. How should you structure your source control repositories for both Infrastructure as Code (IaC) and application code?

A.

Cloud Infrastructure (Terraform) repository is shared: different directories are different environments.
GKE Infrastructure (Anthos Config Management Kustomize manifests) repository is shared: different overlay directories are different environments.
Application (app source code) repositories are separated: different branches are different features.

B.

Cloud Infrastructure (Terraform) repository is shared: different directories are different environments.
GKE Infrastructure (Anthos Config Management Kustomize manifests) repositories are separated: different branches are different environments.
Application (app source code) repositories are separated: different branches are different features.

C.

Cloud Infrastructure (Terraform) repository is shared: different branches are different environments.
GKE Infrastructure (Anthos Config Management Kustomize manifests) repository is shared: different overlay directories are different environments.
Application (app source code) repository is shared: different directories are different features.

D.

Cloud Infrastructure (Terraform) repositories are separated: different branches are different environments.
GKE Infrastructure (Anthos Config Management Kustomize manifests) repositories are separated: different overlay directories are different environments.
Application (app source code) repositories are separated: different branches are different features.
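
Several of these options describe a shared GKE configuration repository in which each environment is a Kustomize overlay directory. A minimal sketch of that layout and of one overlay file, using illustrative names only:

```yaml
# Shared config repo layout (directory names are illustrative):
#   base/              # manifests common to all environments
#   overlays/dev/
#   overlays/qa/
#   overlays/prod/
#
# overlays/prod/kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - ../../base
patches:
  - path: prod-resource-limits.yaml   # hypothetical production-only patch
```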

Question # 11

You have a CI/CD pipeline that uses Cloud Build to build new Docker images and push them to Docker Hub. You use Git for code versioning. After making a change in the Cloud Build YAML configuration, you notice that no new artifacts are being built by the pipeline. You need to resolve the issue following Site Reliability Engineering practices. What should you do?

A.

Disable the CI pipeline and revert to manually building and pushing the artifacts.

B.

Change the CI pipeline to push the artifacts to Container Registry instead of Docker Hub.

C.

Upload the configuration YAML file to Cloud Storage and use Error Reporting to identify and fix the issue.

D.

Run a Git compare between the previous and current Cloud Build configuration files to find and fix the bug.
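
As a quick illustration of option D, assuming the build configuration lives in cloudbuild.yaml and the breaking change landed in the most recent commit:

```sh
# Compare the previous and current revisions of the Cloud Build configuration
# to find the change that stopped artifacts from being built.
git diff HEAD~1 HEAD -- cloudbuild.yaml
```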

Question # 12

You are the Operations Lead for an ongoing incident with one of your services. The service usually runs at around 70% capacity. You notice that one node is returning 5xx errors for all requests. There has also been a noticeable increase in support cases from customers. You need to remove the offending node from the load balancer pool so that you can isolate and investigate the node. You want to follow Google-recommended practices to manage the incident and reduce the impact on users. What should you do?

A.

1. Communicate your intent to the incident team.
2. Perform a load analysis to determine if the remaining nodes can handle the increase in traffic offloaded from the removed node, and scale appropriately.
3. When any new nodes report healthy, drain traffic from the unhealthy node, and remove the unhealthy node from service.

B.

1. Communicate your intent to the incident team.
2. Add a new node to the pool, and wait for the new node to report as healthy.
3. When traffic is being served on the new node, drain traffic from the unhealthy node, and remove the old node from service.

C.

1. Drain traffic from the unhealthy node and remove the node from service.
2. Monitor traffic to ensure that the error is resolved and that the other nodes in the pool are handling the traffic appropriately.
3. Scale the pool as necessary to handle the new load.
4. Communicate your actions to the incident team.

D.

1. Drain traffic from the unhealthy node and remove the old node from service.
2. Add a new node to the pool, wait for the new node to report as healthy, and then serve traffic to the new node.
3. Monitor traffic to ensure that the pool is healthy and is handling traffic appropriately.
4. Communicate your actions to the incident team.
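
If the pool were, for example, a managed instance group behind a backend service, the add-capacity-first sequence in option B might look roughly like this (group name, target size, zone, and instance name are placeholders):

```sh
# 1. Add capacity first so the pool can absorb the offloaded traffic.
gcloud compute instance-groups managed resize web-mig --size=4 --zone=us-central1-a

# 2. Once the new instance reports healthy, remove the unhealthy node; the backend
#    service's connection draining setting lets in-flight requests finish.
gcloud compute instance-groups managed delete-instances web-mig \
    --instances=web-node-bad --zone=us-central1-a
```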

Question # 13

Your application services run in Google Kubernetes Engine (GKE). You want to make sure that only images from your centrally-managed Google Container Registry (GCR) image registry in the altostrat-images project can be deployed to the cluster while minimizing development time. What should you do?

A.

Create a custom builder for Cloud Build that will only push images to gcr.io/altostrat-images.

B.

Use a Binary Authorization policy that includes the whitelist name pattern gcr.io/altostrat-images/.

C.

Add logic to the deployment pipeline to check that all manifests contain only images from gcr.io/altostrat-images.

D.

Add a tag to each image in gcr.io/altostrat-images and check that this tag is present when the image is deployed.
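
For reference, the allowlist pattern in option B corresponds to an entry in the cluster's Binary Authorization policy. A minimal sketch; the default rule shown here is illustrative, and a real policy would be tuned to the cluster's requirements:

```yaml
# Binary Authorization policy sketch: images under the central registry are admitted,
# everything else is blocked and logged.
globalPolicyEvaluationMode: ENABLE
admissionWhitelistPatterns:
  - namePattern: gcr.io/altostrat-images/*
defaultAdmissionRule:
  evaluationMode: ALWAYS_DENY
  enforcementMode: ENFORCED_BLOCK_AND_AUDIT_LOG
```

A policy in this form can be applied with gcloud container binauthz policy import policy.yaml.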

Question # 14

Your company has a Google Cloud resource hierarchy with folders for production, test, and development. Your cybersecurity team needs to review your company's Google Cloud security posture to accelerate security issue identification and resolution. You need to centralize the logs generated by Google Cloud services from all projects inside your production folder only, to allow for alerting and near-real-time analysis. What should you do?

A.

Enable the Workflows API and route all the logs to Cloud Logging

B.

Create a central Cloud Monitoring workspace and attach all related projects

C.

Create an aggregated log sink associated with the production folder that uses a Pub/Sub topic as the destination

D.

Create an aggregated log sink associated with the production folder that uses a Cloud Logging bucket as the destination
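
Options C and D both describe an aggregated sink scoped to the production folder; they differ only in the destination. A minimal sketch of the Cloud Logging bucket variant, with placeholder IDs:

```sh
# Aggregated sink on the production folder: --include-children routes logs from every
# project under the folder into a central Cloud Logging bucket.
gcloud logging sinks create prod-central-sink \
    logging.googleapis.com/projects/SECURITY_PROJECT_ID/locations/global/buckets/central-logs \
    --folder=PRODUCTION_FOLDER_ID \
    --include-children
```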

Question # 15

You use Cloud Build to build your application. You want to reduce the build time while minimizing cost and development effort. What should you do?

A.

Use Cloud Storage to cache intermediate artifacts.

B.

Run multiple Jenkins agents to parallelize the build.

C.

Use multiple smaller build steps to minimize execution time.

D.

Use larger Cloud Build virtual machines (VMs) by using the machine-type option.
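
For context on option D, the machine type is requested per build in the options block of the build configuration; the build step shown here is illustrative:

```yaml
# cloudbuild.yaml sketch: run this build on a larger worker to shorten build time.
steps:
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/my-app', '.']   # hypothetical image name
options:
  machineType: 'E2_HIGHCPU_8'   # one of the supported high-CPU machine types
```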

Question # 16

You have a set of applications running on a Google Kubernetes Engine (GKE) cluster, and you are using Stackdriver Kubernetes Engine Monitoring. You are bringing a new containerized application required by your company into production. This application is written by a third party and cannot be modified or reconfigured. The application writes its log information to /var/log/app_messages.log, and you want to send these log entries to Stackdriver Logging. What should you do?

A.

Use the default Stackdriver Kubernetes Engine Monitoring agent configuration.

B.

Deploy a Fluentd daemonset to GKE. Then create a customized input and output configuration to tail the log file in the application's pods and write to Stackdriver Logging.

C.

Install Kubernetes on Google Compute Engine (GCE) and redeploy your applications. Then customize the built-in Stackdriver Logging configuration to tail the log file in the application's pods and write to Stackdriver Logging.

D.

Write a script to tail the log file within the pod and write entries to standard output. Run the script as a sidecar container with the application's pod. Configure a shared volume between the containers to allow the script to have read access to /var/log in the application container.
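
To make option D concrete: the application and a small tailer sidecar share an emptyDir volume mounted at /var/log, and the sidecar streams the file to stdout, where the node's logging agent collects it. A minimal sketch with illustrative image and volume names:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: third-party-app
spec:
  containers:
    - name: app
      image: example.com/third-party-app:1.0       # hypothetical application image
      volumeMounts:
        - name: varlog
          mountPath: /var/log
    - name: log-tailer                             # sidecar: tail the file to stdout
      image: busybox:1.36
      args: ['/bin/sh', '-c', 'tail -n+1 -F /var/log/app_messages.log']
      volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
  volumes:
    - name: varlog
      emptyDir: {}
```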
