
NCP-AIO Exam Dumps - NVIDIA AI Operations

Question # 4

A GPU administrator needs to virtualize AI/ML training in an HGX environment.

How can the NVIDIA Fabric Manager be used to meet this demand?

A. Video encoding acceleration
B. Enhance graphical rendering
C. Manage NVLink and NVSwitch resources
D. GPU memory upgrade
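
For context on option C: on HGX systems, Fabric Manager runs as a host service that initializes and manages the NVLink and NVSwitch fabric. A minimal sketch of how an administrator might verify this, assuming a standard package install and the usual systemd unit name:

    # Confirm the Fabric Manager service is running (unit name assumes the standard package)
    systemctl status nvidia-fabricmanager

    # Inspect per-GPU NVLink status once the fabric is configured
    nvidia-smi nvlink --status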

Question # 5

An administrator is troubleshooting issues with an NVIDIA Unified Fabric Manager Enterprise (UFM) installation and notices that the UFM server is unable to communicate with InfiniBand switches.

What step should be taken to address the issue?

A. Reboot the UFM server to refresh network connections.
B. Install additional GPUs in the UFM server to boost connectivity.
C. Disable the firewall on the UFM server to allow communication.
D. Verify the subnet manager configuration on the InfiniBand switches.
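
For context on option D: UFM can only discover and manage an InfiniBand fabric that has a functioning subnet manager. A hedged sketch of checks an administrator might run from the UFM server, assuming the infiniband-diags utilities are installed:

    # Verify the local HCA port is up and in the Active state
    ibstat

    # Query which subnet manager is currently active on the fabric
    sminfo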

Question # 6

A system administrator is experiencing issues with Docker containers failing to start due to volume mounting problems. They suspect the issue is related to incorrect file permissions on shared volumes between the host and containers.

How should the administrator troubleshoot this issue?

A. Use the docker logs command to review the logs for error messages related to volume mounting and permissions.
B. Reinstall Docker to reset all configurations and resolve potential volume mounting issues.
C. Disable all shared folders between the host and container to prevent volume mounting errors.
D. Reduce the size of the mounted volumes to avoid permission conflicts during container startup.
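
As an illustration of option A, a minimal troubleshooting sketch, assuming a container named my-container and a host path /data (both placeholder names):

    # Review the container logs for mount- and permission-related errors
    docker logs my-container

    # Compare the host directory's ownership and mode with the user the container runs as
    ls -ld /data
    docker inspect --format '{{.Config.User}}' my-container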

Question # 7

You have successfully pulled a TensorFlow container from NGC and now need to run it on your stand-alone GPU-enabled server.

Which command should you use to ensure that the container has access to all available GPUs?

A. kubectl create pod --gpu=all nvcr.io/nvidia/tensorflow:
B. docker run nvcr.io/nvidia/tensorflow:
C. docker start nvcr.io/nvidia/tensorflow:
D. docker run --gpus all nvcr.io/nvidia/tensorflow:
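
For reference on option D, the --gpus all flag exposes every host GPU to the container (it requires the NVIDIA Container Toolkit on the host). A minimal sketch, with the image tag left as a placeholder:

    # Run the NGC TensorFlow image interactively with access to all GPUs
    docker run --rm -it --gpus all nvcr.io/nvidia/tensorflow:<tag>

    # Inside the container, confirm the GPUs are visible
    nvidia-smi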

Question # 8

You need to perform maintenance on a compute node in a Slurm cluster. What should you do first?

A. Drain the compute node using scontrol update.
B. Set the node state to down in Slurm before completing maintenance.
C. Disable job scheduling on all compute nodes in Slurm before completing maintenance.
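
For context on option A, draining a node lets running jobs finish while preventing new jobs from being scheduled on it. A minimal sketch, with node01 as a placeholder node name:

    # Drain the node ahead of maintenance
    scontrol update NodeName=node01 State=DRAIN Reason="scheduled maintenance"

    # Return the node to service afterwards
    scontrol update NodeName=node01 State=RESUME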
