
NCP-AIO Exam Dumps - NVIDIA AI Operations

Question # 17

Your organization is running multiple AI models on a single A100 GPU using MIG in a multi-tenant environment. One of the tenants reports a performance issue, but you notice that other tenants are unaffected.

What feature of MIG ensures that one tenant's workload does not impact others?

A. Hardware-level isolation of memory, cache, and compute resources for each instance.
B. Dynamic resource allocation based on workload demand.
C. Shared memory access across all instances.
D. Automatic scaling of instances based on workload size.
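For context, the kind of partitioning this question describes is normally set up with nvidia-smi. The sketch below assumes an A100-40GB where the 1g.5gb profile has ID 19; profile IDs and instance sizes vary by GPU model, so treat them as illustrative only:

# Enable MIG mode on GPU 0 (takes effect after a GPU reset)
sudo nvidia-smi -i 0 -mig 1
# List the GPU instance profiles this A100 supports
sudo nvidia-smi mig -lgip
# Create two 1g.5gb GPU instances (profile ID 19 is illustrative) plus a
# default compute instance in each (-C); every instance gets its own SM
# slice, L2 cache slice, and memory partition in hardware
sudo nvidia-smi mig -cgi 19,19 -C
# Each tenant is then handed one MIG device UUID from this listing
nvidia-smi -L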

Question # 18

A cloud engineer is looking to provision a virtual machine for machine learning using the NVIDIA Virtual Machine Image (VMI) and RAPIDS.

What technology stack will be set up for the development team automatically when the VMI is deployed?

A. Ubuntu Server, Docker-CE, NVIDIA Container Toolkit, CSP CLI, NGC CLI, NVIDIA Driver
B. CentOS, Docker-CE, NVIDIA Container Toolkit, CSP CLI, NGC CLI
C. Ubuntu Server, Docker-CE, NVIDIA Container Toolkit, CSP CLI, NGC CLI, NVIDIA Driver, RAPIDS
D. Ubuntu Server, Docker-CE, NVIDIA Container Toolkit, CSP CLI, NGC CLI
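For context, because the VMI ships with the GPU driver, Docker-CE, and the NVIDIA Container Toolkit already installed, a RAPIDS container can be pulled and run as soon as the VM boots. The NGC image path and tag below are illustrative assumptions; check the NGC catalog for current ones:

# Run on a freshly deployed NVIDIA VMI; no driver or Docker setup is needed
docker run --rm -it --gpus all \
    nvcr.io/nvidia/rapidsai/base:24.04-cuda12.0-py3.10 \
    python -c "import cudf; print(cudf.__version__)"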

Question # 19

You are managing a deep learning workload on a Slurm cluster with multiple GPU nodes, but you notice that jobs requesting multiple GPUs are waiting for long periods even though there are available resources on some nodes.

How would you optimize job scheduling for multi-GPU workloads?

A. Reduce memory allocation per job so more jobs can run concurrently, freeing up resources faster for multi-GPU workloads.
B. Ensure that job scripts use --gres=gpu:<count> and configure Slurm’s backfill scheduler to prioritize multi-GPU jobs efficiently.
C. Set up separate partitions for single-GPU and multi-GPU jobs to avoid resource conflicts between them.
D. Increase time limits for smaller jobs so they don’t interfere with multi-GPU job scheduling.
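For context, a minimal multi-GPU batch script of the kind option B refers to. The partition name, GPU count, and training command are placeholders; backfill scheduling also depends on SchedulerType=sched/backfill being set in slurm.conf and on jobs declaring realistic --time limits:

#!/bin/bash
#SBATCH --job-name=multi-gpu-train
#SBATCH --partition=gpu              # placeholder partition name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --gres=gpu:4                 # explicitly request 4 GPUs via GRES
#SBATCH --time=04:00:00              # an accurate limit lets backfill place the job

srun python train.py                 # placeholder training command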
