
1z0-1127-25 Exam Dumps - Oracle Cloud Infrastructure 2025 Generative AI Professional

Question # 9

Which statement best describes the role of encoder and decoder models in natural language processing?

A.

Encoder models and decoder models both convert sequences of words into vector representations without generating new text.

B.

Encoder models take a sequence of words and predict the next word in the sequence, whereas decoder models convert a sequence of words into a numerical representation.

C.

Encoder models convert a sequence of words into a vector representation, and decoder models take this vector representation to generate a sequence of words.

D.

Encoder models are used only for numerical calculations, whereas decoder models are used to interpret the calculated numerical values back into text.
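
The correct choice here is C. A minimal toy sketch (not a real NLP model) can make the division of labor concrete: the "encoder" maps a word sequence to a fixed-size vector, and the "decoder" turns that vector back into a word sequence. The vocabulary and bag-of-words scheme below are invented purely for illustration.

```python
# Toy encoder/decoder sketch mirroring answer C. A real encoder (e.g. in a
# Transformer) produces learned embeddings; here we use bag-of-words counts
# as a stand-in vector representation.

VOCAB = ["the", "cat", "sat", "on", "mat"]

def encode(words):
    """Encoder: convert a sequence of words into a vector representation."""
    vec = [0] * len(VOCAB)
    for w in words:
        vec[VOCAB.index(w)] += 1  # count occurrences per vocabulary slot
    return vec

def decode(vec):
    """Decoder: generate a sequence of words from the vector representation."""
    out = []
    for idx, count in enumerate(vec):
        out.extend([VOCAB[idx]] * count)
    return out

sentence = ["the", "cat", "sat"]
v = encode(sentence)
print(v)          # the vector representation
print(decode(v))  # words regenerated from the vector
```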

Question # 10

What is the purpose of Retrievers in LangChain?

A.

To train Large Language Models

B.

To retrieve relevant information from knowledge bases

C.

To break down complex tasks into smaller steps

D.

To combine multiple components into a single pipeline
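
The correct choice is B: a retriever returns the documents most relevant to a query. The sketch below illustrates that contract with a toy keyword-overlap scorer over an in-memory "knowledge base"; it is not LangChain's actual Retriever API, whose implementations typically rank by vector similarity.

```python
# Toy retriever sketch (answer B): given a query, return the top-k most
# relevant documents from a small in-memory knowledge base, ranked by
# simple word overlap with the query.

def retrieve(query, docs, k=1):
    """Return the top-k documents ranked by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

kb = [
    "OCI Generative AI offers hosted large language models.",
    "Object Storage keeps data encrypted by default.",
    "Retrievers fetch relevant documents for a query.",
]
print(retrieve("what do retrievers fetch for a query", kb))
```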

Question # 11

Which role does a "model endpoint" serve in the inference workflow of the OCI Generative AI service?

A.

Updates the weights of the base model during the fine-tuning process

B.

Serves as a designated point for user requests and model responses

C.

Evaluates the performance metrics of the custom models

D.

Hosts the training data for fine-tuning custom models
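
The correct choice is B: the endpoint is the addressable point that accepts user requests and returns model responses. The handler below mocks that request/response contract in plain Python; the payload fields and the fake "model call" are invented for illustration and do not reflect the actual OCI API shape.

```python
# Hedged sketch of what a model endpoint does (answer B): accept an
# inference request and hand back the model's response. The model call
# is faked; a real endpoint forwards the prompt to a hosted model.

def endpoint_handler(request):
    """Accept an inference request dict and return a response dict."""
    prompt = request["prompt"]
    generated = f"[model output for: {prompt}]"  # stand-in for the model
    return {"status": 200, "response": generated}

reply = endpoint_handler({"prompt": "Summarize OCI Generative AI."})
print(reply["status"], reply["response"])
```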

Question # 12

What is prompt engineering in the context of Large Language Models (LLMs)?

A.

Iteratively refining the ask to elicit a desired response

B.

Adding more layers to the neural network

C.

Adjusting the hyperparameters of the model

D.

Training the model on a large dataset
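
The correct choice is A: prompt engineering changes the *ask*, not the model. The toy loop below sketches that iteration; the stand-in "LLM" and the prompts are invented solely to show the refine-and-retry pattern, with the model itself held fixed throughout.

```python
# Sketch of "iteratively refining the ask" (answer A): the model never
# changes between attempts -- only the prompt does.

def fake_llm(prompt):
    """Stand-in model: answers in French only when explicitly asked to."""
    return "Bonjour" if "in French" in prompt else "Hello"

def refine_until(goal_check, prompts):
    """Try successive prompt revisions until the output meets the goal."""
    for p in prompts:
        out = fake_llm(p)
        if goal_check(out):
            return p, out
    return None, None

prompt, out = refine_until(
    lambda o: o == "Bonjour",
    ["Greet the user.", "Greet the user in French."],
)
print(prompt, "->", out)
```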

Question # 13

How are fine-tuned custom models stored to enable strong data privacy and security in the OCI Generative AI service?

A.

Shared among multiple customers for efficiency

B.

Stored in Object Storage encrypted by default

C.

Stored in an unencrypted form in Object Storage

D.

Stored in Key Management service

Question # 14

Which statement is true about Fine-tuning and Parameter-Efficient Fine-Tuning (PEFT)?

A.

Fine-tuning requires training the entire model on new data, often leading to substantial computational costs, whereas PEFT involves updating only a small subset of parameters, minimizing computational requirements and data needs.

B.

PEFT requires replacing the entire model architecture with a new one designed specifically for the new task, making it significantly more data-intensive than Fine-tuning.

C.

Both Fine-tuning and PEFT require the model to be trained from scratch on new data, making them equally data and computationally intensive.

D.

Fine-tuning and PEFT do not involve model modification; they differ only in the type of data used for training, with Fine-tuning requiring labeled data and PEFT using unlabeled data.
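
The correct choice is A, and the gap is easy to quantify: full fine-tuning updates every parameter, while PEFT (e.g. LoRA) freezes the base model and trains only small added modules. The layer and adapter sizes below are arbitrary illustration values, not figures from any real model.

```python
# Toy parameter count comparing full fine-tuning with PEFT (answer A).
# Base layers stay frozen under PEFT; only the small adapters train.

base_params = {"layer1": 1_000_000, "layer2": 1_000_000, "head": 50_000}
adapter_params = {"lora_A": 4_000, "lora_B": 4_000}  # small added modules

full_ft_trainable = sum(base_params.values())   # every weight is updated
peft_trainable = sum(adapter_params.values())   # base model stays frozen

print(f"full fine-tuning trains {full_ft_trainable:,} parameters")
print(f"PEFT trains {peft_trainable:,} parameters "
      f"({peft_trainable / full_ft_trainable:.2%} of full)")
```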

Question # 15

What happens if a period (.) is used as a stop sequence in text generation?

A.

The model ignores periods and continues generating text until it reaches the token limit.

B.

The model generates additional sentences to complete the paragraph.

C.

The model stops generating text after it reaches the end of the current paragraph.

D.

The model stops generating text after it reaches the end of the first sentence, even if the token limit is much higher.
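
The correct choice is D: generation halts the moment the stop sequence appears, even though the token limit would allow much more output. The loop below sketches that behavior over a hard-coded token stream; real decoders apply the same check per generated token.

```python
# Sketch of a stop sequence (answer D): emit tokens until the stop string
# appears or the token limit is hit, whichever comes first.

def generate(tokens, stop=".", max_tokens=50):
    """Emit tokens until the stop sequence or the token limit is reached."""
    out = []
    for tok in tokens[:max_tokens]:
        out.append(tok)
        if stop in tok:
            break  # stop sequence reached: end of the first sentence
    return "".join(out)

stream = ["The", " sky", " is", " blue", ".", " It", " rains", " today", "."]
print(generate(stream))  # stops after the first "."
```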

Question # 16

What is the main advantage of using few-shot model prompting to customize a Large Language Model (LLM)?

A.

It allows the LLM to access a larger dataset.

B.

It eliminates the need for any training or computational resources.

C.

It provides examples in the prompt to guide the LLM to better performance with no training cost.

D.

It significantly reduces the latency for each model request.
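
The correct choice is C: few-shot prompting places worked examples directly in the prompt, guiding the model at inference time with no training cost. The prompt builder below shows the pattern; the sentiment examples and formatting are invented for illustration.

```python
# Sketch of few-shot prompting (answer C): examples go into the prompt
# itself, so no model weights change and no training is performed.

def build_few_shot_prompt(examples, query):
    """Assemble (input, output) examples plus the new query into one prompt."""
    lines = [f"Input: {x}\nOutput: {y}" for x, y in examples]
    lines.append(f"Input: {query}\nOutput:")  # model completes this line
    return "\n\n".join(lines)

examples = [("great movie", "positive"), ("boring plot", "negative")]
prompt = build_few_shot_prompt(examples, "loved the acting")
print(prompt)
```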
