
1z0-1127-25 Exam Dumps - Oracle Cloud Infrastructure 2025 Generative AI Professional

Question # 17

What does "Loss" measure in the evaluation of OCI Generative AI fine-tuned models?

A. The difference between the accuracy of the model at the beginning of training and the accuracy of the deployed model
B. The percentage of incorrect predictions made by the model compared with the total number of predictions in the evaluation
C. The improvement in accuracy achieved by the model during training on the user-uploaded dataset
D. The level of incorrectness in the model’s predictions, with lower values indicating better performance

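Background for this question: in model evaluation, "loss" is typically a cross-entropy-style objective measuring how wrong the model's predicted token probabilities are, with lower values indicating better performance. A minimal plain-Python sketch (no OCI dependency; illustrative only):

```python
import math

def cross_entropy(predicted_prob: float) -> float:
    """Loss for one token: -log of the probability the model
    assigned to the correct next token. Lower is better."""
    return -math.log(predicted_prob)

# A confident, correct prediction yields low loss;
# a near-miss yields high loss.
good = cross_entropy(0.9)  # model was 90% sure of the right token
bad = cross_entropy(0.1)   # model gave the right token only 10%
assert good < bad
```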
Question # 18

How does a presence penalty function in language model generation when using OCI Generative AI service?

A. It penalizes all tokens equally, regardless of how often they have appeared.
B. It only penalizes tokens that have never appeared in the text before.
C. It applies a penalty only if the token has appeared more than twice.
D. It penalizes a token each time it appears after the first occurrence.

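For context, presence and frequency penalties are commonly implemented as below (this is the scheme many LLM sampling APIs use; treating it as OCI's exact formula is an assumption): a presence penalty is a flat, one-time subtraction for any token that has already appeared, while a frequency penalty grows with each repetition.

```python
from collections import Counter

def apply_penalties(logits, generated, presence_penalty=0.0, frequency_penalty=0.0):
    """Adjust next-token logits based on tokens already generated.
    Illustrative sketch of a common penalty scheme, not the OCI API."""
    counts = Counter(generated)
    adjusted = dict(logits)
    for tok, n in counts.items():
        if tok in adjusted:
            adjusted[tok] -= presence_penalty       # applied once if the token appeared at all
            adjusted[tok] -= frequency_penalty * n  # scales with how often it appeared
    return adjusted

# "a" has already appeared three times, "b" not at all:
# presence penalty hits "a" once; frequency penalty hits it three times.
out = apply_penalties({"a": 2.0, "b": 2.0}, ["a", "a", "a"],
                      presence_penalty=0.5, frequency_penalty=0.1)
```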
Question # 19

When is fine-tuning an appropriate method for customizing a Large Language Model (LLM)?

A. When the LLM already understands the topics necessary for text generation
B. When the LLM does not perform well on a task and the data for prompt engineering is too large
C. When the LLM requires access to the latest data for generating outputs
D. When you want to optimize the model without any instructions

Question # 20

Which is NOT a category of pretrained foundational models available in the OCI Generative AI service?

A. Summarization models
B. Generation models
C. Translation models
D. Embedding models

Question # 21

What is the purpose of the "stop sequence" parameter in the OCI Generative AI Generation models?

A. It specifies a string that tells the model to stop generating more content.
B. It assigns a penalty to frequently occurring tokens to reduce repetitive text.
C. It determines the maximum number of tokens the model can generate per response.
D. It controls the randomness of the model’s output, affecting its creativity.

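Stop-sequence behavior can be sketched as simple truncation: once any stop string appears in the generated text, nothing at or after it is returned. The helper below is an illustrative plain-Python sketch, not the OCI API:

```python
def truncate_at_stop(text: str, stop_sequences: list) -> str:
    """Cut generated text at the earliest occurrence of any stop
    sequence, mimicking how a 'stop sequence' parameter ends output."""
    cut = len(text)
    for stop in stop_sequences:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]
```

For example, using a newline as the stop sequence limits the model to a single line of output.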
Question # 22

How does "Groundedness" differ from "Answer Relevance" in the context of Retrieval Augmented Generation (RAG)?

A. Groundedness pertains to factual correctness, whereas Answer Relevance concerns query relevance.
B. Groundedness refers to contextual alignment, whereas Answer Relevance deals with syntactic accuracy.
C. Groundedness measures relevance to the user query, whereas Answer Relevance evaluates data integrity.
D. Groundedness focuses on data integrity, whereas Answer Relevance emphasizes lexical diversity.

Question # 23

Which is NOT a built-in memory type in LangChain?

A. ConversationImageMemory
B. ConversationBufferMemory
C. ConversationSummaryMemory
D. ConversationTokenBufferMemory

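As context for the LangChain memory types listed above: ConversationBufferMemory stores the entire chat history verbatim (the summary and token-buffer variants condense or cap it). A plain-Python sketch of that idea (a hypothetical class, not LangChain's actual API):

```python
class BufferMemorySketch:
    """Toy illustration of buffer-style memory: keep every turn
    verbatim and replay the whole transcript on demand."""

    def __init__(self):
        self.messages = []

    def save_context(self, user_input: str, ai_output: str) -> None:
        # Append both sides of one conversational turn.
        self.messages.append(("Human", user_input))
        self.messages.append(("AI", ai_output))

    def load_memory(self) -> str:
        # Return the full, unabridged transcript.
        return "\n".join(f"{role}: {text}" for role, text in self.messages)
```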
Question # 24

What does the term "hallucination" refer to in the context of Large Language Models (LLMs)?

A. The model's ability to generate imaginative and creative content
B. A technique used to enhance the model's performance on specific tasks
C. The process by which the model visualizes and describes images in detail
D. The phenomenon where the model generates factually incorrect information or unrelated content as if it were true
