
Databricks-Generative-AI-Engineer-Associate Exam Dumps - Databricks Certified Generative AI Engineer Associate

Searching for workable clues to ace the Databricks Databricks-Generative-AI-Engineer-Associate Exam? You’re in the right place! ExamCert has realistic, trusted and authentic exam prep tools to help you achieve your desired credential. ExamCert’s Databricks-Generative-AI-Engineer-Associate PDF Study Guide, Testing Engine and Exam Dumps follow a reliable exam preparation strategy, providing you with the most relevant and updated study material, crafted in an easy-to-learn question-and-answer format. ExamCert’s study tools aim to simplify all of the exam’s complex and confusing concepts, introduce you to the real exam scenario, and let you practice it with the help of its testing engine and real exam dumps.

Question # 4

A Generative AI Engineer is creating an LLM-powered application that will need access to up-to-date news articles and stock prices.

The design requires using stock prices stored in Delta tables and finding the latest relevant news articles by searching the internet.

How should the Generative AI Engineer architect their LLM system?

A.

Use an LLM to summarize the latest news articles and lookup stock tickers from the summaries to find stock prices.

B.

Query the Delta table for volatile stock prices and use an LLM to generate a search query to investigate potential causes of the stock volatility.

C.

Download and store news articles and stock price information in a vector store. Use a RAG architecture to retrieve and generate at runtime.

D.

Create an agent with tools for SQL querying of Delta tables and web searching, and provide the retrieved values to an LLM to generate the response.
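The agent-with-tools architecture in option D can be sketched in plain Python. This is a minimal illustration, not a real framework: both tool functions below are hypothetical stand-ins for a Delta table SQL query and a web search, and the hard-coded ticker and price exist only to show how retrieved values flow into the generation context.

```python
# Minimal sketch of a tool-calling agent: the agent invokes tools that fetch
# fresh data, then passes the results to an LLM for response generation.
# Both tool implementations are hypothetical stand-ins.

def sql_query_delta(ticker: str) -> dict:
    """Stand-in for a SQL query against a Delta table of stock prices."""
    return {"ticker": ticker, "price": 101.25}  # hypothetical row

def web_search_news(query: str) -> list:
    """Stand-in for a web-search tool returning article snippets."""
    return [f"Latest headline about {query}"]

TOOLS = {"sql_query_delta": sql_query_delta, "web_search_news": web_search_news}

def run_agent(user_question: str) -> str:
    # In a real agent the LLM decides which tools to call and with what
    # arguments; here both are called directly to show how the retrieved
    # values are assembled into the context handed to the LLM.
    price = TOOLS["sql_query_delta"]("NVDA")
    news = TOOLS["web_search_news"]("NVDA")
    context = f"price={price['price']}; news={news[0]}"
    return f"Answering '{user_question}' using: {context}"

print(run_agent("Why did NVDA move today?"))
```

The key design point is that neither data source is frozen into a vector store at build time; each request pulls current prices and headlines.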

Question # 5

A Generative AI Engineer developed an LLM application using the provisioned throughput Foundation Model API. Now that the application is ready to be deployed, they realize their request volume is not high enough to justify their own provisioned throughput endpoint. They want to choose the most cost-effective strategy for their application.

What strategy should the Generative AI Engineer use?

A.

Switch to using External Models instead

B.

Deploy the model using pay-per-token throughput as it comes with cost guarantees

C.

Change to a model with fewer parameters in order to reduce hardware constraint issues

D.

Throttle the incoming batch of requests manually to avoid rate limiting issues
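For context on the pay-per-token option, a shared Foundation Model endpoint is called like any other Databricks model-serving endpoint. The sketch below only builds the request; the workspace URL is a placeholder and the endpoint name is one example of a pay-per-token endpoint, so treat both as assumptions rather than values from the question.

```python
# Sketch: preparing a call to a shared pay-per-token Foundation Model
# serving endpoint instead of a dedicated provisioned-throughput one.
# Workspace URL and endpoint name are placeholders; nothing is sent here.
import json

WORKSPACE_URL = "https://example.cloud.databricks.com"   # placeholder
ENDPOINT = "databricks-meta-llama-3-3-70b-instruct"      # example endpoint name

def build_invocation(prompt: str, max_tokens: int = 128):
    """Return the URL and JSON payload for a chat-style invocation."""
    url = f"{WORKSPACE_URL}/serving-endpoints/{ENDPOINT}/invocations"
    payload = {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }
    return url, payload

url, payload = build_invocation("Summarize today's market news.")
print(url)
print(json.dumps(payload))
```

With pay-per-token billing, cost scales with usage, which is why it suits low request volumes; a provisioned throughput endpoint bills for reserved capacity whether or not it is used.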

Question # 6

A Generative AI Engineer is building a system that will answer questions on currently unfolding news topics. As such, it pulls information from a variety of sources, including articles and social media posts. They are concerned about toxic posts on social media causing toxic outputs from their system.

Which guardrail will limit toxic outputs?

A.

Use only approved social media and news accounts to prevent unexpected toxic data from getting to the LLM.

B.

Implement rate limiting

C.

Reduce the amount of context items the system will include in consideration for its response.

D.

Log all LLM system responses and perform a batch toxicity analysis monthly.
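An output guardrail screens each response before it is returned to the user. The sketch below uses a trivial keyword score purely as a stand-in; in practice you would score toxicity with a trained classifier or a moderation endpoint, and the term list and threshold here are illustrative assumptions.

```python
# Sketch of an output guardrail: score each LLM response for toxicity and
# block it before it reaches the user. The scoring function is a trivial
# keyword stand-in for a real toxicity classifier.

TOXIC_TERMS = {"idiot", "hate"}  # illustrative only

def toxicity_score(text: str) -> float:
    """Fraction of words that match the toxic-term list."""
    words = text.lower().split()
    hits = sum(1 for w in words if w.strip(".,!?") in TOXIC_TERMS)
    return hits / max(len(words), 1)

def guarded_response(llm_output: str, threshold: float = 0.05) -> str:
    """Return the output unchanged, or a refusal if it scores too high."""
    if toxicity_score(llm_output) > threshold:
        return "I can't share that response."
    return llm_output

print(guarded_response("Markets rallied on strong earnings."))
print(guarded_response("Those idiot traders, I hate them!"))
```

Because the check runs on every response at serving time, it limits toxic outputs directly, unlike monthly batch analysis, which only detects them after the fact.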

Question # 7

A Generative AI Engineer is experimenting with using parameters to configure an agent in Mosaic Agent Framework. However, they are struggling to get the agent to respond with relevant information with this configuration:

config = {
    "prompt_template": "You are a trivia bot. Generate a question based on the user's input: {user_input}",
    "input_vars": ["user_input"],
    "parameters": {"temperature": 0.01, "max_tokens": 500}
}

Which error is causing the problem?

A.

The prompt does not parse the user's input vars

B.

The prompt does not set the retriever schema

C.

The prompt does not list available agents for the LLM to call

D.

The prompt is not wrapped in ChatModel
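To see how a prompt template consumes its declared input variables, here is a small sketch that mirrors the question's config. Plain `str.format` stands in for whatever templating the framework actually applies, and `render_prompt` is a hypothetical helper, not a Mosaic Agent Framework API.

```python
# Sketch: interpolating declared input variables into a prompt template.
# The config mirrors the question; str.format stands in for the framework's
# own templating, and render_prompt is a hypothetical helper.

config = {
    "prompt_template": ("You are a trivia bot. Generate a question based on "
                        "the user's input: {user_input}"),
    "input_vars": ["user_input"],
    "parameters": {"temperature": 0.01, "max_tokens": 500},
}

def render_prompt(cfg: dict, **inputs: str) -> str:
    """Fill the template, failing loudly if a declared variable is missing."""
    missing = [v for v in cfg["input_vars"] if v not in inputs]
    if missing:
        raise ValueError(f"missing input vars: {missing}")
    return cfg["prompt_template"].format(**inputs)

print(render_prompt(config, user_input="1990s movies"))
```

If the template placeholder and the declared `input_vars` do not line up, the user's input never reaches the prompt, which is the kind of failure the question is probing.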

Question # 8

A Generative AI Engineer is building an LLM to generate article summaries in the form of a type of poem, such as a haiku, given the article content. However, the initial output from the LLM does not match the desired tone or style.

Which approach will NOT improve the LLM’s response to achieve the desired response?

A.

Provide the LLM with a prompt that explicitly instructs it to generate text in the desired tone and style

B.

Use a neutralizer to normalize the tone and style of the underlying documents

C.

Include few-shot examples in the prompt to the LLM

D.

Fine-tune the LLM on a dataset of desired tone and style
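Option C, few-shot prompting, can be illustrated with a small prompt builder. The example article and haiku below are invented for illustration, and `build_prompt` is a hypothetical helper, not part of any Databricks API.

```python
# Sketch of few-shot prompting for tone and style: the prompt includes
# worked examples of the desired output format before the real input.
# The example article/haiku pair is invented for illustration.

FEW_SHOT = [
    ("Article about a rocket launch.",
     "Fire splits the dawn sky / a tower of steam and steel / silence, then the roar"),
]

def build_prompt(article: str) -> str:
    """Assemble an instruction, few-shot examples, and the target article."""
    lines = ["Summarize the article as a haiku. Examples:"]
    for src, haiku in FEW_SHOT:
        lines.append(f"Article: {src}\nHaiku: {haiku}")
    lines.append(f"Article: {article}\nHaiku:")
    return "\n".join(lines)

print(build_prompt("Article about falling interest rates."))
```

Explicit instructions, few-shot examples, and fine-tuning all steer the model's output style; normalizing the tone of the source documents does not, since the desired style lives in the output, not the input.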
