Which statement best describes the role of encoder and decoder models in natural language processing?
Which role does a "model endpoint" serve in the inference workflow of the OCI Generative AI service?
What is prompt engineering in the context of Large Language Models (LLMs)?
How are fine-tuned customer models stored to enable strong data privacy and security in the OCI Generative AI service?
Which statement is true about Fine-tuning and Parameter-Efficient Fine-Tuning (PEFT)?
What happens if a period (.) is used as a stop sequence in text generation?
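To illustrate the behavior this question probes: a stop sequence tells the generator to halt as soon as that token appears, so with "." as the stop sequence the output is cut off at the first period, i.e. at most one sentence. A minimal sketch of that truncation logic (a plain-Python illustration, not the OCI Generative AI API; whether the stop sequence itself is kept or dropped varies by service):

```python
def apply_stop_sequence(generated_text: str, stop: str = ".") -> str:
    """Truncate generated text at the first occurrence of the stop sequence.

    With "." as the stop, generation ends at the first period, so the
    result is at most one sentence. Here we keep the stop character
    itself; some APIs exclude it from the returned text.
    """
    idx = generated_text.find(stop)
    if idx == -1:
        return generated_text  # stop sequence never appeared
    return generated_text[: idx + len(stop)]

print(apply_stop_sequence("OCI offers GPUs. It also offers models."))
# → OCI offers GPUs.
```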
What is the main advantage of using few-shot model prompting to customize a Large Language Model (LLM)?
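As background for this question: few-shot prompting customizes an LLM's behavior by placing a handful of labeled examples directly in the prompt, with no retraining or weight updates. A minimal sketch of assembling such a prompt (the sentiment task and formatting are illustrative assumptions, not an OCI-specific template):

```python
def build_few_shot_prompt(examples: list[tuple[str, str]], query: str) -> str:
    """Assemble a few-shot prompt: labeled demonstrations followed by
    the unlabeled query, leaving the final label for the model to fill in."""
    blocks = [f"Text: {text}\nSentiment: {label}" for text, label in examples]
    blocks.append(f"Text: {query}\nSentiment:")
    return "\n\n".join(blocks)

examples = [
    ("I loved it", "positive"),
    ("Terrible service", "negative"),
]
prompt = build_few_shot_prompt(examples, "Great value for the price")
print(prompt)
```

Because the examples live only in the prompt, the same base model can be steered toward a new task instantly, which is the key advantage over fine-tuning.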