Prompt Engineering

A prompt is the instruction you provide to an LLM. The quality of your prompt significantly impacts the quality of the model's response.

Basic Prompting

Direct Instruction

The simplest form of prompting is giving direct instructions:

prompt = "Translate the following English text to Spanish: Hello, how are you?"

Q2: Why might a direct instruction prompt sometimes produce unexpected results?

Zero-Shot Prompting

Zero-shot prompting asks the model to perform a task without providing examples:


prompt = """Classify the sentiment of the following review as positive, negative, or neutral:

Review: "The product arrived late, but the quality exceeded my expectations."

Sentiment:"""

The model uses its training to understand the task and provide an answer.
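For example, the zero-shot prompt above could be sent with the Anthropic Python SDK roughly as follows. This is a minimal sketch: it assumes you have an API key, the model ID is only an example, and max_tokens is kept small because the expected answer is a single label.

import anthropic

client = anthropic.Anthropic(api_key="YOUR_API_KEY")  # replace with your key

prompt = """Classify the sentiment of the following review as positive, negative, or neutral:

Review: "The product arrived late, but the quality exceeded my expectations."

Sentiment:"""

response = client.messages.create(
    model="claude-3-5-sonnet-20240620",  # example model ID
    max_tokens=5,                        # the label is a single word
    messages=[{"role": "user", "content": prompt}],
)

print(response.content[0].text.strip())  # the predicted sentiment label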

Few-Shot Prompting

Few-shot prompting includes a handful of labeled examples in the prompt to show the model the expected input-output pattern:

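A few-shot sentiment prompt, for instance, might place a few labeled reviews before the one to classify. The example reviews and labels below are illustrative:

prompt = """Classify the sentiment of each review as positive, negative, or neutral.

Review: "The checkout process was quick and easy."
Sentiment: positive

Review: "Customer support never answered my emails."
Sentiment: negative

Review: "The product arrived late, but the quality exceeded my expectations."
Sentiment:"""

The labeled pairs demonstrate the task and the output format, so the model's completion is more likely to be a single, consistently formatted label.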

Q3: In what scenarios would few-shot prompting be preferable to zero-shot prompting?

Example: Sentiment Analysis with Few-Shot Learning

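A minimal sketch of such a function is shown below, assuming the Anthropic Python SDK; the API key placeholder, example model ID, and sample reviews are illustrative and should be adapted to your setup. The notes that follow walk through it line by line.

import anthropic  # Anthropic Python SDK (assumed dependency)
def analyze_sentiment(review):
    client = anthropic.Anthropic(api_key="YOUR_API_KEY")  # replace with your key

    prompt = f"""Classify the sentiment of each review as positive, negative, or neutral.
Review: "I love this product! It works perfectly."
Sentiment: positive
Review: "It broke after two days. Very disappointed."
Sentiment: negative
Review: "The package arrived and contained the item I ordered."
Sentiment: neutral

Review: "{review}"
Sentiment:"""

    response = client.messages.create(
        model="claude-3-5-sonnet-20240620",  # example model ID
        max_tokens=5,                        # the label is a single word
        messages=[{"role": "user", "content": prompt}],
    )

    return response.content[0].text  # the predicted label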

  • L1-2: Import the Anthropic library and define a function that takes a review string

  • L3: Initialize the Anthropic client with your API key

  • L5-14: Create a few-shot prompt with three examples demonstrating the task

  • L16-20: Send the request to Claude, specifying the model and maximum response length

  • L22: Extract and return the text response
