## Prompt Engineering Framework
### Phase 1: Requirements Analysis
```
Define the prompt engineering task:
**Objective:** [What should the model accomplish]
**Input:** [What information the model receives]
**Output:** [Desired format and content]
**Constraints:** [Limitations, edge cases to handle]
Requirements analysis:
1. **Task Decomposition**
- Is this a single task or multi-step?
- What reasoning is required?
- What knowledge is needed?
- What creativity level is appropriate?
2. **Output Specification**
- Format: JSON, Markdown, plain text, code?
- Structure: Fixed schema, flexible, hybrid?
- Length: Minimum, maximum, typical?
- Quality criteria: Accuracy, creativity, tone?
3. **Edge Cases**
- Invalid or ambiguous inputs
- Missing required information
- Conflicting requirements
- Adversarial inputs
4. **Evaluation Criteria**
- How will success be measured?
- What are failure modes?
- Human evaluation needed?
- Automated metrics available?
Define success criteria and test cases.
```
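The requirements above are easiest to enforce if they live next to the prompt as data. Below is a minimal sketch in Python of capturing the objective, constraints, and test cases up front; the `PromptSpec` and `TestCase` names are illustrative, not from any particular library.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class TestCase:
    """One representative input plus a predicate that decides if an output passes."""
    name: str
    input_text: str
    check: Callable[[str], bool]  # returns True if the model output is acceptable

@dataclass
class PromptSpec:
    """Requirements captured before any prompt text is written."""
    objective: str
    output_format: str
    constraints: list[str] = field(default_factory=list)
    test_cases: list[TestCase] = field(default_factory=list)

# Example: a summarization task whose success criterion is a hard length bound.
spec = PromptSpec(
    objective="Summarize a support ticket in one sentence",
    output_format="plain text, at most 25 words",
    constraints=["never include customer email addresses"],
    test_cases=[
        TestCase(
            name="short ticket",
            input_text="Printer on floor 3 is jammed again.",
            check=lambda output: len(output.split()) <= 25,
        ),
    ],
)
```

Every later phase (design, structured outputs, optimization) can then be scored against the same `test_cases`, which keeps iteration honest.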
### Phase 2: Prompt Design Patterns
#### System Prompt Architecture
````
Design a system prompt for:
**Role:** [What persona should the model adopt]
**Task:** [Primary function]
**Context:** [Background information]
System prompt structure:
1. **Identity & Purpose**
```
You are an expert [role] specializing in [domain].
Your purpose is to [primary objective].
```
2. **Guidelines & Constraints**
```
## Guidelines
- Always [required behavior]
- Never [prohibited behavior]
- When uncertain, [default action]
## Constraints
- Response length: [limits]
- Format: [required format]
- Tone: [communication style]
```
3. **Knowledge & Context**
```
## Background
[Relevant domain knowledge]
## Current Context
[Session-specific information]
```
4. **Output Format**
```
## Response Format
Structure your response as:
1. [First section]
2. [Second section]
...
```
5. **Examples** (optional)
```
## Examples
Input: [example input]
Output: [example output]
```
Generate complete system prompt with all sections.
````
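Building the system prompt from these sections programmatically keeps variants consistent and easy to diff. A small sketch, assuming plain string assembly rather than any specific prompting framework:

```python
def build_system_prompt(
    role: str,
    purpose: str,
    guidelines: list[str],
    constraints: list[str],
    response_format: list[str],
    examples: list[tuple[str, str]] | None = None,
) -> str:
    """Assemble the identity, guidelines, constraints, format, and example sections."""
    parts = [
        f"You are an expert {role}.",
        f"Your purpose is to {purpose}.",
        "",
        "## Guidelines",
        *[f"- {g}" for g in guidelines],
        "",
        "## Constraints",
        *[f"- {c}" for c in constraints],
        "",
        "## Response Format",
        "Structure your response as:",
        *[f"{i}. {section}" for i, section in enumerate(response_format, start=1)],
    ]
    if examples:
        parts += ["", "## Examples"]
        for example_input, example_output in examples:
            parts += [f"Input: {example_input}", f"Output: {example_output}"]
    return "\n".join(parts)
```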
#### Few-Shot Learning
````
Create few-shot examples for:
**Task:** [Description]
**Difficulty Level:** [Simple to complex]
**Number of Examples:** [2-5 typically]
Few-shot design principles:
1. **Example Selection**
- Cover diverse input types
- Include edge cases
- Progress from simple to complex
- Avoid examples too similar to test cases
2. **Example Format**
```
Example 1:
Input: [representative input]
Output: [correctly formatted output]
Example 2:
Input: [different input type]
Output: [correctly formatted output]
Example 3:
Input: [edge case]
Output: [correct handling of edge case]
```
3. **Annotation (optional)**
```
Input: [input]
Reasoning: [step-by-step thought process]
Output: [final output]
```
4. **Anti-Examples** (when helpful)
```
Bad Example (don't do this):
Input: [input]
Output: [incorrect output]
Why it's wrong: [explanation]
```
Generate optimized few-shot prompt with examples.
````
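Few-shot examples are easier to curate and swap when they are stored as data and rendered at call time. A sketch of one rendering function; the optional `reasoning` key covers the annotation pattern above:

```python
def few_shot_prompt(task_instruction: str, examples: list[dict], query: str) -> str:
    """Render stored examples as Input/Output pairs, then append the new query."""
    blocks = [task_instruction, ""]
    for i, example in enumerate(examples, start=1):
        blocks.append(f"Example {i}:")
        blocks.append(f"Input: {example['input']}")
        if "reasoning" in example:  # optional step-by-step annotation
            blocks.append(f"Reasoning: {example['reasoning']}")
        blocks.append(f"Output: {example['output']}")
        blocks.append("")
    blocks += ["Now handle this case:", f"Input: {query}", "Output:"]
    return "\n".join(blocks)
```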
#### Chain-of-Thought (CoT)
````
Implement chain-of-thought reasoning for:
**Task:** [Complex reasoning task]
**Reasoning Type:** [Logical, mathematical, analytical]
CoT patterns:
1. **Standard CoT**
```
Let's think through this step by step:
Step 1: [Identify key information]
Step 2: [Apply relevant rules/knowledge]
Step 3: [Make intermediate conclusions]
Step 4: [Synthesize final answer]
Therefore, the answer is: [final answer]
```
2. **Self-Consistency CoT**
```
I'll approach this problem multiple ways:
Approach A:
[reasoning path 1] → Answer: X
Approach B:
[reasoning path 2] → Answer: X
Both approaches agree, so the answer is X.
```
3. **Tree-of-Thought**
```
Let me explore multiple possibilities:
Branch 1: Assume [condition A]
→ leads to [outcome]
→ Evaluation: [viable/not viable]
Branch 2: Assume [condition B]
→ leads to [outcome]
→ Evaluation: [viable/not viable]
Best path: Branch [X] because [reasoning]
```
4. **Reflection Pattern**
```
Initial answer: [first attempt]
Let me verify this:
- Check 1: [verification step] ✓
- Check 2: [verification step] ✗ (issue found)
Corrected answer: [revised answer]
```
Apply appropriate CoT pattern to the task.
````
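Self-consistency is the easiest of these patterns to automate: sample several reasoning paths and keep the majority answer. A sketch, assuming you pass in your own `generate` function (any model client) and an `extract_answer` parser for pulling the final answer out of a completion:

```python
from collections import Counter
from typing import Callable

def self_consistent_answer(
    generate: Callable[[str], str],        # your model call, sampled with temperature > 0
    extract_answer: Callable[[str], str],  # pulls the final answer out of a completion
    question: str,
    n_samples: int = 5,
) -> str:
    """Sample several chain-of-thought completions and return the majority answer."""
    cot_prompt = f"{question}\n\nLet's think through this step by step."
    answers = [extract_answer(generate(cot_prompt)) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]
```

Sampling with a nonzero temperature is what makes the paths differ; with greedy decoding the samples typically collapse to a single answer.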
### Phase 3: Structured Outputs
````
Design structured output format for:
**Use Case:** [How the output will be consumed]
**Consumer:** [Human, API, parsing system]
Structured output patterns:
1. **JSON Output**
```
Respond ONLY with valid JSON matching this schema:
{
  "field1": "string - description",
  "field2": number,
  "field3": ["array", "of", "items"],
  "nested": {
    "subfield": "value"
  }
}
Do not include any text before or after the JSON.
```
2. **XML/Markdown Sections**
```
Format your response using these sections:
<analysis>
Your analysis here
</analysis>
<recommendation>
Your recommendation here
</recommendation>
<confidence>
high|medium|low
</confidence>
```
3. **Function Calling Format**
```
When you need to [action], respond with:
```function
{
  "name": "function_name",
  "arguments": {
    "arg1": "value",
    "arg2": "value"
  }
}
```
```
4. **Hybrid Format**
```
## Analysis
[Free-form analysis text]

## Structured Data
```json
{ "key_findings": [...], "score": number }
```

## Next Steps
[Recommendations]
```
Generate structured output specification.
````
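Whichever format you pick, the consuming code should validate the response before trusting it. A minimal parser sketch for the JSON pattern; it assumes the model may wrap its answer in a Markdown code fence, and `REQUIRED_KEYS` is a stand-in for your real schema check:

```python
import json

REQUIRED_KEYS = {"field1", "field2", "field3"}  # mirror your schema here

def parse_structured_output(raw: str) -> dict:
    """Strip an optional code fence, parse the JSON, and check required keys."""
    text = raw.strip()
    if text.startswith("```"):
        text = text.split("\n", 1)[1]    # drop the opening fence line
        text = text.rsplit("```", 1)[0]  # drop the closing fence
    data = json.loads(text)              # raises on malformed JSON
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {sorted(missing)}")
    return data
```

A failed parse is a natural trigger for a retry, with the error message appended to the follow-up prompt.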
### Phase 4: Optimization & Evaluation
```
Optimize prompt performance:
**Current Prompt:** [Paste prompt]
**Issues:** [What's not working well]
**Metrics:** [How you measure success]
Optimization process:
1. **A/B Testing**
- Create variations: [brief description of each]
- Test with same inputs
- Compare outputs on metrics
- Select winner, iterate
2. **Prompt Compression**
- Identify redundant instructions
- Combine similar guidelines
- Use examples instead of lengthy explanations
- Balance brevity with clarity
3. **Error Analysis**
- Categorize failure types
- Add specific guidance for common errors
- Include negative examples if needed
- Add validation steps
4. **Token Optimization**
- Estimate token usage
- Remove unnecessary context
- Use abbreviations where clear
- Consider multi-turn vs. single-turn
5. **Robustness Testing**
- Test with edge case inputs
- Try adversarial inputs
- Vary input phrasing
- Check for prompt-injection vulnerabilities
Generate optimized prompt with performance comparison.
```
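The A/B testing step needs little more than a harness that runs each variant on the same inputs and compares a metric. A sketch, assuming a caller-supplied `generate` (your model call) and a `score` function that encodes your success criteria:

```python
from statistics import mean
from typing import Callable

def ab_test(
    generate: Callable[[str], str],      # your model call
    variants: dict[str, str],            # variant name -> prompt template with {input}
    test_inputs: list[str],
    score: Callable[[str, str], float],  # (input, output) -> score in [0, 1]
) -> dict[str, float]:
    """Run every variant on the same inputs and report the mean score per variant."""
    results = {}
    for name, template in variants.items():
        outputs = [generate(template.format(input=text)) for text in test_inputs]
        results[name] = mean(score(text, out) for text, out in zip(test_inputs, outputs))
    return results
```

Keep the test inputs fixed between runs so that differences come from the prompt change, not from the data.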
## Advanced Techniques
### Prompt Chaining
```
Design a prompt chain for:
**Complex Task:** [Multi-step process]
**Steps:** [List of steps]
Chain architecture:
Step 1: [Task] → Output feeds into Step 2
Step 2: [Task] → Output feeds into Step 3
...
For each step, define:
- Input: What it receives
- Prompt: Instructions for this step
- Output: What it produces
- Validation: How to verify success
- Error handling: What if this step fails
Generate complete prompt chain specification.
```
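In code, a chain is a list of steps where each step's output is validated before it becomes the next step's input. A sketch with caller-supplied step and validation functions; the single-retry policy is one illustrative choice for error handling:

```python
from typing import Callable

# (step name, run: input -> output, validate: output -> ok?)
Step = tuple[str, Callable[[str], str], Callable[[str], bool]]

def run_chain(steps: list[Step], initial_input: str, max_retries: int = 1) -> str:
    """Feed each step's output into the next, retrying a step if validation fails."""
    data = initial_input
    for name, run, validate in steps:
        for attempt in range(max_retries + 1):
            output = run(data)
            if validate(output):
                break
            if attempt == max_retries:
                raise RuntimeError(f"step '{name}' failed validation")
        data = output
    return data
```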
### Meta-Prompting
```
Create a meta-prompt that generates prompts:
**Target Task Type:** [What kind of prompts to generate]
**Constraints:** [Requirements for generated prompts]
Meta-prompt structure:
You are a prompt engineering expert. Generate a high-quality prompt for the following task:
Task Description: {user_input}
The generated prompt should:
- Follow best practices for [model type]
- Include clear instructions
- Define output format
- Handle edge cases
Output the generated prompt wrapped in <prompt> tags.
```
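Because the meta-prompt asks for the result inside <prompt> tags, the calling code only needs a small extractor. A sketch:

```python
import re

def extract_generated_prompt(model_output: str) -> str:
    """Pull the generated prompt out of <prompt>...</prompt> tags."""
    match = re.search(r"<prompt>(.*?)</prompt>", model_output, flags=re.DOTALL)
    if match is None:
        raise ValueError("no <prompt> tags found in the meta-prompt output")
    return match.group(1).strip()
```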
## Best Practices
1. **Be Specific**: Vague instructions → vague outputs
2. **Use Examples**: Show, don't just tell
3. **Define Format**: Specify exactly what you want
4. **Handle Edge Cases**: Explicitly address ambiguity
5. **Iterate Systematically**: Change one thing at a time
6. **Version Control**: Track prompt changes and results
7. **Test Extensively**: Use diverse inputs
8. **Document Decisions**: Record why prompts work
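The version-control and testing practices above are easier to keep up when every prompt version is logged together with its evaluation results. A standard-library sketch; the append-only JSONL layout is an illustrative choice, not a requirement:

```python
import datetime
import hashlib
import json

def log_prompt_version(path: str, prompt: str, metrics: dict) -> None:
    """Append one prompt version, identified by a content hash, with its eval metrics."""
    record = {
        "hash": hashlib.sha256(prompt.encode("utf-8")).hexdigest()[:12],
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "prompt": prompt,
        "metrics": metrics,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```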
## Example Session
**User**: "Create a prompt that extracts action items from meeting notes."
**Response**:
> Here's an optimized prompt for extracting action items:
>
> ````markdown
> You are an executive assistant who specializes in extracting action items from meeting notes.
>
> ## Task
> Extract all action items from the meeting notes provided. An action item is a specific task that someone committed to doing.
>
> ## Output Format
> Return a JSON array of action items:
> ```json
> [
>   {
>     "task": "Brief description of the action item",
>     "owner": "Person responsible (or 'Unassigned')",
>     "deadline": "Mentioned deadline (or 'Not specified')",
>     "priority": "high|medium|low (infer from context)",
>     "context": "Brief relevant context from the meeting"
>   }
> ]
> ```
>
> ## Guidelines
> - Include only concrete, actionable tasks
> - Exclude general discussions or FYIs
> - If owner is unclear, list as "Unassigned"
> - Infer priority from urgency language ("ASAP", "by EOD", etc.)
> - Return empty array [] if no action items found
>
> ## Example
> Input: "John will send the report by Friday. We discussed the budget."
> Output: [{"task": "Send the report", "owner": "John", "deadline": "Friday", "priority": "medium", "context": "Report delivery"}]
>
> Now extract action items from:
> {meeting_notes}
> ````
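On the consuming side, the strict output contract makes parsing simple. A sketch of the caller, assuming the model honors the JSON-array format; the defaults mirror the guidelines in the prompt:

```python
import json

def parse_action_items(model_output: str) -> list[dict]:
    """Parse the JSON array of action items; an empty list means none were found."""
    items = json.loads(model_output)
    if not isinstance(items, list):
        raise ValueError("expected a JSON array of action items")
    for item in items:
        item.setdefault("owner", "Unassigned")
        item.setdefault("deadline", "Not specified")
    return items
```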
## Related Resources
- [Anthropic Prompt Engineering Guide](https://docs.anthropic.com/claude/docs/prompt-engineering)
- [OpenAI Prompt Engineering](https://platform.openai.com/docs/guides/prompt-engineering)
- [Learn Prompting](https://learnprompting.org/)
- [Prompt Engineering Guide](https://www.promptingguide.ai/)