Prompt Engineering Patterns That Actually Work

Process thousands of prompts and the same patterns keep emerging. These aren't theoretical frameworks; they're battle-tested approaches to getting consistent, high-quality outputs from language models.

════════════════════════════════════════════════════════════════

The gap between a mediocre prompt and an excellent one isn't about magic words or secret techniques. It's about understanding how language models process information and structuring your inputs accordingly.

After building dozens of production AI systems, I've distilled the approaches that consistently deliver results. Not academic exercises—patterns that ship.

Pattern 1: The Constraint Sandwich

Models perform better when they know their boundaries upfront. The constraint sandwich places limitations before the task, not after:

You are an expert technical writer.
Constraints:
- Maximum 3 paragraphs
- No jargon without explanation
- Include one concrete example

Task: Explain how database indexes work.

Why it works: The model allocates attention to constraints before generating content. Post-hoc constraints often get ignored because the generation has already committed to a direction.
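
If you're assembling prompts in code, the ordering is easy to enforce. A minimal sketch in Python (the helper and its name are mine, not any particular library):

def constraint_sandwich(role, constraints, task):
    # Boundaries go in before the task so the model commits to them first.
    lines = [role, "Constraints:"]
    lines += [f"- {c}" for c in constraints]
    lines += ["", f"Task: {task}"]
    return "\n".join(lines)

prompt = constraint_sandwich(
    role="You are an expert technical writer.",
    constraints=[
        "Maximum 3 paragraphs",
        "No jargon without explanation",
        "Include one concrete example",
    ],
    task="Explain how database indexes work.",
)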

Pattern 2: Role + Context + Task (RCT)

The most reliable structure for complex prompts. Each component serves a specific purpose:

Key insight: Role sets the expertise lens. Context provides the specific situation. Task defines the exact output needed. Missing any one degrades quality significantly.

[ROLE]
You are a senior security engineer conducting a code review.

[CONTEXT]
This Python function handles user authentication for a
healthcare application. HIPAA compliance is required.

[TASK]
Review this code and identify:
1. Security vulnerabilities
2. HIPAA compliance issues
3. Recommended fixes with code examples
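
In code, RCT is just three labeled sections joined in a fixed order. A rough sketch (the function is illustrative; the section labels mirror the example above):

def rct_prompt(role, context, task):
    # Role sets the lens, context grounds the situation, task pins down the output.
    return f"[ROLE]\n{role}\n\n[CONTEXT]\n{context}\n\n[TASK]\n{task}"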

Pattern 3: Few-Shot with Reasoning

Examples are powerful, but examples with visible reasoning are transformative. Show the model how to think, not just what to output:

Classify the sentiment of customer feedback.

Example 1:
Input: "The app crashes every time I try to upload photos"
Reasoning: Describes a technical failure affecting core functionality.
User is unable to complete their intended action. No positive aspects mentioned.
Output: Negative

Example 2:
Input: "Love the new dark mode! Though wish the fonts were bigger"
Reasoning: Expresses enthusiasm for a new feature (positive).
Includes a suggestion for improvement (constructive, not negative).
Overall tone is appreciative with minor feedback.
Output: Positive

Now classify:
Input: "It works but takes forever to load"
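
When the examples live in data rather than in a hand-written prompt, you can assemble the few-shot block the same way. A sketch, assuming each example is stored as an (input, reasoning, label) triple:

EXAMPLES = [
    ("The app crashes every time I try to upload photos",
     "Describes a technical failure affecting core functionality. "
     "User is unable to complete their intended action. No positive aspects mentioned.",
     "Negative"),
    ("Love the new dark mode! Though wish the fonts were bigger",
     "Expresses enthusiasm for a new feature (positive). "
     "Includes a suggestion for improvement (constructive, not negative).",
     "Positive"),
]

def few_shot_prompt(examples, new_input):
    # Each example carries its reasoning, so the model sees how to think, not just the label.
    parts = ["Classify the sentiment of customer feedback.", ""]
    for i, (text, reasoning, label) in enumerate(examples, start=1):
        parts += [f"Example {i}:",
                  f'Input: "{text}"',
                  f"Reasoning: {reasoning}",
                  f"Output: {label}",
                  ""]
    parts += ["Now classify:", f'Input: "{new_input}"']
    return "\n".join(parts)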

Pattern 4: Output Scaffolding

Pre-structure the response format. This eliminates ambiguity and ensures you get parseable, consistent outputs:

Analyze this error log and respond in exactly this format:

ERROR_TYPE: [single word classification]
ROOT_CAUSE: [one sentence]
SEVERITY: [low/medium/high/critical]
IMMEDIATE_ACTION: [specific command or step]
PERMANENT_FIX: [2-3 sentences max]

This pattern is essential for any automated pipeline. Without explicit structure, you're parsing free-form text—a recipe for brittle systems.
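
The payoff is on the consuming side: a fixed KEY: value layout parses in a few lines instead of requiring model-specific heuristics. A minimal sketch (field names match the scaffold above; the error handling is illustrative):

EXPECTED_FIELDS = ["ERROR_TYPE", "ROOT_CAUSE", "SEVERITY",
                   "IMMEDIATE_ACTION", "PERMANENT_FIX"]

def parse_scaffold(response):
    # Each line should be "FIELD: value"; anything missing is treated as a failure.
    result = {}
    for line in response.splitlines():
        key, sep, value = line.partition(":")
        if sep and key.strip() in EXPECTED_FIELDS:
            result[key.strip()] = value.strip()
    missing = [f for f in EXPECTED_FIELDS if f not in result]
    if missing:
        raise ValueError(f"Response missing fields: {missing}")
    return result

Failing loudly on a missing field beats guessing: formatting drift becomes an error you can see and retry, not silent garbage downstream.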

Pattern 5: Negative Space Prompting

Sometimes the best way to get what you want is to explicitly state what you don't want:

"Write a product description. Do NOT include: superlatives (best, amazing, revolutionary), calls to action, or questions. Do NOT start with 'Introducing' or 'Meet the'."

Models have strong priors from training data. Marketing copy defaults to certain patterns. Negative constraints override these defaults more reliably than positive instructions alone.
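
Negative constraints are also cheap to check mechanically, so verify the output instead of trusting the instruction alone. A sketch of a post-generation check; the banned list mirrors the prompt above, and generate() stands in for whatever completion call you already use:

BANNED = ["best", "amazing", "revolutionary", "introducing", "meet the"]

def violations(text):
    # Crude substring check; good enough to catch the obvious marketing defaults.
    lowered = text.lower()
    return [word for word in BANNED if word in lowered]

def product_description(generate, prompt, max_attempts=3):
    # Retry until the draft respects the negative constraints (no banned words, no questions).
    for _ in range(max_attempts):
        draft = generate(prompt)
        if not violations(draft) and "?" not in draft:
            return draft
    raise RuntimeError("Could not satisfy negative constraints")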

Combining Patterns

These patterns compose. A production prompt might use an RCT frame for the setup, a constraint sandwich to bound the output, one or two few-shot examples with visible reasoning, and output scaffolding so the result stays parseable.

The key is understanding why each pattern works. Then you can adapt them to your specific use case rather than copying templates blindly.
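
For concreteness, here's a rough sketch of one way the pieces can be stitched together; the structure is mine, not a canonical template:

def composed_prompt(role, context, constraints, examples, task, output_format):
    # One prompt, several patterns: RCT framing, constraints up front,
    # worked examples, and a fixed output scaffold at the end.
    sections = [
        f"[ROLE]\n{role}",
        f"[CONTEXT]\n{context}",
        "Constraints:\n" + "\n".join(f"- {c}" for c in constraints),
        "Examples:\n" + "\n\n".join(examples),
        f"[TASK]\n{task}",
        "Respond in exactly this format:\n" + output_format,
    ]
    return "\n\n".join(sections)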

What Doesn't Work

A few anti-patterns I see repeatedly: vague asks with no constraints, walls of instructions with no structure for the model to latch onto, and filler phrases that spend tokens without changing the output.

Effective prompting is precise, structured, and economical. Every token should earn its place.

────────────────────────────────────────

These patterns are starting points. The real skill is in iteration—testing variations, measuring outputs, and refining based on actual results. Prompt engineering isn't a solved problem; it's an evolving practice.

Next time: We'll look at how these patterns change when you're building agent systems with tool use.