Prompting Techniques
Great prompting isn't magic—it's a set of learnable techniques. Master these and you'll get dramatically better results from any AI model.
1. Chain-of-Thought Prompting
What it is: Asking the AI to show its work and reason step-by-step instead of jumping to conclusions.
Why it works: AI models (like humans) make fewer reasoning errors when they break problems into steps. Explicit reasoning also lets you spot where things go wrong.
Before
What should I do about my startup's declining user engagement?
Result: Generic advice like "improve your product" and "talk to users."
After
My startup's user engagement has declined 30% over the past 3 months. Walk me through diagnosing this step-by-step:

1. What data should I look at first to understand the scope?
2. What are the most likely root causes, and how would I test each?
3. For each likely cause, what would be the first validation step?
4. What metrics would tell me if my fix is working?

Think through each step before giving recommendations.
Result: Structured diagnostic process with specific, actionable steps.
When to use
- Complex problem-solving
- Math or logic tasks
- Debugging or troubleshooting
- Strategic planning
- Any time accuracy matters more than speed
Pro tip: Add "Let's think through this step by step" or "Show your reasoning" to almost any analytical prompt for better results.
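If you call models from code, the same move is easy to automate. Here's a minimal Python sketch; `with_chain_of_thought` is a made-up helper name, and the numbered steps are just one generic framing:

```python
def with_chain_of_thought(question: str) -> str:
    """Wrap a question so the model reasons step by step before answering."""
    return (
        f"{question}\n\n"
        "Walk me through this step-by-step:\n"
        "1. What data or facts are relevant?\n"
        "2. What are the most likely causes or options?\n"
        "3. How would I test or validate each one?\n"
        "Think through each step before giving recommendations."
    )

prompt = with_chain_of_thought(
    "My startup's user engagement has declined 30% over the past 3 months. "
    "How should I diagnose this?"
)
print(prompt)  # send this string to whatever model or client you use
```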
2. Few-Shot Prompting
What it is: Giving the AI examples of what you want before asking for new output.
Why it works: AI learns patterns from examples better than from descriptions. Show, don't just tell.
Before
Classify these customer support tickets by urgency.
Result: Inconsistent classification based on the AI's assumptions about urgency.
After
Classify each ticket as HIGH, MEDIUM, or LOW urgency.

Examples:
- Ticket: "The entire site is down and we're losing sales" → Urgency: HIGH
- Ticket: "I'd like to request a feature for next quarter" → Urgency: LOW
- Ticket: "My report exports are timing out" → Urgency: MEDIUM
- Ticket: "The login button doesn't work on mobile" → Urgency: HIGH

Now classify these: [TICKETS TO CLASSIFY]
Result: Consistent classification matching your criteria.
When to use
- Classification tasks
- Tone/style matching
- Formatting structured output
- Teaching the AI your preferences
- Translation or transformation tasks
Pro tip: Include 3-5 diverse examples covering edge cases. More examples generally help, but quality matters more than quantity.
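Few-shot prompts are also easy to generate programmatically, which keeps your examples consistent across runs. A minimal sketch; `few_shot_prompt` is a made-up helper, and the example tickets mirror the prompt above:

```python
# Labeled examples: (ticket text, urgency) pairs the model should imitate.
EXAMPLES = [
    ("The entire site is down and we're losing sales", "HIGH"),
    ("I'd like to request a feature for next quarter", "LOW"),
    ("My report exports are timing out", "MEDIUM"),
    ("The login button doesn't work on mobile", "HIGH"),
]

def few_shot_prompt(tickets: list[str]) -> str:
    """Build a classification prompt that shows labeled examples first."""
    shots = "\n\n".join(
        f'Ticket: "{text}"\nUrgency: {label}' for text, label in EXAMPLES
    )
    queries = "\n\n".join(f'Ticket: "{t}"\nUrgency:' for t in tickets)
    return (
        "Classify each ticket as HIGH, MEDIUM, or LOW urgency.\n\n"
        f"Examples:\n\n{shots}\n\nNow classify these:\n\n{queries}"
    )

print(few_shot_prompt(["Password reset emails are delayed by an hour"]))
```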
3. Role Assignment
What it is: Asking the AI to adopt a specific persona, expertise, or perspective.
Why it works: A role steers the model toward the vocabulary, framing, and depth associated with that expertise. A "marketing expert" responds differently than a "software engineer."
Before
How should I price my new SaaS product?
Result: Generic pricing advice.
After
You are a pricing strategist with 20 years of experience in B2B SaaS. I've built a project management tool for remote engineering teams.

Key details:
- Current competitors charge $10-25/user/month
- We have unique features around async collaboration
- Target customers are 50-200 person startups
- We're pre-revenue and need to balance growth with sustainability

How should I approach pricing? Consider:
- Pricing model (per seat, usage-based, tiered?)
- Price anchoring strategies
- When to grandfather early users
- How to test pricing before committing
Result: Expert-level strategic advice tailored to the situation.
When to use
- Needing specialized expertise
- Wanting a specific perspective (devil's advocate, customer, etc.)
- Adjusting tone and depth
- Simulating conversations
- Getting feedback from a specific viewpoint
Pro tip: Be specific. "Experienced divorce attorney" gets better results than "lawyer." Include relevant background in the role description.
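In a chat API, the persona usually belongs in the system message. A sketch assuming an OpenAI-style message format (a list of role/content dicts); `role_messages` is a hypothetical helper, so adapt it to whatever client you use:

```python
def role_messages(role: str, task: str) -> list[dict]:
    """Put the persona in the system message and the task in the user message."""
    return [
        {"role": "system", "content": f"You are {role}."},
        {"role": "user", "content": task},
    ]

messages = role_messages(
    "a pricing strategist with 20 years of experience in B2B SaaS",
    "How should I approach pricing a project management tool "
    "for remote engineering teams?",
)
print(messages)
```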
4. Output Structuring
What it is: Explicitly defining the format, structure, and organization you want.
Why it works: AI defaults to paragraphs. If you want something else, you have to ask. Clear structure makes output more useful and easier to act on.
Before
Analyze our competitors.
Result: Unstructured paragraphs mixing everything together.
After
Analyze our top 3 competitors: [Competitor A], [Competitor B], [Competitor C]. For each competitor, provide:

OVERVIEW
- Founded: [year]
- Funding/Size: [info]
- Target market: [description]

STRENGTHS (2-3 specific, evidence-based)

WEAKNESSES (2-3 specific, evidence-based)

POSITIONING (how they describe themselves)

PRICING (what they charge, pricing model)

STRATEGIC THREAT LEVEL - High/Medium/Low with justification

OPPORTUNITIES FOR US
- Gaps we could exploit
- Differentiation opportunities

Format as markdown. Be specific; avoid generic statements like "they have a strong team."
Result: Consistently structured analysis you can scan, compare, and act on.
When to use
- Analysis and research
- Comparison tasks
- Reports and summaries
- Data extraction
- Any time you need scannable output
Pro tip: Use specific markers (ALL CAPS, markdown headers, bullet points) to enforce structure. The AI will follow your formatting cues.
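Structure templates are a natural fit for code, since the section list can be reused across every analysis. A sketch; `structured_prompt` and the SECTIONS list are illustrative, not a fixed schema:

```python
# The fixed sections every competitor analysis should contain.
SECTIONS = [
    "OVERVIEW",
    "STRENGTHS (2-3 specific, evidence-based)",
    "WEAKNESSES (2-3 specific, evidence-based)",
    "POSITIONING",
    "PRICING",
    "STRATEGIC THREAT LEVEL (High/Medium/Low with justification)",
]

def structured_prompt(competitors: list[str]) -> str:
    """Spell out the exact headings so every answer comes back the same shape."""
    outline = "\n".join(f"## {section}" for section in SECTIONS)
    return (
        f"Analyze these competitors: {', '.join(competitors)}.\n\n"
        f"For each one, use exactly these markdown sections:\n\n{outline}\n\n"
        "Be specific and evidence-based; avoid generic statements."
    )

print(structured_prompt(["Competitor A", "Competitor B"]))
```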
5. Iterative Refinement
What it is: Treating prompting as a conversation, not a single shot. Build on previous outputs.
Why it works: Complex tasks can't be fully specified upfront. Iteration lets you course-correct and add nuance.
Before
[One massive prompt trying to specify everything]
Result: Overwhelming, often misses the mark on key aspects.
After
- Round 1: "Give me 10 blog post ideas about remote work."
- Round 2: "I like ideas 3 and 7. Develop full outlines for both with H2s and key points."
- Round 3: "Let's go with idea 3. Write the introduction and first section."
- Round 4: "This is too formal. Make it more conversational and add a personal anecdote hook."
- Round 5: "Better. Now add data points to support the claims in paragraph 3."
Result: Polished output shaped through feedback, not guessed upfront.
When to use
- Complex creative tasks
- When you're not sure exactly what you want
- Building something large (article, codebase, strategy)
- Polishing and refining
- Learning what works through experimentation
Pro tip: Save successful iteration patterns. They become reusable workflows.
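Under the hood, iteration is just accumulating conversation history. A minimal sketch of that bookkeeping, assuming an OpenAI-style message list; the angle-bracketed replies are placeholders for real model output:

```python
# The running transcript: each refinement is appended, never overwritten,
# so the model always sees earlier rounds and your feedback on them.
history: list[dict] = []

def add_turn(request: str, reply: str) -> None:
    """Record one round of the conversation."""
    history.append({"role": "user", "content": request})
    history.append({"role": "assistant", "content": reply})

add_turn("Give me 10 blog post ideas about remote work.", "<model's 10 ideas>")
add_turn("Develop full outlines for ideas 3 and 7.", "<model's outlines>")

# The next request would send `history` plus a new user message,
# e.g. "This is too formal. Make it more conversational."
```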
6. Context Priming
What it is: Loading relevant context before making your request.
Why it works: AI has no memory of your situation. The more context you provide, the more tailored and useful the response.
Before
Should I pivot my startup?
Result: Generic advice that ignores your specific situation.
After
Context about my situation:
- Product: B2B SaaS for restaurant inventory management
- Traction: 12 customers, $2K MRR, launched 8 months ago
- Burn rate: $15K/month, 6 months runway remaining
- The problem: Customer acquisition is slow (3-month sales cycle), churn is high (15% monthly)
- The team: 2 founders (technical), no sales expertise
- Market feedback: Users love the product but say it's hard to justify cost for small restaurants

Based on this context, should we:
- A) Double down on current market with better sales
- B) Pivot to serve larger restaurant chains
- C) Pivot to adjacent market (retail inventory)
- D) Something else

What additional information would help you give better advice?
Result: Situation-specific analysis recognizing constraints and tradeoffs.
When to use
- Strategic decisions
- Advice and recommendations
- Troubleshooting
- Creative projects
- Any time your situation matters
Pro tip: Structure context clearly (bullet points, sections) so the AI can parse it. Include constraints, goals, relevant history, and what you've already tried.
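Context priming also scripts cleanly: keep your situation in a dict and prepend it to every question. A sketch; `primed_prompt` is a made-up helper, and the details echo the example above:

```python
def primed_prompt(context: dict[str, str], question: str) -> str:
    """Prefix the request with clearly labeled context bullets."""
    bullets = "\n".join(f"- {label}: {detail}" for label, detail in context.items())
    return f"Context about my situation:\n{bullets}\n\n{question}"

prompt = primed_prompt(
    {
        "Product": "B2B SaaS for restaurant inventory management",
        "Traction": "12 customers, $2K MRR, launched 8 months ago",
        "Runway": "6 months at a $15K/month burn rate",
    },
    "Based on this context, should we pivot? "
    "What additional information would help you give better advice?",
)
print(prompt)
```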
7. Constraint-Based Prompting
What it is: Explicitly stating what the output should NOT include or what limits to respect.
Why it works: AI defaults to comprehensive responses. Constraints force focus and prevent unwanted elements.
Before
Summarize this article.
Result: Summary that's too long, includes unnecessary details, and misses your use case.
After
Summarize this article in exactly 3 sentences.

Constraints:
- Maximum 25 words per sentence
- Focus only on findings relevant to healthcare applications
- Do not mention the authors or their affiliations
- Use simple language (8th-grade reading level)
- No jargon without explanation

Article: [PASTE ARTICLE]
Result: Tightly focused summary matching your exact specifications.
When to use
- When output length matters
- Removing unwanted content (jargon, speculation, etc.)
- Enforcing style or tone
- Focusing on specific aspects
- Creating consistency across multiple outputs
Pro tip: Phrase constraints as "Do not..." or "Avoid...". Negative instructions often work better than trying to specify everything you do want.
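Constraints can live in a reusable list so every output gets the same limits. A sketch; `constrained_prompt` is an illustrative helper, not a library function:

```python
def constrained_prompt(task: str, constraints: list[str]) -> str:
    """Append explicit limits so the model knows what to leave out."""
    rules = "\n".join(f"- {rule}" for rule in constraints)
    return f"{task}\n\nConstraints:\n{rules}"

prompt = constrained_prompt(
    "Summarize this article in exactly 3 sentences.",
    [
        "Maximum 25 words per sentence",
        "Do not mention the authors or their affiliations",
        "Use simple language (8th-grade reading level)",
    ],
) + "\n\nArticle: [PASTE ARTICLE]"
print(prompt)
```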
Putting It All Together
These techniques compound. A great prompt often uses several:
You are an experienced UX researcher (Role Assignment). I'm showing you 3 examples of user feedback and how I categorized them (Few-Shot). Now categorize these 10 new pieces of feedback using the same approach (Few-Shot continued).

For each, provide:
- Category
- Sentiment
- Priority
- Suggested action

Think through ambiguous cases step by step (Chain-of-Thought). Do not include the original feedback text in your output, just the analysis (Constraint).
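In code, combining techniques is mostly concatenating the pieces in a sensible order. A sketch that stacks role assignment, few-shot examples, output structuring, chain-of-thought, and a constraint; all names and sample data here are made up for illustration:

```python
def combined_prompt(
    role: str,
    examples: list[tuple[str, str]],
    feedback: list[str],
    fields: list[str],
) -> str:
    """Stack several prompting techniques in one prompt string."""
    shots = "\n\n".join(f"Feedback: {text}\nCategory: {cat}" for text, cat in examples)
    items = "\n".join(f"- {item}" for item in feedback)
    wanted = "\n".join(f"- {field}" for field in fields)
    return (
        f"You are {role}.\n\n"                                        # role assignment
        f"Examples of how I categorize feedback:\n\n{shots}\n\n"      # few-shot
        f"Now categorize these using the same approach:\n{items}\n\n"
        f"For each, provide:\n{wanted}\n\n"                           # output structuring
        "Think through ambiguous cases step by step.\n"               # chain-of-thought
        "Do not include the original feedback text in your output."   # constraint
    )

print(combined_prompt(
    "an experienced UX researcher",
    [("The export button is impossible to find", "Usability")],
    ["Onboarding took me an hour", "Love the new dashboard"],
    ["Category", "Sentiment", "Priority", "Suggested action"],
))
```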
Quick Reference
| Technique | Use When | Key Phrase |
|---|---|---|
| Chain-of-Thought | Complex reasoning | "Think step by step" |
| Few-Shot | Pattern matching | "Here are examples..." |
| Role Assignment | Need expertise | "You are an [expert]..." |
| Output Structuring | Need organized output | "Format as..." |
| Iterative Refinement | Complex tasks | Multi-turn conversation |
| Context Priming | Situation matters | "Context: [details]" |
| Constraint-Based | Need limits | "Do not..." "Maximum..." |
Ready to apply these? Browse the Prompt Library for ready-to-use examples.