Prompt Engineering: From Shallow to Substantial Answers
We've built bots that are specific (explicit instructions), formatted (structured JSON), and logical (chain-of-thought reasoning). But what about depth?
This post is for you if you've ever asked an AI a complex, nuanced question (like "Is a hotdog a sandwich?") and gotten a fluffy, non-committal, or shallow answer.
Today, we'll build a Debate Bot and use two techniques—Step-Back Prompting and Goal Decomposition—to force it to move beyond shallow opinions and produce deep, structured, and well-reasoned arguments.
The problem: the shallow answer
Let's give our bot a classic, nuanced question.
Use Case: "Is a hotdog a sandwich?"
```mermaid
graph TD
A["User: 'Is a hotdog a sandwich?'"] --> B(LLM)
B --> C["Bot: 'That's a fun debate! It's a really divisive topic. Many people say no, but some argue it is. What do you think?'"]
style C fill:#fff8e1,stroke:#f57f17,color:#212121
```
Why this is bad: This is a "fluffy", unhelpful, opinion-based answer. It's not wrong, but it has zero substance. It doesn't help the user understand the reason for the debate.
Improvement 1: Define principles first (Step-Back Prompting)
The bot gave a shallow answer because it jumped straight to the popular opinion. We need to force it to define its principles first.
This is called Step-Back Prompting. We ask the LLM to "step back" from the specific question and first define the general rules or concepts that govern it.
The "How": We'll write a prompt that explicitly asks the bot to define "sandwich" before answering the question.
```python
prompt = """
I have a specific question: "Is a hotdog a sandwich?"
Before you answer, please "step back" and define the
general principles.
1. First, provide the technical/taxonomical definition of a sandwich
(e.g., based on bread, filling, structure).
2. Second, provide the common culinary/cultural definition of a sandwich.
3. Finally, use your definitions to analyze whether a
hotdog fits into either or both categories and give a final conclusion.
"""
```
```mermaid
graph TD
A["User: Step-Back Hotdog Prompt"] --> B(LLM)
subgraph PLAN["LLM's Internal Plan"]
direction TB
B1["1. Define 'Sandwich' (Technical)"]
B1 --> B2["2. Define 'Sandwich' (Culinary)"]
B2 --> B3["3. Analyze Hotdog vs. Definitions"]
end
B --> C["Bot: Provides technical and culinary definitions, then analyzes hotdog against both"]
style C fill:#e8f5e9,stroke:#388e3c,color:#212121
```
Observation: We've completely changed the output. By forcing the LLM to define its principles first (like in our chain-of-thought post), we get a high-quality, well-reasoned, and educational answer instead of a shallow one.
Think About It: This "step-back" technique is perfect for any nuanced topic. How could you use it to answer other questions, like "Is a virus alive?" or "Is AI creative?"
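Because the structure of a step-back prompt is the same for any nuanced question, you can template it. Here's a minimal sketch in Python; the helper name `build_step_back_prompt` is our own invention, and you'd send the resulting string to whatever LLM API you use:

```python
def build_step_back_prompt(question: str, concept: str) -> str:
    """Template the step-back pattern: define general principles before answering."""
    return (
        f'I have a specific question: "{question}"\n'
        'Before you answer, please "step back" and define the general principles.\n'
        f'1. First, provide the technical/taxonomical definition of "{concept}".\n'
        f'2. Second, provide the common cultural definition of "{concept}".\n'
        '3. Finally, use your definitions to analyze the question '
        'and give a final conclusion.'
    )

# Any of the "Think About It" questions drops straight in:
prompt = build_step_back_prompt("Is a virus alive?", "alive")
```

The structure stays fixed; only the question and the concept to define change.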
Improvement 2: Decompose the goal (Goal Decomposition)
Step-Back Prompting is great for analyzing a single question. But what if we have a huge, complex request?
Use Case: "Plan a 3-day marketing campaign for our new product."
A simple prompt will give a rambling, unstructured essay. We need to break the goal itself into smaller, logical pieces. This is Goal Decomposition.
The "How": We'll structure our prompt as a to-do list for the LLM.
```python
prompt = """
Plan a 3-day marketing campaign for our new product (an AI-powered
email assistant).
Please provide a plan that includes these 4 distinct components:
1. **Overall Theme:** A single, catchy theme for the campaign.
2. **Daily Breakdown:** A plan for Day 1, Day 2, and Day 3.
3. **Target Audience:** The specific user persona we are targeting.
4. **Key Metrics:** 3-4 metrics we will use to measure success.
"""
```
```mermaid
graph TD
A["User: 'Plan a campaign...'"] --> B(LLM)
subgraph TODO["The 'To-Do List' (Goal Decomposition)"]
direction TB
B1["1. Find Theme"]
B1 --> B2["2. Create 3-Day Plan"]
B2 --> B3["3. Define Audience"]
B3 --> B4["4. List Metrics"]
end
TODO --> B
B --> C["Bot: Provides structured plan with theme, daily breakdown, target audience, and metrics"]
style C fill:#e8f5e9,stroke:#388e3c,color:#212121
```
Observation: By decomposing the goal for the LLM, we've turned a complex, creative task into a simple checklist. The LLM fills in each section, resulting in a perfectly structured and highly useful output.
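This checklist pattern also templates cleanly. Here's a minimal sketch, assuming a hypothetical helper `build_decomposed_prompt` (our own name, not a library API) that takes the goal plus a list of (name, description) components:

```python
def build_decomposed_prompt(goal: str, components: list[tuple[str, str]]) -> str:
    """Turn a big goal and a list of (name, description) parts into a checklist prompt."""
    lines = [
        goal,
        "",
        f"Please provide a plan that includes these {len(components)} distinct components:",
    ]
    for i, (name, description) in enumerate(components, start=1):
        lines.append(f"{i}. **{name}:** {description}")
    return "\n".join(lines)

prompt = build_decomposed_prompt(
    "Plan a 3-day marketing campaign for our new product (an AI-powered email assistant).",
    [
        ("Overall Theme", "A single, catchy theme for the campaign."),
        ("Daily Breakdown", "A plan for Day 1, Day 2, and Day 3."),
        ("Target Audience", "The specific user persona we are targeting."),
        ("Key Metrics", "3-4 metrics we will use to measure success."),
    ],
)
```

Swapping in a different goal and component list reuses the same decomposition structure for any complex task.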
Challenge for you
- Use Case: You are building a "Blog Post Writer" bot.
- The Problem: You ask, "Write a blog post about RAG." The bot gives you a wall of text.
- Your Task: Design a Goal Decomposition prompt for this. What 4-5 sections would you ask the bot to generate to create a well-structured blog post? (Hint: Think about "Introduction", "The Problem", "The Solution", etc.)
Key takeaways
- Step-back prompting adds depth: Defining principles first transforms shallow opinions into well-reasoned arguments
- Goal decomposition structures complexity: Breaking large tasks into components ensures comprehensive, organized outputs
- Combine techniques for best results: Step-back for analysis, goal decomposition for planning, and chain-of-thought for logic
- Structure guides quality: Providing a framework helps LLMs produce more useful, actionable content
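The "combine techniques" takeaway can itself be sketched as one prompt template: a step-back section that defines principles first, followed by a decomposed checklist for the output. The helper below is a hypothetical illustration, not a library API:

```python
def build_combined_prompt(question: str, principles: list[str], sections: list[str]) -> str:
    """Step-back (define principles first), then goal decomposition (structured output)."""
    lines = [
        f'My question: "{question}"',
        "",
        'Before answering, "step back" and define these general principles:',
    ]
    lines += [f"{i}. {p}" for i, p in enumerate(principles, start=1)]
    lines += ["", "Then structure your final answer into these sections:"]
    lines += [f"{i}. **{s}**" for i, s in enumerate(sections, start=1)]
    return "\n".join(lines)

prompt = build_combined_prompt(
    "Is AI creative?",
    ["The technical definition of creativity", "The cultural definition of creativity"],
    ["Definitions", "Analysis", "Conclusion"],
)
```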
For more on building production AI systems, check out our AI Bootcamp for Software Engineers.