Prompt Engineering: Giving Your Bot Tools to See the World

Param Harrison

In our previous posts, we've pushed prompting to its absolute limit. We've taught our AI to be specific (explicit instructions), to format data (structured JSON), to "think" logically (chain-of-thought), and even to critique its own work (self-critique).

But our AI is still stuck in its own head. It's a "closed-book" brain. It has no access to real-time data, your company's database, or the internet.

This post is for you if you're ready to build a true Agent. We will give our LLM "eyes and ears" (Tools) and a "brain" (Reasoning) to use them.

The problem: the closed-book AI

Let's give our powerful LLM a simple research task.

Use Case: "What's the weather in London right now, and how does it compare to our new competitor, 'Project Nova'?"

graph TD
    A["User: 'What's the weather in London...'"] --> B(LLM)
    B --> C["Bot: 'I'm sorry, I am an AI and do not have access to real-time information like the weather. I also have no information on a 'Project Nova' as my knowledge cut-off is 2023.'"]
    
    style C fill:#ffebee,stroke:#b71c1c,color:#212121

Why this is bad: The AI is useless for any real-world, current task.

The solution: the ReAct agent (Reason + Act)

The only way to solve this is to give our LLM Tools (like a web search) and a Reasoning Process to use them. This is the ReAct (Reason + Act) framework.

A ReAct agent follows a continuous loop:

  1. Reason: Based on the query, what do I need to know?
  2. Act: Which of my tools can get me that information? I'll call it.
  3. Observe: What was the result of the tool call?
  4. Repeat: Now that I have this new info, do I have the final answer, or do I need to Reason and Act again?
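The loop above can be sketched in a few lines of Python. Everything here is a stand-in for illustration: `call_llm` replays a scripted trace so the sketch runs offline, and `run_tool` fakes a weather lookup; in a real agent both would hit a live model and live APIs.

```python
# A minimal sketch of the Reason -> Act -> Observe -> Repeat loop.
# SCRIPT stands in for real model replies so the sketch runs offline.
SCRIPT = [
    {"thought": "I need the weather.", "action": ("get_weather", "London")},
    {"thought": "I can answer now.", "final": "It is 15°C and cloudy in London."},
]

def call_llm(history: list, step: int) -> dict:
    """Stand-in for a real LLM call; returns the next scripted reply."""
    return SCRIPT[step]

def run_tool(name: str, arg: str) -> str:
    """Stand-in for real tool execution (e.g. a weather API)."""
    return f"The weather in {arg} is 15°C and cloudy."

def react(query: str, max_steps: int = 5) -> str:
    history = [query]
    for step in range(max_steps):
        reply = call_llm(history, step)    # 1. Reason
        if "final" in reply:
            return reply["final"]          # loop ends with a Final Answer
        name, arg = reply["action"]        # 2. Act
        observation = run_tool(name, arg)  # 3. Observe
        history.append(observation)        # 4. Repeat with the new info
    return "Step budget exhausted."
```

Note the `max_steps` guard: a real agent needs a step budget so a confused model can't loop forever.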

The "How": We define our tools and provide a "master prompt" that instructs the LLM to follow this ReAct loop.

# --- 1. Define Our Tools ---
# In a real app, these are actual Python functions

def get_weather(location: str) -> str:
    """Gets the real-time weather for a location."""
    # (Code to call a weather API)
    return f"The weather in {location} is 15°C and cloudy."

def search_company_database(company_name: str) -> str:
    """Searches the internal database for competitor info."""
    if company_name == "Project Nova":
        return "Project Nova: AI-powered logistics, launched 2024, high-growth."
    return "Company not found."

# --- 2. The Master "ReAct" Prompt ---
system_prompt = """
You are an assistant that has access to the following tools.

Your job is to answer the user's question by following a
'Thought, Action, Observation' loop.

TOOLS:
- `get_weather(location: str)`: Gets the real-time weather for a location.
- `search_company_database(company_name: str)`: Searches the internal database for competitor info.

PROCESS:
Thought: [Your reasoning for what to do next]
Action: [The *exact* tool call to make, e.g., `get_weather(location="London")`]
Observation: [The result from the tool]
... (Repeat this loop until you have the final answer) ...
Thought: I have enough information to answer the user's question.
Final Answer: [Your answer]
"""

user_query = "What's the weather in London, and what is 'Project Nova'?"
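The missing piece between the prompt and the tools is the dispatcher: when the model emits an `Action:` line, your application code has to parse it, call the matching Python function, and feed the result back as the `Observation`. A minimal sketch, assuming the single-keyword-argument call format from the prompt above:

```python
import re

# The two tools from the post
def get_weather(location: str) -> str:
    """Gets the real-time weather for a location."""
    return f"The weather in {location} is 15°C and cloudy."

def search_company_database(company_name: str) -> str:
    """Searches the internal database for competitor info."""
    if company_name == "Project Nova":
        return "Project Nova: AI-powered logistics, launched 2024, high-growth."
    return "Company not found."

TOOLS = {
    "get_weather": get_weather,
    "search_company_database": search_company_database,
}

def run_action(action_line: str) -> str:
    """Execute a model-emitted line like:
        Action: get_weather(location="London")
    and return the Observation text to append to the conversation."""
    m = re.search(r'(\w+)\(\w+="([^"]+)"\)', action_line)
    if not m:
        return "Observation: could not parse that action."
    name, arg = m.groups()
    if name not in TOOLS:
        return f"Observation: unknown tool {name}."
    return "Observation: " + TOOLS[name](arg)
```

Production frameworks (LangChain, the OpenAI tools API, etc.) do this parsing and dispatch for you, but the mechanic is the same: model text in, function call out, result appended back.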

Putting the ReAct agent to work

Now, let's watch the agent "think" as it receives the user_query.

graph TD
    A["User: Weather in London and info on Project Nova"] --> B(LLM)
    B --> C["Thought: Need weather and company info"]
    C --> D[Action: get_weather London]
    D --> E["Observation: Weather 15°C cloudy"]
    E --> F(LLM)
    F --> G["Thought: Have weather, need company info"]
    G --> H[Action: search_company_database Project Nova]
    H --> I["Observation: Project Nova AI logistics 2024"]
    I --> J(LLM)
    J --> K["Final Answer: Weather 15°C cloudy, Project Nova AI logistics 2024"]
    
    style K fill:#e8f5e9,stroke:#388e3c,color:#212121

Observation: This is a fundamental leap. The LLM is no longer just responding. It is acting as an orchestrator. It intelligently decomposed the user's query into two sub-goals and used two different tools to solve them, synthesizing the results into a single, perfect answer.

This is the core of what an "AI Agent" is. For more on building production agents, see our agentic RAG guide and architecting AI agents.

Challenge for you

  1. Add a new tool: Add a new tool to the list called get_stock_price(company_name).

  2. Test it: Give the agent a new query: "Write a report on 'Cognito Inc.' and find its stock price."

  3. Observe: Does your agent first call search_company_database (which will fail) and then call get_stock_price? Or does it fail to find the stock price? How would you improve the tool descriptions to help the agent succeed?
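To get started on step 1, a stub might look like the one below. The docstring doubles as the tool description the agent sees, so it spells out when to use this tool versus `search_company_database`; the price data is a placeholder, not a real API.

```python
def get_stock_price(company_name: str) -> str:
    """Gets the latest stock price for a publicly traded company.
    Use this ONLY for price lookups; use search_company_database
    for general company background."""
    prices = {"Cognito Inc.": "$42.10"}  # placeholder data for the sketch
    price = prices.get(company_name)
    if price is None:
        return f"No listed stock price found for {company_name}."
    return f"{company_name} is trading at {price}."
```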

Key takeaways

  • Tools give AI access to the world: Without tools, LLMs are limited to their training data cutoff
  • ReAct framework enables reasoning: The Thought-Action-Observation loop allows agents to plan and execute multi-step tasks
  • Tool descriptions are critical: Clear, descriptive tool documentation helps agents choose the right tool for each task
  • Agents orchestrate, not just respond: True agents decompose complex queries, use tools strategically, and synthesize results

For more on building production AI systems, check out our AI Bootcamp for Software Engineers.
