
LangChain Hub & hub.pull() Guide: Master ReAct Prompts for AI Agents

The line prompt = hub.pull("hwchase17/react") is your gateway to building AI agents that can reason through problems and take real-world actions. This guide explains LangChain Hub, the ReAct framework, and how to build production-ready agents that think before they act.

TL;DR: What You Need to Know

LangChain Hub is a centralized repository for sharing and discovering prompts at smith.langchain.com/hub. Use hub.pull() to download prompts and hub.push() to share your own.

hwchase17/react is the official ReAct prompt template that enables LLMs to generate reasoning traces (thoughts) and actions (tool use) in an interleaved loop—think → act → observe → repeat.

ReAct agents outperform pure chain-of-thought prompting because they can access external tools (search, APIs, databases) and update their knowledge in real-time, reducing hallucinations.

Quick Start: Your First ReAct Agent
from langchain import hub
from langchain_openai import ChatOpenAI
from langchain.agents import create_react_agent, AgentExecutor
from langchain.tools import Tool

# 1. Pull the ReAct prompt from LangChain Hub
prompt = hub.pull("hwchase17/react")

# 2. Initialize your LLM
llm = ChatOpenAI(model="gpt-4", temperature=0)

# 3. Define tools (example: a simple search tool)
tools = [
    Tool(
        name="Search",
        func=lambda q: "Search results for: " + q,
        description="Useful for searching the web"
    )
]

# 4. Create the ReAct agent
agent = create_react_agent(llm, tools, prompt)

# 5. Run with AgentExecutor
executor = AgentExecutor(agent=agent, tools=tools, verbose=True)
result = executor.invoke({"input": "What is the weather in Tokyo?"})
print(result["output"])

What is LangChain Hub?

LangChain Hub is the official repository for sharing and discovering prompts in the LangChain ecosystem. Think of it as npm for prompts—a centralized place where developers can:

Discover & Pull

  • Browse community-created prompts
  • Filter by use case, model, language
  • Download with hub.pull()
  • Access specific versions via commit hash

Create & Share

  • Upload your own prompts
  • Version control with commit history
  • Test in the interactive Playground
  • Share with hub.push()

Access Levels

With a LangSmith account: Full read/write access. Browse, pull, push, and manage prompts. Navigate to Hub from your LangSmith dashboard.

Without LangSmith: Read-only access. You can view, download, and run prompts directly at smith.langchain.com/hub.
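
To publish a prompt of your own, pass a "handle/prompt-name" string and a prompt object to hub.push(). Below is a minimal sketch, assuming a hypothetical handle and prompt name (my-handle/weather-qa) and a LangSmith API key available in your environment:

from langchain import hub
from langchain_core.prompts import PromptTemplate

# Hypothetical prompt to publish under your own handle
my_prompt = PromptTemplate.from_template(
    "You are a weather assistant. Answer concisely: {question}"
)

# Requires a LangSmith API key (e.g. in the LANGCHAIN_API_KEY environment variable)
url = hub.push("my-handle/weather-qa", my_prompt)
print(url)  # Link to the pushed prompt on LangChain Hub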

The hub.pull() Function

The hub.pull() function downloads a prompt from LangChain Hub into your Python environment:

from langchain import hub

# Basic usage - pulls latest version
prompt = hub.pull("hwchase17/react")

# Pull specific version using commit hash
prompt = hub.pull("hwchase17/react:50442af1")

# Pull your own private prompt (no handle needed)
my_prompt = hub.pull("my-custom-prompt")

# Pull with specific API URL (for self-hosted)
prompt = hub.pull("owner/prompt", api_url="https://your-langsmith.com")

The function accepts an owner_repo_commit string in formats like:

  • owner/prompt_name — Latest version
  • owner/prompt_name:commit_hash — Specific version
  • prompt_name — Your own prompt (if logged in)

Understanding hwchase17/react

hwchase17/react is the official ReAct prompt template created by Harrison Chase (hwchase17), the founder of LangChain. This prompt implements the ReAct (Reasoning + Acting) framework from the influential October 2022 paper by Yao et al.

The ReAct Prompt Structure

The hwchase17/react prompt instructs the LLM to follow a specific format:

Thought: I need to search for information about X

Action: Search

Action Input: query about X

Observation: [Result from tool]

... (repeat Thought/Action/Observation)

Final Answer: Based on my research...
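
You can check this structure yourself by inspecting the pulled prompt object. A quick sketch (the printed variable names reflect the hub version current at the time of writing and may change):

from langchain import hub

prompt = hub.pull("hwchase17/react")

print(type(prompt).__name__)   # PromptTemplate
print(prompt.input_variables)  # e.g. ['agent_scratchpad', 'input', 'tool_names', 'tools']
print(prompt.template)         # Full instructions, including the Thought/Action/Observation format

Note that create_react_agent() fills in tools, tool_names, and agent_scratchpad for you, so at invocation time you only supply input.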

Related ReAct Prompts

Harrison Chase maintains several ReAct variants for different use cases:

hwchase17/react

Basic ReAct agent prompt for single-turn queries (no chat history).

hub.pull("hwchase17/react")

hwchase17/react-chat

ReAct with chat history support. Multi-turn conversations.

hub.pull("hwchase17/react-chat")

hwchase17/react-json

ReAct with structured JSON output format.

hub.pull("hwchase17/react-json")

hwchase17/react-chat-json

Chat + JSON: Multi-turn with structured output.

hub.pull("hwchase17/react-chat-json")

The ReAct Framework Explained

ReAct (Reasoning and Acting) is a prompting paradigm that enables LLMs to solve complex tasks by interleaving reasoning (thinking) with actions (tool use). It was introduced in the paper "ReAct: Synergizing Reasoning and Acting in Language Models" (Yao et al., 2022).

The Thought → Action → Observation Loop

1. Thought (Reasoning)

The LLM reasons about the current situation, decomposes the problem into subtasks, and plans what action to take next. This creates an explicit reasoning trace.

2. Action (Tool Use)

Based on the reasoning, the LLM selects a tool and provides input. Actions can be API calls, web searches, database queries, calculations, or any defined function.

3. Observation (Result)

The tool returns a result. The LLM observes this output and uses it to inform the next thought, creating a feedback loop that grounds reasoning in real data.

4. Final Answer

When the LLM has enough information, it produces a final answer synthesized from all observations and reasoning steps.
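
Under the hood, the agent executor drives this loop by repeatedly calling the LLM, parsing the chosen action, running the tool, and appending the observation to the prompt before the next call. The sketch below is a simplified illustration of that control flow, not LangChain's actual implementation; llm_call and the string parsing are stand-ins:

def run_react_loop(llm_call, tools: dict, question: str, max_steps: int = 10) -> str:
    """Simplified ReAct loop. llm_call(question, scratchpad) returns the next Thought/Action text."""
    scratchpad = ""  # accumulated Thought / Action / Observation transcript
    for _ in range(max_steps):
        output = llm_call(question, scratchpad)
        if "Final Answer:" in output:
            return output.split("Final Answer:", 1)[1].strip()
        # Parse the tool name and its input from the Action / Action Input lines
        action = output.split("Action:", 1)[1].split("\n", 1)[0].strip()
        action_input = output.split("Action Input:", 1)[1].split("\n", 1)[0].strip()
        observation = tools[action](action_input)  # run the chosen tool
        scratchpad += f"{output}\nObservation: {observation}\n"
    return "Stopped: reached max_steps without a final answer"

Real executors such as AgentExecutor add parsing retries, iteration limits, and tracing on top of this basic loop.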

ReAct vs Chain-of-Thought (CoT)

Chain-of-Thought (CoT)

LLM generates step-by-step reasoning but cannot access external information.

  • Prone to fact hallucination
  • Cannot update knowledge in real-time
  • Error propagation through reasoning chain

ReAct

LLM reasons AND takes actions, grounding thoughts in real data.

  • Reduced hallucinations via external tools
  • Real-time information retrieval
  • Self-correction through observations

RESEARCH INSIGHT

The original ReAct paper showed that combining ReAct with chain-of-thought prompting and self-consistency checks produces the best results. However, 2024 research suggests that native function calling (supported by OpenAI, Anthropic, Mistral, Google) often outperforms ReAct prompting for simple tool use cases.

Building Your First ReAct Agent (Step-by-Step)

Let's build a complete ReAct agent that can search the web and perform calculations. This example uses hub.pull("hwchase17/react") with OpenAI's GPT-4.

Step 1: Install Dependencies

pip install langchain langchain-openai langchainhub

Step 2: Set Up Environment

import os
from langchain import hub
from langchain_openai import ChatOpenAI
from langchain.agents import create_react_agent, AgentExecutor
from langchain.tools import Tool

# Set your OpenAI API key
os.environ["OPENAI_API_KEY"] = "sk-your-api-key"

# Optional: Set LangSmith for tracing (recommended)
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your-langsmith-key"

Step 3: Pull the ReAct Prompt

# Pull the official ReAct prompt from LangChain Hub
prompt = hub.pull("hwchase17/react")

# Inspect the prompt template
print(prompt.template)
# Output: "Answer the following questions as best you can.
# You have access to the following tools: {tools}..."

Step 4: Define Your Tools

# Define tools the agent can use
def search(query: str) -> str:
    """Simulate a web search (replace with real API)"""
    return f"Search results for '{query}': [Example results...]"

def calculator(expression: str) -> str:
    """Evaluate a math expression (demo only: eval is not safe for untrusted input)"""
    try:
        # Strip builtins so only the arithmetic expression itself is evaluated
        return str(eval(expression, {"__builtins__": {}}, {}))
    except Exception:
        return "Error: Invalid expression"

tools = [
    Tool(
        name="Search",
        func=search,
        description="Useful for searching the web for current information"
    ),
    Tool(
        name="Calculator",
        func=calculator,
        description="Useful for math calculations. Input: math expression"
    )
]

Step 5: Create & Run the Agent

# Initialize LLM
llm = ChatOpenAI(model="gpt-4", temperature=0)

# Create the ReAct agent using the pulled prompt
agent = create_react_agent(llm, tools, prompt)

# Wrap in AgentExecutor for execution
executor = AgentExecutor(
    agent=agent,
    tools=tools,
    verbose=True,  # See the Thought/Action/Observation loop
    handle_parsing_errors=True,
    max_iterations=10
)

# Run the agent!
result = executor.invoke({
    "input": "What is 25% of the current Bitcoin price?"
})

print(result["output"])

Expected Output (verbose=True):

> Entering new AgentExecutor chain...
Thought: I need to find the current Bitcoin price first
Action: Search
Action Input: current bitcoin price USD
Observation: Search results for 'current bitcoin price USD': Bitcoin is currently trading at $43,250...

Thought: Now I need to calculate 25% of $43,250
Action: Calculator
Action Input: 43250 * 0.25
Observation: 10812.5

Thought: I now have the final answer
Final Answer: 25% of the current Bitcoin price ($43,250) is $10,812.50

> Finished chain.

Advanced: Web Scraping Tools for ReAct Agents

Real-world ReAct agents need access to live data. Web scraping tools are essential for AI agents that need to gather information from websites, monitor prices, or collect research data. Here's how to build a production-ready scraping tool for your agent.

Production Challenge: IP Blocking

When your AI agent makes repeated web requests, target websites can detect the pattern and block your IP. This is especially problematic for:

  • High-frequency data collection
  • Multi-step agent workflows (multiple requests per task)
  • Long-running autonomous agents
  • Testing LLM applications at scale

The solution is to use mobile proxies that rotate IPs automatically. Mobile IPs from real 4G/5G networks have higher trust scores than datacenter IPs and are much harder to block.

Web Scraping Tool with Proxy Support
import requests
from langchain.tools import Tool

def scrape_website(url: str) -> str:
    """
    Scrape a website using rotating mobile proxies.
    Replace with your proxy credentials.
    """
    proxies = {
        "http": "http://user:pass@proxy.proxies.sx:port",
        "https": "http://user:pass@proxy.proxies.sx:port"
    }

    headers = {
        "User-Agent": "Mozilla/5.0 (iPhone; CPU iPhone OS 17_0 like Mac OS X)..."
    }

    try:
        response = requests.get(url, proxies=proxies, headers=headers, timeout=30)
        response.raise_for_status()
        return response.text[:5000]  # Truncate for LLM context
    except Exception as e:
        return f"Error scraping {url}: {str(e)}"

# Create the tool
scrape_tool = Tool(
    name="WebScraper",
    func=scrape_website,
    description="Scrapes a website URL and returns the HTML content. Input: full URL"
)

Why Mobile Proxies for AI Agents?

AI agents that use web tools need reliable, unblocked access to data sources. Learn more about using mobile proxies for AI testing and agent development:

US Mobile IPs for AI Testing

Best Practices for Production ReAct Agents

1. Use Specific Prompt Versions

Always pin to a specific commit hash in production to avoid unexpected behavior from prompt updates:

# Production: pin to specific version
prompt = hub.pull("hwchase17/react:50442af1")

# Development: use latest
prompt = hub.pull("hwchase17/react")

2. Handle Parsing Errors Gracefully

LLMs sometimes produce malformed output. Configure AgentExecutor to handle errors:

executor = AgentExecutor(
    agent=agent,
    tools=tools,
    handle_parsing_errors=True,  # Retry on malformed output
    max_iterations=10,           # Prevent infinite loops
    early_stopping_method="generate"  # Generate a final answer if the iteration limit is reached
)

3. Enable LangSmith Tracing

Trace your agent's execution to debug issues and optimize performance:

import os
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "your-langsmith-key"
os.environ["LANGCHAIN_PROJECT"] = "my-react-agent"  # Optional

4. Write Clear Tool Descriptions

The LLM decides which tool to use based on descriptions. Be specific:

# ❌ Bad: Too vague
Tool(name="Search", func=search, description="Searches things")

# ✅ Good: Specific and actionable
Tool(
    name="GoogleSearch",
    func=search,  # same function as before; only the description changes
    description="Search Google for current information. Use for: "
                "news, prices, facts, recent events. "
                "Input: search query string. "
                "Returns: top 5 search results with snippets."
)

5. Consider Native Function Calling

For simple tool use, native function calling may outperform ReAct prompting:

from langchain import hub
from langchain.agents import create_openai_functions_agent

# Alternative: OpenAI Functions agent (simpler tool use)
# Note: it needs a chat-style prompt, not the ReAct prompt pulled earlier
functions_prompt = hub.pull("hwchase17/openai-functions-agent")
functions_agent = create_openai_functions_agent(llm, tools, functions_prompt)

# Use ReAct when you need:
# - Explicit reasoning traces for debugging
# - Complex multi-step planning
# - Non-OpenAI models (Llama, Claude, etc.)

Frequently Asked Questions

What is hub.pull("hwchase17/react") in LangChain?

hub.pull("hwchase17/react") downloads the official ReAct prompt template from LangChain Hub. This prompt enables LLMs to perform reasoning and acting in an interleaved manner—generating thought traces and taking actions using tools to solve complex tasks.

What is LangChain Hub?

LangChain Hub is a centralized platform for uploading, browsing, pulling, and managing prompts for LangChain applications. It allows developers to share and discover high-quality prompts, with features like versioning, playground testing, and community collaboration.

What does ReAct stand for in AI?

ReAct stands for "Reasoning and Acting." It's a prompting framework introduced in the October 2022 paper by Yao et al. that enables LLMs to generate reasoning traces (thoughts) and task-specific actions in an interleaved manner, reducing hallucinations by grounding reasoning in external tool observations.

How do I use hub.pull() in LangChain?

Import hub from langchain, then call hub.pull("owner/prompt_name"):

from langchain import hub
prompt = hub.pull("hwchase17/react")  # Latest
prompt = hub.pull("hwchase17/react:50442af1")  # Specific version

What is the difference between ReAct and Chain-of-Thought prompting?

Chain-of-Thought (CoT) prompting enables LLMs to generate reasoning traces but lacks access to external tools or knowledge, leading to hallucinations. ReAct extends CoT by adding the ability to take actions (use tools, make API calls) and observe results, reducing hallucinations and enabling real-time information retrieval.

Should I use ReAct or OpenAI function calling?

For simple tool use with OpenAI models, native function calling often performs better. Use ReAct when you need: (1) explicit reasoning traces for debugging, (2) complex multi-step planning, (3) non-OpenAI models like Llama or Claude, or (4) educational/research purposes where you want to see the agent's thought process.

Conclusion

hub.pull("hwchase17/react") is your entry point to building AI agents that can reason through problems and take real-world actions. LangChain Hub makes it easy to discover, share, and version prompts—and the ReAct framework provides a proven pattern for creating agents that think before they act.

Key Takeaways

  1. LangChain Hub is the npm/pip of prompts—browse, pull, push, and version your prompt templates at smith.langchain.com/hub.
  2. hwchase17/react implements the ReAct framework—interleaved reasoning and acting for complex, multi-step tasks.
  3. ReAct reduces hallucinations by grounding LLM reasoning in external tool observations and real-time data.
  4. Production agents need reliable data access—consider mobile proxies for web scraping tools to avoid IP blocking.
  5. Pin prompt versions in production and enable LangSmith tracing for debugging and optimization.

Whether you're building research agents, customer support bots, or autonomous data collection systems, understanding hub.pull() and the ReAct framework is essential for modern AI development. Start with the code examples above, explore LangChain Hub for more prompts, and build agents that can truly reason and act.

Building AI Agents That Scrape the Web?

ReAct agents with web tools need reliable, unblocked data access. PROXIES.SX provides mobile proxies from real 4G/5G networks—perfect for AI agent development, LLM testing, and autonomous data collection at scale.