Build a Multi-Tool AI Agent

Many junior data scientists and AI engineers think you can simply give an API key to an LLM and it will solve complex business problems. In practice, the real value in enterprise AI comes from orchestration: building Multi-Tool AI Agents that work like cognitive engines, combining your internal data with external APIs to produce useful insights. In this article, I’ll show you how to build a Multi-Tool AI Agent.

Building a Multi-Tool AI Agent

A Multi-Tool AI Agent is just an LLM that can use external tools. You can think of the LLM as the brain, while the tools act as its hands and eyes.

To make sure everything stays free and private, we’ll use Ollama to run the Llama 3 model on your own computer, DuckDuckGo for web searches, and LangChain to connect everything.

Step 1: The Setup

Before you start coding, make sure you have Ollama installed on your computer and that you’ve downloaded the Llama 3 model:

ollama pull llama3

Next, install the required Python libraries in your terminal:

pip install -U \
langchain \
langchain-core \
langchain-community \
langchain-ollama \
pandas \
ddgs

We’re going to build a Business Analyst AI Agent that can:

  1. Read a real sales CSV file
  2. Calculate revenue, AOV, and category & region breakdowns
  3. Fetch industry context from the web
  4. Produce data-grounded business insights

Make sure you have a sample sales.csv file ready. It should have columns like quantity, unit_price, discount_percent, product_category, region, and payment_method. Here’s an example dataset you can use.
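If you don’t have a dataset handy, you can generate a small placeholder file with the expected columns. The rows below are illustrative values I made up, not real sales figures:

```python
import pandas as pd

# Hypothetical sample orders matching the columns the agent expects.
rows = [
    {"quantity": 2, "unit_price": 49999.0, "discount_percent": 10,
     "product_category": "Electronics", "region": "North", "payment_method": "Credit Card"},
    {"quantity": 1, "unit_price": 1500.0, "discount_percent": 0,
     "product_category": "Fashion", "region": "South", "payment_method": "UPI"},
    {"quantity": 3, "unit_price": 12000.0, "discount_percent": 5,
     "product_category": "Home Appliances", "region": "West", "payment_method": "Debit Card"},
]

df = pd.DataFrame(rows)
df.to_csv("sales.csv", index=False)
print(df.columns.tolist())
```

Any CSV with these six columns will work with the analysis function we build next.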

Step 2: Initializing the Local LLM

Let’s begin by importing the libraries we need and setting up the LLM:

import pandas as pd
import json

from langchain_ollama import OllamaLLM
from langchain_community.tools import DuckDuckGoSearchRun
from langchain_core.prompts import PromptTemplate

# 1. Initialize Local LLM
llm = OllamaLLM(
    model="llama3",
    temperature=0
)

We set temperature=0 here. For business analytics, you want the results to be consistent and factual. High temperature is better for creative writing, while low temperature works best for data analysis.

Step 3: The Internal Tool (Data Analysis)

One important lesson from real projects is that you shouldn’t rely on an LLM to do accurate math on large datasets. Instead, we’ll use standard data tools like Pandas to get reliable results:

def analyze_sales_csv(file_path: str) -> dict:
    df = pd.read_csv(file_path)

    # Calculate revenue per order
    df["order_revenue"] = (
        df["quantity"]
        * df["unit_price"]
        * (1 - df["discount_percent"] / 100)
    )

    # Return structured metrics
    return {
        "total_revenue": round(df["order_revenue"].sum(), 2),
        "average_order_value": round(df["order_revenue"].mean(), 2),
        "orders": len(df),
        "revenue_by_category": (
            df.groupby("product_category")["order_revenue"]
            .sum()
            .round(2)
            .to_dict()
        ),
        "revenue_by_region": (
            df.groupby("region")["order_revenue"]
            .sum()
            .round(2)
            .to_dict()
        ),
        "top_payment_method": (
            df.groupby("payment_method")["order_revenue"]
            .sum()
            .idxmax()
        ),
    }

This function acts as our internal tool. It gives us clean, calculated metrics in a dictionary, which we’ll later turn into JSON.
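You can sanity-check the revenue formula by hand on a tiny frame before trusting it on real data. The two orders below are made-up values chosen so the arithmetic is easy to verify:

```python
import pandas as pd

# Two illustrative orders to verify the discount math by hand.
df = pd.DataFrame({
    "quantity": [2, 1],
    "unit_price": [100.0, 50.0],
    "discount_percent": [10, 0],
})

# Same formula as in analyze_sales_csv
df["order_revenue"] = (
    df["quantity"] * df["unit_price"] * (1 - df["discount_percent"] / 100)
)

# 2 * 100 * 0.9 = 180.0, and 1 * 50 * 1.0 = 50.0
print(df["order_revenue"].tolist())
print(round(df["order_revenue"].sum(), 2))
```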

Step 4: The External Tool (Web Search)

Now we’ll let our agent look beyond our database. We’ll use DuckDuckGo to find industry context, so the AI can compare our CSV metrics with real-world benchmarks:

search = DuckDuckGoSearchRun()

def fetch_benchmarks():
    return search.run(
        "average ecommerce order value benchmark"
    )
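Web searches can fail due to network issues or rate limits, and you don’t want that to crash the whole analysis. One option is to wrap the call with a fallback. The sketch below uses a hypothetical `safe_fetch` helper; the search callable is injected as a parameter so the logic can be exercised without network access:

```python
def safe_fetch(search_fn, query="average ecommerce order value benchmark"):
    """Run a search callable, falling back to a stub string on failure.

    search_fn is any callable taking a query string (e.g. search.run);
    it is injected here so the error handling can be tested offline.
    """
    try:
        result = search_fn(query)
        return result if result else "No benchmark data available."
    except Exception:
        # Network or rate-limit errors shouldn't abort the analysis.
        return "No benchmark data available."

def failing_search(query):
    # Simulates an offline or rate-limited search backend.
    raise RuntimeError("offline")

print(safe_fetch(lambda q: f"Results for: {q}"))
print(safe_fetch(failing_search))  # No benchmark data available.
```

In production you would pass `search.run` as `search_fn`, and the agent would still produce insights from the CSV metrics even when the benchmark lookup fails.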

Step 5: The Orchestration Stage

At this stage, we gather all the context before invoking the LLM:

def run_analysis():
    # Update this path to where your actual sales.csv lives
    metrics = analyze_sales_csv("/Users/amankharwal/aiagent/multi tool agent/sales.csv")
    benchmarks = fetch_benchmarks()

    print("\nDEBUG: METRICS USED\n")
    print(json.dumps(metrics, indent=2))

    return metrics, benchmarks

Step 6: The Prompt Engineering and Execution

This is the most important step. We’ll use a PromptTemplate to set strict rules for the LLM, which reduces hallucinations, and to inject both the JSON metrics and the web search context:

writer_prompt = PromptTemplate.from_template("""
You are a business analyst.

RULES:
- Use ONLY the numbers provided
- Do NOT invent values
- Do NOT assume currency
- Do NOT recalculate

METRICS (JSON):
{metrics}

INDUSTRY CONTEXT:
{benchmarks}

Write 3 insights and 3 recommendations.
""")
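It helps to see the final text the LLM actually receives. Under the hood, `PromptTemplate.format` behaves like plain `str.format`, so the sketch below uses a shortened stand-in template (and made-up metric values) to show the substitution without requiring LangChain:

```python
import json

# Shortened stand-in for writer_prompt; behaves like PromptTemplate.format.
template = """You are a business analyst.

RULES:
- Use ONLY the numbers provided

METRICS (JSON):
{metrics}

INDUSTRY CONTEXT:
{benchmarks}
"""

# Illustrative metrics, not real output from analyze_sales_csv.
metrics = {"total_revenue": 1000.0, "orders": 3}

prompt = template.format(
    metrics=json.dumps(metrics, indent=2),
    benchmarks="AOV benchmarks vary widely by vertical.",
)
print(prompt)
```

Because the metrics arrive as pre-computed JSON, the model only has to interpret numbers, never calculate them.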

# 6. MAIN EXECUTION
if __name__ == "__main__":
    # Gather data from our tools
    metrics, benchmarks = run_analysis()

    # Pass the data to the LLM
    response = llm.invoke(
        writer_prompt.format(
            metrics=json.dumps(metrics, indent=2),
            benchmarks=benchmarks
        )
    )

    print("\nFINAL OUTPUT:\n")
    print(response)
Running the script first prints the debug metrics, then the model’s answer:

{
  "total_revenue": 322400.0,
  "average_order_value": 32240.0,
  "orders": 10,
  "revenue_by_category": {
    "Electronics": 205700.0,
    "Fashion": 26700.0,
    "Home Appliances": 90000.0
  },
  "revenue_by_region": {
    "East": 52200.0,
    "North": 96700.0,
    "South": 110500.0,
    "West": 63000.0
  },
  "top_payment_method": "Credit Card"
}

FINAL OUTPUT:

Based on the provided metrics, here are three insights and three recommendations:

Insights:

1. The average order value (AOV) is $32,240, which is significantly higher than the industry benchmark of $150 orders mentioned in the context. This suggests that the business has a strong product offering that appeals to customers who are willing to spend more.
2. The revenue by category shows that Electronics accounts for 63% of total revenue ($205,700 out of $322,400), followed by Home Appliances (28%) and Fashion (8%). This highlights the importance of the Electronics category in driving revenue.
3. The top payment method is Credit Card, which suggests that customers are comfortable using credit cards to make purchases online.

Recommendations:

1. Leverage the strong AOV to increase revenue by upselling or cross-selling products, especially in the Electronics category where there appears to be a high demand.
2. Focus on optimizing the product offerings and marketing strategies for the Electronics category to further drive revenue growth. This could include targeted promotions, loyalty programs, or strategic partnerships with suppliers.
3. Consider offering alternative payment methods, such as PayPal or Apple Pay, to cater to customers who may prefer these options over Credit Card. Additionally, explore ways to reduce cart abandonment rates and improve checkout processes to increase conversions.

Now you have your multi-tool AI agent. Its insights and recommendations will update whenever your data changes.

Closing Thoughts

Building this script shows an important shift in thinking when moving from learning about AI to actually using it. The main focus isn’t just the model, it’s how you design the whole system.

Did you notice how little of the script is actually LLM code? Most of the work involved parsing data, writing reliable functions, and crafting a clear, specific prompt. Real-world AI engineering depends heavily on traditional software skills. If you learn how to feed reliable data to AI models, you’ll stand out from engineers who only know how to make API calls.

If you found this article helpful, you can follow me on Instagram for daily AI tips and practical resources. You may also be interested in my latest book, Hands-On GenAI, LLMs & AI Agents, a step-by-step guide to prepare you for careers in today’s AI industry.

Aman Kharwal

AI/ML Engineer | Published Author. My aim is to decode data science for the real world in the most simple words.
