Function Calling in LLMs Tutorial

Large Language Models can now do much more than just generate text. Today’s LLMs can use external tools, call APIs, work with databases, send emails, search online, and automate tasks using function calling.

This is a major change in AI engineering. Developers are now using LLMs as decision-makers that know when to use outside systems and how to send them the right information.

In this tutorial, you’ll see how function calling works in LLMs and how to build a full workflow using Python and free tools.

What Is Function Calling in LLMs?

Function calling lets an LLM choose when to use a certain tool or function during a conversation.

Instead of producing plain text alone, the model can return structured, JSON-like data that maps directly to a Python function or API call.

Here’s a simple overview of how the workflow goes:

  1. You define available tools/functions.
  2. The LLM reads the user query.
  3. The LLM decides whether a tool is needed.
  4. The model generates structured parameters.
  5. Your code executes the function.
  6. The function result is returned to the model.
  7. The model generates the final response.

This is how modern AI assistants interact with real systems instead of only generating text.
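To make steps 4–6 concrete: when the model picks a tool, it emits a structured payload rather than prose. The exact shape varies between providers; this illustrative sketch mirrors the format you'll see from Ollama later in this tutorial:

tool_call = {
    "function": {
        "name": "get_weather",            # which tool the model chose
        "arguments": {"city": "Delhi"}    # parameters extracted from the user query
    }
}

Your code reads this payload, runs the matching function, and feeds the result back to the model.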

If you want to learn how to build real-world AI systems with LLMs and agents, I’ve broken it down in my book: Hands-On GenAI, LLMs & AI Agents.

Building a Function Calling Workflow with Python

In this tutorial, we’ll create:

  1. A local LLM-powered assistant
  2. Tool definitions
  3. Structured function inputs
  4. API integration
  5. Tool execution pipeline

Let’s get started.

Step 1: Install Ollama

Ollama lets you run open-source LLMs on your own computer. You can download it from the official website at https://ollama.com.

After installation, pull Llama 3.1:

ollama pull llama3.1
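Once the download finishes, you can confirm the model is available:

ollama list

You should see llama3.1 in the output.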

Step 2: Install Python Dependencies

Create a virtual environment:

python -m venv venv

Activate it:

source venv/bin/activate

On Windows:

venv\Scripts\activate

Install dependencies:

pip install ollama requests

Step 3: Understanding Tool Definitions

To start function calling, you first define the tools the model can use. Here’s a simple example:

tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {
                    "city": {
                        "type": "string",
                        "description": "City name"
                    }
                },
                "required": ["city"]
            }
        }
    }
]

This schema tells the LLM:

  1. the function name
  2. what it does
  3. expected parameters
  4. required inputs
  5. parameter types

This setup is based on JSON Schema.
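Because the parameters block is standard JSON Schema, you can validate the model's arguments before running anything. Here's a minimal sketch, assuming the optional jsonschema package is installed (pip install jsonschema); the helper name is just for illustration:

from jsonschema import validate, ValidationError

# The schema we declared for get_weather above
params_schema = tools[0]["function"]["parameters"]

def arguments_are_valid(arguments):
    # Reject any arguments that don't match the declared schema
    try:
        validate(instance=arguments, schema=params_schema)
        return True
    except ValidationError as error:
        print(f"Invalid arguments from model: {error.message}")
        return False

This step is optional, but it protects your functions from malformed model output.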

Step 4: Creating a Function

Now let’s make a real Python function for weather using the free Open-Meteo API:

import requests

def get_weather(city):
    # A small lookup table of coordinates, since Open-Meteo expects latitude/longitude
    coordinates = {
        "delhi": (28.61, 77.20),
        "london": (51.50, -0.12),
        "new york": (40.71, -74.00)
    }

    city_lower = city.lower()

    if city_lower not in coordinates:
        return "City not found"

    lat, lon = coordinates[city_lower]

    url = (
        "https://api.open-meteo.com/v1/forecast"
        f"?latitude={lat}&longitude={lon}"
        "&current_weather=true"
    )

    response = requests.get(url, timeout=10)
    data = response.json()

    temperature = data["current_weather"]["temperature"]

    return f"The current temperature in {city} is {temperature}°C"

This function takes structured input, calls an external API, and returns a clear, human-readable result.

This is the main setup behind most AI tool systems.
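You can test the function on its own before connecting it to the model:

print(get_weather("London"))
# Example output (the value depends on the live API):
# The current temperature in London is 12.5°C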

Step 5: Connecting the LLM to the Tool

Now let’s connect the model:

import ollama
import json

response = ollama.chat(
    model="llama3.1",
    messages=[
        {
            "role": "user",
            "content": "What's the weather in Delhi?"
        }
    ],
    tools=tools
)

print(response)

Output:

model='llama3.1' created_at='2026-05-10T05:18:05.809195Z' done=True done_reason='stop' total_duration=2919044583 load_duration=91632500 prompt_eval_count=156 prompt_eval_duration=1407084250 eval_count=18 eval_duration=1364990000 message=Message(role='assistant', content='', thinking=None, images=None, tool_name=None, tool_calls=[ToolCall(function=Function(name='get_weather', arguments={'city': 'Delhi'}))]) logprobs=None

At this point, the model decides whether a tool is needed. If it picks one, the response includes the function name and its arguments in the tool_calls field, as shown above.
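Your code should handle both cases. With the ollama Python client (recent versions return a response object that also supports attribute access), a minimal check looks like this:

message = response.message

if not message.tool_calls:
    # The model answered directly; nothing to execute
    print(message.content)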

Step 6: Execute the Function

Now we execute the tool manually:

tool_call = response["message"]["tool_calls"][0]

function_name = tool_call["function"]["name"]
arguments = tool_call["function"]["arguments"]

if function_name == "get_weather":
    result = get_weather(arguments["city"])
    print(result)

Output:

The current temperature in Delhi is 34.0°C

At this stage, the model has picked the tool, your backend has run it, and you’ve got real data back.
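To finish steps 6 and 7 of the workflow, pass the tool result back to the model so it can write the final answer. A sketch of one common pattern with the ollama client (role conventions can vary slightly between client versions):

messages = [
    {"role": "user", "content": "What's the weather in Delhi?"},
    response["message"],                    # the assistant turn containing the tool call
    {"role": "tool", "content": result},    # the tool output we just computed
]

final = ollama.chat(model="llama3.1", messages=messages)

print(final["message"]["content"])
# e.g. "It's currently 34.0°C in Delhi." (wording depends on the model)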

This is how production-level AI systems work.

Closing Thoughts

Function calling is one of the most useful skills in AI engineering today because it takes you beyond prompt engineering and into real system design.

This is what sets real AI products apart from simple chatbot projects.

I hope you found this tutorial on function calling in LLMs helpful.

For more AI and machine learning tips, follow me on Instagram. My book, Hands-On GenAI, LLMs & AI Agents, can also help you grow your AI career.
