How to Build AI Agents Using CrewAI

If you want to go beyond basic LLM apps and build AI agents that work together, reason, and complete tasks, learning CrewAI is one of the quickest ways to start.

Many beginners know how to use an LLM or even build a chatbot, but find it hard to design systems where several AI components work together as a team. CrewAI helps solve this problem.

In this article, I’ll show you how CrewAI works and guide you through building your first multi-agent system in Python using a local LLM.

What CrewAI Actually Does

CrewAI is a framework designed to orchestrate multiple AI agents, where each agent has:

  1. A role (what it is responsible for)
  2. A goal (what it tries to achieve)
  3. A backstory (context that shapes its behavior)
  4. Access to an LLM

Rather than using one large prompt and hoping it works, you split the problem into smaller parts and give each part to a specialized agent.

If you want to master this shift from simple LLM apps to real-world AI agent systems, I’ve broken it down step-by-step in my book: Hands-On GenAI, LLMs & AI Agents.

For example, one agent might handle research, another might focus on writing, and a third might take care of validation or task execution.
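The division of labor described above can be sketched in plain Python before touching the framework. Note that `AgentSpec` here is an illustrative stand-in of my own, not CrewAI's actual `Agent` class; it just shows how a role, goal, and backstory combine into one instruction for the LLM:

```python
from dataclasses import dataclass

@dataclass
class AgentSpec:
    """Minimal sketch of what a CrewAI-style agent persona contains."""
    role: str
    goal: str
    backstory: str

    def system_prompt(self) -> str:
        # Combine the persona fields into one instruction block for the LLM.
        return f"You are a {self.role}. {self.backstory} Your goal: {self.goal}"

researcher = AgentSpec(
    role="researcher",
    goal="find accurate facts",
    backstory="You are a meticulous analyst.",
)
writer = AgentSpec(
    role="writer",
    goal="explain findings simply",
    backstory="You are a clear technical writer.",
)

print(researcher.system_prompt())
print(writer.system_prompt())
```

Each specialized prompt stays short and focused, which is exactly what the framework automates for you.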

Build AI Agents Using CrewAI: Getting Started

Let’s get started with building AI agents using CrewAI.

Before you write any code, make sure you have a local LLM set up. Start by installing Ollama.

Ollama lets you run models like LLaMA on your own computer. Use this command:

brew install ollama   # Mac

Or, download Ollama from the official website for Windows, Linux, or Mac.

After installing, open your terminal and pull the model:

ollama pull llama3

Next, install CrewAI:

pip install crewai

CrewAI revolves around three core abstractions:

  1. Agents (the specialized personas that do the work)
  2. Tasks (the units of work, each assigned to an agent)
  3. Crews (the team that connects agents and tasks into a workflow)

Start by thinking about the roles and responsibilities, not by writing code right away.

Let’s go through a complete example to see how to build AI agents with CrewAI, step by step.

Step 1: Initialize the Local LLM

from crewai import Agent, Task, Crew, Process, LLM

# 1. Initialize our free, local LLM via Ollama
# Make sure Ollama is running in the background!
local_llm = LLM(
    model="ollama/llama3", 
    base_url="http://localhost:11434"
)

In this step, you tell CrewAI to use a local LLaMA 3 model. The base_url points to your Ollama server.
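Before kicking off a crew, it can save debugging time to confirm the Ollama server is actually reachable. Here is a small optional sanity check; the `ollama_is_running` helper is my own, not part of CrewAI, and it relies on Ollama's root endpoint answering with a short status message:

```python
import urllib.request
import urllib.error

def ollama_is_running(base_url: str = "http://localhost:11434",
                      timeout: float = 2.0) -> bool:
    """Return True if an Ollama server answers at base_url."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            # Ollama's root endpoint replies with a 200 status when it is up.
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

if not ollama_is_running():
    print("Ollama doesn't seem to be running; start it with `ollama serve`.")
```

If this prints the warning, start the server before running your crew, otherwise the LLM calls will fail.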

Step 2: Define Your Agents

# 2. Define our Agents
researcher = Agent(
    role='Senior Tech Researcher',
    goal='Uncover the latest trends in open-source AI models.',
    backstory='You are a meticulous tech analyst known for finding accurate, fluff-free technical information.',
    verbose=True,
    allow_delegation=False,
    llm=local_llm
)

writer = Agent(
    role='Technical Content Strategist',
    goal='Distill complex research into clear, digestible summaries for junior developers.',
    backstory='You are an expert technical writer. You avoid buzzwords and explain things simply and practically.',
    verbose=True,
    allow_delegation=False,
    llm=local_llm
)

Here’s where it gets interesting. Each agent is more than just a prompt; it acts as a persona with its own rules:

  1. The researcher focuses on accuracy and depth.
  2. The writer focuses on clarity and simplification.

Dividing roles like this is what makes multi-agent systems so effective.

Step 3: Define the Tasks

# 3. Define our Tasks
research_task = Task(
    description='Analyze the current landscape of local AI models. Identify 3 key benefits of running models locally versus using paid cloud APIs.',
    expected_output='A bulleted list of 3 key benefits with a brief technical explanation for each.',
    agent=researcher
)

writing_task = Task(
    description='Take the research provided and write a short, engaging 2-paragraph summary. Format it nicely with Markdown.',
    expected_output='A 2-paragraph Markdown summary.',
    agent=writer
)

Assign each task to a specific agent. Always define what the output should look like. If your tasks are unclear, your agents will give unclear results.

Step 4: Assemble the Crew

# 4. Assemble the Crew and Kick it Off
tech_crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    process=Process.sequential # The writer waits for the researcher to finish
)

This is where you set up the workflow. Using Process.sequential means the tasks will run one after another, in order.
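To build intuition for what a sequential process does, here is a framework-free sketch in which each task's output becomes context for the next. `fake_llm` is a placeholder for a real model call, and the exact prompt format is an assumption for illustration, not CrewAI's internal behavior:

```python
# Plain-Python sketch of a sequential process: each task's output
# is fed into the next task's prompt as context.
def fake_llm(prompt: str) -> str:
    # Placeholder: a real pipeline would call the LLM here.
    return f"[output for: {prompt[:40]}]"

def run_sequential(tasks: list[str]) -> str:
    context = ""
    for description in tasks:
        # Append the previous task's result so the next task can build on it.
        prompt = f"{description}\n\nContext:\n{context}" if context else description
        context = fake_llm(prompt)
    return context

result = run_sequential([
    "Research benefits of local AI models.",
    "Summarize the research in two paragraphs.",
])
print(result)
```

The final result comes from the last task, which saw everything produced before it; that is the essence of a sequential crew.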

Step 5: Run the System

print("Starting the AI Crew... Watch them work!")
result = tech_crew.kickoff()

print("\n==================================")
print("FINAL OUTPUT:")
print("==================================")
print(result)

When you run the system, the researcher creates structured insights, the writer turns them into clear content, and you end up with a finished result.

This gives you your first working AI agent pipeline with CrewAI.

When the run finishes, you will see execution output like the following:

[Screenshot: CrewAI execution log]

This output shows how each agent completed its task step by step, including LLM call times and the order of task completion, so you can see how the system works.

Closing Thoughts

Learning to build AI agents with CrewAI is more than picking up a new framework. It’s about understanding a new way to design AI systems.

If you want to become an AI Engineer or ML Engineer in 2026, this mindset will set you apart. It’s not just about knowing models, but about building systems that solve real problems.

I hope you found this article on building AI agents with CrewAI helpful.

For more AI and machine learning tips, follow me on Instagram. My book, Hands-On GenAI, LLMs & AI Agents, can also help you grow your AI career.

Aman Kharwal

AI/ML Engineer | Published Author. My aim is to decode data science for the real world in the most simple words.
