Welcome to Atomic Agents Documentation

A Lightweight and Modular Framework for Building AI Agents

AI Assistant Resources

πŸ“₯ Download Documentation for AI Assistants and LLMs

All files are optimized for AI assistants and Large Language Models, with clear structure and formatting for easy parsing.

The Atomic Agents framework is designed around the concept of atomicity: it is an extremely lightweight and modular framework for building Agentic AI pipelines and applications without sacrificing developer experience or maintainability. The framework provides a set of tools and agents that can be combined to create powerful applications. It is built on top of Instructor and leverages Pydantic for data and schema validation and serialization.

All logic and control flows are written in Python, enabling developers to apply familiar best practices and workflows from traditional software development without compromising flexibility or clarity.

Key Features

  • Modularity: Build AI applications by combining small, reusable components

  • Predictability: Define clear input and output schemas using Pydantic (see the schema sketch after this list)

  • Extensibility: Easily swap out components or integrate new ones

  • Control: Fine-tune each part of the system individually

  • Provider Agnostic: Works with various LLM providers through Instructor

  • Built for Production: Robust error handling and async support

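To make the Predictability point concrete, here is a minimal sketch of a pair of custom schemas built on Pydantic. The QuestionInput and AnswerOutput names are illustrative only, and the sketch assumes BaseIOSchema is importable from the top-level package, as in the Quick Example imports further below.

from pydantic import Field
from atomic_agents import BaseIOSchema


class QuestionInput(BaseIOSchema):
    """A question for the agent to answer."""

    question: str = Field(..., description="The user's question")


class AnswerOutput(BaseIOSchema):
    """A structured answer produced by the agent."""

    answer: str = Field(..., description="A concise answer to the question")

# An agent can then be parameterized with these schemas, for example:
# agent = AtomicAgent[QuestionInput, AnswerOutput](config=AgentConfig(client=client, model="gpt-4o-mini"))
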
Installation

You can install Atomic Agents using pip:

pip install atomic-agents

Or using Poetry (recommended):

poetry add atomic-agents

Make sure you also install the client library for the provider you want to use. For example, to use OpenAI and Groq:

pip install openai groq

This also installs the Atomic Assembler CLI, which can be used to download Tools (and soon also Agents and Pipelines).
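
If the installation succeeded, the CLI can typically be launched straight from your shell; the command name below assumes the package's default entry point:

atomic

This opens an interactive menu from which Tools can be browsed and downloaded.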

Note

The framework supports multiple providers through Instructor, including OpenAI, Anthropic, Groq, Ollama (local models), Gemini, and more! For a full list of all supported providers and their setup instructions, have a look at the Instructor Integrations documentation.
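
For example, pointing the Quick Example below at Groq only changes how the Instructor client is constructed. This is a hedged sketch; the model string in the comment is an assumption and may need updating for your account:

import groq
import instructor

# Requires GROQ_API_KEY in the environment (or pass api_key=... to groq.Groq())
client = instructor.from_groq(groq.Groq())

# The rest of the agent setup stays the same; only the model string changes, e.g.
# model="llama-3.3-70b-versatile"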

Quick Example

Here’s a glimpse of how easy it is to create an agent:

import os

import instructor
import openai
from atomic_agents.context import ChatHistory
from atomic_agents import AtomicAgent, AgentConfig, BasicChatInputSchema, BasicChatOutputSchema


# Set up your API key (either in environment or pass directly)
# os.environ["OPENAI_API_KEY"] = "your-api-key"
# or pass it to the client: openai.OpenAI(api_key="your-api-key")

# Initialize agent with history
history = ChatHistory()

# Set up client with your preferred provider
client = instructor.from_openai(openai.OpenAI())  # Pass your API key here if not in environment

# Create an agent
agent = AtomicAgent[BasicChatInputSchema, BasicChatOutputSchema](
    config=AgentConfig(
        client=client,
        model="gpt-4o-mini",  # Use your provider's model
        history=history
    )
)

# Interact with your agent (using the agent's input schema)
response = agent.run(agent.input_schema(chat_message="Tell me about quantum computing"))

# Or more explicitly:
response = agent.run(
    BasicChatInputSchema(chat_message="Tell me about quantum computing")
)

print(response)
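
Because the agent keeps a reference to the ChatHistory instance, a second call continues the same conversation. A minimal follow-up sketch:

# Follow-up turn: the shared ChatHistory carries the earlier exchange as context
followup = agent.run(agent.input_schema(chat_message="Can you explain that more simply?"))
print(followup)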

Example Projects

Check out the example projects in our GitHub repository.

Community & Support
