================================================================================ ATOMIC AGENTS DOCUMENTATION ================================================================================ This file contains the complete documentation for the Atomic Agents framework. Generated for use with Large Language Models and AI assistants. Project Repository: https://github.com/BrainBlend-AI/atomic-agents ================================================================================ DOCUMENTATION ================================================================================ Welcome to Atomic Agents Documentation[](#welcome-to-atomic-agents-documentation "Link to this heading") ========================================================================================================= User Guide[](#user-guide "Link to this heading") ------------------------------------------------- This section contains detailed guides for working with Atomic Agents. ### Quickstart Guide[](#quickstart-guide "Link to this heading") **See also:** * [Quickstart runnable examples on GitHub](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples/quickstart) * [All Atomic Agents examples on GitHub](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples) This guide will help you get started with the Atomic Agents framework. We’ll cover basic usage, custom agents, and different AI providers. #### Installation[](#installation "Link to this heading") First, install the package using pip: ``` pip install atomic-agents ``` #### Basic Chatbot[](#basic-chatbot "Link to this heading") Let’s start with a simple chatbot: ``` import os import instructor import openai from rich.console import Console from atomic_agents.context import ChatHistory from atomic_agents import AtomicAgent, AgentConfig, BasicChatInputSchema, BasicChatOutputSchema # Initialize console for pretty outputs console = Console() # History setup history = ChatHistory() # Initialize history with an initial message from the assistant initial_message = BasicChatOutputSchema(chat_message="Hello! 
How can I assist you today?") history.add_message("assistant", initial_message) # OpenAI client setup using the Instructor library client = instructor.from_openai(openai.OpenAI(api_key=os.getenv("OPENAI_API_KEY"))) # Create agent with type parameters agent = AtomicAgent[BasicChatInputSchema, BasicChatOutputSchema]( config=AgentConfig( client=client, model="gpt-4o-mini", # Using the latest model history=history, model_api_parameters={"max_tokens": 2048} ) ) # Start a loop to handle user inputs and agent responses while True: # Prompt the user for input user_input = console.input("[bold blue]You:[/bold blue] ") # Check if the user wants to exit the chat if user_input.lower() in ["/exit", "/quit"]: console.print("Exiting chat...") break # Process the user's input through the agent and get the response input_schema = BasicChatInputSchema(chat_message=user_input) response = agent.run(input_schema) # Display the agent's response console.print("Agent: ", response.chat_message) ``` #### Streaming Responses[](#streaming-responses "Link to this heading") For a more interactive experience, you can use streaming with async processing: ``` import os import instructor import openai import asyncio from rich.console import Console from rich.panel import Panel from rich.text import Text from rich.live import Live from atomic_agents.context import ChatHistory from atomic_agents import AtomicAgent, AgentConfig, BasicChatInputSchema, BasicChatOutputSchema # Initialize console for pretty outputs console = Console() # History setup history = ChatHistory() # Initialize history with an initial message from the assistant initial_message = BasicChatOutputSchema(chat_message="Hello! How can I assist you today?") history.add_message("assistant", initial_message) # OpenAI client setup using the Instructor library for async operations client = instructor.from_openai(openai.AsyncOpenAI(api_key=os.getenv("OPENAI_API_KEY"))) # Agent setup with specified configuration agent = AtomicAgent( config=AgentConfig( client=client, model="gpt-4o-mini", history=history, ) ) # Display the initial message from the assistant console.print(Text("Agent:", style="bold green"), end=" ") console.print(Text(initial_message.chat_message, style="green")) async def main(): # Start an infinite loop to handle user inputs and agent responses while True: # Prompt the user for input with a styled prompt user_input = console.input("\n[bold blue]You:[/bold blue] ") # Check if the user wants to exit the chat if user_input.lower() in ["/exit", "/quit"]: console.print("Exiting chat...") break # Process the user's input through the agent and get the streaming response input_schema = BasicChatInputSchema(chat_message=user_input) console.print() # Add newline before response # Use Live display to show streaming response with Live("", refresh_per_second=10, auto_refresh=True) as live: current_response = "" async for partial_response in agent.run_async(input_schema): if hasattr(partial_response, "chat_message") and partial_response.chat_message: # Only update if we have new content if partial_response.chat_message != current_response: current_response = partial_response.chat_message # Combine the label and response in the live display display_text = Text.assemble(("Agent: ", "bold green"), (current_response, "green")) live.update(display_text) if __name__ == "__main__": import asyncio asyncio.run(main()) ``` #### Custom Input/Output Schema[](#custom-input-output-schema "Link to this heading") For more structured interactions, define custom schemas: ``` 
import os import instructor import openai from rich.console import Console from typing import List from pydantic import Field from atomic_agents.context import ChatHistory, SystemPromptGenerator from atomic_agents import AtomicAgent, AgentConfig, BasicChatInputSchema, BaseIOSchema # Initialize console for pretty outputs console = Console() # History setup history = ChatHistory() # Custom output schema class CustomOutputSchema(BaseIOSchema): """This schema represents the response generated by the chat agent, including suggested follow-up questions.""" chat_message: str = Field( ..., description="The chat message exchanged between the user and the chat agent.", ) suggested_user_questions: List[str] = Field( ..., description="A list of suggested follow-up questions the user could ask the agent.", ) # Initialize history with an initial message from the assistant initial_message = CustomOutputSchema( chat_message="Hello! How can I assist you today?", suggested_user_questions=["What can you do?", "Tell me a joke", "Tell me about how you were made"], ) history.add_message("assistant", initial_message) # OpenAI client setup using the Instructor library client = instructor.from_openai(openai.OpenAI(api_key=os.getenv("OPENAI_API_KEY"))) # Custom system prompt system_prompt_generator = SystemPromptGenerator( background=[ "This assistant is a knowledgeable AI designed to be helpful, friendly, and informative.", "It has a wide range of knowledge on various topics and can engage in diverse conversations.", ], steps=[ "Analyze the user's input to understand the context and intent.", "Formulate a relevant and informative response based on the assistant's knowledge.", "Generate 3 suggested follow-up questions for the user to explore the topic further.", ], output_instructions=[ "Provide clear, concise, and accurate information in response to user queries.", "Maintain a friendly and professional tone throughout the conversation.", "Conclude each response with 3 relevant suggested questions for the user.", ], ) # Agent setup with specified configuration and custom output schema agent = AtomicAgent[BasicChatInputSchema, CustomOutputSchema]( config=AgentConfig( client=client, model="gpt-4o-mini", system_prompt_generator=system_prompt_generator, history=history, ) ) # Start a loop to handle user inputs and agent responses while True: # Prompt the user for input user_input = console.input("[bold blue]You:[/bold blue] ") # Check if the user wants to exit the chat if user_input.lower() in ["/exit", "/quit"]: console.print("Exiting chat...") break # Process the user's input through the agent input_schema = BasicChatInputSchema(chat_message=user_input) response = agent.run(input_schema) # Display the agent's response console.print("[bold green]Agent:[/bold green] ", response.chat_message) # Display the suggested questions console.print("\n[bold cyan]Suggested questions you could ask:[/bold cyan]") for i, question in enumerate(response.suggested_user_questions, 1): console.print(f"[cyan]{i}. 
{question}[/cyan]") console.print() # Add an empty line for better readability ``` #### Multiple AI Providers Support[](#multiple-ai-providers-support "Link to this heading") The framework supports multiple AI providers: ``` { "openai": "gpt-4o-mini", "anthropic": "claude-3-5-haiku-20241022", "groq": "mixtral-8x7b-32768", "ollama": "llama3", "gemini": "gemini-2.0-flash-exp", "openrouter": "mistral/ministral-8b" } ``` Here’s how to set up clients for different providers: ``` import os import instructor from rich.console import Console from rich.text import Text from atomic_agents.context import ChatHistory from atomic_agents import AtomicAgent, AgentConfig, BasicChatInputSchema, BasicChatOutputSchema from dotenv import load_dotenv load_dotenv() # Initialize console for pretty outputs console = Console() # History setup history = ChatHistory() # Initialize history with an initial message from the assistant initial_message = BasicChatOutputSchema(chat_message="Hello! How can I assist you today?") history.add_message("assistant", initial_message) # Function to set up the client based on the chosen provider def setup_client(provider): if provider == "openai": from openai import OpenAI api_key = os.getenv("OPENAI_API_KEY") client = instructor.from_openai(OpenAI(api_key=api_key)) model = "gpt-4o-mini" elif provider == "anthropic": from anthropic import Anthropic api_key = os.getenv("ANTHROPIC_API_KEY") client = instructor.from_anthropic(Anthropic(api_key=api_key)) model = "claude-3-5-haiku-20241022" elif provider == "groq": from groq import Groq api_key = os.getenv("GROQ_API_KEY") client = instructor.from_groq( Groq(api_key=api_key), mode=instructor.Mode.JSON ) model = "mixtral-8x7b-32768" elif provider == "ollama": from openai import OpenAI as OllamaClient client = instructor.from_openai( OllamaClient( base_url="http://localhost:11434/v1", api_key="ollama" ), mode=instructor.Mode.JSON ) model = "llama3" elif provider == "gemini": from openai import OpenAI api_key = os.getenv("GEMINI_API_KEY") client = instructor.from_openai( OpenAI( api_key=api_key, base_url="https://generativelanguage.googleapis.com/v1beta/openai/" ), mode=instructor.Mode.JSON ) model = "gemini-2.0-flash-exp" elif provider == "openrouter": from openai import OpenAI as OpenRouterClient api_key = os.getenv("OPENROUTER_API_KEY") client = instructor.from_openai( OpenRouterClient( base_url="https://openrouter.ai/api/v1", api_key=api_key ) ) model = "mistral/ministral-8b" else: raise ValueError(f"Unsupported provider: {provider}") return client, model # Prompt for provider choice provider = console.input("Choose a provider (openai/anthropic/groq/ollama/gemini/openrouter): ").lower() # Set up client and model client, model = setup_client(provider) # Create agent with chosen provider agent = AtomicAgent[BasicChatInputSchema, BasicChatOutputSchema]( config=AgentConfig( client=client, model=model, history=history, model_api_parameters={"max_tokens": 2048} ) ) ``` The framework supports multiple providers through Instructor: * **OpenAI**: Standard GPT models * **Anthropic**: Claude models * **Groq**: Fast inference for open models * **Ollama**: Local models (requires Ollama running) * **Gemini**: Google’s Gemini models Each provider requires its own API key (except Ollama) which should be set in environment variables: ``` # OpenAI export OPENAI_API_KEY="your-openai-key" # Anthropic export ANTHROPIC_API_KEY="your-anthropic-key" # Groq export GROQ_API_KEY="your-groq-key" # Gemini export GEMINI_API_KEY="your-gemini-key" # OpenRouter export 
OPENROUTER_API_KEY="your-openrouter-key" ``` #### Running the Examples[](#running-the-examples "Link to this heading") To run any of these examples: 1. Save the code in a Python file (e.g., `chatbot.py`) 2. Set your API key as an environment variable: ``` export OPENAI_API_KEY="your-api-key" ``` 3. Run the script: ``` poetry run python chatbot.py ``` #### Next Steps[](#next-steps "Link to this heading") After trying these examples, you can: 1. Learn about [tools and their integration](#document-guides/tools) 2. Review the [API reference](#document-api/index) for detailed documentation #### Explore More Examples[](#explore-more-examples "Link to this heading") For more advanced usage and examples, please check out the [Atomic Agents examples on GitHub](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples). These examples demonstrate various capabilities of the framework including custom schemas, advanced history usage, tool integration, and more. ### Tools Guide[](#tools-guide "Link to this heading") Atomic Agents uses a unique approach to tools through the **Atomic Forge** system. Rather than bundling all tools into a single package, tools are designed to be standalone, modular components that you can download and integrate into your project as needed. #### Philosophy[](#philosophy "Link to this heading") The Atomic Forge approach provides several key benefits: 1. **Full Control**: You have complete ownership and control over each tool you download. Want to modify a tool’s behavior? You can change it without impacting other users. 2. **Dependency Management**: Since tools live in your codebase, you have better control over dependencies. 3. **Lightweight**: Download only the tools you need, avoiding unnecessary dependencies. For example, you don’t need Sympy if you’re not using the Calculator tool. #### Available Tools[](#available-tools "Link to this heading") The Atomic Forge includes several pre-built tools: * **Calculator**: Perform mathematical calculations * **SearXNG Search**: Search the web using SearXNG * **Tavily Search**: AI-powered web search * **YouTube Transcript Scraper**: Extract transcripts from YouTube videos * **Webpage Scraper**: Extract content from web pages #### Using Tools[](#using-tools "Link to this heading") ##### 1. Download Tools[](#download-tools "Link to this heading") Use the Atomic Assembler CLI to download tools: ``` atomic ``` This will present a menu where you can select and download tools. Each tool includes: * Input/Output schemas * Usage examples * Dependencies * Installation instructions ##### 2. Tool Structure[](#tool-structure "Link to this heading") Each tool follows a standard structure: ``` tool_name/ │ .coveragerc │ pyproject.toml │ README.md │ requirements.txt │ poetry.lock │ ├── tool/ │ │ tool_name.py │ │ some_util_file.py │ └── tests/ │ test_tool_name.py │ test_some_util_file.py ``` ##### 3. Using a Tool[](#using-a-tool "Link to this heading") Here’s an example of using a downloaded tool: ``` from calculator.tool.calculator import ( CalculatorTool, CalculatorInputSchema, CalculatorToolConfig ) # Initialize the tool calculator = CalculatorTool( config=CalculatorToolConfig() ) # Use the tool result = calculator.run( CalculatorInputSchema( expression="2 + 2" ) ) print(f"Result: {result.value}") # Result: 4 ``` #### Creating Custom Tools[](#creating-custom-tools "Link to this heading") You can create your own tools by following these guidelines: ##### 1. 
Basic Structure[](#basic-structure "Link to this heading") ``` from atomic_agents import BaseTool, BaseToolConfig, BaseIOSchema ################ # Input Schema # ################ class MyToolInputSchema(BaseIOSchema): """Define what your tool accepts as input""" value: str = Field(..., description="Input value to process") ##################### # Output Schema(s) # ##################### class MyToolOutputSchema(BaseIOSchema): """Define what your tool returns""" result: str = Field(..., description="Processed result") ################# # Configuration # ################# class MyToolConfig(BaseToolConfig): """Tool configuration options""" api_key: str = Field( default=os.getenv("MY_TOOL_API_KEY"), description="API key for the service" ) ##################### # Main Tool & Logic # ##################### class MyTool(BaseTool[MyToolInputSchema, MyToolOutputSchema]): """Main tool implementation""" input_schema = MyToolInputSchema output_schema = MyToolOutputSchema def __init__(self, config: MyToolConfig = MyToolConfig()): super().__init__(config) self.api_key = config.api_key def run(self, params: MyToolInputSchema) -> MyToolOutputSchema: # Implement your tool's logic here result = self.process_input(params.value) return MyToolOutputSchema(result=result) ``` ##### 2. Best Practices[](#best-practices "Link to this heading") * **Single Responsibility**: Each tool should do one thing well * **Clear Interfaces**: Use explicit input/output schemas * **Error Handling**: Validate inputs and handle errors gracefully * **Documentation**: Include clear usage examples and requirements * **Tests**: Write comprehensive tests for your tool * **Dependencies**: Manually create `requirements.txt` with only runtime dependencies ##### 3. Tool Requirements[](#tool-requirements "Link to this heading") * Must inherit from appropriate base classes: + Input/Output schemas from `BaseIOSchema` + Configuration from `BaseToolConfig` + Tool class from `BaseTool` * Must include proper documentation * Must include tests * Must follow the standard directory structure #### Next Steps[](#next-steps "Link to this heading") 1. Browse available tools in the [Atomic Forge repository](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-forge) 2. Try downloading and using different tools via the CLI 3. Consider creating your own tools following the guidelines 4. Share your tools with the community through pull requests ### Implementation Patterns[](#implementation-patterns "Link to this heading") The framework supports various implementation patterns and use cases: #### Chatbots and Assistants[](#chatbots-and-assistants "Link to this heading") * Basic chat interfaces with any LLM provider * Streaming responses * Custom response schemas * Suggested follow-up questions * History management and context retention * Multi-turn conversations #### RAG Systems[](#rag-systems "Link to this heading") * Query generation and optimization * Context-aware responses * Document Q&A with source tracking * Information synthesis and summarization * Custom embedding and retrieval strategies * Hybrid search approaches #### Specialized Agents[](#specialized-agents "Link to this heading") * YouTube video summarization and analysis * Web search and deep research * Recipe generation from various sources * Multimodal interactions (text, images, etc.) * Custom tool integration * Task orchestration ### Provider Integration Guide[](#provider-integration-guide "Link to this heading") Atomic Agents is designed to be provider-agnostic. 
Here’s how to work with different providers: #### Provider Selection[](#provider-selection "Link to this heading") * Choose any provider supported by Instructor * Configure provider-specific settings * Handle rate limits and quotas * Implement fallback strategies #### Local Development[](#local-development "Link to this heading") * Use Ollama for local testing * Mock responses for development * Debug provider interactions * Test provider switching #### Production Deployment[](#production-deployment "Link to this heading") * Load balancing between providers * Failover configurations * Cost optimization strategies * Performance monitoring #### Custom Provider Integration[](#custom-provider-integration "Link to this heading") * Extend Instructor for new providers * Implement custom client wrappers * Add provider-specific features * Handle unique response formats ### Best Practices[](#best-practices "Link to this heading") #### Error Handling[](#error-handling "Link to this heading") * Implement proper exception handling * Add retry mechanisms * Log provider errors * Handle rate limits gracefully #### Performance Optimization[](#performance-optimization "Link to this heading") * Use streaming for long responses * Implement caching strategies * Optimize prompt lengths * Batch operations when possible #### Security[](#security "Link to this heading") * Secure API key management * Input validation and sanitization * Output filtering * Rate limiting and quotas ### Getting Help[](#getting-help "Link to this heading") If you need help, you can: 1. Check our [GitHub Issues](https://github.com/BrainBlend-AI/atomic-agents/issues) 2. Join our [Reddit community](https://www.reddit.com/r/AtomicAgents/) 3. Read through our examples in the repository 4. Review the example projects in `atomic-examples/` **See also**: * [API Reference](#document-api/index) - Browse the API reference * [Main Documentation](#document-index) - Return to main documentation API Reference[](#api-reference "Link to this heading") ------------------------------------------------------- This section contains the API reference for all public modules and classes in Atomic Agents. ### Agents[](#agents "Link to this heading") #### Schema Hierarchy[](#schema-hierarchy "Link to this heading") The Atomic Agents framework uses Pydantic for schema validation and serialization. All input and output schemas follow this inheritance pattern: ``` pydantic.BaseModel └── BaseIOSchema ├── BasicChatInputSchema └── BasicChatOutputSchema ``` ##### BaseIOSchema[](#baseioschema "Link to this heading") The base schema class that all agent input/output schemas inherit from. *class* BaseIOSchema[](#BaseIOSchema "Link to this definition") Base schema class for all agent input/output schemas. Inherits from [`pydantic.BaseModel`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel "(in Pydantic v0.0.0)"). All agent schemas must inherit from this class to ensure proper serialization and validation. **Inheritance:** * [`pydantic.BaseModel`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel "(in Pydantic v0.0.0)") ##### BasicChatInputSchema[](#basicchatinputschema "Link to this heading") The default input schema for agents. *class* BasicChatInputSchema[](#BasicChatInputSchema "Link to this definition") Default input schema for agent interactions. 
**Inheritance:** * [`BaseIOSchema`](#BaseIOSchema "BaseIOSchema") → [`pydantic.BaseModel`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel "(in Pydantic v0.0.0)") chat\_message*: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")*[](#BasicChatInputSchema.chat_message "Link to this definition") The message to send to the agent. Example: ``` >>> input_schema = BasicChatInputSchema(chat_message="Hello, agent!") >>> agent.run(input_schema) ``` ##### BasicChatOutputSchema[](#basicchatoutputschema "Link to this heading") The default output schema for agents. *class* BasicChatOutputSchema[](#BasicChatOutputSchema "Link to this definition") Default output schema for agent responses. **Inheritance:** * [`BaseIOSchema`](#BaseIOSchema "BaseIOSchema") → [`pydantic.BaseModel`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel "(in Pydantic v0.0.0)") chat\_message*: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")*[](#BasicChatOutputSchema.chat_message "Link to this definition") The response message from the agent. Example: ``` >>> response = agent.run(input_schema) >>> print(response.chat_message) ``` ##### Creating Custom Schemas[](#creating-custom-schemas "Link to this heading") You can create custom input/output schemas by inheriting from `BaseIOSchema`: ``` from pydantic import Field from typing import List from atomic_agents import BaseIOSchema class CustomInputSchema(BaseIOSchema): chat_message: str = Field(..., description="User's message") context: str = Field(None, description="Optional context for the agent") class CustomOutputSchema(BaseIOSchema): chat_message: str = Field(..., description="Agent's response") follow_up_questions: List[str] = Field( default_factory=list, description="Suggested follow-up questions" ) confidence: float = Field( ..., description="Confidence score for the response", ge=0.0, le=1.0 ) ``` #### Base Agent[](#base-agent "Link to this heading") The `AtomicAgent` class is the foundation for building AI agents in the Atomic Agents framework. It handles chat interactions, history management, system prompts, and responses from language models. 
``` from atomic_agents import AtomicAgent, AgentConfig from atomic_agents.context import ChatHistory, SystemPromptGenerator # Create agent with basic configuration agent = AtomicAgent[BasicChatInputSchema, BasicChatOutputSchema]( config=AgentConfig( client=instructor.from_openai(OpenAI()), model="gpt-4-turbo-preview", history=ChatHistory(), system_prompt_generator=SystemPromptGenerator() ) ) # Run the agent response = agent.run(user_input) # Stream responses async for partial_response in agent.run_async(user_input): print(partial_response) ``` ##### Configuration[](#configuration "Link to this heading") The `AgentConfig` class provides configuration options: ``` class AgentConfig: client: instructor.Instructor # Client for interacting with the language model model: str = "gpt-4-turbo-preview" # Model to use history: Optional[ChatHistory] = None # History component system_prompt_generator: Optional[SystemPromptGenerator] = None # Prompt generator input_schema: Optional[Type[BaseModel]] = None # Custom input schema output_schema: Optional[Type[BaseModel]] = None # Custom output schema model_api_parameters: Optional[dict] = None # Additional API parameters ``` ##### Input/Output Schemas[](#input-output-schemas "Link to this heading") Default schemas for basic chat interactions: ``` class BasicChatInputSchema(BaseIOSchema): """Input from the user to the AI agent.""" chat_message: str = Field( ..., description="The chat message sent by the user." ) class BasicChatOutputSchema(BaseIOSchema): """Response generated by the chat agent.""" chat_message: str = Field( ..., description="The markdown-enabled response generated by the chat agent." ) ``` ##### Key Methods[](#key-methods "Link to this heading") * `run(user_input: Optional[BaseIOSchema] = None) -> BaseIOSchema`: Process user input and get response * `run_async(user_input: Optional[BaseIOSchema] = None)`: Stream responses asynchronously * `get_response(response_model=None) -> Type[BaseModel]`: Get direct model response * `reset_history()`: Reset history to initial state * `get_context_provider(provider_name: str)`: Get a registered context provider * `register_context_provider(provider_name: str, provider: BaseDynamicContextProvider)`: Register a new context provider * `unregister_context_provider(provider_name: str)`: Remove a context provider ##### Context Providers[](#context-providers "Link to this heading") Context providers can be used to inject dynamic information into the system prompt: ``` from atomic_agents.context import BaseDynamicContextProvider class SearchResultsProvider(BaseDynamicContextProvider): def __init__(self, title: str): super().__init__(title=title) self.results = [] def get_info(self) -> str: return "\n\n".join([ f"Result {idx}:\n{result}" for idx, result in enumerate(self.results, 1) ]) # Register with agent agent.register_context_provider( "search_results", SearchResultsProvider("Search Results") ) ``` ##### Streaming Support[](#streaming-support "Link to this heading") The agent supports streaming responses for more interactive experiences: ``` async def chat(): async for partial_response in agent.run_async(user_input): # Handle each chunk of the response print(partial_response.chat_message) ``` ##### History Management[](#history-management "Link to this heading") The agent automatically manages conversation history through the `ChatHistory` component: ``` # Access history history = agent.history.get_history() # Reset to initial state agent.reset_history() # Save/load history state serialized = 
agent.history.dump() agent.history.load(serialized) ``` ##### Custom Schemas[](#custom-schemas "Link to this heading") You can use custom input/output schemas for structured interactions: ``` from pydantic import BaseModel, Field from typing import List class CustomInput(BaseIOSchema): """Custom input with specific fields""" question: str = Field(..., description="User's question") context: str = Field(..., description="Additional context") class CustomOutput(BaseIOSchema): """Custom output with structured data""" answer: str = Field(..., description="Answer to the question") sources: List[str] = Field(..., description="Source references") # Create agent with custom schemas agent = AtomicAgent[CustomInput, CustomOutput]( config=AgentConfig( client=client, model=model, ) ) ``` For full API details: atomic\_agents.agents.atomic\_agent.model\_from\_chunks\_patched(*cls*, *json\_chunks*, *\*\*kwargs*)[](#atomic_agents.agents.atomic_agent.model_from_chunks_patched "Link to this definition") *async* atomic\_agents.agents.atomic\_agent.model\_from\_chunks\_async\_patched(*cls*, *json\_chunks*, *\*\*kwargs*)[](#atomic_agents.agents.atomic_agent.model_from_chunks_async_patched "Link to this definition") *class* atomic\_agents.agents.atomic\_agent.BasicChatInputSchema(*\**, *chat\_message: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")*)[](#atomic_agents.agents.atomic_agent.BasicChatInputSchema "Link to this definition") Bases: [`BaseIOSchema`](index.html#atomic_agents.base.base_io_schema.BaseIOSchema "atomic_agents.base.base_io_schema.BaseIOSchema") This schema represents the input from the user to the AI agent. chat\_message*: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")*[](#atomic_agents.agents.atomic_agent.BasicChatInputSchema.chat_message "Link to this definition") model\_config*: ClassVar[ConfigDict]* *= {}*[](#atomic_agents.agents.atomic_agent.BasicChatInputSchema.model_config "Link to this definition") Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. *class* atomic\_agents.agents.atomic\_agent.BasicChatOutputSchema(*\**, *chat\_message: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")*)[](#atomic_agents.agents.atomic_agent.BasicChatOutputSchema "Link to this definition") Bases: [`BaseIOSchema`](index.html#atomic_agents.base.base_io_schema.BaseIOSchema "atomic_agents.base.base_io_schema.BaseIOSchema") This schema represents the response generated by the chat agent. chat\_message*: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")*[](#atomic_agents.agents.atomic_agent.BasicChatOutputSchema.chat_message "Link to this definition") model\_config*: ClassVar[ConfigDict]* *= {}*[](#atomic_agents.agents.atomic_agent.BasicChatOutputSchema.model_config "Link to this definition") Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. 
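Before the formal `AgentConfig` entry that follows, here is a hedged sketch of constructing a configuration with its documented fields. The OpenAI client setup is an illustrative assumption; any Instructor-wrapped client satisfies the `client` field.

```
import os

import instructor
import openai

from atomic_agents import AgentConfig
from atomic_agents.context import ChatHistory, SystemPromptGenerator

# Illustrative client; AgentConfig only requires an Instructor instance.
client = instructor.from_openai(openai.OpenAI(api_key=os.getenv("OPENAI_API_KEY")))

config = AgentConfig(
    client=client,                                    # required
    model="gpt-4o-mini",                              # default per the signature below
    history=ChatHistory(),                            # optional
    system_prompt_generator=SystemPromptGenerator(),  # optional
    system_role="system",                             # default; None disables the system prompt
    model_api_parameters={"max_tokens": 2048},        # extra parameters passed to the provider
)
```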
*class* atomic\_agents.agents.atomic\_agent.AgentConfig(*\**, *client: Instructor*, *model: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)") = 'gpt-4o-mini'*, *history: [ChatHistory](index.html#atomic_agents.context.chat_history.ChatHistory "atomic_agents.context.chat_history.ChatHistory") | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)") = None*, *system\_prompt\_generator: [SystemPromptGenerator](index.html#atomic_agents.context.system_prompt_generator.SystemPromptGenerator "atomic_agents.context.system_prompt_generator.SystemPromptGenerator") | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)") = None*, *system\_role: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)") | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)") = 'system'*, *model\_api\_parameters: [dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.13)") | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)") = None*)[](#atomic_agents.agents.atomic_agent.AgentConfig "Link to this definition") Bases: [`BaseModel`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel "(in Pydantic v0.0.0)") client*: Instructor*[](#atomic_agents.agents.atomic_agent.AgentConfig.client "Link to this definition") model*: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")*[](#atomic_agents.agents.atomic_agent.AgentConfig.model "Link to this definition") history*: [ChatHistory](index.html#atomic_agents.context.chat_history.ChatHistory "atomic_agents.context.chat_history.ChatHistory") | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)")*[](#atomic_agents.agents.atomic_agent.AgentConfig.history "Link to this definition") system\_prompt\_generator*: [SystemPromptGenerator](index.html#atomic_agents.context.system_prompt_generator.SystemPromptGenerator "atomic_agents.context.system_prompt_generator.SystemPromptGenerator") | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)")*[](#atomic_agents.agents.atomic_agent.AgentConfig.system_prompt_generator "Link to this definition") system\_role*: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)") | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)")*[](#atomic_agents.agents.atomic_agent.AgentConfig.system_role "Link to this definition") model\_config*: ClassVar[ConfigDict]* *= {'arbitrary\_types\_allowed': True}*[](#atomic_agents.agents.atomic_agent.AgentConfig.model_config "Link to this definition") Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. model\_api\_parameters*: [dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.13)") | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)")*[](#atomic_agents.agents.atomic_agent.AgentConfig.model_api_parameters "Link to this definition") *class* atomic\_agents.agents.atomic\_agent.AtomicAgent(*config: [AgentConfig](index.html#atomic_agents.agents.atomic_agent.AgentConfig "atomic_agents.agents.atomic_agent.AgentConfig")*)[](#atomic_agents.agents.atomic_agent.AtomicAgent "Link to this definition") Bases: [`Generic`](https://docs.python.org/3/library/typing.html#typing.Generic "(in Python v3.13)") Base class for chat agents with full Instructor hook system integration. 
This class provides the core functionality for handling chat interactions, including managing history, generating system prompts, and obtaining responses from a language model. It includes comprehensive hook system support for monitoring and error handling. Type Parameters: InputSchema: Schema for the user input, must be a subclass of BaseIOSchema. OutputSchema: Schema for the agent’s output, must be a subclass of BaseIOSchema. client[](#atomic_agents.agents.atomic_agent.AtomicAgent.client "Link to this definition") Client for interacting with the language model. model[](#atomic_agents.agents.atomic_agent.AtomicAgent.model "Link to this definition") The model to use for generating responses. Type: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)") history[](#atomic_agents.agents.atomic_agent.AtomicAgent.history "Link to this definition") History component for storing chat history. Type: [ChatHistory](index.html#atomic_agents.context.chat_history.ChatHistory "atomic_agents.context.chat_history.ChatHistory") system\_prompt\_generator[](#atomic_agents.agents.atomic_agent.AtomicAgent.system_prompt_generator "Link to this definition") Component for generating system prompts. Type: [SystemPromptGenerator](index.html#atomic_agents.context.system_prompt_generator.SystemPromptGenerator "atomic_agents.context.system_prompt_generator.SystemPromptGenerator") system\_role[](#atomic_agents.agents.atomic_agent.AtomicAgent.system_role "Link to this definition") The role of the system in the conversation. None means no system prompt. Type: Optional[[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")] initial\_history[](#atomic_agents.agents.atomic_agent.AtomicAgent.initial_history "Link to this definition") Initial state of the history. Type: [ChatHistory](index.html#atomic_agents.context.chat_history.ChatHistory "atomic_agents.context.chat_history.ChatHistory") current\_user\_input[](#atomic_agents.agents.atomic_agent.AtomicAgent.current_user_input "Link to this definition") The current user input being processed. Type: Optional[InputSchema] model\_api\_parameters[](#atomic_agents.agents.atomic_agent.AtomicAgent.model_api_parameters "Link to this definition") Additional parameters passed to the API provider. - Use this for parameters like ‘temperature’, ‘max\_tokens’, etc. Type: [dict](https://docs.python.org/3/library/stdtypes.html#dict "(in Python v3.13)") Hook System: The AtomicAgent integrates with Instructor’s hook system to provide comprehensive monitoring and error handling capabilities. Supported events include: * ‘parse:error’: Triggered when Pydantic validation fails * ‘completion:kwargs’: Triggered before completion request * ‘completion:response’: Triggered after completion response * ‘completion:error’: Triggered on completion errors * ‘completion:last\_attempt’: Triggered on final retry attempt Hook Methods: * register\_hook(event, handler): Register a hook handler for an event * unregister\_hook(event, handler): Remove a hook handler * clear\_hooks(event=None): Clear hooks for specific event or all events * enable\_hooks()/disable\_hooks(): Control hook processing * hooks\_enabled: Property to check if hooks are enabled Example [``](#id1)[`](#id3)python # Basic usage agent = AtomicAgent[InputSchema, OutputSchema](config) # Register parse error hook for intelligent retry handling def handle\_parse\_error(error): > print(f”Validation failed: {error}”) > # Implement custom retry logic, logging, etc. 
agent.register\_hook(“parse:error”, handle\_parse\_error) # Now parse:error hooks will fire on validation failures response = agent.run(user\_input) [``](#id5)[`](#id7) \_\_init\_\_(*config: [AgentConfig](index.html#atomic_agents.agents.atomic_agent.AgentConfig "atomic_agents.agents.atomic_agent.AgentConfig")*)[](#atomic_agents.agents.atomic_agent.AtomicAgent.__init__ "Link to this definition") Initializes the AtomicAgent. Parameters: **config** ([*AgentConfig*](index.html#atomic_agents.agents.atomic_agent.AgentConfig "atomic_agents.agents.atomic_agent.AgentConfig")) – Configuration for the chat agent. reset\_history()[](#atomic_agents.agents.atomic_agent.AtomicAgent.reset_history "Link to this definition") Resets the history to its initial state. *property* input\_schema*: [Type](https://docs.python.org/3/library/typing.html#typing.Type "(in Python v3.13)")[[BaseIOSchema](index.html#atomic_agents.base.base_io_schema.BaseIOSchema "atomic_agents.base.base_io_schema.BaseIOSchema")]*[](#atomic_agents.agents.atomic_agent.AtomicAgent.input_schema "Link to this definition") *property* output\_schema*: [Type](https://docs.python.org/3/library/typing.html#typing.Type "(in Python v3.13)")[[BaseIOSchema](index.html#atomic_agents.base.base_io_schema.BaseIOSchema "atomic_agents.base.base_io_schema.BaseIOSchema")]*[](#atomic_agents.agents.atomic_agent.AtomicAgent.output_schema "Link to this definition") run(*user\_input: InputSchema | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)") = None*) → OutputSchema[](#atomic_agents.agents.atomic_agent.AtomicAgent.run "Link to this definition") Runs the chat agent with the given user input synchronously. Parameters: **user\_input** (*Optional**[**InputSchema**]*) – The input from the user. If not provided, skips adding to history. Returns: The response from the chat agent. Return type: OutputSchema run\_stream(*user\_input: InputSchema | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)") = None*) → [Generator](https://docs.python.org/3/library/typing.html#typing.Generator "(in Python v3.13)")[OutputSchema, [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)"), OutputSchema][](#atomic_agents.agents.atomic_agent.AtomicAgent.run_stream "Link to this definition") Runs the chat agent with the given user input, supporting streaming output. Parameters: **user\_input** (*Optional**[**InputSchema**]*) – The input from the user. If not provided, skips adding to history. Yields: *OutputSchema* – Partial responses from the chat agent. Returns: The final response from the chat agent. Return type: OutputSchema *async* run\_async(*user\_input: InputSchema | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)") = None*) → OutputSchema[](#atomic_agents.agents.atomic_agent.AtomicAgent.run_async "Link to this definition") Runs the chat agent asynchronously with the given user input. Parameters: **user\_input** (*Optional**[**InputSchema**]*) – The input from the user. If not provided, skips adding to history. Returns: The response from the chat agent. Return type: OutputSchema Raises: **NotAsyncIterableError** – If used as an async generator (in an async for loop). Use run\_async\_stream() method instead for streaming responses. 
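The distinction above is easiest to see side by side. A minimal sketch, assuming an `AtomicAgent` built on an async Instructor client as in the quickstart; the helper name and prompt text are illustrative:

```
from atomic_agents import BasicChatInputSchema

async def demo(agent) -> None:
    user_input = BasicChatInputSchema(chat_message="Give me a one-line summary.")

    # run_async: await one complete response object.
    response = await agent.run_async(user_input)
    print(response.chat_message)

    # run_async_stream (documented next): iterate partial responses as they arrive.
    async for partial in agent.run_async_stream(user_input):
        if getattr(partial, "chat_message", None):
            print(partial.chat_message)

# Run with: asyncio.run(demo(agent))
```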
*async* run\_async\_stream(*user\_input: InputSchema | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)") = None*) → [AsyncGenerator](https://docs.python.org/3/library/typing.html#typing.AsyncGenerator "(in Python v3.13)")[OutputSchema, [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)")][](#atomic_agents.agents.atomic_agent.AtomicAgent.run_async_stream "Link to this definition") Runs the chat agent asynchronously with the given user input, supporting streaming output. Parameters: **user\_input** (*Optional**[**InputSchema**]*) – The input from the user. If not provided, skips adding to history. Yields: *OutputSchema* – Partial responses from the chat agent. get\_context\_provider(*provider\_name: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")*) → [Type](https://docs.python.org/3/library/typing.html#typing.Type "(in Python v3.13)")[[BaseDynamicContextProvider](index.html#atomic_agents.context.system_prompt_generator.BaseDynamicContextProvider "atomic_agents.context.system_prompt_generator.BaseDynamicContextProvider")][](#atomic_agents.agents.atomic_agent.AtomicAgent.get_context_provider "Link to this definition") Retrieves a context provider by name. Parameters: **provider\_name** ([*str*](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")) – The name of the context provider. Returns: The context provider if found. Return type: [BaseDynamicContextProvider](index.html#atomic_agents.context.system_prompt_generator.BaseDynamicContextProvider "atomic_agents.context.system_prompt_generator.BaseDynamicContextProvider") Raises: [**KeyError**](https://docs.python.org/3/library/exceptions.html#KeyError "(in Python v3.13)") – If the context provider is not found. register\_context\_provider(*provider\_name: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")*, *provider: [BaseDynamicContextProvider](index.html#atomic_agents.context.system_prompt_generator.BaseDynamicContextProvider "atomic_agents.context.system_prompt_generator.BaseDynamicContextProvider")*)[](#atomic_agents.agents.atomic_agent.AtomicAgent.register_context_provider "Link to this definition") Registers a new context provider. Parameters: * **provider\_name** ([*str*](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")) – The name of the context provider. * **provider** ([*BaseDynamicContextProvider*](index.html#atomic_agents.context.system_prompt_generator.BaseDynamicContextProvider "atomic_agents.context.system_prompt_generator.BaseDynamicContextProvider")) – The context provider instance. unregister\_context\_provider(*provider\_name: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")*)[](#atomic_agents.agents.atomic_agent.AtomicAgent.unregister_context_provider "Link to this definition") Unregisters an existing context provider. Parameters: **provider\_name** ([*str*](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")) – The name of the context provider to remove. register\_hook(*event: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")*, *handler: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.13)")*) → [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)")[](#atomic_agents.agents.atomic_agent.AtomicAgent.register_hook "Link to this definition") Registers a hook handler for a specific event. 
Parameters: * **event** ([*str*](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")) – The event name (e.g., ‘parse:error’, ‘completion:kwargs’, etc.) * **handler** (*Callable*) – The callback function to handle the event unregister\_hook(*event: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")*, *handler: [Callable](https://docs.python.org/3/library/typing.html#typing.Callable "(in Python v3.13)")*) → [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)")[](#atomic_agents.agents.atomic_agent.AtomicAgent.unregister_hook "Link to this definition") Unregisters a hook handler for a specific event. Parameters: * **event** ([*str*](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")) – The event name * **handler** (*Callable*) – The callback function to remove clear\_hooks(*event: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)") | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)") = None*) → [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)")[](#atomic_agents.agents.atomic_agent.AtomicAgent.clear_hooks "Link to this definition") Clears hook handlers for a specific event or all events. Parameters: **event** (*Optional**[*[*str*](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")*]*) – The event name to clear, or None to clear all enable\_hooks() → [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)")[](#atomic_agents.agents.atomic_agent.AtomicAgent.enable_hooks "Link to this definition") Enable hook processing. disable\_hooks() → [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)")[](#atomic_agents.agents.atomic_agent.AtomicAgent.disable_hooks "Link to this definition") Disable hook processing. *property* hooks\_enabled*: [bool](https://docs.python.org/3/library/functions.html#bool "(in Python v3.13)")*[](#atomic_agents.agents.atomic_agent.AtomicAgent.hooks_enabled "Link to this definition") Check if hooks are enabled. ### Context[](#context "Link to this heading") #### Agent History[](#agent-history "Link to this heading") The `ChatHistory` class manages conversation history and state for AI agents: ``` from atomic_agents.context import ChatHistory from atomic_agents import BaseIOSchema # Initialize history with optional max messages history = ChatHistory(max_messages=10) # Add messages history.add_message( role="user", content=BaseIOSchema(...) ) # Initialize a new turn history.initialize_turn() turn_id = history.get_current_turn_id() # Access history history = history.get_history() # Manage history history.get_message_count() # Get number of messages history.delete_turn_id(turn_id) # Delete messages by turn # Persistence serialized = history.dump() # Save to string history.load(serialized) # Load from string # Create copy new_history = history.copy() ``` Key features: * Message history management with role-based messages * Turn-based conversation tracking * Support for multimodal content (images, etc.) 
* Serialization and persistence * History size management * Deep copy functionality ##### Message Structure[](#message-structure "Link to this heading") Messages in history are structured as: ``` class Message(BaseModel): role: str # e.g., 'user', 'assistant', 'system' content: BaseIOSchema # Message content following schema turn_id: Optional[str] # Unique ID for grouping messages ``` ##### Multimodal Support[](#multimodal-support "Link to this heading") The history system automatically handles multimodal content: ``` # For content with images history = history.get_history() for message in history: if isinstance(message.content, list): text_content = message.content[0] # JSON string images = message.content[1:] # List of images ``` #### System Prompt Generator[](#system-prompt-generator "Link to this heading") The `SystemPromptGenerator` creates structured system prompts for AI agents: ``` from atomic_agents.context import ( SystemPromptGenerator, BaseDynamicContextProvider ) # Create generator with static content generator = SystemPromptGenerator( background=[ "You are a helpful AI assistant.", "You specialize in technical support." ], steps=[ "1. Understand the user's request", "2. Analyze available information", "3. Provide clear solutions" ], output_instructions=[ "Use clear, concise language", "Include step-by-step instructions", "Cite relevant documentation" ] ) # Generate prompt prompt = generator.generate_prompt() ``` ##### Dynamic Context Providers[](#dynamic-context-providers "Link to this heading") Context providers inject dynamic information into prompts: ``` from dataclasses import dataclass from typing import List @dataclass class SearchResult: content: str metadata: dict class SearchResultsProvider(BaseDynamicContextProvider): def __init__(self, title: str): super().__init__(title=title) self.results: List[SearchResult] = [] def get_info(self) -> str: """Format search results for the prompt""" if not self.results: return "No search results available." return "\n\n".join([ f"Result {idx}:\nMetadata: {result.metadata}\nContent:\n{result.content}\n{'-' * 80}" for idx, result in enumerate(self.results, 1) ]) # Use with generator generator = SystemPromptGenerator( background=["You answer based on search results."], context_providers={ "search_results": SearchResultsProvider("Search Results") } ) ``` The generated prompt will include: 1. Background information 2. Processing steps (if provided) 3. Dynamic context from providers 4. 
Output instructions #### Base Components[](#base-components "Link to this heading") ##### BaseIOSchema[](#baseioschema "Link to this heading") Base class for all input/output schemas: ``` from atomic_agents import BaseIOSchema from pydantic import Field class CustomSchema(BaseIOSchema): """Schema description (required)""" field: str = Field(..., description="Field description") ``` Key features: * Requires docstring description * Rich representation support * Automatic schema validation * JSON serialization ##### BaseTool[](#basetool "Link to this heading") Base class for creating tools: ``` from atomic_agents import BaseTool, BaseToolConfig from pydantic import Field class MyToolConfig(BaseToolConfig): """Tool configuration""" api_key: str = Field( default=os.getenv("API_KEY"), description="API key for the service" ) class MyTool(BaseTool[MyToolInputSchema, MyToolOutputSchema]): """Tool implementation""" input_schema = MyToolInputSchema output_schema = MyToolOutputSchema def __init__(self, config: MyToolConfig = MyToolConfig()): super().__init__(config) self.api_key = config.api_key def run(self, params: MyToolInputSchema) -> MyToolOutputSchema: # Implement tool logic pass ``` Key features: * Structured input/output schemas * Configuration management * Title and description overrides * Error handling For full API details: *class* atomic\_agents.context.chat\_history.Message(*\**, *role: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")*, *content: [BaseIOSchema](index.html#atomic_agents.base.base_io_schema.BaseIOSchema "atomic_agents.base.base_io_schema.BaseIOSchema")*, *turn\_id: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)") | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)") = None*)[](#atomic_agents.context.chat_history.Message "Link to this definition") Bases: [`BaseModel`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel "(in Pydantic v0.0.0)") Represents a message in the chat history. role[](#atomic_agents.context.chat_history.Message.role "Link to this definition") The role of the message sender (e.g., ‘user’, ‘system’, ‘tool’). Type: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)") content[](#atomic_agents.context.chat_history.Message.content "Link to this definition") The content of the message. Type: [BaseIOSchema](index.html#BaseIOSchema "BaseIOSchema") turn\_id[](#atomic_agents.context.chat_history.Message.turn_id "Link to this definition") Unique identifier for the turn this message belongs to. Type: Optional[[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")] role*: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")*[](#id0 "Link to this definition") content*: [BaseIOSchema](index.html#atomic_agents.base.base_io_schema.BaseIOSchema "atomic_agents.base.base_io_schema.BaseIOSchema")*[](#id1 "Link to this definition") turn\_id*: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)") | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)")*[](#id2 "Link to this definition") model\_config*: ClassVar[ConfigDict]* *= {}*[](#atomic_agents.context.chat_history.Message.model_config "Link to this definition") Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. 
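`Message` objects are normally created for you by `ChatHistory.add_message`; constructing one directly is rarely needed, but a minimal sketch with the documented fields looks like this (the import path follows the module name above):

```
from atomic_agents import BasicChatInputSchema
from atomic_agents.context.chat_history import Message

# A role plus schema-typed content; turn_id is optional and usually assigned by ChatHistory.
message = Message(
    role="user",
    content=BasicChatInputSchema(chat_message="Hello!"),
    turn_id=None,
)
print(message.role, message.content.chat_message)
```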
*class* atomic\_agents.context.chat\_history.ChatHistory(*max\_messages: [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.13)") | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)") = None*)[](#atomic_agents.context.chat_history.ChatHistory "Link to this definition") Bases: [`object`](https://docs.python.org/3/library/functions.html#object "(in Python v3.13)") Manages the chat history for an AI agent. history[](#atomic_agents.context.chat_history.ChatHistory.history "Link to this definition") A list of messages representing the chat history. Type: List[[Message](index.html#atomic_agents.context.chat_history.Message "atomic_agents.context.chat_history.Message")] max\_messages[](#atomic_agents.context.chat_history.ChatHistory.max_messages "Link to this definition") Maximum number of messages to keep in history. Type: Optional[[int](https://docs.python.org/3/library/functions.html#int "(in Python v3.13)")] current\_turn\_id[](#atomic_agents.context.chat_history.ChatHistory.current_turn_id "Link to this definition") The ID of the current turn. Type: Optional[[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")] \_\_init\_\_(*max\_messages: [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.13)") | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)") = None*)[](#atomic_agents.context.chat_history.ChatHistory.__init__ "Link to this definition") Initializes the ChatHistory with an empty history and optional constraints. Parameters: **max\_messages** (*Optional**[*[*int*](https://docs.python.org/3/library/functions.html#int "(in Python v3.13)")*]*) – Maximum number of messages to keep in history. When exceeded, oldest messages are removed first. initialize\_turn() → [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)")[](#atomic_agents.context.chat_history.ChatHistory.initialize_turn "Link to this definition") Initializes a new turn by generating a random turn ID. add\_message(*role: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")*, *content: [BaseIOSchema](index.html#atomic_agents.base.base_io_schema.BaseIOSchema "atomic_agents.base.base_io_schema.BaseIOSchema")*) → [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)")[](#atomic_agents.context.chat_history.ChatHistory.add_message "Link to this definition") Adds a message to the chat history and manages overflow. Parameters: * **role** ([*str*](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")) – The role of the message sender. * **content** ([*BaseIOSchema*](index.html#BaseIOSchema "BaseIOSchema")) – The content of the message. get\_history() → [List](https://docs.python.org/3/library/typing.html#typing.List "(in Python v3.13)")[[Dict](https://docs.python.org/3/library/typing.html#typing.Dict "(in Python v3.13)")][](#atomic_agents.context.chat_history.ChatHistory.get_history "Link to this definition") Retrieves the chat history, handling both regular and multimodal content. Returns: The list of messages in the chat history as dictionaries. Each dictionary has ‘role’ and ‘content’ keys, where ‘content’ contains either a single JSON string or a mixed array of JSON and multimodal objects. Return type: List[Dict] Note This method supports multimodal content by keeping multimodal objects separate while generating cohesive JSON for text-based fields. 
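As the note above describes, `get_history` returns plain dictionaries whose `content` is either a single JSON string or a mixed list. A hedged sketch of consuming that structure (the helper name is illustrative):

```
from atomic_agents.context import ChatHistory

def print_history(history: ChatHistory) -> None:
    # Each entry is a dict with 'role' and 'content' keys.
    for entry in history.get_history():
        content = entry["content"]
        if isinstance(content, list):
            # Multimodal: the first element is the JSON text, the rest are media objects.
            text, media = content[0], content[1:]
            print(entry["role"], text, f"(+{len(media)} multimodal object(s))")
        else:
            print(entry["role"], content)
```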
copy() → [ChatHistory](index.html#atomic_agents.context.chat_history.ChatHistory "atomic_agents.context.chat_history.ChatHistory")[](#atomic_agents.context.chat_history.ChatHistory.copy "Link to this definition") Creates a copy of the chat history. Returns: A copy of the chat history. Return type: [ChatHistory](index.html#atomic_agents.context.chat_history.ChatHistory "atomic_agents.context.chat_history.ChatHistory")

get\_current\_turn\_id() → [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)") | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)")[](#atomic_agents.context.chat_history.ChatHistory.get_current_turn_id "Link to this definition") Returns the current turn ID. Returns: The current turn ID, or None if not set. Return type: Optional[[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")]

delete\_turn\_id(*turn\_id: [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.13)")*)[](#atomic_agents.context.chat_history.ChatHistory.delete_turn_id "Link to this definition") Deletes messages from the history by their turn ID. Parameters: **turn\_id** ([*int*](https://docs.python.org/3/library/functions.html#int "(in Python v3.13)")) – The turn ID of the messages to delete. Returns: A success message with the deleted turn ID. Return type: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)") Raises: [**ValueError**](https://docs.python.org/3/library/exceptions.html#ValueError "(in Python v3.13)") – If the specified turn ID is not found in the history.

get\_message\_count() → [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.13)")[](#atomic_agents.context.chat_history.ChatHistory.get_message_count "Link to this definition") Returns the number of messages in the chat history. Returns: The number of messages. Return type: [int](https://docs.python.org/3/library/functions.html#int "(in Python v3.13)")

dump() → [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")[](#atomic_agents.context.chat_history.ChatHistory.dump "Link to this definition") Serializes the entire ChatHistory instance to a JSON string. Returns: A JSON string representation of the ChatHistory. Return type: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")

load(*serialized\_data: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")*) → [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)")[](#atomic_agents.context.chat_history.ChatHistory.load "Link to this definition") Deserializes a JSON string and loads it into the ChatHistory instance. Parameters: **serialized\_data** ([*str*](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")) – A JSON string representation of the ChatHistory. Raises: [**ValueError**](https://docs.python.org/3/library/exceptions.html#ValueError "(in Python v3.13)") – If the serialized data is invalid or cannot be deserialized.
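A short usage sketch of the ChatHistory API documented above; the NoteSchema model and its field are illustrative, everything else follows the methods listed in the reference entries.

```
from pydantic import Field
from atomic_agents import BaseIOSchema
from atomic_agents.context import ChatHistory

class NoteSchema(BaseIOSchema):
    """A plain text chat message (illustrative only)."""
    chat_message: str = Field(..., description="Message text")

# Keep at most 10 messages; older ones are dropped when the limit is exceeded.
history = ChatHistory(max_messages=10)
history.add_message("user", NoteSchema(chat_message="What is Atomic Agents?"))
history.add_message("assistant", NoteSchema(chat_message="A modular framework for building AI agents."))

print(history.get_message_count())  # 2
print(history.get_history())        # list of role/content dictionaries

# Persist and restore a session between runs.
serialized = history.dump()
restored = ChatHistory()
restored.load(serialized)
assert restored.get_message_count() == history.get_message_count()
```

And, ahead of the reference entries that follow, a minimal sketch of SystemPromptGenerator with a custom BaseDynamicContextProvider. The CurrentDateProvider class and its content are illustrative, and both classes are assumed to be importable from atomic_agents.context, as the context module overview later in this document suggests.

```
from datetime import date
from atomic_agents.context import SystemPromptGenerator, BaseDynamicContextProvider

class CurrentDateProvider(BaseDynamicContextProvider):
    """Illustrative provider that injects today's date into the system prompt."""
    def get_info(self) -> str:
        return f"The current date is {date.today().isoformat()}."

generator = SystemPromptGenerator(
    background=["You are a concise research assistant."],
    steps=["Analyze the request.", "Answer using the available context."],
    output_instructions=["Keep answers under three sentences."],
    context_providers={"current_date": CurrentDateProvider(title="Current Date")},
)
print(generator.generate_prompt())
```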
*class* atomic\_agents.context.system\_prompt\_generator.BaseDynamicContextProvider(*title: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")*)[](#atomic_agents.context.system_prompt_generator.BaseDynamicContextProvider "Link to this definition") Bases: [`ABC`](https://docs.python.org/3/library/abc.html#abc.ABC "(in Python v3.13)") \_\_init\_\_(*title: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")*)[](#atomic_agents.context.system_prompt_generator.BaseDynamicContextProvider.__init__ "Link to this definition") *abstract* get\_info() → [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")[](#atomic_agents.context.system_prompt_generator.BaseDynamicContextProvider.get_info "Link to this definition") *class* atomic\_agents.context.system\_prompt\_generator.SystemPromptGenerator(*background: [List](https://docs.python.org/3/library/typing.html#typing.List "(in Python v3.13)")[[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")] | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)") = None*, *steps: [List](https://docs.python.org/3/library/typing.html#typing.List "(in Python v3.13)")[[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")] | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)") = None*, *output\_instructions: [List](https://docs.python.org/3/library/typing.html#typing.List "(in Python v3.13)")[[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")] | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)") = None*, *context\_providers: [Dict](https://docs.python.org/3/library/typing.html#typing.Dict "(in Python v3.13)")[[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)"), [BaseDynamicContextProvider](index.html#atomic_agents.context.system_prompt_generator.BaseDynamicContextProvider "atomic_agents.context.system_prompt_generator.BaseDynamicContextProvider")] | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)") = None*)[](#atomic_agents.context.system_prompt_generator.SystemPromptGenerator "Link to this definition") Bases: [`object`](https://docs.python.org/3/library/functions.html#object "(in Python v3.13)") \_\_init\_\_(*background: [List](https://docs.python.org/3/library/typing.html#typing.List "(in Python v3.13)")[[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")] | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)") = None*, *steps: [List](https://docs.python.org/3/library/typing.html#typing.List "(in Python v3.13)")[[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")] | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)") = None*, *output\_instructions: [List](https://docs.python.org/3/library/typing.html#typing.List "(in Python v3.13)")[[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")] | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)") = None*, *context\_providers: [Dict](https://docs.python.org/3/library/typing.html#typing.Dict "(in Python v3.13)")[[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)"), [BaseDynamicContextProvider](index.html#atomic_agents.context.system_prompt_generator.BaseDynamicContextProvider 
"atomic_agents.context.system_prompt_generator.BaseDynamicContextProvider")] | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)") = None*)[](#atomic_agents.context.system_prompt_generator.SystemPromptGenerator.__init__ "Link to this definition") generate\_prompt() → [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")[](#atomic_agents.context.system_prompt_generator.SystemPromptGenerator.generate_prompt "Link to this definition") *class* atomic\_agents.base.base\_io\_schema.BaseIOSchema[](#atomic_agents.base.base_io_schema.BaseIOSchema "Link to this definition") Bases: [`BaseModel`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel "(in Pydantic v0.0.0)") Base schema for input/output in the Atomic Agents framework. *classmethod* model\_json\_schema(*\*args*, *\*\*kwargs*)[](#atomic_agents.base.base_io_schema.BaseIOSchema.model_json_schema "Link to this definition") Generates a JSON schema for a model class. Parameters: * **by\_alias** – Whether to use attribute aliases or not. * **ref\_template** – The reference template. * **schema\_generator** – To override the logic used to generate the JSON schema, as a subclass of GenerateJsonSchema with your desired modifications * **mode** – The mode in which to generate the schema. Returns: The JSON schema for the given model class. model\_config*: ClassVar[ConfigDict]* *= {}*[](#atomic_agents.base.base_io_schema.BaseIOSchema.model_config "Link to this definition") Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. *class* atomic\_agents.base.base\_tool.BaseToolConfig(*\**, *title: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)") | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)") = None*, *description: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)") | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)") = None*)[](#atomic_agents.base.base_tool.BaseToolConfig "Link to this definition") Bases: [`BaseModel`](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel "(in Pydantic v0.0.0)") Configuration for a tool. title[](#atomic_agents.base.base_tool.BaseToolConfig.title "Link to this definition") Overrides the default title of the tool. Type: Optional[[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")] description[](#atomic_agents.base.base_tool.BaseToolConfig.description "Link to this definition") Overrides the default description of the tool. Type: Optional[[str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")] title*: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)") | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)")*[](#id3 "Link to this definition") description*: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)") | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)")*[](#id4 "Link to this definition") model\_config*: ClassVar[ConfigDict]* *= {}*[](#atomic_agents.base.base_tool.BaseToolConfig.model_config "Link to this definition") Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict]. 
*class* atomic\_agents.base.base\_tool.BaseTool(*config: [BaseToolConfig](index.html#atomic_agents.base.base_tool.BaseToolConfig "atomic_agents.base.base_tool.BaseToolConfig") = BaseToolConfig(title=None, description=None)*)[](#atomic_agents.base.base_tool.BaseTool "Link to this definition") Bases: [`ABC`](https://docs.python.org/3/library/abc.html#abc.ABC "(in Python v3.13)"), [`Generic`](https://docs.python.org/3/library/typing.html#typing.Generic "(in Python v3.13)") Base class for tools within the Atomic Agents framework. Tools enable agents to perform specific tasks by providing a standardized interface for input and output. Each tool is defined with specific input and output schemas that enforce type safety and provide documentation. Type Parameters: InputSchema: Schema defining the input data, must be a subclass of BaseIOSchema. OutputSchema: Schema defining the output data, must be a subclass of BaseIOSchema. config[](#atomic_agents.base.base_tool.BaseTool.config "Link to this definition") Configuration for the tool, including optional title and description overrides. Type: [BaseToolConfig](index.html#atomic_agents.base.base_tool.BaseToolConfig "atomic_agents.base.base_tool.BaseToolConfig") input\_schema[](#atomic_agents.base.base_tool.BaseTool.input_schema "Link to this definition") Schema class defining the input data (derived from generic type parameter). Type: Type[InputSchema] output\_schema[](#atomic_agents.base.base_tool.BaseTool.output_schema "Link to this definition") Schema class defining the output data (derived from generic type parameter). Type: Type[OutputSchema] tool\_name[](#atomic_agents.base.base_tool.BaseTool.tool_name "Link to this definition") The name of the tool, derived from the input schema’s title or overridden by the config. Type: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)") tool\_description[](#atomic_agents.base.base_tool.BaseTool.tool_description "Link to this definition") Description of the tool, derived from the input schema’s description or overridden by the config. Type: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)") \_\_init\_\_(*config: [BaseToolConfig](index.html#atomic_agents.base.base_tool.BaseToolConfig "atomic_agents.base.base_tool.BaseToolConfig") = BaseToolConfig(title=None, description=None)*)[](#atomic_agents.base.base_tool.BaseTool.__init__ "Link to this definition") Initializes the BaseTool with an optional configuration override. Parameters: **config** ([*BaseToolConfig*](index.html#atomic_agents.base.base_tool.BaseToolConfig "atomic_agents.base.base_tool.BaseToolConfig")*,* *optional*) – Configuration for the tool, including optional title and description overrides. *property* input\_schema*: [Type](https://docs.python.org/3/library/typing.html#typing.Type "(in Python v3.13)")*[](#id5 "Link to this definition") Returns the input schema class for the tool. Returns: The input schema class. Return type: Type[InputSchema] *property* output\_schema*: [Type](https://docs.python.org/3/library/typing.html#typing.Type "(in Python v3.13)")*[](#id6 "Link to this definition") Returns the output schema class for the tool. Returns: The output schema class. Return type: Type[OutputSchema] *property* tool\_name*: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")*[](#id7 "Link to this definition") Returns the name of the tool. Returns: The name of the tool. 
Return type: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)") *property* tool\_description*: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")*[](#id8 "Link to this definition") Returns the description of the tool. Returns: The description of the tool. Return type: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)") *abstract* run(*params: InputSchema*) → OutputSchema[](#atomic_agents.base.base_tool.BaseTool.run "Link to this definition") Executes the tool with the provided parameters. Parameters: **params** (*InputSchema*) – Input parameters adhering to the input schema. Returns: Output resulting from executing the tool, adhering to the output schema. Return type: OutputSchema Raises: [**NotImplementedError**](https://docs.python.org/3/library/exceptions.html#NotImplementedError "(in Python v3.13)") – If the method is not implemented by a subclass. ### Utilities[](#utilities "Link to this heading") #### Tool Message Formatting[](#module-atomic_agents.utils.format_tool_message "Link to this heading") atomic\_agents.utils.format\_tool\_message.format\_tool\_message(*tool\_call: [Type](https://docs.python.org/3/library/typing.html#typing.Type "(in Python v3.13)")[[BaseModel](https://docs.pydantic.dev/latest/api/base_model/#pydantic.BaseModel "(in Pydantic v0.0.0)")]*, *tool\_id: [str](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)") | [None](https://docs.python.org/3/library/constants.html#None "(in Python v3.13)") = None*) → [Dict](https://docs.python.org/3/library/typing.html#typing.Dict "(in Python v3.13)")[](#atomic_agents.utils.format_tool_message.format_tool_message "Link to this definition") Formats a message for a tool call. Parameters: * **tool\_call** (*Type**[**BaseModel**]*) – The Pydantic model instance representing the tool call. * **tool\_id** ([*str*](https://docs.python.org/3/library/stdtypes.html#str "(in Python v3.13)")*,* *optional*) – The unique identifier for the tool call. If not provided, a random UUID will be generated. Returns: A formatted message dictionary for the tool call. Return type: Dict ### Core Components[](#core-components "Link to this heading") The Atomic Agents framework is built around several core components that work together to provide a flexible and powerful system for building AI agents. 
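Before the component-by-component overview, here is a quick, hedged illustration of the format_tool_message utility documented in the Utilities section above. The SearchToolCall model is hypothetical, the function is imported from its documented module path, and the call passes a model instance as the docstring describes.

```
from pydantic import BaseModel, Field
from atomic_agents.utils.format_tool_message import format_tool_message

class SearchToolCall(BaseModel):
    """Arguments for a hypothetical search tool call."""
    query: str = Field(..., description="Search query")

# Build a tool-call message dict; a random UUID is generated when tool_id is omitted.
message = format_tool_message(SearchToolCall(query="atomic agents documentation"))
print(message)
```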
#### Agents[](#agents "Link to this heading") The agents module provides the base classes for creating AI agents: * `AtomicAgent`: The foundational agent class that handles interactions with LLMs * `AgentConfig`: Configuration class for customizing agent behavior * `BasicChatInputSchema`: Standard input schema for agent interactions * `BasicChatOutputSchema`: Standard output schema for agent responses [Learn more about agents](#document-api/agents) #### Context Components[](#context-components "Link to this heading") The context module contains essential building blocks: * `ChatHistory`: Manages conversation history and state with support for: + Message history with role-based messages + Turn-based conversation tracking + Multimodal content + Serialization and persistence + History size management * `SystemPromptGenerator`: Creates structured system prompts with: + Background information + Processing steps + Output instructions + Dynamic context through context providers * `BaseDynamicContextProvider`: Base class for creating custom context providers that can inject dynamic information into system prompts [Learn more about context components](#document-api/context) #### Utils[](#utils "Link to this heading") The utils module provides helper functions and utilities: * Message formatting * Tool response handling * Schema validation * Error handling [Learn more about utilities](#document-api/utils) ### Getting Started[](#getting-started "Link to this heading") For practical examples and guides on using these components, see: * [Quickstart Guide](#document-guides/quickstart) * [Tools Guide](#document-guides/tools) Example Projects[](#example-projects "Link to this heading") ------------------------------------------------------------- This section contains detailed examples of using Atomic Agents in various scenarios. 
Note All examples are available in optimized formats for AI assistants: * **`Examples with documentation`** - All examples with source code and READMEs * **`Full framework package`** - Complete documentation, source, and examples ### Quickstart Examples[](#quickstart-examples "Link to this heading") Simple examples to get started with the framework: * Basic chatbot with history * Custom chatbot with personality * Streaming responses * Custom input/output schemas * Multiple provider support 📂 **[View on GitHub](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples/quickstart)** - Browse the complete source code and run the examples ### Hooks System[](#hooks-system "Link to this heading") Comprehensive monitoring and error handling with the AtomicAgent hook system: * Parse error handling and validation * API call monitoring and metrics * Response time tracking and performance analysis * Intelligent retry mechanisms * Production-ready error isolation * Real-time performance dashboards 📂 **[View on GitHub](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples/hooks-example)** - Browse the complete source code and run the examples ### Basic Multimodal[](#basic-multimodal "Link to this heading") Examples of working with images and text: * Image analysis with text descriptions * Image-based question answering * Visual content generation * Multi-image comparisons 📂 **[View on GitHub](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples/basic-multimodal)** - Browse the complete source code and run the examples ### RAG Chatbot[](#rag-chatbot "Link to this heading") Build context-aware chatbots with retrieval-augmented generation: * Document indexing and embedding * Semantic search integration * Context-aware responses * Source attribution * Follow-up suggestions 📂 **[View on GitHub](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples/rag-chatbot)** - Browse the complete source code and run the examples ### Web Search Agent[](#web-search-agent "Link to this heading") Create agents that can search and analyze web content: * Web search integration * Content extraction * Result synthesis * Multi-source research * Citation tracking 📂 **[View on GitHub](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples/web-search-agent)** - Browse the complete source code and run the examples ### Deep Research[](#deep-research "Link to this heading") Perform comprehensive research tasks: * Multi-step research workflows * Information synthesis * Source validation * Structured output generation * Citation management 📂 **[View on GitHub](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples/deep-research)** - Browse the complete source code and run the examples ### YouTube Summarizer[](#youtube-summarizer "Link to this heading") Extract and analyze information from videos: * Transcript extraction * Content summarization * Key point identification * Timestamp linking * Chapter generation 📂 **[View on GitHub](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples/youtube-summarizer)** - Browse the complete source code and run the examples ### YouTube to Recipe[](#youtube-to-recipe "Link to this heading") Convert cooking videos into structured recipes: * Video analysis * Recipe extraction * Ingredient parsing * Step-by-step instructions * Time and temperature conversion 📂 **[View on GitHub](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples/youtube-to-recipe)** 
- Browse the complete source code and run the examples

### Orchestration Agent[](#orchestration-agent "Link to this heading")

Coordinate multiple agents for complex tasks:

* Agent coordination
* Task decomposition
* Progress tracking
* Error handling
* Result aggregation

📂 **[View on GitHub](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples/orchestration-agent)** - Browse the complete source code and run the examples

### MCP Agent[](#mcp-agent "Link to this heading")

Build intelligent agents using the Model Context Protocol:

* Server implementation with multiple transport methods
* Dynamic tool discovery and registration
* Natural language query processing
* Stateful conversation handling
* Extensible tool architecture

[View MCP Agent Documentation](#document-examples/mcp_agent)

📂 **[View on GitHub](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples/mcp-agent)** - Browse the complete source code and run the examples

Contributing Guide[](#contributing-guide "Link to this heading")
-----------------------------------------------------------------

Thank you for your interest in contributing to Atomic Agents! This guide will help you get started with contributing to the project.

### Ways to Contribute[](#ways-to-contribute "Link to this heading")

There are many ways to contribute to Atomic Agents:

1. **Report Bugs**: Submit bug reports on our [Issue Tracker](https://github.com/BrainBlend-AI/atomic-agents/issues)
2. **Suggest Features**: Share your ideas for new features or improvements
3. **Improve Documentation**: Help us make the documentation clearer and more comprehensive
4. **Submit Code**: Fix bugs, add features, or create new tools
5. **Share Examples**: Create example projects that showcase different use cases
6. **Write Tests**: Help improve our test coverage and reliability

### Development Setup[](#development-setup "Link to this heading")

1. Fork and clone the repository:

```
git clone https://github.com/YOUR_USERNAME/atomic-agents.git
cd atomic-agents
```

2. Install dependencies:

```
poetry install
```

3. Set up pre-commit hooks:

```
pre-commit install
```

4. Create a new branch:

```
git checkout -b feature/your-feature-name
```

### Code Style[](#code-style "Link to this heading")

We follow these coding standards:

* Use [Black](https://black.readthedocs.io/) for code formatting
* Follow [PEP 8](https://www.python.org/dev/peps/pep-0008/) style guide
* Write docstrings in [Google style](https://google.github.io/styleguide/pyguide.html#38-comments-and-docstrings)
* Add type hints to function signatures
* Keep functions focused and modular
* Write clear commit messages

### Creating Tools[](#creating-tools "Link to this heading")

When creating new tools:

1. Use the tool template:

```
atomic-assembler create-tool my-tool
```

2. Implement the required interfaces:

```
from atomic_agents import BaseIOSchema, BaseTool

class MyToolInputs(BaseIOSchema):
    """Input schema for my_tool"""
    # Define input fields here

class MyToolOutputs(BaseIOSchema):
    """Output schema for my_tool"""
    # Define output fields here

class MyTool(BaseTool[MyToolInputs, MyToolOutputs]):
    """Tool description"""
    # tool_name and tool_description are derived from the input schema,
    # or overridden via BaseToolConfig(title=..., description=...)
    input_schema = MyToolInputs
    output_schema = MyToolOutputs

    def run(self, inputs: MyToolInputs) -> MyToolOutputs:
        # Implement tool logic
        pass
```

3. Add comprehensive tests:

```
def test_my_tool():
    tool = MyTool()
    inputs = MyToolInputs(...)
    result = tool.run(inputs)
    assert isinstance(result, MyToolOutputs)
    # Add more assertions
```

4.
Document your tool: * Add a README.md with usage examples * Include configuration instructions * Document any dependencies * Explain error handling ### Testing[](#testing "Link to this heading") Run tests with pytest: ``` poetry run pytest ``` Include tests for: * Normal operation * Edge cases * Error conditions * Async functionality * Integration with other components ### Documentation[](#documentation "Link to this heading") When adding documentation: 1. Follow the existing structure 2. Include code examples 3. Add type hints and docstrings 4. Update relevant guides 5. Build and verify locally: ``` cd docs poetry run sphinx-build -b html . _build/html ``` ### Submitting Changes[](#submitting-changes "Link to this heading") 1. Commit your changes: ``` git add . git commit -m "feat: add new feature" ``` 2. Push to your fork: ``` git push origin feature/your-feature-name ``` 3. Create a Pull Request: * Describe your changes * Reference any related issues * Include test results * Add documentation updates ### Getting Help[](#getting-help "Link to this heading") If you need help: * Join our [Reddit community](https://www.reddit.com/r/AtomicAgents/) * Check the [documentation](https://atomic-agents.readthedocs.io/) * Ask questions on [GitHub Discussions](https://github.com/BrainBlend-AI/atomic-agents/discussions) ### Code of Conduct[](#code-of-conduct "Link to this heading") Please note that this project is released with a Code of Conduct. By participating in this project you agree to abide by its terms. You can find the full text in our [GitHub repository](https://github.com/BrainBlend-AI/atomic-agents/blob/main/CODE_OF_CONDUCT.md). A Lightweight and Modular Framework for Building AI Agents[](#a-lightweight-and-modular-framework-for-building-ai-agents "Link to this heading") ================================================================================================================================================= ![Atomic Agents](_images/logo.png) AI Assistant Resources 📥 **Download Documentation for AI Assistants and LLMs** Choose the resource that best fits your needs: * **`📚 Full Package`** - Complete documentation, source code, and examples in one file * **`📖 Documentation Only`** - API documentation, guides, and references * **`💻 Source Code Only`** - Complete atomic-agents framework source code * **`🎯 Examples Only`** - All example implementations with READMEs All files are optimized for AI assistants and Large Language Models, with clear structure and formatting for easy parsing. The Atomic Agents framework is designed around the concept of atomicity to be an extremely lightweight and modular framework for building Agentic AI pipelines and applications without sacrificing developer experience and maintainability. The framework provides a set of tools and agents that can be combined to create powerful applications. It is built on top of [Instructor](https://github.com/jxnl/instructor) and leverages the power of [Pydantic](https://docs.pydantic.dev/latest/) for data and schema validation and serialization. All logic and control flows are written in Python, enabling developers to apply familiar best practices and workflows from traditional software development without compromising flexibility or clarity. 
Key Features[](#key-features "Link to this heading") ----------------------------------------------------- * **Modularity**: Build AI applications by combining small, reusable components * **Predictability**: Define clear input and output schemas using Pydantic * **Extensibility**: Easily swap out components or integrate new ones * **Control**: Fine-tune each part of the system individually * **Provider Agnostic**: Works with various LLM providers through Instructor * **Built for Production**: Robust error handling and async support Installation[](#installation "Link to this heading") ----------------------------------------------------- You can install Atomic Agents using pip: ``` pip install atomic-agents ``` Or using Poetry (recommended): ``` poetry add atomic-agents ``` Make sure you also install the provider you want to use. For example, to use OpenAI and Groq: ``` pip install openai groq ``` This also installs the CLI *Atomic Assembler*, which can be used to download Tools (and soon also Agents and Pipelines). Note The framework supports multiple providers through Instructor, including **OpenAI**, **Anthropic**, **Groq**, **Ollama** (local models), **Gemini**, and more! For a full list of all supported providers and their setup instructions, have a look at the [Instructor Integrations documentation](https://python.useinstructor.com/integrations/). Quick Example[](#quick-example "Link to this heading") ------------------------------------------------------- Here’s a glimpse of how easy it is to create an agent: ``` import instructor import openai from atomic_agents.context import ChatHistory from atomic_agents import AtomicAgent, AgentConfig, BasicChatInputSchema, BasicChatOutputSchema # Set up your API key (either in environment or pass directly) # os.environ["OPENAI_API_KEY"] = "your-api-key" # or pass it to the client: openai.OpenAI(api_key="your-api-key") # Initialize agent with history history = ChatHistory() # Set up client with your preferred provider client = instructor.from_openai(openai.OpenAI()) # Pass your API key here if not in environment # Create an agent agent = AtomicAgent[BasicChatInputSchema, BasicChatOutputSchema]( config=AgentConfig( client=client, model="gpt-4o-mini", # Use your provider's model history=history ) ) # Interact with your agent (using the agent's input schema) response = agent.run(agent.input_schema(chat_message="Tell me about quantum computing")) # Or more explicitly: response = agent.run( BasicChatInputSchema(chat_message="Tell me about quantum computing") ) print(response) ``` Example Projects[](#example-projects "Link to this heading") ------------------------------------------------------------- Check out our example projects in our [GitHub repository](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples): * [Quickstart Examples](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples/quickstart): Simple examples to get started * [Hooks System](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples/hooks-example): Comprehensive monitoring, error handling, and performance metrics * [Basic Multimodal](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples/basic-multimodal): Analyze images with text * [RAG Chatbot](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples/rag-chatbot): Build context-aware chatbots * [Web Search Agent](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples/web-search-agent): Create agents that perform web 
searches * [Deep Research](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples/deep-research): Perform deep research tasks * [YouTube Summarizer](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples/youtube-summarizer): Extract knowledge from videos * [YouTube to Recipe](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples/youtube-to-recipe): Convert cooking videos into structured recipes * [Orchestration Agent](https://github.com/BrainBlend-AI/atomic-agents/tree/main/atomic-examples/orchestration-agent): Coordinate multiple agents for complex tasks Community & Support[](#community-support "Link to this heading") ----------------------------------------------------------------- * [GitHub Repository](https://github.com/BrainBlend-AI/atomic-agents) * [Issue Tracker](https://github.com/BrainBlend-AI/atomic-agents/issues) * [Reddit Community](https://www.reddit.com/r/AtomicAgents/) ================================================================================ END OF DOCUMENT ================================================================================