Agents

Schema Hierarchy

The Atomic Agents framework uses Pydantic for schema validation and serialization. All input and output schemas follow this inheritance pattern:

pydantic.BaseModel
    └── BaseIOSchema
        ├── BaseAgentInputSchema
        └── BaseAgentOutputSchema

BaseIOSchema

The base schema class that all agent input/output schemas inherit from.

class BaseIOSchema

Base schema class for all agent input/output schemas. Inherits from pydantic.BaseModel.

All agent schemas must inherit from this class to ensure proper serialization and validation.

Inheritance: pydantic.BaseModel

BaseAgentInputSchema

The default input schema for agents.

class BaseAgentInputSchema

Default input schema for agent interactions.

Inheritance: BaseIOSchema

chat_message: str

The message to send to the agent.

Example:
>>> input_schema = BaseAgentInputSchema(chat_message="Hello, agent!")
>>> agent.run(input_schema)

BaseAgentOutputSchema

The default output schema for agents.

class BaseAgentOutputSchema

Default output schema for agent responses.

Inheritance: BaseIOSchema

chat_message: str

The response message from the agent.

Example:
>>> response = agent.run(input_schema)
>>> print(response.chat_message)

Creating Custom Schemas

You can create custom input/output schemas by inheriting from BaseIOSchema:

from pydantic import Field
from typing import List, Optional
from atomic_agents.lib.base.base_io_schema import BaseIOSchema

class CustomInputSchema(BaseIOSchema):
    """Input schema with an optional context field."""
    chat_message: str = Field(..., description="User's message")
    context: Optional[str] = Field(None, description="Optional context for the agent")

class CustomOutputSchema(BaseIOSchema):
    """Output schema with follow-up questions and a confidence score."""
    chat_message: str = Field(..., description="Agent's response")
    follow_up_questions: List[str] = Field(
        default_factory=list,
        description="Suggested follow-up questions"
    )
    confidence: float = Field(
        ...,
        description="Confidence score for the response",
        ge=0.0,
        le=1.0
    )
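
Because these schemas are ordinary Pydantic models, field constraints such as ge/le are enforced at construction time. A minimal sketch:

# Valid instance
output = CustomOutputSchema(
    chat_message="The answer is 42.",
    follow_up_questions=["Why 42?"],
    confidence=0.9,
)

# An out-of-range confidence raises pydantic.ValidationError (ge=0.0, le=1.0)
try:
    CustomOutputSchema(chat_message="Oops.", confidence=1.5)
except Exception as exc:  # pydantic.ValidationError
    print(exc)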

Base Agent

The BaseAgent class is the foundation for building AI agents in the Atomic Agents framework. It handles chat interactions, memory management, system prompts, and responses from language models.

import instructor
from openai import OpenAI

from atomic_agents.agents.base_agent import BaseAgent, BaseAgentConfig, BaseAgentInputSchema
from atomic_agents.lib.components.agent_memory import AgentMemory
from atomic_agents.lib.components.system_prompt_generator import SystemPromptGenerator

# Create agent with basic configuration
agent = BaseAgent(
    config=BaseAgentConfig(
        client=instructor.from_openai(OpenAI()),
        model="gpt-4-turbo-preview",
        memory=AgentMemory(),
        system_prompt_generator=SystemPromptGenerator()
    )
)

# Run the agent
user_input = BaseAgentInputSchema(chat_message="Hello, agent!")
response = agent.run(user_input)

# Stream responses (must run inside an async function, with an async client;
# see Streaming Support below)
async for partial_response in agent.run_async(user_input):
    print(partial_response)

Configuration

The BaseAgentConfig class provides configuration options:

class BaseAgentConfig(BaseModel):
    client: instructor.Instructor  # Client for interacting with the language model
    model: str = "gpt-4o-mini"  # Model to use
    memory: Optional[AgentMemory] = None  # Memory component
    system_prompt_generator: Optional[SystemPromptGenerator] = None  # Prompt generator
    input_schema: Optional[Type[BaseModel]] = None  # Custom input schema
    output_schema: Optional[Type[BaseModel]] = None  # Custom output schema
    temperature: Optional[float] = 0  # DEPRECATED: set via model_api_parameters
    max_tokens: Optional[int] = None  # DEPRECATED: set via model_api_parameters
    model_api_parameters: Optional[dict] = None  # Additional API parameters
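
For example, sampling parameters can be routed through model_api_parameters rather than the deprecated top-level fields (a minimal sketch; the parameter names shown are standard OpenAI chat-completion options):

import instructor
from openai import OpenAI

from atomic_agents.agents.base_agent import BaseAgentConfig
from atomic_agents.lib.components.agent_memory import AgentMemory

config = BaseAgentConfig(
    client=instructor.from_openai(OpenAI()),
    model="gpt-4o-mini",
    memory=AgentMemory(),
    model_api_parameters={"temperature": 0.2, "max_tokens": 1024},
)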

Input/Output Schemas

Default schemas for basic chat interactions:

class BaseAgentInputSchema(BaseIOSchema):
    """Input from the user to the AI agent."""
    chat_message: str = Field(
        ...,
        description="The chat message sent by the user."
    )

class BaseAgentOutputSchema(BaseIOSchema):
    """Response generated by the chat agent."""
    chat_message: str = Field(
        ...,
        description="The markdown-enabled response generated by the chat agent."
    )

Key Methods

  • run(user_input: Optional[BaseIOSchema] = None) -> BaseIOSchema: Process user input and get response

  • run_async(user_input: Optional[BaseIOSchema] = None): Stream responses asynchronously

  • get_response(response_model=None) -> Type[BaseModel]: Get direct model response

  • reset_memory(): Reset memory to initial state

  • get_context_provider(provider_name: str): Get a registered context provider

  • register_context_provider(provider_name: str, provider: SystemPromptContextProviderBase): Register a new context provider

  • unregister_context_provider(provider_name: str): Remove a context provider
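
A quick tour of the most common calls (a sketch, assuming an agent configured as in the example above):

# One-shot interaction
response = agent.run(BaseAgentInputSchema(chat_message="Summarize our conversation."))
print(response.chat_message)

# Query the model directly, optionally overriding the response schema
raw = agent.get_response()

# Restore the conversation to its initial state
agent.reset_memory()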

Context Providers

Context providers can be used to inject dynamic information into the system prompt:

from atomic_agents.lib.components.system_prompt_generator import SystemPromptContextProviderBase

class SearchResultsProvider(SystemPromptContextProviderBase):
    def __init__(self, title: str):
        super().__init__(title=title)
        self.results = []

    def get_info(self) -> str:
        return "\n\n".join([
            f"Result {idx}:\n{result}"
            for idx, result in enumerate(self.results, 1)
        ])

# Register with agent
agent.register_context_provider(
    "search_results",
    SearchResultsProvider("Search Results")
)
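
Once registered, the provider's get_info() output is injected into the system prompt on every run, so updating its state between runs changes what the model sees:

# Update the provider's state before the next run
provider = agent.get_context_provider("search_results")
provider.results = [
    "Atomic Agents is a framework for building AI agents.",
    "Context providers inject dynamic data into system prompts.",
]

# The next run's system prompt will include both results
response = agent.run(BaseAgentInputSchema(chat_message="What did the search find?"))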

Streaming Support

The agent supports streaming responses for more interactive experiences:

async def chat():
    async for partial_response in agent.run_async(user_input):
        # Handle each chunk of the response
        print(partial_response.chat_message)
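
A runnable sketch, assuming the agent is built with an async client (streaming requires instructor.from_openai(AsyncOpenAI()) rather than the synchronous client):

import asyncio

import instructor
from openai import AsyncOpenAI

from atomic_agents.agents.base_agent import BaseAgent, BaseAgentConfig, BaseAgentInputSchema

agent = BaseAgent(
    config=BaseAgentConfig(client=instructor.from_openai(AsyncOpenAI()), model="gpt-4o-mini")
)

async def chat():
    user_input = BaseAgentInputSchema(chat_message="Tell me a short story.")
    async for partial_response in agent.run_async(user_input):
        # Each chunk is a partially populated output schema
        print(partial_response.chat_message)

asyncio.run(chat())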

Memory Management

The agent automatically manages conversation history through the AgentMemory component:

# Access memory
history = agent.memory.get_history()

# Reset to initial state
agent.reset_memory()

# Save/load memory state
serialized = agent.memory.dump()
agent.memory.load(serialized)
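
A sketch of persisting memory across sessions, assuming dump() returns a JSON string and load() accepts that same string (as the dump/load pairing above suggests):

from pathlib import Path

# Persist the conversation
Path("memory.json").write_text(agent.memory.dump())

# ...later, restore it
agent.memory.load(Path("memory.json").read_text())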

Custom Schemas

You can use custom input/output schemas for structured interactions:

from pydantic import Field
from typing import List
from atomic_agents.lib.base.base_io_schema import BaseIOSchema

class CustomInput(BaseIOSchema):
    """Custom input with specific fields"""
    question: str = Field(..., description="User's question")
    context: str = Field(..., description="Additional context")

class CustomOutput(BaseIOSchema):
    """Custom output with structured data"""
    answer: str = Field(..., description="Answer to the question")
    sources: List[str] = Field(..., description="Source references")

# Create agent with custom schemas
agent = BaseAgent(
    config=BaseAgentConfig(
        client=client,
        model=model,
        input_schema=CustomInput,
        output_schema=CustomOutput
    )
)
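
Running the agent then accepts and returns the custom types:

result = agent.run(
    CustomInput(question="What is Atomic Agents?", context="Python AI frameworks")
)
print(result.answer)
for source in result.sources:
    print(f"- {source}")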

For full API details:

atomic_agents.agents.base_agent.model_from_chunks_patched(cls, json_chunks, **kwargs)

async atomic_agents.agents.base_agent.model_from_chunks_async_patched(cls, json_chunks, **kwargs)

Internal monkey-patched helpers used to rebuild partial models from streamed JSON chunks.
class atomic_agents.agents.base_agent.BaseAgentInputSchema(*, chat_message: str)

Bases: BaseIOSchema

This schema represents the input from the user to the AI agent.

chat_message: str
model_config: ClassVar[ConfigDict] = {}

Configuration for the model; should be a dictionary conforming to pydantic.ConfigDict.

class atomic_agents.agents.base_agent.BaseAgentOutputSchema(*, chat_message: str)

Bases: BaseIOSchema

This schema represents the response generated by the chat agent.

chat_message: str
model_config: ClassVar[ConfigDict] = {}

Configuration for the model; should be a dictionary conforming to pydantic.ConfigDict.

class atomic_agents.agents.base_agent.BaseAgentConfig(*, client: Instructor, model: str = 'gpt-4o-mini', memory: AgentMemory | None = None, system_prompt_generator: SystemPromptGenerator | None = None, input_schema: Type[BaseModel] | None = None, output_schema: Type[BaseModel] | None = None, temperature: float | None = 0, max_tokens: int | None = None, model_api_parameters: dict | None = None)

Bases: BaseModel

client: Instructor
model: str
memory: AgentMemory | None
system_prompt_generator: SystemPromptGenerator | None
input_schema: Type[BaseModel] | None
output_schema: Type[BaseModel] | None
model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}

Configuration for the model; should be a dictionary conforming to pydantic.ConfigDict.

temperature: float | None
max_tokens: int | None
model_api_parameters: dict | None

class atomic_agents.agents.base_agent.BaseAgent(config: BaseAgentConfig)

Bases: object

Base class for chat agents.

This class provides the core functionality for handling chat interactions, including managing memory, generating system prompts, and obtaining responses from a language model.

input_schema

Schema for the input data.

Type: Type[BaseIOSchema]

output_schema

Schema for the output data.

Type: Type[BaseIOSchema]

client

Client for interacting with the language model.

model

The model to use for generating responses.

Type: str

memory

Memory component for storing chat history.

Type: AgentMemory

system_prompt_generator

Component for generating system prompts.

Type: SystemPromptGenerator

initial_memory

Initial state of the memory.

Type: AgentMemory

temperature

Temperature for response generation, typically ranging from 0 to 1. For models such as OpenAI o3-mini that do not support temperature, you must explicitly pass None. DEPRECATED: include 'temperature' in model_api_parameters instead.

Type: float

max_tokens

Maximum number of tokens allowed in the response. DEPRECATED: include 'max_tokens' in model_api_parameters instead.

Type: int

model_api_parameters

Additional parameters passed to the API provider.

Type: dict

__init__(config: BaseAgentConfig)

Initializes the BaseAgent.

Parameters:

config (BaseAgentConfig) – Configuration for the chat agent.

input_schema

alias of BaseAgentInputSchema

output_schema

alias of BaseAgentOutputSchema

reset_memory()

Resets the memory to its initial state.

get_response(response_model=None) -> Type[BaseModel]

Obtains a response from the language model synchronously.

Parameters:

response_model (Type[BaseModel], optional) – The schema for the response data. If not set, self.output_schema is used.

Returns:

The response from the language model.

Return type: Type[BaseModel]

run(user_input: BaseIOSchema | None = None) -> BaseIOSchema

Runs the chat agent with the given user input synchronously.

Parameters:

user_input (Optional[BaseIOSchema]) – The input from the user. If not provided, skips adding to memory.

Returns:

The response from the chat agent.

Return type: BaseIOSchema

async run_async(user_input: BaseIOSchema | None = None)

Runs the chat agent with the given user input, supporting streaming output asynchronously.

Parameters:

user_input (Optional[BaseIOSchema]) – The input from the user. If not provided, skips adding to memory.

Yields:

BaseModel – Partial responses from the chat agent.

async stream_response_async(user_input: Type[BaseIOSchema] | None = None)

Deprecated method for streaming responses asynchronously. Use run_async instead.

Parameters:

user_input (Optional[Type[BaseIOSchema]]) – The input from the user. If not provided, skips adding to memory.

Yields:

BaseModel – Partial responses from the chat agent.

get_context_provider(provider_name: str) -> Type[SystemPromptContextProviderBase]

Retrieves a context provider by name.

Parameters:

provider_name (str) – The name of the context provider.

Returns:

The context provider if found.

Return type: SystemPromptContextProviderBase

Raises:

KeyError – If the context provider is not found.
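
For example, probing for a provider and registering it on a miss (a sketch reusing the SearchResultsProvider defined earlier):

try:
    provider = agent.get_context_provider("search_results")
except KeyError:
    provider = SearchResultsProvider("Search Results")
    agent.register_context_provider("search_results", provider)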

register_context_provider(provider_name: str, provider: SystemPromptContextProviderBase)

Registers a new context provider.

Parameters:

provider_name (str) – The name of the context provider.
provider (SystemPromptContextProviderBase) – The context provider instance to register.

unregister_context_provider(provider_name: str)

Unregisters an existing context provider.

Parameters:

provider_name (str) – The name of the context provider to remove.