Agents
Schema Hierarchy
The Atomic Agents framework uses Pydantic for schema validation and serialization. All input and output schemas follow this inheritance pattern:
```
pydantic.BaseModel
└── BaseIOSchema
    ├── BasicChatInputSchema
    └── BasicChatOutputSchema
```
BaseIOSchema
The base schema class that all agent input/output schemas inherit from.
- class BaseIOSchema
Base schema class for all agent input/output schemas. Inherits from pydantic.BaseModel. All agent schemas must inherit from this class to ensure proper serialization and validation.
- Inheritance: pydantic.BaseModel
BasicChatInputSchema
The default input schema for agents.
- class BasicChatInputSchema
Default input schema for agent interactions.
- Inheritance: BaseIOSchema
- Example:

```python
>>> input_schema = BasicChatInputSchema(chat_message="Hello, agent!")
>>> agent.run(input_schema)
```
BasicChatOutputSchema
The default output schema for agents.
- class BasicChatOutputSchema
Default output schema for agent responses.
- Inheritance: BaseIOSchema
- Example:

```python
>>> response = agent.run(input_schema)
>>> print(response.chat_message)
```
Creating Custom Schemas
You can create custom input/output schemas by inheriting from BaseIOSchema:
```python
from pydantic import Field
from typing import List, Optional

from atomic_agents import BaseIOSchema

class CustomInputSchema(BaseIOSchema):
    chat_message: str = Field(..., description="User's message")
    context: Optional[str] = Field(None, description="Optional context for the agent")

class CustomOutputSchema(BaseIOSchema):
    chat_message: str = Field(..., description="Agent's response")
    follow_up_questions: List[str] = Field(
        default_factory=list,
        description="Suggested follow-up questions"
    )
    confidence: float = Field(
        ...,
        description="Confidence score for the response",
        ge=0.0,
        le=1.0
    )
```
Base Agent
The AtomicAgent class is the foundation for building AI agents in the Atomic Agents framework. It handles chat interactions, history management, system prompts, and responses from language models.
```python
import instructor
from openai import OpenAI

from atomic_agents import AtomicAgent, AgentConfig
from atomic_agents.context import ChatHistory, SystemPromptGenerator

# Create agent with basic configuration
agent = AtomicAgent[BasicChatInputSchema, BasicChatOutputSchema](
    config=AgentConfig(
        client=instructor.from_openai(OpenAI()),
        model="gpt-4-turbo-preview",
        history=ChatHistory(),
        system_prompt_generator=SystemPromptGenerator()
    )
)

# Run the agent
response = agent.run(user_input)

# Stream responses
async for partial_response in agent.run_async_stream(user_input):
    print(partial_response)
```
Configuration
The AgentConfig class provides configuration options:

```python
class AgentConfig:
    client: instructor.Instructor  # Client for interacting with the language model
    model: str = "gpt-4o-mini"  # Model to use
    history: Optional[ChatHistory] = None  # History component
    system_prompt_generator: Optional[SystemPromptGenerator] = None  # Prompt generator
    system_role: Optional[str] = "system"  # Role used for the system prompt; None disables it
    model_api_parameters: Optional[dict] = None  # Additional API parameters
```

Input and output schemas are not part of AgentConfig; they are supplied as type parameters when instantiating AtomicAgent.
Input/Output Schemas
Default schemas for basic chat interactions:
```python
class BasicChatInputSchema(BaseIOSchema):
    """Input from the user to the AI agent."""
    chat_message: str = Field(
        ...,
        description="The chat message sent by the user."
    )

class BasicChatOutputSchema(BaseIOSchema):
    """Response generated by the chat agent."""
    chat_message: str = Field(
        ...,
        description="The markdown-enabled response generated by the chat agent."
    )
```
Key Methods
- run(user_input: Optional[BaseIOSchema] = None) -> BaseIOSchema: Process user input and get a response
- run_stream(user_input: Optional[BaseIOSchema] = None): Stream partial responses synchronously
- run_async(user_input: Optional[BaseIOSchema] = None): Process user input asynchronously
- run_async_stream(user_input: Optional[BaseIOSchema] = None): Stream partial responses asynchronously
- get_response(response_model=None) -> Type[BaseModel]: Get a direct model response
- reset_history(): Reset history to its initial state
- get_context_provider(provider_name: str): Get a registered context provider
- register_context_provider(provider_name: str, provider: BaseDynamicContextProvider): Register a new context provider
- unregister_context_provider(provider_name: str): Remove a context provider
Context Providers
Context providers can be used to inject dynamic information into the system prompt:
```python
from atomic_agents.context import BaseDynamicContextProvider

class SearchResultsProvider(BaseDynamicContextProvider):
    def __init__(self, title: str):
        super().__init__(title=title)
        self.results = []

    def get_info(self) -> str:
        return "\n\n".join([
            f"Result {idx}:\n{result}"
            for idx, result in enumerate(self.results, 1)
        ])

# Register with agent
agent.register_context_provider(
    "search_results",
    SearchResultsProvider("Search Results")
)
```
Streaming Support
The agent supports streaming responses for more interactive experiences:
```python
async def chat():
    async for partial_response in agent.run_async_stream(user_input):
        # Handle each chunk of the response
        print(partial_response.chat_message)
```
History Management
The agent automatically manages conversation history through the ChatHistory component:
```python
# Access history
history = agent.history.get_history()

# Reset to initial state
agent.reset_history()

# Save/load history state
serialized = agent.history.dump()
agent.history.load(serialized)
```
Custom Schemas
You can use custom input/output schemas for structured interactions:
```python
from pydantic import Field
from typing import List

from atomic_agents import BaseIOSchema

class CustomInput(BaseIOSchema):
    """Custom input with specific fields"""
    question: str = Field(..., description="User's question")
    context: str = Field(..., description="Additional context")

class CustomOutput(BaseIOSchema):
    """Custom output with structured data"""
    answer: str = Field(..., description="Answer to the question")
    sources: List[str] = Field(..., description="Source references")

# Create agent with custom schemas
agent = AtomicAgent[CustomInput, CustomOutput](
    config=AgentConfig(
        client=client,
        model=model,
    )
)
```
For full API details:
- async atomic_agents.agents.atomic_agent.model_from_chunks_async_patched(cls, json_chunks, **kwargs)[source]
- class atomic_agents.agents.atomic_agent.BasicChatInputSchema(*, chat_message: str)[source]
Bases: BaseIOSchema
This schema represents the input from the user to the AI agent.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class atomic_agents.agents.atomic_agent.BasicChatOutputSchema(*, chat_message: str)[source]
Bases: BaseIOSchema
This schema represents the response generated by the chat agent.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class atomic_agents.agents.atomic_agent.AgentConfig(*, client: Instructor, model: str = 'gpt-4o-mini', history: ChatHistory | None = None, system_prompt_generator: SystemPromptGenerator | None = None, system_role: str | None = 'system', model_api_parameters: dict | None = None)[source]
Bases: BaseModel
- client: Instructor
- history: ChatHistory | None
- system_prompt_generator: SystemPromptGenerator | None
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class atomic_agents.agents.atomic_agent.AtomicAgent(config: AgentConfig)[source]
Bases: Generic
Base class for chat agents with full Instructor hook system integration.
This class provides the core functionality for handling chat interactions, including managing history, generating system prompts, and obtaining responses from a language model. It includes comprehensive hook system support for monitoring and error handling.
- Type Parameters:
InputSchema: Schema for the user input, must be a subclass of BaseIOSchema.
OutputSchema: Schema for the agent's output, must be a subclass of BaseIOSchema.
- client
Client for interacting with the language model.
- history
History component for storing chat history.
- Type: ChatHistory
- system_prompt_generator
Component for generating system prompts.
- Type: SystemPromptGenerator
- system_role
The role of the system in the conversation. None means no system prompt.
- Type: Optional[str]
- initial_history
Initial state of the history.
- Type: ChatHistory
- current_user_input
The current user input being processed.
- Type: Optional[InputSchema]
- model_api_parameters
Additional parameters passed to the API provider. Use this for parameters like 'temperature', 'max_tokens', etc.
- Type: Optional[dict]
- Hook System:
The AtomicAgent integrates with Instructor's hook system to provide comprehensive monitoring and error handling capabilities. Supported events include:
'parse:error': Triggered when Pydantic validation fails
'completion:kwargs': Triggered before the completion request
'completion:response': Triggered after the completion response
'completion:error': Triggered on completion errors
'completion:last_attempt': Triggered on the final retry attempt
- Hook Methods:
register_hook(event, handler): Register a hook handler for an event
unregister_hook(event, handler): Remove a hook handler
clear_hooks(event=None): Clear hooks for specific event or all events
enable_hooks()/disable_hooks(): Control hook processing
hooks_enabled: Property to check if hooks are enabled
Example
```python
# Basic usage
agent = AtomicAgent[InputSchema, OutputSchema](config)

# Register parse error hook for intelligent retry handling
def handle_parse_error(error):
    print(f"Validation failed: {error}")
    # Implement custom retry logic, logging, etc.

agent.register_hook("parse:error", handle_parse_error)

# Now parse:error hooks will fire on validation failures
response = agent.run(user_input)
```
- __init__(config: AgentConfig)[source]
Initializes the AtomicAgent.
- Parameters:
config (AgentConfig) – Configuration for the chat agent.
- property input_schema: Type[BaseIOSchema]
- property output_schema: Type[BaseIOSchema]
- run(user_input: InputSchema | None = None) → OutputSchema [source]
Runs the chat agent with the given user input synchronously.
- Parameters:
user_input (Optional[InputSchema]) – The input from the user. If not provided, skips adding to history.
- Returns:
The response from the chat agent.
- Return type:
OutputSchema
- run_stream(user_input: InputSchema | None = None) → Generator[OutputSchema, None, OutputSchema] [source]
Runs the chat agent with the given user input, supporting streaming output.
- Parameters:
user_input (Optional[InputSchema]) – The input from the user. If not provided, skips adding to history.
- Yields:
OutputSchema – Partial responses from the chat agent.
- Returns:
The final response from the chat agent.
- Return type:
OutputSchema
- async run_async(user_input: InputSchema | None = None) → OutputSchema [source]
Runs the chat agent asynchronously with the given user input.
- Parameters:
user_input (Optional[InputSchema]) – The input from the user. If not provided, skips adding to history.
- Returns:
The response from the chat agent.
- Return type:
OutputSchema
- Raises:
NotAsyncIterableError – If used as an async generator (in an async for loop). Use run_async_stream() method instead for streaming responses.
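The distinction NotAsyncIterableError guards against is the usual coroutine vs. async-generator split. A minimal stdlib sketch with stand-in functions (not the real AtomicAgent methods, which call a language model) showing why run_async must be awaited while run_async_stream is iterated:

```python
import asyncio

# Stand-ins illustrating only the calling conventions.
async def run_async():
    # Coroutine: awaiting it yields one final value
    return "final response"

async def run_async_stream():
    # Async generator: iterating it yields partial values
    for chunk in ["partial", " response"]:
        yield chunk

async def main():
    final = await run_async()                      # await, don't iterate
    parts = [c async for c in run_async_stream()]  # iterate, don't await
    return final, parts

final, parts = asyncio.run(main())
print(final)           # final response
print("".join(parts))  # partial response
```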
- async run_async_stream(user_input: InputSchema | None = None) → AsyncGenerator[OutputSchema, None] [source]
Runs the chat agent asynchronously with the given user input, supporting streaming output.
- Parameters:
user_input (Optional[InputSchema]) – The input from the user. If not provided, skips adding to history.
- Yields:
OutputSchema – Partial responses from the chat agent.
- get_context_provider(provider_name: str) → Type[BaseDynamicContextProvider] [source]
Retrieves a context provider by name.
- register_context_provider(provider_name: str, provider: BaseDynamicContextProvider)[source]
Registers a new context provider.
- Parameters:
provider_name (str) – The name of the context provider.
provider (BaseDynamicContextProvider) – The context provider instance.
- unregister_context_provider(provider_name: str)[source]
Unregisters an existing context provider.
- Parameters:
provider_name (str) – The name of the context provider to remove.
- register_hook(event: str, handler: Callable) → None [source]
Registers a hook handler for a specific event.
- Parameters:
event (str) – The event name (e.g., 'parse:error', 'completion:kwargs', etc.)
handler (Callable) – The callback function to handle the event
- unregister_hook(event: str, handler: Callable) → None [source]
Unregisters a hook handler for a specific event.
- Parameters:
event (str) – The event name
handler (Callable) – The callback function to remove