Components

Agent Memory

The AgentMemory class manages conversation history and state for AI agents:
```python
from atomic_agents.lib.components.agent_memory import AgentMemory
from atomic_agents.lib.base.base_io_schema import BaseIOSchema

# Initialize memory with an optional message limit
memory = AgentMemory(max_messages=10)

# Add messages
memory.add_message(
    role="user",
    content=BaseIOSchema(...)
)

# Initialize a new turn
memory.initialize_turn()
turn_id = memory.get_current_turn_id()

# Access history
history = memory.get_history()

# Manage memory
memory.get_message_count()      # Get the number of messages
memory.delete_turn_id(turn_id)  # Delete messages by turn

# Persistence
serialized = memory.dump()      # Save to string
memory.load(serialized)         # Load from string

# Create a copy
new_memory = memory.copy()
```
Key features:

- Message history management with role-based messages
- Turn-based conversation tracking
- Support for multimodal content (images, etc.)
- Serialization and persistence
- Memory size management
- Deep copy functionality
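The memory-size management described above (oldest messages dropped first once max_messages is exceeded) can be sketched with a plain deque; this is an illustrative stand-in, not the library's AgentMemory class:

```python
from collections import deque

class MemorySketch:
    """Illustrative stand-in for AgentMemory's overflow handling (not the library class)."""

    def __init__(self, max_messages=None):
        # A bounded deque silently discards the oldest entries on overflow
        self.history = deque(maxlen=max_messages)

    def add_message(self, role, content):
        self.history.append({"role": role, "content": content})

memory = MemorySketch(max_messages=3)
for i in range(5):
    memory.add_message("user", f"message {i}")

# Only the 3 most recent messages remain
print([m["content"] for m in memory.history])
```

The real class manages overflow inside add_message; the deque simply makes the first-in-first-out trimming policy visible.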
Message Structure

Messages in memory are structured as:

```python
class Message(BaseModel):
    role: str               # e.g., 'user', 'assistant', 'system'
    content: BaseIOSchema   # Message content following a schema
    turn_id: Optional[str]  # Unique ID for grouping messages
```
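Turn IDs let every message in one exchange be grouped and deleted together. A minimal sketch of that bookkeeping, assuming UUID-style turn IDs (a stand-in, not the library implementation):

```python
import uuid

class TurnMemorySketch:
    """Illustrative sketch of turn-based grouping (not the library class)."""

    def __init__(self):
        self.history = []
        self.current_turn_id = None

    def initialize_turn(self):
        # Start a new turn; subsequent messages share this ID
        self.current_turn_id = str(uuid.uuid4())

    def add_message(self, role, content):
        self.history.append(
            {"role": role, "content": content, "turn_id": self.current_turn_id})

    def delete_turn_id(self, turn_id):
        # Remove every message tagged with the given turn ID
        kept = [m for m in self.history if m["turn_id"] != turn_id]
        if len(kept) == len(self.history):
            raise ValueError(f"Turn ID {turn_id} not found")
        self.history = kept

mem = TurnMemorySketch()
mem.initialize_turn()
first_turn = mem.current_turn_id
mem.add_message("user", "hello")
mem.add_message("assistant", "hi there")
mem.initialize_turn()
mem.add_message("user", "bye")

mem.delete_turn_id(first_turn)  # removes both messages from the first turn
print(len(mem.history))
```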
Multimodal Support

The memory system automatically handles multimodal content:

```python
# For content with images
history = memory.get_history()
for message in history:
    if isinstance(message.content, list):
        text_content = message.content[0]  # JSON string
        images = message.content[1:]       # List of images
```
System Prompt Generator

The SystemPromptGenerator creates structured system prompts for AI agents:
```python
from atomic_agents.lib.components.system_prompt_generator import (
    SystemPromptGenerator,
    SystemPromptContextProviderBase,
)

# Create a generator with static content
generator = SystemPromptGenerator(
    background=[
        "You are a helpful AI assistant.",
        "You specialize in technical support."
    ],
    steps=[
        "1. Understand the user's request",
        "2. Analyze available information",
        "3. Provide clear solutions"
    ],
    output_instructions=[
        "Use clear, concise language",
        "Include step-by-step instructions",
        "Cite relevant documentation"
    ]
)

# Generate the prompt
prompt = generator.generate_prompt()
```
Dynamic Context Providers

Context providers inject dynamic information into prompts:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SearchResult:
    content: str
    metadata: dict

class SearchResultsProvider(SystemPromptContextProviderBase):
    def __init__(self, title: str):
        super().__init__(title=title)
        self.results: List[SearchResult] = []

    def get_info(self) -> str:
        """Format search results for the prompt."""
        if not self.results:
            return "No search results available."
        return "\n\n".join(
            f"Result {idx}:\nMetadata: {result.metadata}\nContent:\n{result.content}\n{'-' * 80}"
            for idx, result in enumerate(self.results, 1)
        )

# Use with a generator
generator = SystemPromptGenerator(
    background=["You answer based on search results."],
    context_providers={
        "search_results": SearchResultsProvider("Search Results")
    }
)
```
The generated prompt will include:

- Background information
- Processing steps (if provided)
- Dynamic context from providers
- Output instructions
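The assembly of those four parts can be sketched as a plain function; the section headings below are assumptions for illustration, not the library's actual prompt layout:

```python
def generate_prompt_sketch(background, steps=None, output_instructions=None, context=None):
    """Illustrative assembly of a sectioned system prompt (heading names are
    assumptions, not the library's real output)."""
    sections = ["# Background\n" + "\n".join(f"- {b}" for b in background)]
    if steps:  # processing steps are included only if provided
        sections.append("# Steps\n" + "\n".join(f"- {s}" for s in steps))
    if context:  # dynamic context, one titled sub-section per provider
        sections.append("# Context\n" + "\n".join(
            f"## {title}\n{info}" for title, info in context.items()))
    if output_instructions:
        sections.append("# Output Instructions\n" +
                        "\n".join(f"- {o}" for o in output_instructions))
    return "\n\n".join(sections)

prompt = generate_prompt_sketch(
    background=["You answer based on search results."],
    context={"Search Results": "No search results available."},
)
print(prompt)
```

In the real generator the context text comes from each provider's get_info() method, so it is re-evaluated every time the prompt is generated.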
Base Components

BaseIOSchema

Base class for all input/output schemas:

```python
from atomic_agents.lib.base.base_io_schema import BaseIOSchema
from pydantic import Field

class CustomSchema(BaseIOSchema):
    """Schema description (required)"""
    field: str = Field(..., description="Field description")
```
Key features:

- Requires a docstring description
- Rich representation support
- Automatic schema validation
- JSON serialization
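The docstring requirement can be illustrated with a stdlib-only stand-in; the real BaseIOSchema is a Pydantic model, so the class name and check below are assumptions for demonstration only:

```python
class IOSchemaSketch:
    """Illustrative sketch of BaseIOSchema's docstring requirement (a stand-in;
    the real class is a Pydantic model)."""

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        # Reject subclasses defined without their own docstring
        if not cls.__dict__.get("__doc__"):
            raise ValueError(f"{cls.__name__} must have a docstring description")

class GoodSchema(IOSchemaSketch):
    """A documented schema."""

try:
    # Creating a subclass without a docstring fails at class-creation time
    BadSchema = type("BadSchema", (IOSchemaSketch,), {})
except ValueError as exc:
    error_message = str(exc)

print(error_message)
```

Enforcing the docstring at class-creation time means a missing description surfaces immediately when the schema is defined, not later when it is first used.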
BaseTool

Base class for creating tools:

```python
import os

from atomic_agents.lib.base.base_tool import BaseTool, BaseToolConfig
from pydantic import Field

class MyToolConfig(BaseToolConfig):
    """Tool configuration"""
    api_key: str = Field(
        default=os.getenv("API_KEY"),
        description="API key for the service"
    )

class MyTool(BaseTool):
    """Tool implementation"""
    # MyToolInputSchema and MyToolOutputSchema are BaseIOSchema
    # subclasses defined elsewhere
    input_schema = MyToolInputSchema
    output_schema = MyToolOutputSchema

    def __init__(self, config: MyToolConfig = MyToolConfig()):
        super().__init__(config)
        self.api_key = config.api_key

    def run(self, params: MyToolInputSchema) -> MyToolOutputSchema:
        # Implement tool logic here
        ...
```
Key features:

- Structured input/output schemas
- Configuration management
- Title and description overrides
- Error handling
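The title/description override mechanism can be sketched with a stdlib-only stand-in; the dict-based config and attribute names below mirror the pattern for illustration and are not the library's implementation:

```python
class ToolSketch:
    """Illustrative sketch of the BaseTool pattern (a stand-in, not the library class)."""

    def __init__(self, config=None):
        config = config or {}
        # Name and description fall back to class metadata unless the
        # config overrides them
        self.tool_name = config.get("title", type(self).__name__)
        self.tool_description = config.get(
            "description", (self.__doc__ or "").strip())

    def run(self, params):
        # Subclasses must provide the actual tool logic
        raise NotImplementedError("Subclasses must implement run()")

class EchoTool(ToolSketch):
    """Echoes its input back."""

    def run(self, params):
        return {"echo": params}

tool = EchoTool({"title": "Echo"})
print(tool.tool_name, tool.run("hi"))
```

Raising NotImplementedError in the base run() matches the behavior documented in the API reference below: calling an un-overridden tool fails loudly rather than silently returning nothing.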
For full API details:
- class atomic_agents.lib.components.agent_memory.Message(*, role: str, content: BaseIOSchema, turn_id: str | None = None)
  Bases: BaseModel
  Represents a message in the chat history.
  - content (BaseIOSchema) – The content of the message.
  - model_config: ClassVar[ConfigDict] = {} – Configuration for the model; should be a dictionary conforming to pydantic's ConfigDict.
- class atomic_agents.lib.components.agent_memory.AgentMemory(max_messages: int | None = None)
  Bases: object
  Manages the chat history for an AI agent.
  - __init__(max_messages: int | None = None)
    Initializes the AgentMemory with an empty history and optional constraints.
    Parameters: max_messages (Optional[int]) – Maximum number of messages to keep in history. When exceeded, the oldest messages are removed first.
  - add_message(role: str, content: BaseIOSchema) -> None
    Adds a message to the chat history and manages overflow.
    Parameters: role (str) – The role of the message sender. content (BaseIOSchema) – The content of the message.
  - get_history() -> List[Dict]
    Retrieves the chat history, handling both regular and multimodal content.
    Returns: The list of messages in the chat history as dictionaries.
  - copy() -> AgentMemory
    Creates a copy of the chat memory.
    Returns: A copy of the chat memory.
  - get_current_turn_id() -> str | None
    Returns the current turn ID, or None if not set.
  - delete_turn_id(turn_id: int)
    Deletes messages from the memory by turn ID.
    Parameters: turn_id (int) – The turn ID of the messages to delete.
    Returns: A success message with the deleted turn ID.
    Raises: ValueError – If the specified turn ID is not found in the memory.
  - get_message_count() -> int
    Returns the number of messages in the chat history.
- class atomic_agents.lib.components.system_prompt_generator.SystemPromptContextProviderBase(title: str)
  Bases: ABC
- class atomic_agents.lib.components.system_prompt_generator.SystemPromptGenerator(background: List[str] | None = None, steps: List[str] | None = None, output_instructions: List[str] | None = None, context_providers: Dict[str, SystemPromptContextProviderBase] | None = None)
  Bases: object
- class atomic_agents.lib.base.base_io_schema.BaseIOSchema
  Bases: BaseModel
  Base schema for input/output in the Atomic Agents framework.
  - classmethod model_json_schema(*args, **kwargs)
    Generates a JSON schema for a model class.
    Parameters: by_alias – Whether to use attribute aliases or not. ref_template – The reference template. schema_generator – Overrides the logic used to generate the JSON schema, as a subclass of GenerateJsonSchema with your desired modifications. mode – The mode in which to generate the schema.
    Returns: The JSON schema for the given model class.
  - model_config: ClassVar[ConfigDict] = {} – Configuration for the model; should be a dictionary conforming to pydantic's ConfigDict.
- class atomic_agents.lib.base.base_tool.BaseToolConfig(*, title: str | None = None, description: str | None = None)
  Bases: BaseModel
  Configuration for a tool.
  - model_config: ClassVar[ConfigDict] = {} – Configuration for the model; should be a dictionary conforming to pydantic's ConfigDict.
- class atomic_agents.lib.base.base_tool.BaseTool(config: BaseToolConfig = BaseToolConfig(title=None, description=None))
  Bases: object
  Base class for tools within the Atomic Agents framework.
  - input_schema (Type[BaseIOSchema]) – Schema defining the input data.
  - output_schema (Type[BaseIOSchema]) – Schema defining the output data.
  - tool_name – The name of the tool, derived from the input schema's description or overridden by the user.
  - tool_description – Description of the tool, derived from the input schema's description or overridden by the user.
  - __init__(config: BaseToolConfig = BaseToolConfig(title=None, description=None))
    Initializes the BaseTool with an optional configuration override.
    Parameters: config (BaseToolConfig, optional) – Configuration for the tool, including optional title and description overrides.
  - run(params: BaseIOSchema) -> BaseIOSchema
    Executes the tool with the provided parameters.
    Parameters: params (BaseIOSchema) – Input parameters adhering to the input schema.
    Returns: Output resulting from executing the tool, adhering to the output schema.
    Raises: NotImplementedError – If the method is not implemented by a subclass.