Context
Agent History
The ChatHistory class manages conversation history and state for AI agents:
from atomic_agents.context import ChatHistory
from atomic_agents import BaseIOSchema

# Initialize history with an optional maximum number of messages
history = ChatHistory(max_messages=10)

# Add messages
history.add_message(
    role="user",
    content=BaseIOSchema(...)  # any BaseIOSchema subclass instance
)

# Initialize a new turn
history.initialize_turn()
turn_id = history.get_current_turn_id()

# Access history
messages = history.get_history()

# Manage history
history.get_message_count()      # Get number of messages
history.delete_turn_id(turn_id)  # Delete messages by turn

# Persistence
serialized = history.dump()  # Save to string
history.load(serialized)     # Load from string

# Create copy
new_history = history.copy()
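Since dump() and load() work with plain strings, a minimal persistence round trip might look like the following sketch (the file path is arbitrary, not something the library prescribes):

# Persist the serialized history to disk and restore it later
serialized = history.dump()
with open("chat_history.json", "w") as f:
    f.write(serialized)

restored = ChatHistory(max_messages=10)
with open("chat_history.json") as f:
    restored.load(f.read())

assert restored.get_message_count() == history.get_message_count()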
Key features:
Message history management with role-based messages
Turn-based conversation tracking
Support for multimodal content (images, etc.)
Serialization and persistence
History size management
Deep copy functionality
Message Structure
Messages in history are structured as:
from typing import Optional
from pydantic import BaseModel
from atomic_agents import BaseIOSchema

class Message(BaseModel):
    role: str                       # e.g., 'user', 'assistant', 'system'
    content: BaseIOSchema           # Message content following a schema
    turn_id: Optional[str] = None   # Unique ID for grouping messages into a turn
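A short sketch of turn-based grouping, assuming add_message tags each message with the current turn ID (QuestionSchema and AnswerSchema below are hypothetical stand-ins for your own schemas):

from pydantic import Field
from atomic_agents import BaseIOSchema
from atomic_agents.context import ChatHistory

# Hypothetical schemas, for illustration only
class QuestionSchema(BaseIOSchema):
    """A user question"""
    question: str = Field(..., description="The user's question")

class AnswerSchema(BaseIOSchema):
    """An assistant answer"""
    answer: str = Field(..., description="The assistant's answer")

history = ChatHistory()
history.initialize_turn()
turn_id = history.get_current_turn_id()

# Assumption: both messages are associated with the current turn_id
history.add_message(role="user", content=QuestionSchema(question="What does ChatHistory do?"))
history.add_message(role="assistant", content=AnswerSchema(answer="It manages agent conversation state."))

# Deleting the turn removes every message recorded under that turn_id
history.delete_turn_id(turn_id)
print(history.get_message_count())  # Expected 0 if both messages shared the turn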
Multimodal Support
The history system automatically handles multimodal content:
# For content with images
messages = history.get_history()
for message in messages:
    content = message["content"]
    if isinstance(content, list):
        text_content = content[0]   # JSON string of the schema's regular fields
        images = content[1:]        # List of image objects
System Prompt Generator
The SystemPromptGenerator creates structured system prompts for AI agents:
from atomic_agents.context import (
    SystemPromptGenerator,
    BaseDynamicContextProvider
)

# Create generator with static content
generator = SystemPromptGenerator(
    background=[
        "You are a helpful AI assistant.",
        "You specialize in technical support."
    ],
    steps=[
        "1. Understand the user's request",
        "2. Analyze available information",
        "3. Provide clear solutions"
    ],
    output_instructions=[
        "Use clear, concise language",
        "Include step-by-step instructions",
        "Cite relevant documentation"
    ]
)

# Generate prompt
prompt = generator.generate_prompt()
Dynamic Context Providers
Context providers inject dynamic information into prompts:
from dataclasses import dataclass
from typing import List

@dataclass
class SearchResult:
    content: str
    metadata: dict

class SearchResultsProvider(BaseDynamicContextProvider):
    def __init__(self, title: str):
        super().__init__(title=title)
        self.results: List[SearchResult] = []

    def get_info(self) -> str:
        """Format search results for the prompt"""
        if not self.results:
            return "No search results available."
        return "\n\n".join([
            f"Result {idx}:\nMetadata: {result.metadata}\nContent:\n{result.content}\n{'-' * 80}"
            for idx, result in enumerate(self.results, 1)
        ])

# Use with generator
generator = SystemPromptGenerator(
    background=["You answer based on search results."],
    context_providers={
        "search_results": SearchResultsProvider("Search Results")
    }
)
The generated prompt will include:
Background information
Processing steps (if provided)
Dynamic context from providers
Output instructions
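As a rough usage sketch (the sample results below are made up; SearchResult and SearchResultsProvider come from the example above), you can populate a provider and then generate the full prompt:

# Populate the provider with illustrative data, then generate the prompt
provider = SearchResultsProvider("Search Results")
provider.results = [
    SearchResult(content="Atomic Agents overview ...", metadata={"source": "docs"}),
    SearchResult(content="Installation troubleshooting ...", metadata={"source": "faq"}),
]

generator = SystemPromptGenerator(
    background=["You answer based on search results."],
    context_providers={"search_results": provider},
)

prompt = generator.generate_prompt()
print(prompt)  # Background, steps (if any), provider output, and output instructions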
Base Components
BaseIOSchema
Base class for all input/output schemas:
from atomic_agents import BaseIOSchema
from pydantic import Field

class CustomSchema(BaseIOSchema):
    """Schema description (required)"""
    field: str = Field(..., description="Field description")
Key features:
Requires docstring description
Rich representation support
Automatic schema validation
JSON serialization
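For example, a quick sketch of validation and JSON serialization for the CustomSchema above, assuming standard Pydantic v2 behavior:

from pydantic import ValidationError

# Valid data passes validation and serializes to JSON
instance = CustomSchema(field="some value")
print(instance.model_dump_json())  # JSON string of the schema's fields

# Missing required fields raise a ValidationError
try:
    CustomSchema()
except ValidationError as exc:
    print(exc)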
BaseTool
Base class for creating tools:
import os

from atomic_agents import BaseIOSchema, BaseTool, BaseToolConfig
from pydantic import Field

# Illustrative input/output schemas (field names are examples)
class MyToolInputSchema(BaseIOSchema):
    """Input schema for MyTool"""
    query: str = Field(..., description="Query to process")

class MyToolOutputSchema(BaseIOSchema):
    """Output schema for MyTool"""
    result: str = Field(..., description="Result of processing the query")

class MyToolConfig(BaseToolConfig):
    """Tool configuration"""
    api_key: str = Field(
        default=os.getenv("API_KEY"),
        description="API key for the service"
    )

class MyTool(BaseTool[MyToolInputSchema, MyToolOutputSchema]):
    """Tool implementation"""
    input_schema = MyToolInputSchema
    output_schema = MyToolOutputSchema

    def __init__(self, config: MyToolConfig = MyToolConfig()):
        super().__init__(config)
        self.api_key = config.api_key

    def run(self, params: MyToolInputSchema) -> MyToolOutputSchema:
        # Implement tool logic here
        return MyToolOutputSchema(result=f"Processed: {params.query}")
Key features:
Structured input/output schemas
Configuration management
Title and description overrides
Error handling
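A brief usage sketch for the illustrative MyTool above (the title and description overrides come from BaseToolConfig, documented in the API reference below; the input value is made up):

# Instantiate with config overrides for the tool's name and description
tool = MyTool(
    config=MyToolConfig(
        title="My Custom Tool",
        description="Processes user queries",
    )
)

output = tool.run(MyToolInputSchema(query="example input"))
print(tool.tool_name)   # "My Custom Tool" (overridden by the config)
print(output.result)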
For full API details:
- class atomic_agents.context.chat_history.Message(*, role: str, content: BaseIOSchema, turn_id: str | None = None)[source]
Bases:
BaseModel
Represents a message in the chat history.
- content
The content of the message.
- Type:
BaseIOSchema
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to ConfigDict.
- class atomic_agents.context.chat_history.ChatHistory(max_messages: int | None = None)[source]
Bases:
object
Manages the chat history for an AI agent.
- __init__(max_messages: int | None = None)[source]
Initializes the ChatHistory with an empty history and optional constraints.
- Parameters:
max_messages (Optional[int]) – Maximum number of messages to keep in history. When exceeded, oldest messages are removed first.
- add_message(role: str, content: BaseIOSchema) → None[source]
Adds a message to the chat history and manages overflow.
- Parameters:
role (str) – The role of the message sender.
content (BaseIOSchema) – The content of the message.
- get_history() → List[Dict][source]
Retrieves the chat history, handling both regular and multimodal content.
- Returns:
The list of messages in the chat history as dictionaries. Each dictionary has ‘role’ and ‘content’ keys, where ‘content’ is a list that may contain strings (JSON) or multimodal objects.
- Return type:
List[Dict]
Note
This method does not support nested multimodal content. If your schema contains nested objects that themselves contain multimodal content, only the top-level multimodal content will be properly processed.
- copy() → ChatHistory[source]
Creates a copy of the chat history.
- Returns:
A copy of the chat history.
- Return type:
ChatHistory
- get_current_turn_id() → str | None[source]
Returns the current turn ID.
- Returns:
The current turn ID, or None if not set.
- Return type:
Optional[str]
- delete_turn_id(turn_id: int)[source]
Deletes messages from the history by turn ID.
- Parameters:
turn_id (int) – The turn ID of the messages to delete.
- Returns:
A success message with the deleted turn ID.
- Return type:
- Raises:
ValueError – If the specified turn ID is not found in the history.
- get_message_count() → int[source]
Returns the number of messages in the chat history.
- Returns:
The number of messages.
- Return type:
int
- class atomic_agents.context.system_prompt_generator.BaseDynamicContextProvider(title: str)[source]
Bases:
ABC
- class atomic_agents.context.system_prompt_generator.SystemPromptGenerator(background: List[str] | None = None, steps: List[str] | None = None, output_instructions: List[str] | None = None, context_providers: Dict[str, BaseDynamicContextProvider] | None = None)[source]
Bases:
object
- class atomic_agents.base.base_io_schema.BaseIOSchema[source]
Bases:
BaseModel
Base schema for input/output in the Atomic Agents framework.
- classmethod model_json_schema(*args, **kwargs)[source]
Generates a JSON schema for a model class.
- Parameters:
by_alias – Whether to use attribute aliases or not.
ref_template – The reference template.
schema_generator – To override the logic used to generate the JSON schema, as a subclass of GenerateJsonSchema with your desired modifications
mode – The mode in which to generate the schema.
- Returns:
The JSON schema for the given model class.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to ConfigDict.
- class atomic_agents.base.base_tool.BaseToolConfig(*, title: str | None = None, description: str | None = None)[source]
Bases:
BaseModel
Configuration for a tool.
- model_config: ClassVar[ConfigDict] = {}
Configuration for the model, should be a dictionary conforming to ConfigDict.
- class atomic_agents.base.base_tool.BaseTool(config: BaseToolConfig = BaseToolConfig(title=None, description=None))[source]
Base class for tools within the Atomic Agents framework.
Tools enable agents to perform specific tasks by providing a standardized interface for input and output. Each tool is defined with specific input and output schemas that enforce type safety and provide documentation.
- Type Parameters:
InputSchema – Schema defining the input data, must be a subclass of BaseIOSchema.
OutputSchema – Schema defining the output data, must be a subclass of BaseIOSchema.
- config
Configuration for the tool, including optional title and description overrides.
- Type:
BaseToolConfig
- input_schema
Schema class defining the input data (derived from generic type parameter).
- Type:
Type[InputSchema]
- output_schema
Schema class defining the output data (derived from generic type parameter).
- Type:
Type[OutputSchema]
- tool_name
The name of the tool, derived from the input schema’s title or overridden by the config.
- Type:
str
- tool_description
Description of the tool, derived from the input schema’s description or overridden by the config.
- Type:
str
- __init__(config: BaseToolConfig = BaseToolConfig(title=None, description=None))[source]
Initializes the BaseTool with an optional configuration override.
- Parameters:
config (BaseToolConfig, optional) – Configuration for the tool, including optional title and description overrides.
- property input_schema: Type
Returns the input schema class for the tool.
- Returns:
The input schema class.
- Return type:
Type[InputSchema]
- property output_schema: Type
Returns the output schema class for the tool.
- Returns:
The output schema class.
- Return type:
Type[OutputSchema]
- property tool_description: str
Returns the description of the tool.
- Returns:
The description of the tool.
- Return type:
str
- abstract run(params: InputSchema) → OutputSchema[source]
Executes the tool with the provided parameters.
- Parameters:
params (InputSchema) – Input parameters adhering to the input schema.
- Returns:
Output resulting from executing the tool, adhering to the output schema.
- Return type:
OutputSchema
- Raises:
NotImplementedError – If the method is not implemented by a subclass.