Context

Agent History

The ChatHistory class manages conversation history and state for AI agents:

from atomic_agents.context import ChatHistory
from atomic_agents import BaseIOSchema

# Initialize history with optional max messages
history = ChatHistory(max_messages=10)

# Add messages (content is an instance of a BaseIOSchema subclass)
history.add_message(
    role="user",
    content=BaseIOSchema(...)
)

# Initialize a new turn
history.initialize_turn()
turn_id = history.get_current_turn_id()

# Access history
messages = history.get_history()

# Manage history
history.get_message_count()  # Get number of messages
history.delete_turn_id(turn_id)  # Delete messages by turn

# Persistence
serialized = history.dump()  # Save to string
history.load(serialized)  # Load from string

# Create copy
new_history = history.copy()

Key features:

  • Message history management with role-based messages

  • Turn-based conversation tracking

  • Support for multimodal content (images, etc.)

  • Serialization and persistence

  • History size management

  • Deep copy functionality
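The history size management mentioned above (oldest messages dropped once max_messages is exceeded) can be sketched framework-free with a bounded deque. This is an illustration of the trimming behavior, not the library's actual implementation:

```python
from collections import deque

# A bounded history: when maxlen is exceeded, the oldest entry is dropped.
history = deque(maxlen=10)

for i in range(12):
    history.append({"role": "user", "content": f"message {i}"})

print(len(history))           # 10 -- capped at maxlen
print(history[0]["content"])  # "message 2" -- the two oldest were evicted
```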

Message Structure

Messages in history are structured as:

class Message(BaseModel):
    role: str  # e.g., 'user', 'assistant', 'system'
    content: BaseIOSchema  # Message content following schema
    turn_id: Optional[str]  # Unique ID for grouping messages
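Turn IDs group all messages exchanged in one round trip, which is what makes per-turn deletion possible. A framework-free sketch of that grouping, using a plain dataclass as a stand-in for the library's Message:

```python
import uuid
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class SketchMessage:
    role: str
    content: str                  # the real Message carries a BaseIOSchema here
    turn_id: Optional[str] = None

history: List[SketchMessage] = []
turn_id = str(uuid.uuid4())       # one random ID shared by the whole turn
history.append(SketchMessage("user", "What is 2+2?", turn_id))
history.append(SketchMessage("assistant", "4", turn_id))

# Deleting by turn ID removes every message that carries that ID
history = [m for m in history if m.turn_id != turn_id]
print(len(history))  # 0
```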

Multimodal Support

The history system automatically handles multimodal content:

# For content with images
messages = history.get_history()
for message in messages:
    if isinstance(message["content"], list):
        text_content = message["content"][0]  # JSON string
        images = message["content"][1:]  # List of images
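The split shown above (first element a JSON string, remaining elements media objects) can be exercised with plain data. The content layout here is an assumption modeled on the snippet, not taken from the library:

```python
import json

# A multimodal content list as described above: JSON text first, then images.
content = [json.dumps({"caption": "A cat"}), "<image-bytes-1>", "<image-bytes-2>"]

text_content = json.loads(content[0])  # parsed text portion
images = content[1:]                   # remaining media objects

print(text_content["caption"])  # A cat
print(len(images))              # 2
```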

System Prompt Generator

The SystemPromptGenerator creates structured system prompts for AI agents:

from atomic_agents.context import (
    SystemPromptGenerator,
    BaseDynamicContextProvider
)

# Create generator with static content
generator = SystemPromptGenerator(
    background=[
        "You are a helpful AI assistant.",
        "You specialize in technical support."
    ],
    steps=[
        "1. Understand the user's request",
        "2. Analyze available information",
        "3. Provide clear solutions"
    ],
    output_instructions=[
        "Use clear, concise language",
        "Include step-by-step instructions",
        "Cite relevant documentation"
    ]
)

# Generate prompt
prompt = generator.generate_prompt()

Dynamic Context Providers

Context providers inject dynamic information into prompts:

from dataclasses import dataclass
from typing import List

@dataclass
class SearchResult:
    content: str
    metadata: dict

class SearchResultsProvider(BaseDynamicContextProvider):
    def __init__(self, title: str):
        super().__init__(title=title)
        self.results: List[SearchResult] = []

    def get_info(self) -> str:
        """Format search results for the prompt"""
        if not self.results:
            return "No search results available."

        return "\n\n".join([
            f"Result {idx}:\nMetadata: {result.metadata}\nContent:\n{result.content}\n{'-' * 80}"
            for idx, result in enumerate(self.results, 1)
        ])

# Use with generator
generator = SystemPromptGenerator(
    background=["You answer based on search results."],
    context_providers={
        "search_results": SearchResultsProvider("Search Results")
    }
)

The generated prompt will include:

  1. Background information

  2. Processing steps (if provided)

  3. Dynamic context from providers

  4. Output instructions
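How those four parts might be stitched together can be sketched without the framework. The section order follows the list above, but the helper name and exact formatting are assumptions for illustration, not SystemPromptGenerator's real output:

```python
def build_prompt(background, steps, output_instructions, context_providers):
    """Assemble a system prompt from static sections and dynamic providers."""
    sections = []
    sections.append("# Background\n" + "\n".join(background))
    if steps:
        sections.append("# Steps\n" + "\n".join(steps))
    for title, get_info in context_providers.items():
        # Each provider contributes a titled section of dynamic context
        sections.append(f"# {title}\n" + get_info())
    sections.append("# Output Instructions\n" + "\n".join(output_instructions))
    return "\n\n".join(sections)

prompt = build_prompt(
    background=["You answer based on search results."],
    steps=[],
    output_instructions=["Use clear, concise language"],
    context_providers={"Search Results": lambda: "No search results available."},
)
print(prompt)
```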

Base Components

BaseIOSchema

Base class for all input/output schemas:

from atomic_agents import BaseIOSchema
from pydantic import Field

class CustomSchema(BaseIOSchema):
    """Schema description (required)"""
    field: str = Field(..., description="Field description")

Key features:

  • Requires docstring description

  • Rich representation support

  • Automatic schema validation

  • JSON serialization

BaseTool

Base class for creating tools:

import os

from atomic_agents import BaseIOSchema, BaseTool, BaseToolConfig
from pydantic import Field

class MyToolInputSchema(BaseIOSchema):
    """Input schema for the tool"""
    query: str = Field(..., description="Query to process")

class MyToolOutputSchema(BaseIOSchema):
    """Output schema for the tool"""
    result: str = Field(..., description="Result of processing the query")

class MyToolConfig(BaseToolConfig):
    """Tool configuration"""
    api_key: str = Field(
        default=os.getenv("API_KEY"),
        description="API key for the service"
    )

class MyTool(BaseTool[MyToolInputSchema, MyToolOutputSchema]):
    """Tool implementation"""
    input_schema = MyToolInputSchema
    output_schema = MyToolOutputSchema

    def __init__(self, config: MyToolConfig = MyToolConfig()):
        super().__init__(config)
        self.api_key = config.api_key

    def run(self, params: MyToolInputSchema) -> MyToolOutputSchema:
        # Implement tool logic here
        return MyToolOutputSchema(result=f"Processed: {params.query}")

Key features:

  • Structured input/output schemas

  • Configuration management

  • Title and description overrides

  • Error handling
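The generic pattern behind BaseTool — an abstract run method bound to typed input and output — can be mimicked with the standard library alone. This sketch uses dataclasses in place of BaseIOSchema models and is not the framework's implementation:

```python
from abc import ABC, abstractmethod
from dataclasses import dataclass
from typing import Generic, TypeVar

InputT = TypeVar("InputT")
OutputT = TypeVar("OutputT")

class SketchTool(ABC, Generic[InputT, OutputT]):
    """Minimal stand-in for BaseTool: subclasses implement run."""
    @abstractmethod
    def run(self, params: InputT) -> OutputT: ...

@dataclass
class EchoInput:
    text: str

@dataclass
class EchoOutput:
    text: str

class EchoTool(SketchTool[EchoInput, EchoOutput]):
    def run(self, params: EchoInput) -> EchoOutput:
        # Typed input in, typed output out -- the contract run() enforces
        return EchoOutput(text=params.text.upper())

result = EchoTool().run(EchoInput(text="hello"))
print(result.text)  # HELLO
```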

For full API details:

class atomic_agents.context.chat_history.Message(*, role: str, content: BaseIOSchema, turn_id: str | None = None)[source]

Bases: BaseModel

Represents a message in the chat history.

role
The role of the message sender (e.g., 'user', 'system', 'tool').
Type: str

content
The content of the message.
Type: BaseIOSchema

turn_id
Unique identifier for the turn this message belongs to.
Type: Optional[str]

model_config: ClassVar[ConfigDict] = {}
Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

class atomic_agents.context.chat_history.ChatHistory(max_messages: int | None = None)[source]

Bases: object

Manages the chat history for an AI agent.

history
A list of messages representing the chat history.
Type: List[Message]

max_messages
Maximum number of messages to keep in history.
Type: Optional[int]

current_turn_id
The ID of the current turn.
Type: Optional[str]

__init__(max_messages: int | None = None)[source]
Initializes the ChatHistory with an empty history and optional constraints.
Parameters: max_messages (Optional[int]) – Maximum number of messages to keep in history. When exceeded, the oldest messages are removed first.

initialize_turn() → None[source]
Initializes a new turn by generating a random turn ID.

add_message(role: str, content: BaseIOSchema) → None[source]
Adds a message to the chat history and manages overflow.
Parameters:
  • role (str) – The role of the message sender.
  • content (BaseIOSchema) – The content of the message.

get_history() → List[Dict][source]
Retrieves the chat history, handling both regular and multimodal content.
Returns: The list of messages in the chat history as dictionaries. Each dictionary has 'role' and 'content' keys, where 'content' is a list that may contain strings (JSON) or multimodal objects.
Return type: List[Dict]

Note: This method does not support nested multimodal content. If your schema contains nested objects that themselves contain multimodal content, only the top-level multimodal content will be properly processed.

copy() → ChatHistory[source]
Creates a copy of the chat history.
Returns: A copy of the chat history.
Return type: ChatHistory

get_current_turn_id() → str | None[source]
Returns the current turn ID.
Returns: The current turn ID, or None if not set.
Return type: Optional[str]

delete_turn_id(turn_id: str)[source]
Deletes messages from the history by their turn ID.
Parameters: turn_id (str) – The turn ID of the messages to delete.
Returns: A success message with the deleted turn ID.
Return type: str
Raises: ValueError – If the specified turn ID is not found in the history.

get_message_count() → int[source]
Returns the number of messages in the chat history.
Returns: The number of messages.
Return type: int

dump() → str[source]
Serializes the entire ChatHistory instance to a JSON string.
Returns: A JSON string representation of the ChatHistory.
Return type: str

load(serialized_data: str) → None[source]
Deserializes a JSON string and loads it into the ChatHistory instance.
Parameters: serialized_data (str) – A JSON string representation of the ChatHistory.
Raises: ValueError – If the serialized data is invalid or cannot be deserialized.
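The dump/load round trip can be illustrated with the standard json module. The serialized shape below is an assumption for illustration, not ChatHistory's actual wire format:

```python
import json

# A history-like structure serialized to a string and restored.
state = {
    "max_messages": 10,
    "messages": [{"role": "user", "content": {"text": "hi"}, "turn_id": "t1"}],
}

serialized = json.dumps(state)     # analogous to history.dump()
restored = json.loads(serialized)  # analogous to history.load(serialized)

print(restored == state)  # True
```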

class atomic_agents.context.system_prompt_generator.BaseDynamicContextProvider(title: str)[source]

Bases: ABC

__init__(title: str)[source]

abstract get_info() → str[source]

class atomic_agents.context.system_prompt_generator.SystemPromptGenerator(background: List[str] | None = None, steps: List[str] | None = None, output_instructions: List[str] | None = None, context_providers: Dict[str, BaseDynamicContextProvider] | None = None)[source]

Bases: object

__init__(background: List[str] | None = None, steps: List[str] | None = None, output_instructions: List[str] | None = None, context_providers: Dict[str, BaseDynamicContextProvider] | None = None)[source]

generate_prompt() → str[source]
class atomic_agents.base.base_io_schema.BaseIOSchema[source]

Bases: BaseModel

Base schema for input/output in the Atomic Agents framework.

classmethod model_json_schema(*args, **kwargs)[source]
Generates a JSON schema for a model class.
Parameters:
  • by_alias – Whether to use attribute aliases or not.
  • ref_template – The reference template.
  • schema_generator – To override the logic used to generate the JSON schema, pass a subclass of GenerateJsonSchema with your desired modifications.
  • mode – The mode in which to generate the schema.
Returns: The JSON schema for the given model class.

model_config: ClassVar[ConfigDict] = {}
Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

class atomic_agents.base.base_tool.BaseToolConfig(*, title: str | None = None, description: str | None = None)[source]

Bases: BaseModel

Configuration for a tool.

title
Overrides the default title of the tool.
Type: Optional[str]

description
Overrides the default description of the tool.
Type: Optional[str]

model_config: ClassVar[ConfigDict] = {}
Configuration for the model; should be a dictionary conforming to pydantic.config.ConfigDict.

class atomic_agents.base.base_tool.BaseTool(config: BaseToolConfig = BaseToolConfig(title=None, description=None))[source]

Bases: ABC, Generic

Base class for tools within the Atomic Agents framework.

Tools enable agents to perform specific tasks by providing a standardized interface for input and output. Each tool is defined with specific input and output schemas that enforce type safety and provide documentation.

Type Parameters:
  • InputSchema – Schema defining the input data; must be a subclass of BaseIOSchema.
  • OutputSchema – Schema defining the output data; must be a subclass of BaseIOSchema.

config
Configuration for the tool, including optional title and description overrides.
Type: BaseToolConfig

input_schema
Schema class defining the input data (derived from the generic type parameter).
Type: Type[InputSchema]

output_schema
Schema class defining the output data (derived from the generic type parameter).
Type: Type[OutputSchema]

tool_name
The name of the tool, derived from the input schema's title or overridden by the config.
Type: str

tool_description
Description of the tool, derived from the input schema's description or overridden by the config.
Type: str

__init__(config: BaseToolConfig = BaseToolConfig(title=None, description=None))[source]
Initializes the BaseTool with an optional configuration override.
Parameters: config (BaseToolConfig, optional) – Configuration for the tool, including optional title and description overrides.

property input_schema: Type[InputSchema]
Returns the input schema class for the tool.

property output_schema: Type[OutputSchema]
Returns the output schema class for the tool.

property tool_name: str
Returns the name of the tool.

property tool_description: str
Returns the description of the tool.

abstract run(params: InputSchema) → OutputSchema[source]
Executes the tool with the provided parameters.
Parameters: params (InputSchema) – Input parameters adhering to the input schema.
Returns: Output resulting from executing the tool, adhering to the output schema.
Return type: OutputSchema
Raises: NotImplementedError – If the method is not implemented by a subclass.