Chat Agent

AgentOpera's ChatAgent provides a flexible and powerful architecture for building intelligent agent applications. This document outlines the key components and features of the chat agent system.

Core Concepts

The AgentOpera agent system is built around a shared interface. Every agent exposes the following attributes and methods:

  • name: A unique identifier string for the agent

  • description: Text explaining the agent's purpose and capabilities

  • on_messages(): Primary method to process new messages and generate responses

  • on_messages_stream(): Stream-based variant that yields events during processing

  • on_reset(): Method to clear agent state and return to initial configuration

  • run() and run_stream(): Simplified interfaces for common task execution patterns
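As a quick illustration, the simplified run() interface might be used as follows. This is a minimal sketch: the task keyword argument and the shape of the returned result (a messages list) are assumptions modeled on comparable agent frameworks, not confirmed API details.

```python
import asyncio

from agentopera.chatflow.agents import AssistantAgent
from agentopera.engine.models.openai import OpenAIChatCompletionClient


async def main():
    model_client = OpenAIChatCompletionClient(model="gpt-4o")
    agent = AssistantAgent(name="assistant", model_client=model_client)

    # run() wraps message construction and response handling in one call.
    # The `task` parameter and `result.messages` attribute are assumptions.
    result = await agent.run(task="Summarize what an agent is in one sentence.")
    print(result.messages[-1].content)


asyncio.run(main())
```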

AssistantAgent

The AssistantAgent serves as the primary implementation class in the agent hierarchy, providing LLM-powered interaction capabilities and tool execution support.

import asyncio

from agentopera.chatflow.agents import AssistantAgent
from agentopera.chatflow.messages import TextMessage
from agentopera.engine.types.agent import CancellationToken
from agentopera.engine.models.openai import OpenAIChatCompletionClient

# Set up the model client
model_client = OpenAIChatCompletionClient(
    model="gpt-4o",
    # api_key="YOUR_API_KEY", 
)

# Initialize the assistant agent
agent = AssistantAgent(
    name="assistant",
    model_client=model_client,
    system_message="You are a helpful assistant. Always provide detailed answers."
)

# Execute a simple query
async def run_assistant():
    response = await agent.on_messages(
        [TextMessage(content="What is deep learning?", source="user")],
        CancellationToken()
    )
    print(response.chat_message.content)

# Launch the function
asyncio.run(run_assistant())

Responses

The on_messages() method produces a Response object with two key elements:

  • chat_message: The final message response from the agent

  • inner_messages: A sequence of intermediate messages showing the agent's reasoning process

Note: Because agents maintain internal state between calls, always provide only new messages to on_messages(), and avoid passing the entire conversation history.
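Continuing the AssistantAgent example above, both fields can be inspected directly. This is a sketch that assumes only the two Response fields described here; the exact types of the inner messages are not specified by this document.

```python
async def show_response():
    # Send only the new message; the agent retains earlier context itself.
    response = await agent.on_messages(
        [TextMessage(content="Explain overfitting briefly.", source="user")],
        CancellationToken(),
    )

    # Final answer produced by the agent.
    print(response.chat_message.content)

    # Intermediate steps (tool calls, thought events), if any were produced.
    for message in response.inner_messages or []:
        print(type(message).__name__, getattr(message, "content", None))
```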

Message Streaming

For applications requiring real-time interaction, AgentOpera provides streaming capabilities through on_messages_stream().

The stream yields a sequence of items:

  1. Agent processing events (tool calls, thought processes, etc.)

  2. The final Response object as the last item
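A typical consumption loop looks like the sketch below. It reuses the agent from the AssistantAgent example above; the import path for Response is an assumption, while the ordering (events first, Response last) follows the description above.

```python
from agentopera.chatflow.base import Response  # import path is an assumption


async def stream_response():
    async for item in agent.on_messages_stream(
        [TextMessage(content="What is deep learning?", source="user")],
        CancellationToken(),
    ):
        if isinstance(item, Response):
            # The final Response arrives as the last item in the stream.
            print("Final:", item.chat_message.content)
        else:
            # Intermediate processing events (tool calls, thoughts, etc.).
            print("Event:", type(item).__name__)
```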

Tools

Agents can extend their capabilities by connecting to external functions and services.
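As an illustrative sketch, a tool can be supplied as a plain Python function at construction time. The tools parameter is an assumption modeled on similar agent frameworks; the weather function is a stub invented for this example.

```python
from agentopera.chatflow.agents import AssistantAgent
from agentopera.engine.models.openai import OpenAIChatCompletionClient


async def get_weather(city: str) -> str:
    """Return a weather summary for the given city (stub for illustration)."""
    return f"It is sunny in {city}."


model_client = OpenAIChatCompletionClient(model="gpt-4o")

# The `tools` parameter is an assumption; the docstring and type hints
# would typically be used to describe the tool to the model.
agent = AssistantAgent(
    name="weather_assistant",
    model_client=model_client,
    tools=[get_weather],
    system_message="Use the available tools to answer questions.",
)
```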

Token Streaming

For fine-grained control over output generation, enable token streaming with model_client_stream=True.
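A sketch of enabling and consuming token-level streaming, reusing model_client, TextMessage, and CancellationToken from the example above. The chunk event type name is an assumption; this document only specifies the model_client_stream flag itself.

```python
streaming_agent = AssistantAgent(
    name="assistant",
    model_client=model_client,
    model_client_stream=True,  # yield partial output as tokens are generated
)


async def stream_tokens():
    async for event in streaming_agent.on_messages_stream(
        [TextMessage(content="Write a haiku about the sea.", source="user")],
        CancellationToken(),
    ):
        # Chunk events carry partial model output; the event type name
        # used here is an assumption, not a confirmed API detail.
        if type(event).__name__ == "ModelClientStreamingChunkEvent":
            print(event.content, end="", flush=True)
```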

Context Management

The AssistantAgent offers intelligent context management for handling conversation history:

Default: Full History

By default, the agent uses UnboundedChatCompletionContext, which maintains and provides the complete conversation history to the language model.
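The default can also be made explicit by passing the context in yourself, which is useful as a starting point before swapping in a bounded variant. Both the model_context parameter and the import path below are assumptions.

```python
from agentopera.engine.model_context import UnboundedChatCompletionContext  # assumed path

# Equivalent to the default behavior: the model sees the full history.
agent = AssistantAgent(
    name="assistant",
    model_client=model_client,
    model_context=UnboundedChatCompletionContext(),
)
```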

Bounded History

For long-running conversations or performance optimization, you can limit the amount of conversation history sent to the model (the context window).

This approach is particularly useful for applications with extended interactions where older context becomes less relevant.
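A sketch of a bounded context that keeps only the most recent messages. The class name, import path, and buffer_size parameter are assumptions modeled on comparable frameworks; the agent still stores the full history internally, only the model's view is truncated.

```python
from agentopera.engine.model_context import BufferedChatCompletionContext  # assumed path

agent = AssistantAgent(
    name="assistant",
    model_client=model_client,
    # Only the last 5 messages are forwarded to the language model.
    model_context=BufferedChatCompletionContext(buffer_size=5),
)
```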

Other Agent Types

The AgentOpera framework includes several specialized agent implementations:

  • UserProxyAgent: Interfaces with human users, capturing their input as agent messages

  • CodeExecutorAgent: Specialized for executing code in various programming languages

  • OpenAIAssistantAgent: Adapter for OpenAI's Assistant API with additional capabilities

Each agent implementation follows the same core interface and provides domain-specific functionality.
