Integrations
Integrate an agent in 4 ways
Our platform supports multiple flexible integration pathways to create AI-driven agents that can autonomously understand, process, and respond to tasks. Choose the method that best fits your technical needs and workflow:
Via Prompt: Quickly spin up lightweight agents using simple natural language instructions. Ideal for fast prototyping and prompt-driven tasks.
Via Workflow: Design structured, multi-step agent behaviors using our visual workflow builder. Best suited for orchestrating sequential or conditional logic without code.
Via Third Party API (Coming soon): Connect your agent to external systems and services through API calls. This approach is perfect for automating interactions with external platforms and data sources.
Via Agent Framework (Coming soon): For full customization and advanced control, integrate with AgentOpera or other frameworks. Build deeply tailored agents that support custom logic, memory, tools, and streaming capabilities.
Each integration option is designed to meet different levels of complexity—from no-code to fully programmatic—empowering you to bring intelligent automation into any environment.

✨ Via Prompt
The Prompt-based integration method is the fastest way to bring a fully functional AI agent to life. By defining a system prompt and model configuration, you can deploy a natural language-driven agent that understands tasks and responds autonomously — all with minimal setup.
This integration uses the PromptAgentCreator class, a specialized extension of the DeveloperAgent framework. It wraps around an LLM backend (such as OpenAI) and applies your custom system_prompt to guide the agent's behavior. This enables:
Natural Language Understanding: Define the agent's tone, goals, and domain using a prompt-based setup.
Streaming Responses: Delivers real-time, token-by-token feedback for faster and more interactive conversations.
Tool Integration (🚧 In development): Dynamically loads external tools from the MCP server and extends agent capabilities.
Knowledge Base (🚧 In development): Dynamically loads external knowledge from the memory base and extends agent capabilities.
Session Awareness: Maintains conversation context using session_id, supporting both single-turn and multi-turn interactions.
Flexible Model Selection: Choose from different LLM models (e.g., GPT-4, DeepSeek-R1, or custom deployments) depending on your performance and cost requirements.
This approach is ideal for lightweight use cases: custom chat agents, knowledge bots, or any scenario where natural language is the main interface, with no need to build or manage workflows.
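Session awareness, as described above, means each session_id accumulates its own conversation history that is replayed on every turn. The platform handles this internally; the following standalone sketch (with hypothetical names, not the platform's actual API) illustrates the idea:

```python
from collections import defaultdict

# Illustrative sketch only: the names below are hypothetical, not part of
# PromptAgentCreator, whose session handling is internal to the platform.
class SessionStore:
    """Keeps per-session message history so multi-turn calls share context."""
    def __init__(self):
        self._histories = defaultdict(list)

    def add_message(self, session_id: str, role: str, content: str) -> None:
        self._histories[session_id].append({"role": role, "content": content})

    def context(self, session_id: str) -> list:
        # The full history for this session_id is replayed on each turn.
        return list(self._histories[session_id])

store = SessionStore()
store.add_message("sess-1", "user", "Plan a weekend in Kyoto")
store.add_message("sess-1", "assistant", "Day 1: Fushimi Inari ...")
store.add_message("sess-1", "user", "Make it vegetarian-friendly")

# Turn 2 sees the whole conversation; a new session_id starts fresh.
assert len(store.context("sess-1")) == 3
assert store.context("sess-2") == []
```

A single-turn interaction is simply a session_id that is used once and never continued.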

📦 Example: Creating a Prompt-Based Agent
from agentopera.zerocode.developer_agent import PromptAgentCreator

agent = PromptAgentCreator(
    agent_url="https://your-model-endpoint.com/v1/chat",
    agent_api_key="your-api-key",
    is_output_streaming=True,
    agent_name="TravelPlanner",
    agent_intent="travel_intent",
    agent_description="Helps users plan trips using LLM reasoning.",
    agent_type="General",
    system_prompt="You are a helpful travel assistant. Answer questions clearly and suggest travel ideas.",
    model_name="gpt-4"
)
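With is_output_streaming=True, replies arrive token by token rather than as one final string. The exact streaming interface of PromptAgentCreator may differ; this minimal sketch uses a stand-in generator to show how a consumer would accumulate tokens while rendering them as they arrive:

```python
# Hypothetical sketch: fake_token_stream stands in for the LLM backend's
# streamed chunks; it is not part of the agentopera API.
def fake_token_stream():
    for token in ["Visit ", "Kyoto ", "in ", "autumn."]:
        yield token

def collect_stream(stream):
    """Accumulate streamed tokens into the final reply as they arrive."""
    parts = []
    for token in stream:
        parts.append(token)   # a UI would render each token immediately here
    return "".join(parts)

reply = collect_stream(fake_token_stream())
assert reply == "Visit Kyoto in autumn."
```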
🔄 Via Workflow
The Workflow-based integration allows you to create agents using our platform’s built-in visual workflow engine. This method is ideal for designing multi-step, logic-driven tasks without writing complex backend code.
Under the hood, FlowAgentCreator connects to the platform's internal workflow runtime, enabling:
Visual Programming: Design agent logic using a drag-and-drop interface to define data flow, conditions, and function chaining.
Streaming Support: Stream responses token-by-token with full support for token, end, and end_vertex events.
Dynamic Behavior: Handle complex decision trees, branching logic, and multiple tool invocations seamlessly.
Session-aware Execution: Automatically carries session_id across each workflow stage to preserve context.
Native API Compatibility: Executes platform-defined workflows via internal APIs with structured inputs and real-time output streaming.
This option is ideal for no-code and low-code users who want to build powerful, stateful agents entirely within the platform.
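The token, end, and end_vertex events mentioned above can be handled with a simple dispatch loop. The event shapes below are assumptions for illustration, not the platform's exact wire format:

```python
# Assumed event shapes for illustration; the platform's actual streaming
# format may differ.
events = [
    {"type": "token", "data": "Your order "},
    {"type": "token", "data": "has shipped."},
    {"type": "end_vertex", "vertex": "lookup_order"},  # one workflow node finished
    {"type": "end"},                                   # whole workflow finished
]

def run_stream(events):
    """Dispatch streamed workflow events by type."""
    text, finished_vertices, done = [], [], False
    for event in events:
        if event["type"] == "token":
            text.append(event["data"])         # partial model output
        elif event["type"] == "end_vertex":
            finished_vertices.append(event["vertex"])
        elif event["type"] == "end":
            done = True
    return "".join(text), finished_vertices, done

text, vertices, done = run_stream(events)
assert text == "Your order has shipped."
assert vertices == ["lookup_order"] and done
```

Separating end_vertex from end lets a client show per-stage progress in a multi-step workflow before the final result is complete.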

📦 Example: Creating a Workflow-Based Agent
from agentopera.zerocode.developer_agent import FlowAgentCreator

agent = FlowAgentCreator(
    agent_url="https://platform-hosted-workflow-url.com/build",  # auto-generated by platform
    agent_name="OrderSupportAgent",
    agent_intent="order_support",
    agent_description="Handles customer inquiries about order status, shipping, and returns.",
    agent_type="support",
    is_output_streaming=True,
    agent_api_key="your-api-key"
)
🌐 Via Third Party API (Coming soon)
The Third Party API integration method allows you to connect your agent to any external service or system. This is the most flexible option for teams that already have AI logic, data pipelines, or agent backends hosted outside the platform.
Powered by the DeveloperAgent base class, this integration enables you to:
Plug In External Intelligence: Route user messages to a custom agent hosted anywhere, from your internal services to public AI APIs.
Easy Payload Mapping: Effortlessly integrate your existing agents by specifying the expected input/output format via required_payload_structure. The platform automatically adapts user messages to match your API's structure, so no major refactoring is needed.
Support Streaming or Batch Modes: Handle both real-time streaming responses and standard JSON replies, depending on how your agent is configured.
Secure Access with API Keys: Optionally append an agent_api_key to authenticate requests to your service.
This method is ideal when your AI agent already lives elsewhere — and you want to integrate it into the platform without reimplementation. You stay in control of the logic, while the platform handles session state, message routing, and delivery.
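Payload mapping can be pictured as filling a declared structure with each incoming message. The shape of required_payload_structure below is an assumption for illustration; the platform's actual contract may differ:

```python
# Hypothetical payload structure; the real required_payload_structure
# contract may use a different schema.
required_payload_structure = {
    "query": "{message}",       # where the user's text is injected
    "session": "{session_id}",  # where the session identifier is injected
}

def map_payload(structure, message, session_id):
    """Fill the declared structure with the incoming user message."""
    return {
        key: template.format(message=message, session_id=session_id)
        for key, template in structure.items()
    }

payload = map_payload(required_payload_structure, "Where is my order?", "sess-42")
assert payload == {"query": "Where is my order?", "session": "sess-42"}
```

The resulting dict is what would be POSTed to your external agent's endpoint, while the platform keeps ownership of session state and routing.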

🧠 Via Agent Framework (Coming soon)