
Using the Chat API

Mar 4, 2026 · 10 min read

Table of Contents

  • Prerequisites
  • Create an API Key
  • Send Your First Message
  • Parse the SSE Stream
  • Add Customer Context
  • Isolate Customers with peerId
  • Manage Sessions
  • Retrieve Message History
  • Tips

This guide covers the Chat API: sending messages, parsing the SSE stream, isolating sessions per customer, and passing context. The examples use a chat widget scenario, but the same patterns apply to any integration (backend bots, Slack apps, internal tools, etc.).

Prerequisites

Before you start, you need:

  • An Agento account with a running agent
  • Your agent set to shared mode (so multiple users can chat with it). See the Private vs Shared Mode guide.
  • An API key with the chat scope

Create an API Key

  1. Go to API Keys in your dashboard
  2. Click Create API Key
  3. Select the chat scope
  4. Copy the key. You will not see it again.

Your API key starts with ak_live_. Keep it secret. In production, route all API calls through your backend server so the key is never exposed to the browser.

Send Your First Message

Send a POST request to the chat endpoint:

curl -N https://api.agento.host/v1/agents/YOUR_AGENT_ID/chat/messages \
  -H "X-Api-Key: ak_live_YOUR_KEY" \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello!"}'

The -N flag disables buffering so you see the SSE stream in real time.

Parse the SSE Stream

The response is a Server-Sent Events stream. Each event is a JSON object on a data: line:

data: {"type":"text","content":"Hi! "}
data: {"type":"text","content":"How can I help you?"}
data: {"type":"done"}

Here is a JavaScript function that sends a message and reads the stream:

async function sendMessage(agentId, message, options = {}) {
  const { apiKey, peerId, sessionId, context, onToken, onDone } = options

  const res = await fetch(
    `https://api.agento.host/v1/agents/${agentId}/chat/messages`,
    {
      method: 'POST',
      headers: {
        'X-Api-Key': apiKey,
        'Content-Type': 'application/json',
      },
      body: JSON.stringify({ message, peerId, sessionId, context }),
    }
  )

  if (!res.ok) {
    // Non-200 responses return a JSON error body, not an SSE stream
    throw new Error(`chat request failed: ${res.status} ${await res.text()}`)
  }

  const reader = res.body.getReader()
  const decoder = new TextDecoder()
  let buffer = ''

  while (true) {
    const { done, value } = await reader.read()
    if (done) break

    buffer += decoder.decode(value, { stream: true })
    const lines = buffer.split('\n')
    buffer = lines.pop() // keep incomplete line in buffer

    for (const line of lines) {
      if (!line.startsWith('data: ')) continue
      const data = JSON.parse(line.slice(6))

      if (data.type === 'text' && onToken) {
        onToken(data.content)
      }
      if (data.type === 'done' && onDone) {
        onDone()
      }
    }
  }
}

Usage:

let response = ''

await sendMessage('YOUR_AGENT_ID', 'Hello!', {
  apiKey: 'ak_live_YOUR_KEY',
  onToken: (token) => {
    response += token
    updateChatUI(response) // your rendering function
  },
  onDone: () => {
    console.log('Full response:', response)
  },
})

Add Customer Context

The context field lets you pass metadata about the current user to your agent. This is useful for support bots, where the agent should know who it is talking to.

Context can be a string:

{
  "message": "How do I upgrade?",
  "context": "Customer Alice, Pro plan, account 12345"
}

Or a key-value object:

{
  "message": "How do I upgrade?",
  "context": {
    "customerName": "Alice",
    "plan": "Pro",
    "accountId": "12345"
  }
}

Both formats work. An object is automatically flattened into key: value lines. The agent sees the context at the beginning of the message and can use it to personalize its response.

Context is limited to 2000 characters after formatting.
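To predict what the agent will actually see, you can approximate that formatting client-side. Treat this as a sketch: the exact rendering, and where the 2000-character limit is enforced, are assumptions rather than documented behavior.

```javascript
// Approximate the server-side rendering of a context object as
// "key: value" lines. The exact formatting and the point where the
// 2000-character limit is enforced are assumptions.
function formatContext(context) {
  const text =
    typeof context === 'string'
      ? context
      : Object.entries(context)
          .map(([key, value]) => `${key}: ${value}`)
          .join('\n')
  if (text.length > 2000) {
    throw new Error(`context too long: ${text.length} chars (max 2000)`)
  }
  return text
}
```

Checking the length before sending lets you fail fast in your own code instead of relying on the API to truncate or reject.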

Important: Pass context from your backend server. Your backend knows who the logged-in user is and can attach their account details. Never let the browser send context directly, as users could tamper with it.
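In practice that means your route handler builds the context from its own session data and ignores anything context-like in the browser's request. A sketch, where the user object's shape and the session access are assumptions about your app, not the Agento API:

```javascript
// Build the chat payload server-side from the authenticated user record.
// The browser supplies only the message text; every context field comes
// from your own session/database, so it cannot be spoofed.
// (The user object's shape is an assumption about your app.)
function buildChatPayload(message, user) {
  return {
    message,
    context: {
      customerName: user.name,
      plan: user.plan,
      accountId: user.accountId,
    },
  }
}

// In a route handler:
//   const payload = buildChatPayload(req.body.message, req.session.user)
//   ...then forward payload to the chat endpoint with your API key
```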

Isolate Customers with peerId

When multiple customers share the same agent via your API, each customer needs their own isolated session. Without isolation, the agent's memory could leak information between customers.

The peerId field solves this. Pass your customer's unique identifier (account ID, user ID, etc.) as the peerId, and the agent creates a separate session scope for each customer:

{
  "message": "How do I upgrade?",
  "peerId": "customer_12345",
  "context": {
    "customerName": "Alice",
    "plan": "Pro"
  }
}

Different peer IDs get completely separate conversation histories. The agent will never recall memories from customer A when talking to customer B.

Always set peerId when your agent serves multiple end users. Use a stable, unique identifier from your system (account ID, user UUID, etc.). Without it, all API requests share a single session scoped to your API key.
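A small guard that derives the peerId from your own account IDs makes it hard to forget. The customer_ prefix here is only a naming convention, not an API requirement:

```javascript
// Derive a stable peerId from your own user record so every customer
// gets an isolated session scope. The "customer_" prefix is only a
// naming convention; any stable unique string works.
function peerIdFor(user) {
  if (!user.accountId) {
    // Refuse to chat without isolation rather than silently sharing
    // one session across all customers
    throw new Error('cannot derive peerId: user has no accountId')
  }
  return `customer_${user.accountId}`
}

// peerIdFor({ accountId: '12345' }) and peerIdFor({ accountId: '67890' })
// map to different scopes, so their histories never mix
```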

Manage Sessions

By default, each peerId gets one session. To give a customer multiple independent conversations (e.g. separate support tickets), combine peerId with sessionId:

await sendMessage(agentId, 'Hello!', {
  apiKey,
  peerId: customer.id,
  sessionId: `ticket_${ticketId}`,
  context: { customerName: customer.name },
})

To start a fresh conversation (clear the agent's memory of the current chat), create a new session:

curl -X POST https://api.agento.host/v1/agents/YOUR_AGENT_ID/chat/sessions \
  -H "X-Api-Key: ak_live_YOUR_KEY"

This returns a new session ID you can use for subsequent messages.
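The same call from JavaScript might look like this. One assumption to note: the sketch expects the response JSON to expose the new session ID in an id field; check the API Reference for the exact shape.

```javascript
// Create a fresh session and return its id for subsequent messages.
// Assumes the response JSON carries the id in an `id` field.
// fetchImpl is injectable so the function is testable without a network.
async function createSession(agentId, apiKey, fetchImpl = fetch) {
  const res = await fetchImpl(
    `https://api.agento.host/v1/agents/${agentId}/chat/sessions`,
    { method: 'POST', headers: { 'X-Api-Key': apiKey } }
  )
  if (!res.ok) {
    throw new Error(`session create failed: ${res.status}`)
  }
  const session = await res.json()
  return session.id
}
```

Store the returned ID alongside the customer (for example in their support-ticket record) and pass it as sessionId on later messages.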

Retrieve Message History

When a returning visitor opens the chat widget, load their previous messages:

curl "https://api.agento.host/v1/agents/YOUR_AGENT_ID/chat/messages?sessionId=THEIR_SESSION" \
  -H "X-Api-Key: ak_live_YOUR_KEY"

Response:

{
  "data": [
    { "role": "user", "content": "Hello!", "timestamp": "2026-03-04T10:00:00.000Z" },
    { "role": "assistant", "content": "Hi! How can I help?", "timestamp": "2026-03-04T10:00:02.000Z" }
  ],
  "pagination": { "total": 2, "limit": 50, "offset": 0 }
}

Use the since query parameter to fetch only new messages since the last poll.
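A polling helper built on since might look like the sketch below. It assumes since accepts the timestamp of the last message you have seen, in the same ISO format as the response above:

```javascript
// Build the history URL, optionally filtered to messages after `since`.
// Assumes `since` takes an ISO timestamp like the ones in the response.
function historyUrl(agentId, sessionId, since) {
  const url = new URL(`https://api.agento.host/v1/agents/${agentId}/chat/messages`)
  url.searchParams.set('sessionId', sessionId)
  if (since) url.searchParams.set('since', since)
  return url.toString()
}

async function pollNewMessages(agentId, apiKey, sessionId, since) {
  const res = await fetch(historyUrl(agentId, sessionId, since), {
    headers: { 'X-Api-Key': apiKey },
  })
  if (!res.ok) throw new Error(`history fetch failed: ${res.status}`)
  const { data } = await res.json()
  return data // [{ role, content, timestamp }, ...]
}
```

Keep the timestamp of the newest message you have rendered and pass it as since on the next poll, so each poll returns only the delta.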

Tips

  • Rate limiting. The API has rate limits per API key. If you get 429 responses, back off and retry after a short delay.
  • Error handling. Always check the response status before reading the stream. Non-200 responses return a JSON error body, not SSE.
  • Session lifecycle. Sessions persist until you create a new one. For support widgets, consider creating a new session for each new support ticket or visit.
  • Backend proxy. In production, run all API calls through your backend. This keeps your API key secret and lets you attach verified context.
  • Content-Type. The streaming response uses text/event-stream. Make sure your infrastructure (reverse proxies, CDNs) does not buffer SSE responses. Set X-Accel-Buffering: no if using Nginx.
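The rate-limit and error-handling tips above can be combined into a small retry helper. The delay schedule and retry count here are arbitrary choices, not API requirements:

```javascript
// Exponential backoff schedule: 500ms, 1s, 2s, ... (arbitrary choice)
function backoffDelay(attempt, baseMs = 500) {
  return baseMs * 2 ** attempt
}

// Retry a request while the API answers 429 Too Many Requests.
// fetchImpl is injectable for testing.
async function fetchWithRetry(url, options, maxRetries = 3, fetchImpl = fetch) {
  for (let attempt = 0; ; attempt++) {
    const res = await fetchImpl(url, options)
    if (res.status !== 429 || attempt >= maxRetries) return res
    await new Promise((resolve) => setTimeout(resolve, backoffDelay(attempt)))
  }
}
```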

For full endpoint details, see the API Reference.
