
Streaming & Events

Build responsive UIs by streaming tokens and listening to agent lifecycle events.

Why Stream?

LLMs are slow: a complex agent run can take 30 seconds or more. Streaming lets you render the first token immediately, keeping the user engaged while the rest of the response generates.

The Stream API

Instead of `agent.run()`, use `agent.stream()`. This returns an async generator that yields events.

stream-example.ts
```ts
const stream = agent.stream("Write a short poem about Rust")

for await (const event of stream) {
  switch (event.type) {
    case 'token':
      // The raw text chunk (e.g. "The ", "iron ", "rusts")
      process.stdout.write(event.data)
      break

    case 'tool_start':
      console.log(`\n[Using Tool: ${event.data.tool}]`)
      break

    case 'tool_end':
      console.log(`[Tool Result: ${event.data.output}]`)
      break
  }
}
```
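Often you want the full answer as well as the live tokens, e.g. to save the completed response once streaming ends. A minimal sketch of accumulating tokens, using a mock async generator in place of `agent.stream()` (the event shape mirrors the example above; `mockStream` is an illustrative stand-in, not part of the SDK):

```ts
// Event shape assumed from the streaming example above.
type StreamEvent =
  | { type: 'token'; data: string }
  | { type: 'tool_start'; data: { tool: string; input: unknown } }
  | { type: 'tool_end'; data: { output: string } }

// Hypothetical stand-in for agent.stream(), which needs a live agent.
async function* mockStream(): AsyncGenerator<StreamEvent> {
  yield { type: 'token', data: 'The ' }
  yield { type: 'token', data: 'iron ' }
  yield { type: 'token', data: 'rusts' }
}

// Render tokens as they arrive, while also collecting the final string.
async function collect(stream: AsyncGenerator<StreamEvent>): Promise<string> {
  let answer = ''
  for await (const event of stream) {
    if (event.type === 'token') answer += event.data
  }
  return answer
}

collect(mockStream()).then(answer => console.log(answer)) // prints "The iron rusts"
```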

Frontend Integration (React/Next.js)

Using the Vercel AI SDK on the frontend with Akios on the backend is a powerful combination.

Edge Runtime

Streaming works best on the Edge. Ensure your API route uses `export const runtime = 'edge'`.
app/api/chat/route.ts
```ts
import { StreamingTextResponse } from 'ai'
import { Agent } from '@akios/sdk'

export async function POST(req: Request) {
  const { messages } = await req.json()
  const lastMessage = messages[messages.length - 1].content

  const agent = new Agent({ /* config */ })

  // Convert the Akios event stream into a byte stream of raw text.
  // Response bodies expect Uint8Array chunks, so encode each token.
  const encoder = new TextEncoder()
  const stream = new ReadableStream({
    async start(controller) {
      for await (const event of agent.stream(lastMessage)) {
        if (event.type === 'token') {
          controller.enqueue(encoder.encode(event.data))
        }
      }
      controller.close()
    }
  })

  return new StreamingTextResponse(stream)
}
```
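On the client, the response body from this route is a standard `ReadableStream` of UTF-8 bytes, which you can read chunk by chunk without the Vercel AI SDK. A sketch of the reading logic, where `makeBody()` is a local stand-in for `res.body` from a `fetch('/api/chat', ...)` call (the route path is this page's example, not a fixed convention):

```ts
// Local stand-in for the streamed response body produced by the route above.
const encoder = new TextEncoder()
function makeBody(): ReadableStream<Uint8Array> {
  return new ReadableStream({
    start(controller) {
      for (const chunk of ['Hello', ', ', 'world']) {
        controller.enqueue(encoder.encode(chunk))
      }
      controller.close()
    }
  })
}

// Read the byte stream and decode it back into text incrementally.
async function readStream(stream: ReadableStream<Uint8Array>): Promise<string> {
  const reader = stream.getReader()
  const decoder = new TextDecoder()
  let text = ''
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    text += decoder.decode(value, { stream: true })
  }
  return text
}

readStream(makeBody()).then(text => console.log(text)) // prints "Hello, world"
```

In a real client you would append each decoded chunk to UI state as it arrives rather than waiting for the full string.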

Event Types Reference

| Event Type | Data Payload | Description |
| --- | --- | --- |
| `token` | `string` | A text chunk from the LLM. |
| `tool_start` | `{ tool: string, input: any }` | The agent decided to call a tool. |
| `tool_end` | `{ output: string }` | Tool execution completed. |
| `step` | `StepObject` | A full thought/action cycle finished. |
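The table above maps naturally onto a TypeScript discriminated union, so switching on `type` narrows the payload automatically. One possible typing, assuming the payloads listed in the table (the fields of `StepObject` are not documented here, so the interface below is a placeholder):

```ts
// Placeholder shape; the real StepObject fields are defined by the SDK.
interface StepObject { thought: string; action?: string }

// Discriminated union mirroring the event reference table.
type AgentEvent =
  | { type: 'token'; data: string }
  | { type: 'tool_start'; data: { tool: string; input: unknown } }
  | { type: 'tool_end'; data: { output: string } }
  | { type: 'step'; data: StepObject }

// Narrowing on `type` gives the correct payload type in each branch.
function describe(event: AgentEvent): string {
  switch (event.type) {
    case 'token': return `token: ${event.data}`
    case 'tool_start': return `calling ${event.data.tool}`
    case 'tool_end': return `result: ${event.data.output}`
    case 'step': return `step: ${event.data.thought}`
  }
}

console.log(describe({ type: 'token', data: 'hi' })) // prints "token: hi"
```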