Native Observability

See everything, from request to response. OpenTelemetry instrumentation built into the execution engine. Distributed traces connect your API request through workflow steps to every LLM call and tool invocation. No separate monitoring tools needed.

Trace: analyze_document
  API Request      142ms
  extract_text      89ms
  llm.summarize     1.2s
    openai.chat     1.1s

Built-in Tracing

OpenTelemetry instrumentation built into the runtime. Every workflow step, LLM call, and tool invocation is automatically traced. No manual instrumentation needed.
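The runtime's auto-instrumentation can be pictured as a decorator the engine applies to every step for you. A minimal sketch in plain Python (the `traced` decorator and `SPANS` list are illustrative, not this engine's actual API; a real setup would hand spans to an OpenTelemetry exporter):

```python
import time
from functools import wraps

# Collected spans. In a real OpenTelemetry pipeline these would flow to
# an exporter; a list keeps the sketch self-contained.
SPANS = []

def traced(name):
    """Hypothetical auto-instrumentation: wrap a workflow step in a
    timed span without the step's author writing any tracing code."""
    def decorate(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                SPANS.append({
                    "name": name,
                    "duration_ms": (time.perf_counter() - start) * 1000,
                })
        return wrapper
    return decorate

@traced("extract_text")
def extract_text(doc):
    return doc.upper()  # stand-in for real extraction work

extract_text("invoice")
```

The engine applies the equivalent of `traced(...)` to every step, LLM call, and tool invocation, which is why no manual instrumentation is needed.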

End-to-End Visibility

Distributed traces connect your API request through all workflow steps. See the complete execution path. Understand where time is spent and where failures occur.
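Distributed tracing stitches spans together through context propagation, commonly the W3C Trace Context `traceparent` header: every downstream call reuses the incoming trace ID, so all spans land in one trace. A simplified sketch (the helper names are illustrative):

```python
import secrets

def make_traceparent(trace_id=None, span_id=None):
    """Build a W3C traceparent header: version-traceid-spanid-flags."""
    trace_id = trace_id or secrets.token_hex(16)  # 32 hex chars
    span_id = span_id or secrets.token_hex(8)     # 16 hex chars
    return f"00-{trace_id}-{span_id}-01"

def parse_traceparent(header):
    version, trace_id, span_id, flags = header.split("-")
    return {"trace_id": trace_id, "span_id": span_id}

# The API request mints the trace; each workflow step reuses its
# trace_id with a fresh span_id, linking the whole execution path.
header = make_traceparent()
parent = parse_traceparent(header)
child_header = make_traceparent(trace_id=parent["trace_id"])
```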

Rich Context

Every span includes inputs, outputs, and metadata. Inspect LLM prompts and responses. View tool invocation details. Debug with complete context.
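The kind of context a span carries can be sketched as key-value attributes. The attribute names below loosely follow OpenTelemetry's GenAI semantic conventions; the span dict and `debug_view` helper are illustrative, not this engine's data model:

```python
# Example span with LLM call context attached as attributes.
span = {
    "name": "llm.summarize",
    "attributes": {
        "gen_ai.request.model": "gpt-4o",
        "gen_ai.prompt": "Summarize the attached document.",
        "gen_ai.completion": "The document describes...",
        "gen_ai.usage.input_tokens": 1834,
        "gen_ai.usage.output_tokens": 112,
    },
}

def debug_view(span):
    """Render a span's context roughly the way a trace viewer might."""
    lines = [span["name"]]
    lines += [f"  {key} = {value}" for key, value in span["attributes"].items()]
    return "\n".join(lines)
```

Because prompts, responses, and token counts ride along on the span, a failing LLM call can be debugged from the trace alone.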

Performance Insights

Identify bottlenecks with waterfall views. Track P50, P95, P99 latencies. Monitor LLM token usage and costs. Optimize based on real data.
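The P50/P95/P99 numbers come straight from collected span durations. A small sketch using Python's standard `statistics` module (the sample durations are made up):

```python
from statistics import quantiles

def latency_percentiles(durations_ms):
    """Compute P50/P95/P99 from raw span durations, as a tracing
    backend would when building a latency dashboard."""
    cuts = quantiles(durations_ms, n=100, method="inclusive")
    return {"p50": cuts[49], "p95": cuts[94], "p99": cuts[98]}

# e.g. 100 LLM-call durations (ms) harvested from spans
samples = list(range(100, 1100, 10))
stats = latency_percentiles(samples)
```

P95 and P99 matter more than averages here: a single slow LLM call can dominate tail latency while barely moving the mean.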

Monitor with complete visibility.

Get started