Installation
Quick Start
@observe Decorator
The @observe decorator is the primary way to instrument your AI calls. It automatically captures inputs, outputs, latency, and exceptions.
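Conceptually, a decorator like this wraps your function, times the call, and records the result or the exception. The following is an illustrative re-implementation of that mechanic, not the SDK's actual code (the record field names are assumptions):

```python
import functools
import time

TRACES = []  # sketch: captured trace records accumulate here

def observe(fn):
    """Illustrative sketch of what an @observe-style decorator captures."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        record = {"span_name": fn.__name__,
                  "input": {"args": args, "kwargs": kwargs}}
        start = time.perf_counter()
        try:
            record["output"] = fn(*args, **kwargs)
            record["status"] = "success"
            return record["output"]
        except Exception as exc:
            # exceptions are logged as trace errors, then re-raised
            record["status"] = "error"
            record["error"] = str(exc)
            raise
        finally:
            record["latency_ms"] = (time.perf_counter() - start) * 1000
            TRACES.append(record)
    return wrapper

@observe
def answer(question: str) -> str:
    # stand-in for a real model call
    return f"echo: {question}"
```

The key design point is the `finally` block: latency is recorded and the trace is flushed whether the call succeeded or raised.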
Parameters
| Parameter | Type | Description |
|---|---|---|
| model | str | Model name (e.g. "gpt-4o", "claude-sonnet-4-20250514") |
| tags | list[str] | Filterable labels |
| expected_output | str | Ground truth for evaluation |
| trace_id | str | Custom trace identifier |
| metadata | dict | Arbitrary key-value pairs |
| version | str | Code or prompt version tag |
| custom_attributes | dict | Additional structured data |
| span_name | str | Override the default span name |
| capture_exceptions | bool | Log exceptions as trace errors (default True) |
| session_id | str | Group traces into a session |
With All Options
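A parameterized `@observe(...)` form would carry every field from the table above onto the trace record. This sketch assumes that call shape and those field names; it is not the SDK's implementation:

```python
import functools
import time

TRACES = []  # sketch: captured trace records accumulate here

def observe(model=None, tags=None, expected_output=None, trace_id=None,
            metadata=None, version=None, custom_attributes=None,
            span_name=None, capture_exceptions=True, session_id=None):
    """Sketch of @observe with every parameter from the table above."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            record = {
                "span_name": span_name or fn.__name__,
                "model": model,
                "tags": tags or [],
                "expected_output": expected_output,
                "trace_id": trace_id,
                "metadata": metadata or {},
                "version": version,
                "custom_attributes": custom_attributes or {},
                "session_id": session_id,
            }
            start = time.perf_counter()
            try:
                record["output"] = fn(*args, **kwargs)
                record["status"] = "success"
                return record["output"]
            except Exception as exc:
                if capture_exceptions:
                    record["status"] = "error"
                    record["error"] = str(exc)
                raise
            finally:
                record["latency_ms"] = (time.perf_counter() - start) * 1000
                TRACES.append(record)
        return wrapper
    return decorator

@observe(model="gpt-4o", tags=["demo"], version="v2",
         session_id="sess-1", span_name="answer-step")
def answer(question):
    return question.upper()
```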
Manual Logging
Use client.observe.log() when you need full control over what gets recorded.
Parameters
| Parameter | Type | Description |
|---|---|---|
| input | str | The input sent to the model |
| output | str | The model’s response |
| model | str | Model identifier |
| latency_ms | float | Execution time in milliseconds |
| tags | list[str] | Filterable labels |
| expected_output | str | Ground truth for evaluation |
| trace_id | str | Custom trace identifier |
| token_usage | dict | Token counts (prompt_tokens, completion_tokens) |
| metadata | dict | Arbitrary key-value pairs |
| status | str | "success" or "error" |
| error | str | Error message if applicable |
| version | str | Version tag |
| custom_attributes | dict | Additional structured data |
| tool_calls | list[dict] | Tool/function calls made during execution |
| input_structured | dict | Structured input (e.g. message arrays) |
| output_structured | dict | Structured output (e.g. parsed JSON) |
| spans | list[dict] | Sub-step span data |
| cost | float | Estimated cost in USD |
| session_id | str | Session grouping identifier |
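Conceptually, manual logging just assembles one trace record from these fields at whatever point you choose. A minimal sketch of that assembly (not the SDK itself; the record shape is an assumption):

```python
import time
import uuid

TRACES = []  # sketch: logged records accumulate here

def log(input, output, model, latency_ms, status="success", **extra):
    """Build one trace record; extra kwargs map to the optional fields above."""
    record = {
        "trace_id": extra.pop("trace_id", None) or str(uuid.uuid4()),
        "input": input,
        "output": output,
        "model": model,
        "latency_ms": latency_ms,
        "status": status,
        "timestamp": time.time(),
        **extra,  # tags, token_usage, cost, session_id, ...
    }
    TRACES.append(record)
    return record

rec = log(
    input="What is 2+2?",
    output="4",
    model="gpt-4o",
    latency_ms=312.5,
    token_usage={"prompt_tokens": 12, "completion_tokens": 1},
    cost=0.0004,
    tags=["math"],
)
```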
Sessions
Group related traces into a session using the agent_session() context manager. This is useful for multi-step agent workflows.
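One common way to implement this kind of grouping is a context variable that stamps a shared session_id onto every record logged inside the block. A sketch of that pattern, assuming the SDK works roughly this way:

```python
import contextlib
import contextvars
import uuid

_session_id = contextvars.ContextVar("session_id", default=None)

@contextlib.contextmanager
def agent_session(session_id=None):
    """Sketch: every record logged inside the block shares one session_id."""
    token = _session_id.set(session_id or str(uuid.uuid4()))
    try:
        yield _session_id.get()
    finally:
        _session_id.reset(token)  # restore the previous session, if any

def log_step(name):
    # stand-in for a real trace-logging call
    return {"name": name, "session_id": _session_id.get()}

with agent_session("checkout-123"):
    plan = log_step("plan")
    execute = log_step("execute")
```

Using a `ContextVar` rather than a global keeps sessions isolated across threads and async tasks.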
Spans
Break a single trace into sub-steps with the span() context manager.
Span Methods
| Method | Description |
|---|---|
| set_input(value) | Record the span’s input |
| set_output(value) | Record the span’s output |
| set_attribute(key, value) | Attach a custom attribute |
| set_model(name) | Set the model used in this span |
| set_token_usage(usage) | Record token counts |
| set_cost(amount) | Record cost in USD |
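A span is essentially a timed context manager that collects these fields into one sub-step record. This sketch mirrors the method names in the table above but is an illustration, not the SDK's class:

```python
import time

class Span:
    """Sketch of a span sub-step; method names mirror the table above."""
    def __init__(self, name):
        self.data = {"name": name, "attributes": {}}

    def __enter__(self):
        self._start = time.perf_counter()
        return self

    def __exit__(self, exc_type, exc, tb):
        self.data["latency_ms"] = (time.perf_counter() - self._start) * 1000
        return False  # never swallow exceptions

    def set_input(self, value):
        self.data["input"] = value

    def set_output(self, value):
        self.data["output"] = value

    def set_attribute(self, key, value):
        self.data["attributes"][key] = value

    def set_model(self, name):
        self.data["model"] = name

    def set_token_usage(self, usage):
        self.data["token_usage"] = usage

    def set_cost(self, amount):
        self.data["cost"] = amount

with Span("retrieve") as span:
    span.set_input("user query")
    span.set_model("gpt-4o")
    span.set_output("3 documents")
    span.set_cost(0.0002)
```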
Scoring
Score traces for evaluation, either manually or with an AI judge.
Prompt Management
Manage versioned prompts and build them with template variables.
Analytics
Access trace analytics and cost data programmatically.
Testing
Create test suites, add cases, and run evaluations against your AI functions.
Versions
Publish, track, and roll back versioned deployments of your AI pipelines.
Error Handling
All SDK errors raise TrainlyError with structured context.
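A structured exception of this kind typically carries machine-readable fields alongside the message. The attribute names below are illustrative assumptions, not the SDK's actual attributes:

```python
class TrainlyError(Exception):
    """Sketch of a structured SDK error (attribute names are assumptions)."""
    def __init__(self, message, *, code=None, context=None):
        super().__init__(message)
        self.code = code              # e.g. an HTTP status or error code
        self.context = context or {}  # structured details for debugging

# handling pattern: catch the SDK's base error and read its fields
try:
    raise TrainlyError("trace upload failed", code=429,
                       context={"trace_id": "tr_123", "retry_after": 2})
except TrainlyError as err:
    info = {"message": str(err), "code": err.code, **err.context}
```

Catching one base class with structured fields lets callers branch on `code` (e.g. retry on 429) instead of parsing error strings.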