Score a Trace
POST /v1/{project_id}/traces/{trace_id}/scores
Request Body
| Parameter | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Score name (e.g. "accuracy", "relevance", "toxicity") |
| value | float | Yes | Score between 0 and 1 |
| comment | string | No | Human-readable justification |
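The request body above can be built and sent with any HTTP client. A minimal sketch in Python; the base URL, project/trace IDs, and auth header shown in the comment are placeholders, not documented values:

```python
import json

def build_score_payload(name, value, comment=None):
    """Build the request body for POST /v1/{project_id}/traces/{trace_id}/scores."""
    if not 0.0 <= value <= 1.0:
        raise ValueError("value must be between 0 and 1")
    payload = {"name": name, "value": value}
    if comment is not None:
        payload["comment"] = comment
    return payload

# Example: score a trace for accuracy with a justification.
payload = build_score_payload("accuracy", 0.92, comment="Matched the reference answer")
# Send with any HTTP client, e.g. (BASE_URL and API_KEY are placeholders):
#   requests.post(f"{BASE_URL}/v1/{project_id}/traces/{trace_id}/scores",
#                 json=payload, headers={"Authorization": f"Bearer {API_KEY}"})
print(json.dumps(payload))
```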
Response
LLM Judge Scoring
Run an automated evaluation using a built-in or custom LLM judge scorer.
POST /v1/{project_id}/score-with-judge
Request Body
| Parameter | Type | Required | Description |
|---|---|---|---|
| scorer_slug | string | Yes | Identifier for the scorer (e.g. "correctness", "faithfulness", "toxicity") |
| trace_id | string | No | Link the score to an existing trace |
| input | string | No | The original input/prompt |
| output | string | No | The model output to evaluate |
| expected_output | string | No | Ground-truth expected output for comparison |
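Since every field except scorer_slug is optional, a request body for this endpoint only needs to carry the fields that are set. A minimal payload-builder sketch in Python; the helper name and the example values are illustrative, not part of the API:

```python
def build_judge_payload(scorer_slug, trace_id=None, input=None,
                        output=None, expected_output=None):
    """Build the request body for POST /v1/{project_id}/score-with-judge.

    Only scorer_slug is required; optional fields are omitted when unset.
    """
    payload = {"scorer_slug": scorer_slug}
    optional = {
        "trace_id": trace_id,
        "input": input,
        "output": output,
        "expected_output": expected_output,
    }
    payload.update({k: v for k, v in optional.items() if v is not None})
    return payload

# Example: evaluate a model answer for correctness against a reference.
payload = build_judge_payload(
    "correctness",
    input="What is the capital of France?",
    output="Paris",
    expected_output="Paris",
)
```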