Quickstart

Get full observability on your AI pipeline in 5 minutes.

Prerequisites

  • A Trainly account (sign up free)
  • Python 3.8+ (this quickstart uses the Python SDK) or Node.js 16+
  • An API key for OpenAI, Anthropic, or another LLM provider

Step 1: Install the SDK

pip install trainly

Step 2: Get your API key

  1. Go to your Trainly Dashboard
  2. Navigate to Settings → API Keys
  3. Click Create API Key — it starts with tk_
  4. Copy your Project ID — it starts with proj_

Set these as environment variables so you don’t hardcode them:

export TRAINLY_API_KEY=tk_your_key_here
export TRAINLY_PROJECT_ID=proj_your_project_here
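Before moving on, it can help to confirm both variables are actually set in your shell. The variable names come from the step above; `check_trainly_env` is just an illustrative helper, not part of the Trainly SDK:

```python
import os

def check_trainly_env():
    """Return the names of any missing Trainly credential variables."""
    required = ("TRAINLY_API_KEY", "TRAINLY_PROJECT_ID")
    return [name for name in required if not os.environ.get(name)]

missing = check_trainly_env()
if missing:
    print("Missing:", ", ".join(missing))
```

If anything prints, re-run the `export` commands in the same shell you'll run your app from.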

Step 3: Add @observe to your AI function

from trainly import TrainlyClient
from openai import OpenAI

# Initialize (reads from env vars automatically)
trainly = TrainlyClient()
openai_client = OpenAI()

@trainly.observe(model="gpt-4o", tags=["quickstart"])
def ask(question: str) -> str:
    response = openai_client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": question}]
    )
    return response.choices[0].message.content

# Run it
result = ask("Explain AI observability in one sentence.")
print(result)
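If you're curious what a tracing decorator like `@observe` does conceptually, here is a minimal, purely illustrative sketch (not Trainly's actual implementation): it wraps the function, times the call, and records the input and output. A real SDK would ship the trace to a backend instead of stashing it on the wrapper.

```python
import functools
import time

def observe_sketch(model, tags=None):
    """Toy tracing decorator: captures input, output, and latency."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            result = fn(*args, **kwargs)
            wrapper.last_trace = {
                "name": fn.__name__,
                "model": model,
                "tags": tags or [],
                "input": {"args": args, "kwargs": kwargs},
                "output": result,
                "latency_ms": (time.perf_counter() - start) * 1000,
            }
            return result
        return wrapper
    return decorator

@observe_sketch(model="gpt-4o", tags=["quickstart"])
def echo(text):
    return text.upper()
```

After calling `echo("hi")`, `echo.last_trace` holds the captured fields (name, model, tags, latency) that a dashboard would display per trace.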

Step 4: View your traces

Open your Trainly Dashboard and navigate to the Trace Explorer. You should see your trace with:
  • Input/Output — the exact prompt and response
  • Model — which LLM was used
  • Latency — how long the call took
  • Tokens — prompt and completion token counts
  • Cost — estimated cost based on model pricing

What’s next?

Core Concepts

Learn about traces, spans, sessions, and scoring.

Python SDK

Deep dive into @observe, manual logging, and agent sessions.

React SDK

Set up tracing in your React app with useTrainlyObserve.

API Reference

Direct REST API access for trace ingestion and analytics.