## Installation

Install the Trainly React SDK from npm:

```bash
npm install @trainly/react
```

Requires React 18+ and TypeScript 5.0+. The SDK is fully tree-shakeable.
## Quick Start

Wrap your application with `TrainlyProvider` and start tracing AI calls in under five minutes.

```tsx
import { TrainlyProvider } from '@trainly/react';

function App() {
  return (
    <TrainlyProvider projectId="proj_..." apiKey="tk_...">
      <MyAIFeature />
    </TrainlyProvider>
  );
}
```
## TrainlyProvider

The root provider that initializes the SDK and manages connection state.

```tsx
<TrainlyProvider
  projectId="proj_abc123"
  apiKey="tk_live_xyz789"
  baseUrl="https://api.trainly.dev"
>
  <App />
</TrainlyProvider>
```
### Props

| Prop | Type | Required | Description |
|---|---|---|---|
| `projectId` | `string` | Yes | Your Trainly project ID (starts with `proj_`) |
| `apiKey` | `string` | Conditional | API key for authentication (starts with `tk_`). Required unless using V1 OAuth. |
| `appId` | `string` | Conditional | App ID for V1 OAuth mode (starts with `app_`). Required for multi-user apps. |
| `baseUrl` | `string` | No | Custom API endpoint. Defaults to Trainly cloud. |
| `getToken` | `() => Promise<string>` | Conditional | Async function that returns a fresh OAuth token. Required when using V1 OAuth. |
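The conditional columns above describe two mutually exclusive modes: `apiKey` for single-tenant API-key auth, or `appId` plus `getToken` for V1 OAuth. A minimal sketch of that decision logic (the function and error message are illustrative, not the SDK's internals):

```typescript
type AuthMode = 'api-key' | 'v1-oauth';

// Sketch of the prop combinations the table describes: appId + getToken
// select V1 OAuth; apiKey selects API-key mode. Illustrative only.
function resolveAuthMode(props: {
  apiKey?: string;
  appId?: string;
  getToken?: () => Promise<string>;
}): AuthMode {
  if (props.appId && props.getToken) return 'v1-oauth';
  if (props.apiKey) return 'api-key';
  throw new Error('Provide apiKey, or appId + getToken for V1 OAuth');
}
```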
## useTrainlyObserve

The primary hook for AI observability. Returns functions to trace, wrap, span, and score your AI calls.

```tsx
import { useTrainlyObserve } from '@trainly/react';

function MyAIFeature() {
  const { trace, wrap, startSpan, score } = useTrainlyObserve({
    model: 'gpt-4o',
    tags: ['production'],
  });
  // ...
}
```
### trace(input, output, options?)

Log a completed AI call manually.

```tsx
const handleQuery = async () => {
  const prompt = "Summarize this document";
  const result = await callOpenAI(prompt);

  trace(prompt, result, {
    model: 'gpt-4o',
    tags: ['summarization'],
    tokenUsage: { prompt: 120, completion: 85, total: 205 },
    cost: 0.0041,
    metadata: { documentId: 'doc_abc' },
  });
};
```
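The example above passes `cost` explicitly. If your provider doesn't return a cost, you can derive one from the token counts before calling `trace`; the helper and per-1K-token rates below are illustrative placeholders, not real pricing and not part of the SDK:

```typescript
// Hypothetical per-1K-token rates in USD (illustrative only, not real pricing).
const RATES: Record<string, { prompt: number; completion: number }> = {
  'gpt-4o': { prompt: 0.005, completion: 0.015 },
};

// Estimate a USD cost for a trace's tokenUsage payload.
function estimateCost(
  model: string,
  usage: { prompt: number; completion: number; total: number },
): number {
  const rate = RATES[model];
  if (!rate) return 0; // unknown model: report no cost rather than guess
  return (usage.prompt / 1000) * rate.prompt + (usage.completion / 1000) * rate.completion;
}
```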
### wrap(fn, options?)

Wrap any async function for automatic tracing. The SDK captures input arguments, return values, latency, and errors.

```tsx
const tracedFetch = wrap(
  async (prompt: string) => {
    const res = await fetch('/api/ai', {
      method: 'POST',
      body: JSON.stringify({ prompt }),
    });
    return res.json();
  },
  { model: 'gpt-4o', tags: ['api-call'] }
);

// Every call is automatically traced
const result = await tracedFetch("What is the weather?");
```
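To build intuition for what a wrap-style helper does, here is a minimal standalone sketch: it times the call and reports input, output, and errors to a collector. This is an assumption about the general pattern, not the SDK's actual implementation:

```typescript
interface TraceRecord {
  name: string;
  input: unknown[];
  output?: unknown;
  error?: string;
  latencyMs: number;
}

// Minimal wrap-style helper: returns a function with the same signature
// that reports a TraceRecord on every call. Illustrative only.
function wrapTraced<A extends unknown[], R>(
  name: string,
  fn: (...args: A) => Promise<R>,
  report: (rec: TraceRecord) => void,
): (...args: A) => Promise<R> {
  return async (...args: A) => {
    const start = Date.now();
    try {
      const output = await fn(...args);
      report({ name, input: args, output, latencyMs: Date.now() - start });
      return output;
    } catch (err) {
      // Errors are recorded, then rethrown so callers still see them.
      report({ name, input: args, error: String(err), latencyMs: Date.now() - start });
      throw err;
    }
  };
}
```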
### startSpan(name, options?)

Create a manual span for fine-grained tracing of multi-step pipelines.

```tsx
const handlePipeline = async () => {
  const span = startSpan('data-processing-pipeline', {
    tags: ['pipeline'],
  });

  try {
    span.setInput({ query: userQuery });

    // Step 1: Embedding
    const embeddingSpan = startSpan('generate-embedding');
    const embedding = await getEmbedding(userQuery);
    embeddingSpan.setModel('text-embedding-3-small');
    embeddingSpan.setTokenUsage({ prompt: 10, completion: 0, total: 10 });
    embeddingSpan.setOutput(embedding);
    embeddingSpan.end();

    // Step 2: LLM call
    const llmSpan = startSpan('llm-generation');
    const response = await callLLM(userQuery, context);
    llmSpan.setModel('gpt-4o');
    llmSpan.setCost(0.012);
    llmSpan.setOutput(response);
    llmSpan.end();

    span.setOutput({ response });
    span.end();
  } catch (error) {
    span.setError(error);
    span.end();
  }
};
```
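One pitfall in hand-written pipelines like the one above: if a step throws after its child span was started, that child span is never ended. A small try/finally helper can guarantee cleanup. The `SpanLike` shape and `withSpan` name below are illustrative sketches, not the SDK's actual `SpanHandle` API:

```typescript
// Minimal span-like shape for illustration; the real span handle has more methods.
interface SpanLike {
  setOutput(data: unknown): void;
  setError(err: unknown): void;
  end(): void;
}

// Run a step inside a span and guarantee end() is called on both the
// success and the error path, so child spans never leak.
async function withSpan<T>(
  start: () => SpanLike,
  step: () => Promise<T>,
): Promise<T> {
  const span = start();
  try {
    const out = await step();
    span.setOutput(out);
    return out;
  } catch (err) {
    span.setError(err);
    throw err;
  } finally {
    span.end();
  }
}
```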
### Span Methods

| Method | Description |
|---|---|
| `end()` | Finalize and submit the span |
| `setAttribute(key, value)` | Attach a custom attribute |
| `setError(error)` | Mark the span as failed with an error |
| `setModel(name)` | Set the model used in this span |
| `setTokenUsage({ prompt, completion, total })` | Record token counts |
| `setCost(amount)` | Record the cost in USD |
| `setInput(data)` | Set structured input data |
| `setOutput(data)` | Set structured output data |
### score(traceId, name, value, comment?)

Score a trace for evaluation and quality tracking.

```tsx
// Score on a 0-1 scale
score('trace_abc123', 'accuracy', 0.95, 'Response matched expected output');
score('trace_abc123', 'helpfulness', 0.8);

// Binary scoring
score('trace_abc123', 'thumbs-up', 1);
```
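If your UI collects ratings on some other scale (stars, 1-10 sliders), normalize them to the 0-1 range before calling `score`. The helper below is an illustrative example, not part of the SDK:

```typescript
// Map a 1-5 star rating onto the 0-1 scale used by score().
// Out-of-range input is clamped rather than rejected.
function starsToScore(stars: number): number {
  const clamped = Math.min(5, Math.max(1, stars));
  return (clamped - 1) / 4;
}
```

For example, a 4-star rating becomes `0.75`, which you would then pass as the score value.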
## TraceOptions

All tracing functions accept these options:

| Option | Type | Description |
|---|---|---|
| `model` | `string` | Model identifier (e.g., `gpt-4o`, `claude-3-opus`) |
| `tags` | `string[]` | Tags for filtering and grouping traces |
| `version` | `string` | Version label for A/B testing prompts |
| `metadata` | `Record<string, any>` | Arbitrary metadata attached to the trace |
| `customAttributes` | `Record<string, any>` | Custom key-value attributes |
| `sessionId` | `string` | Session ID for grouping related traces |
| `traceId` | `string` | Custom trace ID (auto-generated if omitted) |
| `expectedOutput` | `string` | Expected output for evaluation comparison |
| `tokenUsage` | `{ prompt, completion, total }` | Token usage breakdown |
| `cost` | `number` | Cost in USD |
| `toolCalls` | `ToolCall[]` | Tool/function calls made during the trace |
| `inputStructured` | `object` | Structured input (when input is not a plain string) |
| `outputStructured` | `object` | Structured output (when output is not a plain string) |
## TrainlySessionProvider

Groups traces into sessions automatically. Useful for tracking multi-turn agent interactions or user workflows.

```tsx
import { TrainlySessionProvider, useTrainlySession } from '@trainly/react';

function AgentWorkflow() {
  return (
    <TrainlySessionProvider>
      <AgentComponent />
    </TrainlySessionProvider>
  );
}

function AgentComponent() {
  const { sessionId, startSession, endSession } = useTrainlySession();
  const { trace } = useTrainlyObserve();

  // All traces within this provider are auto-tagged with sessionId
  const handleNewConversation = () => {
    startSession(); // Generate a new sessionId
  };

  const handleEnd = () => {
    endSession(); // Close the current session
  };

  return <div>Session: {sessionId}</div>;
}
```
### useTrainlySession()

| Return Value | Type | Description |
|---|---|---|
| `sessionId` | `string` | Current session identifier |
| `startSession` | `() => void` | Start a new session (generates a new ID) |
| `endSession` | `() => void` | End the current session |
## V1 OAuth Authentication

For multi-user applications where each end-user should have isolated trace data, use V1 OAuth mode with your existing auth provider.

```tsx
import { TrainlyProvider } from '@trainly/react';
import { useAuth } from '@clerk/nextjs';

function App() {
  const { getToken } = useAuth();

  return (
    <TrainlyProvider
      appId="app_abc123"
      projectId="proj_xyz789"
      getToken={() => getToken({ template: 'trainly' })}
    >
      <MyApp />
    </TrainlyProvider>
  );
}
```
Supported identity providers: Clerk, Auth0, AWS Cognito, Firebase Auth, and any custom OIDC-compliant provider.
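For a custom OIDC provider, `getToken` can cache the token until shortly before expiry so the SDK does not trigger a network round-trip on every call. The factory below is a sketch under assumptions; the response shape, refresh skew, and names are hypothetical, not Trainly requirements:

```typescript
// Wrap a token fetcher with expiry-aware caching. fetchToken is whatever
// call your OIDC provider exposes; expiresAt is a Unix timestamp in ms.
function makeGetToken(
  fetchToken: () => Promise<{ token: string; expiresAt: number }>,
  skewMs = 30_000, // refresh this long before actual expiry
): () => Promise<string> {
  let cached: { token: string; expiresAt: number } | null = null;
  return async () => {
    if (!cached || cached.expiresAt - skewMs <= Date.now()) {
      cached = await fetchToken();
    }
    return cached.token;
  };
}
```

The returned function matches the `getToken` prop's `() => Promise<string>` signature, so it can be passed to `TrainlyProvider` directly.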
## Pre-built Components

The SDK includes optional UI components for specific use cases:

| Component | Description |
|---|---|
| `TrainlyChat` | Chat interface with theme support, citations, and file upload |
| `TrainlyUpload` | File upload with drag-drop, button, and minimal variants |
| `TrainlyStatus` | Connection status indicator |
| `TrainlyFileManager` | File management UI |
Pre-built components are designed for gated features and require additional configuration. The observability features (`useTrainlyObserve`, sessions, scoring) work independently of these components.
## TypeScript Support

The SDK ships with full TypeScript definitions. All hooks, components, and options are fully typed.

```ts
import type { TraceOptions, SpanHandle } from '@trainly/react';

const options: TraceOptions = {
  model: 'gpt-4o',
  tags: ['production'],
  tokenUsage: { prompt: 100, completion: 50, total: 150 },
};
```