The Statsig Node AI SDK lets you manage your prompts, run online and offline evals, and capture telemetry to debug your LLM-powered features in production. It depends on the Statsig Node Server SDK, but provides convenient hooks for AI-specific functionality.
1. Install the SDK

```shell
npm install @statsig/statsig-ai
```
If you have unique setup needs, such as a frozen lockfile, take a look at the Node Server SDK docs - the AI SDK installs the Node Server SDK as a dependency if you don't already have it.
2. Initialize the SDK
If you already have a Statsig instance, you can pass it into the SDK. Otherwise, we’ll create an instance for you internally.
Initialize the AI SDK with a Server Secret Key from the Statsig console.
Server Secret Keys should always be kept private. If you expose one, you can
disable and recreate it in the Statsig console.
```typescript
import { StatsigAI } from '@statsig/statsig-ai';

const statsigAI = new StatsigAI({ sdkKey: 'YOUR_SERVER_SECRET_KEY' });
await statsigAI.initialize();
```
Initializing With Options
Optionally, you can configure StatsigOptions for your Statsig instance:
```typescript
import { StatsigAI, StatsigAIOptions } from '@statsig/statsig-ai';
import { StatsigOptions } from '@statsig/statsig-server-core-node';

// Optionally configure StatsigOptions for the underlying Statsig instance
const statsigOptions: StatsigOptions = {
  environment: 'production',
};

const statsigAI = new StatsigAI({ sdkKey: 'YOUR_SERVER_SECRET_KEY', statsigOptions });
await statsigAI.initialize();

// If you would like to use any Statsig methods, you can access the
// Statsig instance from the statsigAI instance:
const gate = statsigAI.getStatsig().checkGate(statsigUser, 'my_gate');
```
Statsig can act as the control plane for your LLM prompts, allowing you to version and change them without deploying code. For more information, see the Prompts documentation.
```typescript
import { StatsigUser } from '@statsig/statsig-ai';

// Create a user object
const user = new StatsigUser({ userID: 'a-user' });

// Get the prompt
const myPrompt = statsigAI.getPrompt(user, 'my_prompt');

// Use the live version of the prompt
const liveVersion = myPrompt.getLive();

// Get the candidate versions of the prompt
const candidateVersions = myPrompt.getCandidates();

// Use the live version of the prompt in a completion
const response = await openai.chat.completions.create({
  model: liveVersion.getModel({ fallback: 'gpt-4' }), // optional fallback
  temperature: liveVersion.getTemperature(),
  max_tokens: liveVersion.getMaxTokens(),
  messages: [{ role: 'user', content: 'Your prompt here' }],
});
```
When running an online eval, you can log results back to Statsig for analysis.
Provide a score between 0 and 1, along with the grader name and any useful metadata (e.g., session IDs).
Currently, you must provide the grader manually — future releases will support automated grading options.
```typescript
import { StatsigUser } from '@statsig/statsig-ai';

// Create a user object
const user = new StatsigUser({ userID: 'a-user' });

// Get the live version of the prompt
const livePromptVersion = statsigAI.getPrompt(user, 'my_prompt').getLive();

// Log the results of the eval
statsigAI.logEvalGrade(user, livePromptVersion, 0.5, 'my_grader', {
  session_id: '1234567890',
});

// Flush eval grade events to Statsig
await statsigAI.flush();
```
Programmatic evaluation lets you run evaluations over datasets in code, automatically scoring outputs and sending results to Statsig for analysis. With programmatic evaluation, you can:
- Run evaluations on datasets: Process arrays, iterators, or async generators of input/expected pairs
- Define custom tasks: Create functions that generate outputs from inputs (supports both sync and async)
- Score outputs: Use single or multiple named scorer functions to evaluate outputs (supports boolean, numeric, or metadata-rich scores)
- Use parameters: Pass dynamic parameters to tasks using Zod schemas (Node) or dictionaries (Python)
- Categorize data: Group evaluation records by categories for better analysis
- Compute summary scores: Aggregate results across all records with custom summary functions
- Handle errors gracefully: Task and scorer errors are caught and reported without stopping the evaluation
The evaluation automatically sends results to Statsig, where you can view them in the console alongside your other eval data.
Tasks and scorers can be async functions. Data can also be provided as async
functions, promises, or async iterators. The expected field in data records
is optional; scorers can evaluate outputs without expected values. Task and
scorer errors are automatically caught and reported in the results.
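To make the dataset/task/scorer pieces above concrete, here is a minimal sketch of how they fit together. The record, task, and scorer shapes follow the description above, but the `runEval` entry point and its option names are illustrative assumptions, not the confirmed API; check the SDK's evaluation reference for exact signatures.

```typescript
// One dataset record: an input and an optional expected value
type EvalRecord = { input: string; expected?: string };

// Dataset: a plain array (iterators and async generators are also supported)
const data: EvalRecord[] = [
  { input: '2 + 2', expected: '4' },
  { input: '3 * 3', expected: '9' },
];

// Task: generates an output from an input; may be async (e.g. an LLM call)
const task = async ({ input }: EvalRecord): Promise<string> => {
  // placeholder for a real model call
  return input === '2 + 2' ? '4' : '9';
};

// Scorers: named functions that score an output (boolean or numeric)
const scorers = {
  exactMatch: ({ output, expected }: { output: string; expected?: string }) =>
    output === expected,
};

// Hypothetical entry point for illustration only; in the real SDK the
// evaluation sends its results to Statsig automatically:
// await statsigAI.runEval({ name: 'math_eval', data, task, scorers });
```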
The AI SDK works with OpenTelemetry for sending telemetry to Statsig.
You can enable OTel tracing by calling the initializeTracing function.
You can also provide a custom TracerProvider to the initializeTracing function if you want to customize the tracing behavior.
More advanced OTel configuration and exporter support are on the way. The simplest way to start tracing with Statsig and OTel is to call initializeTracing() at the root of your application.
```typescript
// instrumentation.{js,ts}
import { initializeTracing } from '@statsig/statsig-ai/otel';

initializeTracing({
  // optional: enables the global trace provider registration
  // so that you can create spans without having to create a new trace provider
  enableGlobalTraceProviderRegistration: true,
});
```
If you already have your own OTel setup with NodeSDK, you only need to initialize Statsig’s OTel tracing and use the processor created by initializeTracing().
```typescript
// instrumentation.{js,ts}
import { NodeSDK } from '@opentelemetry/sdk-node';
import {
  PeriodicExportingMetricReader,
  ConsoleMetricExporter,
} from '@opentelemetry/sdk-metrics';
import { initializeTracing } from '@statsig/statsig-ai/otel';

// When you have your own OTel setup and don't want to use the global trace
// provider, you can disable it with the options below
const { processor } = initializeTracing({
  // prevents creating a global context manager
  skipGlobalContextManagerSetup: true,
  exporterOptions: {
    sdkKey: process.env.STATSIG_SDK_KEY!,
  },
});

const sdk = new NodeSDK({
  // IMPORTANT: use the processor created by initializeTracing
  // to make sure that spans are exported to Statsig
  spanProcessors: [processor],
  metricReader: new PeriodicExportingMetricReader({
    exporter: new ConsoleMetricExporter(),
  }),
  // ... other NodeSDK options, such as auto-instrumentations
});

sdk.start();

export { sdk };
```
The initializeTracing function accepts the options below for setting up tracing with OTel.
```typescript
type InitializeOptions = {
  /** An optional global context manager to use. If not provided, one will be
   *  created and set as the global context manager unless
   *  `skipGlobalContextManagerSetup` is true. */
  globalContextManager?: ContextManager;
  /** If true, will not attempt to set up a global context manager automatically. */
  skipGlobalContextManagerSetup?: boolean;
  /** If true, will register the trace provider globally. */
  enableGlobalTraceProviderRegistration?: boolean;
  /** An optional global trace provider to use. If not provided, a new
   *  BasicTracerProvider will be created and optionally registered globally. */
  globalTraceProvider?: TracerProvider;
  /** Options to pass to the StatsigOTLPTraceExporter */
  exporterOptions?: StatsigOTLPTraceExporterOptions;
  // resource options
  serviceName?: string;
  version?: string;
  environment?: string;
};
```
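For example, the resource options can be combined with the exporter options in a single call. This is a sketch using the option names from the type above; the service name, version, and environment values are illustrative placeholders.

```typescript
import { initializeTracing } from '@statsig/statsig-ai/otel';

initializeTracing({
  // resource options: identify this service in exported traces
  serviceName: 'my-service', // illustrative name
  version: '1.0.0',
  environment: 'production',
  // options passed through to the StatsigOTLPTraceExporter
  exporterOptions: {
    sdkKey: process.env.STATSIG_SDK_KEY!,
  },
});
```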
The Statsig OpenAI Wrapper automatically adds tracing and log events to your OpenAI SDK usage, giving you in-console visibility with minimal setup.
```typescript
import { wrapOpenAI, StatsigAI } from '@statsig/statsig-ai';
import { OpenAI } from 'openai';

// If you have your own OTel setup, you do not need a StatsigAI instance here.
// But if you want to use the default OTel on StatsigAI, initialize the SDK first.
const statsigAI = new StatsigAI({ sdkKey: 'YOUR_SERVER_SECRET_KEY' });
await statsigAI.initialize();

const client = wrapOpenAI(
  new OpenAI({
    apiKey: process.env.OPENAI_API_KEY,
  })
);

const response = await client.chat.completions.create({
  model: 'gpt-4',
  messages: [{ role: 'user', content: 'Hello, world!' }],
});
```
Whether or not you passed in your own Statsig instance, you can access it from the statsigAI instance and use any of the core Statsig methods:
```typescript
// Check a gate value
const gate = statsigAI.getStatsig().checkGate(statsigUser, 'my_gate');

// Log an event
statsigAI.getStatsig().logEvent(statsigUser, 'my_event', { value: 1 });
```
Refer to the Statsig Node SDK docs for more information on how to use the core Statsig SDK methods, as well as advanced setup and singleton usage.