Overview

Lectr is a proxy that sits between your application and your AI provider. Change one line — the baseURL — and Lectr begins capturing metadata for every AI request your product makes: latency, tokens, cost, and errors. No SDK to install. No agent to run. No infrastructure changes.

Step 1: Get your org key

Sign in at app.lectr.ai and copy your org key from the dashboard. Your key looks like this:
lc_key_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
Your org key controls access to your Lectr data. Treat it like a password — store it in an environment variable and never hardcode it.

Step 2: Point your client at Lectr

Replace your provider’s base URL with https://proxy.lectr.ai/v1 and add your org key as a header.
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://proxy.lectr.ai/v1",
  defaultHeaders: {
    "X-Lectr-Key": process.env.LECTR_KEY,
  },
});
Your application now sends AI requests through Lectr.

Step 3: Send a request

Make any AI request as you normally would:
const response = await client.chat.completions.create({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello" }],
});

console.log(response.choices[0].message.content);
Open your dashboard at app.lectr.ai. Within a few seconds you should see the request appear with:
  • model used
  • latency
  • token usage
  • cost
Your first request is now flowing through Lectr.
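You can also read the token counts straight off the SDK response while you wait for the dashboard. The sketch below assumes the standard OpenAI chat-completion response shape; the helper name is illustrative, and cost is computed by Lectr server-side, so only tokens are visible here.

```typescript
// Sketch: pull the same usage numbers out of the SDK response object.
// `summariseUsage` is an illustrative helper, not part of any SDK.
interface UsageSummary {
  model: string;
  promptTokens: number;
  completionTokens: number;
  totalTokens: number;
}

function summariseUsage(response: {
  model: string;
  usage?: { prompt_tokens: number; completion_tokens: number; total_tokens: number };
}): UsageSummary {
  // Some responses (e.g. streamed chunks) may omit usage entirely.
  const usage = response.usage ?? { prompt_tokens: 0, completion_tokens: 0, total_tokens: 0 };
  return {
    model: response.model,
    promptTokens: usage.prompt_tokens,
    completionTokens: usage.completion_tokens,
    totalTokens: usage.total_tokens,
  };
}
```

For example, `summariseUsage(response)` on the request above would give you the model name and prompt/completion/total token counts that Lectr records for that call.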

Optional: Test with curl

curl https://proxy.lectr.ai/v1/chat/completions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "X-Lectr-Key: $LECTR_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [{"role":"user","content":"Hello"}]
  }'

Feature tagging

Add the X-Lectr-Feature header to tell Lectr which part of your product made each request. This unlocks per-feature cost tracking, latency breakdowns, and model recommendations.
const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://proxy.lectr.ai/v1",
  defaultHeaders: {
    "X-Lectr-Key": process.env.LECTR_KEY,
    "X-Lectr-Feature": "chat", // name of the feature making this request
  },
});
Use a different client instance per feature, or override the header per request:
// Per-request override
const response = await client.chat.completions.create(
  {
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: "Summarise this document." }],
  },
  {
    headers: { "X-Lectr-Feature": "summariser" },
  },
);
Good feature names are short and match what you’d call the feature in conversation — chat, summariser, classifier, onboarding-assistant. Avoid generic names like api or backend.
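If you tag many features, a small helper can build the header set and enforce that naming convention in one place. This is a sketch under the assumptions above: `lectrHeaders` is a hypothetical helper, and the lowercase-hyphenated rule simply encodes the naming advice from this section.

```typescript
// Sketch: build default headers for a given feature so each part of the
// product gets its own tagged client. `lectrHeaders` is illustrative,
// not part of the Lectr or OpenAI APIs.
function lectrHeaders(feature: string): Record<string, string> {
  // Enforce the naming advice: short, lowercase, hyphenated.
  if (!/^[a-z0-9-]+$/.test(feature)) {
    throw new Error(`Feature name "${feature}" should be short, lowercase, and hyphenated`);
  }
  return {
    "X-Lectr-Key": process.env.LECTR_KEY ?? "",
    "X-Lectr-Feature": feature,
  };
}
```

Pass the result as defaultHeaders when constructing a client for that feature, e.g. `new OpenAI({ ..., defaultHeaders: lectrHeaders("summariser") })`.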

Environment variables

We recommend storing your keys as environment variables:
.env
OPENAI_API_KEY=sk-...
LECTR_KEY=lc_key_...
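Since a missing key means unauthenticated requests, it can help to fail fast at startup rather than at the first API call. A minimal sketch, with `requireEnv` as an illustrative helper (not part of any SDK):

```typescript
// Sketch: fail fast if a required environment variable is missing,
// instead of sending requests with an empty key. Name is illustrative.
function requireEnv(name: string): string {
  const value = process.env[name];
  if (!value) {
    throw new Error(`Missing required environment variable: ${name}`);
  }
  return value;
}
```

You would then construct the client with `apiKey: requireEnv("OPENAI_API_KEY")` and `"X-Lectr-Key": requireEnv("LECTR_KEY")`.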

What happens to your API key?

Your provider API key passes through Lectr in memory only and is forwarded directly to the provider. Lectr never stores, logs, or persists your API keys. See Security & Trust for the full details.

Supported providers

Lectr supports any OpenAI-compatible provider through the same endpoint:
Provider        Detection                        Notes
OpenAI          Automatic                        All models supported
Anthropic       Automatic                        Via OpenAI-compatible endpoint
Groq            Automatic                        Via OpenAI-compatible endpoint
Google Gemini   Automatic                        Via OpenAI-compatible endpoint
Azure OpenAI    X-Lectr-Provider: azure header   Requires endpoint config in dashboard
Lectr detects the provider automatically from the model name. Use X-Lectr-Provider to override when needed.
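For the Azure case, the override is just one extra default header. A minimal sketch, assuming the Azure endpoint has already been configured in the dashboard as the table notes:

```typescript
// Sketch: header set that forces Azure OpenAI routing via the
// X-Lectr-Provider override from the table above.
const azureHeaders: Record<string, string> = {
  "X-Lectr-Key": process.env.LECTR_KEY ?? "",
  "X-Lectr-Provider": "azure",
};
// Pass as defaultHeaders (or per-request headers) on a client pointed
// at https://proxy.lectr.ai/v1.
```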

Next steps

Feature Tagging

Break down cost and latency by feature across your product.

Task Types

Declare task type to unlock smarter model recommendations.

Routing Rules

Automatically route requests to the right model based on feature or task.

Multi-Provider

Route traffic across OpenAI, Anthropic, Groq, and Gemini from one proxy.