Overview

Lectr works with all your AI providers through a single proxy URL. Change baseURL once per client — Lectr figures out where to send each request automatically.
your app → proxy.lectr.ai/v1 → OpenAI
                              → Anthropic
                              → Groq
                              → Gemini
                              → Azure OpenAI
One dashboard. Your entire AI spend. Every provider.

Supported providers

Provider        Detection                  Notes
OpenAI          Automatic                  All current models
Anthropic       Automatic                  Via OpenAI-compatible endpoint
Groq            Automatic                  Via OpenAI-compatible endpoint
Google Gemini   Automatic                  Via OpenAI-compatible endpoint
Azure OpenAI    X-Lectr-Provider: azure    Requires endpoint config — see below

How detection works

Lectr detects the provider from the model field in your request body. You do not need to tell it which provider you’re using — it already knows.
model: "gpt-4o"                     → OpenAI
model: "claude-3-5-sonnet-20241022" → Anthropic
model: "llama-3.1-70b-versatile"    → Groq
model: "gemini-1.5-pro"             → Gemini
If Lectr can’t determine the provider, it defaults to OpenAI and logs provider_unknown on the event. The request is never blocked.
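Conceptually, this is a prefix match on the model name with an OpenAI fallback. The sketch below illustrates the idea — the prefix table is an assumption for demonstration, not Lectr's actual registry:

```python
# Illustrative sketch of model-name-based provider detection.
# The prefix table is an assumption; Lectr's real registry is
# larger and updated regularly.
MODEL_PREFIXES = {
    "gpt-": "openai",
    "claude-": "anthropic",
    "llama-": "groq",
    "gemini-": "gemini",
}

def detect_provider(model: str) -> str:
    for prefix, provider in MODEL_PREFIXES.items():
        if model.startswith(prefix):
            return provider
    # Unknown model: default to OpenAI — the request is never blocked.
    return "openai"
```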

Setup

The integration is identical across providers — same URL, same header, same pattern.
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://proxy.lectr.ai/v1",
  defaultHeaders: {
    "X-Lectr-Key": process.env.LECTR_KEY,
  },
});
import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ["OPENAI_API_KEY"],
    base_url="https://proxy.lectr.ai/v1",
    default_headers={"X-Lectr-Key": os.environ["LECTR_KEY"]},
)
Your provider API key passes through Lectr in memory and is forwarded directly. It is never stored, never logged. See Security & Trust for the full picture.

Manual provider override

If Lectr misdetects a provider — or if you’re using a model name that isn’t in its registry yet — use the X-Lectr-Provider header to override:
const response = await client.chat.completions.create(
  { model: "my-custom-model", messages },
  {
    headers: {
      "X-Lectr-Provider": "anthropic",
    },
  },
);
Valid values: openai, anthropic, groq, gemini, azure

Azure OpenAI

Azure is the exception — every org has its own endpoint, so Lectr can’t auto-detect it from the model name alone. Two steps are required.

Step 1 — Configure your Azure endpoint in the dashboard

Go to Settings → Providers → Azure and enter:
  • Your Azure endpoint: https://<resource-name>.openai.azure.com
  • API version: 2024-02-01 (or your preferred version)
Step 2 — Add the provider header to your requests
import OpenAI from "openai";

const client = new OpenAI({
  apiKey: process.env.AZURE_OPENAI_API_KEY,
  baseURL: "https://proxy.lectr.ai/v1",
  defaultHeaders: {
    "X-Lectr-Key": process.env.LECTR_KEY,
    "X-Lectr-Provider": "azure",
  },
});
Lectr will route requests to your configured Azure endpoint and handle the api-key header format Azure expects — your client doesn’t need to know.
Azure requests without a configured endpoint in the dashboard will return a 400 error. Configure the endpoint first.
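Under the hood, the proxy has to rewrite an OpenAI-style request into Azure's URL and header format. A rough sketch of that translation, assuming the standard Azure OpenAI REST shape (deployment-based path, api-key header) and that the deployment name matches the model field — both assumptions for illustration:

```python
# Rough sketch of the OpenAI-to-Azure request translation a proxy
# performs. `endpoint` and `api_version` come from the dashboard
# config; the deployment name is assumed to equal the model name.
def to_azure_request(model: str, endpoint: str, api_version: str, api_key: str):
    url = (
        f"{endpoint}/openai/deployments/{model}"
        f"/chat/completions?api-version={api_version}"
    )
    # Azure expects an `api-key` header instead of `Authorization: Bearer ...`.
    headers = {"api-key": api_key, "Content-Type": "application/json"}
    return url, headers
```

In real Azure deployments the deployment name often differs from the underlying model name; the sketch glosses over that mapping.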

Dashboard

Once traffic is flowing from multiple providers, your dashboard shows a unified view across all of them. Model distribution shows requests, cost, and latency per model across every provider in a single table — filterable by provider:
Model                     Provider    Requests    Cost       Avg latency
gpt-4o                    OpenAI      1,204       $142.00    1.38s
claude-3-5-sonnet         Anthropic   856         $89.00     1.12s
gpt-4o-mini               OpenAI      401         $8.00      0.94s
llama-3.1-70b-versatile   Groq        214         $2.10      0.78s
Cost breakdown shows per-provider spend with period-over-period comparison. Provider health shows recent error rates per provider — derived from your own traffic, not external status pages.
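The per-model table is just an aggregation over request events. A minimal sketch of that roll-up, using hypothetical event fields (model, provider, cost, latency — illustrative names, not Lectr's event schema):

```python
from collections import defaultdict

# Hypothetical request events; field names and values are illustrative.
events = [
    {"model": "gpt-4o", "provider": "OpenAI", "cost": 0.12, "latency": 1.4},
    {"model": "gpt-4o", "provider": "OpenAI", "cost": 0.10, "latency": 1.3},
    {"model": "claude-3-5-sonnet", "provider": "Anthropic", "cost": 0.09, "latency": 1.1},
]

def model_distribution(events):
    # Accumulate request count, total cost, and total latency per (model, provider).
    totals = defaultdict(lambda: {"requests": 0, "cost": 0.0, "latency": 0.0})
    for e in events:
        row = totals[(e["model"], e["provider"])]
        row["requests"] += 1
        row["cost"] += e["cost"]
        row["latency"] += e["latency"]
    # Emit one dashboard row per (model, provider) pair.
    return {
        key: {
            "requests": row["requests"],
            "cost": round(row["cost"], 2),
            "avg_latency": round(row["latency"] / row["requests"], 2),
        }
        for key, row in totals.items()
    }
```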

Model recommendations across providers

Lectr’s recommendation engine works across all supported providers. Recommendations stay within provider — it won’t suggest switching from claude-3-5-sonnet to gpt-4o-mini. Cross-provider quality comparisons require data Lectr doesn’t have. Within a provider, the usual rules apply:
claude-3-opus     → claude-3-5-sonnet (conservative)
claude-3-opus     → claude-3-5-haiku  (aggressive)
llama-3.1-70b     → llama-3.1-8b      (moderate)
gemini-1.5-pro    → gemini-1.5-flash  (moderate)
Add task type tagging to get high-confidence recommendations rather than heuristic ones.
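The within-provider rule can be pictured as a downgrade table keyed by aggressiveness. The mapping below mirrors the examples above but is a sketch, not Lectr's actual recommendation engine:

```python
# Illustrative within-provider downgrade table, keyed by
# (model, aggressiveness). Cross-provider pairs are deliberately
# absent: recommendations never cross a provider boundary.
DOWNGRADES = {
    ("claude-3-opus", "conservative"): "claude-3-5-sonnet",
    ("claude-3-opus", "aggressive"): "claude-3-5-haiku",
    ("llama-3.1-70b", "moderate"): "llama-3.1-8b",
    ("gemini-1.5-pro", "moderate"): "gemini-1.5-flash",
}

def recommend(model: str, aggressiveness: str):
    # Returns a cheaper model within the same provider, or None
    # when no downgrade is known for this (model, aggressiveness).
    return DOWNGRADES.get((model, aggressiveness))
```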

What’s coming

Routing rules (coming soon) will let you automatically route requests to specific provider + model combinations based on feature tag or task type — without touching your code.

Reference

Header          X-Lectr-Provider
Required        Only for Azure
Valid values    openai, anthropic, groq, gemini, azure
Default         Auto-detected from model name
Fallback        Defaults to openai if detection fails

FAQ

Can I see traffic from all my providers in one dashboard?
Yes — that’s the whole point. All traffic from all providers flows through the same org key and appears in the same dashboard.

Do I need a separate Lectr key per provider?
No. One org key covers all providers.

What if Lectr doesn’t recognize my model name?
Use X-Lectr-Provider to override. New models are added to the detection registry regularly — if you’re missing one, let us know.

Does Lectr store my provider API keys?
No. Your provider API key passes through Lectr in memory for the duration of the request and is forwarded directly to your provider. It is never persisted, never logged, never included in event metadata.

Does streaming work through the proxy?
Yes. Streaming works identically across all supported providers.