Overview
Lectr works with all your AI providers through a single proxy URL. Change the base URL
once per client — Lectr figures out where to send each request automatically.
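The one-time change can be sketched as follows. The proxy URL below is a hypothetical placeholder (use the one from your Lectr dashboard), and `client_config` is our illustrative helper, not part of Lectr — the point is that the same two values feed any OpenAI-compatible client:

```python
# Minimal sketch of the single change Lectr needs: swap the base URL.
# LECTR_BASE_URL is a hypothetical placeholder, not a real endpoint.
import os

LECTR_BASE_URL = "https://proxy.lectr.example/v1"  # hypothetical placeholder

def client_config(provider_api_key: str) -> dict:
    """The two settings every OpenAI-compatible client accepts."""
    return {
        "base_url": LECTR_BASE_URL,   # the only change from a direct integration
        "api_key": provider_api_key,  # your normal provider key, passed through
    }

cfg = client_config(os.environ.get("OPENAI_API_KEY", "sk-placeholder"))
```

With the official SDKs these map directly onto the `base_url` and `api_key` constructor arguments.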
Supported providers
| Provider | Detection | Notes |
|---|---|---|
| OpenAI | Automatic | All current models |
| Anthropic | Automatic | Via OpenAI-compatible endpoint |
| Groq | Automatic | Via OpenAI-compatible endpoint |
| Google Gemini | Automatic | Via OpenAI-compatible endpoint |
| Azure OpenAI | X-Lectr-Provider: azure | Requires endpoint config — see below |
How detection works
Lectr detects the provider from the model field in your request body. You do not
need to tell it which provider you’re using — it already knows. If detection fails,
the request falls back to OpenAI and the event is tagged provider_unknown. The
request is never blocked.
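As an illustration of how model-name detection could work — this is not Lectr’s actual registry, and the prefixes are our assumptions based on common model names:

```python
# Illustrative sketch of prefix-based provider detection. The prefix map
# is hypothetical; Lectr maintains its own registry of model names.
def detect_provider(model: str) -> str:
    prefixes = {
        "gpt-": "openai",
        "o1": "openai",
        "claude-": "anthropic",
        "llama-": "groq",
        "gemini-": "gemini",
    }
    for prefix, provider in prefixes.items():
        if model.startswith(prefix):
            return provider
    # Unknown models fall back to openai and would be tagged provider_unknown.
    return "openai"

print(detect_provider("claude-3-5-sonnet"))  # anthropic
```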
Setup
The integration is identical across providers — same URL, same header, same pattern.
Manual provider override
If Lectr misdetects a provider — or if you’re using a model name that isn’t in its registry yet — use the X-Lectr-Provider header to override:
`openai` · `anthropic` · `groq` · `gemini` · `azure`
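A stdlib-only sketch of sending the override, with a hypothetical proxy URL and a placeholder API key:

```python
# Sketch: pin the provider explicitly with the X-Lectr-Provider header.
# The URL and key are hypothetical placeholders.
import json
import urllib.request

req = urllib.request.Request(
    "https://proxy.lectr.example/v1/chat/completions",  # hypothetical URL
    data=json.dumps({
        "model": "my-custom-finetune",  # a model name not in the registry yet
        "messages": [{"role": "user", "content": "hi"}],
    }).encode(),
    headers={
        "Authorization": "Bearer sk-placeholder",  # your real provider key
        "Content-Type": "application/json",
        "X-Lectr-Provider": "openai",              # explicit override
    },
)
# urllib stores header keys capitalized as "X-lectr-provider":
print(req.get_header("X-lectr-provider"))  # openai
```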
Azure OpenAI
Azure is the exception — every org has its own endpoint, so Lectr can’t auto-detect it from the model name alone. Two steps are required.

Step 1 — Configure your Azure endpoint in the dashboard
Go to Settings → Providers → Azure and enter:
- Your Azure endpoint: https://<resource-name>.openai.azure.com
- API version: 2024-02-01 (or your preferred version)

Step 2 — Send the X-Lectr-Provider: azure header with each request
Lectr translates the request into the api-key header format Azure expects — your client doesn’t need to know.
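Since the endpoint and API version live in the dashboard, the client-side half of the setup is just the override header. A sketch, with a placeholder key:

```python
# Sketch of the headers an Azure-bound request carries. The key is a
# placeholder; Lectr rewrites it into the api-key format Azure expects.
headers = {
    "Authorization": "Bearer <azure-api-key>",  # placeholder
    "Content-Type": "application/json",
    "X-Lectr-Provider": "azure",  # required: Azure can't be auto-detected
}
print(headers["X-Lectr-Provider"])  # azure
```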
Dashboard
Once traffic is flowing from multiple providers, your dashboard shows a unified view across all of them. Model distribution shows requests, cost, and latency per model across every provider in a single table — filterable by provider:

| Model | Provider | Requests | Cost | Avg latency |
|---|---|---|---|---|
| gpt-4o | OpenAI | 1,204 | $142.00 | 1.38s |
| claude-3-5-sonnet | Anthropic | 856 | $89.00 | 1.12s |
| gpt-4o-mini | OpenAI | 401 | $8.00 | 0.94s |
| llama-3.1-70b-versatile | Groq | 214 | $2.10 | 0.78s |
Model recommendations across providers
Lectr’s recommendation engine works across all supported providers. Recommendations stay within a provider — it won’t suggest switching from claude-3-5-sonnet to
gpt-4o-mini. Cross-provider quality comparisons require data Lectr doesn’t have.
Within a provider, the usual recommendation rules apply.
What’s coming
Routing rules (coming soon) will let you automatically route requests to specific provider + model combinations based on feature tag or task type — without touching your code.
Reference
| Header | X-Lectr-Provider |
|---|---|
| Required | Only for Azure |
| Valid values | openai, anthropic, groq, gemini, azure |
| Default | Auto-detected from model name |
| Fallback | Defaults to openai if detection fails |
FAQ
Can I use multiple providers in the same org?
Yes — that’s the whole point. All traffic from all providers flows through
the same org key and appears in the same dashboard.
Do I need a separate org key per provider?
No. One org key covers all providers.
What if a new model isn't detected correctly?
Use X-Lectr-Provider to override. New models are added to the detection
registry regularly — if you’re missing one, let us know.
Are provider API keys stored anywhere?
No. Your provider API key passes through Lectr in memory for the duration of
the request and is forwarded directly to your provider. It is never persisted,
never logged, never included in event metadata.
Does Lectr support streaming for all providers?
Yes. Streaming works identically across all supported providers.