Overview
Lectr is a proxy that sits between your application and your AI provider. Change one line — the base URL — and Lectr begins capturing metadata for every AI request your product makes: latency, tokens, cost, and errors.
No SDK to install. No agent to run. No infrastructure changes.
Step 1: Get your org key
Sign in at app.lectr.ai and copy your org key from the dashboard.
Step 2: Point your client at Lectr
Replace your provider’s base URL with https://proxy.lectr.ai/v1 and add your org key as a header.
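As a minimal sketch with Python's standard library (no SDK required), assuming the org key travels in an `X-Lectr-Org-Key` header — this doc doesn't name the exact header, so check your dashboard:

```python
import json
import os
import urllib.request

# Before: your provider's endpoint. After: Lectr's proxy endpoint.
LECTR_BASE_URL = "https://proxy.lectr.ai/v1"

req = urllib.request.Request(
    f"{LECTR_BASE_URL}/chat/completions",
    data=json.dumps({
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello"}],
    }).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # Provider key: forwarded to the provider, never stored by Lectr.
        "Authorization": f"Bearer {os.environ.get('OPENAI_API_KEY', 'sk-placeholder')}",
        # Header name is an assumption; copy the real key from your dashboard.
        "X-Lectr-Org-Key": os.environ.get("LECTR_ORG_KEY", "org-placeholder"),
    },
)
# response = urllib.request.urlopen(req)  # uncomment to send the request
```

If you use a provider SDK instead, the same change applies: swap its base URL for the Lectr endpoint and add the org key as a default header.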
Step 3: Send a request
Make any AI request as you normally would. For each request, Lectr captures:
- model used
- latency
- token usage
- cost
Optional: Test with curl
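A quick way to verify the proxy from your terminal — a sketch assuming the `X-Lectr-Org-Key` header name, with your real keys exported as environment variables:

```shell
curl https://proxy.lectr.ai/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "X-Lectr-Org-Key: $LECTR_ORG_KEY" \
  -d '{"model": "gpt-4o-mini", "messages": [{"role": "user", "content": "Say hello"}]}'
```

A successful response means traffic is flowing through Lectr; the request should then appear in your dashboard.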
Step 4: Tag your features (recommended)
Add the X-Lectr-Feature header to tell Lectr which part of your product made each request. This unlocks per-feature cost tracking, latency breakdowns, and model recommendations.
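For example, tagging requests from a hypothetical support-chat feature might look like this (a stdlib sketch; the feature name is any stable identifier you choose, and your auth and org-key headers are elided):

```python
import json
import urllib.request

req = urllib.request.Request(
    "https://proxy.lectr.ai/v1/chat/completions",
    data=json.dumps({
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "How do I reset my password?"}],
    }).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # ...plus your Authorization and org-key headers.
        # Any stable name for the part of your product making the call:
        "X-Lectr-Feature": "support-chat",
    },
)
```

Use the same feature name consistently so the per-feature breakdowns in the dashboard stay meaningful.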
Environment variables
We recommend storing your keys in environment variables, for example in a .env file.
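A sketch of such a .env file — the variable names here are illustrative, so match whatever names your code reads:

```
OPENAI_API_KEY=sk-...
LECTR_ORG_KEY=org-...
```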
What happens to your API key?
Your provider API key passes through Lectr in memory only and is forwarded directly to the provider. Lectr never stores, logs, or persists your API keys. See Security & Trust for the full details.
Supported providers
Lectr supports any OpenAI-compatible provider through the same endpoint:

| Provider | Detection | Notes |
|---|---|---|
| OpenAI | Automatic | All models supported |
| Anthropic | Automatic | Via OpenAI-compatible endpoint |
| Groq | Automatic | Via OpenAI-compatible endpoint |
| Google Gemini | Automatic | Via OpenAI-compatible endpoint |
| Azure OpenAI | X-Lectr-Provider: azure header | Requires endpoint config in dashboard |
Detection is automatic in most cases; set the X-Lectr-Provider header to override it when needed.
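For Azure OpenAI, the override from the table above might look like this stdlib sketch (other headers elided; the endpoint itself still needs to be configured in your dashboard):

```python
import json
import urllib.request

req = urllib.request.Request(
    "https://proxy.lectr.ai/v1/chat/completions",
    data=json.dumps({
        "model": "gpt-4o-mini",
        "messages": [{"role": "user", "content": "Hello"}],
    }).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # ...plus your Authorization and org-key headers.
        # Force Azure handling instead of automatic detection:
        "X-Lectr-Provider": "azure",
    },
)
```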
Next steps
Feature Tagging
Break down cost and latency by feature across your product.
Task Types
Declare task type to unlock smarter model recommendations.
Routing Rules
Automatically route requests to the right model based on feature or task.
Multi-Provider
Route traffic across OpenAI, Anthropic, Groq, and Gemini from one proxy.