What is Lectr?
Lectr is the control plane for your AI traffic. It sits between your application and your AI providers (OpenAI, Anthropic, Groq, Gemini, and others) and gives you full visibility into cost, latency, errors, and model usage across your entire AI system. You change one line of code; Lectr then shows you every AI request your product makes, across every provider, in real time.

Why Lectr exists
Most teams building AI products reach the same point: the OpenAI bill arrives and nobody knows exactly why it's that number. You can see total spend. You can see spend per model. But you cannot see:
- Which features are driving cost
- Where latency is coming from
- Which requests are failing and why
- Whether you’re using the right model for each task
- How your AI spend breaks down across providers
OpenAI-compatible by design
Lectr implements the OpenAI API specification, so any client that works with OpenAI works with Lectr:
- OpenAI SDK
- LangChain
- LlamaIndex
- Vercel AI SDK
- Direct HTTP requests
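Because Lectr speaks the OpenAI API, pointing a client at it is just a matter of changing the base URL. A minimal sketch using only the Python standard library is below; the base URL is a placeholder, not a confirmed Lectr endpoint, and the request is built but not sent.

```python
import json
import urllib.request

# Hypothetical proxy endpoint -- substitute the real Lectr base URL.
LECTR_BASE_URL = "https://gateway.lectr.example/v1"

def chat_request(model: str, content: str) -> urllib.request.Request:
    """Build an OpenAI-compatible chat completion request routed via Lectr."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }).encode()
    return urllib.request.Request(
        f"{LECTR_BASE_URL}/chat/completions",
        data=body,
        headers={
            # Your provider key is forwarded to the provider, not stored.
            "Authorization": "Bearer YOUR_PROVIDER_KEY",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = chat_request("gpt-4o-mini", "Hello")
```

The same substitution works in any OpenAI SDK: pass the Lectr base URL where you would normally configure the OpenAI endpoint.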
How it works
Lectr is a transparent proxy that sits between your application and your AI providers. Your application sends requests to Lectr's endpoint; Lectr forwards each request to the right provider, records metadata such as model, latency, tokens, and cost, and returns the provider's response unchanged.

What Lectr gives you
Complete Visibility
See every AI request across every provider in one place, including cost,
latency, tokens, and errors.
Feature-Level Cost Tracking
Tag requests with the feature that generated them so you can understand
exactly which parts of your product drive AI spend.
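Feature tagging is a per-request annotation. A sketch of the idea is below; the header name "X-Lectr-Feature" is an assumption for illustration, not a documented Lectr header.

```python
def tag_feature(headers: dict, feature: str) -> dict:
    """Return a copy of the request headers with a feature tag attached.

    The header name is hypothetical; check Lectr's docs for the real one.
    """
    tagged = dict(headers)
    tagged["X-Lectr-Feature"] = feature
    return tagged

base = {"Authorization": "Bearer YOUR_PROVIDER_KEY"}
headers = tag_feature(base, "search-summarizer")
```

Every request carrying the tag is then attributable to that feature in the dashboard, which is what makes per-feature cost breakdowns possible.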
Multi-Provider Support
OpenAI, Anthropic, Groq, and Gemini through one endpoint. One dashboard for
your entire AI spend regardless of provider.
Anomaly Detection
Detect unusual patterns such as cost spikes, latency regressions, or rising
error rates before they impact users.
Routing Rules
Automatically route requests to the right model based on feature or task
type. Set it once, let Lectr handle it.
The integration promise
Lectr is designed to integrate in under 2 minutes and be invisible after that.
- No SDK to install - works with any OpenAI-compatible client
- No agent to run - no sidecar, no daemon, no infrastructure changes
- No code changes - beyond the baseURL and one header
- No performance impact - the proxy adds negligible overhead
- No prompt storage - your data stays yours
Security and trust
Lectr is a proxy. By definition it sits in the path of your AI requests, which means your provider API keys pass through it. Here is exactly what Lectr does and does not do:

| ✅ Forwards your API key to your provider | ✅ Captures request metadata (model, latency, tokens, cost) |
| --- | --- |
| ❌ Never stores your API key | ❌ Never reads your prompts |
| ❌ Never logs your API key | ❌ Never stores your responses |
| ❌ Never persists your API key | ❌ Never logs request bodies |
Ready to start?
Quick Start
Get your first request through Lectr in under 2 minutes.
How it works
A deeper look at the proxy architecture and data flow.