
What is Lectr?

Lectr is the control plane for your AI traffic. It sits between your application and your AI providers (OpenAI, Anthropic, Groq, Gemini, etc.) and gives you real-time visibility into cost, latency, errors, and model usage for every AI request your product makes, across every provider. You change one line of code:
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://proxy.lectr.ai/v1", // the only change
  defaultHeaders: { "X-Lectr-Key": process.env.LECTR_KEY },
});
Everything else stays the same. Your existing code, your existing models, your existing providers — unchanged.

Why Lectr exists

Most teams building AI products reach the same point: the OpenAI bill arrives and nobody knows exactly why it’s that number. You can see total spend. You can see spend per model. But you cannot see:
  • Which features are driving cost
  • Where latency is coming from
  • Which requests are failing and why
  • Whether you’re using the right model for each task
  • How your AI spend breaks down across providers
This information exists and it passes through your code on every request. But without something in the path capturing it, it disappears. Lectr captures it.

OpenAI-compatible by design

Lectr implements the OpenAI API specification. Any client that works with OpenAI works with Lectr:
  • OpenAI SDK
  • LangChain
  • LlamaIndex
  • Vercel AI SDK
  • Direct HTTP requests
Just change the baseURL. No SDK migration. No refactoring. No new framework.
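Because the endpoint speaks the OpenAI API, even a raw HTTP call works without any SDK at all. As a minimal sketch, the request below reuses the baseURL and `X-Lectr-Key` header from the snippet above; the model name is just an example:

```typescript
// Builds a standard OpenAI-style chat completion request, pointed at Lectr.
// Only the URL and the X-Lectr-Key header differ from a direct provider call.
function buildChatRequest(lectrKey: string, providerKey: string) {
  return {
    url: "https://proxy.lectr.ai/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${providerKey}`, // your provider key, forwarded as-is
        "X-Lectr-Key": lectrKey,                // identifies your Lectr account
      },
      body: JSON.stringify({
        model: "gpt-4o-mini", // example model; any provider-supported model works
        messages: [{ role: "user", content: "Hello" }],
      }),
    },
  };
}

// Usage:
// const { url, init } = buildChatRequest(process.env.LECTR_KEY!, process.env.OPENAI_API_KEY!);
// const res = await fetch(url, init);
```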

How it works

Lectr is a transparent proxy that sits between your application and your AI providers.
Your App → Lectr → AI Provider → Lectr → Your App
On every request Lectr captures metadata (model, latency, tokens, cost, errors) and makes it available in your dashboard within seconds. It never reads or stores your prompts or responses. It never introduces meaningful latency. It never changes the response your app receives. The proxy is the foundation. Everything else is built on top of it.
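To make the metadata-only capture concrete, here is an illustrative sketch of the kind of per-request record a transparent proxy can emit without ever reading prompt or response bodies. The field names are assumptions for illustration, not Lectr's actual schema:

```typescript
// Illustrative shape only — field names are assumptions, not Lectr's real schema.
// Note there is no prompt or response field: only metadata is captured.
interface RequestMetadata {
  provider: "openai" | "anthropic" | "groq" | "gemini";
  model: string;
  latencyMs: number;        // end-to-end time through the provider
  promptTokens: number;     // token counts as reported by the provider
  completionTokens: number;
  costUsd: number;          // derived from token counts and provider pricing
  status: number;           // HTTP status, for error tracking
}

const example: RequestMetadata = {
  provider: "openai",
  model: "gpt-4o-mini",
  latencyMs: 842,
  promptTokens: 120,
  completionTokens: 56,
  costUsd: 0.00012,
  status: 200,
};
```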

What Lectr gives you

Complete Visibility

See every AI request across every provider in one place, including cost, latency, tokens, and errors.

Feature-Level Cost Tracking

Tag requests with the feature that generated them so you can understand exactly which parts of your product drive AI spend.
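One way this could look in practice is a small helper that attaches a feature tag alongside the Lectr key. The `X-Lectr-Feature` header name here is an assumption for illustration; check the Lectr docs for the actual tagging mechanism:

```typescript
// Hypothetical sketch: "X-Lectr-Feature" is an assumed header name, used here
// only to illustrate per-feature tagging.
function lectrHeaders(lectrKey: string, feature: string): Record<string, string> {
  return {
    "X-Lectr-Key": lectrKey,
    "X-Lectr-Feature": feature, // ties this request's cost and latency to one product feature
  };
}

// Usage: pass as defaultHeaders when constructing the client, e.g.
// new OpenAI({ baseURL: "https://proxy.lectr.ai/v1",
//              defaultHeaders: lectrHeaders(process.env.LECTR_KEY!, "email-summarizer") });
```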

Multi-Provider Support

OpenAI, Anthropic, Groq, and Gemini through one endpoint. One dashboard for your entire AI spend regardless of provider.

Anomaly Detection

Detect unusual patterns such as cost spikes, latency regressions, or rising error rates before they impact users.

Routing Rules

Automatically route requests to the right model based on feature or task type. Set it once, let Lectr handle it.
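The idea behind routing rules can be sketched as a simple feature-to-model lookup. This is an illustration of the concept, not Lectr's configuration format, and the rule entries and model names are made-up examples:

```typescript
// Conceptual sketch only — Lectr's actual routing rules are configured once and
// applied by the proxy. This shows the underlying idea: map each feature or
// task type to the model best suited (or cheapest) for it.
type Rule = { feature: string; model: string };

const rules: Rule[] = [
  { feature: "code-review", model: "claude-sonnet" },  // example: quality-sensitive task
  { feature: "autocomplete", model: "gpt-4o-mini" },   // example: cheap, latency-sensitive task
];

function routeModel(feature: string, fallback = "gpt-4o"): string {
  // First matching rule wins; unmatched features fall through to the default model.
  return rules.find((r) => r.feature === feature)?.model ?? fallback;
}
```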

The integration promise

Lectr is designed to integrate in under 2 minutes and be invisible after that.
  • No SDK to install - works with any OpenAI-compatible client
  • No agent to run - no sidecar, no daemon, no infrastructure changes
  • No code changes - beyond the baseURL and one header
  • No performance impact - the proxy adds negligible overhead
  • No prompt storage - your data stays yours
If you can feel Lectr in your request path, something is wrong. The goal is to be invisible to your application while being completely transparent to you.

Security and trust

Lectr is a proxy. By definition it sits in the path of your AI requests, which means your provider API keys pass through it. Here is exactly what Lectr does and does not do:
✅ Forwards your API key to your provider
✅ Captures request metadata (model, latency, tokens, cost)
❌ Never stores your API key
❌ Never logs your API key
❌ Never persists your API key
❌ Never reads your prompts
❌ Never stores your responses
❌ Never logs request bodies
Your API key exists in memory for the duration of the request and nowhere else. See Security & Trust for the complete picture.

Ready to start?

Quick Start

Get your first request through Lectr in under 2 minutes.

How it works

A deeper look at the proxy architecture and data flow.