The short version
Lectr is a proxy. It sits between your code and your AI provider, which means your provider API keys pass through it on every request. Here is the complete picture of what happens to your data:

| API keys | Request content |
|---|---|
| ✅ Your API key is forwarded to your provider | ✅ Request metadata is captured (model, latency, tokens, cost) |
| ❌ Your API key is never stored | ❌ Your prompts are never read |
| ❌ Your API key is never logged | ❌ Your prompts are never stored |
| ❌ Your API key is never persisted | ❌ Your responses are never stored |
API key handling
When you send a request to Lectr, your provider API key travels in the `Authorization` header, the same header you'd send directly to your provider.
What Lectr does with it:

- Forwards it to your provider in the outgoing request
- Holds it in memory only for the lifetime of that request
- Never stores, logs, or persists it
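As a sketch, the headers a proxied request carries might look like the following. The proxy URL and the org-key header name here are illustrative assumptions, not Lectr's documented API:

```python
import os

# Hypothetical proxy endpoint, for illustration only.
LECTR_BASE_URL = "https://example-lectr-proxy.invalid/v1"

def build_headers(provider_api_key: str, org_key: str) -> dict:
    """Assemble the headers a request through the proxy would carry."""
    return {
        # Your provider key: forwarded to the provider unchanged,
        # never stored or logged by the proxy.
        "Authorization": f"Bearer {provider_api_key}",
        # Hypothetical header carrying your Lectr org key (lc_key_...).
        "X-Lectr-Org-Key": org_key,
    }

headers = build_headers(
    os.environ.get("OPENAI_API_KEY", "sk-demo"),
    os.environ.get("LECTR_ORG_KEY", "lc_key_demo"),
)
```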
What Lectr does store
Lectr captures metadata about each request, not the content of the request. Stored per request:

- Timestamp
- Provider and model (requested and actual)
- Endpoint
- Status code
- Latency (total and TTFB)
- Token counts
- Cost estimate
- Streaming flag
- Error category and source
- Feature tag (`X-Lectr-Feature`)
- Task type (`X-Lectr-Task`)
- Rule applied (if routing rules are configured)
- Org ID
What Lectr never stores:

- Prompt content
- Message content
- Response content
- Provider API keys
- Request or response bodies of any kind
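Taken together, a captured event might look like this sketch. Every field name below is illustrative, not Lectr's actual schema; the point is what the record contains and, just as importantly, what it does not:

```python
# Illustrative shape of one stored event; field names are assumptions.
event = {
    "timestamp": "2025-01-15T12:00:00Z",
    "provider": "openai",
    "model_requested": "gpt-4o",
    "model_actual": "gpt-4o",
    "endpoint": "/v1/chat/completions",
    "status_code": 200,
    "latency_ms": 840,
    "ttfb_ms": 120,
    "tokens_in": 512,
    "tokens_out": 96,
    "cost_estimate_usd": 0.0041,
    "streaming": True,
    "error_category": None,
    "error_source": None,
    "feature": "checkout-helper",  # from X-Lectr-Feature
    "task": "classification",      # from X-Lectr-Task
    "rule_applied": None,
    "org_id": "org_123",
}

# Note what is absent: no messages, no prompt, no response body, no API key.
assert "messages" not in event and "prompt" not in event
```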
Prompt privacy
Lectr never reads, stores, or logs prompt content or response content. This is not a configuration option; it is a hard architectural constraint. The event pipeline that powers your dashboard captures metadata only. There is no code path that writes message content to the database.

If you are evaluating Lectr for a use case with strict data privacy requirements (healthcare, legal, finance), this is the answer to "does the proxy see our data?" The proxy sees your request in memory. It reads the `model` field and the headers.
It does not read, parse, or store the `messages` array.
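The metadata-only read can be pictured as a function like the one below. This is hypothetical code, not Lectr's implementation; what it demonstrates is that the `messages` array is simply never accessed:

```python
# Sketch: extract metadata from a request without touching message content.
def extract_metadata(body: dict, headers: dict) -> dict:
    return {
        "model": body.get("model"),
        "streaming": bool(body.get("stream", False)),
        "feature": headers.get("X-Lectr-Feature"),
        "task": headers.get("X-Lectr-Task"),
        # body["messages"] is deliberately never read here.
    }

meta = extract_metadata(
    {"model": "gpt-4o", "messages": [{"role": "user", "content": "secret"}]},
    {"X-Lectr-Feature": "search"},
)
assert "secret" not in str(meta)  # prompt content never reaches the metadata
```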
Token counts and streaming
For non-streaming requests, token counts come directly from your provider's response: exact figures. For streaming requests, OpenAI and other providers do not return token counts in the stream, so Lectr uses a tokeniser to count tokens from the assembled response after the stream completes. These counts are clearly labelled in the dashboard:

| Label | Source |
|---|---|
| Exact | Provider-returned (non-streaming) |
| Measured | Lectr tokeniser (streaming) |
| Measured (calibrated) | Tokeniser with historical calibration |
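The streaming case reduces to two steps: assemble the content deltas, then count. In this sketch the whitespace split is a deliberately crude stand-in for a real tokeniser (the source does not name which tokeniser Lectr uses):

```python
def assemble_stream(deltas: list[str]) -> str:
    """Join the content deltas of a completed stream into the full response."""
    return "".join(deltas)

def count_tokens(text: str) -> int:
    # Stand-in tokeniser for illustration only. A real measured count would
    # come from an actual tokeniser run over the assembled text.
    return len(text.split())

deltas = ["The quick ", "brown fox ", "jumps."]
full = assemble_stream(deltas)
measured = count_tokens(full)  # a "Measured" figure, not provider-returned
```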
Dashboard authentication
Dashboard access is handled by Auth0. Lectr does not manage passwords or session tokens directly.

- Login via email + password or GitHub
- Sessions are managed by Auth0
- Dashboard data is scoped to your org — you cannot see another org’s data
- Org keys are hashed at rest — the plaintext key is shown once at creation or rotation and never again
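The hashed-at-rest behaviour can be sketched as follows. SHA-256 and the helper names are assumptions (the source does not specify the hash); the property that matters is that only the hash is persisted, so the plaintext cannot be recovered later:

```python
import hashlib
import secrets

def create_org_key() -> tuple[str, str]:
    """Return (plaintext, stored_hash). The plaintext is shown once;
    only the hash is written to the database."""
    plaintext = "lc_key_" + secrets.token_urlsafe(24)
    stored_hash = hashlib.sha256(plaintext.encode()).hexdigest()
    return plaintext, stored_hash

def verify_org_key(presented: str, stored_hash: str) -> bool:
    candidate = hashlib.sha256(presented.encode()).hexdigest()
    # Constant-time compare avoids leaking information via timing.
    return secrets.compare_digest(candidate, stored_hash)

key, h = create_org_key()
assert verify_org_key(key, h)
assert not verify_org_key("lc_key_wrong", h)
```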
Org key security
Your org key (`lc_key_...`) authenticates proxy requests. Treat it like a password.
Best practices:
- Store it as an environment variable — never hardcode it
- Use separate org keys for separate environments (staging, production)
- Rotate the key immediately if you suspect it has been compromised
- Only share it with team members who need to make proxy requests
Rotating a key invalidates the old one immediately; any request still using it receives a 401 from that moment. Generate a new key and update your environment variables.
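The first two practices above combine naturally: each environment exports its own key, read once at startup and validated before the app serves traffic. The variable name here is illustrative:

```python
import os

def load_org_key(env=os.environ) -> str:
    """Read the org key for the current environment; fail fast if missing."""
    key = env.get("LECTR_ORG_KEY")  # illustrative variable name
    if key is None:
        raise RuntimeError("LECTR_ORG_KEY is not set")
    if not key.startswith("lc_key_"):
        raise RuntimeError("LECTR_ORG_KEY does not look like a Lectr org key")
    return key

# Staging and production each export their own LECTR_ORG_KEY, so a leaked
# staging key never grants access to production traffic.
staging_env = {"LECTR_ORG_KEY": "lc_key_staging_example"}
assert load_org_key(staging_env) == "lc_key_staging_example"
```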
Transport security
All Lectr endpoints (proxy, dashboard API, management API) are HTTPS only. Plain HTTP is not accepted.

Trust model
Lectr is a transparent proxy. By definition, it processes your requests in memory. This requires a degree of trust.

What you are trusting Lectr with:

- Your provider API keys pass through in memory
- Your request metadata is stored and processed to power your dashboard
- Your org traffic data is visible to Lectr’s infrastructure
What you are not trusting Lectr with:

- Your prompt content: never read or stored
- Your response content: never read or stored
- Permanent access to your API keys: they exist in memory per request only