
The short version

Lectr is a proxy. It sits between your code and your AI provider. That means your provider API keys pass through it on every request. Here is the complete picture of what happens to your data:
✅ Your API key is forwarded to your provider
✅ Request metadata is captured (model, latency, tokens, cost)
❌ Your API key is never stored, logged, or persisted
❌ Your prompts are never read or stored
❌ Your responses are never stored
Your API key exists in memory for the duration of the request. After that it is gone. There is no database row, no log line, no cache entry containing it.

API key handling

When you send a request to Lectr, your provider API key travels in the Authorization header — the same header you’d send directly to your provider. What Lectr does with it:
1. Request arrives with Authorization: Bearer sk-...
2. Key extracted from header (in memory only)
3. Auth header normalised for target provider (e.g. Bearer → x-api-key for Anthropic)
4. Request forwarded to provider with normalised key
5. Key discarded — request lifecycle ends
The key is never written to disk, never included in logs, never stored in the database, and never included in the event metadata that powers your dashboard.
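The lifecycle above can be sketched in a few lines. This is an illustration only — the function name and provider handling are assumptions, not Lectr's actual implementation:

```python
# Illustrative sketch of the normalisation step described above.
# Header names and provider detection are assumptions, not Lectr's code.

def normalise_auth_header(headers: dict, provider: str) -> dict:
    """Move the bearer token into the header format the target provider expects."""
    token = headers.pop("Authorization", "").removeprefix("Bearer ").strip()
    if provider == "anthropic":
        # Anthropic expects the key in x-api-key rather than Authorization
        headers["x-api-key"] = token
    else:
        headers["Authorization"] = f"Bearer {token}"
    return headers

# The key exists only in this in-memory dict; nothing is written to disk or logs.
headers = normalise_auth_header({"Authorization": "Bearer sk-abc123"}, "anthropic")
```

Once the proxied request completes, the dict goes out of scope and the key with it — which is the whole point of step 5 above.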

What Lectr does store

Lectr captures metadata about each request — not the content of the request. Stored per request:
  • Timestamp
  • Provider and model (requested and actual)
  • Endpoint
  • Status code
  • Latency (total and TTFB)
  • Token counts
  • Cost estimate
  • Streaming flag
  • Error category and source
  • Feature tag (X-Lectr-Feature)
  • Task type (X-Lectr-Task)
  • Rule applied (if routing rules are configured)
  • Org ID
Never stored:
  • Prompt content
  • Message content
  • Response content
  • Provider API keys
  • Request or response bodies of any kind
The distinction is metadata vs content. Lectr knows a request happened, how long it took, what it cost, and whether it succeeded. It does not know what you said or what the model replied.
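To make the metadata/content distinction concrete, a stored event might look like the record below. All field names are hypothetical — Lectr's actual schema may differ:

```python
# A hypothetical event record matching the metadata list above.
# Field names are illustrative, not Lectr's actual schema.
event = {
    "timestamp": "2025-01-15T12:00:00Z",
    "provider": "openai",
    "model_requested": "gpt-4o",
    "model_actual": "gpt-4o",
    "endpoint": "/v1/chat/completions",
    "status_code": 200,
    "latency_ms": 1240,
    "ttfb_ms": 310,
    "tokens_in": 512,
    "tokens_out": 128,
    "cost_usd_estimate": 0.0034,
    "streaming": True,
    "feature": "summarise",   # from X-Lectr-Feature
    "task": "chat",           # from X-Lectr-Task
    "org_id": "org_123",
}
# Note what is absent: no messages, no response text, no API key.
```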

Prompt privacy

Lectr never reads, stores, or logs prompt content or response content. This is not a configuration option — it is a hard architectural constraint. The event pipeline that powers your dashboard captures metadata only. There is no code path that writes message content to the database.

If you are evaluating Lectr for a use case with strict data privacy requirements — healthcare, legal, finance — this is the answer to “does the proxy see our data?” The proxy sees your request in memory. It reads the model field and the headers. It does not read, parse, or store the messages array.

Token counts and streaming

For non-streaming requests, token counts come directly from your provider’s response — exact figures. For streaming requests, OpenAI and other providers do not return token counts in the stream. Lectr uses a tokeniser to count tokens from the assembled response after the stream completes. These counts are clearly labelled in the dashboard:
  • Exact: provider-returned (non-streaming)
  • Measured: Lectr tokeniser (streaming)
  • Measured (calibrated): tokeniser with historical calibration
Cost estimates derived from measured token counts carry a margin of error. The dashboard labels these clearly so you always know what you’re looking at.
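The measured path can be sketched as follows. The stand-in tokeniser below is a crude whitespace split so the sketch stays dependency-free — a real implementation would use a proper tokeniser (e.g. tiktoken for OpenAI models):

```python
# Sketch of "Measured" token counting for streaming responses.
# count_tokens is a stand-in; swap in a real tokeniser such as
# tiktoken.encoding_for_model(model).encode in practice.

def count_tokens(text: str) -> int:
    # Crude approximation for illustration only
    return len(text.split())

chunks = ["The quick ", "brown fox ", "jumps."]   # streamed deltas
assembled = "".join(chunks)                        # assembled after the stream completes
measured = count_tokens(assembled)                 # labelled "Measured" in the dashboard
```

Because the count is computed after the fact rather than returned by the provider, it carries the margin of error the dashboard labels call out.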

Dashboard authentication

Dashboard access is handled by Auth0. Lectr does not manage passwords or session tokens directly.
  • Login via email + password or GitHub
  • Sessions are managed by Auth0
  • Dashboard data is scoped to your org — you cannot see another org’s data
  • Org keys are hashed at rest — the plaintext key is shown once at creation or rotation and never again

Org key security

Your org key (lc_key_...) authenticates proxy requests. Treat it like a password. Best practices:
  • Store it as an environment variable — never hardcode it
  • Use separate org keys for separate environments (staging, production)
  • Rotate the key immediately if you suspect it has been compromised
  • Only share it with team members who need to make proxy requests
If a key is compromised: go to Settings → Integration → Rotate key. The old key is invalidated immediately, and all requests using it will receive a 401 from that moment. Generate a new key and update your environment variables.
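The environment-variable practice above looks like this in application code. `LECTR_ORG_KEY` and `build_proxy_headers` are illustrative names, not part of Lectr's documented API:

```python
import os

def build_proxy_headers(org_key: str) -> dict:
    """Attach the org key to outgoing proxy requests."""
    return {"Authorization": f"Bearer {org_key}"}

# Pull the key from the environment, never from source code.
# The default here exists only so the sketch runs standalone.
org_key = os.environ.get("LECTR_ORG_KEY", "lc_key_example")
headers = build_proxy_headers(org_key)
```

Keeping the key out of source files also keeps it out of version control, which is where most accidental leaks happen.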

Transport security

All Lectr endpoints — proxy, dashboard API, management API — are HTTPS only. Plain HTTP is not accepted.

Trust model

Lectr is a transparent proxy. By definition, it processes your requests in memory. This requires a degree of trust. What you are trusting Lectr with:
  • Your provider API keys pass through in memory
  • Your request metadata is stored and processed to power your dashboard
  • Your org traffic data is visible to Lectr’s infrastructure
What you are not trusting Lectr with:
  • Your prompt content — never read or stored
  • Your response content — never read or stored
  • Permanent access to your API keys — they exist in memory per request only
For teams with enterprise security requirements: BYOK (Bring Your Own Key) — where your provider API keys are stored encrypted in your own infrastructure and never pass through Lectr’s servers — is on the roadmap for Phase 3.3.

Data isolation

Each org’s data is isolated at the database level. Org scoping is enforced on every query — there is no query path that returns data across org boundaries. Dashboard authentication via Auth0 ensures users only see data for orgs they are members of. Org membership and roles are enforced at the API layer on every request.
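Org scoping of this kind can be sketched as an unconditional filter on every query. The table name, columns, and in-memory SQLite backend below are illustrative assumptions, not Lectr's actual storage layer:

```python
# Illustrative sketch of per-org query scoping; schema is hypothetical.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE events (org_id TEXT, model TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [("org_a", "gpt-4o"), ("org_b", "claude-3-5-sonnet")])

def events_for_org(org_id: str) -> list:
    # The org_id filter is applied unconditionally; there is no code path
    # that runs this query without it.
    return conn.execute(
        "SELECT model FROM events WHERE org_id = ?", (org_id,)
    ).fetchall()

rows = events_for_org("org_a")  # only org_a's rows are ever visible
```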

Responsible disclosure

If you discover a security vulnerability in Lectr, please report it responsibly. Do not open a public GitHub issue for security vulnerabilities. Contact: security@lectr.ai. We aim to acknowledge reports within 24 hours and resolve critical issues within 72 hours.