## Overview
Errors from Lectr come in two flavours:
- Proxy errors — Lectr itself rejected the request before it reached your provider
- Provider errors — your provider returned an error, which Lectr forwards in OpenAI format
Both are returned in the same shape, so your existing error handling works
regardless of where the error originated.
```json
{
  "error": {
    "message": "missing X-Lectr-Key header",
    "type": "authentication_error",
    "code": "missing_lectr_key"
  }
}
```
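For defensive parsing, the envelope above maps to a small TypeScript type. The `LectrError` name and the `isLectrError` guard below are illustrative sketches, not part of any SDK:

```typescript
// Shape of the error envelope shown above.
interface LectrError {
  error: {
    message: string;
    type: string;
    code: string;
  };
}

// Type guard for narrowing an unknown response body to LectrError.
function isLectrError(body: unknown): body is LectrError {
  if (typeof body !== "object" || body === null) return false;
  const err = (body as { error?: unknown }).error;
  if (typeof err !== "object" || err === null) return false;
  const { message, type, code } = err as Record<string, unknown>;
  return (
    typeof message === "string" &&
    typeof type === "string" &&
    typeof code === "string"
  );
}
```

A guard like this lets you branch on `error.code` safely when reading a non-2xx response body.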
## Proxy errors
These come from Lectr before your provider is ever contacted.
### 401 — Authentication errors

| Code | Message | Fix |
|---|---|---|
| missing_lectr_key | Missing X-Lectr-Key header | Add X-Lectr-Key header to your request |
| invalid_lectr_key | Invalid or revoked X-Lectr-Key | Check your key in the dashboard — it may have been rotated |
| org_disabled | Organisation is disabled | Contact support |
### 400 — Bad request errors

| Code | Message | Fix |
|---|---|---|
| payload_too_large | Request body exceeds 2MB limit | Reduce prompt size or split into smaller requests |
| invalid_provider | Unknown value for X-Lectr-Provider | Use a valid provider: openai, anthropic, groq, gemini, azure |
| azure_config_missing | No Azure endpoint configured for this org | Configure your Azure endpoint in Settings → Providers → Azure |
| invalid_request | Malformed request body | Ensure the request body is valid JSON with a model field |
### 429 — Rate limit errors

| Code | Message | Fix |
|---|---|---|
| rate_limit_exceeded | Too many requests | Back off and retry — rate limits are per org key |
| concurrent_streams_exceeded | Too many concurrent streaming requests | Reduce concurrent stream count |
Rate limiting is per org key and per instance. If you’re hitting limits
unexpectedly, check the dashboard for unusual traffic spikes — the anomaly
detector will surface them.
### 413 — Payload errors

| Code | Message | Fix |
|---|---|---|
| request_too_large | Request exceeds maximum allowed size | Keep request bodies under 2MB |
### 503 — Proxy errors

| Code | Message | Fix |
|---|---|---|
| provider_unreachable | Could not connect to provider | Check provider status — retry with backoff |
## Provider errors
When your provider returns an error, Lectr normalises it to OpenAI format and
forwards it with the original HTTP status code. The error.type and error.code
reflect the provider’s original error.
Common provider errors you’ll see:
| Status | Type | Typical cause |
|---|---|---|
| 401 | authentication_error | Invalid or expired provider API key |
| 429 | rate_limit_error | Provider rate limit hit |
| 400 | invalid_request_error | Invalid model name, bad parameters |
| 500 | server_error | Provider internal error |
| 503 | service_unavailable | Provider outage |
## Identifying error source
The dashboard attributes errors to their source — proxy or provider — so you
can tell at a glance whether the issue is with Lectr or with your provider.
The Recent Failures table shows:
| Request ID | Model | Latency | Status |
|---|---|---|---|
| ed3f80fa… | gpt-4o-mini | 101ms | openai_error |
| 10405cb9… | gpt-4o | 96ms | openai_error |
| 3e9db15f… | gpt-4o | — | proxy_error |
proxy_error means Lectr rejected it. Everything else means the provider did.
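In code, that rule is a one-liner. This hypothetical helper (not a Lectr API) classifies a Status value like those in the table above:

```typescript
type ErrorSource = "proxy" | "provider";

// proxy_error means Lectr rejected the request before it reached the
// provider; any other status value (e.g. openai_error) means the
// provider returned the error.
function errorSource(status: string): ErrorSource {
  return status === "proxy_error" ? "proxy" : "provider";
}
```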
## Error handling in code
Provider errors pass through in OpenAI SDK format — your existing error handling
works unchanged.
```typescript
import OpenAI, { APIError } from "openai";

// Assumes `client` is an OpenAI instance already configured to route
// through Lectr (base URL and X-Lectr-Key header set at construction).
try {
  const response = await client.chat.completions.create({ ... });
} catch (err) {
  if (err instanceof APIError) {
    console.error(err.status);  // HTTP status
    console.error(err.message); // error message
    console.error(err.code);    // error code
  }
}
```
## Retrying errors
Not all errors are worth retrying. A quick guide:
| Status | Retry? | Notes |
|---|---|---|
| 400 | No | Fix the request first |
| 401 | No | Fix your API key |
| 413 | No | Reduce payload size |
| 429 | Yes | Exponential backoff — wait before retrying |
| 500 | Yes | Retry once — if it persists, check provider status |
| 503 | Yes | Retry with backoff — likely a provider outage |
For 429 errors, start with a 1-second delay and double it on each retry, up to
a maximum of 60 seconds.
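As a sketch, that policy might look like the following in TypeScript. The `withRetry` name and the attempt cap of 5 are assumptions for illustration, not Lectr APIs:

```typescript
// Statuses the table above marks as retryable.
const RETRYABLE = new Set([429, 500, 503]);

// Exponential backoff: 1 s on the first retry, doubling, capped at 60 s.
function backoffMs(attempt: number): number {
  return Math.min(1000 * 2 ** attempt, 60_000);
}

// Hypothetical wrapper: retries fn when the thrown error carries a
// retryable HTTP status, waiting backoffMs between attempts.
async function withRetry<T>(
  fn: () => Promise<T>,
  maxAttempts = 5, // assumed cap — tune for your workload
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err) {
      const status =
        typeof err === "object" && err !== null
          ? (err as { status?: number }).status
          : undefined;
      if (
        attempt + 1 >= maxAttempts ||
        status === undefined ||
        !RETRYABLE.has(status)
      ) {
        throw err; // non-retryable, or out of attempts
      }
      await new Promise((resolve) => setTimeout(resolve, backoffMs(attempt)));
    }
  }
}
```

The OpenAI SDK's `APIError` exposes a numeric `status`, so the wrapper composes with the catch block shown earlier.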