Overview

Errors from Lectr come in two flavours:
  • Proxy errors — Lectr itself rejected the request before it reached your provider
  • Provider errors — your provider returned an error, which Lectr forwards in OpenAI format
Both are returned in the same shape so your existing error handling works regardless of where the error originated.
```json
{
  "error": {
    "message": "missing X-Lectr-Key header",
    "type": "authentication_error",
    "code": "missing_lectr_key"
  }
}
```
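If you inspect response bodies yourself (outside an SDK), the envelope above can be narrowed with a small type guard. This is a sketch based only on the fields shown in the example; providers may attach extra fields:

```typescript
// Shape of the error envelope shown above.
interface LectrError {
  error: {
    message: string;
    type: string;
    code: string;
  };
}

// Narrow an unknown response body to the error envelope.
function isLectrError(body: unknown): body is LectrError {
  if (typeof body !== "object" || body === null) return false;
  const e = (body as { error?: unknown }).error;
  return (
    typeof e === "object" &&
    e !== null &&
    typeof (e as { code?: unknown }).code === "string" &&
    typeof (e as { message?: unknown }).message === "string"
  );
}
```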

Proxy errors

These come from Lectr before your provider is ever contacted.

401 — Authentication errors

| Code | Message | Fix |
| --- | --- | --- |
| missing_lectr_key | Missing X-Lectr-Key header | Add X-Lectr-Key header to your request |
| invalid_lectr_key | Invalid or revoked X-Lectr-Key | Check your key in the dashboard — it may have been rotated |
| org_disabled | Organisation is disabled | Contact support |
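The simplest way to avoid missing_lectr_key is to build the headers in one place. A minimal sketch — the helper name and any headers beyond X-Lectr-Key and X-Lectr-Provider are illustrative assumptions:

```typescript
// Build the headers a Lectr request needs.
function lectrHeaders(lectrKey: string, provider: string): Record<string, string> {
  return {
    "Content-Type": "application/json",
    // Required on every request; omitting it yields missing_lectr_key (401).
    "X-Lectr-Key": lectrKey,
    // One of: openai, anthropic, groq, gemini, azure.
    "X-Lectr-Provider": provider,
  };
}
```

Pass the result as the `headers` option of fetch or whichever HTTP client you use.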

400 — Bad request errors

| Code | Message | Fix |
| --- | --- | --- |
| payload_too_large | Request body exceeds 2MB limit | Reduce prompt size or split into smaller requests |
| invalid_provider | Unknown value for X-Lectr-Provider | Use a valid provider: openai, anthropic, groq, gemini, azure |
| azure_config_missing | No Azure endpoint configured for this org | Configure your Azure endpoint in Settings → Providers → Azure |
| invalid_request | Malformed request body | Ensure request body is valid JSON with a model field |
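For invalid_request, the requirement is valid JSON with a model field. A minimal body that satisfies it — the model name and message content here are examples only:

```typescript
// Smallest chat-completion body that clears the invalid_request check:
// valid JSON with a model field.
const body = JSON.stringify({
  model: "gpt-4o-mini",
  messages: [{ role: "user", content: "Hello" }],
});
```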

429 — Rate limit errors

| Code | Message | Fix |
| --- | --- | --- |
| rate_limit_exceeded | Too many requests | Back off and retry — rate limits are per org key |
| concurrent_streams_exceeded | Too many concurrent streaming requests | Reduce concurrent stream count |
Rate limiting is per org key and per instance. If you’re hitting limits unexpectedly, check the dashboard for unusual traffic spikes — the anomaly detector will surface them.

413 — Payload errors

| Code | Message | Fix |
| --- | --- | --- |
| request_too_large | Request exceeds maximum allowed size | Keep request bodies under 2MB |
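A pre-flight size check avoids burning a request on a guaranteed 413. This sketch uses the 2MB limit quoted above; check your plan's actual limit:

```typescript
// 2MB body limit, per the limits documented above.
const MAX_BODY_BYTES = 2 * 1024 * 1024;

function bodyTooLarge(body: string): boolean {
  // Measure encoded bytes, not string length: multi-byte characters
  // take more than one byte on the wire.
  return new TextEncoder().encode(body).length > MAX_BODY_BYTES;
}
```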

503 — Proxy errors

| Code | Message | Fix |
| --- | --- | --- |
| provider_unreachable | Could not connect to provider | Check provider status — retry with backoff |

Provider errors

When your provider returns an error, Lectr normalises it to OpenAI format and forwards it with the original HTTP status code. The error.type and error.code reflect the provider’s original error. Common provider errors you’ll see:
| Status | Type | Typical cause |
| --- | --- | --- |
| 401 | authentication_error | Invalid or expired provider API key |
| 429 | rate_limit_error | Provider rate limit hit |
| 400 | invalid_request_error | Invalid model name, bad parameters |
| 500 | server_error | Provider internal error |
| 503 | service_unavailable | Provider outage |

Identifying error source

The dashboard attributes errors to their source — proxy or provider — so you can tell at a glance whether the issue is with Lectr or with your provider. The Recent Failures table shows:
| Request ID | Model | Latency | Status |
| --- | --- | --- | --- |
| ed3f80fa… | gpt-4o-mini | 101ms | openai_error |
| 10405cb9… | gpt-4o | 96ms | openai_error |
| 3e9db15f… | gpt-4o | | proxy_error |
proxy_error means Lectr rejected it. Everything else means the provider did.
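You can make the same attribution in code by checking the error code against Lectr's own codes. The set below is collected from the proxy-error tables above; this assumes those codes are stable and exhaustive:

```typescript
// Proxy error codes from the tables above; any other code
// originated with the provider.
const PROXY_ERROR_CODES = new Set([
  "missing_lectr_key", "invalid_lectr_key", "org_disabled",
  "payload_too_large", "invalid_provider", "azure_config_missing",
  "invalid_request", "rate_limit_exceeded", "concurrent_streams_exceeded",
  "request_too_large", "provider_unreachable",
]);

function isProxyError(code: string | null | undefined): boolean {
  return code != null && PROXY_ERROR_CODES.has(code);
}
```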

Error handling in code

Provider errors pass through in OpenAI SDK format — your existing error handling works unchanged.
```typescript
import OpenAI, { APIError } from "openai";

// Point the client at Lectr as configured for your setup.
const client = new OpenAI();

try {
  const response = await client.chat.completions.create({ ... });
} catch (err) {
  if (err instanceof APIError) {
    console.error(err.status);   // HTTP status
    console.error(err.message);  // error message
    console.error(err.code);     // error code
  }
}
```

Retrying errors

Not all errors are worth retrying. A quick guide:
| Status | Retry? | Notes |
| --- | --- | --- |
| 400 | No | Fix the request first |
| 401 | No | Fix your API key |
| 413 | No | Reduce payload size |
| 429 | Yes | Exponential backoff — wait before retrying |
| 500 | Yes | Retry once — if it persists, check provider status |
| 503 | Yes | Retry with backoff — likely a provider outage |
For 429 errors, start with a 1 second delay and double on each retry up to a maximum of 60 seconds.
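That schedule (1s, doubling, capped at 60s) can be sketched as follows. The maxRetries knob is an assumption; the retryable statuses follow the table above:

```typescript
// Delay for the nth retry: 1s doubling each attempt, capped at 60s.
function backoffDelayMs(attempt: number, baseMs = 1000, capMs = 60000): number {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

// Retry a request on 429/500/503, waiting backoffDelayMs between attempts.
async function withRetry<T>(fn: () => Promise<T>, maxRetries = 5): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    try {
      return await fn();
    } catch (err: any) {
      const status = err?.status;
      const retryable = status === 429 || status === 500 || status === 503;
      if (!retryable || attempt >= maxRetries) throw err;
      await new Promise((resolve) => setTimeout(resolve, backoffDelayMs(attempt)));
    }
  }
}
```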