
Providers

| Provider | Detection | Endpoint |
| --- | --- | --- |
| OpenAI | Automatic | https://api.openai.com |
| Anthropic | Automatic | https://api.anthropic.com |
| Groq | Automatic | https://api.groq.com/openai |
| Google Gemini | Automatic | https://generativelanguage.googleapis.com/v1beta/openai |
| Azure OpenAI | `X-Lectr-Provider: azure` | Per-org, configured in dashboard |
All providers use the same proxy endpoint: https://proxy.lectr.ai/v1

OpenAI

| Model | Routing target | Recommendations |
| --- | --- | --- |
| gpt-4o | ✓ | Downgrade → gpt-4o-mini |
| gpt-4o-mini | ✓ | |
| gpt-4-turbo | ✓ | Downgrade → gpt-4o |
| gpt-4 | ✓ | Downgrade → gpt-4o |
| gpt-4.1 | ✓ | Downgrade → gpt-4.1-mini |
| gpt-4.1-mini | ✓ | Downgrade → gpt-4.1-nano |
| gpt-4.1-nano | ✓ | |
| gpt-3.5-turbo | ✓ | Downgrade → gpt-4.1-nano |
| o1 | ✓ | Downgrade → o1-mini |
| o1-mini | ✓ | Downgrade → gpt-4o |
| o3 | ✓ | Downgrade → o3-mini |
| o3-mini | ✓ | Downgrade → gpt-4o-mini |
| o4-mini | ✓ | Downgrade → gpt-4o-mini |

Anthropic

| Model | Routing target | Recommendations |
| --- | --- | --- |
| claude-3-opus-20240229 | ✓ | Downgrade → claude-3-5-sonnet-20241022 (conservative) or claude-3-5-haiku-20241022 (aggressive) |
| claude-3-5-sonnet-20241022 | ✓ | Downgrade → claude-3-5-haiku-20241022 |
| claude-3-sonnet-20240229 | ✓ | Downgrade → claude-3-5-haiku-20241022 |
| claude-3-5-haiku-20241022 | ✓ | |
| claude-3-haiku-20240307 | ✓ | |
Anthropic model names include a date suffix (e.g. 20241022). Lectr also uses prefix matching — new Claude models are detected automatically even if not yet in this list.

Groq

| Model | Routing target | Recommendations |
| --- | --- | --- |
| llama-3.2-90b-vision | ✓ | Downgrade → llama-3.1-70b-versatile |
| llama-3.1-70b-versatile | ✓ | Downgrade → llama-3.1-8b-instant |
| llama-3.1-8b-instant | ✓ | |
| mixtral-8x7b-32768 | ✓ | Downgrade → llama-3.1-8b-instant |
| gemma2-9b-it | ✓ | Downgrade → llama-3.1-8b-instant |

Google Gemini

| Model | Routing target | Recommendations |
| --- | --- | --- |
| gemini-1.5-pro | ✓ | Downgrade → gemini-1.5-flash |
| gemini-1.5-flash | ✓ | |
| gemini-2.0-flash | ✓ | Downgrade → gemini-1.5-flash |

Azure OpenAI

Azure uses deployment names as model identifiers — these are org-specific and not in the detection registry. Always use X-Lectr-Provider: azure and configure your endpoint in the dashboard. See Multi-Provider Setup → Azure for setup instructions.

Model not listed?

Lectr uses prefix matching as a fallback — so new model versions from known providers are usually detected correctly even before this list is updated. If a model isn’t detected:
  • Use X-Lectr-Provider to set the provider manually
  • The request goes through — detection failure never blocks traffic
  • Cost tracking shows $0.00 (pricing unavailable) until the model is added
Model list and pricing are updated regularly. If you’re missing a model you use in production, let us know.

Routing targets

The “Routing target” column indicates whether a model can be used as the destination in a routing rule. All listed models are valid routing targets. Routing rules are coming in a future release.

Recommendation aggressiveness

Recommendations stay within provider — Lectr won’t suggest switching from Anthropic to OpenAI. Within a provider, the downgrade aggressiveness depends on task type:
| Task type | Aggressiveness |
| --- | --- |
| classification | Aggressive — smallest viable model |
| extraction | Aggressive (short prompts) / Moderate (long prompts) |
| summarisation | Moderate (short prompts) / Conservative (long prompts) |
| generation | Conservative — relies on other signals |
| reasoning | Never recommended for downgrade |
See Task Types for how to tag your requests.
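The mapping above can be sketched as a simple lookup. Note that the short/long prompt threshold below is a made-up illustration; the cutoff Lectr actually uses is not documented here.

```python
def aggressiveness(task_type: str, prompt_tokens: int, short_cutoff: int = 1000) -> str:
    """Map a tagged task type to a downgrade-recommendation aggressiveness.

    `short_cutoff` is a hypothetical token threshold for "short" prompts.
    """
    short = prompt_tokens < short_cutoff
    table = {
        "classification": "aggressive",  # smallest viable model
        "extraction": "aggressive" if short else "moderate",
        "summarisation": "moderate" if short else "conservative",
        "generation": "conservative",  # relies on other signals
        "reasoning": "never",  # reasoning tasks are never downgraded
    }
    return table[task_type]
```

For example, a short extraction prompt gets an aggressive recommendation, while a long summarisation prompt only gets a conservative one.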