
Overview

By default, Lectr tracks every request your application makes. But without context, all you see is an aggregate — total cost, total requests, average latency across everything. Feature tagging tells Lectr which part of your product made each request. Once tagged, the dashboard breaks down cost, latency, errors, and token usage by feature.
Feature tagging is the foundation for several Lectr capabilities — including cost breakdowns, anomaly detection, model recommendations, and routing rules.

Scope                       Cost
All requests (aggregate)    $142.00

Feature       Cost      Share
chat          $89.00    63%
summariser    $38.00    27%
classifier    $8.00     6%
onboarding    $7.00     4%

One header. Significantly more useful data.

How it works

Add one header to your requests:
X-Lectr-Feature: chat
Lectr reads this header when the request enters the proxy and associates the value with the request metadata it captures — including cost, latency, tokens, and errors. This value then appears throughout the dashboard in feature-level charts, tables, and filters. The header is optional. Requests without it are grouped under untagged in the dashboard. You will not be warned or blocked for missing it — but you will get significantly less useful data.

Adding the header

The simplest approach is to create a dedicated client instance per feature:
import OpenAI from "openai";

// One client per feature
const chatClient = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://proxy.lectr.ai/v1",
  defaultHeaders: {
    "X-Lectr-Key": process.env.LECTR_KEY,
    "X-Lectr-Feature": "chat",
  },
});

const summariserClient = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
  baseURL: "https://proxy.lectr.ai/v1",
  defaultHeaders: {
    "X-Lectr-Key": process.env.LECTR_KEY,
    "X-Lectr-Feature": "summariser",
  },
});

Per-request override

If you share a single client across your application, override the header per request instead:
const response = await client.chat.completions.create(
  {
    model: "gpt-4o-mini",
    messages: [{ role: "user", content: input }],
  },
  {
    headers: {
      "X-Lectr-Feature": "classifier",
    },
  },
);
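If you tag many call sites this way, a small helper keeps them terse. This is a sketch, not part of the OpenAI SDK or Lectr — `withFeature` simply builds the per-request options object shown above:

```typescript
// Hypothetical helper: builds the per-request options object so each
// call site only names the feature. The header name is the one Lectr reads.
function withFeature(feature: string): { headers: Record<string, string> } {
  return { headers: { "X-Lectr-Feature": feature } };
}

// Usage:
// await client.chat.completions.create(params, withFeature("classifier"));
```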

Framework patterns

If your application uses a wrapper or service layer around your AI client, pass the feature tag at the call site:
// lib/ai.ts — shared AI wrapper
export async function complete(
  messages: Message[],
  options: { feature: string; model?: string },
) {
  return client.chat.completions.create(
    {
      model: options.model ?? "gpt-4o",
      messages,
    },
    {
      headers: {
        "X-Lectr-Feature": options.feature,
      },
    },
  );
}

// Usage — feature tag lives at the call site, not buried in the wrapper
const response = await complete(messages, { feature: "onboarding-assistant" });
This keeps feature tags explicit and searchable across your codebase.

Naming your features

Feature names are freeform strings. A few conventions make your dashboard more useful.

Use names that match how you talk about the feature internally:
✅  chat
✅  summariser
✅  classifier
✅  onboarding-assistant
✅  code-review
✅  email-drafter

❌  feature1
❌  api
❌  backend
❌  llm-call
Keep them short and lowercase. Feature names appear in charts, tables, and dropdowns; short names are easier to scan.

Use hyphens for multi-word names:
✅  onboarding-assistant
✅  code-review
❌  onboardingAssistant
❌  code_review
❌  Code Review
Be consistent across your codebase. Lectr treats each unique string as a separate feature: if one service sends summariser and another sends summary, they appear as two features. Keep your feature names in a constants file to avoid typos:
// lib/features.ts
export const Features = {
  CHAT: "chat",
  SUMMARISER: "summariser",
  CLASSIFIER: "classifier",
  ONBOARDING: "onboarding-assistant",
} as const;
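With TypeScript you can go a step further and derive a union type from the constants, so a typo like summary fails to compile instead of appearing as a stray feature in the dashboard. A sketch building on the constants file above (in a real project you would export Features and Feature from lib/features.ts):

```typescript
// Single source of truth for feature names.
const Features = {
  CHAT: "chat",
  SUMMARISER: "summariser",
  CLASSIFIER: "classifier",
  ONBOARDING: "onboarding-assistant",
} as const;

// Derived union: "chat" | "summariser" | "classifier" | "onboarding-assistant"
type Feature = (typeof Features)[keyof typeof Features];

// A helper typed this way rejects unknown names at compile time:
function tagFor(feature: Feature): Record<string, string> {
  return { "X-Lectr-Feature": feature };
}
```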

What you unlock in the dashboard

Once requests are tagged, Lectr surfaces feature-level insights across every metric.

Cost breakdown

See exactly which features drive your AI spend:
Feature          Requests   Tokens      Cost       vs last week
chat             4,821      2.1M        $89.00     ↑ 22%
summariser       1,204      890K        $38.00     ↓ 8%
classifier       9,442      420K        $8.00      → 0%
onboarding         312      180K        $7.00      ↑ 41%
Latency analysis

Average and p95 latency per feature. If one feature slows down, you can see it immediately.

Error rates

Understand which features are experiencing reliability issues.

Model recommendations

Lectr analyses usage patterns per feature and suggests cheaper models when quality is unlikely to change.

Routing rules

Feature-based routing rules use feature tags to decide which model handles each request. This capability is in development and will be available in a future release.

Untagged traffic

Requests without X-Lectr-Feature are grouped under untagged in the dashboard. If a large share of your traffic is untagged, the dashboard will prompt you:
💡 38% of your requests have no feature tag.
   Add X-Lectr-Feature to unlock per-feature cost and latency breakdowns.
There is no penalty for untagged traffic. But the more of your traffic is tagged, the more useful your dashboard becomes.
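A small predicate like the following (a sketch, not part of any SDK) can be used in tests or middleware to catch requests that would land in the untagged bucket before they ship:

```typescript
// Returns true if the outgoing headers carry a non-empty feature tag.
function isTagged(headers: Record<string, string | undefined>): boolean {
  const value = headers["X-Lectr-Feature"];
  return typeof value === "string" && value.length > 0;
}
```

Wiring this into a development-mode assertion in your AI wrapper is an easy way to keep the untagged share near zero.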

Combining with task types

Feature tagging tells Lectr which feature made a request. Task type tagging tells Lectr what kind of work the request performs. Used together, they give Lectr enough context to make confident model recommendations:
X-Lectr-Feature: classifier
X-Lectr-Task: classification
→ Strong recommendation: gpt-4o-mini handles classification
  at a fraction of gpt-4o cost
See Task Types for how to add task tagging alongside feature tagging.
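In code, combining the two just means sending both headers on the same request. A hypothetical helper (neither withTags nor its option shape comes from any SDK) makes the pattern concrete:

```typescript
// Builds per-request options carrying a feature tag and, optionally,
// a task tag (see the Task Types page for valid task values).
function withTags(
  feature: string,
  task?: string,
): { headers: Record<string, string> } {
  const headers: Record<string, string> = { "X-Lectr-Feature": feature };
  if (task !== undefined) headers["X-Lectr-Task"] = task;
  return { headers };
}

// Usage:
// await client.chat.completions.create(params, withTags("classifier", "classification"));
```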

Reference

Header           Value                                     Required
X-Lectr-Feature  Any short string identifying the feature  No
Valid values: any non-empty string up to 100 characters. Lectr does not validate feature names. Any string is accepted and stored as provided.
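Because Lectr accepts any string, a client-side check is the only place a bad name gets caught. A minimal sketch of the documented constraint (non-empty, at most 100 characters), plus an optional lint for the lowercase-hyphen convention this page recommends:

```typescript
// Matches the documented constraint: any non-empty string up to 100 chars.
function isValidFeatureName(name: string): boolean {
  return name.length > 0 && name.length <= 100;
}

// Stricter, optional: lowercase words separated by single hyphens.
function followsNamingConvention(name: string): boolean {
  return /^[a-z0-9]+(-[a-z0-9]+)*$/.test(name);
}
```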

FAQ

What happens if different services use different names for the same feature?

Lectr treats each unique string as a separate feature. If you send summariser from one service and summary from another, the dashboard shows two separate features. Use a constants file to keep names consistent across your codebase.
Can I rename a feature later?

Yes. Update the header value in your code. Historical requests keep the original tag.
Is there a limit on the number of features?

No hard limit. In practice, most teams have between 3 and 15 features. Very large numbers of feature tags (50+) can make the dashboard harder to read — consider grouping related features if you find yourself with many.
Does the header add latency to requests?

No. The header is read once when the request enters the proxy and stored as metadata after the request completes.
Does feature tagging work with streaming?

Yes. Feature tagging works the same for streaming and non-streaming requests.