Live · last 60 min

Active Users (1h) · Requests (1h) · Tokens In (1h) · Tokens Out (1h)

Period:

Total Requests · Total Cost · Active Users · Avg Cost / Request · Input Tokens · Output Tokens · Errors · Cache Hit Rate

Cost Trends

Last 7 days

Provider Load & Routing

Top Spending Users

Create User

All Users

Name | Email | Status | Role | Weekly Cap | Organization | Actions

All Organizations

Name | Slug | Owner | Members | API Spend Cap | Rollover | DLP | Created | Actions
Create App

All Apps

Name | Slug | Contact | Active | DLP | Created | Actions

Model Router

Configure model aliases to override which LLM is actually used when clients request a specific model via the OpenAI or Anthropic gateway endpoints. Org-specific aliases take priority over global ones.

Requested Model | Routed To | Scope | Actions
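The precedence rule above can be sketched as a simple two-level lookup. This is an illustration only: the alias-table shapes and the `resolve_model` helper are assumptions, not the gateway's internals.

```python
def resolve_model(requested, org_id, org_aliases, global_aliases):
    """Resolve a requested model name to the model actually routed to.

    Org-specific aliases take priority over global ones; with no
    matching alias, the requested model passes through unchanged.
    """
    alias = org_aliases.get((org_id, requested)) or global_aliases.get(requested)
    return alias or requested
```

For example, with an org alias `("acme", "gpt-4o") -> "ollama/llama3.1"` and a global alias `"gpt-4o" -> "claude-3-5-sonnet"`, org "acme" gets the Ollama model while every other org gets the global override.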

Agent Configurations

Each agent can have its own model per cost mode. Sub-agents default to the role model but can be individually overridden. Purple-bordered inputs indicate a per-agent override. Changes are saved to model_config.yaml.
Name | Agent ID | Role | Normal Model | Heavy Model | Max Model
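A hypothetical sketch of what one agent's entry in model_config.yaml might look like; the field names are guesses based on the columns above, not the actual schema.

```yaml
agents:
  - name: Researcher        # display name
    agent_id: researcher
    role: analyst           # sub-agents default to this role's models
    models:
      normal: gpt-4o-mini   # Normal cost mode
      heavy: gpt-4o         # Heavy cost mode
      max: o1               # Max cost mode (a per-agent override)
```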

Available Models

Gateway Logs

Recent API requests through the OpenAI/Anthropic gateway endpoints.

Time | Key | User | Endpoint | Requested | Actual | Tokens | Cost

Per-User Usage

User | Email | Requests | Input Tokens | Output Tokens | Cost | Actions
DLP

Stop sensitive data before it leaves your network.
Requests are scanned in-gateway. Matched values are replaced with tokens like [EMAIL_1] before reaching the LLM provider — the model can still reason about them by reference, but the raw value never leaves your infrastructure. Turn protection on per-organization (for user keys) or per-app (for app keys).
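The tokenization described above can be sketched as follows. This is a minimal illustration assuming one regex per rule kind; it is not the gateway's actual rule engine.

```python
import re

# One pattern per rule kind (illustrative; real rules are configured per org/app).
PATTERNS = {"EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")}

def redact(text):
    counters, mapping = {}, {}

    def tokenize(kind, match):
        value = match.group(0)
        if value not in mapping:                  # same value -> same token
            counters[kind] = counters.get(kind, 0) + 1
            mapping[value] = f"[{kind}_{counters[kind]}]"
        return mapping[value]

    for kind, pattern in PATTERNS.items():
        text = pattern.sub(lambda m, k=kind: tokenize(k, m), text)
    return text, mapping                          # mapping never leaves the gateway
```

Because identical values map to the same token, the model can still refer to "[EMAIL_1]" consistently across the conversation while the raw address stays inside your infrastructure.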
Redactions (24h) · Bypasses (24h) · Organizations protected · Apps protected
Click a row to configure rules.
Name | Status | Rules enabled | Last activity

Recent activity

When | Kind | Scope | Rule | Matches | Size | Request | Detail
Enable protection for an organization or app, then make a request — activity will appear here.

Gateway Settings

Configure gateway-wide features. Users can opt in to individual features from their dashboard.

⚖️ Routing Strategy

Controls how the gateway orders fallback models on each request. Takes effect immediately — no restart required. Super-admin only.

🦙 Ollama (Self-Hosted Models)

Connect a remote or local Ollama instance to route requests through self-hosted models at zero cost. Models are referenced as ollama/model-name.
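The ollama/ prefix convention implies a routing split along these lines. This is an illustrative sketch only; the gateway's real routing code is not shown here.

```python
def provider_for(model):
    # Models referenced as "ollama/<name>" route to the self-hosted
    # Ollama instance at zero cost; everything else goes to a hosted provider.
    if model.startswith("ollama/"):
        return "ollama", model.split("/", 1)[1]
    return "hosted", model
```

For example, `provider_for("ollama/llama3.1")` yields `("ollama", "llama3.1")`, while `provider_for("gpt-4o")` stays on the hosted path.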

🔑 LLM Provider Credentials

Manage API keys and connection settings for each LLM provider without editing .env. Stored AES-256-GCM encrypted. Database values override environment variables at startup; changes hot-reload immediately. Super-admin only.
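The override order (database value first, environment variable as fallback) could look like this sketch. The env-var naming convention used here is an assumption.

```python
import os

def effective_credential(provider, db_creds):
    # Database-stored values override environment variables
    # (illustrative; encryption/decryption of stored keys is omitted).
    return db_creds.get(provider) or os.environ.get(f"{provider.upper()}_API_KEY")
```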

🔄 Model Fallback Chains

Configure which model steps in when a primary model fails (provider error, credit expired, or downtime).

Primary Model | Fallback Chain | Action
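The fallback behavior can be sketched as an ordered retry over the chain. Here `call` is a hypothetical stand-in for the actual provider invocation, which raises on failure.

```python
def complete_with_fallback(chain, call):
    """Try each model in order; on a failure, step to the next one.

    `call(model)` is assumed to raise on provider error, expired
    credit, or downtime, and to return the completion otherwise.
    """
    last_err = None
    for model in chain:
        try:
            return model, call(model)
        except Exception as err:
            last_err = err
    raise RuntimeError("all models in the fallback chain failed") from last_err
```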
📊 User Activity Export

Export all API activity for a specific user as a downloadable CSV. Includes input/output conversations, token counts, model used, cost, and latency. Useful for auditing and debugging user-side issues.
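A minimal sketch of the CSV serialization, with column names guessed from the fields listed above (they are assumptions, not the export's actual header).

```python
import csv
import io

def export_user_csv(rows):
    # Serialize one user's activity rows to CSV text.
    fields = ["time", "model", "input_tokens", "output_tokens", "cost_usd", "latency_ms"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fields)
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()
```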

Error Reports

Time | Source | Type | Model | Error | User | Agent | Version