A unified AI model routing platform. One API key, … models, smart auto-routing.
Auto routing is enabled by default. To control it:
Console → Settings → Routing → Auto routing toggle
Once enabled, GateRouter automatically selects the best model for each request. If you prefer to pick models yourself, skip this step and specify models directly (e.g. anthropic/claude-sonnet-4.6).
Fully compatible with the OpenAI API. Supports Python, Node.js, curl, and tools across the ecosystem.
Point your client's base URL at https://api.gaterouter.ai/openai/v1 and swap in your GateRouter API key to start using it.
from openai import OpenAI
client = OpenAI(
api_key="GATEROUTER_API_KEY",  # your GateRouter API key, from gaterouter.ai (API Key)
base_url="https://api.gaterouter.ai/openai/v1",
)
completion = client.chat.completions.create(
model="auto",
messages=[
{"role": "system", "content": "system prompt"},
{"role": "user", "content": "how are you?"}
],
)
# print the assistant's reply (role=assistant)
print(completion.choices[0].message.content)
Example response:
{
"id": "243c850e-214c-431e-977f-ebaf4aa95f56",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! Nice to meet you. How can I help you?"
},
"finish_reason": "stop"
}
],
"created": 1773408946,
"model": "deepseek.v3-v1:0",
"object": "chat.completion",
"usage": {
"prompt_tokens": 5,
"completion_tokens": 15,
"total_tokens": 20
}
}
If you already have OpenClaw installed, follow the steps below to connect GateRouter.
In a terminal, run:
openclaw dashboard
The browser will open the console (usually http://127.0.0.1:18789). If the browser does not open automatically, please visit that address manually.
Select Config → Raw mode.
Add env and set GATEROUTER_API_KEY to your GateRouter API Key:
env: {
vars: {
GATEROUTER_API_KEY: 'sk-or-v1-xxxxxxxxxxxxxxxx',
},
},
Add models with baseUrl set to https://api.gaterouter.ai/openai/v1:
models: {
mode: 'merge',
providers: {
gaterouter: {
baseUrl: 'https://api.gaterouter.ai/openai/v1',
apiKey: '${GATEROUTER_API_KEY}',
api: 'openai-completions',
models: [
{
id: 'gaterouter/auto',
name: 'GateRouter Auto',
api: 'openai-completions',
reasoning: false,
input: ['text'],
cost: {
input: 0,
output: 0,
cacheRead: 0,
cacheWrite: 0,
},
contextWindow: 200000,
maxTokens: 8192,
},
],
},
},
},
Replace the original "agents": {...} section with:
agents: {
defaults: {
model: {
primary: 'gaterouter/auto',
},
models: {
'gaterouter/auto': {
alias: 'GateRouter Auto',
},
},
},
},
Web console: Click Save in the top right, then Update.
In OpenClaw Chat, send a test message such as "Hello". If configured correctly, the request is sent to the GateRouter API, auto-routed to the best model, and the response is returned.
macOS:
Open Finder, press Command + Shift + G
Enter: ~/.openclaw
Press Enter to see openclaw.json.
Windows:
Path: C:\Users\<YourUsername>\.openclaw\openclaw.json
Add env and set GATEROUTER_API_KEY to your GateRouter API Key:
"env": {
"vars": {
"GATEROUTER_API_KEY": "sk-or-v1-xxxxxxxxxxxxxxxx"
}
},
Add models with baseUrl set to https://api.gaterouter.ai/openai/v1:
"models": {
"mode": "merge",
"providers": {
"gaterouter": {
"baseUrl": "https://api.gaterouter.ai/openai/v1",
"apiKey": "${GATEROUTER_API_KEY}",
"api": "openai-completions",
"models": [
{
"id": "gaterouter/auto",
"name": "GateRouter Auto",
"api": "openai-completions",
"reasoning": false,
"input": ["text"],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 200000,
"maxTokens": 8192
}
]
}
}
},
Replace the original "agents": {...} section with:
"agents": {
"defaults": {
"model": {
"primary": "gaterouter/auto"
},
"models": {
"gaterouter/auto": {
"alias": "GateRouter Auto"
}
}
}
},
After saving the config file, run the following in a terminal to view the file and confirm it is correct:
cat ~/.openclaw/openclaw.json
Run the following in a local terminal to start a CLI conversation:
openclaw tui
Or run the following to use OpenClaw Chat in the browser:
openclaw dashboard
Auto model routing
GateRouter recommends setting primary to gaterouter/auto.
It automatically selects the best model by price, latency, and availability.
Use a specific model
To use a fixed model, set primary to a specific model ID instead, e.g. gaterouter/deepseek/deepseek-v3.2.
Only OpenAI models succeed; other models fail
Models available through GateRouter use the OpenAI-compatible protocol. In the OpenClaw integration settings, set the api field to openai-completions (as in the examples above). If OpenAI-family models work but all others fail, check the api type on the providers entry.
Model not found or empty response
Confirm that the model ID is spelled correctly, that the configured provider name matches what you reference, and that reasoning is set to false.
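The checks above can be scripted. A minimal illustrative sketch, validating an inline config fragment shaped like the examples on this page (in practice you would load ~/.openclaw/openclaw.json; field names are taken from those examples):

```python
import json

# Inline fragment shaped like the openclaw.json examples on this page.
fragment = """
{
  "models": {
    "providers": {
      "gaterouter": {
        "baseUrl": "https://api.gaterouter.ai/openai/v1",
        "api": "openai-completions",
        "models": [{"id": "gaterouter/auto", "reasoning": false}]
      }
    }
  }
}
"""

config = json.loads(fragment)
provider = config["models"]["providers"]["gaterouter"]

# The checks from the troubleshooting notes above.
assert provider["baseUrl"] == "https://api.gaterouter.ai/openai/v1"
assert provider["api"] == "openai-completions"
assert all(m["reasoning"] is False for m in provider["models"])
print("config fragment looks OK")
```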
If you already have Cursor installed, follow these steps to connect GateRouter.
Use the menu in the top-right corner → Settings.

In the left sidebar:

Configure API access:

In Chat, Composer, or Agent, choose your GateRouter model from the model dropdown.

If you already have Claude Code (Anthropic's terminal / IDE coding assistant) installed, follow these steps to connect GateRouter.
Copy your GateRouter API key (it starts with sk-or-v1-) and use it to replace the placeholders below.
Claude Code reads environment variables. We recommend following the official LLM gateway documentation and setting:
| Variable | Value |
|---|---|
| ANTHROPIC_BASE_URL | https://api.gaterouter.ai/anthropic |
| ANTHROPIC_API_KEY | Your GateRouter API key (sk-or-v1-…) |
export ANTHROPIC_BASE_URL="https://api.gaterouter.ai/anthropic"
export ANTHROPIC_API_KEY="sk-or-v1-xxxxxxxxxxxxxxxx"
claude
Append the following to ~/.zshrc or ~/.bashrc:
export ANTHROPIC_BASE_URL="https://api.gaterouter.ai/anthropic"
export ANTHROPIC_API_KEY="sk-or-v1-xxxxxxxxxxxxxxxx"
After running source ~/.zshrc (or opening a new terminal), run claude.
settings.json (recommended)
In user-level or project-level configuration, add an env block (see Claude Code settings for paths), for example in your project directory's .claude/settings.json:
{
"env": {
"ANTHROPIC_BASE_URL": "https://api.gaterouter.ai/anthropic",
"ANTHROPIC_API_KEY": "sk-or-v1-xxxxxxxxxxxxxxxx"
}
}
Security note: Do not commit real keys to a public repository; use your OS secret store or CI secret injection, and keep local secrets in environment variables only.
To temporarily skip the gateway:
env -u ANTHROPIC_BASE_URL -u ANTHROPIC_API_KEY claude
(You must already have Anthropic official credentials or another default provider configured.)
In GateRouter docs, model IDs look like provider/model-name (for example anthropic/claude-sonnet-4.6). They are not identical to Claude Code built-in aliases such as sonnet. Pick one approach:
export ANTHROPIC_MODEL="anthropic/claude-sonnet-4.6"
Or add the same key under env in settings.json.
Map aliases to GateRouter model IDs (Sonnet example):
export ANTHROPIC_DEFAULT_SONNET_MODEL="anthropic/claude-sonnet-4.6"
export ANTHROPIC_DEFAULT_OPUS_MODEL="anthropic/claude-opus-4.6"
export ANTHROPIC_DEFAULT_HAIKU_MODEL="anthropic/claude-haiku-4.5"
Use the Models list in the GateRouter docs as the source of truth for IDs.
/model
To pick a gateway-backed model in the UI, use Claude Code's custom model option (see the official Model configuration - ANTHROPIC_CUSTOM_MODEL_OPTION):
export ANTHROPIC_CUSTOM_MODEL_OPTION="anthropic/claude-sonnet-4.6"
export ANTHROPIC_CUSTOM_MODEL_OPTION_NAME="Sonnet (GateRouter)"
auto
If auto routing is enabled in the dashboard, you can try setting ANTHROPIC_MODEL to auto (same meaning as auto on the OpenAI setup page). If you see errors, switch back to an explicit model ID such as anthropic/claude-sonnet-4.6.
claude
After the session starts, send a simple prompt, for example: In one sentence, introduce yourself.
If you get a normal reply with no auth or routing errors, requests are reaching GateRouter and the selected model is responding.
| Symptom | Likely cause | What to try |
|---|---|---|
| 401 / auth failure | Wrong API key or env not exported | Check ANTHROPIC_API_KEY matches the dashboard key |
| 404 on URL | Base URL points at the OpenAI path | Use https://api.gaterouter.ai/anthropic |
| Model missing / routing error | Bad model ID or model not allowed | Compare with the Models table; check routing and allow lists in the dashboard |
| Still hitting Anthropic directly | Env vars not applied | Confirm settings.json scope; in a new shell run echo $ANTHROPIC_BASE_URL to verify |
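The last two rows above can be checked locally. A small illustrative helper (not part of Claude Code; variable names and the key prefix come from this page):

```python
import os

def check_gateway_env(env):
    """Return a list of problems with the GateRouter gateway variables."""
    problems = []
    if env.get("ANTHROPIC_BASE_URL", "") != "https://api.gaterouter.ai/anthropic":
        problems.append("ANTHROPIC_BASE_URL should be https://api.gaterouter.ai/anthropic")
    if not env.get("ANTHROPIC_API_KEY", "").startswith("sk-or-v1-"):
        problems.append("ANTHROPIC_API_KEY should start with sk-or-v1-")
    return problems

# Check the current shell environment before launching `claude`.
for problem in check_gateway_env(os.environ):
    print(problem)
```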
Run in your terminal:
hermes model
In the menu, choose Custom endpoint and fill in the fields:
| Item | Value |
|---|---|
| API base URL | https://api.gaterouter.ai/openai/v1 |
| API key | Your GateRouter API key |
| Model | auto (recommended for auto routing), or the full model ID from the console (e.g. deepseek/deepseek-v3.2) |
If you are asked for context length, press Enter to leave it blank (Hermes will detect it).
To edit configuration in the browser, run:
hermes dashboard
Send a test message:
hermes chat "Hello"
Success means the request reached GateRouter and smart routing or your chosen model returned a result.
You can also run hermes doctor to verify the connection.
macOS / Linux
~/.hermes/config.yaml: main config (model, provider, base_url, api_key, etc.)
~/.hermes/.env: secrets and env vars (recommended)
Windows
C:\Users\<username>\.hermes\config.yaml
C:\Users\<username>\.hermes\.env
.env (pick one)
Option A (same name as GateRouter)
# GateRouter API key
GATEROUTER_API_KEY=sk-or-v1-xxxxxxxxxxxxxxxxxxxxx
Option B (common Hermes custom-endpoint convention)
If model.api_key is not set for a custom endpoint, Hermes falls back to OPENAI_API_KEY. Put your GateRouter key there:
OPENAI_API_KEY=sk-or-v1-xxxxxxxxxxxxxxxxxxxxx
model in config.yaml
Auto routing (auto)
model:
default: auto
provider: custom
base_url: https://api.gaterouter.ai/openai/v1
api_key: ${GATEROUTER_API_KEY}
If you use Option B, leave api_key blank or remove it so Hermes uses OPENAI_API_KEY.
Hermes expands ${VAR} when loading config (variables must exist in the environment, usually from ~/.hermes/.env).
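The expansion behaves like shell-style substitution. A minimal Python sketch of the idea (Hermes' exact semantics may differ, e.g. around escaping or defaults):

```python
from string import Template

def expand_vars(value, env):
    # Replace ${VAR} using the given environment mapping;
    # Template.substitute raises KeyError if a variable is missing.
    return Template(value).substitute(env)

env = {"GATEROUTER_API_KEY": "sk-or-v1-xxxxxxxxxxxxxxxx"}
print(expand_vars("${GATEROUTER_API_KEY}", env))
```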
Fixed model example
The model ID must match the GateRouter model list.
model:
default: deepseek/deepseek-v3.2
provider: custom
base_url: https://api.gaterouter.ai/openai/v1
api_key: ${GATEROUTER_API_KEY}
After saving, run hermes chat "Hello" to verify the GateRouter connection.
If you need multiple logical routes under one GateRouter key (e.g. one with auto and one fixed to deepseek/deepseek-v3.2), add custom_providers in config.yaml (names: letters, digits, hyphens; e.g. gaterouter-auto):
model:
default: auto
provider: custom
base_url: https://api.gaterouter.ai/openai/v1
api_key: ${GATEROUTER_API_KEY}
custom_providers:
- name: gaterouter-auto
base_url: https://api.gaterouter.ai/openai/v1
api_key: ${GATEROUTER_API_KEY}
model: auto
- name: gaterouter-deepseek
base_url: https://api.gaterouter.ai/openai/v1
api_key: ${GATEROUTER_API_KEY}
model: deepseek/deepseek-v3.2
Run hermes model again and pick the named route or Custom endpoint. You can also use the /model syntax from the Hermes docs, for example:
/model custom:gaterouter-auto:auto
/model custom:gaterouter-deepseek:deepseek/deepseek-v3.2
(Names follow custom_providers[].name; the format is custom:<profile name>:<model id>.)
Only some models work
Confirm model.provider is custom, and Base URL is https://api.gaterouter.ai/openai/v1. If OpenAI-compatible models work but others do not, check model IDs and routing settings.
401 / invalid API key
Check that the key is copied correctly and has not expired; after editing .env, restart any running hermes and hermes gateway processes before retrying.
Model not found or empty reply
Confirm the model ID matches the GateRouter model list (e.g. deepseek/deepseek-v3.2). Run hermes doctor to inspect config and connectivity.
If you already have QClaw installed, follow these steps to connect GateRouter.
1. In the chat, send the message below. Replace the apiKey value with your GateRouter API key.
Help me add a new provider
Name: GateRouter
apiKey: sk-or-v1-xxxxxxxxxxxxxxxx
baseUrl: https://api.gaterouter.ai/openai/v1
Models (you can pass multiple): 1. auto 2. deepseek/deepseek-v3.2
QClaw will add the provider and restart automatically.
Ask: “Help me verify that my GateRouter configuration is working.” The assistant should reply with something like “GateRouter provider was added successfully!” (exact wording may vary).
Ask: “Switch to auto under GateRouter.” The assistant should reply with something like “Switched successfully!” (exact wording may vary).
Click Preferences in the bottom-left, go to Models & API, then click Add custom model.
Click the connection test. If you see “Test successful”, the setup is correct.
Select GateRouter(deepseek-v3.2) to use it.
| Field | Value |
|---|---|
| Base URL | https://api.gaterouter.ai/openai/v1 |
| Auth | Authorization: Bearer <API_KEY> |
| Format | OpenAI-compatible |
| Pricing | Pay-as-you-go |
Note: The API path is /openai/v1 (not /v1).
| Method | Path | Description |
|---|---|---|
| POST | /chat/completions | Chat completions (streaming supported) |
| GET | /models | List available models |
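For example, a GET /models request with the base URL and Bearer auth from the tables above. A sketch using the Python standard library; the request is built but not sent here, and the key is a placeholder:

```python
import urllib.request

BASE_URL = "https://api.gaterouter.ai/openai/v1"

def build_models_request(api_key):
    # GET /models with the Authorization: Bearer header described above.
    return urllib.request.Request(
        BASE_URL + "/models",
        headers={"Authorization": "Bearer " + api_key},
    )

req = build_models_request("sk-or-v1-xxxxxxxxxxxxxxxx")
print(req.full_url)  # note the /openai/v1 path, not /v1
print(req.get_header("Authorization"))
```

Sending it with `urllib.request.urlopen(req)` would return the model list as JSON.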
| Model ID | Description | Use Case |
|---|---|---|
| openai/gpt-5.2 | OpenAI latest | Reasoning tasks |
| openai/gpt-5 | OpenAI general-purpose flagship | General purpose |
| openai/gpt-5-mini | OpenAI lightweight | General / cost optimization |
| openai/gpt-5-nano | OpenAI ultra low cost | Simple tasks |
| openai/gpt-4.1 | OpenAI stable | General purpose |
| openai/gpt-4.1-nano | OpenAI lightweight stable | Simple tasks |
| anthropic/claude-opus-4.6 | Anthropic's most capable | Complex reasoning |
| anthropic/claude-sonnet-4.6 | Anthropic balanced | General purpose |
| anthropic/claude-sonnet-4.5 | Anthropic previous gen | General purpose |
| anthropic/claude-haiku-4.5 | Anthropic fast | Simple tasks |
| google/gemini-3.1-pro | Google latest flagship | Long context / reasoning |
| google/gemini-2.5-pro | Google previous gen flagship | Long context |
| deepseek/deepseek-v3.2 | DeepSeek latest | Cost-effective |
| deepseek/deepseek-v3.1 | DeepSeek previous gen | General purpose |
| x-ai/grok-4 | xAI latest flagship | Reasoning / real-time info |
| x-ai/grok-4.1-fast | xAI high-speed | Fast response |
| moonshotai/kimi-k2.5 | Moonshot strong long-context | Long context |
| z-ai/glm-5 | Z.ai latest | General purpose |
| z-ai/glm-5-turbo | Z.ai coding & reasoning | Multi-scenario |
| z-ai/glm-4.7-flash | Z.ai fast tier | Simple tasks |
| minimax/minimax-m2.5 | MiniMax multimodal | General purpose |
Model ID format: provider/model-name. Version numbers use . (e.g. 4.6), not -.
For more models, visit the Models page.
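A tiny illustrative sanity check of the provider/model-name format (not an official validator; it only checks that both parts are present):

```python
def looks_like_model_id(model_id: str) -> bool:
    # Expect provider/model-name, e.g. anthropic/claude-sonnet-4.6.
    provider, sep, name = model_id.partition("/")
    return sep == "/" and bool(provider) and bool(name)

print(looks_like_model_id("anthropic/claude-sonnet-4.6"))  # True
print(looks_like_model_id("claude-sonnet-4.6"))            # False (no provider)
```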
| Error | Cause | Solution |
|---|---|---|
| auto routing is not enabled | Auto routing not turned on | Open Dashboard → Settings → Routing, then turn on auto routing |
| provider routing is not configured | Wrong model ID format | Open Docs → Models to browse the catalog |
| 404 page not found | Wrong API path | Confirm Base URL is https://api.gaterouter.ai/openai/v1 |
| unsupported parameter: max_tokens | Some models don't support it | Use max_completion_tokens instead |
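The max_tokens workaround in the last row can be applied as a small rewrite of an OpenAI-style request payload before retrying (illustrative sketch; parameter names are from the table above):

```python
def use_max_completion_tokens(payload: dict) -> dict:
    # Rename max_tokens to max_completion_tokens for models that
    # reject the older parameter.
    payload = dict(payload)  # shallow copy; leave the caller's dict intact
    if "max_tokens" in payload:
        payload["max_completion_tokens"] = payload.pop("max_tokens")
    return payload

print(use_max_completion_tokens({"model": "auto", "max_tokens": 256}))
```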