A unified AI model routing platform. One API key, … models, smart auto-routing.
Auto routing is enabled by default. To control it:
Console → Settings → Routing → Auto routing toggle
Once enabled, GateRouter automatically selects the best model for each request. If you prefer to pick models yourself, skip this step and specify models directly (e.g. anthropic/claude-sonnet-4.6).
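To make the difference concrete, the request body changes only in the model field; a minimal sketch (the chat_payload helper is ours, not part of any SDK):

```python
import json

def chat_payload(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat request body. Pass "auto" to let
    GateRouter route the request, or a concrete ID to pin a model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Auto-routed vs. pinned to a specific model:
print(json.dumps(chat_payload("auto", "how are you?")))
print(json.dumps(chat_payload("anthropic/claude-sonnet-4.6", "how are you?")))
```

Everything else about the request (endpoint, headers, response shape) stays the same either way.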
Fully compatible with the OpenAI API. Supports Python, Node.js, curl, and tools across the ecosystem.
Point your client at the Base URL (https://api.gaterouter.ai/openai/v1) and set your GateRouter API key to start using it.
from openai import OpenAI

client = OpenAI(
    api_key="GATEROUTER_API_KEY",  # get GATEROUTER_API_KEY from gaterouter.ai (API Key)
    base_url="https://api.gaterouter.ai/openai/v1",
)

completion = client.chat.completions.create(
    model="auto",
    messages=[
        {"role": "system", "content": "system prompt"},
        {"role": "user", "content": "how are you?"},
    ],
)

# get the response from LLM (role=assistant)
print(completion.choices[0].message.content)

Example response:

{
"id": "243c850e-214c-431e-977f-ebaf4aa95f56",
"choices": [
{
"index": 0,
"message": {
"role": "assistant",
"content": "Hello! Nice to meet you. How can I help you?"
},
"finish_reason": "stop"
}
],
"created": 1773408946,
"model": "deepseek.v3-v1:0",
"object": "chat.completion",
"usage": {
"prompt_tokens": 5,
"completion_tokens": 15,
"total_tokens": 20
}
}If you already have OpenClaw installed, follow the steps below to connect GateRouter.
In a terminal, run:
openclaw dashboard

The browser will open the console (usually http://127.0.0.1:18789). If the browser does not open automatically, visit that address manually.
Select Config → Raw mode.
Add env and set GATEROUTER_API_KEY to your GateRouter API Key:
env: {
  vars: {
    GATEROUTER_API_KEY: 'sk-or-v1-xxxxxxxxxxxxxxxx',
  },
},

Add models with baseUrl set to https://api.gaterouter.ai/openai/v1:
models: {
  mode: 'merge',
  providers: {
    gaterouter: {
      baseUrl: 'https://api.gaterouter.ai/openai/v1',
      apiKey: '${GATEROUTER_API_KEY}',
      api: 'openai-completions',
      models: [
        {
          id: 'gaterouter/auto',
          name: 'Gaterouter Auto',
          api: 'openai-completions',
          reasoning: false,
          input: ['text'],
          cost: {
            input: 0,
            output: 0,
            cacheRead: 0,
            cacheWrite: 0,
          },
          contextWindow: 200000,
          maxTokens: 8192,
        },
      ],
    },
  },
},

Replace the original "agents": {...} section with:
agents: {
  defaults: {
    model: {
      primary: 'gaterouter/auto',
    },
    models: {
      'gaterouter/auto': {
        alias: 'Gaterouter Auto',
      },
    },
  },
},

Web console: Click Save in the top right, then Update.
In OpenClaw Chat, send a test message such as "Hello". If everything is configured correctly, the request goes to the GateRouter API, is auto-routed to the best model, and a response comes back.
Alternatively, you can edit the config file directly. Locate it as follows:

macOS:
Open Finder, press Command + Shift + G
Enter: ~/.openclaw
Press Enter to see openclaw.json.
Windows:
Path: C:\Users\<YourUsername>\.openclaw\openclaw.json
Add env and set GATEROUTER_API_KEY to your GateRouter API Key:
"env": {
"vars": {
"GATEROUTER_API_KEY": "sk-or-v1-xxxxxxxxxxxxxxxx"
}
},Add models with baseUrl set to https://api.gaterouter.ai/openai/v1:
"models": {
"mode": "merge",
"providers": {
"gaterouter": {
"baseUrl": "https://api.gaterouter.ai/openai/v1",
"apiKey": "${GATEROUTER_API_KEY}",
"api": "openai-completions",
"models": [
{
"id": "gaterouter/auto",
"name": "Gaterouter Auto",
"api": "openai-completions",
"reasoning": false,
"input": ["text"],
"cost": {
"input": 0,
"output": 0,
"cacheRead": 0,
"cacheWrite": 0
},
"contextWindow": 200000,
"maxTokens": 8192
}
]
}
}
},Replace the original "agents": {...}, section with:
"agents": {
"defaults": {
"model": {
"primary": "gaterouter/minimax/minimax-m2.5"
},
"models": {
"gaterouter/auto": {
"alias": "Gaterouter Auto"
}
}
}
},After saving the config file, run the following in a terminal to view the file and confirm it is correct:
cat ~/.openclaw/openclaw.json

Run the following in a local terminal to start a CLI conversation:
openclaw tui

Or run the following to use OpenClaw Chat in the browser:
openclaw dashboard

Auto model routing
GateRouter recommends setting primary to gaterouter/auto, which automatically selects the best model based on price, latency, and availability.
Use a specific model
To pin a fixed model instead, set primary to a specific model ID, e.g. gaterouter/deepseek/deepseek-v3.2.
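For example, the file-based agents block would then look like this (a sketch; the model must also be declared in the provider's models list for it to resolve):

```json
"agents": {
  "defaults": {
    "model": {
      "primary": "gaterouter/deepseek/deepseek-v3.2"
    }
  }
}
```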
Only OpenAI models succeed; other models fail
Models available through GateRouter use the OpenAI-compatible protocol. In the OpenClaw integration settings, set the api field to openai-completions (as in the examples above). If OpenAI-family models work but all others fail, check the api type in your providers entry.
Model not found or empty response
Confirm that the model ID is spelled correctly, that the provider name in the config matches the one you reference, and that reasoning is set to false.
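These checks can be scripted; a sketch that inspects a parsed openclaw.json dict (field names assume the config layout shown in this guide):

```python
def find_config_issues(cfg: dict, provider: str = "gaterouter") -> list:
    """Return a list of likely problems in an openclaw.json config dict
    (an empty list means all checks pass)."""
    issues = []
    prov = cfg.get("models", {}).get("providers", {}).get(provider)
    if prov is None:
        return ["provider '%s' not found under models.providers" % provider]
    if prov.get("api") != "openai-completions":
        issues.append("api should be 'openai-completions'")
    for m in prov.get("models", []):
        if m.get("reasoning") is not False:
            issues.append("%s: reasoning must be set to false" % m.get("id"))
    return issues
```

Load the file with json.load and pass the resulting dict in; anything the function returns corresponds to one of the pitfalls above.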
If you already have QClaw installed, follow these steps to connect GateRouter.
1. In the chat, send the message below. Replace the apiKey value with your GateRouter API key.
Help me add a new provider
Name: GateRouter
apiKey: sk-or-v1-xxxxxxxxxxxxxxxx
baseUrl: https://api.gaterouter.ai/openai/v1
Models (you can pass multiple): 1. auto 2. deepseek/deepseek-v3.2

QClaw will add the provider and restart automatically.
Ask: “Help me verify that my GateRouter configuration is working.” The assistant should reply with something like “GateRouter provider was added successfully!” (exact wording may vary).
Ask: “Switch to auto under GateRouter.” The assistant should reply with something like “Switched successfully!” (exact wording may vary).
Click Preferences in the bottom-left, go to Models & API, then click Add custom model.
Click the connection test. If you see “Test successful”, the setup is correct.
Then select GateRouter (deepseek-v3.2) to use it.

If you already have Cursor installed, follow these steps to connect GateRouter.
Use the menu in the top-right corner → Settings.

In the left sidebar:

Configure API access:

In Chat, Composer, or Agent, choose your GateRouter model from the model dropdown.

| Field | Value |
|---|---|
| Base URL | https://api.gaterouter.ai/openai/v1 |
| Auth | Authorization: Bearer <API_KEY> |
| Format | OpenAI-compatible |
| Pricing | Pay-as-you-go |
Note: The API path is /openai/v1 (not /v1).
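The note above covers the most common integration mistake; a tiny sketch that builds request URLs and headers correctly (the helper names are ours):

```python
BASE_URL = "https://api.gaterouter.ai/openai/v1"

def endpoint(path: str) -> str:
    """Join the base URL and an API path. The base already ends in
    /openai/v1, so never append another /v1."""
    return BASE_URL.rstrip("/") + "/" + path.lstrip("/")

def auth_headers(api_key: str) -> dict:
    """Standard Bearer-token headers for GateRouter requests."""
    return {
        "Authorization": "Bearer " + api_key,
        "Content-Type": "application/json",
    }

print(endpoint("chat/completions"))
# -> https://api.gaterouter.ai/openai/v1/chat/completions
```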
| Method | Path | Description |
|---|---|---|
| POST | /chat/completions | Chat completions (streaming supported) |
| GET | /models | List available models |
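With the OpenAI-compatible protocol, streaming chat completions arrive as server-sent events: each `data:` line carries a JSON chunk whose `delta` holds a piece of the reply, and `data: [DONE]` ends the stream. A minimal parser sketch over simulated lines:

```python
import json

def collect_stream(lines) -> str:
    """Assemble assistant text from OpenAI-style SSE lines."""
    text = []
    for line in lines:
        if not line.startswith("data: "):
            continue  # skip blank lines and comments
        payload = line[len("data: "):]
        if payload == "[DONE]":
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"]
        text.append(delta.get("content") or "")
    return "".join(text)

# Simulated stream (what /chat/completions sends with "stream": true):
sample = [
    'data: {"choices":[{"delta":{"content":"Hel"}}]}',
    'data: {"choices":[{"delta":{"content":"lo!"}}]}',
    "data: [DONE]",
]
print(collect_stream(sample))  # Hello!
```

SDKs such as the openai Python client do this assembly for you when you pass stream=True; the sketch just shows what is on the wire.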
| Model ID | Description | Use Case |
|---|---|---|
| openai/gpt-5.2 | OpenAI latest | Reasoning tasks |
| openai/gpt-5 | OpenAI general-purpose flagship | General purpose |
| openai/gpt-5-mini | OpenAI lightweight | General / cost optimization |
| openai/gpt-5-nano | OpenAI ultra low cost | Simple tasks |
| openai/gpt-4.1 | OpenAI stable | General purpose |
| openai/gpt-4.1-nano | OpenAI lightweight stable | Simple tasks |
| anthropic/claude-opus-4.6 | Anthropic's most capable | Complex reasoning |
| anthropic/claude-sonnet-4.6 | Anthropic balanced | General purpose |
| anthropic/claude-sonnet-4.5 | Anthropic previous gen | General purpose |
| anthropic/claude-haiku-4.5 | Anthropic fast | Simple tasks |
| google/gemini-3.1-pro | Google latest flagship | Long context / reasoning |
| google/gemini-2.5-pro | Google previous gen flagship | Long context |
| deepseek/deepseek-v3.2 | DeepSeek latest | Cost-effective |
| deepseek/deepseek-v3.1 | DeepSeek previous gen | General purpose |
| x-ai/grok-4 | xAI latest flagship | Reasoning / real-time info |
| x-ai/grok-4.1-fast | xAI high-speed | Fast response |
| moonshotai/kimi-k2.5 | Moonshot strong long-context | Long context |
| z-ai/glm-5 | Z.ai latest | General purpose |
| z-ai/glm-5-turbo | Coding & reasoning | Multi-scenario |
| z-ai/glm-4.7-flash | Z.ai fast tier | Simple tasks |
| minimax/minimax-m2.5 | MiniMax multimodal | General purpose |
Model ID format: provider/model-name. Version numbers use . (e.g. 4.6), not -.
For more models, visit the Models page.
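You can also pull the catalog programmatically from the GET /models endpoint; a sketch using only the standard library (the parsing helper assumes the usual OpenAI-style list response, with entries under `data`):

```python
import json
import urllib.request

BASE_URL = "https://api.gaterouter.ai/openai/v1"

def extract_model_ids(payload: dict) -> list:
    """Pull model IDs out of an OpenAI-style list response."""
    return [m["id"] for m in payload.get("data", [])]

def list_models(api_key: str) -> list:
    """Fetch GET /models and return the available model IDs."""
    req = urllib.request.Request(
        BASE_URL + "/models",
        headers={"Authorization": "Bearer " + api_key},
    )
    with urllib.request.urlopen(req) as resp:
        return extract_model_ids(json.load(resp))

# Example (requires a valid key):
# print(list_models("GATEROUTER_API_KEY"))
```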
| Error | Cause | Solution |
|---|---|---|
| auto routing is not enabled | Auto routing not turned on | Open Dashboard → Settings → Routing, then turn on auto routing |
| provider routing is not configured | Wrong model ID format | Open Docs → Models to browse the catalog |
| 404 page not found | Wrong API path | Confirm Base URL is https://api.gaterouter.ai/openai/v1 |
| unsupported parameter: max_tokens | Some models don't support it | Use max_completion_tokens instead |
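The last row can be handled defensively in client code; a sketch of a retry wrapper (the wrapper is ours; `create` stands in for any OpenAI-style chat.completions.create callable):

```python
def create_with_token_fallback(create, **kwargs):
    """Call an OpenAI-style create() and, if the model rejects max_tokens,
    retry once with max_completion_tokens instead."""
    try:
        return create(**kwargs)
    except Exception as err:
        if "max_tokens" in str(err) and "max_tokens" in kwargs:
            kwargs["max_completion_tokens"] = kwargs.pop("max_tokens")
            return create(**kwargs)
        raise
```

If you know in advance which parameter a model accepts, setting the right one directly avoids the extra round trip.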