GateRouter Documentation

    A unified AI model routing platform. One API key, … models, smart auto-routing.


    Getting Started

    1. Create an API key

    1. Go to gaterouter.ai, choose login method, and authorize
    2. Go to Console → Settings → API keys → Create a key

    2. Auto routing (optional)

    Auto routing is enabled by default. To control it:

    Console → Settings → Routing → Auto routing toggle

    Once enabled, GateRouter automatically selects the best model for each request. If you prefer to pick models yourself, skip this step and specify models directly (e.g. anthropic/claude-sonnet-4.6).


    Standard Setup

    Fully compatible with the OpenAI API. Supports Python, Node.js, curl, and tools across the ecosystem.

    Point your client at the base URL ( https://api.gaterouter.ai/openai/v1 ) and supply your GateRouter API key to get started.

    from openai import OpenAI
    
    client = OpenAI(
        api_key="GATEROUTER_API_KEY",  # get GATEROUTER_API_KEY from gaterouter.ai (API Key)
        base_url="https://api.gaterouter.ai/openai/v1",
    )
    
    completion = client.chat.completions.create(
        model="auto",
        messages=[
            {"role": "system", "content": "system prompt"},
            {"role": "user", "content": "how are you?"}
        ],
    )
    
    # get the response from LLM (role=assistant)
    print(completion.choices[0].message.content)

    Response example:

    {
        "id": "243c850e-214c-431e-977f-ebaf4aa95f56",
        "choices": [
            {
                "index": 0,
                "message": {
                    "role": "assistant",
                    "content": "Hello! Nice to meet you. How can I help you?"
                },
                "finish_reason": "stop"
            }
        ],
        "created": 1773408946,
        "model": "deepseek.v3-v1:0",
        "object": "chat.completion",
        "usage": {
            "prompt_tokens": 5,
            "completion_tokens": 15,
            "total_tokens": 20
        }
    }
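    Using the example payload above, the fields you usually need (the assistant reply, the model that auto-routing picked, and token usage) can be pulled out like this:

```python
import json

# Example response body from the chat completion above.
raw = """
{
    "id": "243c850e-214c-431e-977f-ebaf4aa95f56",
    "choices": [
        {"index": 0,
         "message": {"role": "assistant",
                     "content": "Hello! Nice to meet you. How can I help you?"},
         "finish_reason": "stop"}
    ],
    "created": 1773408946,
    "model": "deepseek.v3-v1:0",
    "object": "chat.completion",
    "usage": {"prompt_tokens": 5, "completion_tokens": 15, "total_tokens": 20}
}
"""

resp = json.loads(raw)
answer = resp["choices"][0]["message"]["content"]  # the assistant reply
routed_model = resp["model"]                       # which model handled the request
total_tokens = resp["usage"]["total_tokens"]       # billing-relevant token count

print(answer)
print(routed_model, total_tokens)
```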

    OpenClaw Setup

    If you already have OpenClaw installed, follow the steps below to connect GateRouter.

    Connect GateRouter

    Method 1: Web console

    1. Start the web console

    In a terminal, run:

    openclaw dashboard

    The browser will open the console (usually http://127.0.0.1:18789). If the browser does not open automatically, please visit that address manually.

    2. Go to configuration page

    Select Config → Raw mode.

    3. Add GateRouter configuration

    Add env and set GATEROUTER_API_KEY to your GateRouter API Key:

    env: {
        vars: {
          GATEROUTER_API_KEY: 'sk-or-v1-xxxxxxxxxxxxxxxx',
        },
      },

    Add models with baseUrl set to https://api.gaterouter.ai/openai/v1:

    models: {
      mode: 'merge',
      providers: {
        gaterouter: {
          baseUrl: 'https://api.gaterouter.ai/openai/v1',
          apiKey: '${GATEROUTER_API_KEY}',
          api: 'openai-completions',
          models: [
            {
              id: 'gaterouter/auto',
              name: 'Gaterouter Auto',
              api: 'openai-completions',
              reasoning: false,
              input: ['text'],
              cost: {
                input: 0,
                output: 0,
                cacheRead: 0,
                cacheWrite: 0,
              },
              contextWindow: 200000,
              maxTokens: 8192,
            },
          ],
        },
      },
    },

    Replace the original agents: {...} section with:

    agents: {
      defaults: {
        model: {
          primary: 'gaterouter/auto',
        },
        models: {
          'gaterouter/auto': {
            alias: 'Gaterouter Auto',
          },
        },
      },
    },

    4. Save and apply configuration

    Web console: Click Save in the top right, then Update.

    5. Verify connection

    In OpenClaw Chat, send a test message such as "Hello". If everything is configured correctly, the request goes to the GateRouter API, is auto-routed to the best model, and a response comes back.

    Method 2: Edit config file

    1. Locate the openclaw.json file

    macOS:

    Open Finder, press Command + Shift + G

    Enter: ~/.openclaw

    Press Enter to see openclaw.json.

    Windows:

    Path: C:\Users\<YourUsername>\.openclaw\openclaw.json

    2. Add GateRouter configuration

    Add env and set GATEROUTER_API_KEY to your GateRouter API Key:

    "env": {
      "vars": {
        "GATEROUTER_API_KEY": "sk-or-v1-xxxxxxxxxxxxxxxx"
      }
    },

    Add models with baseUrl set to https://api.gaterouter.ai/openai/v1:

    "models": {
      "mode": "merge",
      "providers": {
        "gaterouter": {
          "baseUrl": "https://api.gaterouter.ai/openai/v1",
          "apiKey": "${GATEROUTER_API_KEY}",
          "api": "openai-completions",
          "models": [
            {
              "id": "gaterouter/auto",
              "name": "Gaterouter Auto",
              "api": "openai-completions",
              "reasoning": false,
              "input": ["text"],
              "cost": {
                "input": 0,
                "output": 0,
                "cacheRead": 0,
                "cacheWrite": 0
              },
              "contextWindow": 200000,
              "maxTokens": 8192
            }
          ]
        }
      }
    },

    Replace the original "agents": {...}, section with:

    "agents": {
      "defaults": {
        "model": {
          "primary": "gaterouter/minimax/minimax-m2.5"
        },
        "models": {
          "gaterouter/auto": {
            "alias": "Gaterouter Auto"
          }
        }
      }
    },

    3. Save and verify configuration

    After saving the config file, run the following in a terminal to view the file and confirm it is correct:

    cat ~/.openclaw/openclaw.json
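    Beyond eyeballing the file, a quick script can check the fields this guide sets. This is an illustrative helper, not an OpenClaw tool; the field names follow the configuration shown above:

```python
import json
import os

def check_gaterouter_config(cfg: dict) -> list:
    """Return a list of problems found in the GateRouter-related fields
    of an openclaw.json config dict (empty list means it looks good)."""
    problems = []
    key = cfg.get("env", {}).get("vars", {}).get("GATEROUTER_API_KEY", "")
    if not key.startswith("sk-or-v1-"):
        problems.append("GATEROUTER_API_KEY missing or not an sk-or-v1- key")
    provider = cfg.get("models", {}).get("providers", {}).get("gaterouter", {})
    if provider.get("baseUrl") != "https://api.gaterouter.ai/openai/v1":
        problems.append("gaterouter baseUrl is not https://api.gaterouter.ai/openai/v1")
    if provider.get("api") != "openai-completions":
        problems.append("provider api should be openai-completions")
    primary = cfg.get("agents", {}).get("defaults", {}).get("model", {}).get("primary", "")
    if not primary.startswith("gaterouter/"):
        problems.append("agents.defaults.model.primary does not reference a gaterouter model")
    return problems

# Usage:
# cfg = json.load(open(os.path.expanduser("~/.openclaw/openclaw.json")))
# print(check_gaterouter_config(cfg) or "config looks good")
```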

    4. Verify connection

    Run the following in a local terminal to start a CLI conversation:

    openclaw tui

    Or run the following to use OpenClaw Chat in the browser:

    openclaw dashboard

    Optional configuration

    Auto model routing

    GateRouter recommends setting primary to gaterouter/auto, which automatically selects the best model by price, latency, and availability.

    Use a specific model

    To pin a specific model instead, set primary to its full ID, e.g. gaterouter/deepseek/deepseek-v3.2.

    FAQ

    1. Only OpenAI models succeed; other models fail

      Models available through GateRouter use the OpenAI-compatible protocol. In the OpenClaw provider settings, set the api field to openai-completions (as in the examples above). If OpenAI-family models work but all others fail, the api type in the providers entry is the usual culprit.

    2. Model not found or empty response

      Confirm that the model ID is spelled correctly, that the provider name you reference matches the configured one, and that reasoning is set to false.


    Cursor Setup

    If you already have Cursor installed, follow these steps to connect GateRouter.

    1. Open Cursor Settings

    Use the menu in the top-right corner → Settings.

    [Screenshot placeholder: opening Cursor Settings from the top-right menu]

    2. Models

    In the left sidebar:

    • Open Models.
    • Choose View All Models, scroll to the bottom, then Add Custom Model.
    • Enter the specific model ID, e.g. deepseek/deepseek-v3.2. Do not use auto.
    [Screenshot placeholder: Models list, View All Models, and Add Custom Model]

    3. Add GateRouter

    Configure API access:

    • Expand API Keys.
    • Paste your GateRouter API key.
    • Set Base URL to https://api.gaterouter.ai/openai/v1
    [Screenshot placeholder: API Keys and Base URL fields]

    4. Save and close the Settings page.

    5. Use GateRouter in Cursor

    In Chat, Composer, or Agent, choose your GateRouter model from the model dropdown.

    [Screenshot placeholder: selecting a GateRouter model in the Chat / Composer / Agent dropdown]

    Claude Code Setup

    If you already have Claude Code (Anthropic's terminal / IDE coding assistant) installed, follow these steps to connect GateRouter.

    1. Create a GateRouter API key

    1. Go to Dashboard → Settings → API Keys and create a key.
    2. Copy the key that starts with sk-or-v1- and use it to replace the placeholders below.
    3. For auto routing: go to Dashboard → Settings → Routing and enable Auto routing. When it is off, specify a model ID explicitly in each request.

    2. Configure Anthropic Base URL and API key

    Claude Code reads environment variables. We recommend following the official LLM gateway documentation and setting:

    Variable              Value
    ANTHROPIC_BASE_URL    https://api.gaterouter.ai/anthropic
    ANTHROPIC_API_KEY     Your GateRouter API key (sk-or-v1-…)

    Option A: Current terminal session (temporary)

    export ANTHROPIC_BASE_URL="https://api.gaterouter.ai/anthropic"
    export ANTHROPIC_API_KEY="sk-or-v1-xxxxxxxxxxxxxxxx"
    claude

    Option B: Shell profile

    Append the following to ~/.zshrc or ~/.bashrc:

    export ANTHROPIC_BASE_URL="https://api.gaterouter.ai/anthropic"
    export ANTHROPIC_API_KEY="sk-or-v1-xxxxxxxxxxxxxxxx"

    After running source ~/.zshrc (or opening a new terminal), run claude.

    Option C: Claude Code settings.json (recommended)

    In user-level or project-level configuration, add an env block (see Claude Code settings for paths), for example in your project directory .claude/settings.json:

    {
      "env": {
        "ANTHROPIC_BASE_URL": "https://api.gaterouter.ai/anthropic",
        "ANTHROPIC_API_KEY": "sk-or-v1-xxxxxxxxxxxxxxxx"
      }
    }
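    If you script this setup, merge the env block instead of overwriting the file so existing settings survive. A sketch (the helper name is illustrative; file paths follow the Claude Code settings layout described above):

```python
import json
from pathlib import Path

def set_gateway_env(settings_path: str, base_url: str, api_key: str) -> dict:
    """Merge the GateRouter env vars into a Claude Code settings.json,
    preserving any existing keys. Creates the file if it does not exist."""
    path = Path(settings_path)
    settings = json.loads(path.read_text()) if path.exists() else {}
    env = settings.setdefault("env", {})
    env["ANTHROPIC_BASE_URL"] = base_url
    env["ANTHROPIC_API_KEY"] = api_key
    path.parent.mkdir(parents=True, exist_ok=True)
    path.write_text(json.dumps(settings, indent=2) + "\n")
    return settings

# Usage (project-level config):
# set_gateway_env(".claude/settings.json",
#                 "https://api.gaterouter.ai/anthropic",
#                 "sk-or-v1-xxxxxxxxxxxxxxxx")
```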

    Security note: Do not commit real keys to a public repository; use your OS secret store or CI secret injection, and keep local secrets in environment variables only.

    Bypass the gateway and use Anthropic directly

    To temporarily skip the gateway:

    env -u ANTHROPIC_BASE_URL -u ANTHROPIC_API_KEY claude

    (You must already have Anthropic official credentials or another default provider configured.)

    3. Configure models (GateRouter model IDs)

    In GateRouter docs, model IDs look like provider/model-name (for example anthropic/claude-sonnet-4.6). They are not identical to Claude Code built-in aliases such as sonnet. Pick one approach:

    3.1 Default model via environment variable

    export ANTHROPIC_MODEL="anthropic/claude-sonnet-4.6"

    Or add the same key under env in settings.json.

    3.2 Alias mapping

    Map aliases to GateRouter model IDs (Sonnet example):

    export ANTHROPIC_DEFAULT_SONNET_MODEL="anthropic/claude-sonnet-4.6"
    export ANTHROPIC_DEFAULT_OPUS_MODEL="anthropic/claude-opus-4.6"
    export ANTHROPIC_DEFAULT_HAIKU_MODEL="anthropic/claude-haiku-4.5"

    Use the GateRouter docs — Models list as the source of truth for IDs.

    3.3 Custom entry in /model

    To pick a gateway-backed model in the UI, use Claude Code's custom model option (see the official Model configuration - ANTHROPIC_CUSTOM_MODEL_OPTION):

    export ANTHROPIC_CUSTOM_MODEL_OPTION="anthropic/claude-sonnet-4.6"
    export ANTHROPIC_CUSTOM_MODEL_OPTION_NAME="Sonnet (GateRouter)"

    3.4 Auto-routing model auto

    If auto routing is enabled in the dashboard, you can try setting ANTHROPIC_MODEL to auto (same meaning as auto on the OpenAI setup page). If you see errors, switch back to an explicit model ID such as anthropic/claude-sonnet-4.6.

    4. Verify the setup

    1. In a terminal with the environment configured, run:

    claude

    2. After the session starts, send a simple prompt, for example: In one sentence, introduce yourself.

    3. If you get a normal reply with no auth or routing errors, requests are reaching GateRouter and the selected model is responding.

    5. FAQ

    Symptom                           Likely cause                          What to try
    401 / auth failure                Wrong API key or env not exported     Check ANTHROPIC_API_KEY matches the dashboard key
    404 on URL                        Base URL points at the OpenAI path    Use https://api.gaterouter.ai/anthropic
    Model missing / routing error     Bad model ID or model not allowed     Compare with the Models table; check routing and allow lists in the dashboard
    Still hitting Anthropic directly  Env vars not applied                  Confirm settings.json scope; in a new shell run echo $ANTHROPIC_BASE_URL to verify


    Hermes Setup

    Prerequisites

    • Create an API key in the GateRouter Console.
    • If you use auto routing, enable it under Settings → Routing in the console.

    Option 1: Terminal setup

    1. Choose model and custom endpoint

    Run in your terminal:

    hermes model

    In the menu, choose Custom endpoint and fill in the fields:

    Item          Value
    API base URL  https://api.gaterouter.ai/openai/v1
    API key       Your GateRouter API key
    Model         auto (recommended for auto routing), or the full model ID from the console (e.g. deepseek/deepseek-v3.2)

    If you are asked for context length, press Enter to leave it blank (Hermes will detect it).

    2. (Optional) Local web dashboard

    To edit configuration in the browser, run:

    hermes dashboard

    3. Verify

    hermes chat "Hello"

    Success means the request reached GateRouter and smart routing or your chosen model returned a result.

    You can also run hermes doctor to verify the connection.

    Option 2: Edit configuration files

    1. File locations

    macOS / Linux

    • ~/.hermes/config.yaml: main config (model, provider, base_url, api_key, etc.)
    • ~/.hermes/.env: secrets and env vars (recommended)

    Windows

    • C:\Users\<username>\.hermes\config.yaml
    • C:\Users\<username>\.hermes\.env

    2. Save the API key in .env (pick one)

    Option A (same name as GateRouter)

    # GateRouter API key
    GATEROUTER_API_KEY=sk-or-v1-xxxxxxxxxxxxxxxxxxxxx

    Option B (common Hermes custom-endpoint convention)

    If model.api_key is not set for a custom endpoint, Hermes falls back to OPENAI_API_KEY. Put your GateRouter key there:

    OPENAI_API_KEY=sk-or-v1-xxxxxxxxxxxxxxxxxxxxx

    3. Configure model in config.yaml

    Auto routing (auto)

    model:
      default: auto
      provider: custom
      base_url: https://api.gaterouter.ai/openai/v1
      api_key: ${GATEROUTER_API_KEY}

    If you use Option B, leave api_key blank or remove it so Hermes uses OPENAI_API_KEY.

    Hermes expands ${VAR} when loading config (variables must exist in the environment, usually from ~/.hermes/.env).
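    Hermes's own expansion code isn't shown here, but ${VAR} substitution of the kind described behaves roughly like Python's string.Template (a sketch, not Hermes source):

```python
import os
from string import Template

def expand_config_value(value, env=None):
    """Expand ${VAR} references against the environment, the way a config
    loader resolves api_key: ${GATEROUTER_API_KEY}. Raises KeyError if the
    variable is unset (a common failure when ~/.hermes/.env was not loaded)."""
    return Template(value).substitute(env if env is not None else os.environ)
```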

    Fixed model example

    The model ID must match the GateRouter model list.

    model:
      default: deepseek/deepseek-v3.2
      provider: custom
      base_url: https://api.gaterouter.ai/openai/v1
      api_key: ${GATEROUTER_API_KEY}

    4. Verify after saving

    After saving, run hermes chat "Hello" to verify the GateRouter connection.

    Multiple routes / models

    If you need multiple logical routes under one GateRouter key (e.g. one with auto and one fixed to deepseek/deepseek-v3.2), add custom_providers in config.yaml (names: letters, digits, hyphens; e.g. gaterouter-auto):

    model:
      default: auto
      provider: custom
      base_url: https://api.gaterouter.ai/openai/v1
      api_key: ${GATEROUTER_API_KEY}
    
    custom_providers:
      - name: gaterouter-auto
        base_url: https://api.gaterouter.ai/openai/v1
        api_key: ${GATEROUTER_API_KEY}
        model: auto
    
      - name: gaterouter-deepseek
        base_url: https://api.gaterouter.ai/openai/v1
        api_key: ${GATEROUTER_API_KEY}
        model: deepseek/deepseek-v3.2

    How to switch

    1. Terminal: run hermes model again and pick the named route or Custom endpoint.
    2. In chat (TUI): use the /model syntax from the Hermes docs, for example:
    /model custom:gaterouter-auto:auto
    /model custom:gaterouter-deepseek:deepseek/deepseek-v3.2

    (Names follow custom_providers[].name; format is custom:<profile name>:<model id>.)
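    Assuming the custom:&lt;profile name&gt;:&lt;model id&gt; shape above, a small parser (illustrative, not Hermes code) shows how such a spec decomposes, including model IDs that contain slashes:

```python
def parse_model_spec(spec):
    """Split a /model argument of the form custom:<profile>:<model id>.
    The model id may itself contain '/' (e.g. deepseek/deepseek-v3.2)."""
    kind, profile, model = spec.split(":", 2)
    if kind != "custom":
        raise ValueError(f"expected a custom: spec, got {kind!r}")
    return kind, profile, model
```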

    FAQ

    Only some models work

    Confirm model.provider is custom, and Base URL is https://api.gaterouter.ai/openai/v1. If OpenAI-compatible models work but others do not, check model IDs and routing settings.

    401 / invalid API key

    Check that the key was copied correctly and has not expired; after editing .env, restart any running hermes and hermes gateway processes before retrying.

    Model not found or empty reply

    Check that:

    • the model ID matches the console (e.g. deepseek/deepseek-v3.2);
    • auto routing is enabled in the GateRouter console.

    Run hermes doctor to inspect config and connectivity.

    QClaw Setup

    If you already have QClaw installed, follow these steps to connect GateRouter.

    Configure in chat

    1. In the chat, send the message below. Replace the apiKey value with your GateRouter API key.

    Help me add a new provider
    Name: GateRouter
    apiKey: sk-or-v1-xxxxxxxxxxxxxxxx
    baseUrl: https://api.gaterouter.ai/openai/v1
    Models (you can pass multiple): 1. auto  2. deepseek/deepseek-v3.2

    QClaw will add the provider and restart automatically.

    2. Verify

    Ask: “Help me verify that my GateRouter configuration is working.” The assistant should reply with something like “GateRouter provider was added successfully!” (exact wording may vary).

    3. Switch to GateRouter

    Ask: “Switch to auto under GateRouter.” The assistant should reply with something like “Switched successfully!” (exact wording may vary).


    AutoClaw Setup

    1. Open the configuration entry

    Click Preferences in the bottom-left, go to Models & API, then click Add custom model.

    2. Add a model

    • Set the provider to Custom.
    • Enter a GateRouter-supported model ID, e.g. deepseek/deepseek-v3.2.
    • Enter a display name, e.g. GateRouter(deepseek-v3.2).
    • Enter your API Key, e.g. sk-or-v1-xxxxxxxxxxxxxxxx.
    • Base URL: https://api.gaterouter.ai/openai/v1

    3. Test the configuration

    Click the connection test. If you see “Test successful”, the setup is correct.

    4. Use the model

    • Click Add. After it saves, return to the app.
    • Below the chat input, switch the model to your configured GateRouter(deepseek-v3.2) to use it.

    API Reference

    Field     Value
    Base URL  https://api.gaterouter.ai/openai/v1
    Auth      Authorization: Bearer <API_KEY>
    Format    OpenAI-compatible
    Pricing   Pay-as-you-go

    Note: The API path is /openai/v1 (not /v1).

    Endpoints

    Method  Path               Description
    POST    /chat/completions  Chat completions (streaming supported)
    GET     /models            List available models
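    Putting the base URL and auth header together, a models listing call can be sketched with the standard library. Only request construction runs here; the actual network call is commented out so you can supply a real key first:

```python
import urllib.request

BASE_URL = "https://api.gaterouter.ai/openai/v1"
API_KEY = "GATEROUTER_API_KEY"  # replace with your real key

# Note the /openai/v1 prefix: the endpoint is /openai/v1/models, not /v1/models.
req = urllib.request.Request(
    f"{BASE_URL}/models",
    headers={"Authorization": f"Bearer {API_KEY}"},
)

# Uncomment to actually call the API:
# import json
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```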

    Models

    Model ID                     Description                      Use Case
    openai/gpt-5.2               OpenAI latest                    Reasoning tasks
    openai/gpt-5                 OpenAI general-purpose flagship  General purpose
    openai/gpt-5-mini            OpenAI lightweight               General / cost optimization
    openai/gpt-5-nano            OpenAI ultra low cost            Simple tasks
    openai/gpt-4.1               OpenAI stable                    General purpose
    openai/gpt-4.1-nano          OpenAI lightweight stable        Simple tasks
    anthropic/claude-opus-4.6    Anthropic's most capable         Complex reasoning
    anthropic/claude-sonnet-4.6  Anthropic balanced               General purpose
    anthropic/claude-sonnet-4.5  Anthropic previous gen           General purpose
    anthropic/claude-haiku-4.5   Anthropic fast                   Simple tasks
    google/gemini-3.1-pro        Google latest flagship           Long context / reasoning
    google/gemini-2.5-pro        Google previous gen flagship     Long context
    deepseek/deepseek-v3.2       DeepSeek latest                  Cost-effective
    deepseek/deepseek-v3.1       DeepSeek previous gen            General purpose
    x-ai/grok-4                  xAI latest flagship              Reasoning / real-time info
    x-ai/grok-4.1-fast           xAI high-speed                   Fast response
    moonshotai/kimi-k2.5         Moonshot strong long-context     Long context
    z-ai/glm-5                   Z.ai latest                      General purpose
    z-ai/glm-5-turbo             Coding & reasoning               Multi-scenario
    z-ai/glm-4.7-flash           Z.ai fast tier                   Simple tasks
    minimax/minimax-m2.5         MiniMax multimodal               General purpose

    Model ID format: provider/model-name. Version numbers use . (e.g. 4.6), not -.

    For more models, visit the Models page.
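    The provider/model-name convention can be checked mechanically before a request. The pattern below is inferred from the table above, not an official grammar; note that auto is a special routing value, not a provider/model ID:

```python
import re

# Inferred from the model table above, not an official grammar.
MODEL_ID = re.compile(r"^[a-z0-9-]+/[a-z0-9][a-z0-9.-]*$")

def looks_like_model_id(model_id):
    """True for provider/model-name IDs like anthropic/claude-sonnet-4.6.
    The special value 'auto' deliberately does not match."""
    return bool(MODEL_ID.fullmatch(model_id))
```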


    Troubleshooting

    Error                               Cause                         Solution
    auto routing is not enabled         Auto routing not turned on    Open Dashboard → Settings → Routing, then turn on auto routing
    provider routing is not configured  Wrong model ID format         Open Docs → Models to browse the catalog
    404 page not found                  Wrong API path                Confirm Base URL is https://api.gaterouter.ai/openai/v1
    unsupported parameter: max_tokens   Some models don't support it  Use max_completion_tokens instead
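    For the last row, the substitution is mechanical; a small helper (the name is illustrative, not part of any SDK) shows it applied to a request-params dict:

```python
def fix_max_tokens(params):
    """Return a copy of request params with max_tokens renamed to
    max_completion_tokens, for models that reject the older name."""
    fixed = dict(params)
    if "max_tokens" in fixed and "max_completion_tokens" not in fixed:
        fixed["max_completion_tokens"] = fixed.pop("max_tokens")
    return fixed
```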