Everything you need to integrate GateRouter into your application.
The fastest way to get started depends on your setup.
Already using OpenClaw? One command gets you auto-routing, skills store, and x402 payments — zero config.
```bash
# Step 1: Install the GateRouter skill
npx openclaw add gaterouter
```

```bash
# Step 2: Set your API key in .env
GATEROUTER_API_KEY=your-gaterouter-api-key
```

```ts
// Step 3: Use it in your agent — routing, skills, x402 all work automatically
import { Agent } from "openclaw";

const agent = new Agent({
  skills: ["gaterouter"], // That's it! Auto-routing enabled.
});

const result = await agent.run("Analyze BTC market trends");
console.log(result.output);
```

The skill manifest is bundled automatically when you install. It acts as the single source of truth for provider configuration:

```
your-project/
├── skills/
│   └── gaterouter/
│       ├── skill.md      # Manifest — provider, routing, x402
│       ├── index.ts      # Skill entry point
│       └── config.json   # Overridable settings
└── .env                  # GATEROUTER_API_KEY lives here
```
If your SDK supports OpenAI, it works with GateRouter. No special client or wrapper needed.
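To see what "OpenAI-compatible" means on the wire, here is a minimal sketch of the request any such SDK sends under the hood, using only `fetch`. The `buildChatRequest` helper is illustrative, not part of any SDK:

```typescript
// Illustrative helper: build the OpenAI-compatible chat request by hand.
function buildChatRequest(apiKey: string, model: string, prompt: string) {
  return {
    url: "https://api.gaterouter.ai/v1/chat/completions",
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`, // standard bearer auth
      },
      body: JSON.stringify({
        model, // "auto" enables smart routing
        messages: [{ role: "user", content: prompt }],
      }),
    },
  };
}

// Usage:
// const { url, init } = buildChatRequest(process.env.GATEROUTER_API_KEY!, "auto", "Hi");
// const res = await fetch(url, init);
```

Any client that can point its base URL at `https://api.gaterouter.ai/v1` produces an equivalent request.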
Build AI agents with smart auto-routing. Set `model: "auto"` to let GateRouter pick the best model.

```ts
import { Agent } from "openclaw";

const agent = new Agent({
  llm: {
    provider: "openai-compatible",
    baseURL: "https://api.gaterouter.ai/v1",
    apiKey: process.env.GATEROUTER_API_KEY,
    model: "auto", // Smart router picks the best model
  },
  skills: ["web-search", "code-interpreter"],
});

const result = await agent.run(
  "Analyze the latest BTC market trends"
);
console.log(result.output);
```

Use GateRouter as a drop-in replacement for OpenAI in LangChain.
```python
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    base_url="https://api.gaterouter.ai/v1",
    api_key="your-gaterouter-api-key",
    model="auto",
)

response = llm.invoke("What's the weather in Tokyo?")
print(response.content)
```

Tip: Setting `model: "auto"` enables GateRouter's smart routing, which picks the best model based on the task, your feedback history, and cost optimization.
Base URL: `https://api.gaterouter.ai/v1`

Include your API key in the Authorization header:

```
Authorization: Bearer your-gaterouter-api-key
```

| Endpoint | Description |
| --- | --- |
| `/v1/chat/completions` | Chat completions (streaming supported) |
| `/v1/models` | List available models |
| `/v1/embeddings` | Text embeddings |

GateRouter supports optional headers for advanced routing and features.

| Header | Example | Description |
| --- | --- | --- |
| `x-skills` | `web-search` | Enable specific skills (comma-separated) |
| `x-402-enabled` | `true` | Enable x402 on-chain payment flow |
| `x-route-preference` | `cost` \| `speed` \| `quality` | Hint for model routing preference |
| `x-gaterouter-title` | `MyApp` | App name for analytics (optional) |
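If you set these headers in several places, a small helper keeps them consistent. `gateHeaders` below is an illustrative sketch, not part of the GateRouter SDK:

```typescript
type RoutePreference = "cost" | "speed" | "quality";

// Illustrative helper: compose the optional GateRouter headers in one place.
function gateHeaders(opts: {
  skills?: string[];
  x402?: boolean;
  route?: RoutePreference;
  title?: string;
}): Record<string, string> {
  const headers: Record<string, string> = {};
  if (opts.skills?.length) headers["x-skills"] = opts.skills.join(","); // comma-separated
  if (opts.x402) headers["x-402-enabled"] = "true";
  if (opts.route) headers["x-route-preference"] = opts.route;
  if (opts.title) headers["x-gaterouter-title"] = opts.title;
  return headers;
}

// gateHeaders({ skills: ["web-search"], route: "speed" })
// → { "x-skills": "web-search", "x-route-preference": "speed" }
```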
```ts
// Example: Enable web search skill with speed preference
const response = await client.chat.completions.create({
  model: "auto",
  messages: [{ role: "user", content: "What happened today?" }],
}, {
  headers: {
    "x-skills": "web-search",
    "x-route-preference": "speed",
  },
});
```

x402 is an HTTP-native payment protocol that lets AI agents pay for API calls using on-chain stablecoins — no pre-funded account needed.
```ts
// x402 payment flow
const response = await fetch("https://api.gaterouter.ai/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "x-402-enabled": "true",
  },
  body: JSON.stringify({
    model: "gpt-4o",
    messages: [{ role: "user", content: "Hello!" }],
  }),
});

if (response.status === 402) {
  const paymentDetails = await response.json();
  // { network, token, amount, recipient, ... }
  // Sign tx → retry with proof header
}
```

Extend your API with plugins and MCP tool integrations. Enable them in your dashboard or per-request via headers.
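The sign-and-retry step from the x402 flow above can be sketched as a pair of helpers. The challenge fields follow the example's comment; the `X-PAYMENT` proof header follows the x402 spec's convention but should be treated as an assumption here:

```typescript
// Shape of the 402 challenge body, per the fields shown in the example above.
interface X402Challenge {
  network: string;
  token: string;
  amount: string;
  recipient: string;
}

// Illustrative: pull the payment details out of the 402 response body.
function parseChallenge(body: unknown): X402Challenge {
  const d = body as Record<string, string>;
  return { network: d.network, token: d.token, amount: d.amount, recipient: d.recipient };
}

// Illustrative: attach the signed payment proof for the retry.
// "X-PAYMENT" is the x402 spec's header name; verify against the spec you target.
function retryHeaders(original: Record<string, string>, proof: string): Record<string, string> {
  return { ...original, "X-PAYMENT": proof };
}
```

After signing a transaction for `amount` of `token` to `recipient` on `network`, the agent repeats the original request with `retryHeaders(...)`.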
- Augment responses with real-time web search results
- Auto-fix malformed JSON responses from LLMs
- Official Gate exchange skills — trading, market data, portfolio
Enable via request header:

```ts
// Enable web search skill
const response = await client.chat.completions.create({
  model: "gpt-4o",
  messages: [{ role: "user", content: "What happened today?" }],
}, {
  headers: { "x-skills": "web-search" },
});
```