Models
One API for model discovery and capability metadata
TheRouter.ai publishes model metadata through a standardized Models API. Use it to find supported modalities, context lengths, pricing, and parameters before sending traffic.
Model naming convention
Model IDs use a consistent provider/model-name format, so your routing and analytics stay portable across providers.
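Because the format is fixed, a client can recover the provider prefix with a plain string split; a minimal sketch (the function name is illustrative, not part of the API):

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    """Split a provider/model-name ID into (provider, model_name)."""
    provider, _, name = model_id.partition("/")
    return provider, name

# e.g. split_model_id("anthropic/claude-sonnet-4.5")
# -> ("anthropic", "claude-sonnet-4.5")
```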
anthropic/claude-sonnet-4.5
openai/gpt-4o
google/gemini-2.5-pro
mistralai/mistral-large

Listing models

Query /models to retrieve all available models and metadata. You can also filter on the web UI and keep systems synchronized using the RSS endpoint.
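The same call can be made from Python's standard library. This sketch assumes the response wraps the model list in a `data` field (a common envelope, but not confirmed by the schema shown here); the capability helper reads the `architecture` block documented below:

```python
import json
import urllib.request

API_URL = "https://api.therouter.ai/v1/models"

def fetch_models(api_key: str) -> list[dict]:
    # Assumes a {"data": [...model objects...]} envelope; adjust if it differs.
    req = urllib.request.Request(
        API_URL, headers={"Authorization": f"Bearer {api_key}"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["data"]

def supports_image_input(model: dict) -> bool:
    # Capability check against the architecture block of a model object.
    return "image" in model["architecture"]["input_modalities"]
```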
curl https://api.therouter.ai/v1/models \
  -H "Authorization: Bearer $THEROUTER_API_KEY"

Core schema
Each model object includes identity fields, context limits, architecture capabilities, and pricing data.
{
"id": "google/gemini-2.5-pro",
"name": "Gemini 2.5 Pro",
"context_length": 1048576,
"architecture": {
"input_modalities": ["text", "image", "audio", "video", "file"],
"output_modalities": ["text"],
"tokenizer": "sentencepiece",
"instruct_type": null
},
"pricing": {
"prompt": "0.00000125",
"completion": "0.00000500",
"request": "0",
"image": "0",
"web_search": "0"
},
"supported_parameters": ["temperature", "top_p", "tools", "response_format"]
}

Pricing units
Pricing fields are decimal strings denominated in USD per token (or per unit, for fields like request and image). For human-readable display, convert them to per-million-token rates.
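A minimal conversion helper; `Decimal` is used because the prices arrive as strings and binary floats would introduce rounding noise at these magnitudes:

```python
from decimal import Decimal

def per_million(usd_per_token: str) -> Decimal:
    """Convert a per-token price string to a USD-per-1M-tokens rate."""
    return Decimal(usd_per_token) * 1_000_000

# per_million("0.00000125") -> Decimal equal to 1.25 (USD per 1M tokens)
```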
prompt: 0.00000125 USD/token -> $1.25 per 1M input tokens
completion: 0.00000500 USD/token -> $5.00 per 1M output tokens

Context lengths and provider limits
Use context_length and provider metadata to preflight request sizes and avoid token overflow failures in production.
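A minimal preflight sketch. The token counts themselves must come from your tokenizer of choice; this only checks that the prompt plus the output budget fits inside the model's advertised window:

```python
def fits_context(prompt_tokens: int, max_output_tokens: int,
                 context_length: int) -> bool:
    """True if the request fits the model's context window.

    Reserves max_output_tokens alongside the prompt, since both
    input and output consume the same window.
    """
    return prompt_tokens + max_output_tokens <= context_length

# For gemini-2.5-pro (context_length 1048576), a 1,000,000-token prompt
# with a 40,000-token output budget still fits.
```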