# rival-mcp

MCP server for querying AI model comparison data from rival.tips.
This server lets AI coding assistants — Claude Code, Cursor, Windsurf, and any MCP-compatible client — natively query model benchmarks, pricing, capabilities, and side-by-side comparisons without leaving your editor.
## Quick Start

```bash
npx rival-mcp
```

No API key required. All data is served from the public rival.tips API.
## Configuration

### Claude Code

Add to your `.claude/settings.json` (project-level) or `~/.claude/settings.json` (global):

```json
{
  "mcpServers": {
    "rival": {
      "command": "npx",
      "args": ["-y", "rival-mcp"]
    }
  }
}
```
### Claude Desktop

Add to `~/Library/Application Support/Claude/claude_desktop_config.json` (macOS) or `%APPDATA%\Claude\claude_desktop_config.json` (Windows):

```json
{
  "mcpServers": {
    "rival": {
      "command": "npx",
      "args": ["-y", "rival-mcp"]
    }
  }
}
```
### Cursor

Add to your Cursor MCP settings (`.cursor/mcp.json`):

```json
{
  "mcpServers": {
    "rival": {
      "command": "npx",
      "args": ["-y", "rival-mcp"]
    }
  }
}
```
### Windsurf

Add to your Windsurf MCP config:

```json
{
  "mcpServers": {
    "rival": {
      "command": "npx",
      "args": ["-y", "rival-mcp"]
    }
  }
}
```
## Available Tools

### list-models

List all AI models with optional filtering.

**Parameters:**

| Parameter | Type | Description |
|---|---|---|
| `provider` | string (optional) | Filter by provider: OpenAI, Anthropic, Google, Meta, Mistral, etc. |
| `category` | string (optional) | Filter by category: `flagship`, `reasoning`, `coding`, `small`, `free`, `image-gen` |
| `capability` | string (optional) | Filter by capability: `chat`, `code`, `vision`, `image-gen`, `function-calling` |
| `q` | string (optional) | Free-text search across name, ID, provider, and description |

**Example prompts:**

- "List all Anthropic models"
- "Show me free models"
- "What models support vision?"
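To illustrate how the filters compose, here is a minimal sketch of the filtering semantics against an in-memory list. The `ModelRecord` shape is hypothetical, simplified for illustration — the real rival.tips API schema may differ.

```typescript
// Hypothetical, simplified record shape (not the real API schema).
interface ModelRecord {
  id: string;
  name: string;
  provider: string;
  categories: string[];
  capabilities: string[];
  description: string;
}

// All supplied filters are ANDed together; omitted filters match everything.
function listModels(
  models: ModelRecord[],
  filters: { provider?: string; category?: string; capability?: string; q?: string }
): ModelRecord[] {
  return models.filter((m) => {
    if (filters.provider && m.provider.toLowerCase() !== filters.provider.toLowerCase()) return false;
    if (filters.category && !m.categories.includes(filters.category)) return false;
    if (filters.capability && !m.capabilities.includes(filters.capability)) return false;
    if (filters.q) {
      // Free-text search across name, ID, provider, and description.
      const q = filters.q.toLowerCase();
      const haystack = [m.name, m.id, m.provider, m.description].join(" ").toLowerCase();
      if (!haystack.includes(q)) return false;
    }
    return true;
  });
}
```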
### get-model

Get detailed information about a specific model — benchmarks, pricing, capabilities, unique features, and provider availability.

**Parameters:**

| Parameter | Type | Description |
|---|---|---|
| `id` | string (required) | Model ID, e.g. `gpt-4.1`, `claude-3.7-sonnet`, `gemini-2.5-pro` |

**Example prompts:**

- "Get details on claude-3.7-sonnet"
- "What are the benchmarks for gpt-4.1?"
### compare-models

Compare 2-3 models side by side — benchmarks, pricing, capabilities, and shared challenges.

**Parameters:**

| Parameter | Type | Description |
|---|---|---|
| `models` | string (required) | Comma-separated model IDs (2-3), e.g. `gpt-4.1,claude-3.7-sonnet` |

**Example prompts:**

- "Compare GPT-4.1 vs Claude 3.7 Sonnet"
- "How does Gemini 2.5 Pro stack up against GPT-4.1 and Claude Sonnet?"
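The `models` parameter is a single comma-separated string, so a caller (or the server) needs to split and validate it before use. A sketch of that validation, with the 2-3 ID constraint described above (the function name is illustrative, not part of the server's API):

```typescript
// Sketch: split and validate the comma-separated `models` parameter.
// compare-models accepts between 2 and 3 model IDs.
function parseCompareIds(models: string): string[] {
  const ids = models
    .split(",")
    .map((s) => s.trim())      // tolerate spaces after commas
    .filter((s) => s.length > 0);
  if (ids.length < 2 || ids.length > 3) {
    throw new Error(`compare-models expects 2-3 model IDs, got ${ids.length}`);
  }
  return ids;
}
```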
### search-models

Search for models by name, description, or capability when you don't know the exact model ID.

**Parameters:**

| Parameter | Type | Description |
|---|---|---|
| `query` | string (required) | Search query, e.g. `vision`, `cheap coding`, `fast reasoning` |

**Example prompts:**

- "Find models good at coding"
- "Search for cheap reasoning models"
## Development

```bash
# Install dependencies
npm install

# Run in development mode
npm run dev

# Build for production
npm run build

# Run the built server
npm start
```
## How It Works
This MCP server communicates over stdio (standard input/output) using the Model Context Protocol. When an AI assistant needs model comparison data, it calls the appropriate tool, which fetches data from the rival.tips public API and returns structured JSON.
The server exposes no resources or prompts — only tools. All data is read-only and publicly available.
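For illustration, this is roughly what a tool invocation looks like on the wire. MCP is JSON-RPC 2.0, so a client asks for model details with a `tools/call` request like the following (the `id` value and arguments are example data):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get-model",
    "arguments": { "id": "claude-3.7-sonnet" }
  }
}
```

The server responds with a JSON-RPC result whose content carries the structured data fetched from rival.tips; MCP clients handle this framing for you, so you normally never write these messages by hand.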
## Data Source
All model data comes from rival.tips, an AI model comparison platform featuring:
- 60+ AI models with benchmarks, pricing, and capability data
- Side-by-side comparisons with shared challenge responses
- Community-driven AI duel voting and rankings
- Pre-generated showcase responses across coding, creative, and reasoning tasks
## License
MIT