Query GPT 5.2, Claude Opus 4.5, Gemini 3, Grok 4.1 simultaneously for AI perspectives
Get unstuck faster. Query GPT 5.2, Claude Opus 4.5, Gemini 3, and Grok 4.1 simultaneously — one API call, four expert opinions.
Stop copy-pasting between ChatGPT, Claude, and Gemini. Get all their perspectives in your IDE with one request.
| Metric | Result |
|---|---|
| SWE-bench Verified | 74.6% Resolve@2 |
| Cost vs Claude Opus | 62% lower |
| Response time | 10-40 seconds |
"Different models have different blind spots. Combining their perspectives eliminates yours."
| Model | Provider | Strengths |
|---|---|---|
| GPT 5.2 | OpenAI | Reasoning, code generation |
| Claude Opus 4.5 | Anthropic | Analysis, nuanced thinking |
| Gemini 3 Pro | Google | Multimodal, large context |
| Grok 4.1 | xAI | Real-time knowledge, directness |
Generate an MCP token at polydev.ai/dashboard/mcp-tokens:
| Tier | Messages/Month | Price |
|---|---|---|
| Free | 1,000 | $0 |
| Pro | 10,000 | $19/mo |
```shell
claude mcp add polydev -- npx -y polydev-ai@latest
```

Then set your token:

```shell
export POLYDEV_USER_TOKEN="pd_your_token_here"
```

Or add to `~/.claude.json`:
```json
{
  "mcpServers": {
    "polydev": {
      "command": "npx",
      "args": ["-y", "polydev-ai@latest"],
      "env": {
        "POLYDEV_USER_TOKEN": "pd_your_token_here"
      }
    }
  }
}
```

Add to `~/.cursor/mcp.json`:
```json
{
  "mcpServers": {
    "polydev": {
      "command": "npx",
      "args": ["-y", "polydev-ai@latest"],
      "env": {
        "POLYDEV_USER_TOKEN": "pd_your_token_here"
      }
    }
  }
}
```

Add to your MCP configuration:
```json
{
  "mcpServers": {
    "polydev": {
      "command": "npx",
      "args": ["-y", "polydev-ai@latest"],
      "env": {
        "POLYDEV_USER_TOKEN": "pd_your_token_here"
      }
    }
  }
}
```

Add to `~/.codex/config.toml`:
```toml
[mcp_servers.polydev]
command = "npx"
args = ["-y", "polydev-ai@latest"]

[mcp_servers.polydev.env]
POLYDEV_USER_TOKEN = "pd_your_token_here"

[mcp_servers.polydev.timeouts]
tool_timeout = 180
session_timeout = 600
```

Just mention "polydev" or "perspectives" in your prompt:
"Use polydev to debug this infinite loop"
"Get perspectives on: Should I use Redis or PostgreSQL for caching?"
"Use polydev to review this API for security issues"

Call the get_perspectives tool directly:
```json
{
  "tool": "get_perspectives",
  "arguments": {
    "prompt": "How should I optimize this database query?",
    "user_token": "pd_your_token_here"
  }
}
```

🤖 Multi-Model Analysis
```text
┌─ GPT 5.2 ────────────────────────────────────────
│ The N+1 query pattern is causing performance issues.
│ Consider using eager loading or batch queries...
└──────────────────────────────────────────────────
┌─ Claude Opus 4.5 ────────────────────────────────
│ Looking at the execution plan, the table scan on
│ `users` suggests a missing index on `email`...
└──────────────────────────────────────────────────
┌─ Gemini 3 ───────────────────────────────────────
│ The query could benefit from denormalization for
│ this read-heavy access pattern...
└──────────────────────────────────────────────────
┌─ Grok 4.1 ───────────────────────────────────────
│ Just add an index. The real problem is you're
│ querying in a loop - fix that first.
└──────────────────────────────────────────────────
✅ Consensus: Add index on users.email, fix N+1 query
💡 Recommendation: Use eager loading with proper indexing
```

Our approach achieves 74.6% on SWE-bench Verified (Resolve@2), matching Claude Opus at 62% lower cost.
| Approach | Resolution Rate | Cost/Instance |
|---|---|---|
| Claude Haiku (baseline) | 64.6% | $0.18 |
| + Polydev consultation | 66.6% | $0.24 |
| Resolve@2 (best of both) | 74.6% | $0.37 |
| Claude Opus (reference) | 74.4% | $0.97 |
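Resolve@2 counts an instance as resolved if either the baseline run or the Polydev-assisted run fixes it, which is why the combined rate (74.6%) exceeds both individual rates. A minimal sketch of that union metric, using toy data rather than the actual benchmark harness:

```python
def resolve_at_2(baseline: list[bool], assisted: list[bool]) -> float:
    """Fraction of instances resolved by at least one of the two runs."""
    assert len(baseline) == len(assisted), "runs must cover the same instances"
    resolved = sum(1 for b, a in zip(baseline, assisted) if b or a)
    return resolved / len(baseline)

# Toy example: each run resolves 3 of 5 instances, but they fail on
# different ones, so the union resolves 4 of 5.
baseline = [True, True, True, False, False]
assisted = [True, False, True, True, False]
print(resolve_at_2(baseline, assisted))  # 0.8
```

The metric rewards complementary failure modes: two runs with identical blind spots gain nothing from being combined.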
| Tool | Description |
|---|---|
| `get_perspectives` | Query multiple AI models simultaneously |
| `get_cli_status` | Check the status of local CLI tools |
| `force_cli_detection` | Re-detect installed CLI tools |
| `send_cli_prompt` | Send prompts to local CLIs with fallback |
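Under MCP, these tools are invoked via the protocol's `tools/call` JSON-RPC method. As a sketch of what an MCP client sends on the wire for `get_perspectives` (illustrative values only; a real client also performs the `initialize` handshake first):

```python
import json

def tools_call_request(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

msg = tools_call_request(1, "get_perspectives", {
    "prompt": "How should I optimize this database query?",
})
print(msg)
```

Your IDE's MCP client handles this framing for you; the sketch is only to show that each tool in the table above maps to a `name` plus an `arguments` object.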
MIT License - see LICENSE for details.