# Multi-MCP: Multi-Model Code Review and Analysis MCP Server for Claude Code
<!-- mcp-name: io.github.religa/multi-mcp -->
[Build Status](https://github.com/religa/multi_mcp/actions) · [License: MIT](https://opensource.org/licenses/MIT) · [Python 3.11+](https://www.python.org/downloads/)
A **multi-model AI orchestration MCP server** for **automated code review** and **LLM-powered analysis**. Multi-MCP integrates with **Claude Code CLI** and **OpenCode** to orchestrate multiple AI models (OpenAI GPT, Anthropic Claude, Google Gemini) for **code quality checks**, **security analysis** (OWASP Top 10), and **multi-agent consensus**. Built on the **Model Context Protocol (MCP)**, this tool enables Python developers and DevOps teams to automate code reviews with AI-powered insights directly in their development workflow.

## Features
- **🔍 Code Review** - Systematic workflow with OWASP Top 10 security checks and performance analysis
- **💬 Chat** - Interactive development assistance with repository context awareness
- **📊 Compare** - Parallel multi-model analysis for architectural decisions
- **🎭 Debate** - Multi-agent consensus workflow (independent answers + critique)
- **🤖 Multi-Model Support** - OpenAI GPT, Anthropic Claude, Google Gemini, and OpenRouter
- **🖥️ CLI & API Models** - Mix CLI-based (Gemini CLI, Codex CLI) and API models
- **🏷️ Model Aliases** - Use short names like `mini`, `sonnet`, `gemini`
- **🧵 Threading** - Maintain context across multi-step reviews
## How It Works
Multi-MCP acts as an **MCP server** that Claude Code or OpenCode connects to, providing AI-powered code analysis tools:
1. **Install** the MCP server and configure your AI model API keys
2. **Integrate** with Claude Code or OpenCode automatically via `make install`
3. **Invoke** tools using natural language (e.g., "multi codereview this file")
4. **Get Results** from multiple AI models orchestrated in parallel
## Performance
**Fast Multi-Model Analysis:**
- ⚡ **Parallel Execution** - 3 models in ~10s (vs ~30s sequential)
- 🚀 **Async Architecture** - Non-blocking Python asyncio
- 💾 **Conversation Threading** - Maintains context across multi-step reviews
- 📉 **Low Latency** - Response time = slowest model, not sum of all models
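The "response time = slowest model" property falls out of `asyncio.gather`: all model calls start at once and the await resolves when the last one finishes. A self-contained sketch with sleeps standing in for API calls (model names and timings are illustrative):

```python
import asyncio
import time

async def query_model(name: str, latency: float) -> str:
    await asyncio.sleep(latency)  # stand-in for a real model API call
    return f"{name}: done"

async def review_in_parallel() -> list[str]:
    # All three coroutines run concurrently on the event loop
    return await asyncio.gather(
        query_model("gpt-5-mini", 0.3),
        query_model("sonnet", 0.2),
        query_model("gemini", 0.1),
    )

start = time.perf_counter()
results = asyncio.run(review_in_parallel())
elapsed = time.perf_counter() - start
# Wall time tracks the slowest model (~0.3 s), not the 0.6 s sum
assert elapsed < 0.55
print(results)
```

Sequential awaits would instead sum the latencies, which is where the ~10s vs ~30s difference comes from.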
## Quick Start
**Prerequisites:**
- Python 3.11+
- API key for at least one provider (OpenAI, Anthropic, Google, or OpenRouter)
### Installation
<!-- Claude Code Plugin - Coming Soon
#### Option 1: Claude Code Plugin (Recommended)
```bash
# Add the marketplace
/plugin marketplace add religa/multi_mcp
# Install the plugin
/plugin install multi-mcp@multi_mcp
```
Then configure API keys in `~/.multi_mcp/.env` (see [Configuration](#configuration)).
-->
#### Option 1: From Source
```bash
# Clone and install
git clone https://github.com/religa/multi_mcp.git
cd multi_mcp
# Execute ./scripts/install.sh
make install
# The installer will:
# 1. Install dependencies (uv sync)
# 2. Generate your .env file
# 3. Automatically add to Claude Code / OpenCode config (requires jq)
# 4. Test the installation
```
#### Option 2: Manual Configuration
If you prefer not to run `make install`:
```bash
# Install dependencies
uv sync
# Copy and configure .env
cp .env.example .env
# Edit .env with your API keys
```
Add to Claude Code (`~/.claude.json`) or OpenCode (`~/.opencode/opencode.json`), replacing `/path/to/multi_mcp` with your actual clone path:
**Claude Code:**
```json
{
"mcpServers": {
"multi": {
"type": "stdio",
"command": "/path/to/multi_mcp/.venv/bin/python",
"args": ["-m", "multi_mcp.server"]
}
}
}
```
**OpenCode:**
```json
{
"mcp": {
"multi": {
"type": "local",
"command": ["/path/to/multi_mcp/.venv/bin/python", "-m", "multi_mcp.server"],
"enabled": true
}
}
}
```
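A stray comma or quote in either config file will silently break the client's MCP discovery. One way to catch typos before restarting is to parse the file and check the entry; a sketch using the Claude Code snippet above as a string (point it at your real `~/.claude.json` instead):

```python
import json

# The Claude Code snippet from above, as it would appear in ~/.claude.json
config_text = """
{
  "mcpServers": {
    "multi": {
      "type": "stdio",
      "command": "/path/to/multi_mcp/.venv/bin/python",
      "args": ["-m", "multi_mcp.server"]
    }
  }
}
"""

config = json.loads(config_text)  # raises json.JSONDecodeError on a typo
server = config["mcpServers"]["multi"]
assert server["type"] == "stdio"
print("config OK:", server["command"])
```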
## Configuration
### Environment Configuration (API Keys & Settings)
Multi-MCP loads settings from `.env` files in this order (highest priority first):
1. **Environment variables** (already set in shell)
2. **Project `.env`** (current directory or project root)
3. **User `.env`** (`~/.multi_mcp/.env`) - fallback for pip installs
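The precedence rules above boil down to a layered merge where later layers override earlier ones, and real environment variables always win. A simplified sketch of that logic (not the actual loader; the function name and sample values are illustrative):

```python
import os

def load_env_layers(user_env: dict, project_env: dict) -> dict:
    """Merge settings: user .env < project .env < shell environment."""
    merged = dict(user_env)      # ~/.multi_mcp/.env (lowest priority)
    merged.update(project_env)   # project .env overrides user .env
    # Variables already set in the shell win over both files
    for key in merged:
        if key in os.environ:
            merged[key] = os.environ[key]
    return merged

layers = load_env_layers(
    user_env={"DEFAULT_MODEL": "gpt-5-mini", "OPENAI_API_KEY": "sk-user"},
    project_env={"DEFAULT_MODEL": "gemini-3-flash"},
)
# Project .env wins over user .env; user .env still fills the gaps
assert layers["DEFAULT_MODEL"] == "gemini-3-flash"
assert layers["OPENAI_API_KEY"] == "sk-user"
```

This is why a pip-installed setup can keep keys in `~/.multi_mcp/.env` while individual projects override just `DEFAULT_MODEL`.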
Edit `.env` with your API keys:
```bash
# API Keys (configure at least one)
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GEMINI_API_KEY=...
OPENROUTER_API_KEY=sk-or-...
# Azure OpenAI (optional)
AZURE_API_KEY=...
AZURE_API_BASE=https://your-resource.openai.azure.com/
# AWS Bedrock (optional)
AWS_ACCESS_KEY_ID=...
AWS_SECRET_ACCESS_KEY=...
AWS_REGION_NAME=us-east-1
# Model Configuration
DEFAULT_MODEL=gpt-5-mini
DEFAULT_MODEL_LIST=gpt-5-mini,gemini-3-flash
```
### Model Configuration (Adding Custom Models)
Models are defined in YAML configuration files (user config wins):
1. **Pac