Scout Monitoring MCP

An MCP server for Scout Monitoring data interactions.


<details>
<summary>Claude Code</summary>

```sh
claude mcp add scoutmcp -e SCOUT_API_KEY=your_scout_api_key_here -- docker run --rm -i -e SCOUT_API_KEY scoutapp/scout-mcp-local
```

</details>

<details>
<summary>Cursor</summary>

Install the MCP server, then make sure to update the SCOUT_API_KEY value to your actual API key under Arguments in Cursor Settings > MCP.

</details>

<details>
<summary>VS Code Copilot</summary>
</details>

<details>
<summary>Claude Desktop</summary>

Add the following to your Claude config file:

  • macOS: ~/Library/Application Support/Claude/claude_desktop_config.json
  • Windows: %APPDATA%/Claude/claude_desktop_config.json
```json
{
  "mcpServers": {
    "scout-apm": {
      "command": "docker",
      "args": ["run", "--rm", "-i", "--env", "SCOUT_API_KEY", "scoutapp/scout-mcp-local"],
      "env": { "SCOUT_API_KEY": "your_scout_api_key_here" }
    }
  }
}
```
</details>

Using the Scout Monitoring MCP

Scout's MCP is intended to put error and performance data directly in the... hands? of your AI Assistant. Use it to get traces and errors with line-of-code information that the AI can use to target fixes right in your editor.

Most assistants will show you both raw tool calls and perform analysis. Desktop assistants can readily create custom JS applications to explore whatever data you desire. Assistants integrated into code editors can use trace data and error backtraces to make fixes right in your codebase.

Combine Scout's MCP with your AI Assistant's other tools to:

  • Create rich GitHub/GitLab issues based on errors and performance data
  • Make JIRA fun - have your AI Assistant create tickets with all the details
  • Generate PRs that fix specific errors and performance problems

Tools

The Scout MCP provides the following tools for accessing Scout APM data:

  • list_apps - List available Scout APM applications, with optional filtering by last active date
  • get_app_metrics - Get individual metric data (response_time, throughput, etc.) for a specific application
  • get_app_endpoints - Get all endpoints for an application with aggregated performance metrics
  • get_endpoint_metrics - Get timeseries metrics for a specific endpoint in an application
  • get_app_endpoint_traces - Get recent traces for an app filtered to a specific endpoint
  • get_app_trace - Get an individual trace with all spans and detailed execution information
  • get_app_error_groups - Get recent error groups for an app, optionally filtered by endpoint
  • get_app_insights - Get performance insights including N+1 queries, memory bloat, and slow queries

Resources

The Scout MCP provides configuration templates as resources that your AI assistant can read and apply:

  • scoutapm://config-resources/{framework} - Setup instructions for a supported framework or library (rails, django, flask, fastapi)
  • scoutapm://config-resources/list - List all available configuration templates
  • scoutapm://metrics - List of all available metrics for Scout APM
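Resources are fetched the same way, via a `resources/read` request carrying one of the URIs above (again a sketch per the MCP specification, shown for the `rails` template):

```python
import json

# JSON-RPC request to read a configuration-template resource.
# The URI follows the scoutapm://config-resources/{framework}
# pattern from the resource list above.
request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "resources/read",
    "params": {"uri": "scoutapm://config-resources/rails"},
}
print(json.dumps(request))
```

The result contains the template's contents, which the assistant can read and apply to your project.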

Useful Prompts

Setup & Configuration

  • "Help me set up Scout monitoring for my Rails application"
  • "Create a Scout APM config file for my Django project with key ABC123"

Performance & Monitoring

  • "Summarize the available tools in the Scout Monitoring MCP."
  • "Find the slowest endpoints for app my-app-name in the last 7 days. Generate a table with the results including the average response time, throughput, and P95 response time."
  • "Show me the highest-frequency errors for app Foo in the last 24 hours. Get the latest error detail, examine the backtrace and suggest a fix."
  • "Get any recent n+1 insights for app Bar. Pull the specific trace by id and help me optimize it based on the backtrace data."

Token Usage

We are currently more interested in expanding the information available than in strictly controlling the response size of our MCP tools. If your AI Assistant has a configurable token limit (e.g. in Claude Code: `export MAX_MCP_OUTPUT_TOKENS=50000`), we recommend setting it generously high, e.g. 50,000 tokens.

Local Development

We use uv and taskipy to manage environments and run tasks for this project.

Run with Inspector

```bash
uv run task dev
```

Connect within the Inspector, add your API key, and set the transport to STDIO.

Build the Docker image

```bash
docker build -t scout-mcp-local .
```

Release

  1. Branch and bump versions with `uv run python bump_versions.py`
  2. Get that merged
  3. Create a GitHub release with the new version (`gh release create v2025.11.3 --generate-notes --draft`)

For the bots:

mcp-name: com.scoutapm/scout-mcp-local
