Aegis — AI Agent Governance
Policy-based governance for AI agent tool calls. YAML policy, approval gates, audit logging.
★ 2MITai-ml
Install
Config snippet generator goes here (5 client tabs)
README
<!-- mcp-name: io.github.Acacian/aegis -->
<p align="center">
<h1 align="center">Aegis</h1>
<p align="center">
<strong>OpenTelemetry for AI safety.<br/>Auto-instrument any AI framework with guardrails — zero code changes.</strong>
</p>
<p align="center">
<code>pip install agent-aegis</code> and add <b>one line</b>. Aegis monkey-patches LangChain, CrewAI, OpenAI Agents SDK, OpenAI, and Anthropic at runtime — every LLM call and tool invocation passes through prompt-injection detection, PII masking, toxicity filtering, and a full audit trail. No refactoring. No infra. No LLM-based guardrails (all checks are deterministic and sub-millisecond).
</p>
</p>
<p align="center">
<a href="https://github.com/Acacian/aegis/actions/workflows/ci.yml"><img src="https://github.com/Acacian/aegis/actions/workflows/ci.yml/badge.svg" alt="CI"></a>
<a href="https://pypi.org/project/agent-aegis/"><img src="https://img.shields.io/pypi/v/agent-aegis?color=blue&cacheSeconds=3600" alt="PyPI"></a>
<a href="https://pypi.org/project/langchain-aegis/"><img src="https://img.shields.io/pypi/v/langchain-aegis?label=langchain-aegis&color=blue&cacheSeconds=3600" alt="langchain-aegis"></a>
<a href="https://pypi.org/project/agent-aegis/"><img src="https://img.shields.io/pypi/pyversions/agent-aegis?cacheSeconds=3600" alt="Python"></a>
<a href="https://github.com/Acacian/aegis/blob/main/LICENSE"><img src="https://img.shields.io/badge/License-MIT-blue.svg" alt="License"></a>
<a href="https://acacian.github.io/aegis/"><img src="https://img.shields.io/badge/docs-acacian.github.io%2Faegis-blue" alt="Docs"></a>
<br/>
<a href="https://github.com/Acacian/aegis/actions/workflows/ci.yml"><img src="https://img.shields.io/badge/tests-2540%2B_passed-brightgreen" alt="Tests"></a>
<a href="https://github.com/Acacian/aegis/actions/workflows/ci.yml"><img src="https://img.shields.io/badge/coverage-92%25-brightgreen" alt="Coverage"></a>
<a href="https://acacian.github.io/aegis/playground/"><img src="https://img.shields.io/badge/playground-Try_it_Live-ff6b6b" alt="Playground"></a>
<a href="https://www.bestpractices.dev/projects/12253"><img src="https://www.bestpractices.dev/projects/12253/badge" alt="OpenSSF Best Practices"></a>
</p>
<p align="center">
<a href="#auto-instrumentation"><strong>Auto-Instrumentation</strong></a> •
<a href="#quick-start">Quick Start</a> •
<a href="#supported-frameworks">Supported Frameworks</a> •
<a href="#three-pillars">Three Pillars</a> •
<a href="https://acacian.github.io/aegis/">Documentation</a> •
<a href="#integrations">Integrations</a> •
<a href="https://acacian.github.io/aegis/playground/"><strong>Try it Live</strong></a> •
<a href="https://github.com/Acacian/aegis/issues?q=is%3Aissue+is%3Aopen+label%3A%22good+first+issue%22">Contributing</a>
</p>
<p align="center">
<b>English</b> •
<a href="./README.ko.md">한국어</a>
</p>
---
## Auto-Instrumentation
Add AI safety to any project in 30 seconds. No refactoring, no wrappers, no config files.
```python
import aegis
aegis.auto_instrument()
# That's it. Every LangChain, CrewAI, OpenAI Agents SDK, OpenAI API,
# and Anthropic API call in your application now passes through:
# - Prompt injection detection (blocks attacks)
# - Toxicity detection (blocks harmful content)
# - PII detection (warns on personal data exposure)
# - Prompt leak detection (warns on system prompt extraction)
# - Full audit trail (every call logged)
```
Or zero code changes — just set an environment variable:
```bash
AEGIS_INSTRUMENT=1 python my_agent.py
```
Aegis monkey-patches framework internals at import time, the same approach used by OpenTelemetry for observability and Sentry for error tracking. Your existing code stays untouched.
### How It Works
```
Your code Aegis layer (invisible)
--------- -----------------------
chain.invoke("Hello") --> [input guardrails] --> LangChain --> [output guardrails] --> response
Runner.run(agent, "query") --> [input guardrails] --> OpenAI SDK --> [output guardrails] --> response
crew.kickoff() --> [task guardrails] --> CrewAI --> [tool guardrails] --> response
client.chat.completions() --> [input guardrails] --> OpenAI API --> [output guardrails] --> response
```
Every call is checked on both input and output. Blocked content raises `AegisGuardrailError` (configurable to warn or log instead).
### Supported Frameworks
| Framework | What gets patched | Status |
|-----------|------------------|--------|
| **LangChain** | `BaseChatModel.invoke/ainvoke`, `BaseTool.invoke/ainvoke` | Stable |
| **CrewAI** | `Crew.kickoff/kickoff_async`, global `BeforeToolCallHook` | Stable |
| **OpenAI Agents SDK** | `Runner.run`, `Runner.run_sync` | Stable |
| **OpenAI API** | `Completions.create` (chat & completions) | Stable |
| **Anthropic API** | `Messages.create` | Stable |
| **LiteLLM