Building a Compliance-First AI Pipeline with AgentGate
Compliance-first means building the audit trail, policy enforcement, and monitoring infrastructure before you write your business logic — not after you have a working agent and are trying to retrofit governance on top of it. This approach sounds slower. In practice, it is faster because you avoid the costly rework of redesigning a production system to satisfy regulatory requirements discovered late.
This guide walks through the architecture of a compliance-first AI pipeline, the specific design decisions that make it maintainable at scale, and a complete working implementation using AgentGate.
The Core Principle: Compliance as Infrastructure
Think of compliance controls the same way you think about observability. You would not build a production system without logging, metrics, and tracing — not because you expect everything to break immediately, but because you know you will need that data eventually. Compliance infrastructure is the same: audit trails, policy gates, and fairness metrics are data you will need, and building them in from the start costs a fraction of what retrofitting costs.
A compliance-first pipeline has these properties:
- Every agent invocation is logged automatically, without developer action
- Policy violations block execution before they can affect users
- Fairness metrics are computed continuously, not in quarterly audits
- Evidence for any audit question can be produced in minutes, not weeks
Pipeline Architecture
A compliance-first AI pipeline has four layers:
┌────────────────────────────────────────────────────┐
│ Layer 1: Request Gateway │
│ - Authentication (per-agent API keys) │
│ - Rate limiting │
│ - Input validation and sanitization │
│ - PII detection and masking │
├────────────────────────────────────────────────────┤
│ Layer 2: Policy Engine │
│ - Pre-execution policy checks │
│ - Confidence thresholds │
│ - Action allowlists │
│ - Human-in-the-loop routing │
├────────────────────────────────────────────────────┤
│ Layer 3: Agent Execution │
│ - Model inference │
│ - Tool calls (with audit logging per call) │
│ - Output generation │
├────────────────────────────────────────────────────┤
│ Layer 4: Post-Execution Compliance │
│ - Audit trail entry creation │
│ - Fairness metric update │
│ - Quality gate evaluation │
│ - Anomaly detection │
└────────────────────────────────────────────────────┘
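Conceptually, a request flows through these four layers in order, each layer enriching a shared context and appending to the audit trail. A minimal sketch of that composition — the layer bodies here are placeholder stand-ins, not the real implementations built in the steps below:

```typescript
// Shared context threaded through all four layers.
type Ctx = {
  input: Record<string, unknown>;
  output?: unknown;
  audit: string[];
};

// Placeholder layers: gateway, policy engine, agent execution, post-execution.
const layers: Array<(ctx: Ctx) => Ctx> = [
  (ctx) => ({ ...ctx, audit: [...ctx.audit, 'gateway: authenticated, input sanitized'] }),
  (ctx) => ({ ...ctx, audit: [...ctx.audit, 'policy: pre-execution checks passed'] }),
  (ctx) => ({ ...ctx, output: { decision: 'recommend_approval' }, audit: [...ctx.audit, 'agent: executed'] }),
  (ctx) => ({ ...ctx, audit: [...ctx.audit, 'post: audit entry written, metrics updated'] }),
];

// Run a request through every layer in order.
function runPipeline(input: Record<string, unknown>): Ctx {
  const initial: Ctx = { input, audit: [] };
  return layers.reduce((ctx, layer) => layer(ctx), initial);
}
```

The point of the shape: compliance layers wrap the agent rather than living inside it, so adding a second agent reuses layers 1, 2, and 4 unchanged.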
Step 1: Set Up AgentGate as Your Compliance Backbone
import AgentGate from '@agentgate/sdk';

import Anthropic from '@anthropic-ai/sdk';
// Initialize once, reuse everywhere
export const gate = new AgentGate({
apiKey: process.env.AGENTGATE_API_KEY!,
environment: process.env.NODE_ENV === 'production' ? 'production' : 'staging',
compliance: {
frameworks: ['gdpr', 'eu_ai_act', 'sox'],
risk_class: 'high',
data_classification: 'confidential',
audit_retention_years: 7
},
monitoring: {
fairness: { enabled: true, window_days: 30 },
drift: { enabled: true, alert_threshold: 0.15 },
anomaly: { enabled: true }
}
});
export const claude = new Anthropic({
apiKey: process.env.ANTHROPIC_API_KEY!
});
Step 2: Define Policies Before Business Logic
Write your compliance policies before you write your agent logic. Policies are version-controlled, reviewable artifacts — treat them like code.
// policies/credit-agent.policy.ts
import { gate } from '../config';
export async function registerCreditAgentPolicy() {
await gate.policies.upsert({
id: 'credit-agent-v1-policy',
version: '1.0.0',
rules: [
// Block decisions below confidence threshold
{
id: 'confidence-gate',
type: 'threshold',
condition: 'output.confidence < 0.80',
action: 'route_to_human_review',
reason: 'Low confidence decisions require human review per GDPR Article 22'
},
// Prevent direct fund transfers — agent can only recommend
{
id: 'action-allowlist',
type: 'action_control',
allowed_actions: ['recommend_approval', 'recommend_decline', 'request_documents'],
blocked_actions: ['approve_loan', 'disburse_funds', 'modify_limit'],
reason: 'Segregation of duties — AI cannot take irreversible financial actions'
},
// Require explanation for all decisions
{
id: 'explanation-required',
type: 'output_requirement',
required_fields: ['decision', 'confidence', 'reason_codes', 'explanation'],
reason: 'GDPR Article 22 explanation right'
}
]
});
}
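To make the rule semantics concrete, here is a toy evaluator for threshold conditions of the form `output.confidence < 0.80`. This is an illustration of the idea only — the actual policy evaluation happens inside AgentGate, not in your application code:

```typescript
// A threshold rule as illustrated in the policy above.
interface ThresholdRule {
  id: string;
  condition: string; // e.g. 'output.confidence < 0.80'
  action: string;    // e.g. 'route_to_human_review'
}

// Returns the rule's action if the condition triggers, otherwise null.
function evaluateThreshold(
  rule: ThresholdRule,
  output: Record<string, number>,
): string | null {
  const m = rule.condition.match(/^output\.(\w+)\s*(<=|>=|<|>)\s*([\d.]+)$/);
  if (!m) throw new Error(`Unsupported condition: ${rule.condition}`);
  const [, field, op, thresholdStr] = m;
  const value = output[field];
  const threshold = Number(thresholdStr);
  const triggered =
    op === '<'  ? value < threshold :
    op === '>'  ? value > threshold :
    op === '<=' ? value <= threshold :
                  value >= threshold;
  return triggered ? rule.action : null;
}
```

A confidence of 0.72 against the `confidence-gate` rule above would return `'route_to_human_review'`; 0.9 would return `null` and let execution proceed.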
Step 3: Build the Request Gateway with PII Protection
// gateway/request-handler.ts
import { gate } from '../config';
interface AgentRequest {
agent_id: string;
subject_id: string;
input: Record<string, unknown>;
requesting_user: string;
}
export async function handleAgentRequest(req: AgentRequest) {
// Step 1: Validate and authenticate
const authResult = await gate.auth.validate({
agent_id: req.agent_id,
requesting_user: req.requesting_user
});
if (!authResult.authorized) {
throw new Error(`Unauthorized: ${authResult.reason}`);
}
// Step 2: Scan for PII in the input and mask if necessary
const { sanitized_input, pii_detected } = await gate.pii.scan(req.input);
if (pii_detected.high_sensitivity.length > 0) {
// Log that high-sensitivity PII was detected and masked
await gate.audit.record({
event_type: 'pii_masked',
agent_id: req.agent_id,
subject_id: req.subject_id,
fields_masked: pii_detected.high_sensitivity
});
}
// Step 3: Run pre-execution policy check
const policyCheck = await gate.policies.evaluate({
policy_id: `${req.agent_id}-policy`,
phase: 'pre_execution',
context: { agent_id: req.agent_id, input: sanitized_input }
});
if (policyCheck.blocked) {
throw new Error(`Policy blocked execution: ${policyCheck.reason}`);
}
return { sanitized_input, policyCheck };
}
Step 4: Instrument Agent Execution
// agents/credit-agent.ts
import { gate, claude } from '../config';
import { handleAgentRequest } from '../gateway/request-handler';
// CreditApplication, CREDIT_AGENT_SYSTEM_PROMPT, and parseAgentOutput are
// assumed to be defined elsewhere in your codebase.
export async function runCreditAgent(
applicant_id: string,
application_data: CreditApplication,
requesting_user: string
) {
// Gateway validation and PII sanitization
const { sanitized_input } = await handleAgentRequest({
agent_id: 'credit-agent-v1',
subject_id: applicant_id,
input: application_data,
requesting_user
});
// Record the start of execution
const executionId = await gate.execution.start({
agent_id: 'credit-agent-v1',
model_version: process.env.MODEL_VERSION!,
subject_id: applicant_id,
input_hash: gate.hash(sanitized_input)
});
try {
// Run the model
const response = await claude.messages.create({
model: 'claude-sonnet-4-6',
max_tokens: 1024,
system: CREDIT_AGENT_SYSTEM_PROMPT,
messages: [{ role: 'user', content: JSON.stringify(sanitized_input) }]
});
const output = parseAgentOutput(response);
// Post-execution policy evaluation
const postCheck = await gate.policies.evaluate({
policy_id: 'credit-agent-v1-policy',
phase: 'post_execution',
context: { output }
});
// Record the completed execution with full audit trail
await gate.execution.complete({
execution_id: executionId,
output_hash: gate.hash(output),
output,
policy_result: postCheck,
routed_to_human: postCheck.route_to_human
});
return { output, requires_human_review: postCheck.route_to_human };
} catch (error) {
await gate.execution.fail({ execution_id: executionId, error });
throw error;
}
}
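The `parseAgentOutput` helper above is assumed to be defined elsewhere. A minimal hypothetical version — one that extracts the model's text block and validates the fields the `explanation-required` policy rule demands — might look like this:

```typescript
// Hypothetical shape of the structured decision the agent must return,
// matching the policy's required_fields.
interface CreditDecision {
  decision: 'recommend_approval' | 'recommend_decline' | 'request_documents';
  confidence: number;
  reason_codes: string[];
  explanation: string;
}

// Pull the first text block out of the model response, parse it as JSON,
// and fail fast if any policy-required field is missing.
function parseAgentOutput(response: {
  content: Array<{ type: string; text?: string }>;
}): CreditDecision {
  const textBlock = response.content.find((b) => b.type === 'text');
  if (!textBlock?.text) throw new Error('No text content in model response');
  const parsed = JSON.parse(textBlock.text);
  for (const field of ['decision', 'confidence', 'reason_codes', 'explanation']) {
    if (!(field in parsed)) throw new Error(`Missing required field: ${field}`);
  }
  return parsed as CreditDecision;
}
```

Failing here, rather than in the post-execution policy check, keeps malformed outputs out of the audit trail's completed-execution records.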
Step 5: Set Up Continuous Compliance Monitoring
// monitoring/compliance-monitor.ts
import { gate } from '../config';
export async function initializeComplianceMonitoring() {
// Fairness monitoring — alert before threshold breach
await gate.monitoring.configure({
agent_id: 'credit-agent-v1',
checks: [
{
name: 'disparate_impact_ratio',
metric: 'dir',
warning_threshold: 0.83, // Warn before the 0.80 hard limit
critical_threshold: 0.80,
protected_attributes: ['gender', 'age_group'],
window_days: 30
},
{
name: 'decision_rate_drift',
metric: 'positive_rate',
max_drift_pct: 15,
baseline_period_days: 90,
window_days: 7
},
{
name: 'audit_chain_integrity',
metric: 'chain_valid',
check_interval_hours: 24,
alert_on_break: true
}
],
alert_channels: [
'slack:#compliance-monitoring',
'email:compliance@yourcompany.com'
]
});
}
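The disparate impact ratio tracked by the first check is the classic four-fifths rule: the positive-outcome rate for a protected group divided by the rate for the reference group, with 0.80 as the conventional red line. The computation itself is simple — a self-contained sketch:

```typescript
// Four-fifths rule: DIR = positive rate of protected group
//                       / positive rate of reference group.
// Each array holds one boolean per decision (true = positive outcome).
function disparateImpactRatio(
  protectedOutcomes: boolean[],
  referenceOutcomes: boolean[],
): number {
  const rate = (xs: boolean[]) => xs.filter(Boolean).length / xs.length;
  return rate(protectedOutcomes) / rate(referenceOutcomes);
}
```

A 25% approval rate for the protected group against 50% for the reference group yields a DIR of 0.5 — well below both the 0.83 warning and the 0.80 critical thresholds configured above.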
What You Get Out of the Box
With this architecture in place, you have:
- An automated audit trail for every agent invocation, compliant with GDPR, EU AI Act, and SOX logging requirements
- Policy enforcement that prevents non-compliant actions before they affect users
- Continuous fairness monitoring with early warnings before threshold breaches
- PII masking at the gateway layer, preventing sensitive data from entering your agent logs
- On-demand compliance export for any audit period
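The `audit_chain_integrity` check configured in Step 5 rests on a tamper-evident audit trail: each entry stores a hash of the previous entry, so altering any record breaks every hash after it. To illustrate the mechanism (a self-contained sketch, not AgentGate's actual storage format):

```typescript
import { createHash } from 'node:crypto';

// Each audit entry links to its predecessor via prev_hash.
interface AuditEntry {
  payload: string;
  prev_hash: string;
  hash: string;
}

// Append an entry whose hash covers both the payload and the previous hash.
function appendEntry(chain: AuditEntry[], payload: string): AuditEntry[] {
  const prev_hash = chain.length ? chain[chain.length - 1].hash : 'GENESIS';
  const hash = createHash('sha256').update(prev_hash + payload).digest('hex');
  return [...chain, { payload, prev_hash, hash }];
}

// Recompute every link; any edited payload or relinked entry fails the check.
function chainValid(chain: AuditEntry[]): boolean {
  return chain.every((entry, i) => {
    const expectedPrev = i === 0 ? 'GENESIS' : chain[i - 1].hash;
    const recomputed = createHash('sha256')
      .update(expectedPrev + entry.payload)
      .digest('hex');
    return entry.prev_hash === expectedPrev && entry.hash === recomputed;
  });
}
```

This is why the monitor can alert on `chain_valid` daily: verification is a pure recomputation over the stored log, with no trusted third party required.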
The total integration time for a new agent using this pattern is typically 2-4 hours for a developer familiar with the codebase. The compliance infrastructure is reusable across every subsequent agent you add.
See the complete implementation in the AgentGate documentation, including framework-specific configuration guides for Next.js, FastAPI, and Go.
Start building compliance-first today
AgentGate gives you the complete compliance infrastructure — audit trails, policy engine, fairness monitoring, and PII protection — as an API. Free for your first 1,000 events.
Start free | Read the quickstart | See pricing