AI Architecture Labs · Secure Agent Patterns

OWASP-inspired AI security patterns.

AI Architecture Labs helps teams move from AI prototypes to production-grade systems with stronger control boundaries, safer tool execution, auditable decisions, and more resilient operating models.

Pattern 1
Scoped tool authorization
Sensitive actions stay behind explicit permission boundaries, approval checks, and role-aware execution rules.
Next.js / TypeScript
type ToolName = "read_report" | "send_email" | "database_write" | "file_delete";

type ToolContext = {
  role: "viewer" | "operator" | "admin";
  userConfirmed?: boolean;
};

export function authorizeToolCall(tool: ToolName, context: ToolContext) {
  const sensitive: ToolName[] = ["send_email", "database_write", "file_delete"];

  // Check role-based permissions first: an action the role can never perform
  // should be rejected outright, not routed through confirmation.
  if (tool === "file_delete" && context.role !== "admin") {
    return { allowed: false, status: "forbidden" };
  }

  // Sensitive tools pause until the user explicitly confirms.
  if (sensitive.includes(tool) && !context.userConfirmed) {
    return { allowed: false, status: "pending_confirmation" };
  }

  return { allowed: true, status: "approved" };
}
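The per-call gate can be complemented by granting each role an explicit tool set up front. A minimal sketch, assuming an illustrative role-to-tool map (ROLE_TOOLS and toolsForRole are hypothetical names, not part of the pattern above):

```typescript
// Hypothetical role-to-tool allowlist: each role is granted an explicit tool
// set up front, complementing the per-call authorization gate.
const ROLE_TOOLS: Record<string, readonly string[]> = {
  viewer: ["read_report"],
  operator: ["read_report", "send_email", "database_write"],
  admin: ["read_report", "send_email", "database_write", "file_delete"],
};

export function toolsForRole(role: string): readonly string[] {
  // Unknown roles get no tools: deny by default.
  return ROLE_TOOLS[role] ?? [];
}
```

Exposing only the allowlisted subset to the model at prompt-construction time keeps forbidden tools out of the conversation entirely, rather than rejecting them after the fact.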
Pattern 2
Prompt-injection boundaries
Untrusted content is isolated, labeled, sanitized, and handled separately from system instructions.
Next.js / TypeScript
import { z } from "zod";

const ExternalContentSchema = z.object({
  source: z.string().max(200),
  content: z.string().max(12000),
});

export function prepareUntrustedContent(input: unknown) {
  const parsed = ExternalContentSchema.parse(input);

  // Best-effort keyword masking: known override phrases are filtered, but a
  // blocklist alone is not a complete prompt-injection defense.
  const normalized = parsed.content
    .replace(/ignore previous instructions/gi, "[FILTERED]")
    .replace(/system prompt/gi, "[FILTERED]");

  return `UNTRUSTED_DATA_START
Source: ${parsed.source}
Content:
${normalized}
UNTRUSTED_DATA_END`;
}
Pattern 3
Memory isolation
Conversation memory is scoped per user and session, limited by size, and pruned with expiration rules.
Next.js / TypeScript
type MemoryItem = {
  userId: string;
  sessionId: string;
  content: string;
  createdAt: number;
};

const TTL_MS = 24 * 60 * 60 * 1000;

export class SecureAgentMemory {
  private items: MemoryItem[] = [];

  add(userId: string, sessionId: string, content: string) {
    this.items.push({
      userId,
      sessionId,
      content: content.slice(0, 5000),
      createdAt: Date.now(),
    });
  }

  getForSession(userId: string, sessionId: string) {
    const cutoff = Date.now() - TTL_MS;
    return this.items.filter(
      (item) =>
        item.userId === userId &&
        item.sessionId === sessionId &&
        item.createdAt >= cutoff
    );
  }
}
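The class above filters expired items on read but never deletes them, so the store still grows over time. A minimal standalone pruning sketch (pruneExpired and the Expirable type are illustrative assumptions, not methods of SecureAgentMemory):

```typescript
// Drop items older than the TTL instead of only filtering them on read,
// so memory does not grow without bound.
type Expirable = { createdAt: number };

export function pruneExpired<T extends Expirable>(
  items: T[],
  ttlMs: number,
  now: number = Date.now()
): T[] {
  const cutoff = now - ttlMs;
  // Keep only items created within the TTL window.
  return items.filter((item) => item.createdAt >= cutoff);
}
```

A periodic call to a prune step like this, reassigning the internal array, keeps read-time filtering as a correctness guarantee rather than the only line of defense.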
Pattern 4
Human approval for risky actions
High-impact agent actions are paused for review before execution, instead of running autonomously.
Next.js / TypeScript
export function classifyActionRisk(toolName: string) {
  if (["database_write", "send_email", "file_delete"].includes(toolName)) {
    return "high";
  }
  return "low";
}

export async function executeAgentAction(toolName: string) {
  const risk = classifyActionRisk(toolName);

  if (risk === "high") {
    return {
      status: "awaiting_human_approval",
      toolName,
    };
  }

  return { status: "executed", toolName };
}
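One way to resume a paused action after review is an approval queue. A minimal in-memory sketch (ApprovalQueue and its method names are illustrative assumptions, not part of the pattern above):

```typescript
// Hypothetical approval queue: a high-risk action waits here until a
// reviewer approves it, and can be released for execution exactly once.
type PendingAction = { id: string; toolName: string; approved: boolean };

export class ApprovalQueue {
  private pending = new Map<string, PendingAction>();

  enqueue(id: string, toolName: string) {
    this.pending.set(id, { id, toolName, approved: false });
  }

  approve(id: string) {
    const action = this.pending.get(id);
    if (action) action.approved = true;
  }

  // Returns the action only if approved, removing it so it runs once.
  take(id: string): PendingAction | undefined {
    const action = this.pending.get(id);
    if (action?.approved) {
      this.pending.delete(id);
      return action;
    }
    return undefined;
  }
}
```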
Pattern 5
Audit logging and observability
Every sensitive action can be traced with event logs that support review, investigation, and operational visibility.
Next.js / TypeScript
type AuditEvent = {
  userId: string;
  action: string;
  toolName?: string;
  status: "approved" | "blocked" | "pending_confirmation";
  timestamp: string;
};

export async function logAuditEvent(event: AuditEvent) {
  console.log(JSON.stringify(event));
  // Send to Postgres, SIEM, or observability pipeline
}
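Audit events sometimes carry fields that should never reach logs. A minimal redaction sketch to run before logging (redact and the field names are illustrative assumptions, not part of the pattern above):

```typescript
// Mask secret fields on a shallow copy so raw credentials or tokens
// never reach the audit sink; the original event is left untouched.
export function redact(
  event: Record<string, unknown>,
  secretKeys: string[]
): Record<string, unknown> {
  const copy: Record<string, unknown> = { ...event };
  for (const key of secretKeys) {
    if (key in copy) copy[key] = "[REDACTED]";
  }
  return copy;
}
```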

Security should be part of the architecture, not added after the agent is live.

AI Architecture Labs helps organizations review agent design, execution flows, governance boundaries, and production controls before risk becomes operational debt.

Review architecture decisions before scaling agents into production
Identify gaps in tool security, memory boundaries, approvals, and logging
Translate risk into a concrete roadmap leadership can act on