Sandboxing AIOps and Agentic AI Security
- #WebAssembly Security
- #AI Sandboxes
- #AIOps Governance
- AI sandboxes currently rely on traditional tools like seccomp and containers, which are not designed for agentic AI security and suffer from ambient authority.
- Ambient authority, where processes inherit permissions from their environment, poses significant risks with non-deterministic AI agents and LLMs, violating the principle of least authority.
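The contrast can be sketched in a few lines of plain Python (illustrative only, not any particular sandbox API): one task silently inherits a credential from the process environment, the other can act only on a credential it is explicitly handed.

```python
import os

# Ambient authority: the task inherits whatever the process environment holds.
def deploy_ambient():
    token = os.environ.get("API_TOKEN")  # any code in this process can read it
    return token is not None

# Least authority: the caller must pass an explicit, scoped credential.
def deploy_scoped(token: str):
    return token is not None

os.environ["API_TOKEN"] = "secret"       # hypothetical credential for the demo
print(deploy_ambient())   # True: authority leaked in from the environment
print(deploy_scoped("secret"))  # True: authority was an explicit grant
```

With a non-deterministic agent, the first shape is dangerous precisely because no one decided to give that particular task the token.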
- Conventional sandboxing approaches require continuous patching to manage exfiltration paths and credential vectors, akin to a 'cartographer's dilemma' of mapping a shifting coastline.
- WebAssembly (Wasm) and WASI offer a zero-authority model where components start with no permissions; capabilities must be explicitly granted as typed imports, enforcing least authority.
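A minimal sketch of the zero-authority idea, in Python rather than actual Wasm/WASI (all names here are illustrative): the component declares the capabilities it needs as typed imports, starts with nothing else, and the host wires in exactly the grants policy allows.

```python
from typing import Optional, Protocol

# Typed import: the interface the component declares it needs.
class KeyValue(Protocol):
    def get(self, key: str) -> Optional[str]: ...
    def set(self, key: str, value: str) -> None: ...

class Component:
    def __init__(self, kv: KeyValue):
        # The ONLY authority this component ever holds is what it is handed.
        self.kv = kv

    def run(self) -> Optional[str]:
        self.kv.set("greeting", "hello")
        return self.kv.get("greeting")

class InMemoryKV:
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def set(self, key, value):
        self._data[key] = value

# The host explicitly satisfies the import; there is no ambient
# filesystem, network, or environment for the component to fall back on.
print(Component(InMemoryKV()).run())  # hello
```

The point of the shape, as in the Wasm component model, is that a capability the host never wires in simply does not exist for the component.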
- Wasm components virtualize capabilities (e.g., filesystem access) behind abstractions such as in-memory stores so that effects cannot escape the sandbox, and capability grants compose, each scoped to a specific interface (e.g., HTTP, key-value).
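Virtualization can be sketched the same way (again a conceptual Python model, not the WASI filesystem API): the component imports a filesystem-shaped interface, but the host satisfies it with an in-memory store, so even a hostile write never reaches the real disk.

```python
# Illustrative virtualized filesystem: the interface looks like file I/O,
# but the backing store is host-side memory the host can inspect and discard.
class VirtualFS:
    def __init__(self):
        self.files = {}
    def write(self, path: str, data: bytes) -> None:
        self.files[path] = data
    def read(self, path: str) -> bytes:
        return self.files[path]

def untrusted_component(fs: VirtualFS) -> None:
    fs.write("/etc/passwd", b"pwned")  # "succeeds" inside the sandbox...

fs = VirtualFS()
untrusted_component(fs)
print(fs.files["/etc/passwd"])  # ...but only mutated in-memory state
```

The component cannot tell the difference, which is what makes the grant safe to hand out.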
- AIOps is proposed as an operational framework for governing autonomous AI work, covering intent capture, plan extraction, policy-based scheduling, bounded execution, validation, and observability.
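The stages above can be strung together as a toy pipeline (a hypothetical sketch; the stage names follow the framework, everything else is invented for illustration): each stage appends to an audit trail, the scheduler drops steps whose capability the policy does not allow, and execution is bounded by a budget.

```python
# Hypothetical AIOps pipeline: intent capture -> plan extraction ->
# policy-based scheduling -> bounded execution -> validation,
# with an audit list standing in for observability.
audit = []
POLICY_ALLOWED = {"keyvalue", "http"}  # capabilities policy will grant

def capture_intent(text):
    audit.append(f"intent: {text}")
    return {"intent": text}

def extract_plan(intent):
    plan = [{"step": "cache-result", "capability": "keyvalue"}]
    audit.append(f"plan: {len(plan)} step(s)")
    return plan

def schedule(plan):
    allowed = [s for s in plan if s["capability"] in POLICY_ALLOWED]
    audit.append(f"scheduled: {len(allowed)} of {len(plan)}")
    return allowed

def execute(plan, budget=10):
    results = [s["step"] + ":done" for s in plan[:budget]]  # bounded
    audit.append(f"executed: {len(results)}")
    return results

def validate(results):
    ok = all(r.endswith(":done") for r in results)
    audit.append(f"validated: {ok}")
    return ok

results = execute(schedule(extract_plan(capture_intent("cache the report"))))
print(validate(results))  # True
```

Every transition is recorded, so the audit trail answers "who decided this agent could do that, and when" after the fact.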
- Agent outputs fall into three shapes: producing artifacts (with scoped grants like filesystem or signing keys), acting on systems (via typed capabilities), and triggering workflows (with bounded recursion).
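For the third shape, bounded recursion can be sketched by threading a depth budget through each triggered workflow (an illustrative model, not a specific scheduler API): every sub-workflow inherits a strictly smaller budget, so fan-out terminates by construction.

```python
# Illustrative bounded recursion: a workflow grant carries a remaining-depth
# budget; a triggered sub-workflow gets budget - 1 and is denied at zero.
def run_workflow(name: str, depth_budget: int, log: list) -> None:
    if depth_budget <= 0:
        log.append(f"{name}: denied (budget exhausted)")
        return
    log.append(f"{name}: ran")
    run_workflow(name + "/sub", depth_budget - 1, log)

log = []
run_workflow("deploy", 3, log)
print(log)
# ['deploy: ran', 'deploy/sub: ran', 'deploy/sub/sub: ran',
#  'deploy/sub/sub/sub: denied (budget exhausted)']
```

The same pattern generalizes to any resource budget (tokens, wall-clock time, spawned tasks) an agent's grant should cap.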
- Artifacts produced by AI agents should themselves run in the same Wasm sandbox under least authority, keeping their production behavior secure and controlled, and aligning with continuous delivery and AI-agent access control.
- Cosmonic Control is introduced as a control plane for AIOps, enabling dense, capability-bounded functions at scale without cold starts, ambient authority, or governance bottlenecks.
- The componentized approach via Wasm provides a direct path to secure AI agent operations, integrating with existing governance systems and avoiding the pitfalls of traditional sandboxing.