Hasty Briefs (beta)

Don't trust AI agents

5 hours ago
  • #Container Isolation
  • #AI Security
  • #Trust Models
  • AI agents should be treated as untrusted and potentially malicious.
  • NanoClaw uses container isolation as a core part of its architecture to ensure security.
  • Each agent in NanoClaw runs in its own container, with ephemeral containers created per invocation.
  • Agents in NanoClaw are unprivileged and can only access explicitly mounted directories.
  • OpenClaw's sandbox mode shares containers among agents, risking information leakage.
  • NanoClaw prevents data leakage between agents by isolating them in separate sandboxes.
  • NanoClaw includes a mount allowlist to prevent accidental exposure of sensitive paths.
  • Non-main groups in NanoClaw are untrusted by default to prevent unauthorized actions.
  • OpenClaw's complexity (400k+ lines of code) makes it hard to review and increases the risk of vulnerabilities.
  • NanoClaw is designed to be simple and reviewable, with a focus on minimal, tailored code.
  • New functionality in NanoClaw is added via skills, which are reviewed before merging.
  • Security in NanoClaw is enforced via containers, mount restrictions, and filesystem isolation.
  • The goal is to minimize trust in AI agents and contain potential damage from misbehavior.
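The per-invocation container model in the points above can be sketched roughly. This is a minimal illustration, assuming a Docker-based runtime; the `build_agent_command` helper, image name, and specific flags are assumptions for illustration, not NanoClaw's actual implementation.

```python
# Sketch: building an ephemeral, unprivileged container invocation.
# All names, flags, and the image are illustrative assumptions.

def build_agent_command(agent_id: str, allowed_mounts: list[str]) -> list[str]:
    """Construct a `docker run` argv for one agent invocation.

    --rm        : container is ephemeral, destroyed after the run
    --user      : agent process runs unprivileged inside the container
    --network   : no network unless deliberately granted
    -v ...:ro   : only explicitly listed directories are visible
    """
    cmd = [
        "docker", "run", "--rm",
        "--user", "1000:1000",
        "--network", "none",
        "--name", f"agent-{agent_id}",
    ]
    for path in allowed_mounts:
        cmd += ["-v", f"{path}:{path}:ro"]  # explicit, read-only mounts only
    cmd.append("agent-image")  # hypothetical image name
    return cmd
```

Because a fresh container is built per invocation, nothing an agent writes or learns outlives the run, which is what bounds the damage a misbehaving agent can do.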
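The mount allowlist mentioned above might look something like the following sketch. The allowlisted roots are made-up examples; the key idea is resolving the requested path before checking it, so `..` segments or symlinks cannot smuggle in a sensitive path.

```python
from pathlib import Path

# Illustrative allowlisted roots; not NanoClaw's actual policy.
ALLOWED_MOUNT_ROOTS = [Path("/srv/agent-data"), Path("/tmp/agent-scratch")]

def is_mount_allowed(requested: str) -> bool:
    """Allow a mount only if it resolves inside an allowlisted root.

    Resolving first defeats `..` and symlink tricks that would otherwise
    expose paths outside the allowlist (e.g. /etc or a user's ~/.ssh).
    """
    resolved = Path(requested).resolve()
    return any(resolved.is_relative_to(root) for root in ALLOWED_MOUNT_ROOTS)
```

With this check, a request like `/srv/agent-data/../../etc` resolves to `/etc` and is rejected, so sensitive host paths are never accidentally exposed to an agent's container.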
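The default-untrusted stance for non-main groups amounts to a default-deny policy: privileged operations require explicit trust. The group and action names below are illustrative assumptions, not NanoClaw's actual policy surface.

```python
# Sketch of default-deny trust for agent groups. "main" as the sole
# trusted group and these action names are illustrative assumptions.
TRUSTED_GROUPS = {"main"}
PRIVILEGED_ACTIONS = {"write_host", "spawn_agent", "mount"}

def authorize(group: str, action: str) -> bool:
    """Default deny: privileged actions require a trusted group.

    Any group not explicitly added to TRUSTED_GROUPS, including new or
    unknown ones, is untrusted by default.
    """
    if action in PRIVILEGED_ACTIONS:
        return group in TRUSTED_GROUPS
    return True  # unprivileged actions (e.g. reading its own sandbox) pass
```

The point of defaulting to untrusted is that forgetting to configure a new group fails safe: the group simply cannot perform privileged actions until someone deliberately grants trust.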