I Want Everything Local – Building My Offline AI Workspace
- #AI
- #privacy
- #local-computing
- A friend asked for a fully local setup: no cloud services and no remote code execution.
- Why mainstream LLM chat apps fall short, and why a local alternative is worth building.
- Running the code a local LLM generates calls for an isolated execution environment.
- Containers (Docker) provide that isolation, and a browser provides access to online content.
- Building a native Mac app proved harder than expected, so the project ships as a local web app instead.
- Tool-calling support is uneven across models in Ollama and other local runtimes (sketched after this list).
- Code runs in an isolated VM created with Apple's `container` tool.
- A Jupyter server inside that VM is exposed over MCP, so the assistant can execute the code it writes (sketched after this list).
- A headless browser (Playwright) handles online information retrieval (sketched after this list).
- The final setup: a local LLM, sandboxed code execution, and a headless browser.
- The underlying shift in philosophy: local compute and privacy, with no dependency on the cloud.
- Feedback and contributions are welcome on coderunner-ui on GitHub.
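
On the tool-calling point: whether a local model can drive a sandbox or a browser at all depends on it emitting structured tool calls. Below is a minimal sketch of what that looks like with the `ollama` Python client; the model name and the `execute_python` tool schema are illustrative assumptions, not the article's exact configuration.

```python
# Minimal sketch: probing whether a local model emits structured tool calls
# through Ollama. The model name and tool schema are illustrative assumptions.
import ollama

tools = [{
    "type": "function",
    "function": {
        "name": "execute_python",  # hypothetical sandbox tool
        "description": "Run Python code in the local sandbox and return its output.",
        "parameters": {
            "type": "object",
            "properties": {"code": {"type": "string"}},
            "required": ["code"],
        },
    },
}]

response = ollama.chat(
    model="qwen2.5:7b",  # any locally pulled model; tool-calling support varies
    messages=[{"role": "user", "content": "Compute the first 10 Fibonacci numbers."}],
    tools=tools,
)

message = response["message"]
# Models without tool-calling support just answer in prose; models with it
# return structured calls that the host app can route to the sandbox.
print(message.get("tool_calls") or message["content"])
```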
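
For the Jupyter-over-MCP idea, here is a rough sketch using the official `mcp` Python SDK (FastMCP) together with `jupyter_client`. The server name and tool name are assumptions, and unlike the real project this sketch does not run the kernel inside the isolated VM; it only shows the shape of the integration.

```python
# Minimal sketch (not the coderunner implementation): exposing a Jupyter
# kernel as an MCP tool so a local model can execute the code it generates.
from queue import Empty

from jupyter_client.manager import KernelManager
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("jupyter-sandbox")  # hypothetical server name

km = KernelManager(kernel_name="python3")  # in the real setup, inside the VM
km.start_kernel()
kc = km.client()
kc.start_channels()

@mcp.tool()
def execute_python(code: str) -> str:
    """Run AI-generated Python in the sandboxed kernel and return its output."""
    kc.execute(code)
    outputs = []
    while True:
        try:
            msg = kc.get_iopub_msg(timeout=10)
        except Empty:
            break
        msg_type = msg["msg_type"]
        content = msg["content"]
        if msg_type == "stream":
            outputs.append(content["text"])
        elif msg_type in ("execute_result", "display_data"):
            outputs.append(content["data"].get("text/plain", ""))
        elif msg_type == "error":
            outputs.append("\n".join(content["traceback"]))
        elif msg_type == "status" and content["execution_state"] == "idle":
            break  # kernel finished processing this request
    return "".join(outputs)

if __name__ == "__main__":
    mcp.run()  # stdio transport by default; the chat UI connects as an MCP client
```

The chat UI then talks to this process as an MCP client, so any model that can emit tool calls can run code through it.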
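
And for the headless-browser piece, a generic sketch of fetching a page's visible text with Playwright; the URL and the choice to return the `body` text are illustrative, not coderunner-ui's actual code.

```python
# Minimal sketch: fetching page text with headless Chromium via Playwright.
from playwright.sync_api import sync_playwright

def fetch_page_text(url: str) -> str:
    """Load a URL in headless Chromium and return the visible body text."""
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=True)
        page = browser.new_page()
        page.goto(url, wait_until="domcontentloaded")
        text = page.inner_text("body")
        browser.close()
    return text

if __name__ == "__main__":
    print(fetch_page_text("https://example.com")[:500])
```

Wrapped in an MCP tool the same way as the Jupyter example, this gives the local model a controlled path to the web without any cloud dependency.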