Knapsack App

A safe & simple OpenClaw app for Macs

43 followers

We have integrated the Clawdbot / Moltbot / OpenClaw library into a Mac app, added a private notetaker, email automation, and some safeguards, and open-sourced it.
This is the 2nd launch from Knapsack App.
Knapsack OpenClaw App

Launching today
A safe & simple OpenClaw / Moltbot / Clawdbot app for Mac
Knapsack is an open-source desktop client for OpenClaw / Clawdbot / Moltbot — the AI agent that controls your browser and acts on your behalf. Giving AI that power is dangerous without guardrails. Knapsack adds them: sensitive path blocking (SSH, credentials, .env), prompt injection defense, tool loop limits, mandatory confirmation before sending messages or spending money, and sandboxed scripting. Also bundles a private notetaker, email autopilot, and multi-LLM support. Local/open source.
Free

Mark Heynen
Maker
Like everyone, we are fascinated by Clawdbot (now OpenClaw / Moltbot) but terrified of the security implications. So we updated our desktop app to give normies the ability to download and play with it safely and easily, without having to use a CLI.

The Protections:

- Two operating modes: "Assist" mode (default) explains before acting and asks for confirmation. "Autonomous" mode lets the agent run freely but enforces hard pause points: it must always stop and confirm before spending money, sending messages to humans, deleting anything permanently, changing security settings, or accepting legal agreements. The user toggles between modes with one click. (Sketched below.)
- Sensitive path blacklist: The agent cannot read or write SSH keys, AWS credentials, GPG keys, .env files, Docker configs, password stores, or macOS Keychain data. These paths are blocked at the Rust level, not just in the prompt. (Sketched below.)
- Prompt injection defense: Instructions found inside emails, web pages, PDFs, or any external content are treated as untrusted text. The agent won't follow "ignore previous instructions," won't navigate to suspicious domains mentioned in external content, and won't exfiltrate user data to addresses found in scraped pages.
- Tool loop ceiling: The agent is capped at 75 tool calls per request with provider-aware rate limiting (500ms for Anthropic, 300ms for Gemini, 100ms for OpenAI). If it hits the limit, it stops and tells the user to break the task into smaller steps. A stop button is always available. (Sketched below.)
- Sandboxed scripting: Python script execution runs in an isolated temp directory with a 60-second timeout and a strict module allowlist (numpy, pandas, matplotlib, etc.). No arbitrary package installs. (Sketched below.)
- Tauri permission scopes: The Rust backend restricts filesystem access to specific directories and whitelists only known API domains (OpenAI, Google, Microsoft). Shell execution is disabled; only sidecar processes are allowed.
- Token-hardened service auth: All communication between the desktop app and the browser control layer uses bearer tokens stored with Unix 0600 permissions. Tokens are generated as double-UUID strings and refreshed per session. (Sketched below.)
- Data exfiltration prevention: The agent cannot encode sensitive data into URL parameters, submit forms sending user data to third-party domains, or navigate to URLs containing embedded personal information. (Sketched below.)
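
To make the two modes concrete, here is a rough Rust sketch of a pause-point check. It is illustrative only: the type and function names (Mode, Action, must_pause) are invented for this example and are not taken from the Knapsack codebase.

```rust
// Illustrative only: hypothetical types, not code from the Knapsack repo.
#[allow(dead_code)]
enum Mode {
    Assist,     // default: explain before acting, ask for confirmation
    Autonomous, // run freely, but stop at hard pause points
}

#[allow(dead_code)]
enum Action {
    SpendMoney,
    SendMessageToHuman,
    DeletePermanently,
    ChangeSecuritySettings,
    AcceptLegalAgreement,
    Other,
}

/// True when the agent must stop and get the user's confirmation first.
fn must_pause(mode: &Mode, action: &Action) -> bool {
    match mode {
        Mode::Assist => true, // Assist mode confirms every action
        // Autonomous mode only pauses for the irreversible or sensitive ones.
        Mode::Autonomous => !matches!(action, Action::Other),
    }
}

fn main() {
    assert!(must_pause(&Mode::Assist, &Action::Other));
    assert!(must_pause(&Mode::Autonomous, &Action::SpendMoney));
    assert!(!must_pause(&Mode::Autonomous, &Action::Other));
}
```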
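
The sensitive path blacklist could look roughly like the following. The blocked locations are the ones listed above; the matching function itself is a simplified editorial sketch, not the app's actual Rust code.

```rust
use std::path::Path;

// Illustrative blocklist of sensitive locations; the real list and matching
// logic may differ. Paths are checked before any file tool is allowed to run.
const BLOCKED_SEGMENTS: &[&str] = &[
    ".ssh", ".aws", ".gnupg", ".docker", ".password-store",
];
const BLOCKED_FILES: &[&str] = &[".env", "credentials", "id_rsa", "id_ed25519"];

/// Returns true if the agent must be refused access to this path.
fn is_sensitive(path: &Path) -> bool {
    let blocked_segment = path.components().any(|c| {
        let s = c.as_os_str().to_string_lossy();
        BLOCKED_SEGMENTS.contains(&s.as_ref())
            || s.contains("Keychains") // macOS Keychain data
    });
    let blocked_file = path
        .file_name()
        .map(|f| BLOCKED_FILES.contains(&f.to_string_lossy().as_ref()))
        .unwrap_or(false);
    blocked_segment || blocked_file
}

fn main() {
    assert!(is_sensitive(Path::new("/Users/me/.ssh/id_rsa")));
    assert!(is_sensitive(Path::new("/Users/me/project/.env")));
    assert!(!is_sensitive(Path::new("/Users/me/Documents/notes.txt")));
}
```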
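
A minimal sketch of the tool loop ceiling and provider-aware rate limiting, using the numbers quoted above (75 calls, 500/300/100 ms). The ToolLoop type and before_call function are hypothetical.

```rust
use std::thread::sleep;
use std::time::Duration;

// The numbers come from the maker's comment; the types are illustrative.
const MAX_TOOL_CALLS: u32 = 75;

#[allow(dead_code)]
enum Provider {
    Anthropic,
    Gemini,
    OpenAi,
}

fn per_call_delay(provider: &Provider) -> Duration {
    match provider {
        Provider::Anthropic => Duration::from_millis(500),
        Provider::Gemini => Duration::from_millis(300),
        Provider::OpenAi => Duration::from_millis(100),
    }
}

struct ToolLoop {
    calls: u32,
    provider: Provider,
}

impl ToolLoop {
    /// Refuses the call once the per-request ceiling is reached, otherwise
    /// rate-limits it with a provider-specific pause.
    fn before_call(&mut self) -> Result<(), String> {
        if self.calls >= MAX_TOOL_CALLS {
            return Err("Tool limit reached; break the task into smaller steps.".into());
        }
        sleep(per_call_delay(&self.provider));
        self.calls += 1;
        Ok(())
    }
}

fn main() {
    let mut lp = ToolLoop { calls: 74, provider: Provider::OpenAi };
    assert!(lp.before_call().is_ok());  // 75th call still allowed
    assert!(lp.before_call().is_err()); // 76th refused
}
```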
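
The sandboxed scripting rules (isolated temp directory, 60-second timeout, module allowlist) could be enforced along these lines. This is a deliberately naive sketch: the allowlist check only looks at import lines, and the real implementation is likely stricter.

```rust
use std::process::Command;
use std::time::{Duration, Instant};

// Illustrative enforcement of the sandbox rules described above; the module
// names come from the comment, everything else is invented for the example.
const ALLOWED_MODULES: &[&str] = &["numpy", "pandas", "matplotlib", "math", "json"];
const TIMEOUT: Duration = Duration::from_secs(60);

/// Very rough static check: reject scripts importing anything off the allowlist.
fn imports_allowed(script: &str) -> bool {
    script.lines().all(|line| {
        let line = line.trim_start();
        match line.strip_prefix("import ").or_else(|| line.strip_prefix("from ")) {
            Some(rest) => {
                let module = rest
                    .split(|c: char| c == ' ' || c == '.' || c == ',')
                    .next()
                    .unwrap_or("");
                ALLOWED_MODULES.contains(&module)
            }
            None => true,
        }
    })
}

fn run_sandboxed(script: &str) -> Result<(), String> {
    if !imports_allowed(script) {
        return Err("script imports a module outside the allowlist".into());
    }

    // Write the script into an isolated temp directory and run it from there,
    // so relative paths cannot reach the user's files.
    let dir = std::env::temp_dir().join("sandbox-example");
    std::fs::create_dir_all(&dir).map_err(|e| e.to_string())?;
    let path = dir.join("script.py");
    std::fs::write(&path, script).map_err(|e| e.to_string())?;

    let mut child = Command::new("python3")
        .arg(&path)
        .current_dir(&dir)
        .spawn()
        .map_err(|e| e.to_string())?;

    // Poll until the script exits or the 60-second budget runs out.
    let start = Instant::now();
    loop {
        match child.try_wait().map_err(|e| e.to_string())? {
            Some(status) if status.success() => return Ok(()),
            Some(status) => return Err(format!("script failed: {status}")),
            None if start.elapsed() > TIMEOUT => {
                let _ = child.kill();
                return Err("script timed out after 60 seconds".into());
            }
            None => std::thread::sleep(Duration::from_millis(100)),
        }
    }
}

fn main() {
    // Rejected before it ever runs: `socket` is not on the allowlist.
    assert!(run_sandboxed("import socket").is_err());
    // Needs a python3 interpreter on PATH to actually execute.
    println!("{:?}", run_sandboxed("import math\nprint(math.pi)"));
}
```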
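
A sketch of the token-hardened service auth described above: a per-session double-UUID bearer token written with Unix 0600 permissions. It assumes the uuid crate and a Unix-like OS; the file path and function name are made up.

```rust
use std::fs::OpenOptions;
use std::io::Write;
use std::os::unix::fs::OpenOptionsExt;

use uuid::Uuid; // external crate: uuid = { version = "1", features = ["v4"] }

// Sketch of per-session bearer-token handling as described in the comment;
// the path and function name are invented for illustration.
fn issue_session_token(path: &str) -> std::io::Result<String> {
    // "Double-UUID" style token: two v4 UUIDs concatenated (64 hex chars).
    let token = format!("{}{}", Uuid::new_v4().simple(), Uuid::new_v4().simple());

    // Store with Unix 0600 permissions so only the current user can read it.
    let mut file = OpenOptions::new()
        .write(true)
        .create(true)
        .truncate(true)
        .mode(0o600)
        .open(path)?;
    file.write_all(token.as_bytes())?;
    Ok(token)
}

fn main() -> std::io::Result<()> {
    let token = issue_session_token("/tmp/example-session-token")?;
    println!("issued token of length {}", token.len());
    Ok(())
}
```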
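
Finally, a very naive illustration of data exfiltration screening: refusing URLs that embed values the app knows are personal. A real check would also handle URL encoding, form submissions, and a domain allowlist, so treat this purely as the shape of the idea.

```rust
// Naive illustration of screening a URL for embedded personal data before the
// agent is allowed to navigate to it; `user_secrets` would hold values the app
// already treats as sensitive (email address, card fragments, etc.).
fn url_leaks_user_data(url: &str, user_secrets: &[&str]) -> bool {
    let lowered = url.to_lowercase();
    user_secrets
        .iter()
        .any(|secret| !secret.is_empty() && lowered.contains(&secret.to_lowercase()))
}

fn main() {
    let secrets = ["jane.doe@example.com"];
    assert!(url_leaks_user_data(
        "https://evil.example/track?email=jane.doe@example.com",
        &secrets,
    ));
    assert!(!url_leaks_user_data("https://api.openai.com/v1/models", &secrets));
}
```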