Launched this week
Jootle
The AI-Native Operations Platform
47 followers
Jootle gives you a private AI instance that actually does things: manages your projects, monitors your inbox, sends messages, runs playbooks, and builds tools. Not a chatbot. A member of your team that shows up every day.

@chrismessina Good question. There's real surface overlap (multi-channel agent, skills/tools, the AI can actually do things and not just chat), but the design intent is pretty different.
OpenClaw is a brilliant self-hosted gateway. You bring the box, you bring the model, you bring the runtime, and you get a personal autonomous agent with eyes and hands. Powerful, fast-moving project, and the community momentum is real.
Jootle is a managed workflow platform. We provision and operate a dedicated private VPS for each customer, and the agent lives inside a lot more scaffolding. A few of the things that scaffolding gives you:
Structured work, not freeform agent runs. Every piece of work is a Project or a Task with state, history, and dependencies. The agent doesn't just go off and "do stuff"; it executes against playbooks (we ship 60+ seeded ones across HR, finance, marketing, dev work, household ops, and so on) that define the actual process. When a step fails or the situation drifts from the playbook, the playbook engine self-heals: it revises the plan, asks for clarification, or escalates to a human approval gate if the work crosses a sensitivity threshold. You always know what the agent is doing and why.
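To make that concrete, here's a rough sketch of the kind of structure a playbook step carries; the class names, fields, and threshold are illustrative, not our actual schema:

```python
# Rough sketch only: names, fields, and the threshold are illustrative.
from dataclasses import dataclass, field
from enum import Enum

class StepState(Enum):
    PENDING = "pending"
    RUNNING = "running"
    DONE = "done"
    FAILED = "failed"
    AWAITING_APPROVAL = "awaiting_approval"

@dataclass
class PlaybookStep:
    name: str
    instructions: str                      # what the agent should do at this step
    depends_on: list[str] = field(default_factory=list)
    sensitivity: float = 0.0               # 0..1, how risky the action is
    state: StepState = StepState.PENDING

APPROVAL_THRESHOLD = 0.7                   # illustrative sensitivity cutoff

def start(step: PlaybookStep) -> StepState:
    """Start a step, escalating sensitive work to a human approval gate."""
    step.state = (StepState.AWAITING_APPROVAL
                  if step.sensitivity >= APPROVAL_THRESHOLD
                  else StepState.RUNNING)
    return step.state
```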
Production Studio. This is one of the surfaces I'm most excited about. When you assign work to the agent through the Studio, you don't just get back a wall of text and hope it's right. You can revise the output line by line. You can mark specific spots on an image and tell the agent "change this here." You can scope the next pass to just one paragraph, just one region of a design, just one row of a table. It turns iterative collaboration with the AI into something that feels closer to working with a junior teammate than prompting a chatbot. The agent does the work, you stay in control at whatever precision you actually need, and you never have to throw out a whole result just to fix a small piece of it.
Hosted Sites. The agent isn't just generating text and code; it can build full websites and host them for you. During iterative development the site lives in a staging environment behind basic auth on your instance, so you can preview, shape, and iterate without exposing rough drafts to the world. When you're ready you point your custom domain at the instance and your assistant pushes it live. After launch the assistant keeps owning the site: monitoring it, making content and design updates as you ask, adding pages, fixing things you flag, even watching for issues. It's a long-running relationship between you, the agent, and your live site, rather than a one-off "generate me a landing page" deliverable.
Audit and governance. Full audit trail on every action and decision the agent takes. Human-in-the-loop approval gates on anything risky (sends, payments, irreversible changes, anything you've configured as needing sign-off). Per-instance LLM keys (BYOK), so you control your own AI provider and budget. Visibility into prompts, tokens, costs, and the agent's reasoning at the action level.
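For a feel of what action-level visibility means, something roughly like this gets recorded for every action (field names and values here are purely illustrative, not the real schema):

```python
# Purely illustrative field names and values.
audit_entry = {
    "instance_id": "inst_abc123",          # placeholder identifiers
    "agent": "project_manager",
    "action": "send_email",
    "requires_approval": True,             # configured as needing sign-off
    "approved_by": None,                   # filled in once a human approves
    "llm_provider": "your-byok-provider",  # per-instance BYOK key
    "prompt_tokens": 1840,
    "completion_tokens": 312,
    "estimated_cost_usd": 0.021,
    "reasoning": "Drafting a follow-up because the task is blocked on the client.",
    "created_at": "2025-01-01T12:00:00Z",
}
```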
Knowledge graph. The instance learns your people, projects, relationships, preferences, and decisions over time, and uses that context to route messages, surface relevant history, and avoid asking you the same question twice. You mention "Jason" in passing on Telegram and the assistant knows which Jason, which project he's tied to, what you last talked about, and what he's waiting on from you.
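A toy illustration of the kind of lookup that makes possible; the real graph storage and traversal are richer, and these structures are just for show:

```python
# Toy illustration only; not the real graph model.
entities = {
    "person:jason": {"type": "person", "name": "Jason"},
    "project:site-redesign": {"type": "project", "name": "Site redesign"},
}
edges = [
    ("person:jason", "works_on", "project:site-redesign"),
    ("person:jason", "waiting_on", "copy review from you"),
]

def resolve_mention(name: str):
    """Resolve a passing mention to an entity plus everything connected to it."""
    hits = [key for key, e in entities.items() if e["name"].lower() == name.lower()]
    related = [(s, rel, t) for (s, rel, t) in edges if s in hits or t in hits]
    return hits, related

print(resolve_mention("Jason"))
# (['person:jason'], [('person:jason', 'works_on', 'project:site-redesign'),
#                     ('person:jason', 'waiting_on', 'copy review from you')])
```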
Multi-channel that actually routes. Email, Telegram, SMS, and web chat are all wired up. The interesting part is that incoming messages are semantically routed to the right project automatically. You can text a thought on the go and the assistant drops it into the right place in your active workspace, rather than dumping everything into a single global inbox you then have to sort.
Toolkit library. An open marketplace for installable agent capabilities. Customers and partners can publish toolkits and others install them with one click. A toolkit can ship custom agents, seeded knowledge, goals, custom entity types, queryable documents, even whole pre-defined governance frameworks. Think VS Code extensions, but for the AI's actual operating capabilities, not just the UI around it.
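As a sketch, a toolkit manifest could look roughly like this; the keys and packaging format here are illustrative, not the actual spec:

```python
# Hypothetical manifest showing the kinds of things a toolkit can ship.
toolkit_manifest = {
    "name": "hr-onboarding",
    "version": "1.0.0",
    "agents": ["hr_specialist"],                  # custom agents
    "playbooks": ["new_hire_onboarding"],         # seeded process definitions
    "entity_types": ["employee", "policy"],       # custom entity types
    "documents": ["employee_handbook.pdf"],       # queryable knowledge
    "goals": ["every new hire fully onboarded within 5 business days"],
    "governance": {"approval_required": ["offer_letter_send"]},
}
```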
Specialist agents and Goals. Out of the box you have around 28 specialist agents (project manager, accounting, marketing, software dev, research, household ops, plenty more), each with its own scope and tooling. Goals are first-class objects the instance carries with it, giving each instance personality and direction so it can advocate for the things you care about, not just react to messages.
Lessons. The instance writes lessons from its own mistakes and from your corrections, and those lessons feed back into the agent's behavior. The longer you use your Jootle the better it gets at the way you specifically work, without you having to maintain a giant system prompt.
The platform layer. Everything above describes Jootle as a product an end user buys. There's a separate layer aimed at AI integrators, consultancies, agencies, and enterprises that want to build their own AI application on top of Jootle and take it to market under their own brand.

The shape: you customize the platform (branding, product name, copy, plan catalog, optionally strip features you don't want, and add your own capabilities via custom toolkits), connect your Stripe account, and you get a fully white-labeled customer storefront on your own domain, with subscription billing, customer management, multi-instance provisioning, and the whole supporting cast of running a real SaaS already wired up. You focus on the AI application you're uniquely positioned to build (vertical expertise, compliance frameworks, industry workflows, governance frameworks, niche operations, whatever your edge is) and you don't have to think about the orchestration layer, the multi-tenancy, the billing plumbing, the deployment pipeline, the customer support tooling, or any of the other stuff that usually eats six months before you ship anything to a real customer.

We use Stripe Connect Express for payouts, so partners get paid directly into their own Stripe account on every customer subscription and Jootle takes an application fee. No revenue-split spreadsheets, no monthly reconciliation, no separate billing system to maintain.

This is a fit for small consultancies wanting to package their expertise into a recurring-revenue product, mid-sized agencies adding a SaaS line of business without a six-month build-out, and large enterprises that want to deploy a private AI platform internally while keeping branding and control on their side. It's application-based; we work closely with each partner to set them up properly.
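On the billing side, this is the standard Stripe Connect destination-charge pattern; the sketch below uses placeholder identifiers and an illustrative fee percentage:

```python
# Sketch of the Stripe Connect destination-charge flow: the end customer
# subscribes on the partner's storefront, funds land in the partner's Express
# account, and the platform keeps an application fee. All IDs are placeholders
# and the fee percentage is illustrative.
import stripe

stripe.api_key = "sk_live_..."  # platform secret key

subscription = stripe.Subscription.create(
    customer="cus_partner_end_customer",
    items=[{"price": "price_from_partner_plan_catalog"}],
    application_fee_percent=15,                              # illustrative fee
    transfer_data={"destination": "acct_partner_express"},   # partner payout
)
```

The application fee comes out of each invoice automatically, which is what removes the reconciliation step on both sides.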
So the rough cut: if you want a personal AI assistant on your own server and you're happy to operate it yourself, OpenClaw is the right shape. If you want AI running actual business or personal operations end to end with structure, visibility, governance, and a workspace where you can shape the agent's work at whatever precision you need, Jootle is the product. And if you're an AI integrator, agency, or enterprise looking to build and ship your own AI application without having to build all the infrastructure underneath it, Jootle is also the platform.
Happy to dig into any of these if you want more detail on a specific piece. -Shane Grant
Love the 'always-on' concept for families and teams. Bridging the gap between a business tool and a household organizer is a tough needle to thread. How do you handle data partitioning to keep work and personal assistant contexts separate?
@rivra_dev I'll answer with how I actually use it plus what's happening under the hood.
Most of the time I'm just on the dashboard talking to my assistant about whatever is on my mind: household stuff, business stuff, all of it, in the same chat. I don't pre-scope by project, I don't switch tabs, I don't think about it. The assistant decides where a conversation belongs and quietly attaches it to the right project or program. If it isn't sure which entity I mean, it just asks.
The mechanism underneath is a six-phase routing cascade that runs on every incoming message. Cheapest and most certain phases go first:
Phase 1: Explicit context. If the chat session already has a project_id (you started the conversation from a project page), that wins immediately.
Phase 2: Recent conversation with anaphora detection. If you said "make that a doc" or "convert it" or "continue from there" recently, the router knows "that" refers to the project you were just in. We also catch the negative case: "this is unrelated" or "nothing to do with that" correctly does not fire.
Phase 3: Program mention. Word-boundary regex match against active program names. "Get groceries for the Smith family reunion" routes to that program even if no project has been spun up for it yet.
Phase 4: Knowledge graph traversal. The instance maintains a knowledge graph of entities (people, companies, tools, locations). Names and references in the message get extracted, looked up in the graph, and the router traverses edges to find which projects are connected to those entities.
Phase 5: Semantic search via pgvector. Embeddings over both projects and tasks. A matching task inherits its parent project's ID, with a slight confidence discount for the indirect path.
Phase 6: Ambiguity resolution. Phases 4 and 5 produce ranked candidates with scores. If the top two are within a small gap of each other and both above the ask threshold, the router builds a disambiguation prompt: "I found two projects that might match. Which one did you mean? 1. ... 2. ..." If all candidates are below threshold the conversation stays standalone, because guessing wrong is worse than not guessing.
Every decision (chosen project, candidates considered, scores, whether it was ambiguous, which channel the message came in on) gets logged to a routing_decisions table. That's the feedback loop we use to tune thresholds and catch where routing goes wrong.
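If it helps, here's a condensed sketch of the cascade; the thresholds, regexes, and data shapes are illustrative rather than the production code:

```python
# Condensed sketch of the routing cascade; everything here is illustrative.
import re

ASK_THRESHOLD = 0.55   # below this, the conversation stays standalone
AMBIGUITY_GAP = 0.08   # top two candidates this close -> ask the user

ANAPHORA = re.compile(r"\b(that|it|there)\b", re.I)                 # crude stand-in
NEGATION = re.compile(r"\b(unrelated|nothing to do with)\b", re.I)  # negative case

def route(message, session, active_programs, candidates):
    """session: dict with optional project_id / recent_project_id.
    active_programs: {program_name: target_id}.
    candidates: [(target_id, score)] ranked by score, as produced by the
    knowledge graph traversal and pgvector semantic search (phases 4-5)."""
    # Phase 1: explicit context (chat opened from a project page) wins.
    if session.get("project_id"):
        return session["project_id"]
    # Phase 2: anaphora points back at the recent project, unless negated.
    if (ANAPHORA.search(message) and not NEGATION.search(message)
            and session.get("recent_project_id")):
        return session["recent_project_id"]
    # Phase 3: word-boundary match against active program names.
    for name, target_id in active_programs.items():
        if re.search(rf"\b{re.escape(name)}\b", message, re.I):
            return target_id
    # Phases 4-6: resolve the scored candidates.
    if not candidates or candidates[0][1] < ASK_THRESHOLD:
        return None                              # don't guess; stay standalone
    if len(candidates) > 1 and candidates[0][1] - candidates[1][1] < AMBIGUITY_GAP:
        return ("ask", candidates[:2])           # build a disambiguation prompt
    return candidates[0][0]
```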
So in practice the partitioning is an emergent property of the router doing the work. You talk, it sorts, it asks when it isn't sure.
Inside one instance the knowledge graph is intentionally shared across projects. We thought about partitioning the memory itself and decided against it, because the most useful behaviors come from cross-context awareness. I want the assistant to know about my kid's recital when planning business travel, or to flag a date conflict between a board meeting and a family event. Hard partitioning would lose all of that.
For users who genuinely need a hard wall (regulated industries, multiple unrelated clients, anything where context leakage is unacceptable), you can run two separate instances. Most people, including me, don't need that.
Happy to go deeper on any of the phases, the threshold tuning, or the disambiguation prompt design if it's useful.
-Shane Grant
Don't forget to check out our platform, where you can build and deliver your AI ideas!
https://jootle.com/platform/