Launching today

OpenHuman
An open source AI harness built with the human in mind
554 followers
90% of people who try AI agents give up. Three reasons: memory that resets every session, your data sitting in someone else's cloud, and a terminal just to get started. Real blockers. OpenHuman fixes all of it. Local-first, privacy-first. It remembers everything about you and actually gets smarter the more you use it. Every feature lives in one simple interface. Fully open source. One-click setup. P.S. The product is in beta, so expect bugs, but we're building and shipping fast.








OpenHuman
Heya! I'm Steven, founder of TinyHumans.
A few months ago I tried to set up an open-source AI agent for my dad. Three hours later and after wrestling with API keys, YAML and a terminal he had never opened in his life, we both gave up.
That's when I realised that every powerful AI agent today is built for the 0.01% who can spin up their own runtime. The other 99.99% are watching the agent revolution from the sidelines.
So we built OpenHuman.
OpenHuman is a super-intelligent AI agent that anyone can use. Two-minute setup. No config files. A simple GUI you'd hand to your parents and they'd actually figure out. Connect Gmail, Slack, Telegram, Notion, and GitHub in one click and it just works.
A few things I'm proud of:
* It runs locally. Encrypted vault. We never sell your data.
* It never forgets. Real memory across sessions, not session-only.
* It's open source under the GNU GPL.
* It's free to start: no engineer, no GPU, no $6k setup bill.
Early signal has been wild: 8000+ GitHub stars, 5000+ users in the first 7 days, and 150% week-over-week growth.
Today, we're opening it up to all of you. Note that it's still in beta, so if you find bugs, you're super early. Feel free to report them to me on our Discord.
I'll be in the comments all day. You can break it, roast it, tell me what's missing, ask anything. We ship fixes live.
Would love your feedback!
@enamakel @kunal_karani this seems to be an interesting product. looking forward to experimenting with it in my daily life!
OpenHuman
@kunal_karani @dipakgr yeah go for it. it should be super simple. let me know what you think about it!
@enamakel Hi Steven, awesome product and congrats on the launch. Too lazy to see the GH now but what local slm do you use to orchestrate and memorize before invoking LLMs?
OpenHuman
@zolani_matebese Hey Zolani, great question. It's a 1B Gemma 3 model from Google, which runs on most laptops. You can choose the local model you'd like to use on the settings page.
You can also select local LLMs for all the inference you want to do. It's possible to run OpenHuman completely local.
OpenHuman
@ferdi_sigona Hey Ferdi, thanks for the great question.
Short version: chunks carry temporal metadata plus a source weight, and at retrieval the agent reasons over both rather than relying purely on similarity.
Recency is one signal but not the only one. So a confirmation from your finance team about a contract amount outweighs a casual Slack message from three weeks ago, even if the Slack message is more recent.
Source authority is computed per connector.
We also track explicit revisions where a later document or message overrides an earlier claim so the canonical state is the latest non-contradicted assertion rather than just the latest chunk.
Recall stays fast because we keep a rolling summary tree per entity that gets updated incrementally rather than recomputed. Long-horizon contradiction handling is one of the things we're actively improving.
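The scoring described above can be sketched roughly like this. This is a minimal illustration, not OpenHuman's actual implementation: the `Chunk` fields, the exponential recency decay, and the blend weights are all assumptions made up for the example.

```python
from dataclasses import dataclass
import math
import time

@dataclass
class Chunk:
    text: str
    embedding_sim: float   # cosine similarity to the query (0..1)
    created_at: float      # unix timestamp from the chunk's temporal metadata
    source_weight: float   # per-connector authority (hypothetical values)
    superseded: bool = False  # set when a later revision overrides this claim

def score(chunk: Chunk, now: float, half_life_days: float = 30.0) -> float:
    """Blend similarity, recency, and source authority into one retrieval score.

    Recency decays exponentially but is only one signal: a high-authority
    source can outrank a newer, low-authority one.
    """
    if chunk.superseded:
        return 0.0  # canonical state = latest non-contradicted assertion
    age_days = (now - chunk.created_at) / 86400
    recency = math.exp(-math.log(2) * age_days / half_life_days)
    return 0.5 * chunk.embedding_sim + 0.2 * recency + 0.3 * chunk.source_weight

now = time.time()
# A three-week-old confirmation from finance vs. a fresh casual Slack message
finance_email = Chunk("contract amount confirmed: $48k", 0.82, now - 21 * 86400, 0.9)
casual_slack = Chunk("think the contract was like 50k?", 0.80, now - 2 * 86400, 0.3)
assert score(finance_email, now) > score(casual_slack, now)
```

With these (invented) weights, the older finance email still wins because its source authority dominates the Slack message's recency advantage, which is the behavior the reply describes.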
The privacy angle is a big reason to go local-first, but the persistent memory is what actually makes it usable day to day. I am tired of re-explaining my tech stack and project goals every single time I open a new session.
Since you mentioned that it is in beta and remembers everything, I want to know how you handle context window limits or database bloat over time. Does it start getting sluggish once it knows too much about my work history, or is there some kind of automated cleanup?
OpenHuman
@ritikgupta_01 Great question! That re-explaining loop is exactly why we built the Memory Tree the way we did. We don't dump your entire work history into the prompt. The Memory Tree canonicalizes everything into chunks (3k tokens max), scores them by relevance, and folds them into summary trees: per-source, per-topic, per-day. When the agent needs context, it retrieves the most relevant chunks and summaries, not a raw dump. TokenJuice compacts verbose tool output before it ever hits the model, so even sweeping months of email stays cheap.
On the sluggishness front: the retrieval layer is local SQLite with indexed chunks, so the agent isn't scanning a massive log every time. It pulls what matters for the task at hand.
So yes, it remembers everything, but it doesn't remember everything all at once in the expensive way. It remembers like a human does: details for what matters, summaries for the rest.
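The "local SQLite with indexed chunks" retrieval pattern can be sketched with SQLite's built-in FTS5 full-text index. The schema and rows here are invented for illustration; OpenHuman's actual tables, column names, and ranking are not documented in this thread.

```python
import sqlite3

# Hypothetical schema: chunks (capped at 3k tokens) indexed locally with FTS5,
# tagged per-source / per-topic / per-day as the reply describes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE chunks USING fts5(source, topic, day, body)")

rows = [
    ("gmail", "contract", "2024-05-01", "Finance confirmed contract amount of 48k."),
    ("slack", "standup", "2024-05-20", "Daily standup notes, nothing new today."),
    ("notion", "contract", "2024-05-18", "Contract draft v2 uploaded for review."),
]
conn.executemany("INSERT INTO chunks VALUES (?, ?, ?, ?)", rows)

# Retrieval pulls only the chunks relevant to the task, never the raw log.
hits = conn.execute(
    "SELECT source, body FROM chunks WHERE chunks MATCH ? ORDER BY rank LIMIT 2",
    ("contract",),
).fetchall()
for source, body in hits:
    print(source, "->", body)
```

The point of the sketch: the query touches only the indexed matches, so the cost of answering scales with what's relevant, not with the total size of the memory store.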
How did this not exist before?! Love the idea, want to give it a try even though I don't necessarily need an agent in my life.
Something I've seen that feels related: the concept of moving our personal data from the custody of vendors (social, doctors, marketers, employers etc) back to us - the owners. Eg a graph of all your data with granular sharing and privacy policies that you directly control. There's an obvious privacy aspect to this, but it's also natural to imagine how agents can thrive with (controlled) access to it, as an extension of the permanent memory you've built.
OpenHuman
@aleksandr_rakitin Right? That's the reaction we keep getting. And honestly, the "I don't necessarily need an agent" thing is fair. Most people don't need another chatbot. But here's the shift: once an agent actually remembers your world, your preferences, your projects, your people, you stop thinking of it as a tool you use. It becomes context you live inside. You realize how much mental overhead you were carrying just to keep all your apps and threads straight.
That's why we built the Memory Tree into a local Obsidian vault with plain .md files. The graph of your data shouldn't live in Notion's cloud, or Google's servers, or Anthropic's memory. It should live on your machine, in formats you can read, edit, export, or delete. Granular control by default.
The gap between powerful agent and usable by normal people is still massive and most projects only solve the first half.
OpenHuman
@bruce_warren Hey Bruce, nice to see your comment.
You just described the entire reason we built this.
We've spent more engineering hours on the installer, defaults, error messages, and recovery than on the LLM layer, so I completely agree: the gap between "powerful agent" and "usable by normal people" is where most projects stall.
That's why I'd say this one is different: anybody can use it, because the interface is simple and the setup is easy.
can it use my skills and maybe commands? Tool calls? Is it good for coding?
OpenHuman
@robert_douglass yep. 118+ integrations (Gmail, Slack, Notion, etc.), full tool calling with chaining plus retries, and a built-in code sandbox for writing, running, and debugging. it actually does stuff, not just chats about it. :)
GraphBit
How does it handle the hallucination thing?
OpenHuman
@imrulkaayes Good question.
Three layers here:
First, OpenHuman grounds its answers in your actual data (emails, Slack, Notion, etc.) rather than generating from training memory alone.
Second, every memory chunk has a deterministic ID and we can show you the exact source for any claim the agent makes.
Third, when the agent isn't confident, it tells you and asks rather than guessing. We're not pretending hallucinations are solved, but grounding in your real corpus plus auditable retrieval cuts the worst of it.
Happy to go deeper if useful.
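The "deterministic ID per memory chunk" idea from the second point can be sketched as a content hash: the same source text always maps to the same ID, so every claim can be traced back to an exact chunk. The hashing scheme and the `grounded_in` field are assumptions for illustration, not OpenHuman's documented format.

```python
import hashlib

def chunk_id(source: str, text: str) -> str:
    """Deterministic ID: hashing (source, text) gives a stable identifier,
    so an answer can cite the exact chunk it was grounded in."""
    return hashlib.sha256(f"{source}\x00{text}".encode()).hexdigest()[:16]

# Hypothetical chunk ingested from a connector
chunk = {"source": "gmail/msg-4821", "text": "Invoice #77 is due on June 3."}
cid = chunk_id(chunk["source"], chunk["text"])

# An answer carries the IDs of the chunks it was grounded in,
# making the retrieval auditable after the fact.
answer = {"text": "Invoice #77 is due June 3.", "grounded_in": [cid]}
assert chunk_id(chunk["source"], chunk["text"]) == cid  # stable across runs
```

Because the ID is derived purely from content, re-ingesting the same email yields the same ID, and any edit to the source produces a different one, which is what makes the audit trail trustworthy.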