Atomic - Turn scattered notes into a connected knowledge graph

by ken

Atomic is a self-hosted, AI-native knowledge base. Write notes, get a semantic graph. Ask questions, get cited answers from your own content. Auto-generates wiki articles as your knowledge grows. MCP server built-in for Claude/Cursor. Local-first. Open source. Everything you know, connected.

ken
Maker
📌
Hey PH! I'm Ken, the maker of Atomic 👋

I built this because every note-taking tool I tried either buried my ideas in folders or gave me AI features that felt bolted on. I wanted something where the AI was baked into the structure itself, not a chatbot sitting on top of my notes.

The feature I'm most proud of is wiki synthesis: Atomic reads all your atoms under a tag and generates a cited wiki article. Every claim links back to the source note. It's like having your own research assistant.

A few fun facts about Atomic:

- It's built in Rust + SQLite — the whole thing, including vector embeddings, lives in a single file
- There's a built-in MCP server so Claude, Cursor, and other AI tools can query and write to your KB directly
- It runs fully local with Ollama or any other OpenAI-compatible provider (LM Studio, LiteLLM, etc.). No data leaves your machine

Still early days, but the core loop is solid. Happy to answer anything — architecture questions, roadmap, weird use cases, all fair game. 🙏
Artem Kosilov

@kenforthewin92 This gets more interesting once agents start writing into the same place as humans. A lot of tools look good while the knowledge base is still clean. Then feeds pile in, agent notes pile in, and the real problem becomes whether the thing stays usable or turns into a smart junk drawer. How are you thinking about that part?

Lakshay Gupta

Coolest launch of the day fs! Btw, do you see Atomic as a note-taking tool, a personal knowledge OS, or something closer to a local-first AI assistant? Also, are you using it yourself? If so, is it making an impact on your daily tasks?

ken

@lak7 

Thanks! And great question — it's closer to a knowledge OS the way I use it.

I have RSS feeds (such as the Hacker News front page) piped directly into Atomic, so interesting articles get ingested and tagged automatically as they come in. And via MCP, my AI agents can read and write to the KB mid-task, so research they do during a session gets persisted and stays searchable later.

The mental model I've landed on: it's the long-term memory layer for both me and my agents. Notes, feeds, and agent outputs all flow in; semantic search and wiki synthesis make it queryable.

Still early but that loop — ingest → tag → synthesize → query — is where it gets powerful.
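Here's a toy sketch of the query step of that loop — bag-of-words cosine similarity standing in for real embeddings, so nothing like Atomic's actual internals, just the shape of "everything flows into one store, then you search it semantically":

```python
import math
from collections import Counter

def vectorize(text: str) -> Counter:
    """Toy stand-in for an embedding: a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Ingested atoms: notes, feed items, and agent outputs all land in one store
atoms = [
    "Rust ownership rules prevent data races at compile time",
    "SQLite FTS5 enables full text search in a single file",
    "RSS feeds can be ingested and tagged automatically",
]

def query(q: str, store: list[str]) -> str:
    """Return the stored atom most similar to the query."""
    qv = vectorize(q)
    return max(store, key=lambda doc: cosine(qv, vectorize(doc)))

print(query("full text search with SQLite", atoms))
```

Swap the bag-of-words vectors for real embedding vectors from Ollama (or any OpenAI-compatible provider) and this is the basic retrieval loop.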

Lakshay Gupta

@kenforthewin92 pretty cool man, will definitely give it a try!

RAJ KUMAR G

Congrats on the launch! Really love the focus on local-first and keeping everything in a single SQLite/Rust file. How does the performance hold up once the knowledge graph gets significantly large (e.g., thousands of atoms/notes)?

Sayanta Ghosh

Nice one @kenforthewin92 , couldn't resonate with the problem more. Though I'd love my Slack knowledge to be ingested as well.

Mihir Kanzariya

The wiki synthesis feature is the killer differentiator here imo. Every note tool I've used just gives you a folder of disconnected stuff. Having AI that actually reads across your notes and generates cited articles from them is something I haven't seen before.

Built in Rust + SQLite in a single file is also really smart for local-first. No Docker, no Postgres, just works. How big can the graph get before performance starts degrading? Asking because my notes tend to spiral into thousands of entries pretty fast lol.

ken

@mihir_kanzariya 

Totally agree on wiki synthesis, that's the feature that made everything click for me too. The "folder of disconnected stuff" problem is exactly what I was trying to solve.

On performance: the graph uses Sigma.js under the hood, which renders via WebGL — so it's GPU-accelerated and can handle 100k+ nodes without breaking a sweat. I regularly stress test by ingesting thousands of Wikipedia articles in batch, and the graph stays snappy.

The SQLite + Rust combo does a lot of heavy lifting on the backend side too — vector search, full-text search, and graph queries all running against a single file with no external dependencies. For your use case (spiraling thousands of notes) it should be very much in its comfort zone.

Basically: throw everything at it. That's kind of the point. 😄
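If you want to see the single-file idea in miniature: SQLite's built-in FTS5 gives you full-text search with zero external services. Table and column names below are illustrative, not Atomic's actual schema:

```python
import sqlite3

# One file (":memory:" here, a .db path in practice) holds both
# the notes and the full-text search index
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE atoms USING fts5(title, body)")
conn.executemany(
    "INSERT INTO atoms (title, body) VALUES (?, ?)",
    [
        ("Rust notes", "Ownership and borrowing prevent data races"),
        ("SQLite notes", "FTS5 gives full-text search with no extra services"),
        ("Graph notes", "Semantic links connect atoms into a knowledge graph"),
    ],
)

# Full-text MATCH query, ordered by FTS5's built-in relevance rank
rows = conn.execute(
    "SELECT title FROM atoms WHERE atoms MATCH ? ORDER BY rank", ("search",)
).fetchall()
print(rows)  # [('SQLite notes',)]
```

Atomic layers vector search and graph queries on top of the same file, but this is the core trick that makes "no Docker, no Postgres" possible.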

Sophia Falck-Ytter

The local-first + no data leaves your machine angle is underrated. We're building an AI that reads Google Drive files to organize them, and "who sees my content?" is the first question every user asks. Having the model run locally removes that friction entirely. Curious, does Atomic work well with existing large note collections, or is it better started fresh?

ken

@sophiafyi 

For sure - privacy anxiety is real friction, and local-first resonates, especially with technically-minded folks.

To your question: Atomic is built for existing collections as well as starting a KB from scratch. The ingestion pipeline is batch-optimized, so dropping in a large library is fast even at scale. A few ways to get existing notes in:

- Folder of markdown files - point it at your vault and it imports in bulk

- RSS feeds - ongoing ingestion, auto-tagged as items come in

- REST API - if you have a custom pipeline or want to push from other tools, it's fully pluggable
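To make the REST option concrete — the endpoint and field names below are placeholders, not Atomic's documented API, so check the docs for the real shape — a push from a custom pipeline would look roughly like:

```python
import json
import urllib.request

# Hypothetical payload shape for pushing a note into the KB
payload = {
    "title": "Interesting article",
    "body": "Notes captured from a custom pipeline",
    "tags": ["reading", "pipeline"],
}

req = urllib.request.Request(
    "http://localhost:8080/api/atoms",  # assumed local Atomic instance
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urllib.request.urlopen(req)  # uncomment against a running instance
print(req.get_method(), req.full_url)
```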

Piotr Sędzik

the MCP server integration caught my eye immediately - we've been building MCP servers for our open source projects and it's such a game changer for Claude workflows. curious how you handle the semantic graph generation? are you using embeddings for the connections or something more sophisticated?

Piotr Ratkowski

love that you went self-hosted AND local-first. so many knowledge tools force you into their cloud. the auto-generated wiki articles sound interesting - does it actually synthesize new content from your notes or just organize existing stuff? could see this being huge for technical documentation.