![SoulMate [Built on Soul.Py]](https://ph-files.imgix.net/b89fb491-825f-4ba4-8571-fe601d97f26a.png?auto=compress&codec=mozjpeg&cs=strip&auto=format&w=64&h=64&fit=crop&frame=1)
SoulMate [Built on Soul.Py]
Two files. Any LLM. Your AI finally remembers.
3 followers
Every AI assistant has amnesia — close the terminal and context is gone forever. soul.py fixes this with two markdown files: `SOUL.md` (identity) and `MEMORY.md` (memory). Git-versioned, human-editable, zero infrastructure. Works with Claude, GPT, or free local models via Ollama. `pip install soul-agent` → `soul init` → `soul chat`. v2.0 includes a RAG+RLM hybrid router. soul-stack (Docker) adds it as a persistent service for n8n. Enterprise teams can use the SoulMate API for managed cloud memory!

![SoulMate [Built on Soul.Py] gallery image](https://ph-files.imgix.net/ba4d3ae5-f10a-40be-9f8d-3a34dc8082f4.png?auto=compress&codec=mozjpeg&cs=strip&auto=format&w=220&h=220&fit=max&frame=1)
![SoulMate [Built on Soul.Py] gallery image](https://ph-files.imgix.net/3a6f4b09-ae3c-4d9a-b3cc-abb8667e5651.png?auto=compress&codec=mozjpeg&cs=strip&auto=format&w=165&h=220&fit=max&frame=1)
![SoulMate [Built on Soul.Py] gallery image](https://ph-files.imgix.net/324091b8-c116-439e-b0e4-3fc52e9f906a.png?auto=compress&codec=mozjpeg&cs=strip&auto=format&w=173&h=220&fit=max&frame=1)
I built soul.py after getting frustrated re-explaining my projects to AI assistants every single session. The insight was simple: memory shouldn't require infrastructure. Two markdown files you can read, edit, and git diff — that's it.
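The pattern is small enough to sketch in a few lines. This is an illustrative approximation, not soul.py's actual code; only the two file names come from the project, and the function names here are made up:

```python
from pathlib import Path

SOUL_FILE = Path("SOUL.md")      # identity: who the agent is
MEMORY_FILE = Path("MEMORY.md")  # memory: what it has learned so far

def build_prompt(user_message: str) -> str:
    """Prepend identity and memory to every request, so any LLM 'remembers'."""
    soul = SOUL_FILE.read_text() if SOUL_FILE.exists() else ""
    memory = MEMORY_FILE.read_text() if MEMORY_FILE.exists() else ""
    return f"{soul}\n\n## Memory\n{memory}\n\n## User\n{user_message}"

def remember(fact: str) -> None:
    """Append one bullet to MEMORY.md; the file stays readable and git-diffable."""
    with MEMORY_FILE.open("a") as f:
        f.write(f"- {fact}\n")
```

Because all state lives in plain markdown, inspecting memory is just `cat MEMORY.md`, and rolling it back is just a git checkout.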
Three ways to use it:
- Python library — `pip install soul-agent` (PyPI v0.1.4) — embed directly in any Python project
- CLI — `soul init && soul chat` — interactive REPL with persistent memory, works with Ollama locally
- Docker — `docker pull pgmenon/soul-stack` — REST API wrapping the agent, drop into any n8n or self-hosted workflow
On naming: the GitHub repo is soul.py (the project name); the installable package is soul-agent on PyPI. Same thing, different contexts — `soul-agent` is what you `pip install`, soul.py is what you star on GitHub.
Within 48 hours of launching on Reddit, it passed 50,000 views and hit #1 in r/ollama.
The community immediately started building with it: Docker deployments, n8n integrations, Ollama setups with Llama3, Mistral, and IBM Granite. We shipped four PyPI releases in the first week in response to real user feedback.
Happy to answer anything — architecture, the RAG vs RLM routing decision, Docker setup, or where this is going. 🧠
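On that routing question, the shape of the decision can be sketched even without the real router. Everything below is hypothetical: keyword overlap stands in for whatever similarity measure soul.py actually uses, and the function name is illustrative.

```python
from typing import List, Tuple

def route(query: str, memory_lines: List[str]) -> Tuple[str, List[str]]:
    """Toy hybrid router: retrieve matching memory lines (RAG) when the query
    overlaps stored facts, otherwise fall back to full-context reasoning (RLM)."""
    words = {w.lower().strip("?.,!") for w in query.split()}
    hits = [ln for ln in memory_lines
            if words & {t.lower().strip("?.,!") for t in ln.split()}]
    # Enough overlap -> cheap retrieval path; no overlap -> send everything.
    return ("rag", hits) if hits else ("rlm", memory_lines)
```

The underlying trade-off is cost versus recall: retrieval keeps prompts small, while the full-context path never misses a fact.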