Launching today

soul.py
Two files. Any LLM. Your AI finally remembers.
Every AI assistant has amnesia: close the terminal and context is gone forever. soul.py fixes this with two markdown files, SOUL.md (who your assistant is) and MEMORY.md (what it remembers). Git-versioned, human-editable, zero infrastructure. Works with Claude, GPT, or free local models via Ollama. `pip install soul-agent` → `soul init` → `soul chat`. v2.0 adds a RAG+RLM hybrid router, soul-stack (Docker) runs it as a persistent service for n8n workflows, and enterprise teams can use the SoulMate API for managed cloud memory.
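The post mentions a RAG+RLM hybrid router without going into detail. Here is one illustrative way such routing could look: if the memory fits a context budget, pass all of it through; otherwise fall back to naive retrieval over chunks. Every name and the heuristic below are assumptions for the sake of the sketch, not soul.py's actual implementation.

```python
def route(query: str, memory_chunks: list[str], max_context: int = 2000) -> list[str]:
    """Toy hybrid router (illustrative only, not soul.py's real logic).

    Small memory: send everything to the model in one shot.
    Large memory: retrieve the chunks with the most word overlap (RAG path).
    """
    total = sum(len(chunk) for chunk in memory_chunks)
    if total <= max_context:
        return memory_chunks  # fits the budget: no retrieval needed

    # Naive retrieval: rank chunks by shared words with the query.
    query_words = set(query.lower().split())
    scored = sorted(
        memory_chunks,
        key=lambda chunk: len(query_words & set(chunk.lower().split())),
        reverse=True,
    )
    return scored[:3]  # top-3 chunks by word overlap
```

A real router would use embeddings rather than word overlap, but the shape of the decision (budget check, then retrieval fallback) is the interesting part.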

I built soul.py after getting frustrated re-explaining my projects to AI assistants every single session. The insight was simple: memory shouldn't require infrastructure. Two markdown files you can read, edit, and git diff — that's it.
Three ways to use it:
- Python library — pip install soul-agent (PyPI v0.1.4) — embed directly in any Python project
- CLI — soul init && soul chat — interactive REPL with persistent memory, works with Ollama locally
- Docker — docker pull pgmenon/soul-stack — REST API wrapping the agent, drop into any n8n or self-hosted workflow
On naming: the GitHub repo is soul.py (the project name), the installable package is soul-agent on PyPI. Same thing, different contexts — soul-agent is what you pip install, soul.py is what you star on GitHub.
Within 48 hours of launching on Reddit, it passed 50,000 views and hit #1 in r/ollama.
The community immediately started building with it: Docker deployments, n8n integrations, Ollama setups with Llama3, Mistral, and IBM Granite. We shipped four PyPI releases in the first week in response to real user feedback.
Happy to answer anything — architecture, the RAG vs RLM routing decision, Docker setup, or where this is going. 🧠