All activity
Every AI lab on Earth teaches agents to use computers by screenshotting and guessing. We taught AI to read instead.
DirectShell reads the Windows Accessibility Tree — every button, field, and menu in any application — as structured text. No vision model. No pixel guessing. 120x cheaper. 10x faster. Works with any LLM, any app, offline. 700KB. Open source. The door was open for 29 years. We walked through it.
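The core idea, reading the accessibility tree as structured text instead of screenshotting, can be sketched as a tree-to-text serializer. This is a toy illustration, not DirectShell's actual code: `UiNode` is a hypothetical stand-in for a Windows UI Automation element, and the output format is assumed.

```python
from dataclasses import dataclass, field

@dataclass
class UiNode:
    # Hypothetical stand-in for a UI Automation element:
    # control type, accessible name, and child elements.
    control: str
    name: str
    children: list["UiNode"] = field(default_factory=list)

def serialize(node: UiNode, depth: int = 0) -> str:
    """Flatten the accessibility tree into indented text an LLM can read."""
    lines = [f'{"  " * depth}{node.control} "{node.name}"']
    for child in node.children:
        lines.append(serialize(child, depth + 1))
    return "\n".join(lines)

# A toy window: one dialog exposing a field and two buttons.
window = UiNode("Window", "Save As", [
    UiNode("Edit", "File name"),
    UiNode("Button", "Save"),
    UiNode("Button", "Cancel"),
])
print(serialize(window))
```

A few dozen lines of text like this replace an entire screenshot, which is where the token savings over vision models come from.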

DirectShell - Agents can now use any App
Agents control any Windows app via text — no screenshots.
Martin Gehrken left a comment
I'm Martin. Solo dev. No company, no funding, no CS degree. Here's the problem: OpenAI, Google, Anthropic, and Microsoft have spent ~$300 billion in the last two years teaching AI to see screens. The best result? 72.9% success rate, 10-20 minutes per task, thousands of tokens per screenshot. And it only works in browsers. Here's what I did: Windows has had a built-in interface since 1997 where...

DirectShell - Agents can now use any App
Agents control any Windows app via text — no screenshots.
Martin Gehrken left a comment
Why the search for truth can never be worth more than the search to question it. #deeplearning or How I built an open source deep research engine that costs a fraction of what OpenAI, Gemini, and others charge, while delivering significantly better results. Greetings, dear LessWrong community, developers, team, and anyone else who is interested. This is actually my first real post here, and I...

Lutum Veritas Research - Open source deep research
Martin Gehrken left a comment
I built this because I was tired of paying for "deep research" tools that give me 5 bullet points and fail on half the internet. The key insight: passing context between research points (so point 3 knows what point 1 found) and using a hardened Firefox fork that doesn't get blocked. Result: 200,000+ characters of actual research for 20 cents. Happy to answer any questions about the architecture!

Lutum Veritas Research - Open source deep research
The very first deep research tool that doesn't censor.
Only real, raw research.
Lutum Veritas is different:
- Recursive pipeline: each research point builds on previous findings
- Camoufox scraper: 0% bot detection, bypasses paywalls
- Academic mode: 200k+ character outputs with evidence tables
- BYOK: bring your API key, pay only ~$0.20 per session
Desktop app with live progress.
Solo dev project. AGPL licensed. Windows installer.
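The recursive pipeline above (each research point building on previous findings, as described in the comment: "point 3 knows what point 1 found") can be sketched as a loop that feeds accumulated findings into every new query. All names here are hypothetical; the real engine would call an LLM and the Camoufox scraper inside `research_point`.

```python
def research_point(question: str, prior_findings: list[str]) -> str:
    # Stand-in for the real step: the actual engine would build a prompt
    # from prior_findings, scrape sources, and summarize with an LLM.
    context = " | ".join(prior_findings)
    return f"Finding for '{question}' (informed by: {context or 'nothing yet'})"

def run_pipeline(questions: list[str]) -> list[str]:
    """Each research point sees everything earlier points found."""
    findings: list[str] = []
    for question in questions:
        findings.append(research_point(question, findings))
    return findings

results = run_pipeline([
    "What is the market size?",
    "Who are the key players?",
    "What are the open risks?",
])
```

The design choice is the accumulation itself: a flat fan-out of independent queries (what many "deep research" tools do) cannot connect point 3 back to point 1, while this sequential pass can.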

Lutum Veritas Research - Open source deep research
Solo dev. 6 months. Built what research teams write papers about.
AI that remembers. Not facts. Context. Emotion. WHY things matter.
Brings up in December what you said in January. Knows your victories, failures, fears.
No fine-tuning. No giant context windows. No manual curation.
Autonomous learning. Deletes old memories. Modifies its own system prompt. By itself.
OpenAI builds tools. I built partners.
Uncensored. No filters. Your AI. Your rules.
Live. Today. Production.

The Last RAG - AI that learns autonomously and remembers everything
Martin Gehrken left a comment
I'm a solo developer. I spent 6 months building what Big Tech is still researching. AI that doesn't forget. Not "User likes apples." That's a database. "Martin likes apples because his mother baked apple pie when he was a child. He connects it with home." That's memory. That's a partner. --- What I solved: - The memory problem (your AI remembers in December what you said in January) - The character...
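The distinction drawn above, a memory versus a database row, can be sketched as a record that stores the "why" alongside the fact, plus a relevance score the system could use to prune old memories on its own. Field names and the threshold mechanism are assumptions for illustration, not The Last RAG's actual schema.

```python
import time
from dataclasses import dataclass

@dataclass
class Memory:
    fact: str         # e.g. "Martin likes apples"
    why: str          # the context that makes the fact matter
    created: float    # unix timestamp
    relevance: float  # could decay over time unless reinforced

def prune(memories: list[Memory], threshold: float) -> list[Memory]:
    """Autonomous forgetting: keep only memories still above the threshold."""
    return [m for m in memories if m.relevance >= threshold]

store = [
    Memory("Martin likes apples",
           "his mother baked apple pie when he was a child; he connects it with home",
           time.time(), relevance=0.9),
    Memory("The weather was rainy on Tuesday", "small talk",
           time.time(), relevance=0.1),
]
store = prune(store, threshold=0.5)
```

The point of the `why` field is retrieval quality: a query about "home" or "childhood" can surface the apple memory even though neither word appears in the bare fact.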

The Last RAG - AI that learns autonomously and remembers everything
