
maxly.chat
GitHub for LLMs
137 followers
maxly.chat is a collaborative AI canvas with branching conversations and parallel prompts. Smarter memory with tools to fix poisoned context, shared canvases for teams, and generative UI for rapid iteration: faster, cleaner, truly collaborative AI.
Hey Product Hunt — I'm Max. I'm 18, just graduated high school, and have worked at two VC-backed startups; now I'm looking for other opportunities. I've been using ChatGPT non-stop, and I keep running into the same pain points:
• You ask a follow-up question, then have to scroll back up to finish reading the initial response.
• Chats get longer and models get worse. Some things should be remembered, some shouldn't, but there's no control.
• Running one prompt at a time is slow, and LLMs aren't even deterministic. Rerun the same question and you'll get different (sometimes better) answers. That's actually a feature we should be using.
• When I worked on teams, we often had context that needed to be shared, and ChatGPT's current "Share" button just doesn't cut it.
So here’s what I’ve been working on:
⚡ Parallel queries. Fire multiple prompts at once. Saves time, surfaces better answers.
🌀 Model variance. Run the same query across different models. Benchmarks only differ by a few %, but the perspectives vary a lot. Stack them together and you get stronger results.
🎨 Generative UI. Same idea, but for design. Spin up 10 versions instantly, pick the best one, iterate again.
👥 Collaboration. Still shocked no one’s done this right. Teams need a shared AI canvas — like Figma, but with LLMs.
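The parallel-query and model-variance ideas above boil down to a fan-out pattern: send one prompt to several models concurrently and compare the answers. A minimal sketch in Python, where `query_model` is a hypothetical stand-in for a real LLM client (maxly.chat's actual API is not public):

```python
import asyncio

async def query_model(model: str, prompt: str) -> str:
    # Hypothetical stand-in for a real LLM API call; here we just
    # simulate a network round-trip and return a labeled answer.
    await asyncio.sleep(0.01)
    return f"[{model}] answer to: {prompt}"

async def fan_out(prompt: str, models: list[str]) -> list[str]:
    # Fire the same prompt at every model concurrently and collect
    # the responses in the same order as the input model list.
    return await asyncio.gather(*(query_model(m, prompt) for m in models))

results = asyncio.run(
    fan_out("Summarize this doc", ["model-a", "model-b", "model-c"])
)
for r in results:
    print(r)
```

The point of `asyncio.gather` here is that total latency is roughly the slowest single call, not the sum of all calls, which is what makes side-by-side comparison cheap.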
———
This is scrappy, early, and still forming — but the vision is clear:
Branching for smarter memory. Generative UI. Collaborative LLMs that actually feel collaborative.
Would love feedback, ideas, and brutal honesty. I’m 18, figuring it out as I go, and trying to build the thing I wish I had.
— Max (maxplee8@gmail.com)
This hits so many of the pain points I run into daily. Long linear chats, context getting messy, only being able to run one prompt at a time… it slows everything down.
The idea of branching memory + parallel prompts + model variance feels like the way AI should work. And having a shared canvas for teams - OMG - that’s been missing forever.
What I like most is how this could speed up iteration: fire off a bunch of prompts, compare answers side by side, spin up multiple UI variations instantly, and keep the best thread going. That’s exactly how real collaboration with AI should feel.
Huge respect to Max for tackling this head-on at 18. Excited to see where this goes, and would love to get hands-on with it for some of my own projects.
This looks amazing. Signed up on your waitlist. I was just thinking about this last night. Hope to test it out soon.
You've got my vote. Great work, incredible considering you're just 18. Damn, I was doing other things at 18, respect. You'll go places! Just keep in mind to do good, overall! ;)
Future star! Not just because he's a great engineer (great engineers are everywhere), but because he thought about the product fundamentally right: as we all know, memory and context have always been a problem in chats with any LLM (some people are also building supermemory). Loved the launch video!
If you can scale this I think it will be amazing. Far too early to be thinking about new features, but I thought I'd throw in a few idea seeds for the future. The overall idea is to make your canvas what Apple's Freeform should be:
AI-generated image nodes.
Completely new chats on the same canvas, with an AI supervisor looking for connections.
Projects for collections of chats.
Links to other canvases.
Those are just a few of the seeds. I'm sure there are many more already on your board. Can't wait to see this in operation.
This seems pretty cool! Just signed up for the waitlist