
maxly.chat
GitHub for LLMs
136 followers
maxly.chat is a collaborative AI canvas that supports branching conversations and parallel prompts. It offers smarter memory with tools to fix poisoned context, shared canvases for teams, and generative UI for rapid iteration — faster, cleaner, truly collaborative AI.

Hey Product Hunt — I’m Max. I’m 18, just graduated high school, and have worked at two VC-backed startups; now I’m looking for other opportunities. I’ve been using ChatGPT non-stop, and I keep running into the same pain points:
• You ask one question, then have to scroll back up to continue reading the initial response
• Chats get longer and models get worse. Some things should be remembered, some shouldn’t, but there’s no control.
• Running one prompt at a time is slow — and LLMs aren’t even deterministic. If you rerun the same question, you’ll get different (sometimes better) answers. That’s actually a feature we should be using.
• When I worked in teams, we often had context that needed to be shared, and ChatGPT’s current “Share” button just doesn’t cut it.
So here’s what I’ve been working on:
⚡ Parallel queries. Fire multiple prompts at once. Saves time, surfaces better answers.
🌀 Model variance. Run the same query across different models. Benchmarks only differ by a few %, but the perspectives vary a lot. Stack them together and you get stronger results.
🎨 Generative UI. Same idea, but for design. Spin up 10 versions instantly, pick the best one, iterate again.
👥 Collaboration. Still shocked no one’s done this right. Teams need a shared AI canvas — like Figma, but with LLMs.
———
This is scrappy, early, and still forming — but the vision is clear:
Branching for smarter memory. Generative UI. Collaborative LLMs that actually feel collaborative.
Would love feedback, ideas, and brutal honesty. I’m 18, figuring it out as I go, and trying to build the thing I wish I had.
— Max (maxplee8@gmail.com)
This hits so many of the pain points I run into daily. Long linear chats, context getting messy, only being able to run one prompt at a time… it all slows everything down.
The idea of branching memory + parallel prompts + model variance feels like the way AI should work. And having a shared canvas for teams — that’s been missing forever.
What I like most is how this could speed up iteration: fire off a bunch of prompts, compare answers side by side, spin up multiple UI variations instantly, and keep the best thread going. That’s exactly how real collaboration with AI should feel.
Huge respect to Max for tackling this head-on at 18. Excited to see where this goes, and would love to get hands-on with it for some of my own projects.
This seems pretty cool! Just signed up for the waitlist
This looks really cool, great job!
amazing!!!! will be using this.
Been waiting for someone to build this. Just signed up.
SuperSEO Tips
This looks awesome!