All activity
Jack Andersohn left a comment
This hits so many of the pain points I run into daily. Long linear chats, context getting messy, only being able to run one prompt at a time… it slows everything down. The idea of branching memory + parallel prompts + model variance feels like the way AI should work. And having a shared canvas for teams - OMG - that’s been missing forever. What I like most is how this could speed up iteration:...

maxly.chat: GitHub for LLMs
