Launched this week
StreamMD

Streaming MD for LLMs. 300x fewer chars parsed per token

Every AI chat app re-parses the entire markdown document on each new token, an O(n²) slowdown: after 500 tokens, that's 500 full re-parses. StreamMD fixes this with incremental block parsing: only the new text is processed, completed blocks are memoized (no re-render), and only the active block updates. The result is roughly 300x less work, with built-in syntax highlighting (15 languages, ~3 kB, zero dependencies). Drop-in:
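The incremental approach can be sketched roughly like this (a minimal illustration, not StreamMD's actual API; all names here are hypothetical): completed blocks are frozen and memoized as soon as a blank-line separator arrives, so each new token only re-parses the trailing active block.

```typescript
// Hypothetical sketch of incremental block parsing. `parseBlock` is a
// stand-in for a real markdown block parser; StreamMD's internals differ.
type Block = { text: string; html: string };

function parseBlock(text: string): string {
  // Placeholder: a real parser would handle headings, lists, code, etc.
  return `<p>${text.trim()}</p>`;
}

class IncrementalParser {
  private done: Block[] = []; // memoized blocks, never re-parsed
  private active = "";        // only this tail is re-parsed per token

  push(token: string): void {
    this.active += token;
    // A blank line terminates a markdown block: freeze everything
    // before the last separator, keep only the tail active.
    const parts = this.active.split(/\n\n+/);
    while (parts.length > 1) {
      const text = parts.shift()!;
      this.done.push({ text, html: parseBlock(text) });
    }
    this.active = parts[0] ?? "";
  }

  render(): string {
    // Memoized HTML is reused as-is; one parse for the active block.
    return [...this.done.map(b => b.html), parseBlock(this.active)].join("\n");
  }
}
```

Per token, the cost is proportional to the active block rather than the whole document, which is where the quadratic blow-up disappears.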
