StreamMD
p/streammd
Streaming MD for LLMs. 300x fewer chars parsed per token


Most AI chat apps re-parse the entire markdown string on every streamed token, an O(n²) slowdown: after 500 tokens, that's 500 full re-parses and re-renders. StreamMD fixes this with incremental block parsing: only new text is processed, completed blocks are memoized (never re-rendered), and only the active block updates. Result: roughly 300x less parsing work. Syntax highlighting is built in (15 languages, ~3 kB, zero dependencies). Drop-in.
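The incremental approach above can be sketched roughly as follows. This is an illustrative sketch of the technique, not StreamMD's actual API; all names here are hypothetical:

```typescript
// Sketch of incremental block parsing (illustrative, not StreamMD's API).
// Completed blocks are parsed once and cached; each streamed token only
// re-parses the still-open final block.

type Block = { source: string; html: string };

// Stand-in for a real markdown block parser.
function parseBlock(source: string): string {
  return `<p>${source}</p>`;
}

class IncrementalParser {
  private completed: Block[] = []; // memoized, never re-parsed
  private active = "";             // tail text still being streamed
  parseCount = 0;                  // how many block parses actually ran

  append(token: string): void {
    this.active += token;
    // A blank line closes the current block and moves it to the cache.
    let split: number;
    while ((split = this.active.indexOf("\n\n")) !== -1) {
      const source = this.active.slice(0, split);
      this.completed.push({ source, html: parseBlock(source) });
      this.parseCount++;
      this.active = this.active.slice(split + 2);
    }
  }

  render(): string {
    // Cached HTML for completed blocks; only the active block is parsed.
    let tail = "";
    if (this.active) {
      tail = parseBlock(this.active);
      this.parseCount++;
    }
    return this.completed.map((b) => b.html).join("") + tail;
  }
}
```

With this structure, per-token work is proportional to the size of the active block rather than the whole document, which is where the large constant-factor savings come from.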