Launching today
StreamMD

Streaming Markdown for LLMs: ~300x fewer characters parsed per token

Every AI chat app re-parses the entire Markdown document on each new token, which is O(n²) work overall: after 500 tokens, that's 500 full re-parses and re-renders. StreamMD fixes this with incremental block parsing: only new text is processed, completed blocks are memoized (never re-rendered), and only the active block updates. The result is roughly 300x less work. Syntax highlighting is built in (15 languages, ~3 kB, zero dependencies), and it's a drop-in replacement.