How markdown rendering works
The rendering pipeline has three steps:

- Receive: `useStream` accumulates the streamed text into `msg.text` on each AI message, updating reactively as new tokens arrive.
- Parse: A markdown parser converts the raw text to HTML (or a React element tree). This runs on every update but is fast enough for chat-length content (< 5 ms for a 5 KB message).
- Render: The parsed output is rendered into the DOM. React uses virtual DOM diffing; Vue and Svelte use `v-html`/`{@html}` with sanitized HTML.
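The three steps can be sketched as a small loop. This is a framework-independent illustration with hypothetical names (`makePipeline`, a stand-in `parse` function); a real app would plug in `marked` or `react-markdown`:

```typescript
// Receive → Parse → Render loop (sketch; names are illustrative).
type Parse = (markdown: string) => string;
type Render = (html: string) => void;

function makePipeline(parse: Parse, render: Render) {
  let text = ""; // accumulated message text (the role msg.text plays)

  // Called once per streamed token.
  return (token: string) => {
    text += token;            // Receive: append the new token
    const html = parse(text); // Parse: re-parse the full message
    render(html);             // Render: hand the output to the framework
  };
}
```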
Setting up useStream
The markdown pattern uses a simple chat agent with no special configuration. Wire up `useStream` with your agent URL and assistant ID.
Define a TypeScript interface matching your agent's state schema and pass it as a type parameter to `useStream` for type-safe access to state values. In the examples below, replace `typeof myAgent` with your interface name:
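As a minimal sketch, a state interface for a simple chat agent might look like this. The `AgentState` and `Message` names are assumptions, not part of the SDK, and the `useStream` call is shown in a comment because it needs a running agent:

```typescript
// Hypothetical state shape matching a simple chat agent's schema.
interface Message {
  type: "human" | "ai";
  text: string;
}

interface AgentState {
  messages: Message[];
}

// With the React hook, pass the interface as the type parameter (sketch):
//
//   const stream = useStream<AgentState>({
//     apiUrl: "http://localhost:2024", // assumed local dev URL
//     assistantId: "agent",
//   });
//
// stream.messages is then typed as Message[].

// Small typed helper: pull the latest AI message text for rendering.
function latestAiText(messages: Message[]): string {
  for (let i = messages.length - 1; i >= 0; i--) {
    if (messages[i].type === "ai") return messages[i].text;
  }
  return "";
}
```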
Choosing a markdown library
Each framework has a natural choice for markdown rendering:

| Framework | Library | Output | Why |
|---|---|---|---|
| React | `react-markdown` + `remark-gfm` | React elements | Component-based, virtual DOM diffing, no `dangerouslySetInnerHTML` |
| Vue | `marked` + `dompurify` | Sanitized HTML via `v-html` | Lightweight, fast, GFM built-in |
| Svelte | `marked` + `dompurify` | Sanitized HTML via `{@html}` | Same as Vue, consistent API |
| Angular | `marked` + `dompurify` | Sanitized HTML via `[innerHTML]` | Same as Vue/Svelte |
Building the Markdown component
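A React version of the component might look like the following sketch. It assumes `react-markdown` and `remark-gfm` are installed; the component name and props shape are illustrative, not a fixed API:

```typescript
import ReactMarkdown from "react-markdown";
import remarkGfm from "remark-gfm";

// Renders one message's accumulated text as markdown.
export function MarkdownContent({ text }: { text: string }) {
  if (!text) return null; // avoid rendering empty containers
  return (
    <div className="markdown-content">
      <ReactMarkdown remarkPlugins={[remarkGfm]}>{text}</ReactMarkdown>
    </div>
  );
}
```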
Sanitizing HTML output
When rendering parsed markdown as raw HTML (`v-html`, `{@html}`, `[innerHTML]`), you must sanitize the output to prevent cross-site scripting (XSS). LLM responses may contain arbitrary text, including markup that a markdown parser could turn into executable HTML.
Use `dompurify` to strip dangerous elements: `<script>` tags, `onclick` attributes, `javascript:` URLs, and other XSS vectors, while preserving safe markdown output like headings, lists, code blocks, tables, and links.
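For the HTML-producing frameworks, the parse-then-sanitize step is a one-liner. A minimal sketch, assuming `marked` and `dompurify` are installed (in a browser; `dompurify` needs a DOM):

```typescript
import { marked } from "marked";
import DOMPurify from "dompurify";

// Parse the LLM's markdown, then strip any dangerous HTML before
// binding the result via v-html / {@html} / [innerHTML].
function renderMarkdown(text: string): string {
  const rawHtml = marked.parse(text) as string;
  return DOMPurify.sanitize(rawHtml);
}
```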
React's `react-markdown` does not need `dompurify` because it produces React elements directly; no raw HTML injection is involved.
Streaming considerations
`useStream` updates `msg.text` reactively as each token arrives. The markdown component re-parses on every update. For typical chat messages, this is performant:

- `marked` parses at ~1 MB/s, so a 5 KB message takes < 5 ms.
- `react-markdown` plus the remark pipeline is similarly fast for chat-length content.
- The browser's layout engine handles the DOM update efficiently.
If profiling does show the per-token re-parse is a bottleneck, two optimizations are available:

- Throttle renders: use `requestAnimationFrame` to batch updates at 60 fps instead of re-rendering on every token.
- Incremental parsing: parse only new content and append to a rendered buffer (advanced, typically not needed for chat UIs).
For most chat applications, the simple approach of re-parsing the full message
on each token is sufficient. Only optimize if you observe janky scrolling or
dropped frames with very long messages.
Styling markdown content
Apply styles to the `.markdown-content` class to control the appearance of rendered markdown. Here are the essential styles:
Best practices
- Always sanitize: when using `v-html`, `{@html}`, or `[innerHTML]`, always run the parsed output through `dompurify`. Never trust raw HTML from a markdown parser fed with LLM output.
- Enable GFM: GitHub Flavored Markdown adds tables, strikethrough, task lists, and autolinks. These features are commonly used by LLMs.
- Handle empty content: check for empty strings before parsing to avoid rendering empty containers.
- Use `breaks: true`: enable line break conversion so single newlines in LLM output render as `<br>` rather than being ignored. LLMs often use single newlines for visual separation.
- Style for chat context: use compact margins and sizes appropriate for chat bubbles, not full-width article layouts.
- Test with rich content: verify rendering with headings, nested lists, code blocks with long lines, wide tables, and blockquotes to catch overflow or layout issues.
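With `marked`, the GFM and line-break recommendations above are two options on the parser. A sketch (note that `marked.setOptions` mutates the shared instance, so larger apps may prefer a dedicated `Marked` instance):

```typescript
import { marked } from "marked";

// Enable GitHub Flavored Markdown and single-newline <br> conversion.
marked.setOptions({
  gfm: true,    // tables, strikethrough, task lists, autolinks
  breaks: true, // single newlines become <br>
});
```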