Why message queues?
In a typical chat interface, users must wait for the agent to finish responding before sending another message. This creates friction in several scenarios:
- Batch questions — a user wants to ask five related questions at once rather than waiting for each answer
- Follow-up chains — submitting clarifications or additional context while the agent is still working
- Automated testing sequences — programmatically sending a series of prompts to validate agent behavior
- Data entry workflows — feeding structured inputs one after another for processing
How it works
Under the hood, LangGraph uses multitaskStrategy: "enqueue" to manage concurrent submissions. When a message is submitted while the agent is already processing, it gets added to a server-side queue. Once the current run completes, the next queued message is picked up automatically.
The useStream hook exposes a queue property that provides real-time visibility into pending messages:
| Property | Type | Description |
|---|---|---|
| queue.entries | QueueEntry[] | Array of all pending queue entries |
| queue.size | number | Number of entries currently in the queue |
| queue.cancel(id) | (id: string) => Promise<void> | Cancel a specific queued entry by ID |
| queue.clear() | () => Promise<void> | Cancel all queued entries |
Each QueueEntry object contains:
| Field | Type | Description |
|---|---|---|
| id | string | Unique identifier for this queue entry |
| values | object | The input values (including messages) that were submitted |
| options | object | Any additional options passed with the submission |
| createdAt | string | ISO timestamp of when the entry was created |
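The two tables above describe a plain data surface, so it can be inspected directly. The sketch below mirrors those shapes with local structural types (the SDK ships its own) and adds a hypothetical summarizeQueue helper — the helper and its wording are illustrative, not part of the SDK:

```typescript
// Structural types mirroring the tables above (illustrative; the SDK ships its own).
interface QueueEntry {
  id: string;                       // unique identifier for this entry
  values: Record<string, unknown>;  // input values, including messages
  options: Record<string, unknown>; // extra options passed with the submission
  createdAt: string;                // ISO timestamp of creation
}

interface Queue {
  entries: QueueEntry[];
  size: number;
  cancel: (id: string) => Promise<void>;
  clear: () => Promise<void>;
}

// Hypothetical helper: a human-readable summary of pending work.
function summarizeQueue(queue: Queue): string {
  if (queue.size === 0) return "Queue is empty";
  const oldest = queue.entries[0];
  return `${queue.size} pending; next entry ${oldest.id} (queued at ${oldest.createdAt})`;
}
```

Because every entry carries createdAt, a UI built on this surface can also show how long each message has been waiting.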
Setting up useStream
Define a TypeScript interface matching your agent’s state schema and pass it as a type parameter to useStream for type-safe access to state values. In the examples below, replace typeof myAgent with your interface name:
Displaying the queue
Build a QueueList component that shows each pending message with a cancel button. This gives users visibility into what’s waiting and the ability to remove items they no longer need.
Cancelling queued messages
You have two levels of cancellation:

Cancel a single entry

Remove a specific message from the queue by its ID. The agent will skip it and move on to the next entry.

Clear the entire queue

Remove all pending messages at once. This is useful when the user changes context or wants to start over.

Note that cancelling a queue entry only affects messages that have not yet started processing. If the agent is already working on a message, cancelling it from the queue has no effect — you would need to use stream.stop() to interrupt the current run.

Chaining follow-up submissions with onCreated
The onCreated callback fires when a new run is created, giving you a hook to submit follow-up messages programmatically. This is useful for building multi-step workflows where the next question depends on the previous submission being accepted.
Starting a new thread
When a user wants to begin a fresh conversation, use switchThread(null) to create a new thread. This clears the current message history and queue.
Complete example
Putting it all together: a full chat component combines the typed useStream setup, the queue display, per-entry and bulk cancellation, and new-thread handling from the sections above.

Best practices
- Queue size limits — While there is no hard client-side limit on queue size, be mindful that very large queues can degrade user experience. Consider showing a warning when the queue exceeds a reasonable threshold (e.g., 10 items).
- Show queue position — Number each queued item so users know the processing order
- Preserve input focus — Keep the input field focused after submission so users can type the next message immediately
- Animate transitions — Smoothly move items from the queue panel into the message list as they start processing
- Handle errors gracefully — If a queued message fails, surface the error without blocking subsequent queue entries
- Debounce rapid submissions — For automated or programmatic submissions, add a small delay between messages to avoid overwhelming the server

