useChat
useChat(options?: object): UseChatResult
Defined in: src/expo/useChat.ts:120
A React hook for managing chat completions with authentication.
React Native version: uses XMLHttpRequest for streaming, since fetch response-body streaming isn't available in React Native. Delegates all tool-loop logic to the shared runToolLoop.
Parameters
| Parameter | Type | Description |
|---|---|---|
| `options?` | `object` | Optional configuration object. |
| … | … | Which API endpoint to use. Default: `"auto"`. |
| … | … | Optional base URL for the API requests. |
| `getToken` | `() => …` | An async function that returns an authentication token. |
| … | `(…) => …` | Called when a new data chunk is received. |
| `onError` | `(…) => …` | Called when an unexpected error is encountered. Note: this callback is NOT called for aborted requests (via `stop()`). |
| `onFinish` | `(…) => …` | Called when the chat completion finishes successfully. Receives the raw API response, in either Responses API or Completions API format. |
| … | `(…) => …` | Called when a server-side tool (MCP) is invoked during streaming. Use this to show activity indicators like "Searching…" in the UI. |
| … | `(…) => …` | Called after each tool-execution round completes. Receives the round index, model content, tool calls, results, and token usage. Useful for progress indicators, cost tracking, and custom early-exit logic. |
| … | `(…) => …` | Called when thinking/reasoning content is received, as delta chunks while the model "thinks" through a problem. |
| … | `(…) => …` | Called when the LLM requests a tool call for which no executor is registered (e.g. server-side tools). |
| … | `(…) => …` | Called with partial tool-call arguments as they stream in. Use for a live preview of artifacts (HTML, slides) being generated. |
| … | … | Controls adaptive output smoothing for streaming responses. Fast models can return text faster than is comfortable to read; smoothing buffers incoming chunks and releases them at a consistent, adaptive pace. Default: … |
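The smoothing behavior described above can be sketched as a small buffer that releases fixed-size pieces at a pace derived from the current backlog: the larger the backlog, the shorter the delay between releases. This is an illustrative sketch only, not the hook's internal implementation; `createSmoother` and its parameters are hypothetical names.

```typescript
type Smoother = {
  push: (chunk: string) => void;   // buffer an incoming chunk
  take: () => string | null;       // release the next piece, or null if empty
  delayMs: () => number;           // adaptive pause before the next release
};

// Hypothetical smoother: splits buffered text into `pieceSize` pieces and
// scales the release delay between `minDelay` and `maxDelay` based on how
// much text is waiting. More backlog => faster release to catch up.
function createSmoother(pieceSize = 4, minDelay = 5, maxDelay = 40): Smoother {
  let buffer = "";
  return {
    push(chunk) {
      buffer += chunk;
    },
    take() {
      if (buffer.length === 0) return null;
      const piece = buffer.slice(0, pieceSize);
      buffer = buffer.slice(pieceSize);
      return piece;
    },
    delayMs() {
      // Normalize backlog to [0, 1]; 200+ buffered chars means full speed.
      const t = Math.min(buffer.length / 200, 1);
      return Math.round(maxDelay - t * (maxDelay - minDelay));
    },
  };
}
```

A consumer would call `take()` on a timer, waiting `delayMs()` between releases, so short responses stay readable while long backlogs drain quickly.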
Returns
UseChatResult
An object containing:
- `isLoading`: a boolean indicating whether a request is currently in progress
- `sendMessage`: an async function to send chat messages
- `stop`: a function to abort the current request
Example
```typescript
const { isLoading, sendMessage, stop } = useChat({
  getToken: async () => await getAuthToken(),
  onFinish: (response) => console.log("Chat finished:", response),
  onError: (error) => console.error("Chat error:", error)
});

const handleSend = async () => {
  const result = await sendMessage({
    messages: [{ role: 'user', content: [{ type: 'text', text: 'Hello!' }] }],
    model: 'gpt-4o-mini'
  });
};
```
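Because `onError` is not invoked for aborted requests, cancellation is worth sketching separately. The following is a minimal sketch of the stop/abort pattern, assuming the hook backs each request with an `AbortController`; the `makeStoppable` helper is hypothetical and not part of the hook's API.

```typescript
// Hypothetical helper illustrating how a `stop`-style function can abort an
// in-flight request; not the hook's actual implementation.
function makeStoppable() {
  let controller: AbortController | null = null;
  return {
    // Begin a request: create a fresh controller and hand out its signal,
    // which would be passed down to the streaming transport layer.
    start(): AbortSignal {
      controller = new AbortController();
      return controller.signal;
    },
    // Abort the current request, if any. Consumers observing the signal
    // (or catching an AbortError) should treat this as a cancel rather
    // than an error, mirroring how onError skips aborted requests.
    stop(): void {
      controller?.abort();
    },
  };
}
```

In a component, the hook's `stop` plays the same role: wire it to a cancel button shown while `isLoading` is true.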