
useChat

useChat(options?: object): UseChatResult

Defined in: src/expo/useChat.ts:120 

A React hook for managing chat completions with authentication.

React Native version: uses XMLHttpRequest for streaming, since fetch response body streaming isn't available in React Native. All tool loop logic is delegated to the shared runToolLoop.

Parameters


options?

object

Optional configuration object

options.apiType?

ApiType

Which API endpoint to use. Default: “auto”

  • “auto”: automatically selects the best API based on model support
  • “responses”: OpenAI Responses API (supports thinking, reasoning, conversations)
  • “completions”: OpenAI Chat Completions API (wider model compatibility)
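A minimal options sketch for pinning the endpoint choice. Only the option names and values shown on this page are used; the token getter body is a placeholder.

```typescript
// Options sketch: "completions" pins the hook to the Chat Completions API
// for wider model compatibility; "auto" (the default) selects per model.
const chatOptions = {
  apiType: "completions" as const,
  getToken: async () => null, // placeholder: replace with a real token getter
};
// const { sendMessage } = useChat(chatOptions);
```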

options.baseUrl?

string

Optional base URL for the API requests.

options.getToken?

() => Promise<string | null>

An async function that returns an authentication token, or null if no token is available.

options.onData?

(chunk: string) => void

Callback function to be called when a new data chunk is received.

options.onError?

(error: Error) => void

Callback function to be called when an unexpected error is encountered.

Note: This callback is NOT called for aborted requests (via stop() or component unmount). Aborts are intentional actions and are not considered errors. To detect aborts, check the error field in the sendMessage result: result.error === "Request aborted".
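Since onError is never invoked for aborts, abort detection has to happen on the sendMessage result. A small sketch of that check; only the error field and its "Request aborted" value come from the note above, the rest of the result shape is illustrative.

```typescript
// Illustrative result shape: only the `error` field is documented above.
interface SendResult {
  error?: string;
}

// Distinguish an intentional abort (stop() or unmount) from a real failure.
function isAbort(result: SendResult): boolean {
  return result.error === "Request aborted";
}

// Usage inside a send handler:
//   const result = await sendMessage({ ... });
//   if (isAbort(result)) return;        // user cancelled; stay quiet
//   if (result.error) { /* surface a genuine failure to the user */ }
```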

options.onFinish?

(response: ApiResponse) => void

Callback function to be called when the chat completion finishes successfully. Receives the raw API response, in either Responses API or Completions API format.

options.onServerToolCall?

(toolCall: ServerToolCallEvent) => void

Callback function to be called when a server-side tool (MCP) is invoked during streaming. Use this to show activity indicators like “Searching…” in the UI.

options.onStepFinish?

(event: StepFinishEvent) => void

Called after each tool execution round completes. Receives the round index, model content, tool calls, results, and token usage. Useful for progress indicators, cost tracking, and custom early-exit logic.
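Because onStepFinish reports token usage per round, a running total is enough for simple cost tracking. A sketch under assumptions: the event field names (usage.inputTokens, usage.outputTokens) are stand-ins, since only "token usage" is named above.

```typescript
// Assumed usage shape: field names are illustrative, not from the SDK.
interface StepUsage {
  inputTokens: number;
  outputTokens: number;
}

// Accumulates token usage across tool-execution rounds.
function makeUsageTracker() {
  let input = 0;
  let output = 0;
  return {
    // Call this from onStepFinish with the event's usage payload.
    record(usage: StepUsage) {
      input += usage.inputTokens;
      output += usage.outputTokens;
    },
    total() {
      return { input, output };
    },
  };
}
```

Wired up, this would look like onStepFinish: (event) => tracker.record(event.usage), with the event's actual field names substituted.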

options.onThinking?

(chunk: string) => void

Callback function to be called when thinking/reasoning content is received. This is called with delta chunks as the model “thinks” through a problem.

options.onToolCall?

(toolCall: LlmapiToolCall) => void

Callback function to be called when a tool call is requested by the LLM but no executor is registered for it (e.g. server-side tools).

options.onToolCallArgumentsDelta?

(event: ToolCallArgumentsDeltaEvent) => void

Called with partial tool call arguments as they stream in. Use for live preview of artifacts (HTML, slides) being generated.
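For the live-preview use case, the deltas need to be accumulated per tool call. A sketch under assumptions: the event field names (toolCallId, delta) are illustrative, as the page does not document the ToolCallArgumentsDeltaEvent shape.

```typescript
// Assumed event shape: `toolCallId` and `delta` names are illustrative.
interface ArgsDeltaEvent {
  toolCallId: string;
  delta: string;
}

// Accumulates streamed argument fragments per tool call so the UI can
// render a live preview (e.g. HTML being generated) as chunks arrive.
function makeArgsBuffer() {
  const buffers = new Map<string, string>();
  return {
    // Call from onToolCallArgumentsDelta; returns the current partial
    // arguments string for that tool call.
    push(event: ArgsDeltaEvent): string {
      const next = (buffers.get(event.toolCallId) ?? "") + event.delta;
      buffers.set(event.toolCallId, next);
      return next;
    },
  };
}
```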

options.smoothing?

boolean | StreamSmoothingConfig

Controls adaptive output smoothing for streaming responses. Fast models can return text faster than is comfortable to read — smoothing buffers incoming chunks and releases them at a consistent, adaptive pace.

  • true or omitted: enabled with defaults (200→400 chars/sec over 3s)
  • false: disabled, callbacks fire immediately with raw chunks
  • StreamSmoothingConfig: custom speed/ramp configuration

Default

true
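The three smoothing modes side by side. The boolean forms are as documented above; the custom config field names (startCharsPerSec, maxCharsPerSec, rampMs) are hypothetical stand-ins for the 200→400 chars/sec over 3s defaults, since StreamSmoothingConfig's shape is not documented on this page.

```typescript
// The two documented boolean forms:
const smoothingOff = { smoothing: false };  // raw chunks, callbacks fire immediately
const smoothingOn = { smoothing: true };    // defaults: 200→400 chars/sec over 3s

// Hypothetical custom config: these field names are assumptions, not the
// actual StreamSmoothingConfig shape. Check the SDK types before use.
const smoothingCustom = {
  smoothing: {
    startCharsPerSec: 100,
    maxCharsPerSec: 300,
    rampMs: 5000,
  },
};
```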

Returns

UseChatResult

An object containing:

  • isLoading: A boolean indicating whether a request is currently in progress
  • sendMessage: An async function to send chat messages
  • stop: A function to abort the current request

Example

const { isLoading, sendMessage, stop } = useChat({
  getToken: async () => await getAuthToken(),
  onFinish: (response) => console.log("Chat finished:", response),
  onError: (error) => console.error("Chat error:", error),
});

const handleSend = async () => {
  const result = await sendMessage({
    messages: [{ role: 'user', content: [{ type: 'text', text: 'Hello!' }] }],
    model: 'gpt-4o-mini',
  });
};