UseChatStorageResult

Defined in: src/react/useChatStorage.ts:589 

Result returned by useChatStorage hook (React version)

Extends base result with React-specific sendMessage signature.

Extends

  • BaseUseChatStorageResult

Properties

clearQueue()

clearQueue: () => void

Defined in: src/react/useChatStorage.ts:725 

Clear all queued operations for the current wallet. Discards pending operations without writing them.

Returns

void


conversationId

conversationId: string | null

Defined in: src/lib/db/chat/types.ts:718 

Inherited from

BaseUseChatStorageResult.conversationId


createConversation()

createConversation: (options?: CreateConversationOptions) => Promise<StoredConversation>

Defined in: src/lib/db/chat/types.ts:720 

Parameters

| Parameter | Type |
| --------- | ---- |
| `options?` | `CreateConversationOptions` |

Returns

Promise<StoredConversation>

Inherited from

BaseUseChatStorageResult.createConversation


createMemoryEngineTool()

createMemoryEngineTool: (searchOptions?: Partial<MemoryEngineSearchOptions>) => ToolConfig

Defined in: src/react/useChatStorage.ts:643 

Create a memory engine tool for LLM to search past conversations. The tool is pre-configured with the hook’s storage context and auth.

Parameters

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `searchOptions?` | `Partial<MemoryEngineSearchOptions>` | Optional search configuration (limit, minSimilarity, etc.) |

Returns

ToolConfig

A ToolConfig that can be passed to sendMessage’s clientTools

Example

```typescript
const memoryTool = createMemoryEngineTool({ limit: 5 });

await sendMessage({
  messages: [...],
  clientTools: [memoryTool],
});
```

createMemoryVaultSearchTool()

createMemoryVaultSearchTool: (searchOptions?: MemoryVaultSearchOptions) => ToolConfig

Defined in: src/react/useChatStorage.ts:662 

Create a memory vault search tool for LLM to search vault memories using semantic similarity. Pre-configured with vault context, auth, and a shared embedding cache that is pre-populated on init.

Parameters

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `searchOptions?` | `MemoryVaultSearchOptions` | Optional search configuration (limit, minSimilarity) |

Returns

ToolConfig

A ToolConfig that can be passed to sendMessage’s clientTools


createMemoryVaultTool()

createMemoryVaultTool: (options?: MemoryVaultToolOptions) => ToolConfig

Defined in: src/react/useChatStorage.ts:652 

Create a memory vault tool for LLM to save/update persistent memories. The tool is pre-configured with the hook’s vault context and encryption.

Parameters

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `options?` | `MemoryVaultToolOptions` | Optional configuration (onSave callback for confirmation) |

Returns

ToolConfig

A ToolConfig that can be passed to sendMessage’s clientTools


createVaultMemory()

createVaultMemory: (content: string, scope?: string) => Promise<StoredVaultMemory>

Defined in: src/react/useChatStorage.ts:695 

Create a new vault memory with the given content.

Parameters

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `content` | `string` | The memory text |
| `scope?` | `string` | Optional scope (defaults to “private”) |

Returns

Promise<StoredVaultMemory>


deleteConversation()

deleteConversation: (id: string) => Promise<boolean>

Defined in: src/lib/db/chat/types.ts:724 

Parameters

| Parameter | Type |
| --------- | ---- |
| `id` | `string` |

Returns

Promise<boolean>

Inherited from

BaseUseChatStorageResult.deleteConversation


deleteVaultMemory()

deleteVaultMemory: (id: string) => Promise<boolean>

Defined in: src/react/useChatStorage.ts:712 

Delete a vault memory by its ID (soft delete).

Parameters

| Parameter | Type |
| --------- | ---- |
| `id` | `string` |

Returns

Promise<boolean>

true if the memory was found and deleted


flushQueue()

flushQueue: () => Promise<FlushResult>

Defined in: src/react/useChatStorage.ts:719 

Manually flush all queued operations for the current wallet. Operations are encrypted and written to the database. Requires the encryption key to be available.

Returns

Promise<FlushResult>
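Example

flushQueue and clearQueue pair naturally into a recovery routine: retry the flush a few times, then discard the queue only as a last resort. A minimal sketch, assuming FlushResult exposes an ok flag (an illustrative assumption, not the documented shape):

```typescript
// Hypothetical recovery wrapper around flushQueue/clearQueue.
// Assumes FlushResult has an `ok: boolean` field (illustrative only).
async function flushWithRetry(
  flush: () => Promise<{ ok: boolean }>, // e.g. the hook's flushQueue
  clear: () => void,                     // e.g. the hook's clearQueue
  maxAttempts = 3,
): Promise<boolean> {
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    const result = await flush();
    if (result.ok) return true;
    // Exponential backoff: 100ms, 200ms, 400ms, ...
    await new Promise((r) => setTimeout(r, 100 * 2 ** attempt));
  }
  clear(); // Give up: discard pending operations rather than retry forever
  return false;
}
```

Whether discarding is acceptable depends on your app; for chat history, a failed flush may be better surfaced to the user than silently cleared.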


getAllFiles()

getAllFiles: (options?: object) => Promise<StoredFileWithContext[]>

Defined in: src/react/useChatStorage.ts:623 

Get all files from all conversations, sorted by creation date (newest first). Returns files with conversation context for building file browser UIs.

Parameters

| Parameter | Type |
| --------- | ---- |
| `options?` | `object` |
| `options.conversationId?` | `string` |
| `options.limit?` | `number` |

Returns

Promise<StoredFileWithContext[]>
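Example

For the file-browser use case, a small grouping step over the returned array is usually the first thing needed. A sketch that assumes only a conversationId field on StoredFileWithContext; everything else passes through untouched:

```typescript
// Group files (e.g. from getAllFiles) by conversation for display.
// Only the `conversationId` field is assumed.
function groupFilesByConversation<T extends { conversationId: string }>(
  files: T[],
): Map<string, T[]> {
  const groups = new Map<string, T[]>();
  for (const file of files) {
    const bucket = groups.get(file.conversationId) ?? [];
    bucket.push(file);
    groups.set(file.conversationId, bucket);
  }
  return groups; // Map insertion order preserves the newest-first input sort
}
```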


getConversation()

getConversation: (id: string) => Promise<StoredConversation | null>

Defined in: src/lib/db/chat/types.ts:721 

Parameters

| Parameter | Type |
| --------- | ---- |
| `id` | `string` |

Returns

Promise<StoredConversation | null>

Inherited from

BaseUseChatStorageResult.getConversation


getConversations()

getConversations: () => Promise<StoredConversation[]>

Defined in: src/lib/db/chat/types.ts:722 

Returns

Promise<StoredConversation[]>

Inherited from

BaseUseChatStorageResult.getConversations


getMessages()

getMessages: (conversationId: string) => Promise<StoredMessage[]>

Defined in: src/lib/db/chat/types.ts:725 

Parameters

| Parameter | Type |
| --------- | ---- |
| `conversationId` | `string` |

Returns

Promise<StoredMessage[]>

Inherited from

BaseUseChatStorageResult.getMessages


getVaultMemories()

getVaultMemories: (options?: object) => Promise<StoredVaultMemory[]>

Defined in: src/react/useChatStorage.ts:688 

Get all vault memories for context injection. Returns non-deleted memories sorted by creation date (newest first).

Parameters

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `options?` | `object` | Optional filtering (scopes to include) |
| `options.scopes?` | `string[]` | |

Returns

Promise<StoredVaultMemory[]>
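Example

A common use for getVaultMemories is assembling the memoryContext string passed to sendMessage. A minimal sketch: only the content field of StoredVaultMemory is assumed, and the header wording is illustrative:

```typescript
// Build a compact context block from vault memories (newest first),
// stopping once a character budget is spent. The `content` field and
// the header text are assumptions for illustration.
function buildMemoryContext(
  memories: { content: string }[],
  maxChars = 2000,
): string {
  const lines: string[] = [];
  let used = 0;
  for (const memory of memories) {
    if (used + memory.content.length > maxChars) break;
    lines.push(`- ${memory.content}`);
    used += memory.content.length;
  }
  return lines.length ? `Known facts about the user:\n${lines.join("\n")}` : "";
}
```

The resulting string can be passed as sendMessage's memoryContext argument.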


isLoading

isLoading: boolean

Defined in: src/lib/db/chat/types.ts:716 

Inherited from

BaseUseChatStorageResult.isLoading


queueStatus

queueStatus: QueueStatus

Defined in: src/react/useChatStorage.ts:730 

Current status of the write queue.


searchVaultMemories()

searchVaultMemories: (query: string, searchOptions?: MemoryVaultSearchOptions) => Promise<VaultSearchResult[]>

Defined in: src/react/useChatStorage.ts:672 

Search vault memories programmatically using semantic similarity. Returns structured results sorted by descending similarity. Gracefully returns [] when auth is unavailable.

Parameters

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `query` | `string` | Natural language search query |
| `searchOptions?` | `MemoryVaultSearchOptions` | Optional search configuration (limit, minSimilarity, scopes) |

Returns

Promise<VaultSearchResult[]>
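Example

Because searchVaultMemories returns [] when auth is unavailable, callers should handle the empty case explicitly. A sketch assuming each VaultSearchResult exposes content and similarity fields (illustrative names, not the documented type):

```typescript
// Render search results for display, with an explicit empty-state message.
// The { content, similarity } shape is an assumption for illustration.
function summarizeSearch(
  results: { content: string; similarity: number }[],
): string {
  if (results.length === 0) return "No relevant memories found.";
  return results
    .map((r) => `${(r.similarity * 100).toFixed(0)}% match: ${r.content}`)
    .join("\n");
}
```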


sendMessage()

sendMessage: (args: object) => Promise<SendMessageWithStorageResult>

Defined in: src/react/useChatStorage.ts:618 

Sends a message to the AI and automatically persists both the user message and assistant response to the database.

This method handles the complete message lifecycle:

  1. Ensures a conversation exists (creates one if autoCreateConversation is enabled)
  2. Optionally includes conversation history for context
  3. Stores the user message before sending
  4. Streams the response via the underlying useChat hook
  5. Stores the assistant response (including usage stats, sources, and thinking)
  6. Handles abort/error states gracefully

Parameters

args: object

args.apiType?

ApiType

Override the API type for this specific request.

  • “responses”: OpenAI Responses API (supports thinking, reasoning, conversations)
  • “completions”: OpenAI Chat Completions API (wider model compatibility)

Useful when different models need different APIs within the same hook instance.

args.clientTools?

LlmapiChatCompletionTool[]

Client-side tools with optional executors. These tools run in the browser/app and can have JavaScript executor functions.

args.clientToolsFilter?

ClientToolsFilterFn

Dynamic filter for client-side tools based on prompt embeddings. Receives the prompt embedding(s) (or null for short messages) and all client tools, returns tool names to include. Tools not in the returned list are excluded from the request.

Example

```typescript
clientToolsFilter: (embeddings, tools) => {
  if (!embeddings) return []; // Short message — no client tools
  const matches = findMatchingTools(embeddings, pseudoServerTools);
  return matches.map(m => m.tool.name);
}
```

args.conversationId?

string

Explicitly specify the conversation ID to send this message to. If provided, bypasses the automatic conversation detection/creation. Useful when sending a message immediately after creating a conversation, to avoid race conditions with React state updates.

args.fileContext?

string

Additional context from preprocessed file attachments. Contains extracted text from Excel, Word, PDF, and other document files. Injected as a system message so it’s available throughout the conversation.

args.files?

FileMetadata[]

File attachments to include with the message (images, documents, etc.). Files with image MIME types and URLs are sent as image content parts. File metadata is stored with the message (URLs are stripped if they’re data URIs).

args.getThoughtProcess?

() => ActivityPhase[]

Callback to get activity phases AFTER streaming completes. Use this instead of thoughtProcess when phases are added dynamically during streaming (e.g., via server tool call events like “Searching…”, “Generating image…”).

If both thoughtProcess and getThoughtProcess are provided, getThoughtProcess takes precedence.

args.headers?

Record<string, string>

Custom HTTP headers to include with the API request. Useful for passing additional authentication, tracking, or feature flags.

args.imageModel?

string

User-selected image generation model for server-side enforcement.

args.includeHistory?

boolean

Whether to automatically include previous messages from the conversation as context. When true, fetches stored messages and prepends them to the request. Ignored if messages is provided.

Default

true

args.maxHistoryMessages?

number

Maximum number of historical messages to include when includeHistory is true. Only the most recent N messages are included to manage context window size.

Default

50

args.maxOutputTokens?

number

Maximum number of tokens to generate in the response. Use this to limit response length and control costs.

args.maxToolRounds?

number

Maximum number of tool execution rounds before forcing the model to respond with text. After this many rounds, toolChoice is set to "none" on the next continuation, so the model produces a text answer using whatever tool results it has gathered.

Default

3
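Example

The round-limiting rule above amounts to a one-line decision. This sketch restates the described behavior; it is not the library's internal code:

```typescript
// After maxToolRounds completed tool rounds, force a plain-text answer
// by switching toolChoice to "none" (per the behavior described above).
function nextToolChoice(completedRounds: number, maxToolRounds = 3): "auto" | "none" {
  return completedRounds >= maxToolRounds ? "none" : "auto";
}
```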

args.memoryContext?

string

Additional context from memory/RAG system to include in the request. Typically contains retrieved relevant information from past conversations.

args.messages

LlmapiMessage[]

The message array to send to the AI.

Uses the modern array format that supports multimodal content (text, images, files). The last user message in this array will be extracted and stored in the database.

When includeHistory is true (default), conversation history is prepended. When includeHistory is false, only these messages are sent.

Example

```typescript
// Simple usage
sendMessage({
  messages: [
    { role: "user", content: [{ type: "text", text: "Hello!" }] }
  ]
})

// With system prompt and history disabled
sendMessage({
  messages: [
    { role: "system", content: [{ type: "text", text: "You are helpful" }] },
    { role: "user", content: [{ type: "text", text: "Question" }] },
  ],
  includeHistory: false
})

// With images
sendMessage({
  messages: [
    { role: "user", content: [
      { type: "text", text: "What's in this image?" },
      { type: "image_url", image_url: { url: "data:image/png;base64,..." } }
    ]}
  ]
})
```

args.model?

string

The model identifier to use for this request (e.g., “gpt-4o”, “claude-sonnet-4-20250514”). If not specified, uses the default model configured on the server.

args.onData?

(chunk: string) => void

Per-request callback invoked with each streamed response chunk. Overrides the hook-level onData callback for this request only. Use this to update UI as the response streams in.

args.onThinking?

(chunk: string) => void

Per-request callback for thinking/reasoning chunks. Called with delta chunks as the model “thinks” through a problem. Use this to display thinking progress in the UI.

args.parentMessageId?

string

Parent message ID for branching (edit/regenerate). Sets on the user message.

args.reasoning?

LlmapiResponseReasoning

Reasoning configuration for o-series and other reasoning models. Controls reasoning effort level and whether to include reasoning summary.

args.searchContext?

string

Additional context from search results to include in the request. Typically contains relevant information from web or document searches.

args.serverTools?

ServerToolsFilter

Server-side tools to include from /api/v1/tools.

  • undefined: Include all server-side tools (default)
  • string[]: Include only tools with these names
  • []: Include no server-side tools
  • function: Dynamic filter that receives prompt embedding(s) and all tools, returns tool names to include. Useful for semantic tool matching.

Example

```typescript
// Include only specific server tools
serverTools: ["generate_cloud_image", "perplexity_search"]

// Disable server tools for this request
serverTools: []

// Semantic tool matching based on prompt
serverTools: (embeddings, tools) => {
  const matches = findMatchingTools(embeddings, tools, { limit: 5 });
  return matches.map(m => m.tool.name);
}
```

args.skipStorage?

boolean

Skip all storage operations (conversation, messages, embeddings, media). Use this for one-off tasks like title generation where you don’t want to pollute the database with utility messages.

When true:

  • No conversation is created or required
  • Messages are not stored in the database
  • No embeddings are generated
  • No media/files are processed for storage
  • Result will not include userMessage or assistantMessage

Default

false

Example

```typescript
// Generate a title without storing anything
const { data } = await sendMessage({
  messages: [{ role: "user", content: [{ type: "text", text: "Generate a title for: ..." }] }],
  skipStorage: true,
  includeHistory: false,
});
```

args.sources?

SearchSource[]

Search sources to attach to the stored message for citation/reference. Note: Sources are also automatically extracted from tool_call_events in the response.

args.summarizeHistory?

boolean

Enable progressive summarization of conversation history.

When enabled, older messages are summarized into a compact text using a cheap model, while recent messages are kept verbatim. This reduces input tokens by 50-70% for long conversations.

Requires includeHistory to be true (default). When includeHistory is false or summarizeHistory is false, all history is sent verbatim (current behavior).

Default

false

args.summaryMinWindowMessages?

number

Minimum number of recent messages to always keep verbatim (never summarized). Ensures the LLM always has immediate conversational context. Even if these messages exceed the token threshold, they are kept.

Default

4 (2 user-assistant turns)

args.summaryModel?

string

Model to use for generating conversation summaries. Should be a cheap, fast model since summarization is a straightforward task.

Default

'cerebras/qwen-3-235b-a22b-instruct-2507' ($0.60/1M input tokens)

args.summaryTokenThreshold?

number

Token threshold for conversation history before summarization triggers.

When the total token count of the cached summary + unsummarized messages exceeds this value, older messages are summarized to fit within the budget.

How to choose a value:

  • Lower (2000-3000): aggressive summarization, lowest cost, less verbatim context.
  • Default (4000): balanced — keeps history under ~$0.01/message at typical pricing ($2.50/1M tokens). Triggers for most conversations after 5-10 turns.
  • Higher (8000-16000): less frequent summarization, more context, higher cost. Good for code review or legal conversations needing precise recall.

The fixed overhead (system prompt + tools + memory ≈ 3,500 tokens) is NOT included — it is additive. Total input ≈ overhead + threshold + current message.

Default

4000
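Example

The sizing guidance above can be sketched as a rough estimator. The 3,500-token overhead figure comes from the text; the chars/4 conversion is a common heuristic and an assumption here, not part of the API:

```typescript
// Rough total-input estimate: overhead + history budget + current message.
// chars/4 is a crude token heuristic (assumption, not part of the API).
function estimateInputTokens(
  historyTokens: number,        // summary + unsummarized messages (<= threshold)
  currentMessageChars: number,  // length of the new user message
  overheadTokens = 3500,        // system prompt + tools + memory (from the text)
): number {
  return overheadTokens + historyTokens + Math.ceil(currentMessageChars / 4);
}
```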

args.temperature?

number

Controls randomness in the response (0.0 to 2.0). Lower values make output more deterministic, higher values more creative.

args.thinking?

LlmapiThinkingOptions

Extended thinking configuration for Anthropic models (Claude). Enables the model to think through complex problems step by step before generating the final response.

args.thoughtProcess?

ActivityPhase[]

Activity phases for tracking the request lifecycle in the UI. Each phase represents a step like “Searching”, “Thinking”, “Generating”. The final phase is automatically marked as completed when stored.

Note: If you need activity phases that are added during streaming (e.g., server tool calls), use getThoughtProcess callback instead, which captures phases AFTER streaming completes.

args.toolChoice?

string

Controls which tool the model should use:

  • “auto”: Model decides whether to use a tool (default)
  • “any”: Model must use one of the provided tools
  • “none”: Model cannot use any tools
  • “required”: Model must use a tool
  • Specific tool name: Model must use that specific tool

Returns

Promise<SendMessageWithStorageResult>

Example

```typescript
const result = await sendMessage({
  messages: [
    { role: "user", content: [{ type: "text", text: "Explain quantum computing" }] }
  ],
  model: "gpt-4o",
  includeHistory: true,
  onData: (chunk) => setStreamingText(prev => prev + chunk),
});

if (result.error) {
  console.error("Failed:", result.error);
} else {
  console.log("Stored message ID:", result.assistantMessage.uniqueId);
}
```

setConversationId()

setConversationId: (id: string | null) => void

Defined in: src/lib/db/chat/types.ts:719 

Parameters

| Parameter | Type |
| --------- | ---- |
| `id` | `string \| null` |

Returns

void

Inherited from

BaseUseChatStorageResult.setConversationId


stop()

stop: () => void

Defined in: src/lib/db/chat/types.ts:717 

Returns

void

Inherited from

BaseUseChatStorageResult.stop


updateConversationTitle()

updateConversationTitle: (id: string, title: string) => Promise<boolean>

Defined in: src/lib/db/chat/types.ts:723 

Parameters

| Parameter | Type |
| --------- | ---- |
| `id` | `string` |
| `title` | `string` |

Returns

Promise<boolean>

Inherited from

BaseUseChatStorageResult.updateConversationTitle


updateVaultMemory()

updateVaultMemory: (id: string, content: string, scope?: string) => Promise<StoredVaultMemory | null>

Defined in: src/react/useChatStorage.ts:702 

Update an existing vault memory’s content.

Parameters

| Parameter | Type | Description |
| --------- | ---- | ----------- |
| `id` | `string` | ID of the memory to update |
| `content` | `string` | New memory content |
| `scope?` | `string` | Optional new scope for the memory |

Returns

Promise<StoredVaultMemory | null>

The updated memory, or null if not found.


vaultEmbeddingCache

vaultEmbeddingCache: VaultEmbeddingCache

Defined in: src/react/useChatStorage.ts:681 

The shared vault embedding cache. Use this to eagerly embed content when saving vault memories (via eagerEmbedContent).
