# UseChatStorageResult

Defined in: src/react/useChatStorage.ts:589

Result returned by the useChatStorage hook (React version). Extends the base result with a React-specific sendMessage signature.

## Extends

- `BaseUseChatStorageResult`

## Properties
### clearQueue()

```ts
clearQueue: () => void
```

Defined in: src/react/useChatStorage.ts:725

Clear all queued operations for the current wallet. Discards pending operations without writing them.

#### Returns

`void`
### conversationId

```ts
conversationId: string | null
```

Defined in: src/lib/db/chat/types.ts:718

#### Inherited from

`BaseUseChatStorageResult.conversationId`
### createConversation()

```ts
createConversation: (options?: CreateConversationOptions) => Promise<StoredConversation>
```

Defined in: src/lib/db/chat/types.ts:720

#### Parameters

| Parameter | Type |
| --- | --- |
| `options?` | `CreateConversationOptions` |

#### Returns

`Promise<StoredConversation>`

#### Inherited from

`BaseUseChatStorageResult.createConversation`
### createMemoryEngineTool()

```ts
createMemoryEngineTool: (searchOptions?: Partial<MemoryEngineSearchOptions>) => ToolConfig
```

Defined in: src/react/useChatStorage.ts:643

Create a memory engine tool for the LLM to search past conversations. The tool is pre-configured with the hook's storage context and auth.

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `searchOptions?` | `Partial<MemoryEngineSearchOptions>` | Optional search configuration (limit, minSimilarity, etc.) |

#### Returns

`ToolConfig`

A ToolConfig that can be passed to sendMessage's clientTools.
#### Example

```ts
const memoryTool = createMemoryEngineTool({ limit: 5 });
await sendMessage({
  messages: [...],
  clientTools: [memoryTool],
});
```

### createMemoryVaultSearchTool()
```ts
createMemoryVaultSearchTool: (searchOptions?: MemoryVaultSearchOptions) => ToolConfig
```

Defined in: src/react/useChatStorage.ts:662

Create a memory vault search tool for the LLM to search vault memories using semantic similarity. Pre-configured with vault context, auth, and a shared embedding cache that is pre-populated on init.

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `searchOptions?` | `MemoryVaultSearchOptions` | Optional search configuration (limit, minSimilarity) |

#### Returns

`ToolConfig`

A ToolConfig that can be passed to sendMessage's clientTools.
### createMemoryVaultTool()

```ts
createMemoryVaultTool: (options?: MemoryVaultToolOptions) => ToolConfig
```

Defined in: src/react/useChatStorage.ts:652

Create a memory vault tool for the LLM to save and update persistent memories. The tool is pre-configured with the hook's vault context and encryption.

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `options?` | `MemoryVaultToolOptions` | Optional configuration (onSave callback for confirmation) |

#### Returns

`ToolConfig`

A ToolConfig that can be passed to sendMessage's clientTools.
### createVaultMemory()

```ts
createVaultMemory: (content: string, scope?: string) => Promise<StoredVaultMemory>
```

Defined in: src/react/useChatStorage.ts:695

Create a new vault memory with the given content.

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `content` | `string` | The memory text |
| `scope?` | `string` | Optional scope (defaults to "private") |

#### Returns

`Promise<StoredVaultMemory>`
### deleteConversation()

```ts
deleteConversation: (id: string) => Promise<boolean>
```

Defined in: src/lib/db/chat/types.ts:724

#### Parameters

| Parameter | Type |
| --- | --- |
| `id` | `string` |

#### Returns

`Promise<boolean>`

#### Inherited from

`BaseUseChatStorageResult.deleteConversation`
### deleteVaultMemory()

```ts
deleteVaultMemory: (id: string) => Promise<boolean>
```

Defined in: src/react/useChatStorage.ts:712

Delete a vault memory by its ID (soft delete).

#### Parameters

| Parameter | Type |
| --- | --- |
| `id` | `string` |

#### Returns

`Promise<boolean>`

`true` if the memory was found and deleted.
### flushQueue()

```ts
flushQueue: () => Promise<FlushResult>
```

Defined in: src/react/useChatStorage.ts:719

Manually flush all queued operations for the current wallet. Operations are encrypted and written to the database. Requires the encryption key to be available.

#### Returns

`Promise<FlushResult>`
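Because the flush can fail transiently (for example, when the encryption key is momentarily unavailable), callers sometimes wrap it in a small retry loop. A minimal sketch, assuming only that the flush function is async and may throw; `flushWithRetry` is a hypothetical helper, not part of the hook:

```typescript
// Hypothetical retry wrapper around an async flush function such as
// flushQueue. Retries up to `attempts` times with a fixed delay.
async function flushWithRetry<T>(
  flush: () => Promise<T>,
  attempts = 3,
  delayMs = 250,
): Promise<T> {
  let lastError: unknown;
  for (let i = 0; i < attempts; i++) {
    try {
      return await flush();
    } catch (err) {
      lastError = err;
      // Wait before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, delayMs));
    }
  }
  throw lastError;
}
```

In the hook this would be called as `await flushWithRetry(flushQueue)`.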
### getAllFiles()

```ts
getAllFiles: (options?: object) => Promise<StoredFileWithContext[]>
```

Defined in: src/react/useChatStorage.ts:623

Get all files from all conversations, sorted by creation date (newest first). Returns files with conversation context for building file browser UIs.

#### Parameters

| Parameter | Type |
| --- | --- |
| `options?` | `object` |

#### Returns

`Promise<StoredFileWithContext[]>`
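For a file-browser UI it is common to group the flat list by conversation. A sketch over an assumed minimal shape of `StoredFileWithContext` (only the `conversationId` and `name` fields are assumed here; real entries carry more metadata):

```typescript
// Assumed minimal shape of a file with conversation context.
interface FileWithContext {
  conversationId: string; // assumed field on StoredFileWithContext
  name: string;           // assumed field
}

// Group a flat file list by conversation ID. Insertion order within each
// group is preserved, so files stay newest-first as getAllFiles returns them.
function groupByConversation(
  files: FileWithContext[],
): Map<string, FileWithContext[]> {
  const groups = new Map<string, FileWithContext[]>();
  for (const file of files) {
    const bucket = groups.get(file.conversationId) ?? [];
    bucket.push(file);
    groups.set(file.conversationId, bucket);
  }
  return groups;
}
```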
### getConversation()

```ts
getConversation: (id: string) => Promise<StoredConversation | null>
```

Defined in: src/lib/db/chat/types.ts:721

#### Parameters

| Parameter | Type |
| --- | --- |
| `id` | `string` |

#### Returns

`Promise<StoredConversation | null>`

#### Inherited from

`BaseUseChatStorageResult.getConversation`
### getConversations()

```ts
getConversations: () => Promise<StoredConversation[]>
```

Defined in: src/lib/db/chat/types.ts:722

#### Returns

`Promise<StoredConversation[]>`

#### Inherited from

`BaseUseChatStorageResult.getConversations`
### getMessages()

```ts
getMessages: (conversationId: string) => Promise<StoredMessage[]>
```

Defined in: src/lib/db/chat/types.ts:725

#### Parameters

| Parameter | Type |
| --- | --- |
| `conversationId` | `string` |

#### Returns

`Promise<StoredMessage[]>`

#### Inherited from

`BaseUseChatStorageResult.getMessages`
### getVaultMemories()

```ts
getVaultMemories: (options?: object) => Promise<StoredVaultMemory[]>
```

Defined in: src/react/useChatStorage.ts:688

Get all vault memories for context injection. Returns non-deleted memories sorted by creation date (newest first).

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `options?` | `object` | Optional filtering (scopes to include) |

#### Returns

`Promise<StoredVaultMemory[]>`
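A typical use of the returned list is flattening it into a context block for a system prompt. A sketch assuming only that each `StoredVaultMemory` exposes a `content` string (the field name is an assumption):

```typescript
// Assumed minimal shape of a stored vault memory.
interface Memory {
  content: string; // assumed field on StoredVaultMemory
}

// Join vault memory contents into a single bulleted context block,
// capped at a rough character budget so the prompt stays bounded.
function buildMemoryContext(memories: Memory[], maxChars = 2000): string {
  const lines: string[] = [];
  let used = 0;
  for (const memory of memories) {
    const line = `- ${memory.content}`;
    if (used + line.length > maxChars) break; // stop once over budget
    lines.push(line);
    used += line.length;
  }
  return lines.join("\n");
}
```

Because getVaultMemories returns newest-first, the cap keeps the most recent memories.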
### isLoading

```ts
isLoading: boolean
```

Defined in: src/lib/db/chat/types.ts:716

#### Inherited from

`BaseUseChatStorageResult.isLoading`
### queueStatus

```ts
queueStatus: QueueStatus
```

Defined in: src/react/useChatStorage.ts:730

Current status of the write queue.
### searchVaultMemories()

```ts
searchVaultMemories: (query: string, searchOptions?: MemoryVaultSearchOptions) => Promise<VaultSearchResult[]>
```

Defined in: src/react/useChatStorage.ts:672

Search vault memories programmatically using semantic similarity. Returns structured results sorted by descending similarity. Gracefully returns `[]` when auth is unavailable.

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `query` | `string` | Natural language search query |
| `searchOptions?` | `MemoryVaultSearchOptions` | Optional search configuration (limit, minSimilarity, scopes) |

#### Returns

`Promise<VaultSearchResult[]>`
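Since results come back sorted by descending similarity, a caller can apply its own cutoff before injecting matches into a prompt. A sketch assuming each `VaultSearchResult` carries `content` and `similarity` fields (the field names are assumptions):

```typescript
// Assumed minimal shape of a vault search result.
interface SearchResult {
  content: string;    // assumed field on VaultSearchResult
  similarity: number; // assumed field; higher means more similar
}

// Keep only results at or above a similarity cutoff, up to `limit` entries.
// Assumes the input is already sorted by descending similarity, as
// searchVaultMemories documents.
function topMatches(
  results: SearchResult[],
  minSimilarity = 0.75,
  limit = 5,
): SearchResult[] {
  return results
    .filter((r) => r.similarity >= minSimilarity)
    .slice(0, limit);
}
```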
### sendMessage()

```ts
sendMessage: (args: object) => Promise<SendMessageWithStorageResult>
```

Defined in: src/react/useChatStorage.ts:618

Sends a message to the AI and automatically persists both the user message and the assistant response to the database.

This method handles the complete message lifecycle:

- Ensures a conversation exists (creates one if `autoCreateConversation` is enabled)
- Optionally includes conversation history for context
- Stores the user message before sending
- Streams the response via the underlying `useChat` hook
- Stores the assistant response (including usage stats, sources, and thinking)
- Handles abort/error states gracefully
#### Parameters

Parameter names marked `‐` were lost in extraction; names are given only where the surrounding documentation confirms them.

| Parameter | Type | Description |
| --- | --- | --- |
| `args` | `object` | ‐ |
| ‐ | ‐ | Override the API type for this specific request. Useful when different models need different APIs within the same hook instance. |
| `clientTools` | ‐ | Client-side tools with optional executors. These tools run in the browser/app and can have JavaScript executor functions. |
| ‐ | ‐ | Dynamic filter for client-side tools based on prompt embeddings. Receives the prompt embedding(s) (or null for short messages) and all client tools, and returns the tool names to include. Tools not in the returned list are excluded from the request. |
| `conversationId` | `string` | Explicitly specify the conversation ID to send this message to. If provided, bypasses automatic conversation detection/creation. Useful when sending a message immediately after creating a conversation, to avoid race conditions with React state updates. |
| ‐ | ‐ | Additional context from preprocessed file attachments. Contains extracted text from Excel, Word, PDF, and other document files. Injected as a system message so it is available throughout the conversation. |
| ‐ | ‐ | File attachments to include with the message (images, documents, etc.). Files with image MIME types and URLs are sent as image content parts. File metadata is stored with the message (URLs are stripped if they are data URIs). |
| ‐ | `() => …` | Callback to get activity phases AFTER streaming completes. |
| ‐ | ‐ | Custom HTTP headers to include with the API request. Useful for passing additional authentication, tracking, or feature flags. |
| ‐ | ‐ | User-selected image generation model for server-side enforcement. |
| `includeHistory` | `boolean` | Whether to automatically include previous messages from the conversation as context. When true, fetches stored messages and prepends them to the request. |
| ‐ | `number` | Maximum number of historical messages to include when `includeHistory` is enabled. |
| ‐ | `number` | Maximum number of tokens to generate in the response. Use this to limit response length and control costs. |
| ‐ | `number` | Maximum number of tool execution rounds before forcing the model to respond with text. |
| ‐ | ‐ | Additional context from the memory/RAG system to include in the request. Typically contains retrieved relevant information from past conversations. |
| `messages` | ‐ | The message array to send to the AI. Uses the modern array format that supports multimodal content (text, images, files). The last user message in this array will be extracted and stored in the database. |
| `model` | `string` | The model identifier to use for this request (e.g., "gpt-4o", "claude-sonnet-4-20250514"). If not specified, uses the default model configured on the server. |
| `onData` | `(…) => …` | Per-request callback invoked with each streamed response chunk. Overrides the hook-level callback. |
| ‐ | `(…) => …` | Per-request callback for thinking/reasoning chunks. Called with delta chunks as the model "thinks" through a problem. Use this to display thinking progress in the UI. |
| ‐ | `string` | Parent message ID for branching (edit/regenerate). Set on the user message. |
| ‐ | ‐ | Reasoning configuration for o-series and other reasoning models. Controls the reasoning effort level and whether to include a reasoning summary. |
| ‐ | ‐ | Additional context from search results to include in the request. Typically contains relevant information from web or document searches. |
| ‐ | ‐ | Server-side tools to include from /api/v1/tools. |
| ‐ | `boolean` | Skip all storage operations (conversation, messages, embeddings, media). Use this for one-off tasks like title generation where you don't want to pollute the database with utility messages. |
| ‐ | ‐ | Search sources to attach to the stored message for citation/reference. Note: sources are also automatically extracted from tool_call_events in the response. |
| ‐ | `boolean` | Enable progressive summarization of conversation history. When enabled, older messages are summarized into compact text using a cheap model, while recent messages are kept verbatim. This reduces input tokens by 50-70% for long conversations. |
| ‐ | `number` | Minimum number of recent messages to always keep verbatim (never summarized). Ensures the LLM always has immediate conversational context; even if these messages exceed the token threshold, they are kept. |
| ‐ | `string` | Model to use for generating conversation summaries. Should be a cheap, fast model, since summarization is a straightforward task. |
| ‐ | `number` | Token threshold for conversation history before summarization triggers. When the total token count of the cached summary plus unsummarized messages exceeds this value, older messages are summarized to fit within the budget. The fixed overhead (system prompt + tools + memory ≈ 3,500 tokens) is NOT included in the threshold; it is additive. Total input ≈ overhead + threshold + current message. |
| ‐ | `number` | Controls randomness in the response (0.0 to 2.0). Lower values make output more deterministic, higher values more creative. |
| ‐ | ‐ | Extended thinking configuration for Anthropic models (Claude). Enables the model to think through complex problems step by step before generating the final response. |
| ‐ | ‐ | Activity phases for tracking the request lifecycle in the UI. Each phase represents a step like "Searching", "Thinking", "Generating". The final phase is automatically marked as completed when stored. Note: if you need activity phases that are added during streaming (e.g., server tool calls), use the post-stream callback instead. |
| ‐ | ‐ | Controls which tool the model should use. |
#### Returns

`Promise<SendMessageWithStorageResult>`

#### Example

```ts
const result = await sendMessage({
  content: "Explain quantum computing",
  model: "gpt-4o",
  includeHistory: true,
  onData: (chunk) => setStreamingText(prev => prev + chunk),
});
if (result.error) {
  console.error("Failed:", result.error);
} else {
  console.log("Stored message ID:", result.assistantMessage.uniqueId);
}
```

### setConversationId()
```ts
setConversationId: (id: string | null) => void
```

Defined in: src/lib/db/chat/types.ts:719

#### Parameters

| Parameter | Type |
| --- | --- |
| `id` | `string \| null` |

#### Returns

`void`

#### Inherited from

`BaseUseChatStorageResult.setConversationId`
### stop()

```ts
stop: () => void
```

Defined in: src/lib/db/chat/types.ts:717

#### Returns

`void`

#### Inherited from

`BaseUseChatStorageResult.stop`
### updateConversationTitle()

```ts
updateConversationTitle: (id: string, title: string) => Promise<boolean>
```

Defined in: src/lib/db/chat/types.ts:723

#### Parameters

| Parameter | Type |
| --- | --- |
| `id` | `string` |
| `title` | `string` |

#### Returns

`Promise<boolean>`

#### Inherited from

`BaseUseChatStorageResult.updateConversationTitle`
### updateVaultMemory()

```ts
updateVaultMemory: (id: string, content: string, scope?: string) => Promise<StoredVaultMemory | null>
```

Defined in: src/react/useChatStorage.ts:702

Update an existing vault memory's content.

#### Parameters

| Parameter | Type | Description |
| --- | --- | --- |
| `id` | `string` | ‐ |
| `content` | `string` | ‐ |
| `scope?` | `string` | Optional new scope for the memory |

#### Returns

`Promise<StoredVaultMemory | null>`

The updated memory, or `null` if not found.
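Because updateVaultMemory resolves to `null` when the ID is unknown, an upsert pattern falls out naturally: try the update, and create on miss. A sketch with the hook functions passed in as parameters; `upsertMemory` is a hypothetical helper, not part of the hook:

```typescript
// Hypothetical upsert: update an existing memory, or create a new one if
// the update reports that the ID was not found (resolves to null).
async function upsertMemory<T>(
  update: (id: string, content: string) => Promise<T | null>,
  create: (content: string) => Promise<T>,
  id: string,
  content: string,
): Promise<T> {
  const updated = await update(id, content);
  return updated ?? create(content);
}
```

In the hook this would be called as `upsertMemory(updateVaultMemory, createVaultMemory, id, text)`.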
### vaultEmbeddingCache

```ts
vaultEmbeddingCache: VaultEmbeddingCache
```

Defined in: src/react/useChatStorage.ts:681

The shared vault embedding cache. Use this to eagerly embed content when saving vault memories (via eagerEmbedContent).