SendMessageWithStorageArgs
Defined in: src/react/useChatStorage.ts:519
Arguments for sendMessage with storage (React version)
Extends base arguments with headers and apiType support.
Extends
BaseSendMessageWithStorageArgs
Properties
apiType?
optional apiType: ApiType
Defined in: src/react/useChatStorage.ts:533
Override the API type for this specific request.
- "responses": OpenAI Responses API (supports thinking, reasoning, conversations)
- "completions": OpenAI Chat Completions API (wider model compatibility)
Useful when different models need different APIs within the same hook instance.
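A minimal sketch of one way to choose an apiType per model before calling sendMessage. The prefix checks below are illustrative assumptions, not library behavior:

```typescript
// Sketch: picking an API type per model. The prefix heuristics are
// assumptions for illustration, not part of the library.
type ApiType = "responses" | "completions";

function apiTypeFor(model: string): ApiType {
  // Route reasoning-oriented OpenAI models through the Responses API;
  // fall back to the broadly compatible Completions API otherwise.
  return model.startsWith("o") || model.startsWith("gpt-")
    ? "responses"
    : "completions";
}
```

The result can be passed as `apiType` on a single sendMessage call without changing the hook-level default.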
clientTools?
optional clientTools: LlmapiChatCompletionTool[]
Defined in: src/lib/db/chat/types.ts:603
Client-side tools with optional executors. These tools run in the browser/app and can have JavaScript executor functions.
Inherited from
BaseSendMessageWithStorageArgs.clientTools
clientToolsFilter?
optional clientToolsFilter: ClientToolsFilterFn
Defined in: src/lib/db/chat/types.ts:640
Dynamic filter for client-side tools based on prompt embeddings. Receives the prompt embedding(s) (or null for short messages) and all client tools, returns tool names to include. Tools not in the returned list are excluded from the request.
Example
clientToolsFilter: (embeddings, tools) => {
if (!embeddings) return []; // Short message — no client tools
const matches = findMatchingTools(embeddings, pseudoServerTools);
return matches.map(m => m.tool.name);
}
Inherited from
BaseSendMessageWithStorageArgs.clientToolsFilter
conversationId?
optional conversationId: string
Defined in: src/react/useChatStorage.ts:541
Explicitly specify the conversation ID to send this message to. If provided, bypasses the automatic conversation detection/creation. Useful when sending a message immediately after creating a conversation, to avoid race conditions with React state updates.
fileContext?
optional fileContext: string
Defined in: src/lib/db/chat/types.ts:558
Additional context from preprocessed file attachments. Contains extracted text from Excel, Word, PDF, and other document files. Injected as a system message so it’s available throughout the conversation.
Inherited from
BaseSendMessageWithStorageArgs.fileContext
files?
optional files: FileMetadata[]
Defined in: src/lib/db/chat/types.ts:532
File attachments to include with the message (images, documents, etc.). Files with image MIME types and URLs are sent as image content parts. File metadata is stored with the message (URLs are stripped if they’re data URIs).
Inherited from
BaseSendMessageWithStorageArgs.files
getThoughtProcess()?
optional getThoughtProcess: () => ActivityPhase[]
Defined in: src/lib/db/chat/types.ts:583
Callback to get activity phases AFTER streaming completes.
Use this instead of thoughtProcess when phases are added dynamically during streaming
(e.g., via server tool call events like “Searching…”, “Generating image…”).
If both thoughtProcess and getThoughtProcess are provided, getThoughtProcess takes precedence.
Returns
ActivityPhase[]
Inherited from
BaseSendMessageWithStorageArgs.getThoughtProcess
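A sketch of the intended pattern: phases accumulate in a mutable list while streaming runs, and the callback hands back the final list once streaming completes. ActivityPhase is simplified here to an assumed minimal shape:

```typescript
// Sketch: phases are pushed during streaming (e.g., from server tool call
// events), and getThoughtProcess returns whatever has accumulated.
// ActivityPhase is reduced to an assumed minimal shape for illustration.
type ActivityPhase = { label: string; completed: boolean };

const phases: ActivityPhase[] = [];
const getThoughtProcess = (): ActivityPhase[] => phases;

// During streaming, event handlers push phases as they happen:
phases.push({ label: "Searching", completed: true });
phases.push({ label: "Generating", completed: false });
```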
headers?
optional headers: Record<string, string>
Defined in: src/react/useChatStorage.ts:524
Custom HTTP headers to include with the API request. Useful for passing additional authentication, tracking, or feature flags.
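A minimal sketch of assembling per-request headers; the header names below are illustrative, not ones the library requires:

```typescript
// Sketch: building a headers record for one request.
// "X-Trace-Id" and "X-Feature-Flags" are illustrative names.
function buildRequestHeaders(traceId: string, flags: string[]): Record<string, string> {
  return {
    "X-Trace-Id": traceId,
    "X-Feature-Flags": flags.join(","),
  };
}
```

Pass the result as `headers` on a single sendMessage call; it applies to that request only.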
imageModel?
optional imageModel: string
Defined in: src/lib/db/chat/types.ts:674
User-selected image generation model for server-side enforcement.
Inherited from
BaseSendMessageWithStorageArgs.imageModel
includeHistory?
optional includeHistory: boolean
Defined in: src/lib/db/chat/types.ts:467
Whether to automatically include previous messages from the conversation as context.
When true, fetches stored messages and prepends them to the request.
Ignored if messages is provided.
Default
true
Inherited from
BaseSendMessageWithStorageArgs.includeHistory
maxHistoryMessages?
optional maxHistoryMessages: number
Defined in: src/lib/db/chat/types.ts:474
Maximum number of historical messages to include when includeHistory is true.
Only the most recent N messages are included to manage context window size.
Default
50
Inherited from
BaseSendMessageWithStorageArgs.maxHistoryMessages
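The documented trimming behavior can be sketched as a pure function, keeping only the most recent N stored messages:

```typescript
// Sketch of the documented behavior: only the most recent
// maxHistoryMessages stored messages are included as context.
function trimHistory<T>(history: T[], maxHistoryMessages = 50): T[] {
  return history.slice(-maxHistoryMessages);
}
```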
maxOutputTokens?
optional maxOutputTokens: number
Defined in: src/lib/db/chat/types.ts:597
Maximum number of tokens to generate in the response. Use this to limit response length and control costs.
Inherited from
BaseSendMessageWithStorageArgs.maxOutputTokens
maxToolRounds?
optional maxToolRounds: number
Defined in: src/lib/db/chat/types.ts:658
Maximum number of tool execution rounds before forcing the model to respond with text.
After this many rounds, toolChoice is set to "none" on the next continuation,
so the model produces a text answer using whatever tool results it has gathered.
Default
3
Inherited from
BaseSendMessageWithStorageArgs.maxToolRounds
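The round-capping described above can be sketched as a small decision function:

```typescript
// Sketch of the documented round cap: once maxToolRounds tool rounds
// have completed, toolChoice is forced to "none" on the next
// continuation so the model answers in text.
function nextToolChoice(roundsCompleted: number, maxToolRounds = 3): "auto" | "none" {
  return roundsCompleted >= maxToolRounds ? "none" : "auto";
}
```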
memoryContext?
optional memoryContext: string
Defined in: src/lib/db/chat/types.ts:545
Additional context from memory/RAG system to include in the request. Typically contains retrieved relevant information from past conversations.
Inherited from
BaseSendMessageWithStorageArgs.memoryContext
messages
messages: LlmapiMessage[]
Defined in: src/lib/db/chat/types.ts:427
The message array to send to the AI.
Uses the modern array format that supports multimodal content (text, images, files). The last user message in this array will be extracted and stored in the database.
When includeHistory is true (default), conversation history is prepended.
When includeHistory is false, only these messages are sent.
Example
// Simple usage
sendMessage({
messages: [
{ role: "user", content: [{ type: "text", text: "Hello!" }] }
]
})
// With system prompt and history disabled
sendMessage({
messages: [
{ role: "system", content: [{ type: "text", text: "You are helpful" }] },
{ role: "user", content: [{ type: "text", text: "Question" }] },
],
includeHistory: false
})
// With images
sendMessage({
messages: [
{ role: "user", content: [
{ type: "text", text: "What's in this image?" },
{ type: "image_url", image_url: { url: "data:image/png;base64,..." } }
]}
]
})
Inherited from
BaseSendMessageWithStorageArgs.messages
model?
optional model: string
Defined in: src/lib/db/chat/types.ts:433
The model identifier to use for this request (e.g., "gpt-4o", "claude-sonnet-4-20250514"). If not specified, uses the default model configured on the server.
Inherited from
BaseSendMessageWithStorageArgs.model
onData()?
optional onData: (chunk: string) => void
Defined in: src/lib/db/chat/types.ts:539
Per-request callback invoked with each streamed response chunk.
Overrides the hook-level onData callback for this request only.
Use this to update UI as the response streams in.
Parameters
| Parameter | Type |
|---|---|
| `chunk` | `string` |
Returns
void
Inherited from
BaseSendMessageWithStorageArgs.onData
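A sketch of a typical accumulator for the streamed chunks. In a React component the accumulated text would feed a state setter; a plain string suffices to show the shape:

```typescript
// Sketch: accumulating streamed response chunks into a single string.
// In a real component, each append would also update UI state.
let fullText = "";
const onData = (chunk: string): void => {
  fullText += chunk; // append each delta as it streams in
};

// Simulated stream deltas:
["Hel", "lo", ", world"].forEach(onData);
```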
onThinking()?
optional onThinking: (chunk: string) => void
Defined in: src/lib/db/chat/types.ts:681
Per-request callback for thinking/reasoning chunks. Called with delta chunks as the model “thinks” through a problem. Use this to display thinking progress in the UI.
Parameters
| Parameter | Type |
|---|---|
| `chunk` | `string` |
Returns
void
Inherited from
BaseSendMessageWithStorageArgs.onThinking
parentMessageId?
optional parentMessageId: string
Defined in: src/lib/db/chat/types.ts:684
Parent message ID for branching (edit/regenerate). Set on the user message.
Inherited from
BaseSendMessageWithStorageArgs.parentMessageId
reasoning?
optional reasoning: LlmapiResponseReasoning
Defined in: src/lib/db/chat/types.ts:664
Reasoning configuration for o-series and other reasoning models. Controls reasoning effort level and whether to include reasoning summary.
Inherited from
BaseSendMessageWithStorageArgs.reasoning
searchContext?
optional searchContext: string
Defined in: src/lib/db/chat/types.ts:551
Additional context from search results to include in the request. Typically contains relevant information from web or document searches.
Inherited from
BaseSendMessageWithStorageArgs.searchContext
serverTools?
optional serverTools: ServerToolsFilter
Defined in: src/lib/db/chat/types.ts:626
Server-side tools to include from /api/v1/tools.
- undefined: Include all server-side tools (default)
- string[]: Include only tools with these names
- []: Include no server-side tools
- function: Dynamic filter that receives prompt embedding(s) and all tools, returns tool names to include. Useful for semantic tool matching.
Example
// Include only specific server tools
serverTools: ["generate_cloud_image", "perplexity_search"]
// Disable server tools for this request
serverTools: []
// Semantic tool matching based on prompt
serverTools: (embeddings, tools) => {
const matches = findMatchingTools(embeddings, tools, { limit: 5 });
return matches.map(m => m.tool.name);
}
Inherited from
BaseSendMessageWithStorageArgs.serverTools
skipStorage?
optional skipStorage: boolean
Defined in: src/lib/db/chat/types.ts:459
Skip all storage operations (conversation, messages, embeddings, media). Use this for one-off tasks like title generation where you don’t want to pollute the database with utility messages.
When true:
- No conversation is created or required
- Messages are not stored in the database
- No embeddings are generated
- No media/files are processed for storage
- Result will not include userMessage or assistantMessage
Default
false
Example
// Generate a title without storing anything
const { data } = await sendMessage({
messages: [{ role: "user", content: [{ type: "text", text: "Generate a title for: ..." }] }],
skipStorage: true,
includeHistory: false,
});
Inherited from
BaseSendMessageWithStorageArgs.skipStorage
sources?
optional sources: SearchSource[]
Defined in: src/lib/db/chat/types.ts:564
Search sources to attach to the stored message for citation/reference. Note: Sources are also automatically extracted from tool_call_events in the response.
Inherited from
BaseSendMessageWithStorageArgs.sources
summarizeHistory?
optional summarizeHistory: boolean
Defined in: src/lib/db/chat/types.ts:488
Enable progressive summarization of conversation history.
When enabled, older messages are summarized into a compact text using a cheap model, while recent messages are kept verbatim. This reduces input tokens by 50-70% for long conversations.
Requires includeHistory to be true (default). When includeHistory is false
or summarizeHistory is false, all history is sent verbatim (current behavior).
Default
false
Inherited from
BaseSendMessageWithStorageArgs.summarizeHistory
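The related summary* properties tune this behavior. A sketch of them together, using the documented defaults except for `summarizeHistory`, which is switched on here:

```typescript
// Sketch: the summarization knobs together. Values are the documented
// defaults, except summarizeHistory which is enabled for illustration.
const summarizationOptions = {
  summarizeHistory: true,
  summaryTokenThreshold: 4000,
  summaryMinWindowMessages: 4,
  summaryModel: "cerebras/qwen-3-235b-a22b-instruct-2507",
};
```

These can be spread into a sendMessage call alongside `messages` (with `includeHistory` left at its default of true).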
summaryMinWindowMessages?
optional summaryMinWindowMessages: number
Defined in: src/lib/db/chat/types.ts:517
Minimum number of recent messages to always keep verbatim (never summarized). Ensures the LLM always has immediate conversational context. Even if these messages exceed the token threshold, they are kept.
Default
4 (2 user-assistant turns)
Inherited from
BaseSendMessageWithStorageArgs.summaryMinWindowMessages
summaryModel?
optional summaryModel: string
Defined in: src/lib/db/chat/types.ts:525
Model to use for generating conversation summaries. Should be a cheap, fast model since summarization is a straightforward task.
Default
'cerebras/qwen-3-235b-a22b-instruct-2507' ($0.60/1M input tokens)
Inherited from
BaseSendMessageWithStorageArgs.summaryModel
summaryTokenThreshold?
optional summaryTokenThreshold: number
Defined in: src/lib/db/chat/types.ts:508
Token threshold for conversation history before summarization triggers.
When the total token count of the cached summary + unsummarized messages exceeds this value, older messages are summarized to fit within the budget.
How to choose a value:
- Lower (2000-3000): aggressive summarization, lowest cost, less verbatim context.
- Default (4000): balanced; triggers for most conversations after 5-10 turns.
- Higher (8000-16000): less frequent summarization, more context, higher cost. Good for code review or legal conversations needing precise recall.
The fixed overhead (system prompt + tools + memory ≈ 3,500 tokens) is NOT included — it is additive. Total input ≈ overhead + threshold + current message.
Default
4000
Inherited from
BaseSendMessageWithStorageArgs.summaryTokenThreshold
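The additive budget described above can be sketched as a rough estimate:

```typescript
// Rough estimate implied by the note above: total input is additive,
// fixed overhead (~3500 tokens) + summaryTokenThreshold + current message.
function estimateInputTokens(
  threshold: number,
  currentMessageTokens: number,
  overheadTokens = 3500,
): number {
  return overheadTokens + threshold + currentMessageTokens;
}
```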
temperature?
optional temperature: number
Defined in: src/lib/db/chat/types.ts:591
Controls randomness in the response (0.0 to 2.0). Lower values make output more deterministic, higher values more creative.
Inherited from
BaseSendMessageWithStorageArgs.temperature
thinking?
optional thinking: LlmapiThinkingOptions
Defined in: src/lib/db/chat/types.ts:671
Extended thinking configuration for Anthropic models (Claude). Enables the model to think through complex problems step by step before generating the final response.
Inherited from
BaseSendMessageWithStorageArgs.thinking
thoughtProcess?
optional thoughtProcess: ActivityPhase[]
Defined in: src/lib/db/chat/types.ts:574
Activity phases for tracking the request lifecycle in the UI. Each phase represents a step like “Searching”, “Thinking”, “Generating”. The final phase is automatically marked as completed when stored.
Note: If you need activity phases that are added during streaming (e.g., server tool calls),
use getThoughtProcess callback instead, which captures phases AFTER streaming completes.
Inherited from
BaseSendMessageWithStorageArgs.thoughtProcess
toolChoice?
optional toolChoice: string
Defined in: src/lib/db/chat/types.ts:650
Controls which tool the model should use:
- "auto": Model decides whether to use a tool (default)
- "any": Model must use one of the provided tools
- "none": Model cannot use any tools
- "required": Model must use a tool
- A specific tool name: Model must use that specific tool
Inherited from
BaseSendMessageWithStorageArgs.toolChoice