mirror of https://github.com/openclaw/openclaw.git
@@ -88,7 +88,8 @@ Defaults:
 1. `local` if a `memorySearch.local.modelPath` is configured and the file exists.
 2. `openai` if an OpenAI key can be resolved.
 3. `gemini` if a Gemini key can be resolved.
-4. Otherwise memory search stays disabled until configured.
+4. `voyage` if a Voyage key can be resolved.
+5. Otherwise memory search stays disabled until configured.
 - Local mode uses node-llama-cpp and may require `pnpm approve-builds`.
 - Uses sqlite-vec (when available) to accelerate vector search inside SQLite.
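As a sketch of what step 1 of the resolution order above keys off, here is a hypothetical config excerpt that would make `local` win. The file location and model filename are illustrative assumptions; only the `memorySearch.local.modelPath` key comes from the docs being diffed:

```jsonc
// Hypothetical excerpt (e.g. ~/.openclaw/openclaw.json — path assumed).
{
  "memorySearch": {
    "local": {
      // Step 1 fires only if this key is set AND the file exists on disk;
      // the filename below is a placeholder, not a shipped model.
      "modelPath": "~/models/embedding-model.gguf"
    }
  }
}
```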
@@ -96,7 +97,8 @@ Remote embeddings **require** an API key for the embedding provider. OpenClaw
 resolves keys from auth profiles, `models.providers.*.apiKey`, or environment
 variables. Codex OAuth only covers chat/completions and does **not** satisfy
 embeddings for memory search. For Gemini, use `GEMINI_API_KEY` or
-`models.providers.google.apiKey`. When using a custom OpenAI-compatible endpoint,
+`models.providers.google.apiKey`. For Voyage, use `VOYAGE_API_KEY` or
+`models.providers.voyage.apiKey`. When using a custom OpenAI-compatible endpoint,
 set `memorySearch.remote.apiKey` (and optional `memorySearch.remote.headers`).

 ### QMD backend (experimental)
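To make the key-resolution paragraph concrete, a minimal sketch of the two config-based options it names; the key values and the header name are invented placeholders, and per the text above the `GEMINI_API_KEY`/`VOYAGE_API_KEY` environment variables work interchangeably with the config keys:

```jsonc
// Sketch only — key values and the header name are placeholders.
{
  "models": {
    "providers": {
      "google": { "apiKey": "AIza..." }, // alternative to GEMINI_API_KEY
      "voyage": { "apiKey": "pa-..." }   // alternative to VOYAGE_API_KEY
    }
  },
  "memorySearch": {
    "remote": {
      // For a custom OpenAI-compatible endpoint: its own key, plus
      // optional extra headers sent with embedding requests.
      "apiKey": "sk-...",
      "headers": { "X-Example-Header": "value" }
    }
  }
}
```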
@@ -66,7 +66,8 @@ Semantic memory search uses **embedding APIs** when configured for remote providers

 - `memorySearch.provider = "openai"` → OpenAI embeddings
 - `memorySearch.provider = "gemini"` → Gemini embeddings
-- Optional fallback to OpenAI if local embeddings fail
+- `memorySearch.provider = "voyage"` → Voyage embeddings
+- Optional fallback to a remote provider if local embeddings fail

 You can keep it local with `memorySearch.provider = "local"` (no API usage).
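Finally, the provider switch itself, using only values confirmed by the list above; a one-line sketch rather than a full config:

```jsonc
{
  "memorySearch": {
    // "local" = no API usage; "openai" | "gemini" | "voyage" call that
    // provider's embedding API.
    "provider": "local"
  }
}
```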