* fix(ollama): add streaming config and fix OLLAMA_API_KEY env var support
Adds a configurable streaming parameter to the model configuration and sets streaming
to false by default for Ollama models. This addresses the corrupted-response
issue caused by upstream SDK bug badlogic/pi-mono#1205, where interleaved
content/reasoning deltas in streaming responses produce garbled output.
Changes:
- Add streaming param to AgentModelEntryConfig type
- Set streaming: false default for Ollama models
- Add OLLAMA_API_KEY to envMap (was missing, preventing env var auth)
- Document streaming configuration in Ollama provider docs
- Add tests for Ollama model configuration
Users can now configure streaming per model, and Ollama authentication via the
OLLAMA_API_KEY environment variable works correctly. For example:
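A minimal sketch of the new config surface, assuming the field names match the
description above (the real AgentModelEntryConfig has more fields):

```ts
// Sketch only: the shape is inferred from this changelog, not copied from
// the actual AgentModelEntryConfig definition.
interface AgentModelEntryConfig {
  model: string;
  // When false, the provider is called without streaming. Defaults to false
  // for Ollama models to avoid the interleaved-delta bug (badlogic/pi-mono#1205).
  streaming?: boolean;
}

// Hypothetical per-model entry: a non-streaming Ollama model, authenticated
// via the OLLAMA_API_KEY environment variable read from envMap.
const ollamaEntry: AgentModelEntryConfig = {
  model: "ollama/gpt-oss:20b",
  streaming: false,
};
```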
Fixes #8839
Related: badlogic/pi-mono#1205
* docs(ollama): use gpt-oss:20b as primary example
Updates documentation to use gpt-oss:20b as the primary example model
since it supports tool calling. The model examples now show:
- gpt-oss:20b as the primary recommended model (tool-capable)
- llama3.3 and qwen2.5-coder:32b as additional options
This provides users with a clear, working example that supports
OpenClaw's tool-calling features; an illustrative config excerpt follows.
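A possible excerpt mirroring the updated docs example (model ids as documented;
the object keys are placeholders rather than the exact OpenClaw config schema):

```ts
// Illustrative excerpt only, not the literal documentation content.
const exampleOllamaModels = [
  { model: "ollama/gpt-oss:20b" },       // primary recommendation: supports tool calling
  { model: "ollama/llama3.3" },          // additional option
  { model: "ollama/qwen2.5-coder:32b" }, // additional option
];
```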
* chore: remove unused vi import from ollama test
* docs: use 'fallbacks' instead of 'fallback' in config template
Both the English and Chinese documentation had an incorrect configuration template
using 'fallback' instead of 'fallbacks' in the agents.defaults.model config; a sketch
of the corrected shape follows.
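A hedged sketch of the corrected template shape; the 'fallbacks' key is the point,
while the other keys and model ids here are placeholders, not the real schema or values:

```ts
// Corrected shape: "fallbacks" (plural) under agents.defaults.model.
const agentsConfig = {
  defaults: {
    model: {
      primary: "ollama/gpt-oss:20b",  // placeholder value
      fallbacks: ["ollama/llama3.3"], // previously mis-documented as "fallback"
    },
  },
};
```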
Co-authored-by: damaozi <1811866786@qq.com>
* feat(venice): add Venice AI provider integration
Venice AI is a privacy-focused AI inference provider with support for
uncensored models and access to major proprietary models via its
anonymized proxy.
This integration adds:
- Complete model catalog with 25 models:
- 15 private models (Llama, Qwen, DeepSeek, Venice Uncensored, etc.)
- 10 anonymized models (Claude, GPT-5.2, Gemini, Grok, Kimi, MiniMax)
- Auto-discovery from the Venice API with fallback to the static catalog (sketched after this list)
- VENICE_API_KEY environment variable support
- Interactive onboarding via 'venice-api-key' auth choice
- Model selection prompt showing all available Venice models
- Provider auto-registration when API key is detected
- Comprehensive documentation covering:
- Privacy modes (private vs anonymized)
- All 25 models with context windows and features
- Streaming, function calling, and vision support
- Model selection recommendations
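A minimal sketch of the discovery-with-fallback idea, assuming the standard
OpenAI-compatible /models response shape; the function name and static excerpt
are illustrative, not the actual implementation:

```ts
// Static excerpt used when the API is unreachable (illustrative, not the full catalog).
const STATIC_VENICE_MODELS = ["llama-3.3-70b", "qwen2.5-coder-32b"];

// Query the OpenAI-compatible /models endpoint; fall back to the static list on failure.
async function listVeniceModels(apiKey: string): Promise<string[]> {
  try {
    const res = await fetch("https://api.venice.ai/api/v1/models", {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const body = (await res.json()) as { data: Array<{ id: string }> };
    return body.data.map((m) => m.id);
  } catch {
    return STATIC_VENICE_MODELS;
  }
}
```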
Privacy modes:
- Private: Fully private, no logging (open-source models)
- Anonymized: Proxied through Venice (proprietary models)
Default model: venice/llama-3.3-70b (good balance of capability + privacy)
Venice API: https://api.venice.ai/api/v1 (OpenAI-compatible)
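A sketch of the auto-registration and default-model behavior noted above;
registerProvider and ProviderConfig are stand-ins, not the real OpenClaw registry API:

```ts
// Placeholder registry type and function; the real wiring in OpenClaw differs.
type ProviderConfig = { name: string; baseUrl: string; defaultModel: string };

function registerProvider(cfg: ProviderConfig): void {
  console.log(`registered ${cfg.name} (default model: ${cfg.defaultModel})`);
}

// Auto-register Venice when its API key is present in the environment.
if (process.env.VENICE_API_KEY) {
  registerProvider({
    name: "venice",
    baseUrl: "https://api.venice.ai/api/v1", // OpenAI-compatible endpoint
    defaultModel: "venice/llama-3.3-70b",    // documented default
  });
}
```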