mirror of
https://github.com/openclaw/openclaw.git
synced 2026-02-09 05:19:32 +08:00
| summary | read_when | title |
|---|---|---|
| Model providers (LLMs) supported by OpenClaw | | Model Provider Quickstart |
# Model Providers
OpenClaw can use many LLM providers. Pick one, authenticate, then set the default model as `provider/model`.
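To make the `provider/model` id format concrete, here is a minimal sketch of how such an id could be split into its two parts. The `parseModelId` helper is hypothetical, not part of OpenClaw's API; it only illustrates that everything before the first `/` names the provider and the remainder names the model.

```typescript
// Hypothetical helper (not OpenClaw API) illustrating the
// provider/model id format: the text before the first "/" is the
// provider; everything after it is the model name.
function parseModelId(id: string): { provider: string; model: string } {
  const slash = id.indexOf("/");
  if (slash === -1) {
    throw new Error(`expected "provider/model", got "${id}"`);
  }
  return { provider: id.slice(0, slash), model: id.slice(slash + 1) };
}

// Example: parseModelId("anthropic/claude-opus-4-6")
// yields provider "anthropic" and model "claude-opus-4-6".
```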
## Highlight: Venice (Venice AI)
Venice is our recommended setup for privacy-first inference, with the option to use Opus for the hardest tasks.
- Default: `venice/llama-3.3-70b`
- Best overall: `venice/claude-opus-45` (Opus remains the strongest)

See Venice AI.
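As a sketch, the Venice default above can be pinned using the same `agents.defaults.model` config shape shown in the quick start; whether Venice needs any additional provider-specific settings is not covered here.

```json5
{
  // Same shape as the quick-start example, swapped to Venice's default model.
  agents: { defaults: { model: { primary: "venice/llama-3.3-70b" } } },
}
```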
## Quick start (two steps)
1. Authenticate with the provider (usually via `openclaw onboard`).
2. Set the default model:
```json5
{
  agents: { defaults: { model: { primary: "anthropic/claude-opus-4-6" } } },
}
```
## Supported providers (starter set)
- OpenAI (API + Codex)
- Anthropic (API + Claude Code CLI)
- OpenRouter
- Vercel AI Gateway
- Cloudflare AI Gateway
- Moonshot AI (Kimi + Kimi Coding)
- Synthetic
- OpenCode Zen
- Z.AI
- GLM models
- MiniMax
- Venice (Venice AI)
- Amazon Bedrock
For the full provider catalog (xAI, Groq, Mistral, etc.) and advanced configuration, see Model providers.