Docs: enable markdownlint autofixables except list numbering (#10476)

* docs(markdownlint): enable autofixable rules except list numbering

* docs(zalo): fix malformed bot platform link
This commit is contained in:
Seb Slight
2026-02-06 10:08:59 -05:00
committed by GitHub
parent 0a1f4f666a
commit 578a6e27aa
58 changed files with 171 additions and 109 deletions

View File

@@ -1,16 +1,12 @@
{
"globs": ["docs/**/*.md", "docs/**/*.mdx", "README.md"],
"ignores": ["docs/zh-CN/**", "docs/.i18n/**"],
"ignores": ["docs/zh-CN/**", "docs/.i18n/**", "docs/reference/templates/**"],
"config": {
"default": true,
"MD013": false,
"MD025": false,
"MD026": false,
"MD029": false,
"MD030": false,
"MD031": false,
"MD032": false,
"MD033": {
"allowed_elements": [
@@ -48,9 +44,7 @@
],
},
"MD034": false,
"MD036": false,
"MD037": false,
"MD040": false,
"MD041": false,
"MD046": false,

View File

@@ -12,7 +12,7 @@ OpenClaw uses Brave Search as the default provider for `web_search`.
## Get an API key
1. Create a Brave Search API account at https://brave.com/search/api/
1. Create a Brave Search API account at [https://brave.com/search/api/](https://brave.com/search/api/)
2. In the dashboard, choose the **Data for Search** plan and generate an API key.
3. Store the key in config (recommended) or set `BRAVE_API_KEY` in the Gateway environment.
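For the environment-variable route, a minimal sketch (export the key wherever the Gateway process actually runs, for example its service environment or `~/.openclaw/.env`):

```bash
# Placeholder value; the Gateway reads BRAVE_API_KEY from its environment.
export BRAVE_API_KEY="<your-brave-api-key>"
```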

View File

@@ -27,6 +27,7 @@ Status: bundled plugin that talks to the BlueBubbles macOS server over HTTP. **R
1. Install the BlueBubbles server on your Mac (follow the instructions at [bluebubbles.app/install](https://bluebubbles.app/install)).
2. In the BlueBubbles config, enable the web API and set a password.
3. Run `openclaw onboard` and select BlueBubbles, or configure manually:
```json5
{
channels: {
@@ -39,6 +40,7 @@ Status: bundled plugin that talks to the BlueBubbles macOS server over HTTP. **R
},
}
```
4. Point BlueBubbles webhooks to your gateway (example: `https://your-gateway-host:3000/bluebubbles-webhook?password=<password>`).
5. Start the gateway; it will register the webhook handler and start pairing.
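Once the gateway is running, a quick reachability check of the webhook URL from step 4 (a sketch only; the response body isn't meaningful since real deliveries come from the BlueBubbles server):

```bash
# Expect an HTTP response rather than a connection error; substitute your host and password.
curl -k "https://your-gateway-host:3000/bluebubbles-webhook?password=<password>"
```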

View File

@@ -75,7 +75,7 @@ Choose **Feishu**, then enter the App ID and App Secret.
Visit [Feishu Open Platform](https://open.feishu.cn/app) and sign in.
Lark (global) tenants should use https://open.larksuite.com/app and set `domain: "lark"` in the Feishu config.
Lark (global) tenants should use [https://open.larksuite.com/app](https://open.larksuite.com/app) and set `domain: "lark"` in the Feishu config.
### 2. Create an app
@@ -261,10 +261,12 @@ After approval, you can chat normally.
- **Default**: `dmPolicy: "pairing"` (unknown users get a pairing code)
- **Approve pairing**:
```bash
openclaw pairing list feishu
openclaw pairing approve feishu <CODE>
```
- **Allowlist mode**: set `channels.feishu.allowFrom` with allowed Open IDs
### Group chats

View File

@@ -101,6 +101,7 @@ Use Tailscale Serve for the private dashboard and Funnel for the public webhook
If prompted, visit the authorization URL shown in the output to enable Funnel for this node in your tailnet policy.
5. **Verify the configuration:**
```bash
tailscale serve status
tailscale funnel status
@@ -225,6 +226,7 @@ This means the webhook handler isn't registered. Common causes:
If it shows "disabled", add `plugins.entries.googlechat.enabled: true` to your config.
3. **Gateway not restarted**: After adding config, restart the gateway:
```bash
openclaw gateway restart
```
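If you would rather set that flag from the CLI than edit the config file by hand, a sketch (assuming `openclaw config set` accepts boolean values the same way it accepts the string values shown elsewhere in these docs):

```bash
# Hypothetical CLI equivalent of adding the key manually; verify the value type your config expects.
openclaw config set plugins.entries.googlechat.enabled true
openclaw gateway restart
```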

View File

@@ -34,7 +34,7 @@ openclaw plugins install ./extensions/line
## Setup
1. Create a LINE Developers account and open the Console:
https://developers.line.biz/console/
[https://developers.line.biz/console/](https://developers.line.biz/console/)
2. Create (or pick) a Provider and add a **Messaging API** channel.
3. Copy the **Channel access token** and **Channel secret** from the channel settings.
4. Enable **Use webhook** in the Messaging API settings.

View File

@@ -74,7 +74,7 @@ Details: [Plugins](/plugin)
- When set, `channels.matrix.userId` should be the full Matrix ID (example: `@bot:example.org`).
5. Restart the gateway (or finish onboarding).
6. Start a DM with the bot or invite it to a room from any Matrix client
(Element, Beeper, etc.; see https://matrix.org/ecosystem/clients/). Beeper requires E2EE,
(Element, Beeper, etc.; see [https://matrix.org/ecosystem/clients/](https://matrix.org/ecosystem/clients/)). Beeper requires E2EE,
so set `channels.matrix.encryption: true` and verify the device.
Minimal config (access token, user ID auto-fetched):

View File

@@ -558,6 +558,7 @@ Bots don't have a personal OneDrive drive (the `/me/drive` Graph API endpoint do
```
4. **Configure OpenClaw:**
```json5
{
channels: {
@@ -747,7 +748,7 @@ Bots have limited support in private channels:
- **"Icon file cannot be empty":** The manifest references icon files that are 0 bytes. Create valid PNG icons (32x32 for `outline.png`, 192x192 for `color.png`).
- **"webApplicationInfo.Id already in use":** The app is still installed in another team/chat. Find and uninstall it first, or wait 5-10 minutes for propagation.
- **"Something went wrong" on upload:** Upload via https://admin.teams.microsoft.com instead, open browser DevTools (F12) → Network tab, and check the response body for the actual error.
- **"Something went wrong" on upload:** Upload via [https://admin.teams.microsoft.com](https://admin.teams.microsoft.com) instead, open browser DevTools (F12) → Network tab, and check the response body for the actual error.
- **Sideload failing:** Try "Upload an app to your org's app catalog" instead of "Upload a custom app" - this often bypasses sideload restrictions.
### RSC permissions not working

View File

@@ -34,9 +34,11 @@ Details: [Plugins](/plugin)
1. Install the Nextcloud Talk plugin.
2. On your Nextcloud server, create a bot:
```bash
./occ talk:bot:install "OpenClaw" "<shared-secret>" "<webhook-url>" --feature reaction
```
3. Enable the bot in the target room settings.
4. Configure OpenClaw:
- Config: `channels.nextcloud-talk.baseUrl` + `channels.nextcloud-talk.botSecret`
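A minimal sketch of step 4 using the CLI (the key names come from the line above; the values are placeholders):

```bash
# Substitute your Nextcloud base URL and the shared secret from step 2.
openclaw config set channels.nextcloud-talk.baseUrl "https://cloud.example.com"
openclaw config set channels.nextcloud-talk.botSecret "<shared-secret>"
```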

View File

@@ -30,7 +30,7 @@ Minimal config:
### Setup
1. Create a Slack app (From scratch) in https://api.slack.com/apps.
1. Create a Slack app (From scratch) in [https://api.slack.com/apps](https://api.slack.com/apps).
2. **Socket Mode** → toggle on. Then go to **Basic Information** → **App-Level Tokens** → **Generate Token and Scopes** with scope `connections:write`. Copy the **App Token** (`xapp-...`).
3. **OAuth & Permissions** → add bot token scopes (use the manifest below). Click **Install to Workspace**. Copy the **Bot User OAuth Token** (`xoxb-...`).
4. Optional: **OAuth & Permissions** → add **User Token Scopes** (see the read-only list below). Reinstall the app and copy the **User OAuth Token** (`xoxp-...`).
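A sketch of storing those tokens via the CLI. Only `channels.slack.userToken` is confirmed later on this page; `channels.slack.botToken` and `channels.slack.appToken` are assumed names here, so check your config schema before relying on them:

```bash
# Assumed key names for illustration (botToken/appToken); userToken is documented below.
openclaw config set channels.slack.appToken "xapp-..."
openclaw config set channels.slack.botToken "xoxb-..."
openclaw config set channels.slack.userToken "xoxp-..."   # optional, read-only scopes
```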
@@ -260,30 +260,30 @@ If you enable native commands, add one `slash_commands` entry per command you wa
Slack's Conversations API is type-scoped: you only need the scopes for the
conversation types you actually touch (channels, groups, im, mpim). See
https://docs.slack.dev/apis/web-api/using-the-conversations-api/ for the overview.
[https://docs.slack.dev/apis/web-api/using-the-conversations-api/](https://docs.slack.dev/apis/web-api/using-the-conversations-api/) for the overview.
### Bot token scopes (required)
- `chat:write` (send/update/delete messages via `chat.postMessage`)
https://docs.slack.dev/reference/methods/chat.postMessage
[https://docs.slack.dev/reference/methods/chat.postMessage](https://docs.slack.dev/reference/methods/chat.postMessage)
- `im:write` (open DMs via `conversations.open` for user DMs)
https://docs.slack.dev/reference/methods/conversations.open
[https://docs.slack.dev/reference/methods/conversations.open](https://docs.slack.dev/reference/methods/conversations.open)
- `channels:history`, `groups:history`, `im:history`, `mpim:history`
https://docs.slack.dev/reference/methods/conversations.history
[https://docs.slack.dev/reference/methods/conversations.history](https://docs.slack.dev/reference/methods/conversations.history)
- `channels:read`, `groups:read`, `im:read`, `mpim:read`
https://docs.slack.dev/reference/methods/conversations.info
[https://docs.slack.dev/reference/methods/conversations.info](https://docs.slack.dev/reference/methods/conversations.info)
- `users:read` (user lookup)
https://docs.slack.dev/reference/methods/users.info
[https://docs.slack.dev/reference/methods/users.info](https://docs.slack.dev/reference/methods/users.info)
- `reactions:read`, `reactions:write` (`reactions.get` / `reactions.add`)
https://docs.slack.dev/reference/methods/reactions.get
https://docs.slack.dev/reference/methods/reactions.add
[https://docs.slack.dev/reference/methods/reactions.get](https://docs.slack.dev/reference/methods/reactions.get)
[https://docs.slack.dev/reference/methods/reactions.add](https://docs.slack.dev/reference/methods/reactions.add)
- `pins:read`, `pins:write` (`pins.list` / `pins.add` / `pins.remove`)
https://docs.slack.dev/reference/scopes/pins.read
https://docs.slack.dev/reference/scopes/pins.write
[https://docs.slack.dev/reference/scopes/pins.read](https://docs.slack.dev/reference/scopes/pins.read)
[https://docs.slack.dev/reference/scopes/pins.write](https://docs.slack.dev/reference/scopes/pins.write)
- `emoji:read` (`emoji.list`)
https://docs.slack.dev/reference/scopes/emoji.read
[https://docs.slack.dev/reference/scopes/emoji.read](https://docs.slack.dev/reference/scopes/emoji.read)
- `files:write` (uploads via `files.uploadV2`)
https://docs.slack.dev/messaging/working-with-files/#upload
[https://docs.slack.dev/messaging/working-with-files/#upload](https://docs.slack.dev/messaging/working-with-files/#upload)
### User token scopes (optional, read-only by default)
@@ -302,9 +302,9 @@ Add these under **User Token Scopes** if you configure `channels.slack.userToken
- `mpim:write` (only if we add group-DM open/DM start via `conversations.open`)
- `groups:write` (only if we add private-channel management: create/rename/invite/archive)
- `chat:write.public` (only if we want to post to channels the bot isn't in)
https://docs.slack.dev/reference/scopes/chat.write.public
[https://docs.slack.dev/reference/scopes/chat.write.public](https://docs.slack.dev/reference/scopes/chat.write.public)
- `users:read.email` (only if we need email fields from `users.info`)
https://docs.slack.dev/changelog/2017-04-narrowing-email-access
[https://docs.slack.dev/changelog/2017-04-narrowing-email-access](https://docs.slack.dev/changelog/2017-04-narrowing-email-access)
- `files:read` (only if we start listing/reading file metadata)
## Config

View File

@@ -365,6 +365,7 @@ Alternate (official Bot API):
1. DM your bot.
2. Fetch updates with your bot token and read `message.from.id`:
```bash
curl "https://api.telegram.org/bot<bot_token>/getUpdates"
```
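If `jq` is installed, a quick way to pull just the sender IDs out of that response (a sketch that follows the Bot API's `getUpdates` shape; non-message updates print `null`):

```bash
# Extract message.from.id from each update.
curl -s "https://api.telegram.org/bot<bot_token>/getUpdates" | jq '.result[].message.from.id'
```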

View File

@@ -34,7 +34,7 @@ Details: [Plugins](/plugin)
- Select **Bot Token**
- Verify scopes `chat:read` and `chat:write` are selected
- Copy the **Client ID** and **Access Token**
3. Find your Twitch user ID: https://www.streamweasels.com/tools/convert-twitch-username-to-user-id/
3. Find your Twitch user ID: [https://www.streamweasels.com/tools/convert-twitch-username-to-user-id/](https://www.streamweasels.com/tools/convert-twitch-username-to-user-id/)
4. Configure the token:
- Env: `OPENCLAW_TWITCH_ACCESS_TOKEN=...` (default account only)
- Or config: `channels.twitch.accessToken`
@@ -123,7 +123,7 @@ Prefer `allowFrom` for a hard allowlist. Use `allowedRoles` instead if you want
**Why user IDs?** Usernames can change, allowing impersonation. User IDs are permanent.
Find your Twitch user ID: https://www.streamweasels.com/tools/convert-twitch-username-%20to-user-id/ (Convert your Twitch username to ID)
Find your Twitch user ID: [https://www.streamweasels.com/tools/convert-twitch-username-%20to-user-id/](https://www.streamweasels.com/tools/convert-twitch-username-%20to-user-id/) (Convert your Twitch username to ID)
## Token refresh (optional)

View File

@@ -205,11 +205,13 @@ The wizard uses it to set your **allowlist/owner** so your own DMs are permitted
- `Body` is the current message body with envelope.
- Quoted reply context is **always appended**:
```
[Replying to +1555 id:ABC123]
<quoted text or <media:...>>
[/Replying]
```
- Reply metadata also set:
- `ReplyToId` = stanzaId
- `ReplyToBody` = quoted body or media placeholder

View File

@@ -57,7 +57,7 @@ It is a good fit for support or notifications where you want deterministic routi
### 1) Create a bot token (Zalo Bot Platform)
1. Go to **https://bot.zaloplatforms.com** and sign in.
1. Go to [https://bot.zaloplatforms.com](https://bot.zaloplatforms.com) and sign in.
2. Create a new bot and configure its settings.
3. Copy the bot token (format: `12345689:abc-xyz`).

View File

@@ -110,9 +110,11 @@ Details: [Gateway protocol](/gateway/protocol), [Pairing](/start/pairing),
- Preferred: Tailscale or VPN.
- Alternative: SSH tunnel
```bash
ssh -N -L 18789:127.0.0.1:18789 user@host
```
- The same handshake + auth token apply over the tunnel.
- TLS + optional pinning can be enabled for WS in remote setups.

View File

@@ -302,8 +302,8 @@ Why OpenAI batch is fast + cheap:
- For large backfills, OpenAI is typically the fastest option we support because we can submit many embedding requests in a single batch job and let OpenAI process them asynchronously.
- OpenAI offers discounted pricing for Batch API workloads, so large indexing runs are usually cheaper than sending the same requests synchronously.
- See the OpenAI Batch API docs and pricing for details:
- https://platform.openai.com/docs/api-reference/batch
- https://platform.openai.com/pricing
- [https://platform.openai.com/docs/api-reference/batch](https://platform.openai.com/docs/api-reference/batch)
- [https://platform.openai.com/pricing](https://platform.openai.com/pricing)
Config example:

View File

@@ -136,14 +136,14 @@ Moonshot uses OpenAI-compatible endpoints, so configure it as a custom provider:
Kimi K2 model IDs:
{/_ moonshot-kimi-k2-model-refs:start _/ && null}
{/_moonshot-kimi-k2-model-refs:start_/ && null}
- `moonshot/kimi-k2.5`
- `moonshot/kimi-k2-0905-preview`
- `moonshot/kimi-k2-turbo-preview`
- `moonshot/kimi-k2-thinking`
- `moonshot/kimi-k2-thinking-turbo`
{/_ moonshot-kimi-k2-model-refs:end _/ && null}
{/_moonshot-kimi-k2-model-refs:end_/ && null}
```json5
{
@@ -242,7 +242,7 @@ Ollama is a local LLM runtime that provides an OpenAI-compatible API:
- Provider: `ollama`
- Auth: None required (local server)
- Example model: `ollama/llama3.3`
- Installation: https://ollama.ai
- Installation: [https://ollama.ai](https://ollama.ai)
```bash
# Install Ollama, then pull a model:

View File

@@ -110,6 +110,6 @@ This keeps the base prompt small while still enabling targeted skill usage.
When available, the system prompt includes a **Documentation** section that points to the
local OpenClaw docs directory (either `docs/` in the repo workspace or the bundled npm
package docs) and also notes the public mirror, source repo, community Discord, and
ClawHub (https://clawhub.com) for skills discovery. The prompt instructs the model to consult local docs first
ClawHub ([https://clawhub.com](https://clawhub.com)) for skills discovery. The prompt instructs the model to consult local docs first
for OpenClaw behavior, commands, configuration, or architecture, and to run
`openclaw status` itself when possible (asking the user only when it lacks access).

View File

@@ -280,7 +280,7 @@ Unknown frame types are preserved as raw payloads for forward compatibility.
Generated JSON Schema is in the repo at `dist/protocol.schema.json`. The
published raw file is typically available at:
- https://raw.githubusercontent.com/openclaw/openclaw/main/dist/protocol.schema.json
- [https://raw.githubusercontent.com/openclaw/openclaw/main/dist/protocol.schema.json](https://raw.githubusercontent.com/openclaw/openclaw/main/dist/protocol.schema.json)
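For example, to fetch and pretty-print the published schema locally (a sketch; assumes `curl` and `jq` are available):

```bash
curl -fsSL https://raw.githubusercontent.com/openclaw/openclaw/main/dist/protocol.schema.json | jq . > protocol.schema.json
```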
## When you change schemas

View File

@@ -62,19 +62,21 @@ node --import tsx scripts/repro/tsx-name-repro.ts
- Use Bun for dev scripts (current temporary revert).
- Use Node + tsc watch, then run compiled output:
```bash
pnpm exec tsc --watch --preserveWatchOutput
node --watch openclaw.mjs status
```
- Confirmed locally: `pnpm exec tsc -p tsconfig.json` + `node openclaw.mjs status` works on Node 25.
- Disable esbuild keepNames in the TS loader if possible (prevents `__name` helper insertion); tsx does not currently expose this.
- Test Node LTS (22/24) with `tsx` to see if the issue is Node 25-specific.
## References
- https://opennext.js.org/cloudflare/howtos/keep_names
- https://esbuild.github.io/api/#keep-names
- https://github.com/evanw/esbuild/issues/1031
- [https://opennext.js.org/cloudflare/howtos/keep_names](https://opennext.js.org/cloudflare/howtos/keep_names)
- [https://esbuild.github.io/api/#keep-names](https://esbuild.github.io/api/#keep-names)
- [https://github.com/evanw/esbuild/issues/1031](https://github.com/evanw/esbuild/issues/1031)
## Next steps

View File

@@ -105,10 +105,13 @@ The Gateway advertises small nonsecret hints to make UI flows convenient:
Useful builtin tools:
- Browse instances:
```bash
dns-sd -B _openclaw-gw._tcp local.
```
- Resolve one instance (replace `<instance>`):
```bash
dns-sd -L "<instance>" _openclaw-gw._tcp local.
```

View File

@@ -1978,11 +1978,13 @@ Block streaming:
- `agents.defaults.blockStreamingChunk`: soft chunking for streamed blocks. Defaults to
800–1200 chars, prefers paragraph breaks (`\n\n`), then newlines, then sentences.
Example:
```json5
{
agents: { defaults: { blockStreamingChunk: { minChars: 800, maxChars: 1200 } } },
}
```
- `agents.defaults.blockStreamingCoalesce`: merge streamed blocks before sending.
Defaults to `{ idleMs: 1000 }` and inherits `minChars` from `blockStreamingChunk`
with `maxChars` capped to the channel text limit. Signal/Slack/Discord/Google Chat default
@@ -1996,11 +1998,13 @@ Block streaming:
Modes: `off` (default), `natural` (800–2500ms), `custom` (use `minMs`/`maxMs`).
Per-agent override: `agents.list[].humanDelay`.
Example:
```json5
{
agents: { defaults: { humanDelay: { mode: "natural" } } },
}
```
See [/concepts/streaming](/concepts/streaming) for behavior + chunking details.
Typing indicators:
@@ -2066,7 +2070,7 @@ of `every`, keep `HEARTBEAT.md` tiny, and/or choose a cheaper `model`.
- `tools.web.fetch.readability` (default true; disable to use basic HTML cleanup only)
- `tools.web.fetch.firecrawl.enabled` (default true when an API key is set)
- `tools.web.fetch.firecrawl.apiKey` (optional; defaults to `FIRECRAWL_API_KEY`)
- `tools.web.fetch.firecrawl.baseUrl` (default https://api.firecrawl.dev)
- `tools.web.fetch.firecrawl.baseUrl` (default [https://api.firecrawl.dev](https://api.firecrawl.dev))
- `tools.web.fetch.firecrawl.onlyMainContent` (default true)
- `tools.web.fetch.firecrawl.maxAgeMs` (optional)
- `tools.web.fetch.firecrawl.timeoutSeconds` (optional)
@@ -2482,7 +2486,7 @@ Select the model via `agents.defaults.model.primary` (provider/model).
OpenCode Zen is a multi-model gateway with per-model endpoints. OpenClaw uses
the built-in `opencode` provider from pi-ai; set `OPENCODE_API_KEY` (or
`OPENCODE_ZEN_API_KEY`) from https://opencode.ai/auth.
`OPENCODE_ZEN_API_KEY`) from [https://opencode.ai/auth](https://opencode.ai/auth).
Notes:

View File

@@ -49,9 +49,11 @@ pnpm gateway:watch
## Remote access
- Tailscale/VPN preferred; otherwise SSH tunnel:
```bash
ssh -N -L 18789:127.0.0.1:18789 user@host
```
- Clients then connect to `ws://127.0.0.1:18789` through the tunnel.
- If a token is configured, clients must include it in `connect.params.auth.token` even over the tunnel.

View File

@@ -52,7 +52,7 @@ Best current local stack. Load MiniMax M2.1 in LM Studio, enable the local serve
**Setup checklist**
- Install LM Studio: https://lmstudio.ai
- Install LM Studio: [https://lmstudio.ai](https://lmstudio.ai)
- In LM Studio, download the **largest MiniMax M2.1 build available** (avoid “small”/heavily quantized variants), start the server, confirm `http://127.0.0.1:1234/v1/models` lists it.
- Keep the model loaded; cold-load adds startup latency.
- Adjust `contextWindow`/`maxTokens` if your LM Studio build differs.
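A quick way to confirm the server is reachable and the model is loaded, using the endpoint from the checklist above:

```bash
# Should list the MiniMax M2.1 build you downloaded.
curl -s http://127.0.0.1:1234/v1/models
```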

View File

@@ -773,18 +773,22 @@ If it fails, there are new candidates not yet in the baseline.
### If CI fails
1. Reproduce locally:
```bash
detect-secrets scan --baseline .secrets.baseline
```
2. Understand the tools:
- `detect-secrets scan` finds candidates and compares them to the baseline.
- `detect-secrets audit` opens an interactive review to mark each baseline
item as real or false positive.
3. For real secrets: rotate/remove them, then re-run the scan to update the baseline.
4. For false positives: run the interactive audit and mark them as false:
```bash
detect-secrets audit .secrets.baseline
```
5. If you need new excludes, add them to `.detect-secrets.cfg` and regenerate the
baseline with matching `--exclude-files` / `--exclude-lines` flags (the config
file is reference-only; detect-secrets doesn't read it automatically).
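A sketch of that regeneration step (the patterns are placeholders; keep them in sync with `.detect-secrets.cfg`):

```bash
# Rebuild the baseline with the same excludes documented in .detect-secrets.cfg.
detect-secrets scan --exclude-files '<files-regex>' --exclude-lines '<lines-regex>' > .secrets.baseline
```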
@@ -814,7 +818,7 @@ Mario asking for find ~
Found a vulnerability in OpenClaw? Please report responsibly:
1. Email: security@openclaw.ai
1. Email: [security@openclaw.ai](mailto:security@openclaw.ai)
2. Don't post publicly until fixed
3. We'll credit you (unless you prefer anonymity)

View File

@@ -121,7 +121,7 @@ Avoid Funnel for browser control; treat node pairing like operator access.
## Learn more
- Tailscale Serve overview: https://tailscale.com/kb/1312/serve
- `tailscale serve` command: https://tailscale.com/kb/1242/tailscale-serve
- Tailscale Funnel overview: https://tailscale.com/kb/1223/tailscale-funnel
- `tailscale funnel` command: https://tailscale.com/kb/1311/tailscale-funnel
- Tailscale Serve overview: [https://tailscale.com/kb/1312/serve](https://tailscale.com/kb/1312/serve)
- `tailscale serve` command: [https://tailscale.com/kb/1242/tailscale-serve](https://tailscale.com/kb/1242/tailscale-serve)
- Tailscale Funnel overview: [https://tailscale.com/kb/1223/tailscale-funnel](https://tailscale.com/kb/1223/tailscale-funnel)
- `tailscale funnel` command: [https://tailscale.com/kb/1311/tailscale-funnel](https://tailscale.com/kb/1311/tailscale-funnel)

View File

@@ -42,9 +42,11 @@ Fix options:
- Re-run onboarding and choose **Anthropic** for that agent.
- Or paste a setup-token on the **gateway host**:
```bash
openclaw models auth setup-token --provider anthropic
```
- Or copy `auth-profiles.json` from the main agent dir to the new agent dir.
Verify:
@@ -120,13 +122,17 @@ Doctor/service will show runtime state (PID/last exit) and log hints.
**Enable more logging:**
- Bump file log detail (persisted JSONL):
```json
{ "logging": { "level": "debug" } }
```
- Bump console verbosity (TTY output only):
```json
{ "logging": { "consoleLevel": "debug", "consoleStyle": "pretty" } }
```
- Quick tip: `--verbose` affects **console** output only. File logs remain controlled by `logging.level`.
See [/logging](/logging) for a full overview of formats, config, and access.
@@ -139,10 +145,13 @@ Gateway refuses to start.
**Fix (recommended):**
- Run the wizard and set the Gateway run mode to **Local**:
```bash
openclaw configure
```
- Or set it directly:
```bash
openclaw config set gateway.mode local
```
@@ -150,6 +159,7 @@ Gateway refuses to start.
**If you meant to run a remote Gateway instead:**
- Set a remote URL and keep `gateway.mode=remote`:
```bash
openclaw config set gateway.mode remote
openclaw config set gateway.remote.url "wss://gateway.example.com"
@@ -554,6 +564,7 @@ Notes:
- The git flow only rebases if the repo is clean. Commit or stash changes first.
- After switching, run:
```bash
openclaw doctor
openclaw gateway restart

View File

@@ -252,10 +252,12 @@ Quick answers plus deeper troubleshooting for real-world setups (local dev, VPS,
Repairs/migrates config/state + runs health checks. See [Doctor](/gateway/doctor).
7. **Gateway snapshot**
```bash
openclaw health --json
openclaw health --verbose # shows the target URL + config path on errors
```
Asks the running gateway for a full snapshot (WS-only). See [Health](/gateway/health).
## Quick start and first-run setup
@@ -266,8 +268,8 @@ Use a local AI agent that can **see your machine**. That is far more effective t
in Discord, because most "I'm stuck" cases are **local config or environment issues** that
remote helpers cannot inspect.
- **Claude Code**: https://www.anthropic.com/claude-code/
- **OpenAI Codex**: https://openai.com/codex/
- **Claude Code**: [https://www.anthropic.com/claude-code/](https://www.anthropic.com/claude-code/)
- **OpenAI Codex**: [https://openai.com/codex/](https://openai.com/codex/)
These tools can read the repo, run commands, inspect logs, and help fix your machine-level
setup (PATH, services, permissions, auth files). Give them the **full source checkout** via
@@ -285,8 +287,8 @@ Tip: ask the agent to **plan and supervise** the fix (step-by-step), then execut
necessary commands. That keeps changes small and easier to audit.
If you discover a real bug or fix, please file a GitHub issue or send a PR:
https://github.com/openclaw/openclaw/issues
https://github.com/openclaw/openclaw/pulls
[https://github.com/openclaw/openclaw/issues](https://github.com/openclaw/openclaw/issues)
[https://github.com/openclaw/openclaw/pulls](https://github.com/openclaw/openclaw/pulls)
Start with these commands (share outputs when asking for help):
@@ -432,7 +434,7 @@ Related: [Migrating](/install/migrating), [Where things live on disk](/help/faq#
### Where do I see what is new in the latest version
Check the GitHub changelog:
https://github.com/openclaw/openclaw/blob/main/CHANGELOG.md
[https://github.com/openclaw/openclaw/blob/main/CHANGELOG.md](https://github.com/openclaw/openclaw/blob/main/CHANGELOG.md)
Newest entries are at the top. If the top section is marked **Unreleased**, the next dated
section is the latest shipped version. Entries are grouped by **Highlights**, **Changes**, and
@@ -443,10 +445,10 @@ section is the latest shipped version. Entries are grouped by **Highlights**, **
Some Comcast/Xfinity connections incorrectly block `docs.openclaw.ai` via Xfinity
Advanced Security. Disable it or allowlist `docs.openclaw.ai`, then retry. More
detail: [Troubleshooting](/help/troubleshooting#docsopenclawai-shows-an-ssl-error-comcastxfinity).
Please help us unblock it by reporting here: https://spa.xfinity.com/check_url_status.
Please help us unblock it by reporting here: [https://spa.xfinity.com/check_url_status](https://spa.xfinity.com/check_url_status).
If you still can't reach the site, the docs are mirrored on GitHub:
https://github.com/openclaw/openclaw/tree/main/docs
[https://github.com/openclaw/openclaw/tree/main/docs](https://github.com/openclaw/openclaw/tree/main/docs)
### What's the difference between stable and beta
@@ -460,7 +462,7 @@ that same version to `latest`**. That's why beta and stable can point at the
**same version**.
See what changed:
https://github.com/openclaw/openclaw/blob/main/CHANGELOG.md
[https://github.com/openclaw/openclaw/blob/main/CHANGELOG.md](https://github.com/openclaw/openclaw/blob/main/CHANGELOG.md)
### How do I install the beta version and what's the difference between beta and dev
@@ -478,7 +480,7 @@ curl -fsSL --proto '=https' --tlsv1.2 https://openclaw.ai/install.sh | bash -s -
```
Windows installer (PowerShell):
https://openclaw.ai/install.ps1
[https://openclaw.ai/install.ps1](https://openclaw.ai/install.ps1)
More detail: [Development channels](/install/development-channels) and [Installer flags](/install/installer).
@@ -559,9 +561,11 @@ Two common Windows issues:
- Your npm global bin folder is not on PATH.
- Check the path:
```powershell
npm config get prefix
```
- Ensure `<prefix>\\bin` is on PATH (on most systems it is `%AppData%\\npm`).
- Close and reopen PowerShell after updating PATH.
@@ -988,7 +992,7 @@ Advantages:
- **Always-on Gateway** (run on a VPS, interact from anywhere)
- **Nodes** for local browser/screen/camera/exec
Showcase: https://openclaw.ai/showcase
Showcase: [https://openclaw.ai/showcase](https://openclaw.ai/showcase)
## Skills and automation
@@ -1046,7 +1050,7 @@ Docs: [Cron jobs](/automation/cron-jobs), [Cron vs Heartbeat](/automation/cron-v
### How do I install skills on Linux
Use **ClawHub** (CLI) or drop skills into your workspace. The macOS Skills UI isn't available on Linux.
Browse skills at https://clawhub.com.
Browse skills at [https://clawhub.com](https://clawhub.com).
Install the ClawHub CLI (pick one package manager):
@@ -1085,13 +1089,16 @@ Run the Gateway on Linux, pair a macOS node (menubar app), and set **Node Run Co
Keep the Gateway on Linux, but make the required CLI binaries resolve to SSH wrappers that run on a Mac. Then override the skill to allow Linux so it stays eligible.
1. Create an SSH wrapper for the binary (example: `memo` for Apple Notes):
```bash
#!/usr/bin/env bash
set -euo pipefail
exec ssh -T user@mac-host /opt/homebrew/bin/memo "$@"
```
2. Put the wrapper on `PATH` on the Linux host (for example `~/bin/memo`).
3. Override the skill metadata (workspace or `~/.openclaw/skills`) to allow Linux:
```markdown
---
name: apple-notes
@@ -1099,6 +1106,7 @@ Keep the Gateway on Linux, but make the required CLI binaries resolve to SSH wra
metadata: { "openclaw": { "os": ["darwin", "linux"], "requires": { "bins": ["memo"] } } }
---
```
4. Start a new session so the skills snapshot refreshes.
### Do you have a Notion or HeyGen integration
@@ -1473,6 +1481,7 @@ Typical setup:
4. Open the macOS app locally and connect in **Remote over SSH** mode (or direct tailnet)
so it can register as a node.
5. Approve the node on the Gateway:
```bash
openclaw nodes pending
openclaw nodes approve <requestId>
@@ -1610,10 +1619,12 @@ This sets your workspace and restricts who can trigger the bot.
Minimal steps:
1. **Install + login on the VPS**
```bash
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
```
2. **Install + login on your Mac**
- Use the Tailscale app and sign in to the same tailnet.
3. **Enable MagicDNS (recommended)**
@@ -1640,6 +1651,7 @@ Recommended setup:
2. **Use the macOS app in Remote mode** (SSH target can be the tailnet hostname).
The app will tunnel the Gateway port and connect as a node.
3. **Approve the node** on the gateway:
```bash
openclaw nodes pending
openclaw nodes approve <requestId>
@@ -1702,9 +1714,11 @@ If the Gateway runs as a service (launchd/systemd), it won't inherit your shell
environment. Fix by doing one of these:
1. Put the token in `~/.openclaw/.env`:
```
COPILOT_GITHUB_TOKEN=...
```
2. Or enable shell import (`env.shellEnv.enabled: true`).
3. Or add it to your config `env` block (applies only if missing).
@@ -1801,6 +1815,7 @@ Use one of these:
or `/compact <instructions>` to guide the summary.
- **Reset** (fresh session ID for the same chat key):
```
/new
/reset
@@ -2071,9 +2086,11 @@ Fix checklist:
3. Use the exact model id (case-sensitive): `minimax/MiniMax-M2.1` or
`minimax/MiniMax-M2.1-lightning`.
4. Run:
```bash
openclaw models list
```
and pick from the list (or `/model list` in chat).
See [MiniMax](/providers/minimax) and [Models](/concepts/models).
@@ -2238,9 +2255,11 @@ can't find it in its auth store.
- **If you want to use an API key instead**
- Put `ANTHROPIC_API_KEY` in `~/.openclaw/.env` on the **gateway host**.
- Clear any pinned order that forces a missing profile:
```bash
openclaw models auth order clear --provider anthropic
```
- **Confirm you're running commands on the gateway host**
- In remote mode, auth profiles live on the gateway machine, not your laptop.

View File

@@ -65,7 +65,7 @@ You can also set `OPENCLAW_VERBOSE=1` instead of the flag.
Some Comcast/Xfinity connections block `docs.openclaw.ai` via Xfinity Advanced Security.
Disable Advanced Security or add `docs.openclaw.ai` to the allowlist, then retry.
- Xfinity Advanced Security help: https://www.xfinity.com/support/articles/using-xfinity-xfi-advanced-security
- Xfinity Advanced Security help: [https://www.xfinity.com/support/articles/using-xfinity-xfi-advanced-security](https://www.xfinity.com/support/articles/using-xfinity-xfi-advanced-security)
- Quick sanity checks: try a mobile hotspot or VPN to confirm it's ISP-level filtering
### Service says running, but RPC probe fails

View File

@@ -787,6 +787,7 @@ Session reset
```
3. List all discovered hooks:
```bash
openclaw hooks list
```
@@ -818,6 +819,7 @@ Look for missing:
2. Restart your gateway process so hooks reload.
3. Check gateway logs for errors:
```bash
./scripts/clawlog.sh | grep hook
```
@@ -892,6 +894,7 @@ node -e "import('./path/to/handler.ts').then(console.log)"
```
4. Verify and restart your gateway process:
```bash
openclaw hooks list
# Should show: 🎯 my-hook ✓

View File

@@ -120,7 +120,7 @@ Need the full install and dev setup? See [Quick start](/start/quickstart).
Open the browser Control UI after the Gateway starts.
- Local default: http://127.0.0.1:18789/
- Local default: [http://127.0.0.1:18789/](http://127.0.0.1:18789/)
- Remote access: [Web surfaces](/web) and [Tailscale](/gateway/tailscale)
<p align="center">

View File

@@ -69,7 +69,7 @@ For the generic Docker flow, see [Docker](/install/docker).
**Option A: gcloud CLI** (recommended for automation)
Install from https://cloud.google.com/sdk/docs/install
Install from [https://cloud.google.com/sdk/docs/install](https://cloud.google.com/sdk/docs/install)
Initialize and authenticate:
@@ -80,7 +80,7 @@ gcloud auth login
**Option B: Cloud Console**
All steps can be done via the web UI at https://console.cloud.google.com
All steps can be done via the web UI at [https://console.cloud.google.com](https://console.cloud.google.com)
---
@@ -93,7 +93,7 @@ gcloud projects create my-openclaw-project --name="OpenClaw Gateway"
gcloud config set project my-openclaw-project
```
Enable billing at https://console.cloud.google.com/billing (required for Compute Engine).
Enable billing at [https://console.cloud.google.com/billing](https://console.cloud.google.com/billing) (required for Compute Engine).
Enable the Compute Engine API:
@@ -484,6 +484,7 @@ For automation or CI/CD pipelines, create a dedicated service account with minim
```
2. Grant Compute Instance Admin role (or narrower custom role):
```bash
gcloud projects add-iam-policy-binding my-openclaw-project \
--member="serviceAccount:openclaw-deploy@my-openclaw-project.iam.gserviceaccount.com" \
@@ -492,7 +493,7 @@ For automation or CI/CD pipelines, create a dedicated service account with minim
Avoid using the Owner role for automation. Use the principle of least privilege.
See https://cloud.google.com/iam/docs/understanding-roles for IAM role details.
See [https://cloud.google.com/iam/docs/understanding-roles](https://cloud.google.com/iam/docs/understanding-roles) for IAM role details.
---

View File

@@ -45,7 +45,7 @@ If Telegram DMs are set to pairing, the setup wizard can approve the pairing cod
### Discord bot token
1. Go to https://discord.com/developers/applications
1. Go to [https://discord.com/developers/applications](https://discord.com/developers/applications)
2. **New Application** → choose a name
3. **Bot** → **Add Bot**
4. **Enable MESSAGE CONTENT INTENT** under Bot → Privileged Gateway Intents (required or the bot will crash on startup)

View File

@@ -83,7 +83,7 @@ If Telegram DMs are set to pairing, the setup wizard can approve the pairing cod
### Discord bot token
1. Go to https://discord.com/developers/applications
1. Go to [https://discord.com/developers/applications](https://discord.com/developers/applications)
2. **New Application** → choose a name
3. **Bot** → **Add Bot**
4. **Enable MESSAGE CONTENT INTENT** under Bot → Privileged Gateway Intents (required or the bot will crash on startup)

View File

@@ -11,13 +11,7 @@ Deploy OpenClaw on Render using Infrastructure as Code. The included `render.yam
## Deploy with a Render Blueprint
<a
href="https://render.com/deploy?repo=https://github.com/openclaw/openclaw"
target="_blank"
rel="noreferrer"
>
Deploy to Render
</a>
[Deploy to Render](https://render.com/deploy?repo=https://github.com/openclaw/openclaw)
Clicking this link will:

View File

@@ -24,10 +24,13 @@ Notes:
- Add `--no-onboard` if you don't want the onboarding wizard to run again.
- For **source installs**, use:
```bash
curl -fsSL https://openclaw.ai/install.sh | bash -s -- --install-method git --no-onboard
```
The installer will `git pull --rebase` **only** if the repo is clean.
- For **global installs**, the script uses `npm install -g openclaw@latest` under the hood.
- Legacy note: `clawdbot` remains available as a compatibility shim.
@@ -225,4 +228,4 @@ git pull
- Run `openclaw doctor` again and read the output carefully (it often tells you the fix).
- Check: [Troubleshooting](/gateway/troubleshooting)
- Ask in Discord: https://discord.gg/clawd
- Ask in Discord: [https://discord.gg/clawd](https://discord.gg/clawd)

View File

@@ -362,6 +362,7 @@ After configuring multi-agent sandbox and tools:
- Verify the agent cannot use denied tools
4. **Monitor logs:**
```exec
tail -f "${OPENCLAW_STATE_DIR:-$HOME/.openclaw}/logs/gateway.log" | grep -E "routing|sandbox|tools"
```

View File

@@ -15,12 +15,12 @@ through Perplexity's direct API or via OpenRouter.
### Perplexity (direct)
- Base URL: https://api.perplexity.ai
- Base URL: [https://api.perplexity.ai](https://api.perplexity.ai)
- Environment variable: `PERPLEXITY_API_KEY`
### OpenRouter (alternative)
- Base URL: https://openrouter.ai/api/v1
- Base URL: [https://openrouter.ai/api/v1](https://openrouter.ai/api/v1)
- Environment variable: `OPENROUTER_API_KEY`
- Supports prepaid/crypto credits.
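For example, exporting whichever key matches the provider you configured (placeholder values; put it in the environment the Gateway actually runs with):

```bash
# Use one of these, depending on the provider above.
export PERPLEXITY_API_KEY="<your-perplexity-key>"    # Perplexity direct
export OPENROUTER_API_KEY="<your-openrouter-key>"    # OpenRouter
```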

View File

@@ -66,5 +66,5 @@ If you only want to reset sessions, delete `agents/<agentId>/sessions/` and `age
## References
- https://docs.openclaw.ai/testing
- https://docs.openclaw.ai/start/getting-started
- [https://docs.openclaw.ai/testing](https://docs.openclaw.ai/testing)
- [https://docs.openclaw.ai/start/getting-started](https://docs.openclaw.ai/start/getting-started)

View File

@@ -98,10 +98,13 @@ Pairing details: [Gateway pairing](/gateway/pairing).
### 5) Verify the node is connected
- Via nodes status:
```bash
openclaw nodes status
```
- Via Gateway:
```bash
openclaw gateway call node.list --params "{}"
```

View File

@@ -13,8 +13,8 @@ This guide covers the necessary steps to build and run the OpenClaw macOS applic
Before building the app, ensure you have the following installed:
1. **Xcode 26.2+**: Required for Swift development.
2. **Node.js 22+ & pnpm**: Required for the gateway, CLI, and packaging scripts.
1. **Xcode 26.2+**: Required for Swift development.
2. **Node.js 22+ & pnpm**: Required for the gateway, CLI, and packaging scripts.
## 1. Install Dependencies
@@ -35,7 +35,7 @@ To build the macOS app and package it into `dist/OpenClaw.app`, run:
If you don't have an Apple Developer ID certificate, the script will automatically use **ad-hoc signing** (`-`).
For dev run modes, signing flags, and Team ID troubleshooting, see the macOS app README:
https://github.com/openclaw/openclaw/blob/main/apps/macos/README.md
[https://github.com/openclaw/openclaw/blob/main/apps/macos/README.md](https://github.com/openclaw/openclaw/blob/main/apps/macos/README.md)
> **Note**: Ad-hoc signed apps may trigger security prompts. If the app crashes immediately with "Abort trap 6", see the [Troubleshooting](#troubleshooting) section.
@@ -45,9 +45,9 @@ The macOS app expects a global `openclaw` CLI install to manage background tasks
**To install it (recommended):**
1. Open the OpenClaw app.
2. Go to the **General** settings tab.
3. Click **"Install CLI"**.
1. Open the OpenClaw app.
2. Go to the **General** settings tab.
3. Click **"Install CLI"**.
Alternatively, install it manually:
@@ -82,9 +82,11 @@ If the app crashes when you try to allow **Speech Recognition** or **Microphone*
**Fix:**
1. Reset the TCC permissions:
```bash
tccutil reset All bot.molt.mac.debug
```
2. If that fails, change the `BUNDLE_ID` temporarily in [`scripts/package-mac-app.sh`](https://github.com/openclaw/openclaw/blob/main/scripts/package-mac-app.sh) to force a "clean slate" from macOS.
### Gateway "Starting..." indefinitely

View File

@@ -19,9 +19,11 @@ agent (with a session switcher for other sessions).
- Manual: Lobster menu → “Open Chat”.
- Auto-open for testing:
```bash
dist/OpenClaw.app/Contents/MacOS/OpenClaw --webchat
```
- Logs: `./scripts/clawlog.sh` (subsystem `bot.molt`, category `WebChatSwiftUI`).
## How it's wired

View File

@@ -20,7 +20,7 @@ Native Windows companion apps are planned.
- [Getting Started](/start/getting-started) (use inside WSL)
- [Install & updates](/install/updating)
- Official WSL2 guide (Microsoft): https://learn.microsoft.com/windows/wsl/install
- Official WSL2 guide (Microsoft): [https://learn.microsoft.com/windows/wsl/install](https://learn.microsoft.com/windows/wsl/install)
## Gateway

View File

@@ -11,7 +11,7 @@ title: "OpenProse"
OpenProse is a portable, markdown-first workflow format for orchestrating AI sessions. In OpenClaw it ships as a plugin that installs an OpenProse skill pack plus a `/prose` slash command. Programs live in `.prose` files and can spawn multiple sub-agents with explicit control flow.
Official site: https://www.prose.md
Official site: [https://www.prose.md](https://www.prose.md)
## What it can do

View File

@@ -131,9 +131,9 @@ launchctl bootstrap gui/$(id -u) ~/Library/LaunchAgents/com.claude-max-api.plist
## Links
- **npm:** https://www.npmjs.com/package/claude-max-api-proxy
- **GitHub:** https://github.com/atalovesyou/claude-max-api-proxy
- **Issues:** https://github.com/atalovesyou/claude-max-api-proxy/issues
- **npm:** [https://www.npmjs.com/package/claude-max-api-proxy](https://www.npmjs.com/package/claude-max-api-proxy)
- **GitHub:** [https://github.com/atalovesyou/claude-max-api-proxy](https://github.com/atalovesyou/claude-max-api-proxy)
- **Issues:** [https://github.com/atalovesyou/claude-max-api-proxy/issues](https://github.com/atalovesyou/claude-max-api-proxy/issues)
## Notes

View File

@@ -15,8 +15,8 @@ When enabled, OpenClaw uploads the audio file to Deepgram and injects the transc
into the reply pipeline (`{{Transcript}}` + `[Audio]` block). This is **not streaming**;
it uses the pre-recorded transcription endpoint.
Website: https://deepgram.com
Docs: https://developers.deepgram.com
Website: [https://deepgram.com](https://deepgram.com)
Docs: [https://developers.deepgram.com](https://developers.deepgram.com)
## Quick start

View File

@@ -179,7 +179,7 @@ Use the interactive config wizard to set MiniMax without editing JSON:
- Model refs are `minimax/<model>`.
- Coding Plan usage API: `https://api.minimaxi.com/v1/api/openplatform/coding_plan/remains` (requires a coding plan key).
- Update pricing values in `models.json` if you need exact cost tracking.
- Referral link for MiniMax Coding Plan (10% off): https://platform.minimax.io/subscribe/coding-plan?code=DbXJTRClnb&source=link
- Referral link for MiniMax Coding Plan (10% off): [https://platform.minimax.io/subscribe/coding-plan?code=DbXJTRClnb&source=link](https://platform.minimax.io/subscribe/coding-plan?code=DbXJTRClnb&source=link)
- See [/concepts/model-providers](/concepts/model-providers) for provider rules.
- Use `openclaw models list` and `openclaw models set minimax/MiniMax-M2.1` to switch.

View File

@@ -15,14 +15,14 @@ Kimi Coding with `kimi-coding/k2p5`.
Current Kimi K2 model IDs:
{/_ moonshot-kimi-k2-ids:start _/ && null}
{/_moonshot-kimi-k2-ids:start_/ && null}
- `kimi-k2.5`
- `kimi-k2-0905-preview`
- `kimi-k2-turbo-preview`
- `kimi-k2-thinking`
- `kimi-k2-thinking-turbo`
{/_ moonshot-kimi-k2-ids:end _/ && null}
{/_moonshot-kimi-k2-ids:end_/ && null}
```bash
openclaw onboard --auth-choice moonshot-api-key

View File

@@ -12,7 +12,7 @@ Ollama is a local LLM runtime that makes it easy to run open-source models on yo
## Quick start
1. Install Ollama: https://ollama.ai
1. Install Ollama: [https://ollama.ai](https://ollama.ai)
2. Pull a model:

View File

@@ -17,8 +17,8 @@ OpenClaw = CLAW + TARDIS, because every space lobster needs a time and space mac
## Core contributors
- **Maxim Vovshin** (@Hyaxia, 36747317+Hyaxia@users.noreply.github.com) - Blogwatcher skill
- **Nacho Iacovino** (@nachoiacovino, nacho.iacovino@gmail.com) - Location parsing (Telegram and WhatsApp)
- **Maxim Vovshin** (@Hyaxia, [36747317+Hyaxia@users.noreply.github.com](mailto:36747317+Hyaxia@users.noreply.github.com)) - Blogwatcher skill
- **Nacho Iacovino** (@nachoiacovino, [nacho.iacovino@gmail.com](mailto:nacho.iacovino@gmail.com)) - Location parsing (Telegram and WhatsApp)
## License

View File

@@ -85,7 +85,7 @@ re-caching the full prompt, reducing cache write costs.
For Anthropic API pricing, cache reads are significantly cheaper than input
tokens, while cache writes are billed at a higher multiplier. See Anthropic's
prompt caching pricing for the latest rates and TTL multipliers:
https://docs.anthropic.com/docs/build-with-claude/prompt-caching
[https://docs.anthropic.com/docs/build-with-claude/prompt-caching](https://docs.anthropic.com/docs/build-with-claude/prompt-caching)
### Example: keep 1h cache warm with heartbeat

View File

@@ -35,7 +35,7 @@ If you have multiple profiles, pass `--browser-profile <name>` (the default is `
## X/Twitter: recommended flow
- **Read/search/threads:** use the **bird** CLI skill (no browser, stable).
- Repo: https://github.com/steipete/bird
- Repo: [https://github.com/steipete/bird](https://github.com/steipete/bird)
- **Post updates:** use the **host** browser (manual login).
## Sandboxing + host browser access

View File

@@ -338,5 +338,5 @@ OpenProse pairs well with Lobster: use `/prose` to orchestrate multi-agent prep,
One public example: a “second brain” CLI + Lobster pipelines that manage three Markdown vaults (personal, partner, shared). The CLI emits JSON for stats, inbox listings, and stale scans; Lobster chains those commands into workflows like `weekly-review`, `inbox-triage`, `memory-consolidation`, and `shared-task-sync`, each with approval gates. AI handles judgment (categorization) when available and falls back to deterministic rules when not.
- Thread: https://x.com/plattenschieber/status/2014508656335770033
- Repo: https://github.com/bloomedai/brain-cli
- Thread: [https://x.com/plattenschieber/status/2014508656335770033](https://x.com/plattenschieber/status/2014508656335770033)
- Repo: [https://github.com/bloomedai/brain-cli](https://github.com/bloomedai/brain-cli)

View File

@@ -50,7 +50,7 @@ tool surface those skills teach.
## ClawHub (install + sync)
ClawHub is the public skills registry for OpenClaw. Browse at
https://clawhub.com. Use it to discover, install, update, and back up skills.
[https://clawhub.com](https://clawhub.com). Use it to discover, install, update, and back up skills.
Full guide: [ClawHub](/tools/clawhub).
Common flows:
@@ -295,6 +295,6 @@ See [Skills config](/tools/skills-config) for the full configuration schema.
## Looking for more skills?
Browse https://clawhub.com.
Browse [https://clawhub.com](https://clawhub.com).
---

View File

@@ -71,7 +71,7 @@ Example: switch to Perplexity Sonar (direct API):
## Getting a Brave API key
1. Create a Brave Search API account at https://brave.com/search/api/
1. Create a Brave Search API account at [https://brave.com/search/api/](https://brave.com/search/api/)
2. In the dashboard, choose the **Data for Search** plan (not “Data for AI”) and generate an API key.
3. Run `openclaw configure --section web` to store the key in config (recommended), or set `BRAVE_API_KEY` in your environment.
@@ -95,7 +95,7 @@ crypto/prepaid).
### Getting an OpenRouter API key
1. Create an account at https://openrouter.ai/
1. Create an account at [https://openrouter.ai/](https://openrouter.ai/)
2. Add credits (supports crypto, prepaid, or credit card)
3. Generate an API key in your account settings

View File

@@ -21,7 +21,7 @@ deployments work at a high level.
- **GCP (Compute Engine)**: [GCP](/install/gcp)
- **exe.dev** (VM + HTTPS proxy): [exe.dev](/install/exe-dev)
- **AWS (EC2/Lightsail/free tier)**: works well too. Video guide:
https://x.com/techfrenAJ/status/2014934471095812547
[https://x.com/techfrenAJ/status/2014934471095812547](https://x.com/techfrenAJ/status/2014934471095812547)
## How cloud setups work

View File

@@ -19,7 +19,7 @@ It speaks **directly to the Gateway WebSocket** on the same port.
If the Gateway is running on the same computer, open:
- http://127.0.0.1:18789/ (or http://localhost:18789/)
- [http://127.0.0.1:18789/](http://127.0.0.1:18789/) (or [http://localhost:18789/](http://localhost:18789/))
If the page fails to load, start the Gateway first: `openclaw gateway`.

View File

@@ -12,7 +12,7 @@ The Gateway dashboard is the browser Control UI served at `/` by default
Quick open (local Gateway):
- http://127.0.0.1:18789/ (or http://localhost:18789/)
- [http://127.0.0.1:18789/](http://127.0.0.1:18789/) (or [http://localhost:18789/](http://localhost:18789/))
Key references: