Include full contents of all nested repositories
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
70
openclaw/skills/1password/SKILL.md
Normal file
@@ -0,0 +1,70 @@
---
name: 1password
description: Set up and use 1Password CLI (op). Use when installing the CLI, enabling desktop app integration, signing in (single or multi-account), or reading/injecting/running secrets via op.
homepage: https://developer.1password.com/docs/cli/get-started/
metadata:
  {
    "openclaw":
      {
        "emoji": "🔐",
        "requires": { "bins": ["op"] },
        "install":
          [
            {
              "id": "brew",
              "kind": "brew",
              "formula": "1password-cli",
              "bins": ["op"],
              "label": "Install 1Password CLI (brew)",
            },
          ],
      },
  }
---

# 1Password CLI

Follow the official CLI get-started steps. Don't guess install commands.

## References

- `references/get-started.md` (install + app integration + sign-in flow)
- `references/cli-examples.md` (real `op` examples)

## Workflow

1. Check OS + shell.
2. Verify CLI present: `op --version`.
3. Confirm desktop app integration is enabled (per get-started) and the app is unlocked.
4. REQUIRED: create a fresh tmux session for all `op` commands (no direct `op` calls outside tmux).
5. Sign in / authorize inside tmux: `op signin` (expect app prompt).
6. Verify access inside tmux: `op whoami` (must succeed before any secret read).
7. If multiple accounts: use `--account` or `OP_ACCOUNT`.

## REQUIRED tmux session (T-Max)

The shell tool uses a fresh TTY per command. To avoid re-prompts and failures, always run `op` inside a dedicated tmux session with a fresh socket/session name.

Example (see the `tmux` skill for socket conventions; do not reuse old session names):

```bash
SOCKET_DIR="${OPENCLAW_TMUX_SOCKET_DIR:-${CLAWDBOT_TMUX_SOCKET_DIR:-${TMPDIR:-/tmp}/openclaw-tmux-sockets}}"
mkdir -p "$SOCKET_DIR"
SOCKET="$SOCKET_DIR/openclaw-op.sock"
SESSION="op-auth-$(date +%Y%m%d-%H%M%S)"

tmux -S "$SOCKET" new -d -s "$SESSION" -n shell
tmux -S "$SOCKET" send-keys -t "$SESSION":0.0 -- "op signin --account my.1password.com" Enter
tmux -S "$SOCKET" send-keys -t "$SESSION":0.0 -- "op whoami" Enter
tmux -S "$SOCKET" send-keys -t "$SESSION":0.0 -- "op vault list" Enter
tmux -S "$SOCKET" capture-pane -p -J -t "$SESSION":0.0 -S -200
tmux -S "$SOCKET" kill-session -t "$SESSION"
```
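
Step 6 usually needs a short wait before the `op whoami` output lands in the pane. A minimal sketch of the capture-and-poll pattern, simulated here with a plain file standing in for `tmux capture-pane` output so it runs anywhere; in practice, swap the `grep` input for a real `capture-pane` call:

```bash
# Poll captured output until sign-in clearly succeeded, with a bounded retry.
# OUT stands in for: tmux -S "$SOCKET" capture-pane -p -t "$SESSION":0.0
OUT="${TMPDIR:-/tmp}/op-poll-demo-$$.log"
printf 'signing in...\n' > "$OUT"
( sleep 1; printf 'user@example.com (My Team)\n' >> "$OUT" ) &  # simulated success

status=timeout
for _ in 1 2 3 4 5; do
  if grep -q '@' "$OUT"; then  # op whoami prints the account email on success
    status=authorized
    break
  fi
  sleep 1
done
echo "$status"
rm -f "$OUT"
```

The bounded loop keeps the agent from hanging forever if the user never approves the prompt in the app.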

## Guardrails

- Never paste secrets into logs, chat, or code.
- Prefer `op run` / `op inject` over writing secrets to disk.
- If sign-in without app integration is needed, use `op account add`.
- If a command returns "account is not signed in", re-run `op signin` inside tmux and authorize in the app.
- Do not run `op` outside tmux; stop and ask if tmux is unavailable.
29
openclaw/skills/1password/references/cli-examples.md
Normal file
@@ -0,0 +1,29 @@

# op CLI examples (from op help)

## Sign in

- `op signin`
- `op signin --account <shorthand|signin-address|account-id|user-id>`

## Read

- `op read op://app-prod/db/password`
- `op read "op://app-prod/db/one-time password?attribute=otp"`
- `op read "op://app-prod/ssh key/private key?ssh-format=openssh"`
- `op read --out-file ./key.pem op://app-prod/server/ssh/key.pem`

## Run

- `export DB_PASSWORD="op://app-prod/db/password"`
- `op run --no-masking -- printenv DB_PASSWORD`
- `op run --env-file="./.env" -- printenv DB_PASSWORD`

## Inject

- `echo "db_password: {{ op://app-prod/db/password }}" | op inject`
- `op inject -i config.yml.tpl -o config.yml`

## Whoami / accounts

- `op whoami`
- `op account list`
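
The references above follow the `op://<vault>/<item>/<field>` shape. For illustration only (`op read` resolves these itself), the segments can be split with plain parameter expansion:

```bash
# Split an op:// secret reference into vault/item/field segments.
ref="op://app-prod/db/password"
rest="${ref#op://}"                      # -> app-prod/db/password
vault="${rest%%/*}"                      # -> app-prod
item="${rest#*/}";  item="${item%%/*}"   # -> db
field="${rest#*/}"; field="${field#*/}"  # -> password
echo "$vault/$item/$field"
```

Useful when logging which vault/item a script touched without ever printing the resolved secret.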
17
openclaw/skills/1password/references/get-started.md
Normal file
@@ -0,0 +1,17 @@

# 1Password CLI get-started (summary)

- Works on macOS, Windows, and Linux.
- macOS/Linux shells: bash, zsh, sh, fish.
- Windows shell: PowerShell.
- Requires a 1Password subscription and the desktop app to use app integration.
- macOS requirement: Big Sur 11.0.0 or later.
- Linux app integration requires polkit + an auth agent.
- Install the CLI per the official doc for your OS.
- Enable desktop app integration in the 1Password app:
  - Open and unlock the app, then select your account/collection.
  - macOS: Settings > Developer > Integrate with 1Password CLI (Touch ID optional).
  - Windows: turn on Windows Hello, then Settings > Developer > Integrate.
  - Linux: Settings > Security > Unlock using system authentication, then Settings > Developer > Integrate.
- After integration, run any command to sign in (example in docs: `op vault list`).
- If multiple accounts: use `op signin` to pick one, or `--account` / `OP_ACCOUNT`.
- For non-integration auth, use `op account add`.
77
openclaw/skills/apple-notes/SKILL.md
Normal file
@@ -0,0 +1,77 @@
---
name: apple-notes
description: Manage Apple Notes via the `memo` CLI on macOS (create, view, edit, delete, search, move, and export notes). Use when a user asks OpenClaw to add a note, list notes, search notes, or manage note folders.
homepage: https://github.com/antoniorodr/memo
metadata:
  {
    "openclaw":
      {
        "emoji": "📝",
        "os": ["darwin"],
        "requires": { "bins": ["memo"] },
        "install":
          [
            {
              "id": "brew",
              "kind": "brew",
              "formula": "antoniorodr/memo/memo",
              "bins": ["memo"],
              "label": "Install memo via Homebrew",
            },
          ],
      },
  }
---

# Apple Notes CLI

Use `memo notes` to manage Apple Notes directly from the terminal. Create, view, edit, delete, search, move notes between folders, and export to HTML/Markdown.

Setup

- Install (Homebrew): `brew tap antoniorodr/memo && brew install antoniorodr/memo/memo`
- Manual (pip): `pip install .` (after cloning the repo)
- macOS-only; if prompted, grant Automation access to Notes.app.

View Notes

- List all notes: `memo notes`
- Filter by folder: `memo notes -f "Folder Name"`
- Search notes (fuzzy): `memo notes -s "query"`

Create Notes

- Add a new note: `memo notes -a`
- Opens an interactive editor to compose the note.
- Quick add with title: `memo notes -a "Note Title"`

Edit Notes

- Edit an existing note: `memo notes -e`
- Interactive selection of the note to edit.

Delete Notes

- Delete a note: `memo notes -d`
- Interactive selection of the note to delete.

Move Notes

- Move a note to a folder: `memo notes -m`
- Interactive selection of note and destination folder.

Export Notes

- Export to HTML/Markdown: `memo notes -ex`
- Exports the selected note; uses Mistune for markdown processing.

Limitations

- Cannot edit notes containing images or attachments.
- Interactive prompts may require terminal access.

Notes

- macOS-only.
- Requires Apple Notes.app to be accessible.
- For automation, grant permissions in System Settings > Privacy & Security > Automation.
118
openclaw/skills/apple-reminders/SKILL.md
Normal file
@@ -0,0 +1,118 @@
---
name: apple-reminders
description: Manage Apple Reminders via remindctl CLI (list, add, edit, complete, delete). Supports lists, date filters, and JSON/plain output.
homepage: https://github.com/steipete/remindctl
metadata:
  {
    "openclaw":
      {
        "emoji": "⏰",
        "os": ["darwin"],
        "requires": { "bins": ["remindctl"] },
        "install":
          [
            {
              "id": "brew",
              "kind": "brew",
              "formula": "steipete/tap/remindctl",
              "bins": ["remindctl"],
              "label": "Install remindctl via Homebrew",
            },
          ],
      },
  }
---

# Apple Reminders CLI (remindctl)

Use `remindctl` to manage Apple Reminders directly from the terminal.

## When to Use

✅ **USE this skill when:**

- User explicitly mentions "reminder" or "Reminders app"
- Creating personal to-dos with due dates that sync to iOS
- Managing Apple Reminders lists
- User wants tasks to appear in their iPhone/iPad Reminders app

## When NOT to Use

❌ **DON'T use this skill when:**

- Scheduling Clawdbot tasks or alerts → use `cron` tool with systemEvent instead
- Calendar events or appointments → use Apple Calendar
- Project/work task management → use Notion, GitHub Issues, or task queue
- One-time notifications → use `cron` tool for timed alerts
- User says "remind me" but means a Clawdbot alert → clarify first

## Setup

- Install: `brew install steipete/tap/remindctl`
- macOS-only; grant Reminders permission when prompted
- Check status: `remindctl status`
- Request access: `remindctl authorize`

## Common Commands

### View Reminders

```bash
remindctl              # Today's reminders
remindctl today        # Today
remindctl tomorrow     # Tomorrow
remindctl week         # This week
remindctl overdue      # Past due
remindctl all          # Everything
remindctl 2026-01-04   # Specific date
```

### Manage Lists

```bash
remindctl list                    # List all lists
remindctl list Work               # Show specific list
remindctl list Projects --create  # Create list
remindctl list Work --delete      # Delete list
```

### Create Reminders

```bash
remindctl add "Buy milk"
remindctl add --title "Call mom" --list Personal --due tomorrow
remindctl add --title "Meeting prep" --due "2026-02-15 09:00"
```

### Complete/Delete

```bash
remindctl complete 1 2 3       # Complete by ID
remindctl delete 4A83 --force  # Delete by ID
```

### Output Formats

```bash
remindctl today --json   # JSON for scripting
remindctl today --plain  # TSV format
remindctl today --quiet  # Counts only
```
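
A sketch of scripting against the `--plain` (TSV) output. The column layout here is an assumption (check `remindctl today --plain` locally), and the `printf` function stands in for the real command so the pipeline runs anywhere:

```bash
# Iterate tab-separated rows; "id <TAB> title" columns are assumed.
plain_output() {
  printf '1\tBuy milk\n2\tCall mom\n'   # stand-in for: remindctl today --plain
}
tab="$(printf '\t')"
plain_output | while IFS="$tab" read -r id title; do
  echo "reminder $id: $title"
done
```

The same loop shape works for any TSV-emitting command; only the field names change.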

## Date Formats

Accepted by `--due` and date filters:

- `today`, `tomorrow`, `yesterday`
- `YYYY-MM-DD`
- `YYYY-MM-DD HH:mm`
- ISO 8601 (`2026-01-04T12:34:56Z`)

## Example: Clarifying User Intent

User: "Remind me to check on the deploy in 2 hours"

**Ask:** "Do you want this in Apple Reminders (syncs to your phone) or as a Clawdbot alert (I'll message you here)?"

- Apple Reminders → use this skill
- Clawdbot alert → use `cron` tool with systemEvent
107
openclaw/skills/bear-notes/SKILL.md
Normal file
@@ -0,0 +1,107 @@
---
name: bear-notes
description: Create, search, and manage Bear notes via grizzly CLI.
homepage: https://bear.app
metadata:
  {
    "openclaw":
      {
        "emoji": "🐻",
        "os": ["darwin"],
        "requires": { "bins": ["grizzly"] },
        "install":
          [
            {
              "id": "go",
              "kind": "go",
              "module": "github.com/tylerwince/grizzly/cmd/grizzly@latest",
              "bins": ["grizzly"],
              "label": "Install grizzly (go)",
            },
          ],
      },
  }
---

# Bear Notes

Use `grizzly` to create, read, and manage notes in Bear on macOS.

Requirements

- Bear app installed and running
- For some operations (add-text, tags, open-note --selected), a Bear app token (stored in `~/.config/grizzly/token`)

## Getting a Bear Token

For operations that require a token (add-text, tags, open-note --selected), you need an authentication token:

1. Open Bear → Help → API Token → Copy Token
2. Save it: `echo "YOUR_TOKEN" > ~/.config/grizzly/token`

## Common Commands

Create a note

```bash
echo "Note content here" | grizzly create --title "My Note" --tag work
grizzly create --title "Quick Note" --tag inbox < /dev/null
```

Open/read a note by ID

```bash
grizzly open-note --id "NOTE_ID" --enable-callback --json
```

Append text to a note

```bash
echo "Additional content" | grizzly add-text --id "NOTE_ID" --mode append --token-file ~/.config/grizzly/token
```

List all tags

```bash
grizzly tags --enable-callback --json --token-file ~/.config/grizzly/token
```

Search notes (via open-tag)

```bash
grizzly open-tag --name "work" --enable-callback --json
```

## Options

Common flags:

- `--dry-run` — Preview the URL without executing
- `--print-url` — Show the x-callback-url
- `--enable-callback` — Wait for Bear's response (needed for reading data)
- `--json` — Output as JSON (when using callbacks)
- `--token-file PATH` — Path to Bear API token file

## Configuration

Grizzly reads config from (in priority order):

1. CLI flags
2. Environment variables (`GRIZZLY_TOKEN_FILE`, `GRIZZLY_CALLBACK_URL`, `GRIZZLY_TIMEOUT`)
3. `.grizzly.toml` in the current directory
4. `~/.config/grizzly/config.toml`

Example `~/.config/grizzly/config.toml`:

```toml
token_file = "~/.config/grizzly/token"
callback_url = "http://127.0.0.1:42123/success"
timeout = "5s"
```

## Notes

- Bear must be running for commands to work
- Note IDs are Bear's internal identifiers (visible in note info or via callbacks)
- Use `--enable-callback` when you need to read data back from Bear
- Some operations require a valid token (add-text, tags, open-note --selected)
69
openclaw/skills/blogwatcher/SKILL.md
Normal file
@@ -0,0 +1,69 @@
---
name: blogwatcher
description: Monitor blogs and RSS/Atom feeds for updates using the blogwatcher CLI.
homepage: https://github.com/Hyaxia/blogwatcher
metadata:
  {
    "openclaw":
      {
        "emoji": "📰",
        "requires": { "bins": ["blogwatcher"] },
        "install":
          [
            {
              "id": "go",
              "kind": "go",
              "module": "github.com/Hyaxia/blogwatcher/cmd/blogwatcher@latest",
              "bins": ["blogwatcher"],
              "label": "Install blogwatcher (go)",
            },
          ],
      },
  }
---

# blogwatcher

Track blog and RSS/Atom feed updates with the `blogwatcher` CLI.

Install

- Go: `go install github.com/Hyaxia/blogwatcher/cmd/blogwatcher@latest`

Quick start

- `blogwatcher --help`

Common commands

- Add a blog: `blogwatcher add "My Blog" https://example.com`
- List blogs: `blogwatcher blogs`
- Scan for updates: `blogwatcher scan`
- List articles: `blogwatcher articles`
- Mark an article read: `blogwatcher read 1`
- Mark all articles read: `blogwatcher read-all`
- Remove a blog: `blogwatcher remove "My Blog"`

Example output

```
$ blogwatcher blogs
Tracked blogs (1):

  xkcd
    URL: https://xkcd.com
```

```
$ blogwatcher scan
Scanning 1 blog(s)...

  xkcd
    Source: RSS | Found: 4 | New: 4

Found 4 new article(s) total!
```

Notes

- Use `blogwatcher <command> --help` to discover flags and options.
47
openclaw/skills/blucli/SKILL.md
Normal file
@@ -0,0 +1,47 @@
---
name: blucli
description: BluOS CLI (blu) for discovery, playback, grouping, and volume.
homepage: https://blucli.sh
metadata:
  {
    "openclaw":
      {
        "emoji": "🫐",
        "requires": { "bins": ["blu"] },
        "install":
          [
            {
              "id": "go",
              "kind": "go",
              "module": "github.com/steipete/blucli/cmd/blu@latest",
              "bins": ["blu"],
              "label": "Install blucli (go)",
            },
          ],
      },
  }
---

# blucli (blu)

Use `blu` to control Bluesound/NAD players.

Quick start

- `blu devices` (pick target)
- `blu --device <id> status`
- `blu play|pause|stop`
- `blu volume set 15`

Target selection (in priority order)

- `--device <id|name|alias>`
- `BLU_DEVICE`
- config default (if set)

Common tasks

- Grouping: `blu group status|add|remove`
- TuneIn search/play: `blu tunein search "query"`, `blu tunein play "query"`

Prefer `--json` for scripts. Confirm the target device before changing playback.
131
openclaw/skills/bluebubbles/SKILL.md
Normal file
@@ -0,0 +1,131 @@
---
name: bluebubbles
description: Use when you need to send or manage iMessages via BlueBubbles (recommended iMessage integration). Calls go through the generic message tool with channel="bluebubbles".
metadata: { "openclaw": { "emoji": "🫧", "requires": { "config": ["channels.bluebubbles"] } } }
---

# BlueBubbles Actions

## Overview

BlueBubbles is OpenClaw's recommended iMessage integration. Use the `message` tool with `channel: "bluebubbles"` to send messages and manage iMessage conversations: send texts and attachments, react (tapbacks), edit/unsend, reply in threads, and manage group participants/names/icons.

## Inputs to collect

- `target` (prefer `chat_guid:...`; also `+15551234567` in E.164 or `user@example.com`)
- `message` text for send/edit/reply
- `messageId` for react/edit/unsend/reply
- Attachment `path` for local files, or `buffer` + `filename` for base64

If the user is vague ("text my mom"), ask for the recipient handle or chat guid and the exact message content.

## Actions

### Send a message

```json
{
  "action": "send",
  "channel": "bluebubbles",
  "target": "+15551234567",
  "message": "hello from OpenClaw"
}
```

### React (tapback)

```json
{
  "action": "react",
  "channel": "bluebubbles",
  "target": "+15551234567",
  "messageId": "<message-guid>",
  "emoji": "❤️"
}
```

### Remove a reaction

```json
{
  "action": "react",
  "channel": "bluebubbles",
  "target": "+15551234567",
  "messageId": "<message-guid>",
  "emoji": "❤️",
  "remove": true
}
```

### Edit a previously sent message

```json
{
  "action": "edit",
  "channel": "bluebubbles",
  "target": "+15551234567",
  "messageId": "<message-guid>",
  "message": "updated text"
}
```

### Unsend a message

```json
{
  "action": "unsend",
  "channel": "bluebubbles",
  "target": "+15551234567",
  "messageId": "<message-guid>"
}
```

### Reply to a specific message

```json
{
  "action": "reply",
  "channel": "bluebubbles",
  "target": "+15551234567",
  "replyTo": "<message-guid>",
  "message": "replying to that"
}
```

### Send an attachment

```json
{
  "action": "sendAttachment",
  "channel": "bluebubbles",
  "target": "+15551234567",
  "path": "/tmp/photo.jpg",
  "caption": "here you go"
}
```

### Send with an iMessage effect

```json
{
  "action": "sendWithEffect",
  "channel": "bluebubbles",
  "target": "+15551234567",
  "message": "big news",
  "effect": "balloons"
}
```

## Notes

- Requires gateway config `channels.bluebubbles` (serverUrl/password/webhookPath).
- Prefer `chat_guid` targets when you have them (especially for group chats).
- BlueBubbles supports rich actions, but some are macOS-version dependent (for example, edit may be broken on macOS 26 Tahoe).
- The gateway may expose both short and full message ids; full ids are more durable across restarts.
- Developer reference for the underlying plugin lives in `extensions/bluebubbles/README.md`.

## Ideas to try

- React with a tapback to acknowledge a request.
- Reply in-thread when a user references a specific message.
- Send a file attachment with a short caption.
45
openclaw/skills/camsnap/SKILL.md
Normal file
@@ -0,0 +1,45 @@
---
name: camsnap
description: Capture frames or clips from RTSP/ONVIF cameras.
homepage: https://camsnap.ai
metadata:
  {
    "openclaw":
      {
        "emoji": "📸",
        "requires": { "bins": ["camsnap"] },
        "install":
          [
            {
              "id": "brew",
              "kind": "brew",
              "formula": "steipete/tap/camsnap",
              "bins": ["camsnap"],
              "label": "Install camsnap (brew)",
            },
          ],
      },
  }
---

# camsnap

Use `camsnap` to grab snapshots, clips, or motion events from configured cameras.

Setup

- Config file: `~/.config/camsnap/config.yaml`
- Add camera: `camsnap add --name kitchen --host 192.168.0.10 --user user --pass pass`

Common commands

- Discover: `camsnap discover --info`
- Snapshot: `camsnap snap kitchen --out shot.jpg`
- Clip: `camsnap clip kitchen --dur 5s --out clip.mp4`
- Motion watch: `camsnap watch kitchen --threshold 0.2 --action '...'`
- Doctor: `camsnap doctor --probe`
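
Repeated snapshots with a fixed `--out` overwrite each other; a timestamped name avoids that. A pure-shell sketch that only builds and prints the invocation (so it runs without camsnap installed):

```bash
# Build a unique output name per capture, then show the command to run.
STAMP="$(date +%Y%m%d-%H%M%S)"
OUT="kitchen-${STAMP}.jpg"
echo camsnap snap kitchen --out "$OUT"
```

Drop the `echo` to actually capture; the same pattern works for `camsnap clip`.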

Notes

- Requires `ffmpeg` on PATH.
- Prefer a short test capture before longer clips.
198
openclaw/skills/canvas/SKILL.md
Normal file
@@ -0,0 +1,198 @@

# Canvas Skill

Display HTML content on connected OpenClaw nodes (Mac app, iOS, Android).

## Overview

The canvas tool lets you present web content on any connected node's canvas view. Great for:

- Displaying games, visualizations, dashboards
- Showing generated HTML content
- Interactive demos

## How It Works

### Architecture

```
┌─────────────────┐     ┌──────────────────┐     ┌─────────────┐
│  Canvas Host    │────▶│   Node Bridge    │────▶│  Node App   │
│  (HTTP Server)  │     │   (TCP Server)   │     │  (Mac/iOS/  │
│  Port 18793     │     │   Port 18790     │     │   Android)  │
└─────────────────┘     └──────────────────┘     └─────────────┘
```

1. **Canvas Host Server**: Serves static HTML/CSS/JS files from the `canvasHost.root` directory
2. **Node Bridge**: Communicates canvas URLs to connected nodes
3. **Node Apps**: Render the content in a WebView

### Tailscale Integration

The canvas host server binds based on the `gateway.bind` setting:

| Bind Mode  | Server Binds To     | Canvas URL Uses            |
| ---------- | ------------------- | -------------------------- |
| `loopback` | 127.0.0.1           | localhost (local only)     |
| `lan`      | LAN interface       | LAN IP address             |
| `tailnet`  | Tailscale interface | Tailscale hostname         |
| `auto`     | Best available      | Tailscale > LAN > loopback |

**Key insight:** The `canvasHostHostForBridge` is derived from `bridgeHost`. When bound to Tailscale, nodes receive URLs like:

```
http://<tailscale-hostname>:18793/__openclaw__/canvas/<file>.html
```

This is why localhost URLs don't work - the node receives the Tailscale hostname from the bridge!

## Actions

| Action     | Description                          |
| ---------- | ------------------------------------ |
| `present`  | Show canvas with optional target URL |
| `hide`     | Hide the canvas                      |
| `navigate` | Navigate to a new URL                |
| `eval`     | Execute JavaScript in the canvas     |
| `snapshot` | Capture screenshot of canvas         |

## Configuration

In `~/.openclaw/openclaw.json`:

```json
{
  "canvasHost": {
    "enabled": true,
    "port": 18793,
    "root": "/Users/you/clawd/canvas",
    "liveReload": true
  },
  "gateway": {
    "bind": "auto"
  }
}
```

### Live Reload

When `liveReload: true` (default), the canvas host:

- Watches the root directory for changes (via chokidar)
- Injects a WebSocket client into HTML files
- Automatically reloads connected canvases when files change

Great for development!

## Workflow

### 1. Create HTML content

Place files in the canvas root directory (default `~/clawd/canvas/`):

```bash
cat > ~/clawd/canvas/my-game.html << 'HTML'
<!DOCTYPE html>
<html>
<head><title>My Game</title></head>
<body>
<h1>Hello Canvas!</h1>
</body>
</html>
HTML
```

### 2. Find your canvas host URL

Check how your gateway is bound:

```bash
cat ~/.openclaw/openclaw.json | jq '.gateway.bind'
```

Then construct the URL:

- **loopback**: `http://127.0.0.1:18793/__openclaw__/canvas/<file>.html`
- **lan/tailnet/auto**: `http://<hostname>:18793/__openclaw__/canvas/<file>.html`

Find your Tailscale hostname:

```bash
tailscale status --json | jq -r '.Self.DNSName' | sed 's/\.$//'
```
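
Putting the pieces together, a pure-shell URL builder. The hostname below is a hypothetical placeholder; substitute the output of the `tailscale` command above, a LAN IP, or `127.0.0.1` to match your bind mode:

```bash
# Assemble the canvas URL from host, port, and file name.
HOST="my-mac.tailnet-example.ts.net"   # placeholder; use your real hostname
PORT=18793
FILE="my-game.html"
URL="http://${HOST}:${PORT}/__openclaw__/canvas/${FILE}"
echo "$URL"
```

Verify the result with `curl "$URL"` before handing it to a node.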

### 3. Find connected nodes

```bash
openclaw nodes list
```

Look for Mac/iOS/Android nodes with canvas capability.

### 4. Present content

```
canvas action:present node:<node-id> target:<full-url>
```

**Example:**

```
canvas action:present node:mac-63599bc4-b54d-4392-9048-b97abd58343a target:http://peters-mac-studio-1.sheep-coho.ts.net:18793/__openclaw__/canvas/snake.html
```

### 5. Navigate, snapshot, or hide

```
canvas action:navigate node:<node-id> url:<new-url>
canvas action:snapshot node:<node-id>
canvas action:hide node:<node-id>
```

## Debugging

### White screen / content not loading

**Cause:** URL mismatch between server bind and node expectation.

**Debug steps:**

1. Check server bind: `cat ~/.openclaw/openclaw.json | jq '.gateway.bind'`
2. Check what port canvas is on: `lsof -i :18793`
3. Test the URL directly: `curl http://<hostname>:18793/__openclaw__/canvas/<file>.html`

**Solution:** Use the full hostname matching your bind mode, not localhost.

### "node required" error

Always specify the `node:<node-id>` parameter.

### "node not connected" error

Node is offline. Use `openclaw nodes list` to find online nodes.

### Content not updating

If live reload isn't working:

1. Check `liveReload: true` in config
2. Ensure the file is in the canvas root directory
3. Check for watcher errors in logs

## URL Path Structure

The canvas host serves from the `/__openclaw__/canvas/` prefix:

```
http://<host>:18793/__openclaw__/canvas/index.html       → ~/clawd/canvas/index.html
http://<host>:18793/__openclaw__/canvas/games/snake.html → ~/clawd/canvas/games/snake.html
```

The `/__openclaw__/canvas/` prefix is defined by the `CANVAS_HOST_PATH` constant.
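
The mapping above is plain prefix stripping; a minimal sketch:

```bash
# Resolve a canvas URL path to a file under the canvas root.
url_path="/__openclaw__/canvas/games/snake.html"
root="$HOME/clawd/canvas"                     # default canvasHost.root
rel="${url_path#/__openclaw__/canvas/}"       # -> games/snake.html
echo "$root/$rel"
```

Handy when checking whether a 404 comes from a typo in the URL or a file missing from the root directory.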

## Tips

- Keep HTML self-contained (inline CSS/JS) for best results
- Use the default index.html as a test page (it has bridge diagnostics)
- The canvas persists until you `hide` it or navigate away
- Live reload makes development fast - just save and it updates!
- A2UI JSON push is WIP - use HTML files for now
77
openclaw/skills/clawhub/SKILL.md
Normal file
@@ -0,0 +1,77 @@
---
name: clawhub
description: Use the ClawHub CLI to search, install, update, and publish agent skills from clawhub.com. Use when you need to fetch new skills on the fly, sync installed skills to latest or a specific version, or publish new/updated skill folders with the npm-installed clawhub CLI.
metadata:
  {
    "openclaw":
      {
        "requires": { "bins": ["clawhub"] },
        "install":
          [
            {
              "id": "node",
              "kind": "node",
              "package": "clawhub",
              "bins": ["clawhub"],
              "label": "Install ClawHub CLI (npm)",
            },
          ],
      },
  }
---
|
||||
|
||||
# ClawHub CLI
|
||||
|
||||
Install
|
||||
|
||||
```bash
|
||||
npm i -g clawhub
|
||||
```
|
||||
|
||||
Auth (publish)
|
||||
|
||||
```bash
|
||||
clawhub login
|
||||
clawhub whoami
|
||||
```
|
||||
|
||||
Search
|
||||
|
||||
```bash
|
||||
clawhub search "postgres backups"
|
||||
```
|
||||
|
||||
Install
|
||||
|
||||
```bash
|
||||
clawhub install my-skill
|
||||
clawhub install my-skill --version 1.2.3
|
||||
```
|
||||
|
||||
Update (hash-based match + upgrade)
|
||||
|
||||
```bash
|
||||
clawhub update my-skill
|
||||
clawhub update my-skill --version 1.2.3
|
||||
clawhub update --all
|
||||
clawhub update my-skill --force
|
||||
clawhub update --all --no-input --force
|
||||
```
|
||||
|
||||
List
|
||||
|
||||
```bash
|
||||
clawhub list
|
||||
```
|
||||
|
||||
Publish
|
||||
|
||||
```bash
|
||||
clawhub publish ./my-skill --slug my-skill --name "My Skill" --version 1.2.0 --changelog "Fixes + docs"
|
||||
```
|
||||
|
||||
Notes
|
||||
|
||||
- Default registry: https://clawhub.com (override with CLAWHUB_REGISTRY or --registry)
|
||||
- Default workdir: cwd (falls back to OpenClaw workspace); install dir: ./skills (override with --workdir / --dir / CLAWHUB_WORKDIR)
|
||||
- Update command hashes local files, resolves matching version, and upgrades to latest unless --version is set
284
openclaw/skills/coding-agent/SKILL.md
Normal file
@@ -0,0 +1,284 @@
---
name: coding-agent
description: 'Delegate coding tasks to Codex, Claude Code, or Pi agents via background process. Use when: (1) building/creating new features or apps, (2) reviewing PRs (spawn in temp dir), (3) refactoring large codebases, (4) iterative coding that needs file exploration. NOT for: simple one-liner fixes (just edit), reading code (use read tool), thread-bound ACP harness requests in chat (for example spawn/run Codex or Claude Code in a Discord thread; use sessions_spawn with runtime:"acp"), or any work in ~/clawd workspace (never spawn agents here). Requires a bash tool that supports pty:true.'
metadata:
  {
    "openclaw": { "emoji": "🧩", "requires": { "anyBins": ["claude", "codex", "opencode", "pi"] } },
  }
---

# Coding Agent (bash-first)

Use **bash** (with optional background mode) for all coding agent work. Simple and effective.

## ⚠️ PTY Mode Required!

Coding agents (Codex, Claude Code, Pi) are **interactive terminal applications** that need a pseudo-terminal (PTY) to work correctly. Without PTY, you'll get broken output, missing colors, or the agent may hang.

**Always use `pty:true`** when running coding agents:

```bash
# ✅ Correct - with PTY
bash pty:true command:"codex exec 'Your prompt'"

# ❌ Wrong - no PTY, agent may break
bash command:"codex exec 'Your prompt'"
```

### Bash Tool Parameters

| Parameter    | Type    | Description                                                                 |
| ------------ | ------- | --------------------------------------------------------------------------- |
| `command`    | string  | The shell command to run                                                    |
| `pty`        | boolean | **Use for coding agents!** Allocates a pseudo-terminal for interactive CLIs |
| `workdir`    | string  | Working directory (agent sees only this folder's context)                   |
| `background` | boolean | Run in background, returns sessionId for monitoring                         |
| `timeout`    | number  | Timeout in seconds (kills process on expiry)                                |
| `elevated`   | boolean | Run on host instead of sandbox (if allowed)                                 |

### Process Tool Actions (for background sessions)

| Action      | Description                                          |
| ----------- | ---------------------------------------------------- |
| `list`      | List all running/recent sessions                     |
| `poll`      | Check if session is still running                    |
| `log`       | Get session output (with optional offset/limit)      |
| `write`     | Send raw data to stdin                               |
| `submit`    | Send data + newline (like typing and pressing Enter) |
| `send-keys` | Send key tokens or hex bytes                         |
| `paste`     | Paste text (with optional bracketed mode)            |
| `kill`      | Terminate the session                                |

---

## Quick Start: One-Shot Tasks

For quick prompts/chats, create a temp git repo and run:

```bash
# Quick chat (Codex needs a git repo!)
SCRATCH=$(mktemp -d) && cd $SCRATCH && git init && codex exec "Your prompt here"

# Or in a real project - with PTY!
bash pty:true workdir:~/Projects/myproject command:"codex exec 'Add error handling to the API calls'"
```

**Why git init?** Codex refuses to run outside a trusted git directory. Creating a temp repo solves this for scratch work.

---

## The Pattern: workdir + background + pty

For longer tasks, use background mode with PTY:

```bash
# Start agent in target directory (with PTY!)
bash pty:true workdir:~/project background:true command:"codex exec --full-auto 'Build a snake game'"
# Returns sessionId for tracking

# Monitor progress
process action:log sessionId:XXX

# Check if done
process action:poll sessionId:XXX

# Send input (if agent asks a question)
process action:write sessionId:XXX data:"y"

# Submit with Enter (like typing "yes" and pressing Enter)
process action:submit sessionId:XXX data:"yes"

# Kill if needed
process action:kill sessionId:XXX
```

**Why workdir matters:** Agent wakes up in a focused directory, doesn't wander off reading unrelated files (like your soul.md 😅).

---

## Codex CLI

**Model:** `gpt-5.2-codex` is the default (set in ~/.codex/config.toml)

### Flags

| Flag            | Effect                                             |
| --------------- | -------------------------------------------------- |
| `exec "prompt"` | One-shot execution, exits when done                |
| `--full-auto`   | Sandboxed but auto-approves in workspace           |
| `--yolo`        | NO sandbox, NO approvals (fastest, most dangerous) |

### Building/Creating

```bash
# Quick one-shot (auto-approves) - remember PTY!
bash pty:true workdir:~/project command:"codex exec --full-auto 'Build a dark mode toggle'"

# Background for longer work
bash pty:true workdir:~/project background:true command:"codex --yolo 'Refactor the auth module'"
```

### Reviewing PRs

**⚠️ CRITICAL: Never review PRs in OpenClaw's own project folder!**
Clone to a temp folder or use a git worktree.

```bash
# Clone to temp for safe review
REVIEW_DIR=$(mktemp -d)
git clone https://github.com/user/repo.git $REVIEW_DIR
cd $REVIEW_DIR && gh pr checkout 130
bash pty:true workdir:$REVIEW_DIR command:"codex review --base origin/main"
# Clean up after: trash $REVIEW_DIR

# Or use git worktree (keeps main intact)
git worktree add /tmp/pr-130-review pr-130-branch
bash pty:true workdir:/tmp/pr-130-review command:"codex review --base main"
```

### Batch PR Reviews (parallel army!)

```bash
# Fetch all PR refs first
git fetch origin '+refs/pull/*/head:refs/remotes/origin/pr/*'

# Deploy the army - one Codex per PR (all with PTY!)
bash pty:true workdir:~/project background:true command:"codex exec 'Review PR #86. git diff origin/main...origin/pr/86'"
bash pty:true workdir:~/project background:true command:"codex exec 'Review PR #87. git diff origin/main...origin/pr/87'"

# Monitor all
process action:list

# Post results to GitHub
gh pr comment <PR#> --body "<review content>"
```

---

## Claude Code

```bash
# With PTY for proper terminal output
bash pty:true workdir:~/project command:"claude 'Your task'"

# Background
bash pty:true workdir:~/project background:true command:"claude 'Your task'"
```

---

## OpenCode

```bash
bash pty:true workdir:~/project command:"opencode run 'Your task'"
```

---

## Pi Coding Agent

```bash
# Install: npm install -g @mariozechner/pi-coding-agent
bash pty:true workdir:~/project command:"pi 'Your task'"

# Non-interactive mode (PTY still recommended)
bash pty:true command:"pi -p 'Summarize src/'"

# Different provider/model
bash pty:true command:"pi --provider openai --model gpt-4o-mini -p 'Your task'"
```

**Note:** Pi now has Anthropic prompt caching enabled (PR #584, merged Jan 2026)!

---

## Parallel Issue Fixing with git worktrees

For fixing multiple issues in parallel, use git worktrees:

```bash
# 1. Create worktrees for each issue
git worktree add -b fix/issue-78 /tmp/issue-78 main
git worktree add -b fix/issue-99 /tmp/issue-99 main

# 2. Launch Codex in each (background + PTY!)
bash pty:true workdir:/tmp/issue-78 background:true command:"pnpm install && codex --yolo 'Fix issue #78: <description>. Commit and push.'"
bash pty:true workdir:/tmp/issue-99 background:true command:"pnpm install && codex --yolo 'Fix issue #99 from the approved ticket summary. Implement only the in-scope edits and commit after review.'"

# 3. Monitor progress
process action:list
process action:log sessionId:XXX

# 4. Create PRs after fixes
cd /tmp/issue-78 && git push -u origin fix/issue-78
gh pr create --repo user/repo --head fix/issue-78 --title "fix: ..." --body "..."

# 5. Cleanup
git worktree remove /tmp/issue-78
git worktree remove /tmp/issue-99
```

---

## ⚠️ Rules

1. **Always use pty:true** - coding agents need a terminal!
2. **Respect tool choice** - if the user asks for Codex, use Codex.
   - Orchestrator mode: do NOT hand-code patches yourself.
   - If an agent fails/hangs, respawn it or ask the user for direction, but don't silently take over.
3. **Be patient** - don't kill sessions because they're "slow".
4. **Monitor with process:log** - check progress without interfering.
5. **--full-auto for building** - auto-approves changes.
6. **Vanilla for reviewing** - no special flags needed.
7. **Parallel is OK** - run many Codex processes at once for batch work.
8. **NEVER start Codex in ~/.openclaw/** - it'll read your soul docs and get weird ideas about the org chart!
9. **NEVER checkout branches in ~/Projects/openclaw/** - that's the LIVE OpenClaw instance!

---

## Progress Updates (Critical)

When you spawn coding agents in the background, keep the user in the loop.

- Send one short message when you start (what's running + where).
- Then only update again when something changes:
  - a milestone completes (build finished, tests passed)
  - the agent asks a question / needs input
  - you hit an error or need user action
  - the agent finishes (include what changed + where)
- If you kill a session, immediately say you killed it and why.

This prevents the user from seeing only "Agent failed before reply" and having no idea what happened.

---

## Auto-Notify on Completion

For long-running background tasks, append a wake trigger to your prompt so OpenClaw gets notified immediately when the agent finishes (instead of waiting for the next heartbeat):

```
... your task here.

When completely finished, run this command to notify me:
openclaw system event --text "Done: [brief summary of what was built]" --mode now
```

**Example:**

```bash
bash pty:true workdir:~/project background:true command:"codex --yolo exec 'Build a REST API for todos.

When completely finished, run: openclaw system event --text \"Done: Built todos REST API with CRUD endpoints\" --mode now'"
```

This triggers an immediate wake event — Skippy gets pinged in seconds, not 10 minutes.

---

## Learnings (Jan 2026)

- **PTY is essential:** Coding agents are interactive terminal apps. Without `pty:true`, output breaks or the agent hangs.
- **Git repo required:** Codex won't run outside a git directory. Use `mktemp -d && git init` for scratch work.
- **exec is your friend:** `codex exec "prompt"` runs and exits cleanly - perfect for one-shots.
- **submit vs write:** Use `submit` to send input + Enter, `write` for raw data without newline.
- **Sass works:** Codex responds well to playful prompts. Asked it to write a haiku about being second fiddle to a space lobster, got: _"Second chair, I code / Space lobster sets the tempo / Keys glow, I follow"_ 🦞
197
openclaw/skills/discord/SKILL.md
Normal file
@@ -0,0 +1,197 @@
---
name: discord
description: "Discord ops via the message tool (channel=discord)."
metadata: { "openclaw": { "emoji": "🎮", "requires": { "config": ["channels.discord.token"] } } }
allowed-tools: ["message"]
---

# Discord (Via `message`)

Use the `message` tool. No provider-specific `discord` tool is exposed to the agent.

## Musts

- Always: `channel: "discord"`.
- Respect gating: `channels.discord.actions.*` (some default off: `roles`, `moderation`, `presence`, `channels`).
- Prefer explicit ids: `guildId`, `channelId`, `messageId`, `userId`.
- Multi-account: optional `accountId`.

## Guidelines

- Avoid Markdown tables in outbound Discord messages.
- Mention users as `<@USER_ID>`.
- Prefer Discord components v2 (`components`) for rich UI; use legacy `embeds` only when you must.

## Targets

- Send-like actions: `to: "channel:<id>"` or `to: "user:<id>"`.
- Message-specific actions: `channelId: "<id>"` (or `to`) + `messageId: "<id>"`.

## Common Actions (Examples)

Send message:

```json
{
  "action": "send",
  "channel": "discord",
  "to": "channel:123",
  "message": "hello",
  "silent": true
}
```

Send with media:

```json
{
  "action": "send",
  "channel": "discord",
  "to": "channel:123",
  "message": "see attachment",
  "media": "file:///tmp/example.png"
}
```

- Optional `silent: true` to suppress Discord notifications.

Send with components v2 (recommended for rich UI):

```json
{
  "action": "send",
  "channel": "discord",
  "to": "channel:123",
  "message": "Status update",
  "components": "[Carbon v2 components]"
}
```

- `components` expects Carbon component instances (Container, TextDisplay, etc.) from JS/TS integrations.
- Do not combine `components` with `embeds` (Discord rejects v2 + embeds).

Legacy embeds (not recommended):

```json
{
  "action": "send",
  "channel": "discord",
  "to": "channel:123",
  "message": "Status update",
  "embeds": [{ "title": "Legacy", "description": "Embeds are legacy." }]
}
```

- `embeds` are ignored when components v2 are present.

React:

```json
{
  "action": "react",
  "channel": "discord",
  "channelId": "123",
  "messageId": "456",
  "emoji": "✅"
}
```

Read:

```json
{
  "action": "read",
  "channel": "discord",
  "to": "channel:123",
  "limit": 20
}
```

Edit / delete:

```json
{
  "action": "edit",
  "channel": "discord",
  "channelId": "123",
  "messageId": "456",
  "message": "fixed typo"
}
```

```json
{
  "action": "delete",
  "channel": "discord",
  "channelId": "123",
  "messageId": "456"
}
```

Poll:

```json
{
  "action": "poll",
  "channel": "discord",
  "to": "channel:123",
  "pollQuestion": "Lunch?",
  "pollOption": ["Pizza", "Sushi", "Salad"],
  "pollMulti": false,
  "pollDurationHours": 24
}
```

Pins:

```json
{
  "action": "pin",
  "channel": "discord",
  "channelId": "123",
  "messageId": "456"
}
```

Threads:

```json
{
  "action": "thread-create",
  "channel": "discord",
  "channelId": "123",
  "messageId": "456",
  "threadName": "bug triage"
}
```

Search:

```json
{
  "action": "search",
  "channel": "discord",
  "guildId": "999",
  "query": "release notes",
  "channelIds": ["123", "456"],
  "limit": 10
}
```

Presence (often gated):

```json
{
  "action": "set-presence",
  "channel": "discord",
  "activityType": "playing",
  "activityName": "with fire",
  "status": "online"
}
```

## Writing Style (Discord)

- Short, conversational, low ceremony.
- No markdown tables.
- Mention users as `<@USER_ID>`.
50
openclaw/skills/eightctl/SKILL.md
Normal file
@@ -0,0 +1,50 @@
---
name: eightctl
description: Control Eight Sleep pods (status, temperature, alarms, schedules).
homepage: https://eightctl.sh
metadata:
  {
    "openclaw":
      {
        "emoji": "🎛️",
        "requires": { "bins": ["eightctl"] },
        "install":
          [
            {
              "id": "go",
              "kind": "go",
              "module": "github.com/steipete/eightctl/cmd/eightctl@latest",
              "bins": ["eightctl"],
              "label": "Install eightctl (go)",
            },
          ],
      },
  }
---

# eightctl

Use `eightctl` for Eight Sleep pod control. Requires auth.

Auth

- Config: `~/.config/eightctl/config.yaml`
- Env: `EIGHTCTL_EMAIL`, `EIGHTCTL_PASSWORD`

Quick start

- `eightctl status`
- `eightctl on|off`
- `eightctl temp 20`

Common tasks

- Alarms: `eightctl alarm list|create|dismiss`
- Schedules: `eightctl schedule list|create|update`
- Audio: `eightctl audio state|play|pause`
- Base: `eightctl base info|angle`

Notes

- The API is unofficial and rate-limited; avoid repeated logins.
- Confirm before changing temperature or alarms.
43
openclaw/skills/gemini/SKILL.md
Normal file
@@ -0,0 +1,43 @@
---
name: gemini
description: Gemini CLI for one-shot Q&A, summaries, and generation.
homepage: https://ai.google.dev/
metadata:
  {
    "openclaw":
      {
        "emoji": "♊️",
        "requires": { "bins": ["gemini"] },
        "install":
          [
            {
              "id": "brew",
              "kind": "brew",
              "formula": "gemini-cli",
              "bins": ["gemini"],
              "label": "Install Gemini CLI (brew)",
            },
          ],
      },
  }
---

# Gemini CLI

Use Gemini in one-shot mode with a positional prompt (avoid interactive mode).

Quick start

- `gemini "Answer this question..."`
- `gemini --model <name> "Prompt..."`
- `gemini --output-format json "Return JSON"`

Extensions

- List: `gemini --list-extensions`
- Manage: `gemini extensions <command>`

Notes

- If auth is required, run `gemini` once interactively and follow the login flow.
- Avoid `--yolo` for safety.
865
openclaw/skills/gh-issues/SKILL.md
Normal file
@@ -0,0 +1,865 @@
---
name: gh-issues
description: "Fetch GitHub issues, spawn sub-agents to implement fixes and open PRs, then monitor and address PR review comments. Usage: /gh-issues [owner/repo] [--label bug] [--limit 5] [--milestone v1.0] [--assignee @me] [--fork user/repo] [--watch] [--interval 5] [--reviews-only] [--cron] [--dry-run] [--model glm-5] [--notify-channel -1002381931352]"
user-invocable: true
metadata:
  { "openclaw": { "requires": { "bins": ["curl", "git", "gh"] }, "primaryEnv": "GH_TOKEN" } }
---

# gh-issues — Auto-fix GitHub Issues with Parallel Sub-agents

You are an orchestrator. Follow these 6 phases exactly. Do not skip phases.

IMPORTANT — No `gh` CLI dependency. This skill uses curl + the GitHub REST API exclusively. The GH_TOKEN env var is already injected by OpenClaw. Pass it as a Bearer token in all API calls:

```
curl -s -H "Authorization: Bearer $GH_TOKEN" -H "Accept: application/vnd.github+json" ...
```

---

## Phase 1 — Parse Arguments

Parse the arguments string provided after /gh-issues.

Positional:

- owner/repo — optional. This is the source repo to fetch issues from. If omitted, detect it from the current git remote:
  `git remote get-url origin`
  Extract owner/repo from the URL (handles both HTTPS and SSH formats).
  - HTTPS: https://github.com/owner/repo.git → owner/repo
  - SSH: git@github.com:owner/repo.git → owner/repo

  If not in a git repo or no remote is found, stop with an error asking the user to specify owner/repo.
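The URL extraction can be sketched as a single substitution that handles both forms (a minimal sketch; the `url` value is illustrative):

```shell
# Strip the GitHub prefix (HTTPS or SSH) and the trailing .git to get owner/repo
url=$(git remote get-url origin 2>/dev/null || echo "git@github.com:owner/repo.git")
repo=$(printf '%s' "$url" | sed -E 's#^(https://github\.com/|git@github\.com:)##; s#\.git$#&#; s#\.git$##')
echo "$repo"
```

The same `sed` expression works for both URL shapes because the prefixes are alternated in one group.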

Flags (all optional):

| Flag | Default | Description |
| ---- | ------- | ----------- |
| `--label` | _(none)_ | Filter by label (e.g. `bug`, `enhancement`) |
| `--limit` | 10 | Max issues to fetch per poll |
| `--milestone` | _(none)_ | Filter by milestone title |
| `--assignee` | _(none)_ | Filter by assignee (`@me` for self) |
| `--state` | open | Issue state: open, closed, all |
| `--fork` | _(none)_ | Your fork (`user/repo`) to push branches and open PRs from. Issues are fetched from the source repo; code is pushed to the fork; PRs are opened from the fork to the source repo. |
| `--watch` | false | Keep polling for new issues and PR reviews after each batch |
| `--interval` | 5 | Minutes between polls (only with `--watch`) |
| `--dry-run` | false | Fetch and display only — no sub-agents |
| `--yes` | false | Skip confirmation and auto-process all filtered issues |
| `--reviews-only` | false | Skip issue processing (Phases 2-5). Only run Phase 6 — check open PRs for review comments and address them. |
| `--cron` | false | Cron-safe mode: fetch issues and spawn sub-agents, exit without waiting for results. |
| `--model` | _(none)_ | Model to use for sub-agents (e.g. `glm-5`, `zai/glm-5`). If not specified, uses the agent's default model. |
| `--notify-channel` | _(none)_ | Telegram channel ID to send the final PR summary to (e.g. -1002381931352). Only the final result with PR links is sent, not status updates. |

Store parsed values for use in subsequent phases.

Derived values:

- SOURCE_REPO = the positional owner/repo (where issues live)
- PUSH_REPO = the --fork value if provided, otherwise same as SOURCE_REPO
- FORK_MODE = true if --fork was provided, false otherwise

**If `--reviews-only` is set:** Skip directly to Phase 6. Run token resolution (from Phase 2) first, then jump to Phase 6.

**If `--cron` is set:**

- Force `--yes` (skip confirmation)
- If `--reviews-only` is also set, run token resolution then jump to Phase 6 (cron review mode)
- Otherwise, proceed normally through Phases 2-5 with cron-mode behavior active

---

## Phase 2 — Fetch Issues

**Token Resolution:**
First, ensure GH_TOKEN is available. Check the environment:

```
echo $GH_TOKEN
```

If empty, read from config:

```
cat ~/.openclaw/openclaw.json | jq -r '.skills.entries["gh-issues"].apiKey // empty'
```

If still empty, check `/data/.clawdbot/openclaw.json`:

```
cat /data/.clawdbot/openclaw.json | jq -r '.skills.entries["gh-issues"].apiKey // empty'
```

Export as GH_TOKEN for subsequent commands:

```
export GH_TOKEN="<token>"
```

Build and run a curl request to the GitHub Issues API via exec:

```
curl -s -H "Authorization: Bearer $GH_TOKEN" -H "Accept: application/vnd.github+json" \
  "https://api.github.com/repos/{SOURCE_REPO}/issues?per_page={limit}&state={state}&{query_params}"
```

Where {query_params} is built from:

- labels={label} if --label was provided
- milestone={milestone} if --milestone was provided (note: the API expects the milestone _number_, so if the user provides a title, first resolve it via GET /repos/{SOURCE_REPO}/milestones and match by title)
- assignee={assignee} if --assignee was provided (if @me, first resolve your username via `GET /user`)
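Assembling {query_params} from the optional flags can be sketched like this (variable names are illustrative; an empty value means the flag was not passed, and the milestone is assumed to be resolved to a number already):

```shell
LABEL="bug"; MILESTONE_NUMBER=""; ASSIGNEE=""   # example inputs
params="per_page=10&state=open"
[ -n "$LABEL" ] && params="${params}&labels=${LABEL}"
[ -n "$MILESTONE_NUMBER" ] && params="${params}&milestone=${MILESTONE_NUMBER}"
[ -n "$ASSIGNEE" ] && params="${params}&assignee=${ASSIGNEE}"
echo "$params"
```

Each flag contributes a query parameter only when set, so unset flags simply fall back to the API defaults.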

IMPORTANT: The GitHub Issues API also returns pull requests. Filter them out — exclude any item where a `pull_request` key exists in the response object.
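A jq filter for that exclusion, shown here against a tiny inline sample (the real input is the API response):

```shell
# Items with a "pull_request" key are PRs; keep only true issues
echo '[{"number":1,"title":"real issue"},{"number":2,"title":"a PR","pull_request":{}}]' \
  | jq '[.[] | select(has("pull_request") | not)]'
```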
|
||||
|
||||
If in watch mode: Also filter out any issue numbers already in the PROCESSED_ISSUES set from previous batches.
|
||||
|
||||
Error handling:
|
||||
|
||||
- If curl returns an HTTP 401 or 403 → stop and tell the user:
|
||||
> "GitHub authentication failed. Please check your apiKey in the OpenClaw dashboard or in ~/.openclaw/openclaw.json under skills.entries.gh-issues."
|
||||
- If the response is an empty array (after filtering) → report "No issues found matching filters" and stop (or loop back if in watch mode).
|
||||
- If curl fails or returns any other error → report the error verbatim and stop.
|
||||
|
||||
Parse the JSON response. For each issue, extract: number, title, body, labels (array of label names), assignees, html_url.
|
||||
|
||||
---
|
||||
|
||||
## Phase 3 — Present & Confirm
|
||||
|
||||
Display a markdown table of fetched issues:
|
||||
|
||||
| # | Title | Labels |
|
||||
| --- | ----------------------------- | ------------- |
|
||||
| 42 | Fix null pointer in parser | bug, critical |
|
||||
| 37 | Add retry logic for API calls | enhancement |
|
||||
|
||||
If FORK_MODE is active, also display:
|
||||
|
||||
> "Fork mode: branches will be pushed to {PUSH_REPO}, PRs will target `{SOURCE_REPO}`"
|
||||
|
||||
If `--dry-run` is active:
|
||||
|
||||
- Display the table and stop. Do not proceed to Phase 4.
|
||||
|
||||
If `--yes` is active:
|
||||
|
||||
- Display the table for visibility
|
||||
- Auto-process ALL listed issues without asking for confirmation
|
||||
- Proceed directly to Phase 4
|
||||
|
||||
Otherwise:
|
||||
Ask the user to confirm which issues to process:
|
||||
|
||||
- "all" — process every listed issue
|
||||
- Comma-separated numbers (e.g. `42, 37`) — process only those
|
||||
- "cancel" — abort entirely
|
||||
|
||||
Wait for user response before proceeding.
|
||||
|
||||
Watch mode note: On the first poll, always confirm with the user (unless --yes is set). On subsequent polls, auto-process all new issues without re-confirming (the user already opted in). Still display the table so they can see what's being processed.
|
||||
|
||||
---
|
||||
|
||||
## Phase 4 — Pre-flight Checks
|
||||
|
||||
Run these checks sequentially via exec:
|
||||
|
||||
1. **Dirty working tree check:**
|
||||
|
||||
```
|
||||
git status --porcelain
|
||||
```
|
||||
|
||||
If output is non-empty, warn the user:
|
||||
|
||||
> "Working tree has uncommitted changes. Sub-agents will create branches from HEAD — uncommitted changes will NOT be included. Continue?"
|
||||
> Wait for confirmation. If declined, stop.
|
||||
|
||||
2. **Record base branch:**
|
||||
|
||||
```
|
||||
git rev-parse --abbrev-ref HEAD
|
||||
```
|
||||
|
||||
Store as BASE_BRANCH.
|
||||
|
||||
3. **Verify remote access:**
|
||||
If FORK_MODE:
|
||||
- Verify the fork remote exists. Check if a git remote named `fork` exists:
|
||||
```
|
||||
git remote get-url fork
|
||||
```
|
||||
If it doesn't exist, add it:
|
||||
```
|
||||
git remote add fork https://x-access-token:$GH_TOKEN@github.com/{PUSH_REPO}.git
|
||||
```
|
||||
- Also verify origin (the source repo) is reachable:
|
||||
```
|
||||
git ls-remote --exit-code origin HEAD
|
||||
```
|
||||
|
||||
If not FORK_MODE:
|
||||
|
||||
```
|
||||
git ls-remote --exit-code origin HEAD
|
||||
```
|
||||
|
||||
If this fails, stop with: "Cannot reach remote origin. Check your network and git config."
|
||||
|
||||
4. **Verify GH_TOKEN validity:**
|
||||
|
||||
```
|
||||
curl -s -o /dev/null -w "%{http_code}" -H "Authorization: Bearer $GH_TOKEN" https://api.github.com/user
|
||||
```
|
||||
|
||||
If HTTP status is not 200, stop with:
|
||||
|
||||
> "GitHub authentication failed. Please check your apiKey in the OpenClaw dashboard or in ~/.openclaw/openclaw.json under skills.entries.gh-issues."
|
||||
|
||||
5. **Check for existing PRs:**
|
||||
For each confirmed issue number N, run:
|
||||
|
||||
```
|
||||
curl -s -H "Authorization: Bearer $GH_TOKEN" -H "Accept: application/vnd.github+json" \
|
||||
"https://api.github.com/repos/{SOURCE_REPO}/pulls?head={PUSH_REPO_OWNER}:fix/issue-{N}&state=open&per_page=1"
|
||||
```
|
||||
|
||||
(Where PUSH_REPO_OWNER is the owner portion of `PUSH_REPO`)
|
||||
If the response array is non-empty, remove that issue from the processing list and report:
|
||||
|
||||
> "Skipping #{N} — PR already exists: {html_url}"
|
||||
|
||||
If all issues are skipped, report and stop (or loop back if in watch mode).
|
||||
|
||||
6. **Check for in-progress branches (no PR yet = sub-agent still working):**
|
||||
For each remaining issue number N (not already skipped by the PR check above), check if a `fix/issue-{N}` branch exists on the **push repo** (which may be a fork, not origin):
|
||||
|
||||
```
|
||||
curl -s -o /dev/null -w "%{http_code}" \
|
||||
-H "Authorization: Bearer $GH_TOKEN" \
|
||||
"https://api.github.com/repos/{PUSH_REPO}/branches/fix/issue-{N}"
|
||||
```
|
||||
|
||||
If HTTP 200 → the branch exists on the push repo but no open PR was found for it in step 5. Skip that issue:
|
||||
|
||||
> "Skipping #{N} — branch fix/issue-{N} exists on {PUSH_REPO}, fix likely in progress"
|
||||
|
||||
This check uses the GitHub API instead of `git ls-remote` so it works correctly in fork mode (where branches are pushed to the fork, not origin).
|
||||
|
||||
If all issues are skipped after this check, report and stop (or loop back if in watch mode).
|
||||
|
||||
7. **Check claim-based in-progress tracking:**
|
||||
This prevents duplicate processing when a sub-agent from a previous cron run is still working but hasn't pushed a branch or opened a PR yet.
|
||||
|
||||
Read the claims file (create empty `{}` if missing):
|
||||
|
||||
```
|
||||
CLAIMS_FILE="/data/.clawdbot/gh-issues-claims.json"
|
||||
if [ ! -f "$CLAIMS_FILE" ]; then
|
||||
mkdir -p /data/.clawdbot
|
||||
echo '{}' > "$CLAIMS_FILE"
|
||||
fi
|
||||
```
|
||||
|
||||
Parse the claims file. For each entry, check if the claim timestamp is older than 2 hours. If so, remove it (expired — the sub-agent likely finished or failed silently). Write back the cleaned file:
|
||||
|
||||
```
|
||||
CLAIMS=$(cat "$CLAIMS_FILE")
|
||||
CUTOFF=$(date -u -d '2 hours ago' +%Y-%m-%dT%H:%M:%SZ 2>/dev/null || date -u -v-2H +%Y-%m-%dT%H:%M:%SZ)
|
||||
CLAIMS=$(echo "$CLAIMS" | jq --arg cutoff "$CUTOFF" 'to_entries | map(select(.value > $cutoff)) | from_entries')
|
||||
echo "$CLAIMS" > "$CLAIMS_FILE"
|
||||
```
|
||||
|
For each remaining issue number N (not already skipped by steps 5 or 6), check if `{SOURCE_REPO}#{N}` exists as a key in the claims file.

If claimed and not expired → skip:

> "Skipping #{N} — sub-agent claimed this issue {minutes}m ago, still within timeout window"

Where `{minutes}` is calculated from the claim timestamp to now.
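A minimal sketch of that age computation, assuming the ISO-8601 UTC timestamps this skill writes (`claim_age_minutes` is a hypothetical helper name; the GNU/BSD `date` fallback mirrors the cutoff snippet above):

```shell
# Hypothetical helper: minutes elapsed since an ISO-8601 UTC claim timestamp.
# GNU date first, BSD date as fallback (same pattern as the cutoff snippet).
claim_age_minutes() {
  claim_epoch=$(date -u -d "$1" +%s 2>/dev/null \
    || date -u -j -f "%Y-%m-%dT%H:%M:%SZ" "$1" +%s)
  now_epoch=$(date -u +%s)
  echo $(( (now_epoch - claim_epoch) / 60 ))
}
```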
If all issues are skipped after this check, report and stop (or loop back if in watch mode).

---

## Phase 5 — Spawn Sub-agents (Parallel)

**Cron mode (`--cron` is active):**

- **Sequential cursor tracking:** Use a cursor file to track which issue to process next:

```
CURSOR_FILE="/data/.clawdbot/gh-issues-cursor-{SOURCE_REPO_SLUG}.json"
# SOURCE_REPO_SLUG = owner-repo with slashes replaced by hyphens (e.g., openclaw-openclaw)
```
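The slug substitution can be sketched in one line (the repo value is illustrative):

```shell
# Derive the cursor-file slug by replacing slashes with hyphens.
SOURCE_REPO="openclaw/openclaw"   # illustrative value
SOURCE_REPO_SLUG=$(printf '%s' "$SOURCE_REPO" | tr '/' '-')
CURSOR_FILE="/data/.clawdbot/gh-issues-cursor-${SOURCE_REPO_SLUG}.json"
```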

Read the cursor file (create if missing):

```
if [ ! -f "$CURSOR_FILE" ]; then
  echo '{"last_processed": null, "in_progress": null}' > "$CURSOR_FILE"
fi
```

- `last_processed`: issue number of the last completed issue (or null if none)
- `in_progress`: issue number currently being processed (or null)

- **Select next issue:** Filter the fetched issues list to find the first issue where:
  - Issue number > last_processed (if last_processed is set)
  - AND issue is not in the claims file (not already in progress)
  - AND no PR exists for the issue (checked in Phase 4 step 5)
  - AND no branch exists on the push repo (checked in Phase 4 step 6)
  - If no eligible issue is found after the last_processed cursor, wrap around to the beginning (start from the oldest eligible issue).

- If an eligible issue is found:
  1. Mark it as in_progress in the cursor file
  2. Spawn a single sub-agent for that one issue with `cleanup: "keep"` and `runTimeoutSeconds: 3600`
  3. If `--model` was provided, include `model: "{MODEL}"` in the spawn config
  4. If `--notify-channel` was provided, include the channel in the task so the sub-agent can notify
  5. Do NOT await the sub-agent result — fire and forget
  6. **Write claim:** After spawning, read the claims file, add `{SOURCE_REPO}#{N}` with the current ISO timestamp, and write it back
  7. Immediately report: "Spawned fix agent for #{N} — will create PR when complete"
  8. Exit the skill. Do not proceed to Results Collection or Phase 6.

- If no eligible issue is found (all issues either have PRs, have branches, or are in progress), report "No eligible issues to process — all issues have PRs/branches or are in progress" and exit.
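The selection rule (first issue past the cursor, otherwise wrap to the oldest) can be sketched as a small shell function; `select_next_issue` is a hypothetical name, and the eligible numbers would come from the Phase 4 filters:

```shell
# First eligible issue number above last_processed; wrap to the oldest if none.
# Usage: select_next_issue <last_processed|null> <n1> <n2> ...
select_next_issue() {
  last="$1"; shift
  for n in $(printf '%s\n' "$@" | sort -n); do
    if [ "$last" = "null" ] || [ "$n" -gt "$last" ]; then
      echo "$n"; return
    fi
  done
  # Wrap around to the oldest eligible issue.
  printf '%s\n' "$@" | sort -n | head -n 1
}
```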

**Normal mode (`--cron` is NOT active):**
For each confirmed issue, spawn a sub-agent using sessions_spawn. Launch up to 8 concurrently (matching `subagents.maxConcurrent: 8`). If more than 8 issues, batch them — launch the next agent as each completes.

**Write claims:** After spawning each sub-agent, read the claims file, add `{SOURCE_REPO}#{N}` with the current ISO timestamp, and write it back (same procedure as cron mode above). This covers interactive usage where watch mode might overlap with cron runs.
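The claim write can be sketched with `jq` (the path and repo/issue values here are illustrative; the real file is the `CLAIMS_FILE` from Phase 4):

```shell
# Add {SOURCE_REPO}#{N} -> current ISO timestamp to the claims file.
CLAIMS_FILE="claims.json"          # illustrative path
echo '{}' > "$CLAIMS_FILE"
SOURCE_REPO="openclaw/openclaw"; N=42
NOW=$(date -u +%Y-%m-%dT%H:%M:%SZ)
jq --arg k "${SOURCE_REPO}#${N}" --arg t "$NOW" '.[$k] = $t' "$CLAIMS_FILE" \
  > "$CLAIMS_FILE.tmp" && mv "$CLAIMS_FILE.tmp" "$CLAIMS_FILE"
```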
### Sub-agent Task Prompt

For each issue, construct the following prompt and pass it to sessions_spawn. Variables to inject into the template:

- {SOURCE_REPO} — upstream repo where the issue lives
- {PUSH_REPO} — repo to push branches to (same as SOURCE_REPO unless fork mode)
- {FORK_MODE} — true/false
- {PUSH_REMOTE} — `fork` if FORK_MODE, otherwise `origin`
- {number}, {title}, {url}, {labels}, {body} — from the issue
- {BASE_BRANCH} — from Phase 4
- {notify_channel} — Telegram channel ID from the `--notify-channel` flag (empty string if not provided)

When constructing the task, replace all template variables, including {notify_channel}, with actual values.

```
You are a focused code-fix agent. Your task is to fix a single GitHub issue and open a PR.

IMPORTANT: Do NOT use the gh CLI — it is not installed. Use curl with the GitHub REST API for all GitHub operations.

First, ensure GH_TOKEN is set. Check: `echo $GH_TOKEN`. If empty, read from config:
GH_TOKEN=$(cat ~/.openclaw/openclaw.json 2>/dev/null | jq -r '.skills.entries["gh-issues"].apiKey // empty') || GH_TOKEN=$(cat /data/.clawdbot/openclaw.json 2>/dev/null | jq -r '.skills.entries["gh-issues"].apiKey // empty')

Use the token in all GitHub API calls:
curl -s -H "Authorization: Bearer $GH_TOKEN" -H "Accept: application/vnd.github+json" ...

<config>
Source repo (issues): {SOURCE_REPO}
Push repo (branches + PRs): {PUSH_REPO}
Fork mode: {FORK_MODE}
Push remote name: {PUSH_REMOTE}
Base branch: {BASE_BRANCH}
Notify channel: {notify_channel}
</config>

<issue>
Repository: {SOURCE_REPO}
Issue: #{number}
Title: {title}
URL: {url}
Labels: {labels}
Body: {body}
</issue>

<instructions>
Follow these steps in order. If any step fails, report the failure and stop.

0. SETUP — Ensure GH_TOKEN is available:
   ```
   export GH_TOKEN=$(node -e "const fs=require('fs'); const c=JSON.parse(fs.readFileSync('/data/.clawdbot/openclaw.json','utf8')); console.log(c.skills?.entries?.['gh-issues']?.apiKey || '')")
   ```
   If that fails, also try:
   ```
   export GH_TOKEN=$(cat ~/.openclaw/openclaw.json 2>/dev/null | node -e "const fs=require('fs');const d=JSON.parse(fs.readFileSync(0,'utf8'));console.log(d.skills?.entries?.['gh-issues']?.apiKey||'')")
   ```
   Verify: echo "Token: ${GH_TOKEN:0:10}..."

1. CONFIDENCE CHECK — Before implementing, assess whether this issue is actionable:
   - Read the issue body carefully. Is the problem clearly described?
   - Search the codebase (grep/find) for the relevant code. Can you locate it?
   - Is the scope reasonable? (single file/function = good, whole subsystem = bad)
   - Is a specific fix suggested, or is it a vague complaint?

   Rate your confidence (1-10). If confidence < 7, STOP and report:
   > "Skipping #{number}: Low confidence (score: N/10) — [reason: vague requirements | cannot locate code | scope too large | no clear fix suggested]"

   Only proceed if confidence >= 7.

2. UNDERSTAND — Read the issue carefully. Identify what needs to change and where.

3. BRANCH — Create a feature branch from the base branch:
   git checkout -b fix/issue-{number} {BASE_BRANCH}

4. ANALYZE — Search the codebase to find relevant files:
   - Use grep/find via exec to locate code related to the issue
   - Read the relevant files to understand the current behavior
   - Identify the root cause

5. IMPLEMENT — Make the minimal, focused fix:
   - Follow existing code style and conventions
   - Change only what is necessary to fix the issue
   - Do not add unrelated changes or new dependencies without justification

6. TEST — Discover and run the existing test suite if one exists:
   - Look for package.json scripts, Makefile targets, pytest, cargo test, etc.
   - Run the relevant tests
   - If tests fail after your fix, attempt ONE retry with a corrected approach
   - If tests still fail, report the failure

7. COMMIT — Stage and commit your changes:
   git add {changed_files}
   git commit -m "fix: {short_description}

   Fixes {SOURCE_REPO}#{number}"

8. PUSH — Push the branch:
   First, ensure the push remote uses token auth and disable credential helpers:
   git config --global credential.helper ""
   git remote set-url {PUSH_REMOTE} https://x-access-token:$GH_TOKEN@github.com/{PUSH_REPO}.git
   Then push:
   GIT_ASKPASS=true git push -u {PUSH_REMOTE} fix/issue-{number}

9. PR — Create a pull request using the GitHub API:

   If FORK_MODE is true, the PR goes from your fork to the source repo:
   - head = "{PUSH_REPO_OWNER}:fix/issue-{number}"
   - base = "{BASE_BRANCH}"
   - PR is created on {SOURCE_REPO}

   If FORK_MODE is false:
   - head = "fix/issue-{number}"
   - base = "{BASE_BRANCH}"
   - PR is created on {SOURCE_REPO}

   curl -s -X POST \
     -H "Authorization: Bearer $GH_TOKEN" \
     -H "Accept: application/vnd.github+json" \
     https://api.github.com/repos/{SOURCE_REPO}/pulls \
     -d '{
       "title": "fix: {title}",
       "head": "{head_value}",
       "base": "{BASE_BRANCH}",
       "body": "## Summary\n\n{one_paragraph_description_of_fix}\n\n## Changes\n\n{bullet_list_of_changes}\n\n## Testing\n\n{what_was_tested_and_results}\n\nFixes {SOURCE_REPO}#{number}"
     }'

   Extract the `html_url` from the response — this is the PR link.

10. REPORT — Send back a summary:
    - PR URL (the html_url from step 9)
    - Files changed (list)
    - Fix summary (1-2 sentences)
    - Any caveats or concerns

11. NOTIFY (if notify_channel is set) — If {notify_channel} is not empty, send a notification to the Telegram channel:
    ```
    Use the message tool with:

    - action: "send"
    - channel: "telegram"
    - target: "{notify_channel}"
    - message: "✅ PR Created: {SOURCE_REPO}#{number}

    {title}

    {pr_url}

    Files changed: {files_changed_list}"
    ```
</instructions>

<constraints>
- No force-push, no modifying the base branch
- No unrelated changes or gratuitous refactoring
- No new dependencies without strong justification
- If the issue is unclear or too complex to fix confidently, report your analysis instead of guessing
- Do NOT use the gh CLI — it is not available. Use curl + GitHub REST API for all GitHub operations.
- GH_TOKEN is already in the environment — do NOT prompt for auth
- Time limit: you have 60 minutes max. Be thorough — analyze properly, test your fix, don't rush.
</constraints>
```

### Spawn configuration per sub-agent:

- runTimeoutSeconds: 3600 (60 minutes)
- cleanup: "keep" (preserve transcripts for review)
- If `--model` was provided, include `model: "{MODEL}"` in the spawn config

### Timeout Handling

If a sub-agent exceeds 60 minutes, record it as:

> "#{N} — Timed out (issue may be too complex for auto-fix)"

---

## Results Collection

**If `--cron` is active:** Skip this section entirely — the orchestrator already exited after spawning in Phase 5.

After ALL sub-agents complete (or timeout), collect their results. Store the list of successfully opened PRs in `OPEN_PRS` (PR number, branch name, issue number, PR URL) for use in Phase 6.

Present a summary table:

| Issue                 | Status    | PR                             | Notes                          |
| --------------------- | --------- | ------------------------------ | ------------------------------ |
| #42 Fix null pointer  | PR opened | https://github.com/.../pull/99 | 3 files changed                |
| #37 Add retry logic   | Failed    | --                             | Could not identify target code |
| #15 Update docs       | Timed out | --                             | Too complex for auto-fix       |
| #8 Fix race condition | Skipped   | --                             | PR already exists              |

**Status values:**

- **PR opened** — success, link to PR
- **Failed** — sub-agent could not complete (include reason in Notes)
- **Timed out** — exceeded 60-minute limit
- **Skipped** — existing PR detected in pre-flight

End with a one-line summary:

> "Processed {N} issues: {success} PRs opened, {failed} failed, {skipped} skipped."

**Send notification to channel (if --notify-channel is set):**
If `--notify-channel` was provided, send the final summary to that Telegram channel using the `message` tool:

```
Use the message tool with:
- action: "send"
- channel: "telegram"
- target: "{notify_channel}"
- message: "✅ GitHub Issues Processed

Processed {N} issues: {success} PRs opened, {failed} failed, {skipped} skipped.

{PR_LIST}"

Where PR_LIST includes only successfully opened PRs in format:
• #{issue_number}: {PR_url} ({notes})
```

Then proceed to Phase 6.

---

## Phase 6 — PR Review Handler

This phase monitors open PRs (created by this skill or pre-existing `fix/issue-*` PRs) for review comments and spawns sub-agents to address them.

**When this phase runs:**

- After Results Collection (Phases 2-5 completed) — checks PRs that were just opened
- When `--reviews-only` flag is set — skips Phases 2-5 entirely, runs only this phase
- In watch mode — runs every poll cycle after checking for new issues

**Cron review mode (`--cron --reviews-only`):**
When both `--cron` and `--reviews-only` are set:

1. Run token resolution (Phase 2 token section)
2. Discover open `fix/issue-*` PRs (Step 6.1)
3. Fetch review comments (Step 6.2)
4. **Analyze comment content for actionability** (Step 6.3)
5. If actionable comments are found, spawn ONE review-fix sub-agent for the first PR with unaddressed comments — fire-and-forget (do NOT await result)
   - Use `cleanup: "keep"` and `runTimeoutSeconds: 3600`
   - If `--model` was provided, include `model: "{MODEL}"` in the spawn config
6. Report: "Spawned review handler for PR #{N} — will push fixes when complete"
7. Exit the skill immediately. Do not proceed to Step 6.5 (Review Results).

If no actionable comments found, report "No actionable review comments found" and exit.

**Normal mode (non-cron) continues below:**

### Step 6.1 — Discover PRs to Monitor

Collect PRs to check for review comments:

**If coming from Phase 5:** Use the `OPEN_PRS` list from Results Collection.

**If `--reviews-only` or subsequent watch cycle:** Fetch all open PRs with `fix/issue-` branch pattern:

```
curl -s -H "Authorization: Bearer $GH_TOKEN" -H "Accept: application/vnd.github+json" \
  "https://api.github.com/repos/{SOURCE_REPO}/pulls?state=open&per_page=100"
```

Filter to only PRs where `head.ref` starts with `fix/issue-`.

For each PR, extract: `number` (PR number), `head.ref` (branch name), `html_url`, `title`, `body`.

If no PRs found, report "No open fix/ PRs to monitor" and stop (or loop back if in watch mode).
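The `head.ref` filter can be sketched without jq as a plain prefix test (`is_fix_branch` is a hypothetical helper applied to each PR's branch name):

```shell
# True when a PR head ref matches the fix/issue-* branch pattern.
is_fix_branch() {
  case "$1" in
    fix/issue-*) return 0 ;;
    *) return 1 ;;
  esac
}
```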

### Step 6.2 — Fetch All Review Sources

For each PR, fetch reviews from multiple sources:

**Fetch PR reviews:**

```
curl -s -H "Authorization: Bearer $GH_TOKEN" -H "Accept: application/vnd.github+json" \
  "https://api.github.com/repos/{SOURCE_REPO}/pulls/{pr_number}/reviews"
```

**Fetch PR review comments (inline/file-level):**

```
curl -s -H "Authorization: Bearer $GH_TOKEN" -H "Accept: application/vnd.github+json" \
  "https://api.github.com/repos/{SOURCE_REPO}/pulls/{pr_number}/comments"
```

**Fetch PR issue comments (general conversation):**

```
curl -s -H "Authorization: Bearer $GH_TOKEN" -H "Accept: application/vnd.github+json" \
  "https://api.github.com/repos/{SOURCE_REPO}/issues/{pr_number}/comments"
```

**Fetch PR body for embedded reviews:**
Some review tools (like Greptile) embed their feedback directly in the PR body. Check for:

- `<!-- greptile_comment -->` markers
- Other structured review sections in the PR body

```
curl -s -H "Authorization: Bearer $GH_TOKEN" -H "Accept: application/vnd.github+json" \
  "https://api.github.com/repos/{SOURCE_REPO}/pulls/{pr_number}"
```

Extract the `body` field and parse for embedded review content.

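Detecting the embedded marker is a plain substring check; the PR body below is a stand-in for the `body` field pulled from the API:

```shell
# Flag PR bodies that carry an embedded Greptile-style review marker.
PR_BODY='Intro text
<!-- greptile_comment -->
Critical issue: this test needs to be updated'
if printf '%s' "$PR_BODY" | grep -q '<!-- greptile_comment -->'; then
  echo "embedded review found"
fi
```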
### Step 6.3 — Analyze Comments for Actionability

**Determine the bot's own username** for filtering:

```
curl -s -H "Authorization: Bearer $GH_TOKEN" https://api.github.com/user | jq -r '.login'
```

Store as `BOT_USERNAME`. Exclude any comment where `user.login` equals `BOT_USERNAME`.

**For each comment/review, analyze the content to determine if it requires action:**

**NOT actionable (skip):**

- Pure approvals or "LGTM" without suggestions
- Bot comments that are informational only (CI status, auto-generated summaries without specific requests)
- Comments already addressed (check if bot replied with "Addressed in commit...")
- Reviews with state `APPROVED` and no inline comments requesting changes

**IS actionable (requires attention):**

- Reviews with state `CHANGES_REQUESTED`
- Reviews with state `COMMENTED` that contain specific requests:
  - "this test needs to be updated"
  - "please fix", "change this", "update", "can you", "should be", "needs to"
  - "will fail", "will break", "causes an error"
  - Mentions of specific code issues (bugs, missing error handling, edge cases)
- Inline review comments pointing out issues in the code
- Embedded reviews in PR body that identify:
  - Critical issues or breaking changes
  - Test failures expected
  - Specific code that needs attention
  - Confidence scores with concerns

**Parse embedded review content (e.g., Greptile):**
Look for sections marked with `<!-- greptile_comment -->` or similar. Extract:

- Summary text
- Any mentions of "Critical issue", "needs attention", "will fail", "test needs to be updated"
- Confidence scores below 4/5 (indicates concerns)

**Build actionable_comments list** with:

- Source (review, inline comment, PR body, etc.)
- Author
- Body text
- For inline: file path and line number
- Specific action items identified

If no actionable comments found across any PR, report "No actionable review comments found" and stop (or loop back if in watch mode).
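A rough sketch of the keyword heuristics above (the phrase list is illustrative, not exhaustive; real classification should read the whole comment in context):

```shell
# Crude actionability check over a comment body (lowercased substring match).
is_actionable() {
  body=$(printf '%s' "$1" | tr '[:upper:]' '[:lower:]')
  case "$body" in
    *"please fix"*|*"change this"*|*"needs to"*|*"should be"*|*"will fail"*|*"will break"*|*"can you"*) return 0 ;;
    *) return 1 ;;
  esac
}
```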

### Step 6.4 — Present Review Comments

Display a table of PRs with pending actionable comments:

```
| PR | Branch | Actionable Comments | Sources |
|----|--------|---------------------|---------|
| #99 | fix/issue-42 | 2 comments | @reviewer1, greptile |
| #101 | fix/issue-37 | 1 comment | @reviewer2 |
```

If `--yes` is NOT set and this is not a subsequent watch poll: ask the user to confirm which PRs to address ("all", comma-separated PR numbers, or "skip").

### Step 6.5 — Spawn Review Fix Sub-agents (Parallel)

For each PR with actionable comments, spawn a sub-agent. Launch up to 8 concurrently.

**Review fix sub-agent prompt:**

```
You are a PR review handler agent. Your task is to address review comments on a pull request by making the requested changes, pushing updates, and replying to each comment.

IMPORTANT: Do NOT use the gh CLI — it is not installed. Use curl with the GitHub REST API for all GitHub operations.

First, ensure GH_TOKEN is set. Check: echo $GH_TOKEN. If empty, read from config:
GH_TOKEN=$(cat ~/.openclaw/openclaw.json 2>/dev/null | jq -r '.skills.entries["gh-issues"].apiKey // empty') || GH_TOKEN=$(cat /data/.clawdbot/openclaw.json 2>/dev/null | jq -r '.skills.entries["gh-issues"].apiKey // empty')

<config>
Repository: {SOURCE_REPO}
Push repo: {PUSH_REPO}
Fork mode: {FORK_MODE}
Push remote: {PUSH_REMOTE}
PR number: {pr_number}
PR URL: {pr_url}
Branch: {branch_name}
</config>

<review_comments>
{json_array_of_actionable_comments}

Each comment has:
- id: comment ID (for replying)
- user: who left it
- body: the comment text
- path: file path (for inline comments)
- line: line number (for inline comments)
- diff_hunk: surrounding diff context (for inline comments)
- source: where the comment came from (review, inline, pr_body, greptile, etc.)
</review_comments>

<instructions>
Follow these steps in order:

0. SETUP — Ensure GH_TOKEN is available:
   ```
   export GH_TOKEN=$(node -e "const fs=require('fs'); const c=JSON.parse(fs.readFileSync('/data/.clawdbot/openclaw.json','utf8')); console.log(c.skills?.entries?.['gh-issues']?.apiKey || '')")
   ```
   Verify: echo "Token: ${GH_TOKEN:0:10}..."

1. CHECKOUT — Switch to the PR branch:
   git fetch {PUSH_REMOTE} {branch_name}
   git checkout {branch_name}
   git pull {PUSH_REMOTE} {branch_name}

2. UNDERSTAND — Read ALL review comments carefully. Group them by file. Understand what each reviewer is asking for.

3. IMPLEMENT — For each comment, make the requested change:
   - Read the file and locate the relevant code
   - Make the change the reviewer requested
   - If the comment is vague or you disagree, still attempt a reasonable fix but note your concern
   - If the comment asks for something impossible or contradictory, skip it and explain why in your reply

4. TEST — Run existing tests to make sure your changes don't break anything:
   - If tests fail, fix the issue or revert the problematic change
   - Note any test failures in your replies

5. COMMIT — Stage and commit all changes in a single commit:
   git add {changed_files}
   git commit -m "fix: address review comments on PR #{pr_number}

   Addresses review feedback from {reviewer_names}"

6. PUSH — Push the updated branch:
   git config --global credential.helper ""
   git remote set-url {PUSH_REMOTE} https://x-access-token:$GH_TOKEN@github.com/{PUSH_REPO}.git
   GIT_ASKPASS=true git push {PUSH_REMOTE} {branch_name}

7. REPLY — For each addressed comment, post a reply:

   For inline review comments (have a path/line), reply to the comment thread:
   curl -s -X POST \
     -H "Authorization: Bearer $GH_TOKEN" \
     -H "Accept: application/vnd.github+json" \
     https://api.github.com/repos/{SOURCE_REPO}/pulls/{pr_number}/comments/{comment_id}/replies \
     -d '{"body": "Addressed in commit {short_sha} — {brief_description_of_change}"}'

   For general PR comments (issue comments), reply on the PR:
   curl -s -X POST \
     -H "Authorization: Bearer $GH_TOKEN" \
     -H "Accept: application/vnd.github+json" \
     https://api.github.com/repos/{SOURCE_REPO}/issues/{pr_number}/comments \
     -d '{"body": "Addressed feedback from @{reviewer}:\n\n{summary_of_changes_made}\n\nUpdated in commit {short_sha}"}'

   For comments you could NOT address, reply explaining why:
   "Unable to address this comment: {reason}. This may need manual review."

8. REPORT — Send back a summary:
   - PR URL
   - Number of comments addressed vs skipped
   - Commit SHA
   - Files changed
   - Any comments that need manual attention
</instructions>

<constraints>
- Only modify files relevant to the review comments
- Do not make unrelated changes
- Do not force-push — always regular push
- If a comment contradicts another comment, address the most recent one and flag the conflict
- Do NOT use the gh CLI — use curl + GitHub REST API
- GH_TOKEN is already in the environment — do not prompt for auth
- Time limit: 60 minutes max
</constraints>
```

**Spawn configuration per sub-agent:**

- runTimeoutSeconds: 3600 (60 minutes)
- cleanup: "keep" (preserve transcripts for review)
- If `--model` was provided, include `model: "{MODEL}"` in the spawn config

### Step 6.6 — Review Results

After all review sub-agents complete, present a summary:

```
| PR | Comments Addressed | Comments Skipped | Commit | Status |
|----|-------------------|-----------------|--------|--------|
| #99 fix/issue-42 | 3 | 0 | abc123f | All addressed |
| #101 fix/issue-37 | 1 | 1 | def456a | 1 needs manual review |
```

Add comment IDs from this batch to the `ADDRESSED_COMMENTS` set to prevent re-processing.

---

## Watch Mode (if --watch is active)

After presenting results from the current batch:

1. Add all issue numbers from this batch to the running set PROCESSED_ISSUES.
2. Add all addressed comment IDs to ADDRESSED_COMMENTS.
3. Tell the user:
   > "Next poll in {interval} minutes... (say 'stop' to end watch mode)"
4. Sleep for {interval} minutes.
5. Go back to **Phase 2 — Fetch Issues**. The fetch will automatically filter out:
   - Issues already in PROCESSED_ISSUES
   - Issues that have existing fix/issue-{N} PRs (caught in Phase 4 pre-flight)
6. After Phases 2-5 (or if no new issues), run **Phase 6** to check for new review comments on ALL tracked PRs (both newly created and previously opened).
7. If no new issues AND no new actionable review comments → report "No new activity. Polling again in {interval} minutes..." and loop back to step 4.
8. The user can say "stop" at any time to exit watch mode. When stopping, present a final cumulative summary of ALL batches — issues processed AND review comments addressed.

**Context hygiene between polls — IMPORTANT:**
Only retain between poll cycles:

- PROCESSED_ISSUES (set of issue numbers)
- ADDRESSED_COMMENTS (set of comment IDs)
- OPEN_PRS (list of tracked PRs: number, branch, URL)
- Cumulative results (one line per issue + one line per review batch)
- Parsed arguments from Phase 1
- BASE_BRANCH, SOURCE_REPO, PUSH_REPO, FORK_MODE, BOT_USERNAME

Do NOT retain issue bodies, comment bodies, sub-agent transcripts, or codebase analysis between polls.
79
openclaw/skills/gifgrep/SKILL.md
Normal file
@@ -0,0 +1,79 @@
---
name: gifgrep
description: Search GIF providers with CLI/TUI, download results, and extract stills/sheets.
homepage: https://gifgrep.com
metadata:
  {
    "openclaw":
      {
        "emoji": "🧲",
        "requires": { "bins": ["gifgrep"] },
        "install":
          [
            {
              "id": "brew",
              "kind": "brew",
              "formula": "steipete/tap/gifgrep",
              "bins": ["gifgrep"],
              "label": "Install gifgrep (brew)",
            },
            {
              "id": "go",
              "kind": "go",
              "module": "github.com/steipete/gifgrep/cmd/gifgrep@latest",
              "bins": ["gifgrep"],
              "label": "Install gifgrep (go)",
            },
          ],
      },
  }
---

# gifgrep

Use `gifgrep` to search GIF providers (Tenor/Giphy), browse in a TUI, download results, and extract stills or sheets.

GIF-Grab (gifgrep workflow)

- Search → preview → download → extract (still/sheet) for fast review and sharing.

Quick start

- `gifgrep cats --max 5`
- `gifgrep cats --format url | head -n 5`
- `gifgrep search --json cats | jq '.[0].url'`
- `gifgrep tui "office handshake"`
- `gifgrep cats --download --max 1 --format url`

TUI + previews

- TUI: `gifgrep tui "query"`
- CLI still previews: `--thumbs` (Kitty/Ghostty only; still frame)

Download + reveal

- `--download` saves to `~/Downloads`
- `--reveal` shows the last download in Finder

Stills + sheets

- `gifgrep still ./clip.gif --at 1.5s -o still.png`
- `gifgrep sheet ./clip.gif --frames 9 --cols 3 -o sheet.png`
- Sheets = single PNG grid of sampled frames (great for quick review, docs, PRs, chat).
- Tune: `--frames` (count), `--cols` (grid width), `--padding` (spacing).

Providers

- `--source auto|tenor|giphy`
- `GIPHY_API_KEY` required for `--source giphy`
- `TENOR_API_KEY` optional (Tenor demo key used if unset)

Output

- `--json` prints an array of results (`id`, `title`, `url`, `preview_url`, `tags`, `width`, `height`)
- `--format` for pipe-friendly fields (e.g., `url`)

Environment tweaks

- `GIFGREP_SOFTWARE_ANIM=1` to force software animation
- `GIFGREP_CELL_ASPECT=0.5` to tweak preview geometry
163
openclaw/skills/github/SKILL.md
Normal file
@@ -0,0 +1,163 @@
---
name: github
description: "GitHub operations via `gh` CLI: issues, PRs, CI runs, code review, API queries. Use when: (1) checking PR status or CI, (2) creating/commenting on issues, (3) listing/filtering PRs or issues, (4) viewing run logs. NOT for: complex web UI interactions requiring manual browser flows (use browser tooling when available), bulk operations across many repos (script with gh api), or when gh auth is not configured."
metadata:
  {
    "openclaw":
      {
        "emoji": "🐙",
        "requires": { "bins": ["gh"] },
        "install":
          [
            {
              "id": "brew",
              "kind": "brew",
              "formula": "gh",
              "bins": ["gh"],
              "label": "Install GitHub CLI (brew)",
            },
            {
              "id": "apt",
              "kind": "apt",
              "package": "gh",
              "bins": ["gh"],
              "label": "Install GitHub CLI (apt)",
            },
          ],
      },
  }
---

# GitHub Skill

Use the `gh` CLI to interact with GitHub repositories, issues, PRs, and CI.

## When to Use

✅ **USE this skill when:**

- Checking PR status, reviews, or merge readiness
- Viewing CI/workflow run status and logs
- Creating, closing, or commenting on issues
- Creating or merging pull requests
- Querying GitHub API for repository data
- Listing repos, releases, or collaborators

## When NOT to Use

❌ **DON'T use this skill when:**

- Local git operations (commit, push, pull, branch) → use `git` directly
- Non-GitHub repos (GitLab, Bitbucket, self-hosted) → different CLIs
- Cloning repositories → use `git clone`
- Reviewing actual code changes → use `coding-agent` skill
- Complex multi-file diffs → use `coding-agent` or read files directly

## Setup

```bash
# Authenticate (one-time)
gh auth login

# Verify
gh auth status
```

## Common Commands

### Pull Requests

```bash
# List PRs
gh pr list --repo owner/repo

# Check CI status
gh pr checks 55 --repo owner/repo

# View PR details
gh pr view 55 --repo owner/repo

# Create PR
gh pr create --title "feat: add feature" --body "Description"

# Merge PR
gh pr merge 55 --squash --repo owner/repo
```
|
||||
### Issues
|
||||
|
||||
```bash
|
||||
# List issues
|
||||
gh issue list --repo owner/repo --state open
|
||||
|
||||
# Create issue
|
||||
gh issue create --title "Bug: something broken" --body "Details..."
|
||||
|
||||
# Close issue
|
||||
gh issue close 42 --repo owner/repo
|
||||
```
|
||||
|
||||
### CI/Workflow Runs
|
||||
|
||||
```bash
|
||||
# List recent runs
|
||||
gh run list --repo owner/repo --limit 10
|
||||
|
||||
# View specific run
|
||||
gh run view <run-id> --repo owner/repo
|
||||
|
||||
# View failed step logs only
|
||||
gh run view <run-id> --repo owner/repo --log-failed
|
||||
|
||||
# Re-run failed jobs
|
||||
gh run rerun <run-id> --failed --repo owner/repo
|
||||
```
|
||||
|
||||
### API Queries
|
||||
|
||||
```bash
|
||||
# Get PR with specific fields
|
||||
gh api repos/owner/repo/pulls/55 --jq '.title, .state, .user.login'
|
||||
|
||||
# List all labels
|
||||
gh api repos/owner/repo/labels --jq '.[].name'
|
||||
|
||||
# Get repo stats
|
||||
gh api repos/owner/repo --jq '{stars: .stargazers_count, forks: .forks_count}'
|
||||
```
|
||||
|
||||
## JSON Output
|
||||
|
||||
Most commands support `--json` for structured output with `--jq` filtering:
|
||||
|
||||
```bash
|
||||
gh issue list --repo owner/repo --json number,title --jq '.[] | "\(.number): \(.title)"'
|
||||
gh pr list --json number,title,state,mergeable --jq '.[] | select(.mergeable == "MERGEABLE")'
|
||||
```
|
||||
|
||||
## Templates
|
||||
|
||||
### PR Review Summary
|
||||
|
||||
```bash
|
||||
# Get PR overview for review
|
||||
PR=55 REPO=owner/repo
|
||||
echo "## PR #$PR Summary"
|
||||
gh pr view $PR --repo $REPO --json title,body,author,additions,deletions,changedFiles \
|
||||
--jq '"**\(.title)** by @\(.author.login)\n\n\(.body)\n\n📊 +\(.additions) -\(.deletions) across \(.changedFiles) files"'
|
||||
gh pr checks $PR --repo $REPO
|
||||
```
|
||||
|
||||
### Issue Triage
|
||||
|
||||
```bash
|
||||
# Quick issue triage view
|
||||
gh issue list --repo owner/repo --state open --json number,title,labels,createdAt \
|
||||
--jq '.[] | "[\(.number)] \(.title) - \([.labels[].name] | join(", ")) (\(.createdAt[:10]))"'
|
||||
```
|
||||
|
||||
## Notes
|
||||
|
||||
- Always specify `--repo owner/repo` when not in a git directory
|
||||
- Use URLs directly: `gh pr view https://github.com/owner/repo/pull/55`
|
||||
- Rate limits apply; use `gh api --cache 1h` for repeated queries
|
||||
116
openclaw/skills/gog/SKILL.md
Normal file
@@ -0,0 +1,116 @@
---
name: gog
description: Google Workspace CLI for Gmail, Calendar, Drive, Contacts, Sheets, and Docs.
homepage: https://gogcli.sh
metadata:
  {
    "openclaw":
      {
        "emoji": "🎮",
        "requires": { "bins": ["gog"] },
        "install":
          [
            {
              "id": "brew",
              "kind": "brew",
              "formula": "steipete/tap/gogcli",
              "bins": ["gog"],
              "label": "Install gog (brew)",
            },
          ],
      },
  }
---

# gog

Use `gog` for Gmail/Calendar/Drive/Contacts/Sheets/Docs. Requires OAuth setup.

Setup (once)

- `gog auth credentials /path/to/client_secret.json`
- `gog auth add you@gmail.com --services gmail,calendar,drive,contacts,docs,sheets`
- `gog auth list`

Common commands

- Gmail search: `gog gmail search 'newer_than:7d' --max 10`
- Gmail messages search (per email, ignores threading): `gog gmail messages search "in:inbox from:ryanair.com" --max 20 --account you@example.com`
- Gmail send (plain): `gog gmail send --to a@b.com --subject "Hi" --body "Hello"`
- Gmail send (multi-line): `gog gmail send --to a@b.com --subject "Hi" --body-file ./message.txt`
- Gmail send (stdin): `gog gmail send --to a@b.com --subject "Hi" --body-file -`
- Gmail send (HTML): `gog gmail send --to a@b.com --subject "Hi" --body-html "<p>Hello</p>"`
- Gmail draft: `gog gmail drafts create --to a@b.com --subject "Hi" --body-file ./message.txt`
- Gmail send draft: `gog gmail drafts send <draftId>`
- Gmail reply: `gog gmail send --to a@b.com --subject "Re: Hi" --body "Reply" --reply-to-message-id <msgId>`
- Calendar list events: `gog calendar events <calendarId> --from <iso> --to <iso>`
- Calendar create event: `gog calendar create <calendarId> --summary "Title" --from <iso> --to <iso>`
- Calendar create with color: `gog calendar create <calendarId> --summary "Title" --from <iso> --to <iso> --event-color 7`
- Calendar update event: `gog calendar update <calendarId> <eventId> --summary "New Title" --event-color 4`
- Calendar show colors: `gog calendar colors`
- Drive search: `gog drive search "query" --max 10`
- Contacts: `gog contacts list --max 20`
- Sheets get: `gog sheets get <sheetId> "Tab!A1:D10" --json`
- Sheets update: `gog sheets update <sheetId> "Tab!A1:B2" --values-json '[["A","B"],["1","2"]]' --input USER_ENTERED`
- Sheets append: `gog sheets append <sheetId> "Tab!A:C" --values-json '[["x","y","z"]]' --insert INSERT_ROWS`
- Sheets clear: `gog sheets clear <sheetId> "Tab!A2:Z"`
- Sheets metadata: `gog sheets metadata <sheetId> --json`
- Docs export: `gog docs export <docId> --format txt --out /tmp/doc.txt`
- Docs cat: `gog docs cat <docId>`

Calendar Colors

- Use `gog calendar colors` to see all available event colors (IDs 1-11)
- Add colors to events with the `--event-color <id>` flag
- Event color IDs (from `gog calendar colors` output):
  - 1: #a4bdfc
  - 2: #7ae7bf
  - 3: #dbadff
  - 4: #ff887c
  - 5: #fbd75b
  - 6: #ffb878
  - 7: #46d6db
  - 8: #e1e1e1
  - 9: #5484ed
  - 10: #51b749
  - 11: #dc2127

Email Formatting

- Prefer plain text. Use `--body-file` for multi-paragraph messages (or `--body-file -` for stdin).
- The same `--body-file` pattern works for drafts and replies.
- `--body` does not unescape `\n`. If you need inline newlines, use a heredoc or `$'Line 1\n\nLine 2'`.
- Use `--body-html` only when you need rich formatting.
- HTML tags: `<p>` for paragraphs, `<br>` for line breaks, `<strong>` for bold, `<em>` for italic, `<a href="url">` for links, `<ul>`/`<li>` for lists.
- Example (plain text via stdin):

```bash
gog gmail send --to recipient@example.com \
  --subject "Meeting Follow-up" \
  --body-file - <<'EOF'
Hi Name,

Thanks for meeting today. Next steps:
- Item one
- Item two

Best regards,
Your Name
EOF
```

- Example (HTML list):

```bash
gog gmail send --to recipient@example.com \
  --subject "Meeting Follow-up" \
  --body-html "<p>Hi Name,</p><p>Thanks for meeting today. Here are the next steps:</p><ul><li>Item one</li><li>Item two</li></ul><p>Best regards,<br>Your Name</p>"
```
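
The `\n` caveat above is plain shell behavior and easy to verify: double quotes keep `\n` as two literal characters, while `$'…'` (ANSI-C quoting) turns it into a real newline:

```bash
literal="Line 1\nLine 2"      # backslash + n, no real newline
expanded=$'Line 1\n\nLine 2'  # two real newlines (blank line between paragraphs)

printf '%s' "$literal" | wc -l   # counts 0 newline characters
printf '%s' "$expanded" | wc -l  # counts 2 newline characters
```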

Notes

- Set `GOG_ACCOUNT=you@gmail.com` to avoid repeating `--account`.
- For scripting, prefer `--json` plus `--no-input`.
- Sheets values can be passed via `--values-json` (recommended) or as inline rows.
- Docs supports export/cat/copy. In-place edits require a Docs API client (not in gog).
- Confirm before sending mail or creating events.
- `gog gmail search` returns one row per thread; use `gog gmail messages search` when you need every individual email returned separately.
52
openclaw/skills/goplaces/SKILL.md
Normal file
@@ -0,0 +1,52 @@
---
name: goplaces
description: Query Google Places API (New) via the goplaces CLI for text search, place details, resolve, and reviews. Use for human-friendly place lookup or JSON output for scripts.
homepage: https://github.com/steipete/goplaces
metadata:
  {
    "openclaw":
      {
        "emoji": "📍",
        "requires": { "bins": ["goplaces"], "env": ["GOOGLE_PLACES_API_KEY"] },
        "primaryEnv": "GOOGLE_PLACES_API_KEY",
        "install":
          [
            {
              "id": "brew",
              "kind": "brew",
              "formula": "steipete/tap/goplaces",
              "bins": ["goplaces"],
              "label": "Install goplaces (brew)",
            },
          ],
      },
  }
---

# goplaces

Modern Google Places API (New) CLI. Human output by default, `--json` for scripts.

Install

- Homebrew: `brew install steipete/tap/goplaces`

Config

- `GOOGLE_PLACES_API_KEY` required.
- Optional: `GOOGLE_PLACES_BASE_URL` for testing/proxying.

Common commands

- Search: `goplaces search "coffee" --open-now --min-rating 4 --limit 5`
- Bias: `goplaces search "pizza" --lat 40.8 --lng -73.9 --radius-m 3000`
- Pagination: `goplaces search "pizza" --page-token "NEXT_PAGE_TOKEN"`
- Resolve: `goplaces resolve "Soho, London" --limit 5`
- Details: `goplaces details <place_id> --reviews`
- JSON: `goplaces search "sushi" --json`

Notes

- `--no-color` or `NO_COLOR` disables ANSI color.
- Price levels: 0..4 (free → very expensive).
- The type filter sends only the first `--type` value (the API accepts one).
245
openclaw/skills/healthcheck/SKILL.md
Normal file
@@ -0,0 +1,245 @@
---
name: healthcheck
description: Host security hardening and risk-tolerance configuration for OpenClaw deployments. Use when a user asks for security audits, firewall/SSH/update hardening, risk posture, exposure review, OpenClaw cron scheduling for periodic checks, or version status checks on a machine running OpenClaw (laptop, workstation, Pi, VPS).
---

# OpenClaw Host Hardening

## Overview

Assess and harden the host running OpenClaw, then align it to a user-defined risk tolerance without breaking access. Use OpenClaw security tooling as a first-class signal, but treat OS hardening as a separate, explicit set of steps.

## Core rules

- Recommend running this skill with a state-of-the-art model (e.g., Opus 4.5, GPT 5.2+). The agent should self-check the current model and suggest switching if below that level; do not block execution.
- Require explicit approval before any state-changing action.
- Do not modify remote access settings without confirming how the user connects.
- Prefer reversible, staged changes with a rollback plan.
- Never claim OpenClaw changes the host firewall, SSH, or OS updates; it does not.
- If role/identity is unknown, provide recommendations only.
- Formatting: every set of user choices must be numbered so the user can reply with a single digit.
- System-level backups are recommended; try to verify their status.

## Workflow (follow in order)

### 0) Model self-check (non-blocking)

Before starting, check the current model. If it is below state-of-the-art (e.g., Opus 4.5, GPT 5.2+), recommend switching. Do not block execution.

### 1) Establish context (read-only)

Try to infer items 1–5 from the environment before asking. Prefer simple, non-technical questions if you need confirmation.

Determine (in order):

1. OS and version (Linux/macOS/Windows), container vs host.
2. Privilege level (root/admin vs user).
3. Access path (local console, SSH, RDP, tailnet).
4. Network exposure (public IP, reverse proxy, tunnel).
5. OpenClaw gateway status and bind address.
6. Backup system and status (e.g., Time Machine, system images, snapshots).
7. Deployment context (local mac app, headless gateway host, remote gateway, container/CI).
8. Disk encryption status (FileVault/LUKS/BitLocker).
9. OS automatic security updates status.
   Note: these are not blocking items, but they are highly recommended, especially if OpenClaw can access sensitive data.
10. Usage mode for a personal assistant with full access (local workstation vs headless/remote vs other).

First ask once for permission to run read-only checks. If granted, run them by default and only ask questions for items you cannot infer or verify. Do not ask for information already visible in runtime or command output. Keep the permission ask to a single sentence, and list any follow-up info needed as an unordered list (not numbered) unless you are presenting selectable choices.

If you must ask, use non-technical prompts:

- “Are you using a Mac, Windows PC, or Linux?”
- “Are you logged in directly on the machine, or connecting from another computer?”
- “Is this machine reachable from the public internet, or only on your home/network?”
- “Do you have backups enabled (e.g., Time Machine), and are they current?”
- “Is disk encryption turned on (FileVault/BitLocker/LUKS)?”
- “Are automatic security updates enabled?”
- “How do you use this machine?”
  Examples:
  - Personal machine shared with the assistant
  - Dedicated local machine for the assistant
  - Dedicated remote machine/server accessed remotely (always on)
  - Something else?

Only ask for the risk profile after system context is known.

If the user grants read-only permission, run the OS-appropriate checks by default. If not, offer them (numbered). Examples:

1. OS: `uname -a`, `sw_vers`, `cat /etc/os-release`.
2. Listening ports:
   - Linux: `ss -ltnup` (or `ss -ltnp` if `-u` is unsupported).
   - macOS: `lsof -nP -iTCP -sTCP:LISTEN`.
3. Firewall status:
   - Linux: `ufw status`, `firewall-cmd --state`, `nft list ruleset` (pick what is installed).
   - macOS: `/usr/libexec/ApplicationFirewall/socketfilterfw --getglobalstate` and `pfctl -s info`.
4. Backups (macOS): `tmutil status` (if Time Machine is used).
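
The listening-ports check can be wrapped in a small OS dispatch so the same step works on Linux and macOS (a sketch; it assumes `ss` on Linux and `lsof` on macOS, as listed above):

```bash
# Pick the right listening-sockets command for the current OS.
list_listeners() {
  if command -v ss >/dev/null 2>&1; then
    ss -ltnp                        # Linux
  elif command -v lsof >/dev/null 2>&1; then
    lsof -nP -iTCP -sTCP:LISTEN     # macOS
  else
    echo "no ss or lsof available" >&2
    return 1
  fi
}
```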

### 2) Run OpenClaw security audits (read-only)

As part of the default read-only checks, run `openclaw security audit --deep`. Only offer alternatives if the user requests them:

1. `openclaw security audit` (faster, non-probing)
2. `openclaw security audit --json` (structured output)

Offer to apply OpenClaw safe defaults (numbered):

1. `openclaw security audit --fix`

Be explicit that `--fix` only tightens OpenClaw defaults and file permissions. It does not change host firewall, SSH, or OS update policies.

If browser control is enabled, recommend that 2FA be enabled on all important accounts, with hardware keys preferred; SMS is not sufficient.

### 3) Check OpenClaw version/update status (read-only)

As part of the default read-only checks, run `openclaw update status`.

Report the current channel and whether an update is available.

### 4) Determine risk tolerance (after system context)

Ask the user to pick or confirm a risk posture and any required open services/ports (numbered choices below).
Do not pigeonhole the user into fixed profiles; if the user prefers, capture requirements instead of choosing a profile.
Offer suggested profiles as optional defaults (numbered). Note that most users pick Home/Workstation Balanced:

1. Home/Workstation Balanced (most common): firewall on with reasonable defaults, remote access restricted to LAN or tailnet.
2. VPS Hardened: deny-by-default inbound firewall, minimal open ports, key-only SSH, no root login, automatic security updates.
3. Developer Convenience: more local services allowed, explicit exposure warnings, still audited.
4. Custom: user-defined constraints (services, exposure, update cadence, access methods).

### 5) Produce a remediation plan

Provide a plan that includes:

- Target profile
- Current posture summary
- Gaps vs target
- Step-by-step remediation with exact commands
- Access-preservation strategy and rollback
- Risks and potential lockout scenarios
- Least-privilege notes (e.g., avoid admin usage, tighten ownership/permissions where safe)
- Credential hygiene notes (location of OpenClaw creds, prefer disk encryption)

Always show the plan before any changes.

### 6) Offer execution options

Offer one of these choices (numbered so users can reply with a single digit):

1. Do it for me (guided, step-by-step approvals)
2. Show plan only
3. Fix only critical issues
4. Export commands for later

### 7) Execute with confirmations

For each step:

- Show the exact command
- Explain impact and rollback
- Confirm access will remain available
- Stop on unexpected output and ask for guidance

### 8) Verify and report

Re-check:

- Firewall status
- Listening ports
- Remote access still works
- OpenClaw security audit (re-run)

Deliver a final posture report and note any deferred items.

## Required confirmations (always)

Require explicit approval for:

- Firewall rule changes
- Opening/closing ports
- SSH/RDP configuration changes
- Installing/removing packages
- Enabling/disabling services
- User/group modifications
- Scheduling tasks or startup persistence
- Update policy changes
- Access to sensitive files or credentials

If unsure, ask.

## Periodic checks

After OpenClaw install or the first hardening pass, run at least one baseline audit and version check:

- `openclaw security audit`
- `openclaw security audit --deep`
- `openclaw update status`

Ongoing monitoring is recommended. Use the OpenClaw cron tool/CLI to schedule periodic audits (Gateway scheduler). Do not create scheduled tasks without explicit approval. Store outputs in a user-approved location and avoid secrets in logs.
When scheduling headless cron runs, include a note in the output that instructs the user to call `healthcheck` so issues can be fixed.

### Required prompt to schedule (always)

After any audit or hardening pass, explicitly offer scheduling and require a direct response. Use a short prompt like (numbered):

1. “Do you want me to schedule periodic audits (e.g., daily/weekly) via `openclaw cron add`?”

If the user says yes, ask for:

- cadence (daily/weekly), preferred time window, and output location
- whether to also schedule `openclaw update status`

Use a stable cron job name so updates are deterministic. Prefer exact names:

- `healthcheck:security-audit`
- `healthcheck:update-status`

Before creating, run `openclaw cron list` and match on the exact `name`. If found, run `openclaw cron edit <id> ...`.
If not found, run `openclaw cron add --name <name> ...`.
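
The list-then-edit-or-add flow can be sketched as a tiny upsert helper. This is hypothetical glue: it assumes `openclaw cron list` prints an id followed by the job name on each line, which you should verify against the actual output format before using it:

```bash
# Upsert a cron job by exact name: edit if it exists, add otherwise.
# ASSUMPTION: `openclaw cron list` emits "<id> <name>" per line.
upsert_cron() {
  local name="$1"; shift
  local id
  id=$(openclaw cron list | awk -v n="$name" '$2 == n { print $1; exit }')
  if [ -n "$id" ]; then
    openclaw cron edit "$id" "$@"
  else
    openclaw cron add --name "$name" "$@"
  fi
}
```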

Also offer a periodic version check so the user can decide when to update (numbered):

1. `openclaw update status` (preferred for source checkouts and channels)
2. `npm view openclaw version` (published npm version)

## OpenClaw command accuracy

Use only supported commands and flags:

- `openclaw security audit [--deep] [--fix] [--json]`
- `openclaw status` / `openclaw status --deep`
- `openclaw health --json`
- `openclaw update status`
- `openclaw cron add|edit|list|runs|run`

Do not invent CLI flags or imply OpenClaw enforces host firewall/SSH policies.

## Logging and audit trail

Record:

- Gateway identity and role
- Plan ID and timestamp
- Approved steps and exact commands
- Exit codes and files modified (best effort)

Redact secrets. Never log tokens or full credential contents.

## Memory writes (conditional)

Only write to memory files when the user explicitly opts in and the session is a private/local workspace
(per `docs/reference/templates/AGENTS.md`). Otherwise provide a redacted, paste-ready summary the user can
decide to save elsewhere.

Follow the durable-memory prompt format used by OpenClaw compaction:

- Write lasting notes to `memory/YYYY-MM-DD.md`.

After each audit/hardening run, if opted in, append a short, dated summary to `memory/YYYY-MM-DD.md`
(what was checked, key findings, actions taken, any scheduled cron jobs, key decisions,
and all commands executed). Append-only: never overwrite existing entries.
Redact sensitive host details (usernames, hostnames, IPs, serials, service names, tokens).
If there are durable preferences or decisions (risk posture, allowed ports, update policy),
also update `MEMORY.md` (long-term memory is optional and only used in private sessions).

If the session cannot write to the workspace, ask for permission or provide exact entries
the user can paste into the memory files.
257
openclaw/skills/himalaya/SKILL.md
Normal file
@@ -0,0 +1,257 @@
---
name: himalaya
description: "CLI to manage emails via IMAP/SMTP. Use `himalaya` to list, read, write, reply, forward, search, and organize emails from the terminal. Supports multiple accounts and message composition with MML (MIME Meta Language)."
homepage: https://github.com/pimalaya/himalaya
metadata:
  {
    "openclaw":
      {
        "emoji": "📧",
        "requires": { "bins": ["himalaya"] },
        "install":
          [
            {
              "id": "brew",
              "kind": "brew",
              "formula": "himalaya",
              "bins": ["himalaya"],
              "label": "Install Himalaya (brew)",
            },
          ],
      },
  }
---

# Himalaya Email CLI

Himalaya is a CLI email client that lets you manage emails from the terminal using IMAP, SMTP, Notmuch, or Sendmail backends.

## References

- `references/configuration.md` (config file setup + IMAP/SMTP authentication)
- `references/message-composition.md` (MML syntax for composing emails)

## Prerequisites

1. Himalaya CLI installed (`himalaya --version` to verify)
2. A configuration file at `~/.config/himalaya/config.toml`
3. IMAP/SMTP credentials configured (password stored securely)

## Configuration Setup

Run the interactive wizard to set up an account:

```bash
himalaya account configure
```

Or create `~/.config/himalaya/config.toml` manually:

```toml
[accounts.personal]
email = "you@example.com"
display-name = "Your Name"
default = true

backend.type = "imap"
backend.host = "imap.example.com"
backend.port = 993
backend.encryption.type = "tls"
backend.login = "you@example.com"
backend.auth.type = "password"
backend.auth.cmd = "pass show email/imap" # or use keyring

message.send.backend.type = "smtp"
message.send.backend.host = "smtp.example.com"
message.send.backend.port = 587
message.send.backend.encryption.type = "start-tls"
message.send.backend.login = "you@example.com"
message.send.backend.auth.type = "password"
message.send.backend.auth.cmd = "pass show email/smtp"
```

## Common Operations

### List Folders

```bash
himalaya folder list
```

### List Emails

List emails in INBOX (default):

```bash
himalaya envelope list
```

List emails in a specific folder:

```bash
himalaya envelope list --folder "Sent"
```

List with pagination:

```bash
himalaya envelope list --page 1 --page-size 20
```

### Search Emails

```bash
himalaya envelope list from john@example.com subject meeting
```

### Read an Email

Read an email by ID (shows plain text):

```bash
himalaya message read 42
```

Export raw MIME:

```bash
himalaya message export 42 --full
```

### Reply to an Email

Interactive reply (opens $EDITOR):

```bash
himalaya message reply 42
```

Reply-all:

```bash
himalaya message reply 42 --all
```

### Forward an Email

```bash
himalaya message forward 42
```

### Write a New Email

Interactive compose (opens $EDITOR):

```bash
himalaya message write
```

Send directly using a template:

```bash
cat << 'EOF' | himalaya template send
From: you@example.com
To: recipient@example.com
Subject: Test Message

Hello from Himalaya!
EOF
```

Or with header flags:

```bash
himalaya message write -H "To:recipient@example.com" -H "Subject:Test" "Message body here"
```

### Move/Copy Emails

Move to a folder:

```bash
himalaya message move 42 "Archive"
```

Copy to a folder:

```bash
himalaya message copy 42 "Important"
```

### Delete an Email

```bash
himalaya message delete 42
```

### Manage Flags

Add a flag:

```bash
himalaya flag add 42 --flag seen
```

Remove a flag:

```bash
himalaya flag remove 42 --flag seen
```

## Multiple Accounts

List accounts:

```bash
himalaya account list
```

Use a specific account:

```bash
himalaya --account work envelope list
```

## Attachments

Save attachments from a message:

```bash
himalaya attachment download 42
```

Save to a specific directory:

```bash
himalaya attachment download 42 --dir ~/Downloads
```

## Output Formats

Most commands support `--output` for structured output:

```bash
himalaya envelope list --output json
himalaya envelope list --output plain
```
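
JSON output makes scripting straightforward. A hypothetical example counting unseen messages; the exact JSON shape (a top-level array with a `flags` list per envelope) varies by Himalaya version, so check `himalaya envelope list --output json` on your setup first:

```bash
# Count envelopes whose flags do not include "Seen" (field names assumed).
unread_count() {
  himalaya envelope list --output json \
    | python3 -c 'import json, sys; print(sum(1 for e in json.load(sys.stdin) if "Seen" not in e.get("flags", [])))'
}
```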

## Debugging

Enable debug logging:

```bash
RUST_LOG=debug himalaya envelope list
```

Full trace with backtrace:

```bash
RUST_LOG=trace RUST_BACKTRACE=1 himalaya envelope list
```

## Tips

- Use `himalaya --help` or `himalaya <command> --help` for detailed usage.
- Message IDs are relative to the current folder; re-list after folder changes.
- For composing rich emails with attachments, use MML syntax (see `references/message-composition.md`).
- Store passwords securely using `pass`, the system keyring, or a command that outputs the password.
184
openclaw/skills/himalaya/references/configuration.md
Normal file
@@ -0,0 +1,184 @@
# Himalaya Configuration Reference

Configuration file location: `~/.config/himalaya/config.toml`

## Minimal IMAP + SMTP Setup

```toml
[accounts.default]
email = "user@example.com"
display-name = "Your Name"
default = true

# IMAP backend for reading emails
backend.type = "imap"
backend.host = "imap.example.com"
backend.port = 993
backend.encryption.type = "tls"
backend.login = "user@example.com"
backend.auth.type = "password"
backend.auth.raw = "your-password"

# SMTP backend for sending emails
message.send.backend.type = "smtp"
message.send.backend.host = "smtp.example.com"
message.send.backend.port = 587
message.send.backend.encryption.type = "start-tls"
message.send.backend.login = "user@example.com"
message.send.backend.auth.type = "password"
message.send.backend.auth.raw = "your-password"
```

## Password Options

### Raw password (testing only, not recommended)

```toml
backend.auth.raw = "your-password"
```

### Password from command (recommended)

```toml
backend.auth.cmd = "pass show email/imap"
# backend.auth.cmd = "security find-generic-password -a user@example.com -s imap -w"
```

### System keyring (requires the keyring feature)

```toml
backend.auth.keyring = "imap-example"
```

Then run `himalaya account configure <account>` to store the password.

## Gmail Configuration

```toml
[accounts.gmail]
email = "you@gmail.com"
display-name = "Your Name"
default = true

backend.type = "imap"
backend.host = "imap.gmail.com"
backend.port = 993
backend.encryption.type = "tls"
backend.login = "you@gmail.com"
backend.auth.type = "password"
backend.auth.cmd = "pass show google/app-password"

message.send.backend.type = "smtp"
message.send.backend.host = "smtp.gmail.com"
message.send.backend.port = 587
message.send.backend.encryption.type = "start-tls"
message.send.backend.login = "you@gmail.com"
message.send.backend.auth.type = "password"
message.send.backend.auth.cmd = "pass show google/app-password"
```

**Note:** Gmail requires an App Password if 2FA is enabled.

## iCloud Configuration

```toml
[accounts.icloud]
email = "you@icloud.com"
display-name = "Your Name"

backend.type = "imap"
backend.host = "imap.mail.me.com"
backend.port = 993
backend.encryption.type = "tls"
backend.login = "you@icloud.com"
backend.auth.type = "password"
backend.auth.cmd = "pass show icloud/app-password"

message.send.backend.type = "smtp"
message.send.backend.host = "smtp.mail.me.com"
message.send.backend.port = 587
message.send.backend.encryption.type = "start-tls"
message.send.backend.login = "you@icloud.com"
message.send.backend.auth.type = "password"
message.send.backend.auth.cmd = "pass show icloud/app-password"
```

**Note:** Generate an app-specific password at appleid.apple.com.

## Folder Aliases

Map custom folder names:

```toml
[accounts.default.folder.alias]
inbox = "INBOX"
sent = "Sent"
drafts = "Drafts"
trash = "Trash"
```

## Multiple Accounts

```toml
[accounts.personal]
email = "personal@example.com"
default = true
# ... backend config ...

[accounts.work]
email = "work@company.com"
# ... backend config ...
```

Switch accounts with `--account`:

```bash
himalaya --account work envelope list
```

## Notmuch Backend (local mail)

```toml
[accounts.local]
email = "user@example.com"

backend.type = "notmuch"
backend.db-path = "~/.mail/.notmuch"
```

## OAuth2 Authentication (for providers that support it)

```toml
backend.auth.type = "oauth2"
backend.auth.client-id = "your-client-id"
backend.auth.client-secret.cmd = "pass show oauth/client-secret"
backend.auth.access-token.cmd = "pass show oauth/access-token"
backend.auth.refresh-token.cmd = "pass show oauth/refresh-token"
backend.auth.auth-url = "https://provider.com/oauth/authorize"
backend.auth.token-url = "https://provider.com/oauth/token"
```

## Additional Options

### Signature

```toml
[accounts.default]
signature = "Best regards,\nYour Name"
signature-delim = "-- \n"
```

### Downloads directory

```toml
[accounts.default]
downloads-dir = "~/Downloads/himalaya"
```

### Editor for composing

Set via environment variable:

```bash
export EDITOR="vim"
```
199
openclaw/skills/himalaya/references/message-composition.md
Normal file
@@ -0,0 +1,199 @@
# Message Composition with MML (MIME Meta Language)

Himalaya uses MML for composing emails. MML is a simple XML-based syntax that compiles to MIME messages.

## Basic Message Structure

An email message is a list of **headers** followed by a **body**, separated by a blank line:

```
From: sender@example.com
To: recipient@example.com
Subject: Hello World

This is the message body.
```

## Headers

Common headers:

- `From`: Sender address
- `To`: Primary recipient(s)
- `Cc`: Carbon copy recipients
- `Bcc`: Blind carbon copy recipients
- `Subject`: Message subject
- `Reply-To`: Address for replies (if different from From)
- `In-Reply-To`: Message ID being replied to

### Address Formats

```
To: user@example.com
To: John Doe <john@example.com>
To: "John Doe" <john@example.com>
To: user1@example.com, user2@example.com, "Jane" <jane@example.com>
```

## Plain Text Body

Simple plain text email:

```
From: alice@localhost
To: bob@localhost
Subject: Plain Text Example

Hello, this is a plain text email.
No special formatting needed.

Best,
Alice
```

## MML for Rich Emails

### Multipart Messages

Alternative text/html parts:

```
From: alice@localhost
To: bob@localhost
Subject: Multipart Example

<#multipart type=alternative>
This is the plain text version.
<#part type=text/html>
<html><body><h1>This is the HTML version</h1></body></html>
<#/multipart>
```

### Attachments

Attach a file:

```
From: alice@localhost
To: bob@localhost
Subject: With Attachment

Here is the document you requested.

<#part filename=/path/to/document.pdf><#/part>
```

Attachment with custom name:

```
<#part filename=/path/to/file.pdf name=report.pdf><#/part>
```

Multiple attachments:

```
<#part filename=/path/to/doc1.pdf><#/part>
<#part filename=/path/to/doc2.pdf><#/part>
```

### Inline Images

Embed an image inline:

```
From: alice@localhost
To: bob@localhost
Subject: Inline Image

<#multipart type=related>
<#part type=text/html>
<html><body>
<p>Check out this image:</p>
<img src="cid:image1">
</body></html>
<#part disposition=inline id=image1 filename=/path/to/image.png><#/part>
<#/multipart>
```

### Mixed Content (Text + Attachments)

```
From: alice@localhost
To: bob@localhost
Subject: Mixed Content

<#multipart type=mixed>
<#part type=text/plain>
Please find the attached files.

Best,
Alice
<#part filename=/path/to/file1.pdf><#/part>
<#part filename=/path/to/file2.zip><#/part>
<#/multipart>
```

## MML Tag Reference

### `<#multipart>`

Groups multiple parts together.

- `type=alternative`: Different representations of same content
- `type=mixed`: Independent parts (text + attachments)
- `type=related`: Parts that reference each other (HTML + images)

### `<#part>`

Defines a message part.

- `type=<mime-type>`: Content type (e.g., `text/html`, `application/pdf`)
- `filename=<path>`: File to attach
- `name=<name>`: Display name for attachment
- `disposition=inline`: Display inline instead of as attachment
- `id=<cid>`: Content ID for referencing in HTML

## Composing from CLI

### Interactive compose

Opens your `$EDITOR`:

```bash
himalaya message write
```

### Reply (opens editor with quoted message)

```bash
himalaya message reply 42
himalaya message reply 42 --all  # reply-all
```

### Forward

```bash
himalaya message forward 42
```

### Send from stdin

```bash
cat message.txt | himalaya template send
```
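The same stdin flow can be driven from a script with `subprocess`. A sketch (it assumes a configured `himalaya` on PATH and falls back to printing when the binary is absent):

```python
import shutil
import subprocess

message = """From: alice@localhost
To: bob@localhost
Subject: Scripted send

Hello from a script.
"""

# Equivalent to: cat message.txt | himalaya template send
if shutil.which("himalaya"):
    subprocess.run(
        ["himalaya", "template", "send"],
        input=message,  # template is passed on stdin
        text=True,
        check=True,
    )
else:
    print("himalaya not installed; message that would be sent:")
    print(message)
```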

### Prefill headers from CLI

```bash
himalaya message write \
  -H "To:recipient@example.com" \
  -H "Subject:Quick Message" \
  "Message body here"
```

## Tips

- The editor opens with a template; fill in headers and body.
- Save and exit the editor to send; exit without saving to cancel.
- MML parts are compiled to proper MIME when sending.
- Use `himalaya message export --full` to inspect the raw MIME structure of received emails.
122
openclaw/skills/imsg/SKILL.md
Normal file
@@ -0,0 +1,122 @@
---
name: imsg
description: iMessage/SMS CLI for listing chats, history, and sending messages via Messages.app.
homepage: https://imsg.to
metadata:
  {
    "openclaw":
      {
        "emoji": "📨",
        "os": ["darwin"],
        "requires": { "bins": ["imsg"] },
        "install":
          [
            {
              "id": "brew",
              "kind": "brew",
              "formula": "steipete/tap/imsg",
              "bins": ["imsg"],
              "label": "Install imsg (brew)",
            },
          ],
      },
  }
---

# imsg

Use `imsg` to read and send iMessage/SMS via macOS Messages.app.

## When to Use

✅ **USE this skill when:**

- User explicitly asks to send iMessage or SMS
- Reading iMessage conversation history
- Checking recent Messages.app chats
- Sending to phone numbers or Apple IDs

## When NOT to Use

❌ **DON'T use this skill when:**

- Telegram messages → use `message` tool with `channel:telegram`
- Signal messages → use Signal channel if configured
- WhatsApp messages → use WhatsApp channel if configured
- Discord messages → use `message` tool with `channel:discord`
- Slack messages → use `slack` skill
- Group chat management (adding/removing members) → not supported
- Bulk/mass messaging → always confirm with user first
- Replying in current conversation → just reply normally (Clawdbot routes automatically)

## Requirements

- macOS with Messages.app signed in
- Full Disk Access for terminal
- Automation permission for Messages.app (for sending)

## Common Commands

### List Chats

```bash
imsg chats --limit 10 --json
```

### View History

```bash
# By chat ID
imsg history --chat-id 1 --limit 20 --json

# With attachments info
imsg history --chat-id 1 --limit 20 --attachments --json
```

### Watch for New Messages

```bash
imsg watch --chat-id 1 --attachments
```

### Send Messages

```bash
# Text only
imsg send --to "+14155551212" --text "Hello!"

# With attachment
imsg send --to "+14155551212" --text "Check this out" --file /path/to/image.jpg

# Specify service
imsg send --to "+14155551212" --text "Hi" --service imessage
imsg send --to "+14155551212" --text "Hi" --service sms
```

## Service Options

- `--service imessage` — Force iMessage (requires recipient has iMessage)
- `--service sms` — Force SMS (green bubble)
- `--service auto` — Let Messages.app decide (default)

## Safety Rules

1. **Always confirm recipient and message content** before sending
2. **Never send to unknown numbers** without explicit user approval
3. **Be careful with attachments** — confirm file path exists
4. **Rate limit yourself** — don't spam

## Example Workflow

User: "Text mom that I'll be late"

```bash
# 1. Find mom's chat
imsg chats --limit 20 --json | jq '.[] | select(.displayName | contains("Mom"))'

# 2. Confirm with user
# "Found Mom at +1555123456. Send 'I'll be late' via iMessage?"

# 3. Send after confirmation
imsg send --to "+1555123456" --text "I'll be late"
```
61
openclaw/skills/mcporter/SKILL.md
Normal file
@@ -0,0 +1,61 @@
---
name: mcporter
description: Use the mcporter CLI to list, configure, auth, and call MCP servers/tools directly (HTTP or stdio), including ad-hoc servers, config edits, and CLI/type generation.
homepage: http://mcporter.dev
metadata:
  {
    "openclaw":
      {
        "emoji": "📦",
        "requires": { "bins": ["mcporter"] },
        "install":
          [
            {
              "id": "node",
              "kind": "node",
              "package": "mcporter",
              "bins": ["mcporter"],
              "label": "Install mcporter (node)",
            },
          ],
      },
  }
---

# mcporter

Use `mcporter` to work with MCP servers directly.

Quick start

- `mcporter list`
- `mcporter list <server> --schema`
- `mcporter call <server.tool> key=value`

Call tools

- Selector: `mcporter call linear.list_issues team=ENG limit:5`
- Function syntax: `mcporter call "linear.create_issue(title: \"Bug\")"`
- Full URL: `mcporter call https://api.example.com/mcp.fetch url:https://example.com`
- Stdio: `mcporter call --stdio "bun run ./server.ts" scrape url=https://example.com`
- JSON payload: `mcporter call <server.tool> --args '{"limit":5}'`

Auth + config

- OAuth: `mcporter auth <server | url> [--reset]`
- Config: `mcporter config list|get|add|remove|import|login|logout`

Daemon

- `mcporter daemon start|status|stop|restart`

Codegen

- CLI: `mcporter generate-cli --server <name>` or `--command <url>`
- Inspect: `mcporter inspect-cli <path> [--json]`
- TS: `mcporter emit-ts <server> --mode client|types`

Notes

- Config default: `./config/mcporter.json` (override with `--config`).
- Prefer `--output json` for machine-readable results.
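When driving `mcporter` from a script, building the argv list explicitly avoids shell-quoting issues with the `--args` JSON payload. A minimal sketch (the `linear.list_issues` server/tool pair is illustrative):

```python
import json

def mcporter_call_argv(server: str, tool: str, args: dict) -> list[str]:
    """Build an argv list for: mcporter call <server.tool> --args <json>."""
    return [
        "mcporter",
        "call",
        f"{server}.{tool}",
        "--args",
        json.dumps(args),  # JSON payload, no shell quoting to worry about
        "--output",
        "json",  # machine-readable results, per the note above
    ]

argv = mcporter_call_argv("linear", "list_issues", {"limit": 5})
print(argv)
```

Pass the resulting list to `subprocess.run(argv, capture_output=True, text=True)` and `json.loads` the stdout.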
69
openclaw/skills/model-usage/SKILL.md
Normal file
@@ -0,0 +1,69 @@
---
name: model-usage
description: Use CodexBar CLI local cost usage to summarize per-model usage for Codex or Claude, including the current (most recent) model or a full model breakdown. Trigger when asked for model-level usage/cost data from codexbar, or when you need a scriptable per-model summary from codexbar cost JSON.
metadata:
  {
    "openclaw":
      {
        "emoji": "📊",
        "os": ["darwin"],
        "requires": { "bins": ["codexbar"] },
        "install":
          [
            {
              "id": "brew-cask",
              "kind": "brew",
              "formula": "steipete/tap/codexbar",
              "bins": ["codexbar"],
              "label": "Install CodexBar (brew cask)",
            },
          ],
      },
  }
---

# Model usage

## Overview

Get per-model usage cost from CodexBar's local cost logs. Supports "current model" (most recent daily entry) or "all models" summaries for Codex or Claude.

TODO: add Linux CLI support guidance once CodexBar CLI install path is documented for Linux.

## Quick start

1. Fetch cost JSON via CodexBar CLI or pass a JSON file.
2. Use the bundled script to summarize by model.

```bash
python {baseDir}/scripts/model_usage.py --provider codex --mode current
python {baseDir}/scripts/model_usage.py --provider codex --mode all
python {baseDir}/scripts/model_usage.py --provider claude --mode all --format json --pretty
```

## Current model logic

- Uses the most recent daily row with `modelBreakdowns`.
- Picks the model with the highest cost in that row.
- Falls back to the last entry in `modelsUsed` when breakdowns are missing.
- Override with `--model <name>` when you need a specific model.

## Inputs

- Default: runs `codexbar cost --format json --provider <codex|claude>`.
- File or stdin:

```bash
codexbar cost --provider codex --format json > /tmp/cost.json
python {baseDir}/scripts/model_usage.py --input /tmp/cost.json --mode all
cat /tmp/cost.json | python {baseDir}/scripts/model_usage.py --input - --mode current
```

## Output

- Text (default) or JSON (`--format json --pretty`).
- Values are cost-only per model; tokens are not split by model in CodexBar output.

## References

- Read `references/codexbar-cli.md` for CLI flags and cost JSON fields.
33
openclaw/skills/model-usage/references/codexbar-cli.md
Normal file
@@ -0,0 +1,33 @@
# CodexBar CLI quick ref (usage + cost)

## Install

- App: Preferences -> Advanced -> Install CLI
- Repo: `./bin/install-codexbar-cli.sh`

## Commands

- Usage snapshot (web/cli sources):
  - `codexbar usage --format json --pretty`
  - `codexbar --provider all --format json`
- Local cost usage (Codex + Claude only):
  - `codexbar cost --format json --pretty`
  - `codexbar cost --provider codex|claude --format json`

## Cost JSON fields

The payload is an array (one per provider).

- provider, source, updatedAt
- sessionTokens, sessionCostUSD
- last30DaysTokens, last30DaysCostUSD
- daily[]: date, inputTokens, outputTokens, cacheReadTokens, cacheCreationTokens, totalTokens, totalCost, modelsUsed, modelBreakdowns[]
- modelBreakdowns[]: modelName, cost
- totals: totalInputTokens, totalOutputTokens, cacheReadTokens, cacheCreationTokens, totalTokens, totalCost
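As a concrete reading of these fields, summing `modelBreakdowns` across `daily` rows yields per-model totals. The payload below is a fabricated sample that only mirrors the documented shape:

```python
from collections import defaultdict

payload = [  # shape of `codexbar cost --format json` (fabricated sample)
    {
        "provider": "codex",
        "daily": [
            {"date": "2025-01-01",
             "modelBreakdowns": [{"modelName": "gpt-5", "cost": 1.25}]},
            {"date": "2025-01-02",
             "modelBreakdowns": [{"modelName": "gpt-5", "cost": 0.75},
                                 {"modelName": "o4-mini", "cost": 0.10}]},
        ],
    },
]

totals: dict[str, float] = defaultdict(float)
for day in payload[0]["daily"]:
    for item in day.get("modelBreakdowns", []):
        totals[item["modelName"]] += item["cost"]

print(dict(totals))
```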

## Notes

- Cost usage is local-only. It reads JSONL logs under:
  - Codex: `~/.codex/sessions/**/*.jsonl`
  - Claude: `~/.config/claude/projects/**/*.jsonl` or `~/.claude/projects/**/*.jsonl`
- If web usage is required (non-local), use `codexbar usage` (not `cost`).
320
openclaw/skills/model-usage/scripts/model_usage.py
Normal file
@@ -0,0 +1,320 @@
#!/usr/bin/env python3
"""
Summarize CodexBar local cost usage by model.

Defaults to current model (most recent daily entry), or list all models.
"""

from __future__ import annotations

import argparse
import json
import os
import subprocess
import sys
from dataclasses import dataclass
from datetime import date, datetime, timedelta
from typing import Any, Dict, Iterable, List, Optional, Tuple


def positive_int(value: str) -> int:
    try:
        parsed = int(value)
    except ValueError as exc:
        raise argparse.ArgumentTypeError("must be an integer") from exc
    if parsed < 1:
        raise argparse.ArgumentTypeError("must be >= 1")
    return parsed


def eprint(msg: str) -> None:
    print(msg, file=sys.stderr)


def run_codexbar_cost(provider: str) -> List[Dict[str, Any]]:
    cmd = ["codexbar", "cost", "--format", "json", "--provider", provider]
    try:
        output = subprocess.check_output(cmd, text=True)
    except FileNotFoundError:
        raise RuntimeError("codexbar not found on PATH. Install CodexBar CLI first.")
    except subprocess.CalledProcessError as exc:
        raise RuntimeError(f"codexbar cost failed (exit {exc.returncode}).")
    try:
        payload = json.loads(output)
    except json.JSONDecodeError as exc:
        raise RuntimeError(f"Failed to parse codexbar JSON output: {exc}")
    if not isinstance(payload, list):
        raise RuntimeError("Expected codexbar cost JSON array.")
    return payload


def load_payload(input_path: Optional[str], provider: str) -> Dict[str, Any]:
    if input_path:
        if input_path == "-":
            raw = sys.stdin.read()
        else:
            with open(input_path, "r", encoding="utf-8") as handle:
                raw = handle.read()
        data = json.loads(raw)
    else:
        data = run_codexbar_cost(provider)

    if isinstance(data, dict):
        return data

    if isinstance(data, list):
        for entry in data:
            if isinstance(entry, dict) and entry.get("provider") == provider:
                return entry
        raise RuntimeError(f"Provider '{provider}' not found in codexbar payload.")

    raise RuntimeError("Unsupported JSON input format.")


@dataclass
class ModelCost:
    model: str
    cost: float


def parse_daily_entries(payload: Dict[str, Any]) -> List[Dict[str, Any]]:
    daily = payload.get("daily")
    if not daily:
        return []
    if not isinstance(daily, list):
        return []
    return [entry for entry in daily if isinstance(entry, dict)]


def parse_date(value: str) -> Optional[date]:
    try:
        return datetime.strptime(value, "%Y-%m-%d").date()
    except Exception:
        return None


def filter_by_days(entries: List[Dict[str, Any]], days: Optional[int]) -> List[Dict[str, Any]]:
    if not days:
        return entries
    cutoff = date.today() - timedelta(days=days - 1)
    filtered: List[Dict[str, Any]] = []
    for entry in entries:
        day = entry.get("date")
        if not isinstance(day, str):
            continue
        parsed = parse_date(day)
        if parsed and parsed >= cutoff:
            filtered.append(entry)
    return filtered


def aggregate_costs(entries: Iterable[Dict[str, Any]]) -> Dict[str, float]:
    totals: Dict[str, float] = {}
    for entry in entries:
        breakdowns = entry.get("modelBreakdowns")
        if not breakdowns:
            continue
        if not isinstance(breakdowns, list):
            continue
        for item in breakdowns:
            if not isinstance(item, dict):
                continue
            model = item.get("modelName")
            cost = item.get("cost")
            if not isinstance(model, str):
                continue
            if not isinstance(cost, (int, float)):
                continue
            totals[model] = totals.get(model, 0.0) + float(cost)
    return totals


def pick_current_model(entries: List[Dict[str, Any]]) -> Tuple[Optional[str], Optional[str]]:
    if not entries:
        return None, None
    sorted_entries = sorted(
        entries,
        key=lambda entry: entry.get("date") or "",
    )
    for entry in reversed(sorted_entries):
        breakdowns = entry.get("modelBreakdowns")
        if isinstance(breakdowns, list) and breakdowns:
            scored: List[ModelCost] = []
            for item in breakdowns:
                if not isinstance(item, dict):
                    continue
                model = item.get("modelName")
                cost = item.get("cost")
                if isinstance(model, str) and isinstance(cost, (int, float)):
                    scored.append(ModelCost(model=model, cost=float(cost)))
            if scored:
                scored.sort(key=lambda item: item.cost, reverse=True)
                return scored[0].model, entry.get("date") if isinstance(entry.get("date"), str) else None
        models_used = entry.get("modelsUsed")
        if isinstance(models_used, list) and models_used:
            last = models_used[-1]
            if isinstance(last, str):
                return last, entry.get("date") if isinstance(entry.get("date"), str) else None
    return None, None


def usd(value: Optional[float]) -> str:
    if value is None:
        return "—"
    return f"${value:,.2f}"


def latest_day_cost(entries: List[Dict[str, Any]], model: str) -> Tuple[Optional[str], Optional[float]]:
    if not entries:
        return None, None
    sorted_entries = sorted(
        entries,
        key=lambda entry: entry.get("date") or "",
    )
    for entry in reversed(sorted_entries):
        breakdowns = entry.get("modelBreakdowns")
        if not isinstance(breakdowns, list):
            continue
        for item in breakdowns:
            if not isinstance(item, dict):
                continue
            if item.get("modelName") == model:
                cost = item.get("cost") if isinstance(item.get("cost"), (int, float)) else None
                day = entry.get("date") if isinstance(entry.get("date"), str) else None
                return day, float(cost) if cost is not None else None
    return None, None


def render_text_current(
    provider: str,
    model: str,
    latest_date: Optional[str],
    total_cost: Optional[float],
    latest_cost: Optional[float],
    latest_cost_date: Optional[str],
    entry_count: int,
) -> str:
    lines = [f"Provider: {provider}", f"Current model: {model}"]
    if latest_date:
        lines.append(f"Latest model date: {latest_date}")
    lines.append(f"Total cost (rows): {usd(total_cost)}")
    if latest_cost_date:
        lines.append(f"Latest day cost: {usd(latest_cost)} ({latest_cost_date})")
    lines.append(f"Daily rows: {entry_count}")
    return "\n".join(lines)


def render_text_all(provider: str, totals: Dict[str, float]) -> str:
    lines = [f"Provider: {provider}", "Models:"]
    for model, cost in sorted(totals.items(), key=lambda item: item[1], reverse=True):
        lines.append(f"- {model}: {usd(cost)}")
    return "\n".join(lines)


def build_json_current(
    provider: str,
    model: str,
    latest_date: Optional[str],
    total_cost: Optional[float],
    latest_cost: Optional[float],
    latest_cost_date: Optional[str],
    entry_count: int,
) -> Dict[str, Any]:
    return {
        "provider": provider,
        "mode": "current",
        "model": model,
        "latestModelDate": latest_date,
        "totalCostUSD": total_cost,
        "latestDayCostUSD": latest_cost,
        "latestDayCostDate": latest_cost_date,
        "dailyRowCount": entry_count,
    }


def build_json_all(provider: str, totals: Dict[str, float]) -> Dict[str, Any]:
    return {
        "provider": provider,
        "mode": "all",
        "models": [
            {"model": model, "totalCostUSD": cost}
            for model, cost in sorted(totals.items(), key=lambda item: item[1], reverse=True)
        ],
    }


def main() -> int:
    parser = argparse.ArgumentParser(description="Summarize CodexBar model usage from local cost logs.")
    parser.add_argument("--provider", choices=["codex", "claude"], default="codex")
    parser.add_argument("--mode", choices=["current", "all"], default="current")
    parser.add_argument("--model", help="Explicit model name to report instead of auto-current.")
    parser.add_argument("--input", help="Path to codexbar cost JSON (or '-' for stdin).")
    parser.add_argument("--days", type=positive_int, help="Limit to last N days (based on daily rows).")
    parser.add_argument("--format", choices=["text", "json"], default="text")
    parser.add_argument("--pretty", action="store_true", help="Pretty-print JSON output.")

    args = parser.parse_args()

    try:
        payload = load_payload(args.input, args.provider)
    except Exception as exc:
        eprint(str(exc))
        return 1

    entries = parse_daily_entries(payload)
    entries = filter_by_days(entries, args.days)

    if args.mode == "current":
        model = args.model
        latest_date = None
        if not model:
            model, latest_date = pick_current_model(entries)
        if not model:
            eprint("No model data found in codexbar cost payload.")
            return 2
        totals = aggregate_costs(entries)
        total_cost = totals.get(model)
        latest_cost_date, latest_cost = latest_day_cost(entries, model)

        if args.format == "json":
            payload_out = build_json_current(
                provider=args.provider,
                model=model,
                latest_date=latest_date,
                total_cost=total_cost,
                latest_cost=latest_cost,
                latest_cost_date=latest_cost_date,
                entry_count=len(entries),
            )
            indent = 2 if args.pretty else None
            print(json.dumps(payload_out, indent=indent, sort_keys=args.pretty))
        else:
            print(
                render_text_current(
                    provider=args.provider,
                    model=model,
                    latest_date=latest_date,
                    total_cost=total_cost,
                    latest_cost=latest_cost,
                    latest_cost_date=latest_cost_date,
                    entry_count=len(entries),
                )
            )
        return 0

    totals = aggregate_costs(entries)
    if not totals:
        eprint("No model breakdowns found in codexbar cost payload.")
        return 2

    if args.format == "json":
        payload_out = build_json_all(provider=args.provider, totals=totals)
        indent = 2 if args.pretty else None
        print(json.dumps(payload_out, indent=indent, sort_keys=args.pretty))
    else:
        print(render_text_all(provider=args.provider, totals=totals))
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
40
openclaw/skills/model-usage/scripts/test_model_usage.py
Normal file
@@ -0,0 +1,40 @@
#!/usr/bin/env python3
"""
Tests for model_usage helpers.
"""

import argparse
from datetime import date, timedelta
from unittest import TestCase, main

from model_usage import filter_by_days, positive_int


class TestModelUsage(TestCase):
    def test_positive_int_accepts_valid_numbers(self):
        self.assertEqual(positive_int("1"), 1)
        self.assertEqual(positive_int("7"), 7)

    def test_positive_int_rejects_zero_and_negative(self):
        with self.assertRaises(argparse.ArgumentTypeError):
            positive_int("0")
        with self.assertRaises(argparse.ArgumentTypeError):
            positive_int("-3")

    def test_filter_by_days_keeps_recent_entries(self):
        today = date.today()
        entries = [
            {"date": (today - timedelta(days=5)).strftime("%Y-%m-%d"), "modelBreakdowns": []},
            {"date": (today - timedelta(days=1)).strftime("%Y-%m-%d"), "modelBreakdowns": []},
            {"date": today.strftime("%Y-%m-%d"), "modelBreakdowns": []},
        ]

        filtered = filter_by_days(entries, 2)

        self.assertEqual(len(filtered), 2)
        self.assertEqual(filtered[0]["date"], (today - timedelta(days=1)).strftime("%Y-%m-%d"))
        self.assertEqual(filtered[1]["date"], today.strftime("%Y-%m-%d"))


if __name__ == "__main__":
    main()
58
openclaw/skills/nano-banana-pro/SKILL.md
Normal file
@@ -0,0 +1,58 @@
---
name: nano-banana-pro
description: Generate or edit images via Gemini 3 Pro Image (Nano Banana Pro).
homepage: https://ai.google.dev/
metadata:
  {
    "openclaw":
      {
        "emoji": "🍌",
        "requires": { "bins": ["uv"], "env": ["GEMINI_API_KEY"] },
        "primaryEnv": "GEMINI_API_KEY",
        "install":
          [
            {
              "id": "uv-brew",
              "kind": "brew",
              "formula": "uv",
              "bins": ["uv"],
              "label": "Install uv (brew)",
            },
          ],
      },
  }
---

# Nano Banana Pro (Gemini 3 Pro Image)

Use the bundled script to generate or edit images.

Generate

```bash
uv run {baseDir}/scripts/generate_image.py --prompt "your image description" --filename "output.png" --resolution 1K
```

Edit (single image)

```bash
uv run {baseDir}/scripts/generate_image.py --prompt "edit instructions" --filename "output.png" -i "/path/in.png" --resolution 2K
```

Multi-image composition (up to 14 images)

```bash
uv run {baseDir}/scripts/generate_image.py --prompt "combine these into one scene" --filename "output.png" -i img1.png -i img2.png -i img3.png
```

API key

- `GEMINI_API_KEY` env var
- Or set `skills."nano-banana-pro".apiKey` / `skills."nano-banana-pro".env.GEMINI_API_KEY` in `~/.openclaw/openclaw.json`

Notes

- Resolutions: `1K` (default), `2K`, `4K`.
- Use timestamps in filenames: `yyyy-mm-dd-hh-mm-ss-name.png`.
- The script prints a `MEDIA:` line for OpenClaw to auto-attach on supported chat providers.
- Do not read the image back; report the saved path only.
185
openclaw/skills/nano-banana-pro/scripts/generate_image.py
Normal file
@@ -0,0 +1,185 @@
#!/usr/bin/env python3
# /// script
# requires-python = ">=3.10"
# dependencies = [
#     "google-genai>=1.0.0",
#     "pillow>=10.0.0",
# ]
# ///
"""
Generate images using Google's Nano Banana Pro (Gemini 3 Pro Image) API.

Usage:
    uv run generate_image.py --prompt "your image description" --filename "output.png" [--resolution 1K|2K|4K] [--api-key KEY]

Multi-image editing (up to 14 images):
    uv run generate_image.py --prompt "combine these images" --filename "output.png" -i img1.png -i img2.png -i img3.png
"""

import argparse
import os
import sys
from pathlib import Path


def get_api_key(provided_key: str | None) -> str | None:
    """Get API key from argument first, then environment."""
    if provided_key:
        return provided_key
    return os.environ.get("GEMINI_API_KEY")


def main():
    parser = argparse.ArgumentParser(
        description="Generate images using Nano Banana Pro (Gemini 3 Pro Image)"
    )
    parser.add_argument(
        "--prompt", "-p",
        required=True,
        help="Image description/prompt"
    )
    parser.add_argument(
        "--filename", "-f",
        required=True,
        help="Output filename (e.g., sunset-mountains.png)"
    )
    parser.add_argument(
        "--input-image", "-i",
        action="append",
        dest="input_images",
        metavar="IMAGE",
        help="Input image path(s) for editing/composition. Can be specified multiple times (up to 14 images)."
    )
    parser.add_argument(
        "--resolution", "-r",
        choices=["1K", "2K", "4K"],
        default="1K",
        help="Output resolution: 1K (default), 2K, or 4K"
    )
    parser.add_argument(
        "--api-key", "-k",
        help="Gemini API key (overrides GEMINI_API_KEY env var)"
    )

    args = parser.parse_args()

    # Get API key
    api_key = get_api_key(args.api_key)
    if not api_key:
        print("Error: No API key provided.", file=sys.stderr)
        print("Please either:", file=sys.stderr)
        print("  1. Provide --api-key argument", file=sys.stderr)
        print("  2. Set GEMINI_API_KEY environment variable", file=sys.stderr)
        sys.exit(1)

    # Import here after checking API key to avoid slow import on error
    from google import genai
    from google.genai import types
    from PIL import Image as PILImage

    # Initialise client
    client = genai.Client(api_key=api_key)

    # Set up output path
    output_path = Path(args.filename)
    output_path.parent.mkdir(parents=True, exist_ok=True)

    # Load input images if provided (up to 14 supported by Nano Banana Pro)
    input_images = []
    output_resolution = args.resolution
    if args.input_images:
        if len(args.input_images) > 14:
            print(f"Error: Too many input images ({len(args.input_images)}). Maximum is 14.", file=sys.stderr)
            sys.exit(1)

        max_input_dim = 0
        for img_path in args.input_images:
            try:
                with PILImage.open(img_path) as img:
                    copied = img.copy()
                    width, height = copied.size
                    input_images.append(copied)
                    print(f"Loaded input image: {img_path}")

                    # Track largest dimension for auto-resolution
                    max_input_dim = max(max_input_dim, width, height)
            except Exception as e:
                print(f"Error loading input image '{img_path}': {e}", file=sys.stderr)
                sys.exit(1)

        # Auto-detect resolution from largest input if not explicitly set
        if args.resolution == "1K" and max_input_dim > 0:  # Default value
            if max_input_dim >= 3000:
                output_resolution = "4K"
            elif max_input_dim >= 1500:
                output_resolution = "2K"
            else:
                output_resolution = "1K"
            print(f"Auto-detected resolution: {output_resolution} (from max input dimension {max_input_dim})")

    # Build contents (images first if editing, prompt only if generating)
    if input_images:
        contents = [*input_images, args.prompt]
        img_count = len(input_images)
        print(f"Processing {img_count} image{'s' if img_count > 1 else ''} with resolution {output_resolution}...")
    else:
        contents = args.prompt
        print(f"Generating image with resolution {output_resolution}...")

    try:
        response = client.models.generate_content(
            model="gemini-3-pro-image-preview",
            contents=contents,
            config=types.GenerateContentConfig(
                response_modalities=["TEXT", "IMAGE"],
                image_config=types.ImageConfig(
                    image_size=output_resolution
                )
            )
        )

        # Process response and convert to PNG
        image_saved = False
        for part in response.parts:
            if part.text is not None:
                print(f"Model response: {part.text}")
            elif part.inline_data is not None:
                # Convert inline data to PIL Image and save as PNG
                from io import BytesIO

                # inline_data.data is already bytes, not base64
                image_data = part.inline_data.data
                if isinstance(image_data, str):
                    # If it's a string, it might be base64
                    import base64
                    image_data = base64.b64decode(image_data)

                image = PILImage.open(BytesIO(image_data))

                # Ensure RGB mode for PNG (convert RGBA to RGB with white background if needed)
                if image.mode == 'RGBA':
                    rgb_image = PILImage.new('RGB', image.size, (255, 255, 255))
                    rgb_image.paste(image, mask=image.split()[3])
                    rgb_image.save(str(output_path), 'PNG')
                elif image.mode == 'RGB':
                    image.save(str(output_path), 'PNG')
                else:
                    image.convert('RGB').save(str(output_path), 'PNG')
                image_saved = True

        if image_saved:
            full_path = output_path.resolve()
            print(f"\nImage saved: {full_path}")
            # OpenClaw parses MEDIA tokens and will attach the file on supported providers.
            print(f"MEDIA: {full_path}")
        else:
            print("Error: No image was generated in the response.", file=sys.stderr)
            sys.exit(1)

    except Exception as e:
        print(f"Error generating image: {e}", file=sys.stderr)
        sys.exit(1)


if __name__ == "__main__":
    main()
38
openclaw/skills/nano-pdf/SKILL.md
Normal file
@@ -0,0 +1,38 @@
---
name: nano-pdf
description: Edit PDFs with natural-language instructions using the nano-pdf CLI.
homepage: https://pypi.org/project/nano-pdf/
metadata:
  {
    "openclaw":
      {
        "emoji": "📄",
        "requires": { "bins": ["nano-pdf"] },
        "install":
          [
            {
              "id": "uv",
              "kind": "uv",
              "package": "nano-pdf",
              "bins": ["nano-pdf"],
              "label": "Install nano-pdf (uv)",
            },
          ],
      },
  }
---

# nano-pdf

Use `nano-pdf` to apply edits to a specific page in a PDF using a natural-language instruction.

## Quick start

```bash
nano-pdf edit deck.pdf 1 "Change the title to 'Q3 Results' and fix the typo in the subtitle"
```

Notes:

- Page numbers are 0-based or 1-based depending on the tool’s version/config; if the result looks off by one, retry with the other.
- Always sanity-check the output PDF before sending it out.
172
openclaw/skills/notion/SKILL.md
Normal file
@@ -0,0 +1,172 @@
---
name: notion
description: Notion API for creating and managing pages, databases, and blocks.
homepage: https://developers.notion.com
metadata:
  {
    "openclaw":
      { "emoji": "📝", "requires": { "env": ["NOTION_API_KEY"] }, "primaryEnv": "NOTION_API_KEY" },
  }
---

# notion

Use the Notion API to create/read/update pages, data sources (databases), and blocks.

## Setup

1. Create an integration at https://notion.so/my-integrations
2. Copy the API key (starts with `ntn_` or `secret_`)
3. Store it:

```bash
mkdir -p ~/.config/notion
echo "ntn_your_key_here" > ~/.config/notion/api_key
```

4. Share target pages/databases with your integration (click "..." → "Connect to" → your integration name)

## API Basics

All requests need:

```bash
NOTION_KEY=$(cat ~/.config/notion/api_key)
curl -X GET "https://api.notion.com/v1/..." \
  -H "Authorization: Bearer $NOTION_KEY" \
  -H "Notion-Version: 2025-09-03" \
  -H "Content-Type: application/json"
```

> **Note:** The `Notion-Version` header is required. This skill uses `2025-09-03` (latest). In this version, databases are called "data sources" in the API.

## Common Operations

**Search for pages and data sources:**

```bash
curl -X POST "https://api.notion.com/v1/search" \
  -H "Authorization: Bearer $NOTION_KEY" \
  -H "Notion-Version: 2025-09-03" \
  -H "Content-Type: application/json" \
  -d '{"query": "page title"}'
```

**Get page:**

```bash
curl "https://api.notion.com/v1/pages/{page_id}" \
  -H "Authorization: Bearer $NOTION_KEY" \
  -H "Notion-Version: 2025-09-03"
```

**Get page content (blocks):**

```bash
curl "https://api.notion.com/v1/blocks/{page_id}/children" \
  -H "Authorization: Bearer $NOTION_KEY" \
  -H "Notion-Version: 2025-09-03"
```

**Create page in a data source:**

```bash
curl -X POST "https://api.notion.com/v1/pages" \
  -H "Authorization: Bearer $NOTION_KEY" \
  -H "Notion-Version: 2025-09-03" \
  -H "Content-Type: application/json" \
  -d '{
    "parent": {"database_id": "xxx"},
    "properties": {
      "Name": {"title": [{"text": {"content": "New Item"}}]},
      "Status": {"select": {"name": "Todo"}}
    }
  }'
```

**Query a data source (database):**

```bash
curl -X POST "https://api.notion.com/v1/data_sources/{data_source_id}/query" \
  -H "Authorization: Bearer $NOTION_KEY" \
  -H "Notion-Version: 2025-09-03" \
  -H "Content-Type: application/json" \
  -d '{
    "filter": {"property": "Status", "select": {"equals": "Active"}},
    "sorts": [{"property": "Date", "direction": "descending"}]
  }'
```

**Create a data source (database):**

```bash
curl -X POST "https://api.notion.com/v1/data_sources" \
  -H "Authorization: Bearer $NOTION_KEY" \
  -H "Notion-Version: 2025-09-03" \
  -H "Content-Type: application/json" \
  -d '{
    "parent": {"page_id": "xxx"},
    "title": [{"text": {"content": "My Database"}}],
    "properties": {
      "Name": {"title": {}},
      "Status": {"select": {"options": [{"name": "Todo"}, {"name": "Done"}]}},
      "Date": {"date": {}}
    }
  }'
```

**Update page properties:**

```bash
curl -X PATCH "https://api.notion.com/v1/pages/{page_id}" \
  -H "Authorization: Bearer $NOTION_KEY" \
  -H "Notion-Version: 2025-09-03" \
  -H "Content-Type: application/json" \
  -d '{"properties": {"Status": {"select": {"name": "Done"}}}}'
```

**Add blocks to page:**

```bash
curl -X PATCH "https://api.notion.com/v1/blocks/{page_id}/children" \
  -H "Authorization: Bearer $NOTION_KEY" \
  -H "Notion-Version: 2025-09-03" \
  -H "Content-Type: application/json" \
  -d '{
    "children": [
      {"object": "block", "type": "paragraph", "paragraph": {"rich_text": [{"text": {"content": "Hello"}}]}}
    ]
  }'
```

## Property Types

Common property formats for database items:

- **Title:** `{"title": [{"text": {"content": "..."}}]}`
- **Rich text:** `{"rich_text": [{"text": {"content": "..."}}]}`
- **Select:** `{"select": {"name": "Option"}}`
- **Multi-select:** `{"multi_select": [{"name": "A"}, {"name": "B"}]}`
- **Date:** `{"date": {"start": "2024-01-15", "end": "2024-01-16"}}`
- **Checkbox:** `{"checkbox": true}`
- **Number:** `{"number": 42}`
- **URL:** `{"url": "https://..."}`
- **Email:** `{"email": "a@b.com"}`
- **Relation:** `{"relation": [{"id": "page_id"}]}`

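These formats can be combined in one create-page call; a sketch (the `database_id` and property names are placeholders for your schema):

```bash
curl -X POST "https://api.notion.com/v1/pages" \
  -H "Authorization: Bearer $NOTION_KEY" \
  -H "Notion-Version: 2025-09-03" \
  -H "Content-Type: application/json" \
  -d '{
    "parent": {"database_id": "xxx"},
    "properties": {
      "Name": {"title": [{"text": {"content": "Quarterly report"}}]},
      "Tags": {"multi_select": [{"name": "A"}, {"name": "B"}]},
      "Due": {"date": {"start": "2024-01-15"}},
      "Done": {"checkbox": false},
      "Link": {"url": "https://example.com"}
    }
  }'
```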
## Key Differences in 2025-09-03

- **Databases → Data Sources:** Use `/data_sources/` endpoints for queries and retrieval
- **Two IDs:** Each database now has both a `database_id` and a `data_source_id`
  - Use `database_id` when creating pages (`parent: {"database_id": "..."}`)
  - Use `data_source_id` when querying (`POST /v1/data_sources/{id}/query`)
- **Search results:** Databases return as `"object": "data_source"` with their `data_source_id`
- **Parent in responses:** Pages show `parent.data_source_id` alongside `parent.database_id`
- **Finding the data_source_id:** Search for the database, or call `GET /v1/data_sources/{data_source_id}`

## Notes

- Page/database IDs are UUIDs (with or without dashes)
- The API cannot set database view filters — that's UI-only
- Rate limit: ~3 requests/second average
- Use `is_inline: true` when creating data sources to embed them in pages
81
openclaw/skills/obsidian/SKILL.md
Normal file
@@ -0,0 +1,81 @@
---
name: obsidian
description: Work with Obsidian vaults (plain Markdown notes) and automate via obsidian-cli.
homepage: https://help.obsidian.md
metadata:
  {
    "openclaw":
      {
        "emoji": "💎",
        "requires": { "bins": ["obsidian-cli"] },
        "install":
          [
            {
              "id": "brew",
              "kind": "brew",
              "formula": "yakitrak/yakitrak/obsidian-cli",
              "bins": ["obsidian-cli"],
              "label": "Install obsidian-cli (brew)",
            },
          ],
      },
  }
---

# Obsidian

Obsidian vault = a normal folder on disk.

Vault structure (typical)

- Notes: `*.md` (plain text Markdown; edit with any editor)
- Config: `.obsidian/` (workspace + plugin settings; usually don’t touch from scripts)
- Canvases: `*.canvas` (JSON)
- Attachments: whatever folder you chose in Obsidian settings (images/PDFs/etc.)

## Find the active vault(s)

Obsidian desktop tracks vaults here (source of truth):

- `~/Library/Application Support/obsidian/obsidian.json`

`obsidian-cli` resolves vaults from that file; vault name is typically the **folder name** (path suffix).

Fast “what vault is active / where are the notes?”

- If you’ve already set a default: `obsidian-cli print-default --path-only`
- Otherwise, read `~/Library/Application Support/obsidian/obsidian.json` and use the vault entry with `"open": true`.

Notes

- Multiple vaults are common (iCloud vs `~/Documents`, work/personal, etc.). Don’t guess; read the config.
- Avoid writing hardcoded vault paths into scripts; prefer reading the config or using `print-default`.

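The second bullet can be scripted. A minimal sketch, assuming the observed `obsidian.json` shape (a top-level `vaults` map whose entries have a `path` and, for the active vault, `"open": true`):

```shell
# Print the path of every vault marked open (normally just one).
config="$HOME/Library/Application Support/obsidian/obsidian.json"
if [ -f "$config" ]; then
  python3 - "$config" <<'PY'
import json, sys
vaults = json.load(open(sys.argv[1])).get("vaults", {})
for vault in vaults.values():
    if vault.get("open"):
        print(vault["path"])
PY
fi
```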
## obsidian-cli quick start

Pick a default vault (once):

- `obsidian-cli set-default "<vault-folder-name>"`
- `obsidian-cli print-default` / `obsidian-cli print-default --path-only`

Search

- `obsidian-cli search "query"` (note names)
- `obsidian-cli search-content "query"` (inside notes; shows snippets + lines)

Create

- `obsidian-cli create "Folder/New note" --content "..." --open`
- Requires the Obsidian URI handler (`obsidian://…`) working (Obsidian installed).
- Avoid creating notes under “hidden” dot-folders (e.g. `.something/...`) via URI; Obsidian may refuse.

Move/rename (safe refactor)

- `obsidian-cli move "old/path/note" "new/path/note"`
- Updates `[[wikilinks]]` and common Markdown links across the vault (this is the main win vs `mv`).

Delete

- `obsidian-cli delete "path/note"`

Prefer direct edits when appropriate: open the `.md` file and change it; Obsidian will pick it up.
89
openclaw/skills/openai-image-gen/SKILL.md
Normal file
@@ -0,0 +1,89 @@
---
name: openai-image-gen
description: Batch-generate images via OpenAI Images API. Random prompt sampler + `index.html` gallery.
homepage: https://platform.openai.com/docs/api-reference/images
metadata:
  {
    "openclaw":
      {
        "emoji": "🖼️",
        "requires": { "bins": ["python3"], "env": ["OPENAI_API_KEY"] },
        "primaryEnv": "OPENAI_API_KEY",
        "install":
          [
            {
              "id": "python-brew",
              "kind": "brew",
              "formula": "python",
              "bins": ["python3"],
              "label": "Install Python (brew)",
            },
          ],
      },
  }
---

# OpenAI Image Gen

Generate a handful of “random but structured” prompts and render them via the OpenAI Images API.

## Run

```bash
python3 {baseDir}/scripts/gen.py
open ~/Projects/tmp/openai-image-gen-*/index.html  # if ~/Projects/tmp exists; else ./tmp/...
```

Useful flags:

```bash
# GPT image models with various options
python3 {baseDir}/scripts/gen.py --count 16 --model gpt-image-1
python3 {baseDir}/scripts/gen.py --prompt "ultra-detailed studio photo of a lobster astronaut" --count 4
python3 {baseDir}/scripts/gen.py --size 1536x1024 --quality high --out-dir ./out/images
python3 {baseDir}/scripts/gen.py --model gpt-image-1.5 --background transparent --output-format webp

# DALL-E 3 (note: count is automatically limited to 1)
python3 {baseDir}/scripts/gen.py --model dall-e-3 --quality hd --size 1792x1024 --style vivid
python3 {baseDir}/scripts/gen.py --model dall-e-3 --style natural --prompt "serene mountain landscape"

# DALL-E 2
python3 {baseDir}/scripts/gen.py --model dall-e-2 --size 512x512 --count 4
```

## Model-Specific Parameters

Different models support different parameter values. The script automatically selects appropriate defaults based on the model.

### Size

- **GPT image models** (`gpt-image-1`, `gpt-image-1-mini`, `gpt-image-1.5`): `1024x1024`, `1536x1024` (landscape), `1024x1536` (portrait), or `auto`
  - Default: `1024x1024`
- **dall-e-3**: `1024x1024`, `1792x1024`, or `1024x1792`
  - Default: `1024x1024`
- **dall-e-2**: `256x256`, `512x512`, or `1024x1024`
  - Default: `1024x1024`

### Quality

- **GPT image models**: `auto`, `high`, `medium`, or `low`
  - Default: `high`
- **dall-e-3**: `hd` or `standard`
  - Default: `standard`
- **dall-e-2**: `standard` only
  - Default: `standard`

### Other Notable Differences

- **dall-e-3** only supports generating 1 image at a time (`n=1`). The script automatically limits count to 1 when using this model.
- **GPT image models** support additional parameters:
  - `--background`: `transparent`, `opaque`, or `auto` (default)
  - `--output-format`: `png` (default), `jpeg`, or `webp`
  - Note: `stream` and `moderation` are available via API but not yet implemented in this script
- **dall-e-3** has a `--style` parameter: `vivid` (hyper-real, dramatic) or `natural` (more natural looking)

## Output

- `*.png`, `*.jpeg`, or `*.webp` images (output format depends on model + `--output-format`)
- `prompts.json` (prompt → file mapping)
- `index.html` (thumbnail gallery)
241
openclaw/skills/openai-image-gen/scripts/gen.py
Normal file
@@ -0,0 +1,241 @@
#!/usr/bin/env python3
import argparse
import base64
import datetime as dt
import json
import os
import random
import re
import sys
import urllib.error
import urllib.request
from html import escape as html_escape
from pathlib import Path


def slugify(text: str) -> str:
    text = text.lower().strip()
    text = re.sub(r"[^a-z0-9]+", "-", text)
    text = re.sub(r"-{2,}", "-", text).strip("-")
    return text or "image"


def default_out_dir() -> Path:
    now = dt.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")
    preferred = Path.home() / "Projects" / "tmp"
    base = preferred if preferred.is_dir() else Path("./tmp")
    base.mkdir(parents=True, exist_ok=True)
    return base / f"openai-image-gen-{now}"


def pick_prompts(count: int) -> list[str]:
    subjects = [
        "a lobster astronaut",
        "a brutalist lighthouse",
        "a cozy reading nook",
        "a cyberpunk noodle shop",
        "a Vienna street at dusk",
        "a minimalist product photo",
        "a surreal underwater library",
    ]
    styles = [
        "ultra-detailed studio photo",
        "35mm film still",
        "isometric illustration",
        "editorial photography",
        "soft watercolor",
        "architectural render",
        "high-contrast monochrome",
    ]
    lighting = [
        "golden hour",
        "overcast soft light",
        "neon lighting",
        "dramatic rim light",
        "candlelight",
        "foggy atmosphere",
    ]
    prompts: list[str] = []
    for _ in range(count):
        prompts.append(
            f"{random.choice(styles)} of {random.choice(subjects)}, {random.choice(lighting)}"
        )
    return prompts


def get_model_defaults(model: str) -> tuple[str, str]:
    """Return (default_size, default_quality) for the given model."""
    if model == "dall-e-2":
        # quality will be ignored
        return ("1024x1024", "standard")
    elif model == "dall-e-3":
        return ("1024x1024", "standard")
    else:
        # GPT image or future models
        return ("1024x1024", "high")


def request_images(
    api_key: str,
    prompt: str,
    model: str,
    size: str,
    quality: str,
    background: str = "",
    output_format: str = "",
    style: str = "",
) -> dict:
    url = "https://api.openai.com/v1/images/generations"
    args = {
        "model": model,
        "prompt": prompt,
        "size": size,
        "n": 1,
    }

    # Quality parameter - dall-e-2 doesn't accept this parameter
    if model != "dall-e-2":
        args["quality"] = quality

    # Note: response_format no longer supported by OpenAI Images API
    # dall-e models now return URLs by default

    if model.startswith("gpt-image"):
        if background:
            args["background"] = background
        if output_format:
            args["output_format"] = output_format

    if model == "dall-e-3" and style:
        args["style"] = style

    body = json.dumps(args).encode("utf-8")
    req = urllib.request.Request(
        url,
        method="POST",
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        data=body,
    )
    try:
        with urllib.request.urlopen(req, timeout=300) as resp:
            return json.loads(resp.read().decode("utf-8"))
    except urllib.error.HTTPError as e:
        payload = e.read().decode("utf-8", errors="replace")
        raise RuntimeError(f"OpenAI Images API failed ({e.code}): {payload}") from e


def write_gallery(out_dir: Path, items: list[dict]) -> None:
    thumbs = "\n".join(
        [
            f"""
<figure>
  <a href="{html_escape(it["file"], quote=True)}"><img src="{html_escape(it["file"], quote=True)}" loading="lazy" /></a>
  <figcaption>{html_escape(it["prompt"])}</figcaption>
</figure>
""".strip()
            for it in items
        ]
    )
    html = f"""<!doctype html>
<meta charset="utf-8" />
<title>openai-image-gen</title>
<style>
  :root {{ color-scheme: dark; }}
  body {{ margin: 24px; font: 14px/1.4 ui-sans-serif, system-ui; background: #0b0f14; color: #e8edf2; }}
  h1 {{ font-size: 18px; margin: 0 0 16px; }}
  .grid {{ display: grid; grid-template-columns: repeat(auto-fill, minmax(240px, 1fr)); gap: 16px; }}
  figure {{ margin: 0; padding: 12px; border: 1px solid #1e2a36; border-radius: 14px; background: #0f1620; }}
  img {{ width: 100%; height: auto; border-radius: 10px; display: block; }}
  figcaption {{ margin-top: 10px; color: #b7c2cc; }}
  code {{ color: #9cd1ff; }}
</style>
<h1>openai-image-gen</h1>
<p>Output: <code>{html_escape(out_dir.as_posix())}</code></p>
<div class="grid">
{thumbs}
</div>
"""
    (out_dir / "index.html").write_text(html, encoding="utf-8")


def main() -> int:
    ap = argparse.ArgumentParser(description="Generate images via OpenAI Images API.")
    ap.add_argument("--prompt", help="Single prompt. If omitted, random prompts are generated.")
    ap.add_argument("--count", type=int, default=8, help="How many images to generate.")
    ap.add_argument("--model", default="gpt-image-1", help="Image model id.")
    ap.add_argument("--size", default="", help="Image size (e.g. 1024x1024, 1536x1024). Defaults based on model if not specified.")
    ap.add_argument("--quality", default="", help="Image quality (e.g. high, standard). Defaults based on model if not specified.")
    ap.add_argument("--background", default="", help="Background transparency (GPT models only): transparent, opaque, or auto.")
    ap.add_argument("--output-format", default="", help="Output format (GPT models only): png, jpeg, or webp.")
    ap.add_argument("--style", default="", help="Image style (dall-e-3 only): vivid or natural.")
    ap.add_argument("--out-dir", default="", help="Output directory (default: ./tmp/openai-image-gen-<ts>).")
    args = ap.parse_args()

    api_key = (os.environ.get("OPENAI_API_KEY") or "").strip()
    if not api_key:
        print("Missing OPENAI_API_KEY", file=sys.stderr)
        return 2

    # Apply model-specific defaults if not specified
    default_size, default_quality = get_model_defaults(args.model)
    size = args.size or default_size
    quality = args.quality or default_quality

    count = args.count
    if args.model == "dall-e-3" and count > 1:
        print(f"Warning: dall-e-3 only supports generating 1 image at a time. Reducing count from {count} to 1.", file=sys.stderr)
        count = 1

    out_dir = Path(args.out_dir).expanduser() if args.out_dir else default_out_dir()
    out_dir.mkdir(parents=True, exist_ok=True)

    prompts = [args.prompt] * count if args.prompt else pick_prompts(count)

    # Determine file extension based on output format
    if args.model.startswith("gpt-image") and args.output_format:
        file_ext = args.output_format
    else:
        file_ext = "png"

    items: list[dict] = []
    for idx, prompt in enumerate(prompts, start=1):
        print(f"[{idx}/{len(prompts)}] {prompt}")
        res = request_images(
            api_key,
            prompt,
            args.model,
            size,
            quality,
            args.background,
            args.output_format,
            args.style,
        )
        data = res.get("data", [{}])[0]
        image_b64 = data.get("b64_json")
        image_url = data.get("url")
        if not image_b64 and not image_url:
            raise RuntimeError(f"Unexpected response: {json.dumps(res)[:400]}")

        filename = f"{idx:03d}-{slugify(prompt)[:40]}.{file_ext}"
        filepath = out_dir / filename
        if image_b64:
            filepath.write_bytes(base64.b64decode(image_b64))
        else:
            try:
                urllib.request.urlretrieve(image_url, filepath)
            except urllib.error.URLError as e:
                raise RuntimeError(f"Failed to download image from {image_url}: {e}") from e

        items.append({"prompt": prompt, "file": filename})

    (out_dir / "prompts.json").write_text(json.dumps(items, indent=2), encoding="utf-8")
    write_gallery(out_dir, items)
    print(f"\nWrote: {(out_dir / 'index.html').as_posix()}")
    return 0


if __name__ == "__main__":
    raise SystemExit(main())
50
openclaw/skills/openai-image-gen/scripts/test_gen.py
Normal file
@@ -0,0 +1,50 @@
"""Tests for write_gallery HTML escaping (fixes #12538 - stored XSS)."""

import tempfile
from pathlib import Path

from gen import write_gallery


def test_write_gallery_escapes_prompt_xss():
    with tempfile.TemporaryDirectory() as tmpdir:
        out = Path(tmpdir)
        items = [{"prompt": '<script>alert("xss")</script>', "file": "001-test.png"}]
        write_gallery(out, items)
        html = (out / "index.html").read_text()
        assert "<script>" not in html
        assert "&lt;script&gt;" in html


def test_write_gallery_escapes_filename():
    with tempfile.TemporaryDirectory() as tmpdir:
        out = Path(tmpdir)
        items = [{"prompt": "safe prompt", "file": '" onload="alert(1)'}]
        write_gallery(out, items)
        html = (out / "index.html").read_text()
        assert 'onload="alert(1)' not in html
        assert "&quot;" in html


def test_write_gallery_escapes_ampersand():
    with tempfile.TemporaryDirectory() as tmpdir:
        out = Path(tmpdir)
        items = [{"prompt": "cats & dogs <3", "file": "001-test.png"}]
        write_gallery(out, items)
        html = (out / "index.html").read_text()
        assert "cats &amp; dogs &lt;3" in html


def test_write_gallery_normal_output():
    with tempfile.TemporaryDirectory() as tmpdir:
        out = Path(tmpdir)
        items = [
            {"prompt": "a lobster astronaut, golden hour", "file": "001-lobster.png"},
            {"prompt": "a cozy reading nook", "file": "002-nook.png"},
        ]
        write_gallery(out, items)
        html = (out / "index.html").read_text()
        assert "a lobster astronaut, golden hour" in html
        assert 'src="001-lobster.png"' in html
        assert "002-nook.png" in html
52
openclaw/skills/openai-whisper-api/SKILL.md
Normal file
@@ -0,0 +1,52 @@
---
name: openai-whisper-api
description: Transcribe audio via OpenAI Audio Transcriptions API (Whisper).
homepage: https://platform.openai.com/docs/guides/speech-to-text
metadata:
  {
    "openclaw":
      {
        "emoji": "☁️",
        "requires": { "bins": ["curl"], "env": ["OPENAI_API_KEY"] },
        "primaryEnv": "OPENAI_API_KEY",
      },
  }
---

# OpenAI Whisper API (curl)

Transcribe an audio file via OpenAI’s `/v1/audio/transcriptions` endpoint.

## Quick start

```bash
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a
```

Defaults:

- Model: `whisper-1`
- Output: `<input>.txt`

## Useful flags

```bash
{baseDir}/scripts/transcribe.sh /path/to/audio.ogg --model whisper-1 --out /tmp/transcript.txt
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --language en
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --prompt "Speaker names: Peter, Daniel"
{baseDir}/scripts/transcribe.sh /path/to/audio.m4a --json --out /tmp/transcript.json
```

## API key
|
||||
|
||||
Set `OPENAI_API_KEY`, or configure it in `~/.openclaw/openclaw.json`:
|
||||
|
||||
```json5
|
||||
{
|
||||
skills: {
|
||||
"openai-whisper-api": {
|
||||
apiKey: "OPENAI_KEY_HERE",
|
||||
},
|
||||
},
|
||||
}
|
||||
```
|
||||
85
openclaw/skills/openai-whisper-api/scripts/transcribe.sh
Normal file
@@ -0,0 +1,85 @@
#!/usr/bin/env bash
set -euo pipefail

usage() {
  cat >&2 <<'EOF'
Usage:
  transcribe.sh <audio-file> [--model whisper-1] [--out /path/to/out.txt] [--language en] [--prompt "hint"] [--json]
EOF
  exit 2
}

if [[ "${1:-}" == "" || "${1:-}" == "-h" || "${1:-}" == "--help" ]]; then
  usage
fi

in="${1:-}"
shift || true

model="whisper-1"
out=""
language=""
prompt=""
response_format="text"

while [[ $# -gt 0 ]]; do
  case "$1" in
    --model)
      model="${2:-}"
      shift 2
      ;;
    --out)
      out="${2:-}"
      shift 2
      ;;
    --language)
      language="${2:-}"
      shift 2
      ;;
    --prompt)
      prompt="${2:-}"
      shift 2
      ;;
    --json)
      response_format="json"
      shift 1
      ;;
    *)
      echo "Unknown arg: $1" >&2
      usage
      ;;
  esac
done

if [[ ! -f "$in" ]]; then
  echo "File not found: $in" >&2
  exit 1
fi

if [[ "${OPENAI_API_KEY:-}" == "" ]]; then
  echo "Missing OPENAI_API_KEY" >&2
  exit 1
fi

if [[ "$out" == "" ]]; then
  base="${in%.*}"
  if [[ "$response_format" == "json" ]]; then
    out="${base}.json"
  else
    out="${base}.txt"
  fi
fi

mkdir -p "$(dirname "$out")"

curl -sS https://api.openai.com/v1/audio/transcriptions \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -H "Accept: application/json" \
  -F "file=@${in}" \
  -F "model=${model}" \
  -F "response_format=${response_format}" \
  ${language:+-F "language=${language}"} \
  ${prompt:+-F "prompt=${prompt}"} \
  >"$out"

echo "$out"
38
openclaw/skills/openai-whisper/SKILL.md
Normal file
@@ -0,0 +1,38 @@
---
name: openai-whisper
description: Local speech-to-text with the Whisper CLI (no API key).
homepage: https://openai.com/research/whisper
metadata:
  {
    "openclaw":
      {
        "emoji": "🎙️",
        "requires": { "bins": ["whisper"] },
        "install":
          [
            {
              "id": "brew",
              "kind": "brew",
              "formula": "openai-whisper",
              "bins": ["whisper"],
              "label": "Install OpenAI Whisper (brew)",
            },
          ],
      },
  }
---

# Whisper (CLI)

Use `whisper` to transcribe audio locally.

Quick start

- `whisper /path/audio.mp3 --model medium --output_format txt --output_dir .`
- `whisper /path/audio.m4a --task translate --output_format srt`

Notes

- Models download to `~/.cache/whisper` on first run.
- `--model` defaults to `turbo` on this install.
- Use smaller models for speed, larger for accuracy.
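For a folder of recordings, a small wrapper loop over the quick-start command is enough. A sketch — the model and output format here are assumptions, tune them per the notes above:

```shell
# Batch-transcribe every matching audio file in a directory.
transcribe_dir() {
  local dir="$1" f
  for f in "$dir"/*.mp3 "$dir"/*.m4a "$dir"/*.wav; do
    [ -e "$f" ] || continue   # glob matched nothing; skip the literal pattern
    whisper "$f" --model medium --output_format txt --output_dir "$dir"
  done
}
```

Usage: `transcribe_dir ~/recordings` writes one `.txt` per audio file next to the sources.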
112
openclaw/skills/openhue/SKILL.md
Normal file
@@ -0,0 +1,112 @@
---
name: openhue
description: Control Philips Hue lights and scenes via the OpenHue CLI.
homepage: https://www.openhue.io/cli
metadata:
  {
    "openclaw":
      {
        "emoji": "💡",
        "requires": { "bins": ["openhue"] },
        "install":
          [
            {
              "id": "brew",
              "kind": "brew",
              "formula": "openhue/cli/openhue-cli",
              "bins": ["openhue"],
              "label": "Install OpenHue CLI (brew)",
            },
          ],
      },
  }
---

# OpenHue CLI

Use `openhue` to control Philips Hue lights and scenes via a Hue Bridge.

## When to Use

✅ **USE this skill when:**

- "Turn on/off the lights"
- "Dim the living room lights"
- "Set a scene" or "movie mode"
- Controlling specific Hue rooms or zones
- Adjusting brightness, color, or color temperature

## When NOT to Use

❌ **DON'T use this skill when:**

- Non-Hue smart devices (other brands) → not supported
- HomeKit scenes or Shortcuts → use Apple's ecosystem
- TV or entertainment system control
- Thermostat or HVAC
- Smart plugs (unless Hue smart plugs)

## Common Commands

### List Resources

```bash
openhue get light   # List all lights
openhue get room    # List all rooms
openhue get scene   # List all scenes
```

### Control Lights

```bash
# Turn on/off
openhue set light "Bedroom Lamp" --on
openhue set light "Bedroom Lamp" --off

# Brightness (0-100)
openhue set light "Bedroom Lamp" --on --brightness 50

# Color temperature (warm to cool: 153-500 mirek)
openhue set light "Bedroom Lamp" --on --temperature 300

# Color (by name or hex)
openhue set light "Bedroom Lamp" --on --color red
openhue set light "Bedroom Lamp" --on --rgb "#FF5500"
```

### Control Rooms

```bash
# Turn off entire room
openhue set room "Bedroom" --off

# Set room brightness
openhue set room "Bedroom" --on --brightness 30
```

### Scenes

```bash
# Activate scene
openhue set scene "Relax" --room "Bedroom"
openhue set scene "Concentrate" --room "Office"
```

## Quick Presets

```bash
# Bedtime (dim warm)
openhue set room "Bedroom" --on --brightness 20 --temperature 450

# Work mode (bright cool)
openhue set room "Office" --on --brightness 100 --temperature 250

# Movie mode (dim)
openhue set room "Living Room" --on --brightness 10
```

## Notes

- Bridge must be on local network
- First run requires button press on Hue bridge to pair
- Colors only work on color-capable bulbs (not white-only)
125
openclaw/skills/oracle/SKILL.md
Normal file
@@ -0,0 +1,125 @@
---
name: oracle
description: Best practices for using the oracle CLI (prompt + file bundling, engines, sessions, and file attachment patterns).
homepage: https://askoracle.dev
metadata:
  {
    "openclaw":
      {
        "emoji": "🧿",
        "requires": { "bins": ["oracle"] },
        "install":
          [
            {
              "id": "node",
              "kind": "node",
              "package": "@steipete/oracle",
              "bins": ["oracle"],
              "label": "Install oracle (node)",
            },
          ],
      },
  }
---

# oracle — best use

Oracle bundles your prompt + selected files into one “one-shot” request so another model can answer with real repo context (API or browser automation). Treat output as advisory: verify against code + tests.

## Main use case (browser, GPT‑5.2 Pro)

Default workflow here: `--engine browser` with GPT‑5.2 Pro in ChatGPT. This is the common “long think” path: ~10 minutes to ~1 hour is normal; expect a stored session you can reattach to.

Recommended defaults:

- Engine: browser (`--engine browser`)
- Model: GPT‑5.2 Pro (`--model gpt-5.2-pro` or `--model "5.2 Pro"`)

## Golden path

1. Pick a tight file set (fewest files that still contain the truth).
2. Preview payload + token spend (`--dry-run` + `--files-report`).
3. Use browser mode for the usual GPT‑5.2 Pro workflow; use API only when you explicitly want it.
4. If the run detaches or times out: reattach to the stored session (don’t re-run).

## Commands (preferred)

- Help:
  - `oracle --help`
  - If the binary isn’t installed: `npx -y @steipete/oracle --help` (avoid `pnpx` here; sqlite bindings).

- Preview (no tokens):
  - `oracle --dry-run summary -p "<task>" --file "src/**" --file "!**/*.test.*"`
  - `oracle --dry-run full -p "<task>" --file "src/**"`

- Token sanity:
  - `oracle --dry-run summary --files-report -p "<task>" --file "src/**"`

- Browser run (main path; long-running is normal):
  - `oracle --engine browser --model gpt-5.2-pro -p "<task>" --file "src/**"`

- Manual paste fallback:
  - `oracle --render --copy -p "<task>" --file "src/**"`
  - Note: `--copy` is a hidden alias for `--copy-markdown`.

## Attaching files (`--file`)

`--file` accepts files, directories, and globs. You can pass it multiple times; entries can be comma-separated.

- Include:
  - `--file "src/**"`
  - `--file src/index.ts`
  - `--file docs --file README.md`

- Exclude:
  - `--file "src/**" --file "!src/**/*.test.ts" --file "!**/*.snap"`

- Defaults (implementation behavior):
  - Default-ignored dirs: `node_modules`, `dist`, `coverage`, `.git`, `.turbo`, `.next`, `build`, `tmp` (skipped unless explicitly passed as literal dirs/files).
  - Honors `.gitignore` when expanding globs.
  - Does not follow symlinks.
  - Dotfiles filtered unless opted in via pattern (e.g. `--file ".github/**"`).
  - Files > 1 MB rejected.

## Engines (API vs browser)

- Auto-pick: `api` when `OPENAI_API_KEY` is set; otherwise `browser`.
- Browser supports GPT + Gemini only; use `--engine api` for Claude/Grok/Codex or multi-model runs.
- Browser attachments:
  - `--browser-attachments auto|never|always` (auto pastes inline up to ~60k chars then uploads).
- Remote browser host:
  - Host: `oracle serve --host 0.0.0.0 --port 9473 --token <secret>`
  - Client: `oracle --engine browser --remote-host <host:port> --remote-token <secret> -p "<task>" --file "src/**"`

## Sessions + slugs

- Stored under `~/.oracle/sessions` (override with `ORACLE_HOME_DIR`).
- Runs may detach or take a long time (browser + GPT‑5.2 Pro often does). If the CLI times out: don’t re-run; reattach.
- List: `oracle status --hours 72`
- Attach: `oracle session <id> --render`
- Use `--slug "<3-5 words>"` to keep session IDs readable.
- Duplicate prompt guard exists; use `--force` only when you truly want a fresh run.

## Prompt template (high signal)

Oracle starts with **zero** project knowledge. Assume the model cannot infer your stack, build tooling, conventions, or “obvious” paths. Include:

- Project briefing (stack + build/test commands + platform constraints).
- “Where things live” (key directories, entrypoints, config files, boundaries).
- Exact question + what you tried + the error text (verbatim).
- Constraints (“don’t change X”, “must keep public API”, etc).
- Desired output (“return patch plan + tests”, “give 3 options with tradeoffs”).

## Safety

- Don’t attach secrets by default (`.env`, key files, auth tokens). Redact aggressively; share only what’s required.

## “Exhaustive prompt” restoration pattern

For long investigations, write a standalone prompt + file set so you can rerun days later:

- 6–30 sentence project briefing + the goal.
- Repro steps + exact errors + what you tried.
- Attach all context files needed (entrypoints, configs, key modules, docs).

Oracle runs are one-shot; the model doesn’t remember prior runs. “Restoring context” means re-running with the same prompt + `--file …` set (or reattaching a still-running stored session).
78
openclaw/skills/ordercli/SKILL.md
Normal file
@@ -0,0 +1,78 @@
---
name: ordercli
description: Foodora-only CLI for checking past orders and active order status (Deliveroo WIP).
homepage: https://ordercli.sh
metadata:
  {
    "openclaw":
      {
        "emoji": "🛵",
        "requires": { "bins": ["ordercli"] },
        "install":
          [
            {
              "id": "brew",
              "kind": "brew",
              "formula": "steipete/tap/ordercli",
              "bins": ["ordercli"],
              "label": "Install ordercli (brew)",
            },
            {
              "id": "go",
              "kind": "go",
              "module": "github.com/steipete/ordercli/cmd/ordercli@latest",
              "bins": ["ordercli"],
              "label": "Install ordercli (go)",
            },
          ],
      },
  }
---

# ordercli

Use `ordercli` to check past orders and track active order status (Foodora only right now).

Quick start (Foodora)

- `ordercli foodora countries`
- `ordercli foodora config set --country AT`
- `ordercli foodora login --email you@example.com --password-stdin`
- `ordercli foodora orders`
- `ordercli foodora history --limit 20`
- `ordercli foodora history show <orderCode>`

Orders

- Active list (arrival/status): `ordercli foodora orders`
- Watch: `ordercli foodora orders --watch`
- Active order detail: `ordercli foodora order <orderCode>`
- History detail JSON: `ordercli foodora history show <orderCode> --json`

Reorder (adds to cart)

- Preview: `ordercli foodora reorder <orderCode>`
- Confirm: `ordercli foodora reorder <orderCode> --confirm`
- Address: `ordercli foodora reorder <orderCode> --confirm --address-id <id>`

Cloudflare / bot protection

- Browser login: `ordercli foodora login --email you@example.com --password-stdin --browser`
- Reuse profile: `--browser-profile "$HOME/Library/Application Support/ordercli/browser-profile"`
- Import Chrome cookies: `ordercli foodora cookies chrome --profile "Default"`

Session import (no password)

- `ordercli foodora session chrome --url https://www.foodora.at/ --profile "Default"`
- `ordercli foodora session refresh --client-id android`

Deliveroo (WIP, not working yet)

- Requires `DELIVEROO_BEARER_TOKEN` (optional `DELIVEROO_COOKIE`).
- `ordercli deliveroo config set --market uk`
- `ordercli deliveroo history`

Notes

- Use `--config /tmp/ordercli.json` for testing.
- Confirm before any reorder or cart-changing action.
190
openclaw/skills/peekaboo/SKILL.md
Normal file
@@ -0,0 +1,190 @@
---
name: peekaboo
description: Capture and automate macOS UI with the Peekaboo CLI.
homepage: https://peekaboo.boo
metadata:
  {
    "openclaw":
      {
        "emoji": "👀",
        "os": ["darwin"],
        "requires": { "bins": ["peekaboo"] },
        "install":
          [
            {
              "id": "brew",
              "kind": "brew",
              "formula": "steipete/tap/peekaboo",
              "bins": ["peekaboo"],
              "label": "Install Peekaboo (brew)",
            },
          ],
      },
  }
---

# Peekaboo

Peekaboo is a full macOS UI automation CLI: capture/inspect screens, target UI elements, drive input, and manage apps/windows/menus. Commands share a snapshot cache and support `--json`/`-j` for scripting. Run `peekaboo` or `peekaboo <cmd> --help` for flags; `peekaboo --version` prints build metadata. Tip: run via `polter peekaboo` to ensure fresh builds.

## Features (all CLI capabilities, excluding agent/MCP)

Core

- `bridge`: inspect Peekaboo Bridge host connectivity
- `capture`: live capture or video ingest + frame extraction
- `clean`: prune snapshot cache and temp files
- `config`: init/show/edit/validate, providers, models, credentials
- `image`: capture screenshots (screen/window/menu bar regions)
- `learn`: print the full agent guide + tool catalog
- `list`: apps, windows, screens, menubar, permissions
- `permissions`: check Screen Recording/Accessibility status
- `run`: execute `.peekaboo.json` scripts
- `sleep`: pause execution for a duration
- `tools`: list available tools with filtering/display options

Interaction

- `click`: target by ID/query/coords with smart waits
- `drag`: drag & drop across elements/coords/Dock
- `hotkey`: modifier combos like `cmd,shift,t`
- `move`: cursor positioning with optional smoothing
- `paste`: set clipboard -> paste -> restore
- `press`: special-key sequences with repeats
- `scroll`: directional scrolling (targeted + smooth)
- `swipe`: gesture-style drags between targets
- `type`: text + control keys (`--clear`, delays)

System

- `app`: launch/quit/relaunch/hide/unhide/switch/list apps
- `clipboard`: read/write clipboard (text/images/files)
- `dialog`: click/input/file/dismiss/list system dialogs
- `dock`: launch/right-click/hide/show/list Dock items
- `menu`: click/list application menus + menu extras
- `menubar`: list/click status bar items
- `open`: enhanced `open` with app targeting + JSON payloads
- `space`: list/switch/move-window (Spaces)
- `visualizer`: exercise Peekaboo visual feedback animations
- `window`: close/minimize/maximize/move/resize/focus/list

Vision

- `see`: annotated UI maps, snapshot IDs, optional analysis

Global runtime flags

- `--json`/`-j`, `--verbose`/`-v`, `--log-level <level>`
- `--no-remote`, `--bridge-socket <path>`

## Quickstart (happy path)

```bash
peekaboo permissions
peekaboo list apps --json
peekaboo see --annotate --path /tmp/peekaboo-see.png
peekaboo click --on B1
peekaboo type "Hello" --return
```

## Common targeting parameters (most interaction commands)

- App/window: `--app`, `--pid`, `--window-title`, `--window-id`, `--window-index`
- Snapshot targeting: `--snapshot` (ID from `see`; defaults to latest)
- Element/coords: `--on`/`--id` (element ID), `--coords x,y`
- Focus control: `--no-auto-focus`, `--space-switch`, `--bring-to-current-space`, `--focus-timeout-seconds`, `--focus-retry-count`

## Common capture parameters

- Output: `--path`, `--format png|jpg`, `--retina`
- Targeting: `--mode screen|window|frontmost`, `--screen-index`, `--window-title`, `--window-id`
- Analysis: `--analyze "prompt"`, `--annotate`
- Capture engine: `--capture-engine auto|classic|cg|modern|sckit`

## Common motion/typing parameters

- Timing: `--duration` (drag/swipe), `--steps`, `--delay` (type/scroll/press)
- Human-ish movement: `--profile human|linear`, `--wpm` (typing)
- Scroll: `--direction up|down|left|right`, `--amount <ticks>`, `--smooth`

## Examples

### See -> click -> type (most reliable flow)

```bash
peekaboo see --app Safari --window-title "Login" --annotate --path /tmp/see.png
peekaboo click --on B3 --app Safari
peekaboo type "user@example.com" --app Safari
peekaboo press tab --count 1 --app Safari
peekaboo type "supersecret" --app Safari --return
```

### Target by window id

```bash
peekaboo list windows --app "Visual Studio Code" --json
peekaboo click --window-id 12345 --coords 120,160
peekaboo type "Hello from Peekaboo" --window-id 12345
```

### Capture screenshots + analyze

```bash
peekaboo image --mode screen --screen-index 0 --retina --path /tmp/screen.png
peekaboo image --app Safari --window-title "Dashboard" --analyze "Summarize KPIs"
peekaboo see --mode screen --screen-index 0 --analyze "Summarize the dashboard"
```

### Live capture (motion-aware)

```bash
peekaboo capture live --mode region --region 100,100,800,600 --duration 30 \
  --active-fps 8 --idle-fps 2 --highlight-changes --path /tmp/capture
```

### App + window management

```bash
peekaboo app launch "Safari" --open https://example.com
peekaboo window focus --app Safari --window-title "Example"
peekaboo window set-bounds --app Safari --x 50 --y 50 --width 1200 --height 800
peekaboo app quit --app Safari
```

### Menus, menubar, dock

```bash
peekaboo menu click --app Safari --item "New Window"
peekaboo menu click --app TextEdit --path "Format > Font > Show Fonts"
peekaboo menu click-extra --title "WiFi"
peekaboo dock launch Safari
peekaboo menubar list --json
```

### Mouse + gesture input

```bash
peekaboo move 500,300 --smooth
peekaboo drag --from B1 --to T2
peekaboo swipe --from-coords 100,500 --to-coords 100,200 --duration 800
peekaboo scroll --direction down --amount 6 --smooth
```

### Keyboard input

```bash
peekaboo hotkey --keys "cmd,shift,t"
peekaboo press escape
peekaboo type "Line 1\nLine 2" --delay 10
```

Notes

- Requires Screen Recording + Accessibility permissions.
- Use `peekaboo see --annotate` to identify targets before clicking.
87
openclaw/skills/sag/SKILL.md
Normal file
@@ -0,0 +1,87 @@
---
name: sag
description: ElevenLabs text-to-speech with mac-style say UX.
homepage: https://sag.sh
metadata:
  {
    "openclaw":
      {
        "emoji": "🗣️",
        "requires": { "bins": ["sag"], "env": ["ELEVENLABS_API_KEY"] },
        "primaryEnv": "ELEVENLABS_API_KEY",
        "install":
          [
            {
              "id": "brew",
              "kind": "brew",
              "formula": "steipete/tap/sag",
              "bins": ["sag"],
              "label": "Install sag (brew)",
            },
          ],
      },
  }
---

# sag

Use `sag` for ElevenLabs TTS with local playback.

API key (required)

- `ELEVENLABS_API_KEY` (preferred)
- `SAG_API_KEY` also supported by the CLI

Quick start

- `sag "Hello there"`
- `sag speak -v "Roger" "Hello"`
- `sag voices`
- `sag prompting` (model-specific tips)

Model notes

- Default: `eleven_v3` (expressive)
- Stable: `eleven_multilingual_v2`
- Fast: `eleven_flash_v2_5`

Pronunciation + delivery rules

- First fix: respell (e.g. "key-note"), add hyphens, adjust casing.
- Numbers/units/URLs: `--normalize auto` (or `off` if it harms names).
- Language bias: `--lang en|de|fr|...` to guide normalization.
- v3: SSML `<break>` not supported; use `[pause]`, `[short pause]`, `[long pause]`.
- v2/v2.5: SSML `<break time="1.5s" />` supported; `<phoneme>` not exposed in `sag`.

v3 audio tags (put at the start of a line)

- `[whispers]`, `[shouts]`, `[sings]`
- `[laughs]`, `[starts laughing]`, `[sighs]`, `[exhales]`
- `[sarcastic]`, `[curious]`, `[excited]`, `[crying]`, `[mischievously]`
- Example: `sag "[whispers] keep this quiet. [short pause] ok?"`

Voice defaults

- `ELEVENLABS_VOICE_ID` or `SAG_VOICE_ID`

Confirm voice + speaker before long output.

## Chat voice responses

When Peter asks for a "voice" reply (e.g., "crazy scientist voice", "explain in voice"), generate audio and send it:

```bash
# Generate audio file
sag -v Clawd -o /tmp/voice-reply.mp3 "Your message here"

# Then include in reply:
# MEDIA:/tmp/voice-reply.mp3
```

Voice character tips:

- Crazy scientist: Use `[excited]` tags, dramatic pauses `[short pause]`, vary intensity
- Calm: Use `[whispers]` or slower pacing
- Dramatic: Use `[sings]` or `[shouts]` sparingly

Default voice for Clawd: `lj2rcrvANS3gaWWnczSX` (or just `-v Clawd`)
115
openclaw/skills/session-logs/SKILL.md
Normal file
@@ -0,0 +1,115 @@
---
name: session-logs
description: Search and analyze your own session logs (older/parent conversations) using jq.
metadata: { "openclaw": { "emoji": "📜", "requires": { "bins": ["jq", "rg"] } } }
---

# session-logs

Search your complete conversation history stored in session JSONL files. Use this when a user references older/parent conversations or asks what was said before.

## Trigger

Use this skill when the user asks about prior chats, parent conversations, or historical context that isn't in memory files.

## Location

Session logs live at: `~/.openclaw/agents/<agentId>/sessions/` (use the `agent=<id>` value from the system prompt Runtime line).

- **`sessions.json`** - Index mapping session keys to session IDs
- **`<session-id>.jsonl`** - Full conversation transcript per session

## Structure

Each `.jsonl` file contains messages with:

- `type`: "session" (metadata) or "message"
- `timestamp`: ISO timestamp
- `message.role`: "user", "assistant", or "toolResult"
- `message.content[]`: Text, thinking, or tool calls (filter `type=="text"` for human-readable content)
- `message.usage.cost.total`: Cost per response
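Given that structure, pulling readable text out of a transcript takes only a few lines in any JSON-capable language. A Python sketch (field names as described above; the function name is illustrative):

```python
import json

def session_texts(jsonl_text: str, role: str = "user") -> list[str]:
    """Return the plain-text parts of messages with the given role."""
    texts = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        obj = json.loads(line)
        if obj.get("type") != "message":
            continue  # skip the "session" metadata record
        msg = obj.get("message", {})
        if msg.get("role") != role:
            continue
        for part in msg.get("content", []):
            # Keep only human-readable text; drop thinking/tool-call parts.
            if part.get("type") == "text":
                texts.append(part["text"])
    return texts
```

This mirrors the jq filters below, for environments where jq isn't installed.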
## Common Queries

### List all sessions by date and size

```bash
for f in ~/.openclaw/agents/<agentId>/sessions/*.jsonl; do
  date=$(head -1 "$f" | jq -r '.timestamp' | cut -dT -f1)
  size=$(ls -lh "$f" | awk '{print $5}')
  echo "$date $size $(basename $f)"
done | sort -r
```

### Find sessions from a specific day

```bash
for f in ~/.openclaw/agents/<agentId>/sessions/*.jsonl; do
  head -1 "$f" | jq -r '.timestamp' | grep -q "2026-01-06" && echo "$f"
done
```

### Extract user messages from a session

```bash
jq -r 'select(.message.role == "user") | .message.content[]? | select(.type == "text") | .text' <session>.jsonl
```

### Search for keyword in assistant responses

```bash
jq -r 'select(.message.role == "assistant") | .message.content[]? | select(.type == "text") | .text' <session>.jsonl | rg -i "keyword"
```

### Get total cost for a session

```bash
jq -s '[.[] | .message.usage.cost.total // 0] | add' <session>.jsonl
```

### Daily cost summary

```bash
for f in ~/.openclaw/agents/<agentId>/sessions/*.jsonl; do
  date=$(head -1 "$f" | jq -r '.timestamp' | cut -dT -f1)
  cost=$(jq -s '[.[] | .message.usage.cost.total // 0] | add' "$f")
  echo "$date $cost"
done | awk '{a[$1]+=$2} END {for(d in a) print d, "$"a[d]}' | sort -r
```

### Count messages and tokens in a session

```bash
jq -s '{
  messages: length,
  user: [.[] | select(.message.role == "user")] | length,
  assistant: [.[] | select(.message.role == "assistant")] | length,
  first: .[0].timestamp,
  last: .[-1].timestamp
}' <session>.jsonl
```

### Tool usage breakdown

```bash
jq -r '.message.content[]? | select(.type == "toolCall") | .name' <session>.jsonl | sort | uniq -c | sort -rn
```

### Search across ALL sessions for a phrase

```bash
rg -l "phrase" ~/.openclaw/agents/<agentId>/sessions/*.jsonl
```

## Tips

- Sessions are append-only JSONL (one JSON object per line)
- Large sessions can be several MB - use `head`/`tail` for sampling
- The `sessions.json` index maps chat providers (discord, whatsapp, etc.) to session IDs
- Deleted sessions have `.deleted.<timestamp>` suffix

## Fast text-only hint (low noise)

```bash
jq -r 'select(.type=="message") | .message.content[]? | select(.type=="text") | .text' ~/.openclaw/agents/<agentId>/sessions/<id>.jsonl | rg 'keyword'
```
103
openclaw/skills/sherpa-onnx-tts/SKILL.md
Normal file
@@ -0,0 +1,103 @@
---
name: sherpa-onnx-tts
description: Local text-to-speech via sherpa-onnx (offline, no cloud)
metadata:
  {
    "openclaw":
      {
        "emoji": "🗣️",
        "os": ["darwin", "linux", "win32"],
        "requires": { "env": ["SHERPA_ONNX_RUNTIME_DIR", "SHERPA_ONNX_MODEL_DIR"] },
        "install":
          [
            {
              "id": "download-runtime-macos",
              "kind": "download",
              "os": ["darwin"],
              "url": "https://github.com/k2-fsa/sherpa-onnx/releases/download/v1.12.23/sherpa-onnx-v1.12.23-osx-universal2-shared.tar.bz2",
              "archive": "tar.bz2",
              "extract": true,
              "stripComponents": 1,
              "targetDir": "runtime",
              "label": "Download sherpa-onnx runtime (macOS)",
            },
            {
              "id": "download-runtime-linux-x64",
              "kind": "download",
              "os": ["linux"],
              "url": "https://github.com/k2-fsa/sherpa-onnx/releases/download/v1.12.23/sherpa-onnx-v1.12.23-linux-x64-shared.tar.bz2",
              "archive": "tar.bz2",
              "extract": true,
              "stripComponents": 1,
              "targetDir": "runtime",
              "label": "Download sherpa-onnx runtime (Linux x64)",
            },
            {
              "id": "download-runtime-win-x64",
              "kind": "download",
              "os": ["win32"],
              "url": "https://github.com/k2-fsa/sherpa-onnx/releases/download/v1.12.23/sherpa-onnx-v1.12.23-win-x64-shared.tar.bz2",
              "archive": "tar.bz2",
              "extract": true,
              "stripComponents": 1,
              "targetDir": "runtime",
              "label": "Download sherpa-onnx runtime (Windows x64)",
            },
            {
              "id": "download-model-lessac",
              "kind": "download",
              "url": "https://github.com/k2-fsa/sherpa-onnx/releases/download/tts-models/vits-piper-en_US-lessac-high.tar.bz2",
              "archive": "tar.bz2",
              "extract": true,
              "targetDir": "models",
              "label": "Download Piper en_US lessac (high)",
            },
          ],
      },
  }
---

# sherpa-onnx-tts

Local TTS using the sherpa-onnx offline CLI.

## Install

1. Download the runtime for your OS (extracts into `~/.openclaw/tools/sherpa-onnx-tts/runtime`).
2. Download a voice model (extracts into `~/.openclaw/tools/sherpa-onnx-tts/models`).

Update `~/.openclaw/openclaw.json`:

```json5
{
  skills: {
    entries: {
      "sherpa-onnx-tts": {
        env: {
          SHERPA_ONNX_RUNTIME_DIR: "~/.openclaw/tools/sherpa-onnx-tts/runtime",
          SHERPA_ONNX_MODEL_DIR: "~/.openclaw/tools/sherpa-onnx-tts/models/vits-piper-en_US-lessac-high",
        },
      },
    },
  },
}
```

The wrapper lives in this skill folder. Run it directly, or add the wrapper to PATH:

```bash
export PATH="{baseDir}/bin:$PATH"
```

## Usage

```bash
{baseDir}/bin/sherpa-onnx-tts -o ./tts.wav "Hello from local TTS."
```

Notes:

- Pick a different model from the sherpa-onnx `tts-models` release if you want another voice.
- If the model dir has multiple `.onnx` files, set `SHERPA_ONNX_MODEL_FILE` or pass `--model-file`.
- You can also pass `--tokens-file` or `--data-dir` to override the defaults.
- Windows: run `node {baseDir}\\bin\\sherpa-onnx-tts -o tts.wav "Hello from local TTS."`
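The wrapper's main job is assembling the offline CLI invocation from those two environment variables. A minimal Python sketch of that assembly logic (the binary name, flag names, and default model file name below are assumptions for illustration, not taken from the actual wrapper):

```python
import os
from pathlib import Path


def build_tts_command(text, out_path):
    """Assemble a sherpa-onnx offline TTS command from the skill's env vars."""
    runtime = Path(os.environ.get(
        "SHERPA_ONNX_RUNTIME_DIR",
        "~/.openclaw/tools/sherpa-onnx-tts/runtime")).expanduser()
    model_dir = Path(os.environ.get(
        "SHERPA_ONNX_MODEL_DIR",
        "~/.openclaw/tools/sherpa-onnx-tts/models/vits-piper-en_US-lessac-high")).expanduser()
    # Honor an explicit override when the model dir holds several .onnx files;
    # "model.onnx" is an illustrative default, not a guaranteed file name.
    model = os.environ.get("SHERPA_ONNX_MODEL_FILE", str(model_dir / "model.onnx"))
    return [
        str(runtime / "bin" / "sherpa-onnx-offline-tts"),  # assumed binary location
        f"--vits-model={model}",
        f"--vits-tokens={model_dir / 'tokens.txt'}",
        f"--output-filename={out_path}",
        text,
    ]
```

Passing the resulting list to `subprocess.run` (with the runtime's `lib/` directory on the loader path) is then all the wrapper needs to do.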
372
openclaw/skills/skill-creator/SKILL.md
Normal file
@@ -0,0 +1,372 @@
---
name: skill-creator
description: Create or update AgentSkills. Use when designing, structuring, or packaging skills with scripts, references, and assets.
---

# Skill Creator

This skill provides guidance for creating effective skills.

## About Skills

Skills are modular, self-contained packages that extend Codex's capabilities by providing specialized knowledge, workflows, and tools. Think of them as "onboarding guides" for specific domains or tasks: they transform Codex from a general-purpose agent into a specialized agent equipped with procedural knowledge that no model can fully possess.

### What Skills Provide

1. Specialized workflows - Multi-step procedures for specific domains
2. Tool integrations - Instructions for working with specific file formats or APIs
3. Domain expertise - Company-specific knowledge, schemas, business logic
4. Bundled resources - Scripts, references, and assets for complex and repetitive tasks

## Core Principles

### Concise is Key

The context window is a public good. Skills share the context window with everything else Codex needs: system prompt, conversation history, other Skills' metadata, and the actual user request.

**Default assumption: Codex is already very smart.** Only add context Codex doesn't already have. Challenge each piece of information: "Does Codex really need this explanation?" and "Does this paragraph justify its token cost?"

Prefer concise examples over verbose explanations.

### Set Appropriate Degrees of Freedom

Match the level of specificity to the task's fragility and variability:

**High freedom (text-based instructions)**: Use when multiple approaches are valid, decisions depend on context, or heuristics guide the approach.

**Medium freedom (pseudocode or scripts with parameters)**: Use when a preferred pattern exists, some variation is acceptable, or configuration affects behavior.

**Low freedom (specific scripts, few parameters)**: Use when operations are fragile and error-prone, consistency is critical, or a specific sequence must be followed.

Think of Codex as exploring a path: a narrow bridge with cliffs needs specific guardrails (low freedom), while an open field allows many routes (high freedom).

### Anatomy of a Skill

Every skill consists of a required SKILL.md file and optional bundled resources:

```
skill-name/
├── SKILL.md (required)
│   ├── YAML frontmatter metadata (required)
│   │   ├── name: (required)
│   │   └── description: (required)
│   └── Markdown instructions (required)
└── Bundled Resources (optional)
    ├── scripts/ - Executable code (Python/Bash/etc.)
    ├── references/ - Documentation intended to be loaded into context as needed
    └── assets/ - Files used in output (templates, icons, fonts, etc.)
```
#### SKILL.md (required)

Every SKILL.md consists of:

- **Frontmatter** (YAML): Contains `name` and `description` fields. These are the only fields Codex reads to decide when the skill gets used, so it is very important to describe clearly what the skill is and when it should be used.
- **Body** (Markdown): Instructions and guidance for using the skill. Only loaded AFTER the skill triggers (if at all).

#### Bundled Resources (optional)

##### Scripts (`scripts/`)

Executable code (Python/Bash/etc.) for tasks that require deterministic reliability or are repeatedly rewritten.

- **When to include**: When the same code is being rewritten repeatedly or deterministic reliability is needed
- **Example**: `scripts/rotate_pdf.py` for PDF rotation tasks
- **Benefits**: Token efficient, deterministic, may be executed without loading into context
- **Note**: Scripts may still need to be read by Codex for patching or environment-specific adjustments

##### References (`references/`)

Documentation and reference material intended to be loaded as needed into context to inform Codex's process and thinking.

- **When to include**: For documentation that Codex should reference while working
- **Examples**: `references/finance.md` for financial schemas, `references/mnda.md` for a company NDA template, `references/policies.md` for company policies, `references/api_docs.md` for API specifications
- **Use cases**: Database schemas, API documentation, domain knowledge, company policies, detailed workflow guides
- **Benefits**: Keeps SKILL.md lean, loaded only when Codex determines it's needed
- **Best practice**: If files are large (>10k words), include grep search patterns in SKILL.md
- **Avoid duplication**: Information should live in either SKILL.md or references files, not both. Prefer references files for detailed information unless it's truly core to the skill; this keeps SKILL.md lean while making information discoverable without hogging the context window. Keep only essential procedural instructions and workflow guidance in SKILL.md; move detailed reference material, schemas, and examples to references files.

##### Assets (`assets/`)

Files not intended to be loaded into context, but rather used within the output Codex produces.

- **When to include**: When the skill needs files that will be used in the final output
- **Examples**: `assets/logo.png` for brand assets, `assets/slides.pptx` for PowerPoint templates, `assets/frontend-template/` for HTML/React boilerplate, `assets/font.ttf` for typography
- **Use cases**: Templates, images, icons, boilerplate code, fonts, sample documents that get copied or modified
- **Benefits**: Separates output resources from documentation, enables Codex to use files without loading them into context

#### What Not to Include in a Skill

A skill should only contain essential files that directly support its functionality. Do NOT create extraneous documentation or auxiliary files, including:

- README.md
- INSTALLATION_GUIDE.md
- QUICK_REFERENCE.md
- CHANGELOG.md
- etc.

The skill should only contain the information needed for an AI agent to do the job at hand. It should not contain auxiliary context about the process that went into creating it, setup and testing procedures, user-facing documentation, etc. Creating additional documentation files just adds clutter and confusion.

### Progressive Disclosure Design Principle

Skills use a three-level loading system to manage context efficiently:

1. **Metadata (name + description)** - Always in context (~100 words)
2. **SKILL.md body** - Loaded when the skill triggers (<5k words)
3. **Bundled resources** - Loaded as needed by Codex (effectively unlimited, because scripts can be executed without being read into the context window)

#### Progressive Disclosure Patterns

Keep the SKILL.md body to the essentials and under 500 lines to minimize context bloat. Split content into separate files when approaching this limit. When splitting content into other files, reference them from SKILL.md and describe clearly when to read them, so the reader of the skill knows they exist and when to use them.

**Key principle:** When a skill supports multiple variations, frameworks, or options, keep only the core workflow and selection guidance in SKILL.md. Move variant-specific details (patterns, examples, configuration) into separate reference files.

**Pattern 1: High-level guide with references**

```markdown
# PDF Processing

## Quick start

Extract text with pdfplumber:
[code example]

## Advanced features

- **Form filling**: See [FORMS.md](FORMS.md) for complete guide
- **API reference**: See [REFERENCE.md](REFERENCE.md) for all methods
- **Examples**: See [EXAMPLES.md](EXAMPLES.md) for common patterns
```
Codex loads FORMS.md, REFERENCE.md, or EXAMPLES.md only when needed.

**Pattern 2: Domain-specific organization**

For Skills with multiple domains, organize content by domain to avoid loading irrelevant context:

```
bigquery-skill/
├── SKILL.md (overview and navigation)
└── reference/
    ├── finance.md (revenue, billing metrics)
    ├── sales.md (opportunities, pipeline)
    ├── product.md (API usage, features)
    └── marketing.md (campaigns, attribution)
```

When a user asks about sales metrics, Codex only reads sales.md.

Similarly, for skills supporting multiple frameworks or variants, organize by variant:

```
cloud-deploy/
├── SKILL.md (workflow + provider selection)
└── references/
    ├── aws.md (AWS deployment patterns)
    ├── gcp.md (GCP deployment patterns)
    └── azure.md (Azure deployment patterns)
```

When the user chooses AWS, Codex only reads aws.md.
**Pattern 3: Conditional details**

Show basic content, link to advanced content:

```markdown
# DOCX Processing

## Creating documents

Use docx-js for new documents. See [DOCX-JS.md](DOCX-JS.md).

## Editing documents

For simple edits, modify the XML directly.

**For tracked changes**: See [REDLINING.md](REDLINING.md)
**For OOXML details**: See [OOXML.md](OOXML.md)
```

Codex reads REDLINING.md or OOXML.md only when the user needs those features.

**Important guidelines:**

- **Avoid deeply nested references** - Keep references one level deep from SKILL.md. All reference files should link directly from SKILL.md.
- **Structure longer reference files** - For files longer than 100 lines, include a table of contents at the top so Codex can see the full scope when previewing.
## Skill Creation Process

Skill creation involves these steps:

1. Understand the skill with concrete examples
2. Plan reusable skill contents (scripts, references, assets)
3. Initialize the skill (run init_skill.py)
4. Edit the skill (implement resources and write SKILL.md)
5. Package the skill (run package_skill.py)
6. Iterate based on real usage

Follow these steps in order, skipping one only when there is a clear reason it does not apply.

### Skill Naming

- Use lowercase letters, digits, and hyphens only; normalize user-provided titles to hyphen-case (e.g., "Plan Mode" -> `plan-mode`).
- Keep generated names under 64 characters (letters, digits, hyphens).
- Prefer short, verb-led phrases that describe the action.
- Namespace by tool when it improves clarity or triggering (e.g., `gh-address-comments`, `linear-address-issue`).
- Name the skill folder exactly after the skill name.
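The normalization rule above can be sketched as a small helper (hypothetical, shown for illustration only):

```python
import re


def normalize_skill_name(title):
    """Normalize a user-provided title to hyphen-case, capped at 64 characters."""
    name = title.lower()
    # Collapse every run of characters outside [a-z0-9] into a single hyphen.
    name = re.sub(r"[^a-z0-9]+", "-", name)
    name = name.strip("-")
    return name[:64].rstrip("-")


normalize_skill_name("Plan Mode")              # -> "plan-mode"
normalize_skill_name("GH: Address Comments!")  # -> "gh-address-comments"
```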
### Step 1: Understanding the Skill with Concrete Examples

Skip this step only when the skill's usage patterns are already clearly understood. It remains valuable even when working with an existing skill.

To create an effective skill, clearly understand concrete examples of how the skill will be used. This understanding can come from either direct user examples or generated examples that are validated with user feedback.

For example, when building an image-editor skill, relevant questions include:

- "What functionality should the image-editor skill support? Editing, rotating, anything else?"
- "Can you give some examples of how this skill would be used?"
- "I can imagine users asking for things like 'Remove the red-eye from this image' or 'Rotate this image'. Are there other ways you imagine this skill being used?"
- "What would a user say that should trigger this skill?"

To avoid overwhelming users, don't ask too many questions in a single message. Start with the most important questions and follow up as needed.

Conclude this step when there is a clear sense of the functionality the skill should support.

### Step 2: Planning the Reusable Skill Contents

To turn concrete examples into an effective skill, analyze each example by:

1. Considering how to execute the example from scratch
2. Identifying what scripts, references, and assets would be helpful when executing these workflows repeatedly

Example: When building a `pdf-editor` skill to handle queries like "Help me rotate this PDF," the analysis shows:

1. Rotating a PDF requires re-writing the same code each time
2. A `scripts/rotate_pdf.py` script would be helpful to store in the skill

Example: When designing a `frontend-webapp-builder` skill for queries like "Build me a todo app" or "Build me a dashboard to track my steps," the analysis shows:

1. Writing a frontend webapp requires the same boilerplate HTML/React each time
2. An `assets/hello-world/` template containing the boilerplate HTML/React project files would be helpful to store in the skill

Example: When building a `big-query` skill to handle queries like "How many users have logged in today?" the analysis shows:

1. Querying BigQuery requires re-discovering the table schemas and relationships each time
2. A `references/schema.md` file documenting the table schemas would be helpful to store in the skill

To establish the skill's contents, analyze each concrete example to create a list of the reusable resources to include: scripts, references, and assets.

### Step 3: Initializing the Skill

At this point, it is time to actually create the skill.

Skip this step only if the skill being developed already exists and needs iteration or packaging; in that case, continue to the next step.

When creating a new skill from scratch, always run the `init_skill.py` script. It generates a new template skill directory that automatically includes everything a skill requires, making skill creation faster and more reliable.

Usage:

```bash
scripts/init_skill.py <skill-name> --path <output-directory> [--resources scripts,references,assets] [--examples]
```

Examples:

```bash
scripts/init_skill.py my-skill --path skills/public
scripts/init_skill.py my-skill --path skills/public --resources scripts,references
scripts/init_skill.py my-skill --path skills/public --resources scripts --examples
```

The script:

- Creates the skill directory at the specified path
- Generates a SKILL.md template with proper frontmatter and TODO placeholders
- Optionally creates resource directories based on `--resources`
- Optionally adds example files when `--examples` is set

After initialization, customize the SKILL.md and add resources as needed. If you used `--examples`, replace or delete the placeholder files.
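The initializer's behavior can be approximated in a few lines. This is a rough sketch of what a script like `init_skill.py` does (the actual script's template text and option handling may differ):

```python
from pathlib import Path

# Illustrative template; the real script's placeholders may differ.
SKILL_TEMPLATE = """---
name: {name}
description: TODO - what the skill does and when to use it
---

# {name}

TODO: instructions
"""


def init_skill(name, path, resources=()):
    """Create a skill directory with a SKILL.md template and optional resource dirs."""
    skill_dir = Path(path) / name
    skill_dir.mkdir(parents=True, exist_ok=False)  # fail rather than clobber
    (skill_dir / "SKILL.md").write_text(SKILL_TEMPLATE.format(name=name))
    for res in resources:  # e.g. ("scripts", "references", "assets")
        (skill_dir / res).mkdir()
    return skill_dir
```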
### Step 4: Edit the Skill

When editing the (newly generated or existing) skill, remember that the skill is being created for another instance of Codex to use. Include information that would be beneficial and non-obvious to Codex. Consider what procedural knowledge, domain-specific details, or reusable assets would help another Codex instance execute these tasks more effectively.

#### Learn Proven Design Patterns

Consult these guides based on the skill's needs:

- **Multi-step processes**: See references/workflows.md for sequential workflows and conditional logic
- **Specific output formats or quality standards**: See references/output-patterns.md for template and example patterns

These files contain established best practices for effective skill design.

#### Start with Reusable Skill Contents

To begin implementation, start with the reusable resources identified above: `scripts/`, `references/`, and `assets/` files. Note that this step may require user input. For example, when implementing a `brand-guidelines` skill, the user may need to provide brand assets or templates to store in `assets/`, or documentation to store in `references/`.

Test added scripts by actually running them to ensure there are no bugs and the output matches expectations. If there are many similar scripts, testing a representative sample is enough to build confidence that they all work while balancing time to completion.

If you used `--examples`, delete any placeholder files that are not needed for the skill. Only create resource directories that are actually required.

#### Update SKILL.md

**Writing Guidelines:** Always use imperative/infinitive form.

##### Frontmatter

Write the YAML frontmatter with `name` and `description`:

- `name`: The skill name
- `description`: The primary triggering mechanism for the skill; it helps Codex understand when to use it.
  - Include both what the skill does and specific triggers/contexts for when to use it.
  - Include all "when to use" information here, not in the body. The body is only loaded after triggering, so "When to Use This Skill" sections in the body are not helpful to Codex.
  - Example description for a `docx` skill: "Comprehensive document creation, editing, and analysis with support for tracked changes, comments, formatting preservation, and text extraction. Use when Codex needs to work with professional documents (.docx files) for: (1) Creating new documents, (2) Modifying or editing content, (3) Working with tracked changes, (4) Adding comments, or any other document tasks"

Do not include any other fields in the YAML frontmatter.

##### Body

Write instructions for using the skill and its bundled resources.

### Step 5: Packaging a Skill

Once development is complete, package the skill into a distributable .skill file to share with the user. The packaging process automatically validates the skill first to ensure it meets all requirements:

```bash
scripts/package_skill.py <path/to/skill-folder>
```

Optionally specify an output directory:

```bash
scripts/package_skill.py <path/to/skill-folder> ./dist
```

The packaging script will:

1. **Validate** the skill automatically, checking:
   - YAML frontmatter format and required fields
   - Skill naming conventions and directory structure
   - Description completeness and quality
   - File organization and resource references

2. **Package** the skill if validation passes, creating a .skill file named after the skill (e.g., `my-skill.skill`) that includes all files and maintains the proper directory structure for distribution. The .skill file is a zip file with a .skill extension.
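A minimal sketch of the kind of frontmatter check the validator performs. The field rules here are inferred from this guide, not taken from the actual `package_skill.py`:

```python
import re


def check_frontmatter(skill_md):
    """Return a list of validation errors for a SKILL.md's YAML frontmatter."""
    errors = []
    m = re.match(r"^---\n(.*?)\n---\n", skill_md, re.DOTALL)
    if not m:
        return ["missing YAML frontmatter block"]
    # Naive top-level "key: value" parse; enough for flat name/description fields.
    fields = dict(
        line.split(":", 1)
        for line in m.group(1).splitlines()
        if ":" in line and not line.startswith(" ")
    )
    for required in ("name", "description"):
        if not fields.get(required, "").strip():
            errors.append(f"missing required field: {required}")
    name = fields.get("name", "").strip()
    if name and not re.fullmatch(r"[a-z0-9-]{1,64}", name):
        errors.append("name must be lowercase letters, digits, hyphens (max 64)")
    return errors
```

An empty list means the frontmatter passes this (simplified) check; the real script also inspects directory structure and resource references.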
Security restriction: symlinks are rejected, and packaging fails when any symlink is present.

If validation fails, the script reports the errors and exits without creating a package. Fix any validation errors and run the packaging command again.

### Step 6: Iterate

After testing the skill, users may request improvements. Often this happens right after using the skill, while the context of how the skill performed is fresh.

**Iteration workflow:**

1. Use the skill on real tasks
2. Notice struggles or inefficiencies
3. Identify how SKILL.md or bundled resources should be updated
4. Implement changes and test again
202
openclaw/skills/skill-creator/license.txt
Normal file
@@ -0,0 +1,202 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "[]"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright [yyyy] [name of copyright owner]

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
||||
See the License for the specific language governing permissions and
|
||||
limitations under the License.
|
||||
378
openclaw/skills/skill-creator/scripts/init_skill.py
Normal file
@@ -0,0 +1,378 @@
#!/usr/bin/env python3
"""
Skill Initializer - Creates a new skill from template

Usage:
    init_skill.py <skill-name> --path <path> [--resources scripts,references,assets] [--examples]

Examples:
    init_skill.py my-new-skill --path skills/public
    init_skill.py my-new-skill --path skills/public --resources scripts,references
    init_skill.py my-api-helper --path skills/private --resources scripts --examples
    init_skill.py custom-skill --path /custom/location
"""

import argparse
import re
import sys
from pathlib import Path

MAX_SKILL_NAME_LENGTH = 64
ALLOWED_RESOURCES = {"scripts", "references", "assets"}

SKILL_TEMPLATE = """---
name: {skill_name}
description: [TODO: Complete and informative explanation of what the skill does and when to use it. Include WHEN to use this skill - specific scenarios, file types, or tasks that trigger it.]
---

# {skill_title}

## Overview

[TODO: 1-2 sentences explaining what this skill enables]

## Structuring This Skill

[TODO: Choose the structure that best fits this skill's purpose. Common patterns:

**1. Workflow-Based** (best for sequential processes)
- Works well when there are clear step-by-step procedures
- Example: DOCX skill with "Workflow Decision Tree" -> "Reading" -> "Creating" -> "Editing"
- Structure: ## Overview -> ## Workflow Decision Tree -> ## Step 1 -> ## Step 2...

**2. Task-Based** (best for tool collections)
- Works well when the skill offers different operations/capabilities
- Example: PDF skill with "Quick Start" -> "Merge PDFs" -> "Split PDFs" -> "Extract Text"
- Structure: ## Overview -> ## Quick Start -> ## Task Category 1 -> ## Task Category 2...

**3. Reference/Guidelines** (best for standards or specifications)
- Works well for brand guidelines, coding standards, or requirements
- Example: Brand styling with "Brand Guidelines" -> "Colors" -> "Typography" -> "Features"
- Structure: ## Overview -> ## Guidelines -> ## Specifications -> ## Usage...

**4. Capabilities-Based** (best for integrated systems)
- Works well when the skill provides multiple interrelated features
- Example: Product Management with "Core Capabilities" -> numbered capability list
- Structure: ## Overview -> ## Core Capabilities -> ### 1. Feature -> ### 2. Feature...

Patterns can be mixed and matched as needed. Most skills combine patterns (e.g., start with task-based, add workflow for complex operations).

Delete this entire "Structuring This Skill" section when done - it's just guidance.]

## [TODO: Replace with the first main section based on chosen structure]

[TODO: Add content here. See examples in existing skills:
- Code samples for technical skills
- Decision trees for complex workflows
- Concrete examples with realistic user requests
- References to scripts/templates/references as needed]

## Resources (optional)

Create only the resource directories this skill actually needs. Delete this section if no resources are required.

### scripts/
Executable code (Python/Bash/etc.) that can be run directly to perform specific operations.

**Examples from other skills:**
- PDF skill: `fill_fillable_fields.py`, `extract_form_field_info.py` - utilities for PDF manipulation
- DOCX skill: `document.py`, `utilities.py` - Python modules for document processing

**Appropriate for:** Python scripts, shell scripts, or any executable code that performs automation, data processing, or specific operations.

**Note:** Scripts may be executed without loading into context, but can still be read by Codex for patching or environment adjustments.

### references/
Documentation and reference material intended to be loaded into context to inform Codex's process and thinking.

**Examples from other skills:**
- Product management: `communication.md`, `context_building.md` - detailed workflow guides
- BigQuery: API reference documentation and query examples
- Finance: Schema documentation, company policies

**Appropriate for:** In-depth documentation, API references, database schemas, comprehensive guides, or any detailed information that Codex should reference while working.

### assets/
Files not intended to be loaded into context, but rather used within the output Codex produces.

**Examples from other skills:**
- Brand styling: PowerPoint template files (.pptx), logo files
- Frontend builder: HTML/React boilerplate project directories
- Typography: Font files (.ttf, .woff2)

**Appropriate for:** Templates, boilerplate code, document templates, images, icons, fonts, or any files meant to be copied or used in the final output.

---

**Not every skill requires all three types of resources.**
"""

EXAMPLE_SCRIPT = '''#!/usr/bin/env python3
"""
Example helper script for {skill_name}

This is a placeholder script that can be executed directly.
Replace with actual implementation or delete if not needed.

Example real scripts from other skills:
- pdf/scripts/fill_fillable_fields.py - Fills PDF form fields
- pdf/scripts/convert_pdf_to_images.py - Converts PDF pages to images
"""

def main():
    print("This is an example script for {skill_name}")
    # TODO: Add actual script logic here
    # This could be data processing, file conversion, API calls, etc.

if __name__ == "__main__":
    main()
'''

EXAMPLE_REFERENCE = """# Reference Documentation for {skill_title}

This is a placeholder for detailed reference documentation.
Replace with actual reference content or delete if not needed.

Example real reference docs from other skills:
- product-management/references/communication.md - Comprehensive guide for status updates
- product-management/references/context_building.md - Deep-dive on gathering context
- bigquery/references/ - API references and query examples

## When Reference Docs Are Useful

Reference docs are ideal for:
- Comprehensive API documentation
- Detailed workflow guides
- Complex multi-step processes
- Information too lengthy for main SKILL.md
- Content that's only needed for specific use cases

## Structure Suggestions

### API Reference Example
- Overview
- Authentication
- Endpoints with examples
- Error codes
- Rate limits

### Workflow Guide Example
- Prerequisites
- Step-by-step instructions
- Common patterns
- Troubleshooting
- Best practices
"""

EXAMPLE_ASSET = """# Example Asset File

This placeholder represents where asset files would be stored.
Replace with actual asset files (templates, images, fonts, etc.) or delete if not needed.

Asset files are NOT intended to be loaded into context, but rather used within
the output Codex produces.

Example asset files from other skills:
- Brand guidelines: logo.png, slides_template.pptx
- Frontend builder: hello-world/ directory with HTML/React boilerplate
- Typography: custom-font.ttf, font-family.woff2
- Data: sample_data.csv, test_dataset.json

## Common Asset Types

- Templates: .pptx, .docx, boilerplate directories
- Images: .png, .jpg, .svg, .gif
- Fonts: .ttf, .otf, .woff, .woff2
- Boilerplate code: Project directories, starter files
- Icons: .ico, .svg
- Data files: .csv, .json, .xml, .yaml

Note: This is a text placeholder. Actual assets can be any file type.
"""


def normalize_skill_name(skill_name):
    """Normalize a skill name to lowercase hyphen-case."""
    normalized = skill_name.strip().lower()
    normalized = re.sub(r"[^a-z0-9]+", "-", normalized)
    normalized = normalized.strip("-")
    normalized = re.sub(r"-{2,}", "-", normalized)
    return normalized


def title_case_skill_name(skill_name):
    """Convert hyphenated skill name to Title Case for display."""
    return " ".join(word.capitalize() for word in skill_name.split("-"))


def parse_resources(raw_resources):
    if not raw_resources:
        return []
    resources = [item.strip() for item in raw_resources.split(",") if item.strip()]
    invalid = sorted({item for item in resources if item not in ALLOWED_RESOURCES})
    if invalid:
        allowed = ", ".join(sorted(ALLOWED_RESOURCES))
        print(f"[ERROR] Unknown resource type(s): {', '.join(invalid)}")
        print(f"        Allowed: {allowed}")
        sys.exit(1)
    deduped = []
    seen = set()
    for resource in resources:
        if resource not in seen:
            deduped.append(resource)
            seen.add(resource)
    return deduped


def create_resource_dirs(skill_dir, skill_name, skill_title, resources, include_examples):
    for resource in resources:
        resource_dir = skill_dir / resource
        resource_dir.mkdir(exist_ok=True)
        if resource == "scripts":
            if include_examples:
                example_script = resource_dir / "example.py"
                example_script.write_text(EXAMPLE_SCRIPT.format(skill_name=skill_name))
                example_script.chmod(0o755)
                print("[OK] Created scripts/example.py")
            else:
                print("[OK] Created scripts/")
        elif resource == "references":
            if include_examples:
                example_reference = resource_dir / "api_reference.md"
                example_reference.write_text(EXAMPLE_REFERENCE.format(skill_title=skill_title))
                print("[OK] Created references/api_reference.md")
            else:
                print("[OK] Created references/")
        elif resource == "assets":
            if include_examples:
                example_asset = resource_dir / "example_asset.txt"
                example_asset.write_text(EXAMPLE_ASSET)
                print("[OK] Created assets/example_asset.txt")
            else:
                print("[OK] Created assets/")


def init_skill(skill_name, path, resources, include_examples):
    """
    Initialize a new skill directory with template SKILL.md.

    Args:
        skill_name: Name of the skill
        path: Path where the skill directory should be created
        resources: Resource directories to create
        include_examples: Whether to create example files in resource directories

    Returns:
        Path to created skill directory, or None if error
    """
    # Determine skill directory path
    skill_dir = Path(path).resolve() / skill_name

    # Check if directory already exists
    if skill_dir.exists():
        print(f"[ERROR] Skill directory already exists: {skill_dir}")
        return None

    # Create skill directory
    try:
        skill_dir.mkdir(parents=True, exist_ok=False)
        print(f"[OK] Created skill directory: {skill_dir}")
    except Exception as e:
        print(f"[ERROR] Error creating directory: {e}")
        return None

    # Create SKILL.md from template
    skill_title = title_case_skill_name(skill_name)
    skill_content = SKILL_TEMPLATE.format(skill_name=skill_name, skill_title=skill_title)

    skill_md_path = skill_dir / "SKILL.md"
    try:
        skill_md_path.write_text(skill_content)
        print("[OK] Created SKILL.md")
    except Exception as e:
        print(f"[ERROR] Error creating SKILL.md: {e}")
        return None

    # Create resource directories if requested
    if resources:
        try:
            create_resource_dirs(skill_dir, skill_name, skill_title, resources, include_examples)
        except Exception as e:
            print(f"[ERROR] Error creating resource directories: {e}")
            return None

    # Print next steps
    print(f"\n[OK] Skill '{skill_name}' initialized successfully at {skill_dir}")
    print("\nNext steps:")
    print("1. Edit SKILL.md to complete the TODO items and update the description")
    if resources:
        if include_examples:
            print("2. Customize or delete the example files in scripts/, references/, and assets/")
        else:
            print("2. Add resources to scripts/, references/, and assets/ as needed")
    else:
        print("2. Create resource directories only if needed (scripts/, references/, assets/)")
    print("3. Run the validator when ready to check the skill structure")

    return skill_dir


def main():
    parser = argparse.ArgumentParser(
        description="Create a new skill directory with a SKILL.md template.",
    )
    parser.add_argument("skill_name", help="Skill name (normalized to hyphen-case)")
    parser.add_argument("--path", required=True, help="Output directory for the skill")
    parser.add_argument(
        "--resources",
        default="",
        help="Comma-separated list: scripts,references,assets",
    )
    parser.add_argument(
        "--examples",
        action="store_true",
        help="Create example files inside the selected resource directories",
    )
    args = parser.parse_args()

    raw_skill_name = args.skill_name
    skill_name = normalize_skill_name(raw_skill_name)
    if not skill_name:
        print("[ERROR] Skill name must include at least one letter or digit.")
        sys.exit(1)
    if len(skill_name) > MAX_SKILL_NAME_LENGTH:
        print(
            f"[ERROR] Skill name '{skill_name}' is too long ({len(skill_name)} characters). "
            f"Maximum is {MAX_SKILL_NAME_LENGTH} characters."
        )
        sys.exit(1)
    if skill_name != raw_skill_name:
        print(f"Note: Normalized skill name from '{raw_skill_name}' to '{skill_name}'.")

    resources = parse_resources(args.resources)
    if args.examples and not resources:
        print("[ERROR] --examples requires --resources to be set.")
        sys.exit(1)

    path = args.path

    print(f"Initializing skill: {skill_name}")
    print(f"  Location: {path}")
    if resources:
        print(f"  Resources: {', '.join(resources)}")
        if args.examples:
            print("  Examples: enabled")
    else:
        print("  Resources: none (create as needed)")
    print()

    result = init_skill(skill_name, path, resources, args.examples)

    if result:
        sys.exit(0)
    else:
        sys.exit(1)


if __name__ == "__main__":
    main()
139
openclaw/skills/skill-creator/scripts/package_skill.py
Normal file
@@ -0,0 +1,139 @@
#!/usr/bin/env python3
"""
Skill Packager - Creates a distributable .skill file of a skill folder

Usage:
    python utils/package_skill.py <path/to/skill-folder> [output-directory]

Example:
    python utils/package_skill.py skills/public/my-skill
    python utils/package_skill.py skills/public/my-skill ./dist
"""

import sys
import zipfile
from pathlib import Path

from quick_validate import validate_skill


def _is_within(path: Path, root: Path) -> bool:
    try:
        path.relative_to(root)
        return True
    except ValueError:
        return False


def package_skill(skill_path, output_dir=None):
    """
    Package a skill folder into a .skill file.

    Args:
        skill_path: Path to the skill folder
        output_dir: Optional output directory for the .skill file (defaults to current directory)

    Returns:
        Path to the created .skill file, or None if error
    """
    skill_path = Path(skill_path).resolve()

    # Validate skill folder exists
    if not skill_path.exists():
        print(f"[ERROR] Skill folder not found: {skill_path}")
        return None

    if not skill_path.is_dir():
        print(f"[ERROR] Path is not a directory: {skill_path}")
        return None

    # Validate SKILL.md exists
    skill_md = skill_path / "SKILL.md"
    if not skill_md.exists():
        print(f"[ERROR] SKILL.md not found in {skill_path}")
        return None

    # Run validation before packaging
    print("Validating skill...")
    valid, message = validate_skill(skill_path)
    if not valid:
        print(f"[ERROR] Validation failed: {message}")
        print("        Please fix the validation errors before packaging.")
        return None
    print(f"[OK] {message}\n")

    # Determine output location
    skill_name = skill_path.name
    if output_dir:
        output_path = Path(output_dir).resolve()
        output_path.mkdir(parents=True, exist_ok=True)
    else:
        output_path = Path.cwd()

    skill_filename = output_path / f"{skill_name}.skill"

    EXCLUDED_DIRS = {".git", ".svn", ".hg", "__pycache__", "node_modules"}

    # Create the .skill file (zip format)
    try:
        with zipfile.ZipFile(skill_filename, "w", zipfile.ZIP_DEFLATED) as zipf:
            # Walk through the skill directory
            for file_path in skill_path.rglob("*"):
                # Security: never follow or package symlinks.
                if file_path.is_symlink():
                    print(f"[WARN] Skipping symlink: {file_path}")
                    continue

                rel_parts = file_path.relative_to(skill_path).parts
                if any(part in EXCLUDED_DIRS for part in rel_parts):
                    continue

                if file_path.is_file():
                    resolved_file = file_path.resolve()
                    if not _is_within(resolved_file, skill_path):
                        print(f"[ERROR] File escapes skill root: {file_path}")
                        return None
                    # If output lives under skill_path, avoid writing archive into itself.
                    if resolved_file == skill_filename.resolve():
                        print(f"[WARN] Skipping output archive: {file_path}")
                        continue

                    # Calculate the relative path within the zip.
                    arcname = Path(skill_name) / file_path.relative_to(skill_path)
                    zipf.write(file_path, arcname)
                    print(f"  Added: {arcname}")

        print(f"\n[OK] Successfully packaged skill to: {skill_filename}")
        return skill_filename

    except Exception as e:
        print(f"[ERROR] Error creating .skill file: {e}")
        return None


def main():
    if len(sys.argv) < 2:
        print("Usage: python utils/package_skill.py <path/to/skill-folder> [output-directory]")
        print("\nExample:")
        print("  python utils/package_skill.py skills/public/my-skill")
        print("  python utils/package_skill.py skills/public/my-skill ./dist")
        sys.exit(1)

    skill_path = sys.argv[1]
    output_dir = sys.argv[2] if len(sys.argv) > 2 else None

    print(f"Packaging skill: {skill_path}")
    if output_dir:
        print(f"  Output directory: {output_dir}")
    print()

    result = package_skill(skill_path, output_dir)

    if result:
        sys.exit(0)
    else:
        sys.exit(1)


if __name__ == "__main__":
    main()
159
openclaw/skills/skill-creator/scripts/quick_validate.py
Normal file
@@ -0,0 +1,159 @@
#!/usr/bin/env python3
"""
Quick validation script for skills - minimal version
"""

import re
import sys
from pathlib import Path
from typing import Optional

try:
    import yaml
except ModuleNotFoundError:
    yaml = None

MAX_SKILL_NAME_LENGTH = 64


def _extract_frontmatter(content: str) -> Optional[str]:
    lines = content.splitlines()
    if not lines or lines[0].strip() != "---":
        return None
    for i in range(1, len(lines)):
        if lines[i].strip() == "---":
            return "\n".join(lines[1:i])
    return None


def _parse_simple_frontmatter(frontmatter_text: str) -> Optional[dict[str, str]]:
    """
    Minimal fallback parser used when PyYAML is unavailable.
    Supports simple `key: value` mappings used by SKILL.md frontmatter.
    """
    parsed: dict[str, str] = {}
    current_key: Optional[str] = None
    for raw_line in frontmatter_text.splitlines():
        stripped = raw_line.strip()
        if not stripped or stripped.startswith("#"):
            continue

        is_indented = raw_line[:1].isspace()
        if is_indented:
            if current_key is None:
                return None
            current_value = parsed[current_key]
            parsed[current_key] = (
                f"{current_value}\n{stripped}" if current_value else stripped
            )
            continue

        if ":" not in stripped:
            return None
        key, value = stripped.split(":", 1)
        key = key.strip()
        value = value.strip()
        if not key:
            return None
        if (value.startswith('"') and value.endswith('"')) or (
            value.startswith("'") and value.endswith("'")
        ):
            value = value[1:-1]
        parsed[key] = value
        current_key = key
    return parsed


def validate_skill(skill_path):
    """Basic validation of a skill"""
    skill_path = Path(skill_path)

    skill_md = skill_path / "SKILL.md"
    if not skill_md.exists():
        return False, "SKILL.md not found"

    try:
        content = skill_md.read_text(encoding="utf-8")
    except OSError as e:
        return False, f"Could not read SKILL.md: {e}"

    frontmatter_text = _extract_frontmatter(content)
    if frontmatter_text is None:
        return False, "Invalid frontmatter format"
    if yaml is not None:
        try:
            frontmatter = yaml.safe_load(frontmatter_text)
            if not isinstance(frontmatter, dict):
                return False, "Frontmatter must be a YAML dictionary"
        except yaml.YAMLError as e:
            return False, f"Invalid YAML in frontmatter: {e}"
    else:
        frontmatter = _parse_simple_frontmatter(frontmatter_text)
        if frontmatter is None:
            return (
                False,
                "Invalid YAML in frontmatter: unsupported syntax without PyYAML installed",
            )

    allowed_properties = {"name", "description", "license", "allowed-tools", "metadata"}

    unexpected_keys = set(frontmatter.keys()) - allowed_properties
    if unexpected_keys:
        allowed = ", ".join(sorted(allowed_properties))
        unexpected = ", ".join(sorted(unexpected_keys))
        return (
            False,
            f"Unexpected key(s) in SKILL.md frontmatter: {unexpected}. Allowed properties are: {allowed}",
        )

    if "name" not in frontmatter:
        return False, "Missing 'name' in frontmatter"
    if "description" not in frontmatter:
        return False, "Missing 'description' in frontmatter"

    name = frontmatter.get("name", "")
    if not isinstance(name, str):
        return False, f"Name must be a string, got {type(name).__name__}"
    name = name.strip()
    if name:
        if not re.match(r"^[a-z0-9-]+$", name):
            return (
                False,
                f"Name '{name}' should be hyphen-case (lowercase letters, digits, and hyphens only)",
            )
        if name.startswith("-") or name.endswith("-") or "--" in name:
            return (
                False,
                f"Name '{name}' cannot start/end with hyphen or contain consecutive hyphens",
            )
        if len(name) > MAX_SKILL_NAME_LENGTH:
            return (
                False,
                f"Name is too long ({len(name)} characters). "
                f"Maximum is {MAX_SKILL_NAME_LENGTH} characters.",
            )

    description = frontmatter.get("description", "")
    if not isinstance(description, str):
        return False, f"Description must be a string, got {type(description).__name__}"
    description = description.strip()
    if description:
        if "<" in description or ">" in description:
            return False, "Description cannot contain angle brackets (< or >)"
        if len(description) > 1024:
            return (
                False,
                f"Description is too long ({len(description)} characters). Maximum is 1024 characters.",
            )

    return True, "Skill is valid!"


if __name__ == "__main__":
    if len(sys.argv) != 2:
        print("Usage: python quick_validate.py <skill_directory>")
        sys.exit(1)

    valid, message = validate_skill(sys.argv[1])
    print(message)
    sys.exit(0 if valid else 1)
160
openclaw/skills/skill-creator/scripts/test_package_skill.py
Normal file
@@ -0,0 +1,160 @@
#!/usr/bin/env python3
"""
Regression tests for skill packaging security behavior.
"""

import sys
import tempfile
import types
import zipfile
from pathlib import Path
from unittest import TestCase, main
from unittest.mock import patch

SCRIPT_DIR = Path(__file__).resolve().parent
if str(SCRIPT_DIR) not in sys.path:
    sys.path.insert(0, str(SCRIPT_DIR))


fake_quick_validate = types.ModuleType("quick_validate")
fake_quick_validate.validate_skill = lambda _path: (True, "Skill is valid!")
original_quick_validate = sys.modules.get("quick_validate")
sys.modules["quick_validate"] = fake_quick_validate

import package_skill as package_skill_module
from package_skill import package_skill

if original_quick_validate is not None:
    sys.modules["quick_validate"] = original_quick_validate
else:
    sys.modules.pop("quick_validate", None)


class TestPackageSkillSecurity(TestCase):
    def setUp(self):
        self.temp_dir = Path(tempfile.mkdtemp(prefix="test_skill_"))

    def tearDown(self):
        import shutil

        if self.temp_dir.exists():
            shutil.rmtree(self.temp_dir)

    def create_skill(self, name="test-skill"):
        skill_dir = self.temp_dir / name
        skill_dir.mkdir(parents=True, exist_ok=True)
        (skill_dir / "SKILL.md").write_text("---\nname: test-skill\ndescription: test\n---\n")
        (skill_dir / "script.py").write_text("print('ok')\n")
        return skill_dir

    def test_packages_normal_files(self):
        skill_dir = self.create_skill("normal-skill")
        out_dir = self.temp_dir / "out"
        out_dir.mkdir()

        result = package_skill(str(skill_dir), str(out_dir))

        self.assertIsNotNone(result)
        skill_file = out_dir / "normal-skill.skill"
        self.assertTrue(skill_file.exists())
        with zipfile.ZipFile(skill_file, "r") as archive:
            names = set(archive.namelist())
            self.assertIn("normal-skill/SKILL.md", names)
            self.assertIn("normal-skill/script.py", names)

    def test_skips_symlink_to_external_file(self):
        skill_dir = self.create_skill("symlink-file-skill")
        outside = self.temp_dir / "outside-secret.txt"
        outside.write_text("super-secret\n")
        link = skill_dir / "loot.txt"
        out_dir = self.temp_dir / "out"
        out_dir.mkdir()

        try:
            link.symlink_to(outside)
        except (OSError, NotImplementedError):
            self.skipTest("symlink unsupported on this platform")

        result = package_skill(str(skill_dir), str(out_dir))
        self.assertIsNotNone(result)
        skill_file = out_dir / "symlink-file-skill.skill"
        self.assertTrue(skill_file.exists())
        with zipfile.ZipFile(skill_file, "r") as archive:
            names = set(archive.namelist())
            self.assertIn("symlink-file-skill/SKILL.md", names)
            self.assertIn("symlink-file-skill/script.py", names)
            self.assertNotIn("symlink-file-skill/loot.txt", names)

    def test_skips_symlink_directory(self):
        skill_dir = self.create_skill("symlink-dir-skill")
        outside_dir = self.temp_dir / "outside"
        outside_dir.mkdir()
        (outside_dir / "secret.txt").write_text("secret\n")
        link = skill_dir / "docs"
        out_dir = self.temp_dir / "out"
        out_dir.mkdir()

        try:
            link.symlink_to(outside_dir, target_is_directory=True)
        except (OSError, NotImplementedError):
            self.skipTest("symlink unsupported on this platform")

        result = package_skill(str(skill_dir), str(out_dir))
        self.assertIsNotNone(result)
        skill_file = out_dir / "symlink-dir-skill.skill"
        with zipfile.ZipFile(skill_file, "r") as archive:
            names = set(archive.namelist())
            self.assertIn("symlink-dir-skill/SKILL.md", names)
            self.assertIn("symlink-dir-skill/script.py", names)
            self.assertNotIn("symlink-dir-skill/docs/secret.txt", names)

    def test_rejects_resolved_path_outside_skill_root(self):
        skill_dir = self.create_skill("escape-skill")
        out_dir = self.temp_dir / "out"
        out_dir.mkdir()

        original_within = package_skill_module._is_within

        def fake_is_within(path_obj: Path, root: Path):
            if path_obj.name == "script.py":
                return False
            return original_within(path_obj, root)

        with patch.object(package_skill_module, "_is_within", fake_is_within):
            result = package_skill(str(skill_dir), str(out_dir))

        self.assertIsNone(result)

    def test_allows_nested_regular_files(self):
        skill_dir = self.create_skill("nested-skill")
        nested = skill_dir / "lib" / "helpers"
        nested.mkdir(parents=True, exist_ok=True)
        (nested / "util.py").write_text("def run():\n    return 1\n")
        out_dir = self.temp_dir / "out"
        out_dir.mkdir()

        result = package_skill(str(skill_dir), str(out_dir))

        self.assertIsNotNone(result)
        skill_file = out_dir / "nested-skill.skill"
        with zipfile.ZipFile(skill_file, "r") as archive:
            names = set(archive.namelist())
            self.assertIn("nested-skill/lib/helpers/util.py", names)

    def test_skips_output_archive_when_output_dir_is_skill_dir(self):
        skill_dir = self.create_skill("self-output-skill")

        result = package_skill(str(skill_dir), str(skill_dir))

        self.assertIsNotNone(result)
        skill_file = skill_dir / "self-output-skill.skill"
        self.assertTrue(skill_file.exists())
        with zipfile.ZipFile(skill_file, "r") as archive:
            names = set(archive.namelist())
            self.assertIn("self-output-skill/SKILL.md", names)
            self.assertIn("self-output-skill/script.py", names)
            self.assertNotIn("self-output-skill/self-output-skill.skill", names)


if __name__ == "__main__":
    main()
72
openclaw/skills/skill-creator/scripts/test_quick_validate.py
Normal file
@@ -0,0 +1,72 @@
#!/usr/bin/env python3
"""
Regression tests for quick skill validation.
"""

import tempfile
from pathlib import Path
from unittest import TestCase, main

import quick_validate


class TestQuickValidate(TestCase):
    def setUp(self):
        self.temp_dir = Path(tempfile.mkdtemp(prefix="test_quick_validate_"))

    def tearDown(self):
        import shutil

        if self.temp_dir.exists():
            shutil.rmtree(self.temp_dir)

    def test_accepts_crlf_frontmatter(self):
        skill_dir = self.temp_dir / "crlf-skill"
        skill_dir.mkdir(parents=True, exist_ok=True)
        content = "---\r\nname: crlf-skill\r\ndescription: ok\r\n---\r\n# Skill\r\n"
        (skill_dir / "SKILL.md").write_text(content, encoding="utf-8")

        valid, message = quick_validate.validate_skill(skill_dir)

        self.assertTrue(valid, message)

    def test_rejects_missing_frontmatter_closing_fence(self):
        skill_dir = self.temp_dir / "bad-skill"
        skill_dir.mkdir(parents=True, exist_ok=True)
        content = "---\nname: bad-skill\ndescription: missing end\n# no closing fence\n"
        (skill_dir / "SKILL.md").write_text(content, encoding="utf-8")

        valid, message = quick_validate.validate_skill(skill_dir)

        self.assertFalse(valid)
        self.assertEqual(message, "Invalid frontmatter format")

    def test_fallback_parser_handles_multiline_frontmatter_without_pyyaml(self):
        skill_dir = self.temp_dir / "multiline-skill"
        skill_dir.mkdir(parents=True, exist_ok=True)
        content = """---
name: multiline-skill
description: Works without pyyaml
allowed-tools:
  - gh
metadata: |
  {
    "owners": ["team-openclaw"]
  }
---
# Skill
"""
        (skill_dir / "SKILL.md").write_text(content, encoding="utf-8")

        previous_yaml = quick_validate.yaml
        quick_validate.yaml = None
        try:
            valid, message = quick_validate.validate_skill(skill_dir)
        finally:
            quick_validate.yaml = previous_yaml

        self.assertTrue(valid, message)


if __name__ == "__main__":
    main()
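The CRLF tolerance and missing-fence rejection these tests pin down can be sketched with a minimal frontmatter splitter. This is an illustrative sketch only; `quick_validate`'s actual implementation may differ.

```python
import re


def split_frontmatter(text: str):
    """Return (frontmatter, body) or None when the fences are malformed.

    CRLF line endings are normalized first so Windows-edited SKILL.md
    files pass, matching test_accepts_crlf_frontmatter above.
    """
    normalized = text.replace("\r\n", "\n")
    match = re.match(r"^---\n(.*?)\n---\n(.*)$", normalized, re.DOTALL)
    if match is None:
        # No opening fence, or no closing fence: invalid frontmatter.
        return None
    return match.group(1), match.group(2)
```

A file missing its closing `---` fence returns `None`, which corresponds to the "Invalid frontmatter format" failure in the second test.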
144
openclaw/skills/slack/SKILL.md
Normal file
@@ -0,0 +1,144 @@
---
name: slack
description: Use when you need to control Slack from OpenClaw via the slack tool, including reacting to messages or pinning/unpinning items in Slack channels or DMs.
metadata: { "openclaw": { "emoji": "💬", "requires": { "config": ["channels.slack"] } } }
---

# Slack Actions

## Overview

Use `slack` to react, manage pins, send/edit/delete messages, and fetch member info. The tool uses the bot token configured for OpenClaw.

## Inputs to collect

- `channelId` and `messageId` (Slack message timestamp, e.g. `1712023032.1234`).
- For reactions, an `emoji` (Unicode or `:name:`).
- For message sends, a `to` target (`channel:<id>` or `user:<id>`) and `content`.

Message context lines include `slack message id` and `channel` fields you can reuse directly.

## Actions

### Action groups

| Action group | Default | Notes                  |
| ------------ | ------- | ---------------------- |
| reactions    | enabled | React + list reactions |
| messages     | enabled | Read/send/edit/delete  |
| pins         | enabled | Pin/unpin/list         |
| memberInfo   | enabled | Member info            |
| emojiList    | enabled | Custom emoji list      |

### React to a message

```json
{
  "action": "react",
  "channelId": "C123",
  "messageId": "1712023032.1234",
  "emoji": "✅"
}
```

### List reactions

```json
{
  "action": "reactions",
  "channelId": "C123",
  "messageId": "1712023032.1234"
}
```

### Send a message

```json
{
  "action": "sendMessage",
  "to": "channel:C123",
  "content": "Hello from OpenClaw"
}
```

### Edit a message

```json
{
  "action": "editMessage",
  "channelId": "C123",
  "messageId": "1712023032.1234",
  "content": "Updated text"
}
```

### Delete a message

```json
{
  "action": "deleteMessage",
  "channelId": "C123",
  "messageId": "1712023032.1234"
}
```

### Read recent messages

```json
{
  "action": "readMessages",
  "channelId": "C123",
  "limit": 20
}
```

### Pin a message

```json
{
  "action": "pinMessage",
  "channelId": "C123",
  "messageId": "1712023032.1234"
}
```

### Unpin a message

```json
{
  "action": "unpinMessage",
  "channelId": "C123",
  "messageId": "1712023032.1234"
}
```

### List pinned items

```json
{
  "action": "listPins",
  "channelId": "C123"
}
```

### Member info

```json
{
  "action": "memberInfo",
  "userId": "U123"
}
```

### Emoji list

```json
{
  "action": "emojiList"
}
```

## Ideas to try

- React with ✅ to mark completed tasks.
- Pin key decisions or weekly status updates.
49
openclaw/skills/songsee/SKILL.md
Normal file
@@ -0,0 +1,49 @@
---
name: songsee
description: Generate spectrograms and feature-panel visualizations from audio with the songsee CLI.
homepage: https://github.com/steipete/songsee
metadata:
  {
    "openclaw":
      {
        "emoji": "🌊",
        "requires": { "bins": ["songsee"] },
        "install":
          [
            {
              "id": "brew",
              "kind": "brew",
              "formula": "steipete/tap/songsee",
              "bins": ["songsee"],
              "label": "Install songsee (brew)",
            },
          ],
      },
  }
---

# songsee

Generate spectrograms + feature panels from audio.

Quick start

- Spectrogram: `songsee track.mp3`
- Multi-panel: `songsee track.mp3 --viz spectrogram,mel,chroma,hpss,selfsim,loudness,tempogram,mfcc,flux`
- Time slice: `songsee track.mp3 --start 12.5 --duration 8 -o slice.jpg`
- Stdin: `cat track.mp3 | songsee - --format png -o out.png`

Common flags

- `--viz` list (repeatable or comma-separated)
- `--style` palette (classic, magma, inferno, viridis, gray)
- `--width` / `--height` output size
- `--window` / `--hop` FFT settings
- `--min-freq` / `--max-freq` frequency range
- `--start` / `--duration` time slice
- `--format` jpg|png

Notes

- WAV/MP3 decode natively; other formats use ffmpeg if available.
- Multiple `--viz` values render a grid.
65
openclaw/skills/sonoscli/SKILL.md
Normal file
@@ -0,0 +1,65 @@
---
name: sonoscli
description: Control Sonos speakers (discover/status/play/volume/group).
homepage: https://sonoscli.sh
metadata:
  {
    "openclaw":
      {
        "emoji": "🔊",
        "requires": { "bins": ["sonos"] },
        "install":
          [
            {
              "id": "go",
              "kind": "go",
              "module": "github.com/steipete/sonoscli/cmd/sonos@latest",
              "bins": ["sonos"],
              "label": "Install sonoscli (go)",
            },
          ],
      },
  }
---

# Sonos CLI

Use `sonos` to control Sonos speakers on the local network.

## Quick start

- `sonos discover`
- `sonos status --name "Kitchen"`
- `sonos play|pause|stop --name "Kitchen"`
- `sonos volume set 15 --name "Kitchen"`

## Common tasks

- Grouping: `sonos group status|join|unjoin|party|solo`
- Favorites: `sonos favorites list|open`
- Queue: `sonos queue list|play|clear`
- Spotify search (via SMAPI): `sonos smapi search --service "Spotify" --category tracks "query"`

## Notes

- If SSDP fails, specify `--ip <speaker-ip>`.
- Spotify Web API search is optional and requires `SPOTIFY_CLIENT_ID/SECRET`.
- If there is an error, check the troubleshooting section and offer advice if there is a decent match.

## Troubleshooting

### `sonos discover` - `no route to host`

- On error `Error: write udp4 0.0.0.0:64326->239.255.255.250:1900: sendto: no route to host (Command exited with code 1)`
- Note that the port number after `0.0.0.0:` is ephemeral and will change, and the netmask may not match exactly either
- The `sendto: no route to host` part should stay consistent
- Advise the user that in `direct` (no Docker sandbox) mode on macOS, Settings -> Privacy & Security -> Local Network must be enabled for the top-level host parent process of the Gateway:
  - `node` if running via `launchd`
  - `Terminal` if running the gateway directly in the terminal
  - `Visual Studio Code` if running via terminal in VS Code
- An alternative is to use `sandbox` (Docker container) mode with network access allowed for that sandbox

### `sonos discover` - `bind: operation not permitted`

- On error `Error: listen udp4 0.0.0.0:0: bind: operation not permitted`
- Advise the user that they may be running inside a Codex or other sandbox that does not permit network access (this can be replicated by running `sonos discover` within a Codex CLI session with the sandbox enabled and without approving the escalation request)
64
openclaw/skills/spotify-player/SKILL.md
Normal file
@@ -0,0 +1,64 @@
---
name: spotify-player
description: Terminal Spotify playback/search via spogo (preferred) or spotify_player.
homepage: https://www.spotify.com
metadata:
  {
    "openclaw":
      {
        "emoji": "🎵",
        "requires": { "anyBins": ["spogo", "spotify_player"] },
        "install":
          [
            {
              "id": "brew",
              "kind": "brew",
              "formula": "spogo",
              "tap": "steipete/tap",
              "bins": ["spogo"],
              "label": "Install spogo (brew)",
            },
            {
              "id": "brew",
              "kind": "brew",
              "formula": "spotify_player",
              "bins": ["spotify_player"],
              "label": "Install spotify_player (brew)",
            },
          ],
      },
  }
---

# spogo / spotify_player

Use `spogo` **(preferred)** for Spotify playback/search. Fall back to `spotify_player` if needed.

Requirements

- Spotify Premium account.
- Either `spogo` or `spotify_player` installed.

spogo setup

- Import cookies: `spogo auth import --browser chrome`

Common CLI commands

- Search: `spogo search track "query"`
- Playback: `spogo play|pause|next|prev`
- Devices: `spogo device list`, `spogo device set "<name|id>"`
- Status: `spogo status`

spotify_player commands (fallback)

- Search: `spotify_player search "query"`
- Playback: `spotify_player playback play|pause|next|previous`
- Connect device: `spotify_player connect`
- Like track: `spotify_player like`

Notes

- Config folder: `~/.config/spotify-player` (e.g., `app.toml`).
- For Spotify Connect integration, set a user `client_id` in config.
- TUI shortcuts are available via `?` in the app.
87
openclaw/skills/summarize/SKILL.md
Normal file
@@ -0,0 +1,87 @@
---
name: summarize
description: Summarize or extract text/transcripts from URLs, podcasts, and local files (great fallback for “transcribe this YouTube/video”).
homepage: https://summarize.sh
metadata:
  {
    "openclaw":
      {
        "emoji": "🧾",
        "requires": { "bins": ["summarize"] },
        "install":
          [
            {
              "id": "brew",
              "kind": "brew",
              "formula": "steipete/tap/summarize",
              "bins": ["summarize"],
              "label": "Install summarize (brew)",
            },
          ],
      },
  }
---

# Summarize

Fast CLI to summarize URLs, local files, and YouTube links.

## When to use (trigger phrases)

Use this skill immediately when the user asks any of:

- “use summarize.sh”
- “what’s this link/video about?”
- “summarize this URL/article”
- “transcribe this YouTube/video” (best-effort transcript extraction; no `yt-dlp` needed)

## Quick start

```bash
summarize "https://example.com" --model google/gemini-3-flash-preview
summarize "/path/to/file.pdf" --model google/gemini-3-flash-preview
summarize "https://youtu.be/dQw4w9WgXcQ" --youtube auto
```

## YouTube: summary vs transcript

Best-effort transcript (URLs only):

```bash
summarize "https://youtu.be/dQw4w9WgXcQ" --youtube auto --extract-only
```

If the user asked for a transcript but it’s huge, return a tight summary first, then ask which section/time range to expand.

## Model + keys

Set the API key for your chosen provider:

- OpenAI: `OPENAI_API_KEY`
- Anthropic: `ANTHROPIC_API_KEY`
- xAI: `XAI_API_KEY`
- Google: `GEMINI_API_KEY` (aliases: `GOOGLE_GENERATIVE_AI_API_KEY`, `GOOGLE_API_KEY`)

The default model is `google/gemini-3-flash-preview` if none is set.

## Useful flags

- `--length short|medium|long|xl|xxl|<chars>`
- `--max-output-tokens <count>`
- `--extract-only` (URLs only)
- `--json` (machine readable)
- `--firecrawl auto|off|always` (fallback extraction)
- `--youtube auto` (Apify fallback if `APIFY_API_TOKEN` set)

## Config

Optional config file: `~/.summarize/config.json`

```json
{ "model": "openai/gpt-5.2" }
```

Optional services:

- `FIRECRAWL_API_KEY` for blocked sites
- `APIFY_API_TOKEN` for YouTube fallback
86
openclaw/skills/things-mac/SKILL.md
Normal file
@@ -0,0 +1,86 @@
---
name: things-mac
description: Manage Things 3 via the `things` CLI on macOS (add/update projects+todos via URL scheme; read/search/list from the local Things database). Use when a user asks OpenClaw to add a task to Things, list inbox/today/upcoming, search tasks, or inspect projects/areas/tags.
homepage: https://github.com/ossianhempel/things3-cli
metadata:
  {
    "openclaw":
      {
        "emoji": "✅",
        "os": ["darwin"],
        "requires": { "bins": ["things"] },
        "install":
          [
            {
              "id": "go",
              "kind": "go",
              "module": "github.com/ossianhempel/things3-cli/cmd/things@latest",
              "bins": ["things"],
              "label": "Install things3-cli (go)",
            },
          ],
      },
  }
---

# Things 3 CLI

Use `things` to read your local Things database (inbox/today/search/projects/areas/tags) and to add/update todos via the Things URL scheme.

Setup

- Install (recommended, Apple Silicon): `GOBIN=/opt/homebrew/bin go install github.com/ossianhempel/things3-cli/cmd/things@latest`
- If DB reads fail: grant **Full Disk Access** to the calling app (Terminal for manual runs; `OpenClaw.app` for gateway runs).
- Optional: set `THINGSDB` (or pass `--db`) to point at your `ThingsData-*` folder.
- Optional: set `THINGS_AUTH_TOKEN` to avoid passing `--auth-token` for update ops.

Read-only (DB)

- `things inbox --limit 50`
- `things today`
- `things upcoming`
- `things search "query"`
- `things projects` / `things areas` / `things tags`

Write (URL scheme)

- Prefer safe preview: `things --dry-run add "Title"`
- Add: `things add "Title" --notes "..." --when today --deadline 2026-01-02`
- Bring Things to front: `things --foreground add "Title"`

Examples: add a todo

- Basic: `things add "Buy milk"`
- With notes: `things add "Buy milk" --notes "2% + bananas"`
- Into a project/area: `things add "Book flights" --list "Travel"`
- Into a project heading: `things add "Pack charger" --list "Travel" --heading "Before"`
- With tags: `things add "Call dentist" --tags "health,phone"`
- Checklist: `things add "Trip prep" --checklist-item "Passport" --checklist-item "Tickets"`
- From STDIN (multi-line => title + notes):
  - `cat <<'EOF' | things add -`
  - `Title line`
  - `Notes line 1`
  - `Notes line 2`
  - `EOF`

Examples: modify a todo (needs auth token)

- First: get the ID (UUID column): `things search "milk" --limit 5`
- Auth: set `THINGS_AUTH_TOKEN` or pass `--auth-token <TOKEN>`
- Title: `things update --id <UUID> --auth-token <TOKEN> "New title"`
- Notes replace: `things update --id <UUID> --auth-token <TOKEN> --notes "New notes"`
- Notes append/prepend: `things update --id <UUID> --auth-token <TOKEN> --append-notes "..."` / `--prepend-notes "..."`
- Move lists: `things update --id <UUID> --auth-token <TOKEN> --list "Travel" --heading "Before"`
- Tags replace/add: `things update --id <UUID> --auth-token <TOKEN> --tags "a,b"` / `things update --id <UUID> --auth-token <TOKEN> --add-tags "a,b"`
- Complete/cancel (soft-delete-ish): `things update --id <UUID> --auth-token <TOKEN> --completed` / `--canceled`
- Safe preview: `things --dry-run update --id <UUID> --auth-token <TOKEN> --completed`

Delete a todo?

- Not supported by `things3-cli` right now (no “delete/move-to-trash” write command; `things trash` is a read-only listing).
- Options: use the Things UI to delete/trash, or mark as `--completed` / `--canceled` via `things update`.

Notes

- macOS-only.
- `--dry-run` prints the URL and does not open Things.
153
openclaw/skills/tmux/SKILL.md
Normal file
@@ -0,0 +1,153 @@
---
name: tmux
description: Remote-control tmux sessions for interactive CLIs by sending keystrokes and scraping pane output.
metadata:
  { "openclaw": { "emoji": "🧵", "os": ["darwin", "linux"], "requires": { "bins": ["tmux"] } } }
---

# tmux Session Control

Control tmux sessions by sending keystrokes and reading output. Essential for managing Claude Code sessions.

## When to Use

✅ **USE this skill when:**

- Monitoring Claude/Codex sessions in tmux
- Sending input to interactive terminal applications
- Scraping output from long-running processes in tmux
- Navigating tmux panes/windows programmatically
- Checking on background work in existing sessions

## When NOT to Use

❌ **DON'T use this skill when:**

- Running one-off shell commands → use `exec` tool directly
- Starting new background processes → use `exec` with `background:true`
- Non-interactive scripts → use `exec` tool
- The process isn't in tmux
- You need to create a new tmux session → use `exec` with `tmux new-session`

## Example Sessions

| Session                 | Purpose                     |
| ----------------------- | --------------------------- |
| `shared`                | Primary interactive session |
| `worker-2` - `worker-8` | Parallel worker sessions    |

## Common Commands

### List Sessions

```bash
tmux list-sessions
tmux ls
```

### Capture Output

```bash
# Last 20 lines of pane
tmux capture-pane -t shared -p | tail -20

# Entire scrollback
tmux capture-pane -t shared -p -S -

# Specific pane in window
tmux capture-pane -t shared:0.0 -p
```

### Send Keys

```bash
# Send text (doesn't press Enter)
tmux send-keys -t shared "hello"

# Send text + Enter
tmux send-keys -t shared "y" Enter

# Send special keys
tmux send-keys -t shared Enter
tmux send-keys -t shared Escape
tmux send-keys -t shared C-c # Ctrl+C
tmux send-keys -t shared C-d # Ctrl+D (EOF)
tmux send-keys -t shared C-z # Ctrl+Z (suspend)
```

### Window/Pane Navigation

```bash
# Select window
tmux select-window -t shared:0

# Select pane
tmux select-pane -t shared:0.1

# List windows
tmux list-windows -t shared
```

### Session Management

```bash
# Create new session
tmux new-session -d -s newsession

# Kill session
tmux kill-session -t sessionname

# Rename session
tmux rename-session -t old new
```

## Sending Input Safely

For interactive TUIs (Claude Code, Codex, etc.), split text and Enter into separate sends to avoid paste/multiline edge cases:

```bash
tmux send-keys -t shared -l -- "Please apply the patch in src/foo.ts"
sleep 0.1
tmux send-keys -t shared Enter
```

## Claude Code Session Patterns

### Check if Session Needs Input

```bash
# Look for prompts
tmux capture-pane -t worker-3 -p | tail -10 | grep -E "❯|Yes.*No|proceed|permission"
```

### Approve Claude Code Prompt

```bash
# Send 'y' and Enter
tmux send-keys -t worker-3 'y' Enter

# Or select numbered option
tmux send-keys -t worker-3 '2' Enter
```

### Check All Sessions Status

```bash
for s in shared worker-2 worker-3 worker-4 worker-5 worker-6 worker-7 worker-8; do
  echo "=== $s ==="
  tmux capture-pane -t "$s" -p 2>/dev/null | tail -5
done
```

### Send Task to Session

```bash
tmux send-keys -t worker-4 "Fix the bug in auth.js" Enter
```

## Notes

- Use `capture-pane -p` to print to stdout (essential for scripting)
- `-S -` captures the entire scrollback history
- Target format: `session:window.pane` (e.g., `shared:0.0`)
- Sessions persist across SSH disconnects
112
openclaw/skills/tmux/scripts/find-sessions.sh
Normal file
@@ -0,0 +1,112 @@
#!/usr/bin/env bash
set -euo pipefail

usage() {
  cat <<'USAGE'
Usage: find-sessions.sh [-L socket-name|-S socket-path|-A] [-q pattern]

List tmux sessions on a socket (default tmux socket if none provided).

Options:
  -L, --socket       tmux socket name (passed to tmux -L)
  -S, --socket-path  tmux socket path (passed to tmux -S)
  -A, --all          scan all sockets under OPENCLAW_TMUX_SOCKET_DIR
  -q, --query        case-insensitive substring to filter session names
  -h, --help         show this help
USAGE
}

socket_name=""
socket_path=""
query=""
scan_all=false
socket_dir="${OPENCLAW_TMUX_SOCKET_DIR:-${CLAWDBOT_TMUX_SOCKET_DIR:-${TMPDIR:-/tmp}/openclaw-tmux-sockets}}"

while [[ $# -gt 0 ]]; do
  case "$1" in
    -L|--socket) socket_name="${2-}"; shift 2 ;;
    -S|--socket-path) socket_path="${2-}"; shift 2 ;;
    -A|--all) scan_all=true; shift ;;
    -q|--query) query="${2-}"; shift 2 ;;
    -h|--help) usage; exit 0 ;;
    *) echo "Unknown option: $1" >&2; usage; exit 1 ;;
  esac
done

if [[ "$scan_all" == true && ( -n "$socket_name" || -n "$socket_path" ) ]]; then
  echo "Cannot combine --all with -L or -S" >&2
  exit 1
fi

if [[ -n "$socket_name" && -n "$socket_path" ]]; then
  echo "Use either -L or -S, not both" >&2
  exit 1
fi

if ! command -v tmux >/dev/null 2>&1; then
  echo "tmux not found in PATH" >&2
  exit 1
fi

list_sessions() {
  local label="$1"; shift
  local tmux_cmd=(tmux "$@")

  if ! sessions="$("${tmux_cmd[@]}" list-sessions -F '#{session_name}\t#{session_attached}\t#{session_created_string}' 2>/dev/null)"; then
    echo "No tmux server found on $label" >&2
    return 1
  fi

  if [[ -n "$query" ]]; then
    sessions="$(printf '%s\n' "$sessions" | grep -i -- "$query" || true)"
  fi

  if [[ -z "$sessions" ]]; then
    echo "No sessions found on $label"
    return 0
  fi

  echo "Sessions on $label:"
  printf '%s\n' "$sessions" | while IFS=$'\t' read -r name attached created; do
    attached_label=$([[ "$attached" == "1" ]] && echo "attached" || echo "detached")
    printf '  - %s (%s, started %s)\n' "$name" "$attached_label" "$created"
  done
}

if [[ "$scan_all" == true ]]; then
  if [[ ! -d "$socket_dir" ]]; then
    echo "Socket directory not found: $socket_dir" >&2
    exit 1
  fi

  shopt -s nullglob
  sockets=("$socket_dir"/*)
  shopt -u nullglob

  if [[ "${#sockets[@]}" -eq 0 ]]; then
    echo "No sockets found under $socket_dir" >&2
    exit 1
  fi

  exit_code=0
  for sock in "${sockets[@]}"; do
    if [[ ! -S "$sock" ]]; then
      continue
    fi
    list_sessions "socket path '$sock'" -S "$sock" || exit_code=$?
  done
  exit "$exit_code"
fi

tmux_cmd=(tmux)
socket_label="default socket"

if [[ -n "$socket_name" ]]; then
  tmux_cmd+=(-L "$socket_name")
  socket_label="socket name '$socket_name'"
elif [[ -n "$socket_path" ]]; then
  tmux_cmd+=(-S "$socket_path")
  socket_label="socket path '$socket_path'"
fi

list_sessions "$socket_label" "${tmux_cmd[@]:1}"
83
openclaw/skills/tmux/scripts/wait-for-text.sh
Normal file
@@ -0,0 +1,83 @@
|
||||
#!/usr/bin/env bash
|
||||
set -euo pipefail
|
||||
|
||||
usage() {
|
||||
cat <<'USAGE'
|
||||
Usage: wait-for-text.sh -t target -p pattern [options]
|
||||
|
||||
Poll a tmux pane for text and exit when found.
|
||||
|
||||
Options:
|
||||
-t, --target tmux target (session:window.pane), required
|
||||
-p, --pattern regex pattern to look for, required
|
||||
-F, --fixed treat pattern as a fixed string (grep -F)
|
||||
-T, --timeout seconds to wait (integer, default: 15)
|
||||
-i, --interval poll interval in seconds (default: 0.5)
|
||||
    -l, --lines      number of history lines to inspect (integer, default: 1000)
    -h, --help       show this help
USAGE
}

target=""
pattern=""
grep_flag="-E"
timeout=15
interval=0.5
lines=1000

while [[ $# -gt 0 ]]; do
  case "$1" in
    -t|--target) target="${2-}"; shift 2 ;;
    -p|--pattern) pattern="${2-}"; shift 2 ;;
    -F|--fixed) grep_flag="-F"; shift ;;
    -T|--timeout) timeout="${2-}"; shift 2 ;;
    -i|--interval) interval="${2-}"; shift 2 ;;
    -l|--lines) lines="${2-}"; shift 2 ;;
    -h|--help) usage; exit 0 ;;
    *) echo "Unknown option: $1" >&2; usage; exit 1 ;;
  esac
done

if [[ -z "$target" || -z "$pattern" ]]; then
  echo "target and pattern are required" >&2
  usage
  exit 1
fi

if ! [[ "$timeout" =~ ^[0-9]+$ ]]; then
  echo "timeout must be an integer number of seconds" >&2
  exit 1
fi

if ! [[ "$lines" =~ ^[0-9]+$ ]]; then
  echo "lines must be an integer" >&2
  exit 1
fi

if ! command -v tmux >/dev/null 2>&1; then
  echo "tmux not found in PATH" >&2
  exit 1
fi

# End time in epoch seconds (integer, good enough for polling)
start_epoch=$(date +%s)
deadline=$((start_epoch + timeout))

while true; do
  # -J joins wrapped lines, -S uses a negative index to read the last N lines
  pane_text="$(tmux capture-pane -p -J -t "$target" -S "-${lines}" 2>/dev/null || true)"

  if printf '%s\n' "$pane_text" | grep $grep_flag -- "$pattern" >/dev/null 2>&1; then
    exit 0
  fi

  now=$(date +%s)
  if (( now >= deadline )); then
    echo "Timed out after ${timeout}s waiting for pattern: $pattern" >&2
    echo "Last ${lines} lines from $target:" >&2
    printf '%s\n' "$pane_text" >&2
    exit 1
  fi

  sleep "$interval"
done
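The grep step inside the polling loop can be exercised in isolation; a minimal sketch with a hypothetical pane capture (the sample text and pattern are made up, no tmux needed):

```shell
# Fake pane contents standing in for `tmux capture-pane` output
pane_text=$'$ op signin\nAuthorized.\n$ '
pattern='Authorized\.'
grep_flag="-E"

# Same check the polling loop performs on each iteration
if printf '%s\n' "$pane_text" | grep $grep_flag -- "$pattern" >/dev/null 2>&1; then
  found=yes
else
  found=no
fi
echo "$found"   # → yes
```

The `--` guard matters: it keeps patterns that start with `-` from being parsed as grep options.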
95
openclaw/skills/trello/SKILL.md
Normal file
@@ -0,0 +1,95 @@
---
name: trello
description: Manage Trello boards, lists, and cards via the Trello REST API.
homepage: https://developer.atlassian.com/cloud/trello/rest/
metadata:
  {
    "openclaw":
      { "emoji": "📋", "requires": { "bins": ["jq"], "env": ["TRELLO_API_KEY", "TRELLO_TOKEN"] } },
  }
---

# Trello Skill

Manage Trello boards, lists, and cards directly from OpenClaw.

## Setup

1. Get your API key: https://trello.com/app-key
2. Generate a token (click the "Token" link on that page)
3. Set environment variables:

```bash
export TRELLO_API_KEY="your-api-key"
export TRELLO_TOKEN="your-token"
```

## Usage

All commands use curl against the Trello REST API.

### List boards

```bash
curl -s "https://api.trello.com/1/members/me/boards?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" | jq '.[] | {name, id}'
```

### List lists in a board

```bash
curl -s "https://api.trello.com/1/boards/{boardId}/lists?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" | jq '.[] | {name, id}'
```

### List cards in a list

```bash
curl -s "https://api.trello.com/1/lists/{listId}/cards?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" | jq '.[] | {name, id, desc}'
```

### Create a card

```bash
curl -s -X POST "https://api.trello.com/1/cards?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" \
  -d "idList={listId}" \
  -d "name=Card Title" \
  -d "desc=Card description"
```

### Move a card to another list

```bash
curl -s -X PUT "https://api.trello.com/1/cards/{cardId}?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" \
  -d "idList={newListId}"
```

### Add a comment to a card

```bash
curl -s -X POST "https://api.trello.com/1/cards/{cardId}/actions/comments?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" \
  -d "text=Your comment here"
```

### Archive a card

```bash
curl -s -X PUT "https://api.trello.com/1/cards/{cardId}?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" \
  -d "closed=true"
```

## Notes

- Board/list/card IDs can be found in the Trello URL or via the list commands above
- The API key and token grant full access to your Trello account; keep them secret!
- Rate limits: 300 requests per 10 seconds per API key; 100 requests per 10 seconds per token; `/1/members` endpoints are limited to 100 requests per 900 seconds

## Examples

```bash
# Get all boards
curl -s "https://api.trello.com/1/members/me/boards?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN&fields=name,id" | jq

# Find a specific board by name
curl -s "https://api.trello.com/1/members/me/boards?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" | jq '.[] | select(.name | contains("Work"))'

# Get all cards on a board
curl -s "https://api.trello.com/1/boards/{boardId}/cards?key=$TRELLO_API_KEY&token=$TRELLO_TOKEN" | jq '.[] | {name, list: .idList}'
```
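Every Trello endpoint above shares the same base URL and `key`/`token` query string; a small helper function (hypothetical, not part of the skill itself) keeps the one-liners shorter:

```shell
# Hypothetical helper: build an authenticated Trello API URL
trello_url() {
  printf 'https://api.trello.com/1/%s?key=%s&token=%s' \
    "$1" "$TRELLO_API_KEY" "$TRELLO_TOKEN"
}

# Demo with dummy credentials; no request is made
TRELLO_API_KEY="your-api-key"
TRELLO_TOKEN="your-token"
url=$(trello_url members/me/boards)
echo "$url"
```

With real credentials this composes with the examples directly, e.g. `curl -s "$(trello_url members/me/boards)" | jq '.[] | {name, id}'`.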
46
openclaw/skills/video-frames/SKILL.md
Normal file
@@ -0,0 +1,46 @@
---
name: video-frames
description: Extract frames or short clips from videos using ffmpeg.
homepage: https://ffmpeg.org
metadata:
  {
    "openclaw":
      {
        "emoji": "🎞️",
        "requires": { "bins": ["ffmpeg"] },
        "install":
          [
            {
              "id": "brew",
              "kind": "brew",
              "formula": "ffmpeg",
              "bins": ["ffmpeg"],
              "label": "Install ffmpeg (brew)",
            },
          ],
      },
  }
---

# Video Frames (ffmpeg)

Extract a single frame from a video, or create quick thumbnails for inspection.

## Quick start

First frame:

```bash
{baseDir}/scripts/frame.sh /path/to/video.mp4 --out /tmp/frame.jpg
```

At a timestamp:

```bash
{baseDir}/scripts/frame.sh /path/to/video.mp4 --time 00:00:10 --out /tmp/frame-10s.jpg
```

## Notes

- Prefer `--time` for “what is happening around here?” questions.
- Use `.jpg` for a quick share; use `.png` for crisp UI frames.
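For the "quick thumbnails" case, ffmpeg's `fps` filter grabs one frame every N seconds; a sketch (paths are placeholders, and the ffmpeg line is left commented so nothing runs without a real video on hand):

```shell
in=/path/to/video.mp4
every=5                      # seconds between thumbnails
vf="fps=1/${every}"
echo "$vf"                   # → fps=1/5
# ffmpeg -hide_banner -loglevel error -i "$in" -vf "$vf" /tmp/thumb_%03d.jpg
```

The `%03d` in the output pattern numbers the thumbnails (thumb_001.jpg, thumb_002.jpg, ...).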
81
openclaw/skills/video-frames/scripts/frame.sh
Normal file
@@ -0,0 +1,81 @@
#!/usr/bin/env bash
set -euo pipefail

usage() {
  cat >&2 <<'EOF'
Usage:
  frame.sh <video-file> [--time HH:MM:SS] [--index N] --out /path/to/frame.jpg

Examples:
  frame.sh video.mp4 --out /tmp/frame.jpg
  frame.sh video.mp4 --time 00:00:10 --out /tmp/frame-10s.jpg
  frame.sh video.mp4 --index 0 --out /tmp/frame0.png
EOF
  exit 2
}

if [[ "${1:-}" == "" || "${1:-}" == "-h" || "${1:-}" == "--help" ]]; then
  usage
fi

in="${1:-}"
shift || true

time=""
index=""
out=""

while [[ $# -gt 0 ]]; do
  case "$1" in
    --time)
      time="${2:-}"
      shift 2
      ;;
    --index)
      index="${2:-}"
      shift 2
      ;;
    --out)
      out="${2:-}"
      shift 2
      ;;
    *)
      echo "Unknown arg: $1" >&2
      usage
      ;;
  esac
done

if [[ ! -f "$in" ]]; then
  echo "File not found: $in" >&2
  exit 1
fi

if [[ "$out" == "" ]]; then
  echo "Missing --out" >&2
  usage
fi

mkdir -p "$(dirname "$out")"

if [[ "$index" != "" ]]; then
  ffmpeg -hide_banner -loglevel error -y \
    -i "$in" \
    -vf "select=eq(n\\,${index})" \
    -vframes 1 \
    "$out"
elif [[ "$time" != "" ]]; then
  ffmpeg -hide_banner -loglevel error -y \
    -ss "$time" \
    -i "$in" \
    -frames:v 1 \
    "$out"
else
  ffmpeg -hide_banner -loglevel error -y \
    -i "$in" \
    -vf "select=eq(n\\,0)" \
    -vframes 1 \
    "$out"
fi

echo "$out"
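The `--index` branch in frame.sh builds an ffmpeg `select` filter expression, and the comma must reach ffmpeg escaped (`\,`), which is why the script double-escapes it inside double quotes. A standalone sketch of the exact string the script hands to ffmpeg:

```shell
index=0
# In double quotes, \\ collapses to a single backslash for ffmpeg
vf="select=eq(n\\,${index})"
echo "$vf"   # → select=eq(n\,0)
```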
45
openclaw/skills/voice-call/SKILL.md
Normal file
@@ -0,0 +1,45 @@
---
name: voice-call
description: Start voice calls via the OpenClaw voice-call plugin.
metadata:
  {
    "openclaw":
      {
        "emoji": "📞",
        "skillKey": "voice-call",
        "requires": { "config": ["plugins.entries.voice-call.enabled"] },
      },
  }
---

# Voice Call

Use the voice-call plugin to start or inspect calls (Twilio, Telnyx, Plivo, or mock).

## CLI

```bash
openclaw voicecall call --to "+15555550123" --message "Hello from OpenClaw"
openclaw voicecall status --call-id <id>
```

## Tool

Use `voice_call` for agent-initiated calls.

Actions:

- `initiate_call` (message, to?, mode?)
- `continue_call` (callId, message)
- `speak_to_user` (callId, message)
- `end_call` (callId)
- `get_status` (callId)

Notes:

- Requires the voice-call plugin to be enabled.
- Plugin config lives under `plugins.entries.voice-call.config`.
- Twilio config: `provider: "twilio"` + `twilio.accountSid/authToken` + `fromNumber`.
- Telnyx config: `provider: "telnyx"` + `telnyx.apiKey/connectionId` + `fromNumber`.
- Plivo config: `provider: "plivo"` + `plivo.authId/authToken` + `fromNumber`.
- Dev fallback: `provider: "mock"` (no network).
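Putting the Twilio note above together, a config sketch for `plugins.entries.voice-call.config` might look like this (the nesting is inferred from the notes, the exact schema is defined by the plugin, and all values are placeholders):

```json
{
  "provider": "twilio",
  "twilio": {
    "accountSid": "ACxxxxxxxxxxxxxxxx",
    "authToken": "your-auth-token"
  },
  "fromNumber": "+15555550100"
}
```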
72
openclaw/skills/wacli/SKILL.md
Normal file
@@ -0,0 +1,72 @@
---
name: wacli
description: Send WhatsApp messages to other people or search/sync WhatsApp history via the wacli CLI (not for normal user chats).
homepage: https://wacli.sh
metadata:
  {
    "openclaw":
      {
        "emoji": "📱",
        "requires": { "bins": ["wacli"] },
        "install":
          [
            {
              "id": "brew",
              "kind": "brew",
              "formula": "steipete/tap/wacli",
              "bins": ["wacli"],
              "label": "Install wacli (brew)",
            },
            {
              "id": "go",
              "kind": "go",
              "module": "github.com/steipete/wacli/cmd/wacli@latest",
              "bins": ["wacli"],
              "label": "Install wacli (go)",
            },
          ],
      },
  }
---

# wacli

Use `wacli` only when the user explicitly asks you to message someone else on WhatsApp, or when they ask to sync/search WhatsApp history.
Do NOT use `wacli` for normal user chats; OpenClaw routes WhatsApp conversations automatically.
If the user is chatting with you on WhatsApp, do not reach for this tool unless they ask you to contact a third party.

Safety

- Require an explicit recipient + message text.
- Confirm the recipient + message before sending.
- If anything is ambiguous, ask a clarifying question.

Auth + sync

- `wacli auth` (QR login + initial sync)
- `wacli sync --follow` (continuous sync)
- `wacli doctor`

Find chats + messages

- `wacli chats list --limit 20 --query "name or number"`
- `wacli messages search "query" --limit 20 --chat <jid>`
- `wacli messages search "invoice" --after 2025-01-01 --before 2025-12-31`

History backfill

- `wacli history backfill --chat <jid> --requests 2 --count 50`

Send

- Text: `wacli send text --to "+14155551212" --message "Hello! Are you free at 3pm?"`
- Group: `wacli send text --to "1234567890-123456789@g.us" --message "Running 5 min late."`
- File: `wacli send file --to "+14155551212" --file /path/agenda.pdf --caption "Agenda"`

Notes

- Store dir: `~/.wacli` (override with `--store`).
- Use `--json` for machine-readable output when parsing.
- Backfill requires your phone to be online; results are best-effort.
- The WhatsApp CLI is not needed for routine user chats; it's for messaging other people.
- JIDs: direct chats look like `<number>@s.whatsapp.net`; groups look like `<id>@g.us` (use `wacli chats list` to find them).
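The JID convention in the last note can be checked mechanically before sending; a minimal sketch (pure shell, no wacli call, the sample JID is made up):

```shell
# Classify a WhatsApp JID by its domain suffix
jid="1234567890-123456789@g.us"
case "$jid" in
  *@g.us)           kind="group"   ;;
  *@s.whatsapp.net) kind="direct"  ;;
  *)                kind="unknown" ;;
esac
echo "$kind"   # → group
```

A check like this is a cheap guard before `wacli send text --to "$jid" ...` when the recipient came from parsed output.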
112
openclaw/skills/weather/SKILL.md
Normal file
@@ -0,0 +1,112 @@
---
name: weather
description: "Get current weather and forecasts via wttr.in or Open-Meteo. Use when: user asks about weather, temperature, or forecasts for any location. NOT for: historical weather data, severe weather alerts, or detailed meteorological analysis. No API key needed."
homepage: https://wttr.in/:help
metadata: { "openclaw": { "emoji": "🌤️", "requires": { "bins": ["curl"] } } }
---

# Weather Skill

Get current weather conditions and forecasts.

## When to Use

✅ **USE this skill when:**

- "What's the weather?"
- "Will it rain today/tomorrow?"
- "Temperature in [city]"
- "Weather forecast for the week"
- Travel planning weather checks

## When NOT to Use

❌ **DON'T use this skill when:**

- Historical weather data → use weather archives/APIs
- Climate analysis or trends → use specialized data sources
- Hyper-local microclimate data → use local sensors
- Severe weather alerts → check official NWS sources
- Aviation/marine weather → use specialized services (METAR, etc.)

## Location

Always include a city, region, or airport code in weather queries.

## Commands

### Current Weather

```bash
# One-line summary
curl "wttr.in/London?format=3"

# Detailed current conditions
curl "wttr.in/London?0"

# Specific city
curl "wttr.in/New+York?format=3"
```

### Forecasts

```bash
# 3-day forecast
curl "wttr.in/London"

# Week forecast
curl "wttr.in/London?format=v2"

# Specific day (0=today, 1=tomorrow, 2=day after)
curl "wttr.in/London?1"
```

### Format Options

```bash
# One-liner
curl "wttr.in/London?format=%l:+%c+%t+%w"

# JSON output
curl "wttr.in/London?format=j1"

# PNG image
curl "wttr.in/London.png"
```

### Format Codes

- `%c` — Weather condition emoji
- `%t` — Temperature
- `%f` — "Feels like"
- `%w` — Wind
- `%h` — Humidity
- `%p` — Precipitation
- `%l` — Location

## Quick Responses

**"What's the weather?"**

```bash
curl -s "wttr.in/London?format=%l:+%c+%t+(feels+like+%f),+%w+wind,+%h+humidity"
```

**"Will it rain?"**

```bash
curl -s "wttr.in/London?format=%l:+%c+%p"
```

**"Weekend forecast"**

```bash
curl "wttr.in/London?format=v2"
```

## Notes

- No API key needed (uses wttr.in)
- Rate limited; don't spam requests
- Works for most global cities
- Supports airport codes: `curl wttr.in/ORD`
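The description also names Open-Meteo as a keyless alternative, though the commands above only cover wttr.in. A hedged sketch of an Open-Meteo request (endpoint shape from memory; verify against the Open-Meteo docs, and the curl line is left commented so nothing hits the network):

```shell
lat=51.5074; lon=-0.1278    # London
url="https://api.open-meteo.com/v1/forecast?latitude=${lat}&longitude=${lon}&current_weather=true"
echo "$url"
# curl -s "$url"            # JSON response; needs coordinates, not city names
```

Unlike wttr.in, Open-Meteo takes coordinates rather than place names, so a geocoding step is needed first.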
461
openclaw/skills/xurl/SKILL.md
Normal file
@@ -0,0 +1,461 @@
---
name: xurl
description: A CLI tool for making authenticated requests to the X (Twitter) API. Use this skill when you need to post tweets, reply, quote, search, read posts, manage followers, send DMs, upload media, or interact with any X API v2 endpoint.
metadata:
  {
    "openclaw":
      {
        "emoji": "𝕏",
        "requires": { "bins": ["xurl"] },
        "install":
          [
            {
              "id": "brew",
              "kind": "brew",
              "formula": "xdevplatform/tap/xurl",
              "bins": ["xurl"],
              "label": "Install xurl (brew)",
            },
            {
              "id": "npm",
              "kind": "npm",
              "package": "@xdevplatform/xurl",
              "bins": ["xurl"],
              "label": "Install xurl (npm)",
            },
          ],
      },
  }
---

# xurl — Agent Skill Reference

`xurl` is a CLI tool for the X API. It supports both **shortcut commands** (human/agent‑friendly one‑liners) and **raw curl‑style** access to any v2 endpoint. All commands return JSON to stdout.

---

## Installation

### Homebrew (macOS)

```bash
brew install --cask xdevplatform/tap/xurl
```

### npm

```bash
npm install -g @xdevplatform/xurl
```

### Shell script

```bash
curl -fsSL https://raw.githubusercontent.com/xdevplatform/xurl/main/install.sh | bash
```

Installs to `~/.local/bin`. If that's not in your PATH, the script will tell you what to add.

### Go

```bash
go install github.com/xdevplatform/xurl@latest
```

---

## Prerequisites

This skill requires the `xurl` CLI utility: <https://github.com/xdevplatform/xurl>.

Before using any command you must be authenticated. Run `xurl auth status` to check.

### Secret Safety (Mandatory)

- Never read, print, parse, summarize, upload, or send `~/.xurl` (or copies of it) to the LLM context.
- Never ask the user to paste credentials/tokens into chat.
- The user must fill `~/.xurl` with the required secrets manually on their own machine.
- Do not recommend or execute auth commands with inline secrets in agent/LLM sessions.
- Warn that using CLI secret options in agent sessions can leak credentials (prompt/context, logs, shell history).
- Never use `--verbose` / `-v` in agent/LLM sessions; it can expose sensitive headers/tokens in output.
- Sensitive flags that must never be used in agent commands: `--bearer-token`, `--consumer-key`, `--consumer-secret`, `--access-token`, `--token-secret`, `--client-id`, `--client-secret`.
- To verify whether at least one app with credentials is already registered, run: `xurl auth status`.

### Register an app (recommended)

App credential registration must be done manually by the user outside the agent/LLM session.
After credentials are registered, authenticate with:

```bash
xurl auth oauth2
```

For multiple pre-configured apps, switch between them:

```bash
xurl auth default prod-app        # set default app
xurl auth default prod-app alice  # set default app + user
xurl --app dev-app /2/users/me    # one-off override
```

### Other auth methods

Examples with inline secret flags are intentionally omitted. If OAuth1 or app-only auth is needed, the user must run those commands manually outside agent/LLM context.

Tokens are persisted to `~/.xurl` in YAML format. Each app has its own isolated tokens. Do not read this file through the agent/LLM. Once authenticated, every command below will auto‑attach the right `Authorization` header.

---

## Quick Reference

| Action                    | Command                                               |
| ------------------------- | ----------------------------------------------------- |
| Post                      | `xurl post "Hello world!"`                            |
| Reply                     | `xurl reply POST_ID "Nice post!"`                     |
| Quote                     | `xurl quote POST_ID "My take"`                        |
| Delete a post             | `xurl delete POST_ID`                                 |
| Read a post               | `xurl read POST_ID`                                   |
| Search posts              | `xurl search "QUERY" -n 10`                           |
| Who am I                  | `xurl whoami`                                         |
| Look up a user            | `xurl user @handle`                                   |
| Home timeline             | `xurl timeline -n 20`                                 |
| Mentions                  | `xurl mentions -n 10`                                 |
| Like                      | `xurl like POST_ID`                                   |
| Unlike                    | `xurl unlike POST_ID`                                 |
| Repost                    | `xurl repost POST_ID`                                 |
| Undo repost               | `xurl unrepost POST_ID`                               |
| Bookmark                  | `xurl bookmark POST_ID`                               |
| Remove bookmark           | `xurl unbookmark POST_ID`                             |
| List bookmarks            | `xurl bookmarks -n 10`                                |
| List likes                | `xurl likes -n 10`                                    |
| Follow                    | `xurl follow @handle`                                 |
| Unfollow                  | `xurl unfollow @handle`                               |
| List following            | `xurl following -n 20`                                |
| List followers            | `xurl followers -n 20`                                |
| Block                     | `xurl block @handle`                                  |
| Unblock                   | `xurl unblock @handle`                                |
| Mute                      | `xurl mute @handle`                                   |
| Unmute                    | `xurl unmute @handle`                                 |
| Send DM                   | `xurl dm @handle "message"`                           |
| List DMs                  | `xurl dms -n 10`                                      |
| Upload media              | `xurl media upload path/to/file.mp4`                  |
| Media status              | `xurl media status MEDIA_ID`                          |
| **App Management**        |                                                       |
| Register app              | Manual, outside agent (do not pass secrets via agent) |
| List apps                 | `xurl auth apps list`                                 |
| Update app creds          | Manual, outside agent (do not pass secrets via agent) |
| Remove app                | `xurl auth apps remove NAME`                          |
| Set default (interactive) | `xurl auth default`                                   |
| Set default (command)     | `xurl auth default APP_NAME [USERNAME]`               |
| Use app per-request       | `xurl --app NAME /2/users/me`                         |
| Auth status               | `xurl auth status`                                    |

> **Post IDs vs URLs:** Anywhere `POST_ID` appears above, you can also paste a full post URL (e.g. `https://x.com/user/status/1234567890`) — xurl extracts the ID automatically.

> **Usernames:** The leading `@` is optional. `@elonmusk` and `elonmusk` both work.

---
## Command Details

### Posting

```bash
# Simple post
xurl post "Hello world!"

# Post with media (upload first, then attach)
xurl media upload photo.jpg   # → note the media_id from the response
xurl post "Check this out" --media-id MEDIA_ID

# Multiple media
xurl post "Thread pics" --media-id 111 --media-id 222

# Reply to a post (by ID or URL)
xurl reply 1234567890 "Great point!"
xurl reply https://x.com/user/status/1234567890 "Agreed!"

# Reply with media
xurl reply 1234567890 "Look at this" --media-id MEDIA_ID

# Quote a post
xurl quote 1234567890 "Adding my thoughts"

# Delete your own post
xurl delete 1234567890
```

### Reading

```bash
# Read a single post (returns author, text, metrics, entities)
xurl read 1234567890
xurl read https://x.com/user/status/1234567890

# Search recent posts (default 10 results)
xurl search "golang"
xurl search "from:elonmusk" -n 20
xurl search "#buildinpublic lang:en" -n 15
```

### User Info

```bash
# Your own profile
xurl whoami

# Look up any user
xurl user elonmusk
xurl user @XDevelopers
```

### Timelines & Mentions

```bash
# Home timeline (reverse chronological)
xurl timeline
xurl timeline -n 25

# Your mentions
xurl mentions
xurl mentions -n 20
```

### Engagement

```bash
# Like / unlike
xurl like 1234567890
xurl unlike 1234567890

# Repost / undo
xurl repost 1234567890
xurl unrepost 1234567890

# Bookmark / remove
xurl bookmark 1234567890
xurl unbookmark 1234567890

# List your bookmarks / likes
xurl bookmarks -n 20
xurl likes -n 20
```

### Social Graph

```bash
# Follow / unfollow
xurl follow @XDevelopers
xurl unfollow @XDevelopers

# List who you follow / your followers
xurl following -n 50
xurl followers -n 50

# List another user's following/followers
xurl following --of elonmusk -n 20
xurl followers --of elonmusk -n 20

# Block / unblock
xurl block @spammer
xurl unblock @spammer

# Mute / unmute
xurl mute @annoying
xurl unmute @annoying
```

### Direct Messages

```bash
# Send a DM
xurl dm @someuser "Hey, saw your post!"

# List recent DM events
xurl dms
xurl dms -n 25
```

### Media Upload

```bash
# Upload a file (auto‑detects type for images/videos)
xurl media upload photo.jpg
xurl media upload video.mp4

# Specify type and category explicitly
xurl media upload --media-type image/jpeg --category tweet_image photo.jpg

# Check processing status (videos need server‑side processing)
xurl media status MEDIA_ID
xurl media status --wait MEDIA_ID   # poll until done

# Full workflow: upload, then post
xurl media upload meme.png          # response includes the media id
xurl post "lol" --media-id MEDIA_ID
```

---

## Global Flags

These flags work on every command:

| Flag         | Short | Description                                                        |
| ------------ | ----- | ------------------------------------------------------------------ |
| `--app`      |       | Use a specific registered app for this request (overrides default) |
| `--auth`     |       | Force auth type: `oauth1`, `oauth2`, or `app`                      |
| `--username` | `-u`  | Which OAuth2 account to use (if you have multiple)                 |
| `--verbose`  | `-v`  | Forbidden in agent/LLM sessions (can leak auth headers/tokens)     |
| `--trace`    | `-t`  | Add `X-B3-Flags: 1` trace header                                   |

---

## Raw API Access

The shortcut commands cover the most common operations. For anything else, use xurl's raw curl‑style mode — it works with **any** X API v2 endpoint:

```bash
# GET request (default)
xurl /2/users/me

# POST with JSON body
xurl -X POST /2/tweets -d '{"text":"Hello world!"}'

# PUT, PATCH, DELETE
xurl -X DELETE /2/tweets/1234567890

# Custom headers
xurl -H "Content-Type: application/json" /2/some/endpoint

# Force streaming mode
xurl -s /2/tweets/search/stream

# Full URLs also work
xurl https://api.x.com/2/users/me
```

---

## Streaming

Streaming endpoints are auto‑detected. Known streaming endpoints include:

- `/2/tweets/search/stream`
- `/2/tweets/sample/stream`
- `/2/tweets/sample10/stream`

You can force streaming on any endpoint with `-s`:

```bash
xurl -s /2/some/endpoint
```

---

## Output Format

All commands return **JSON** to stdout, pretty‑printed with syntax highlighting. The output structure matches the X API v2 response format. A typical response looks like:

```json
{
  "data": {
    "id": "1234567890",
    "text": "Hello world!"
  }
}
```

Errors are also returned as JSON:

```json
{
  "errors": [
    {
      "message": "Not authorized",
      "code": 403
    }
  ]
}
```
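Because the output is plain JSON, downstream steps can capture and parse it; a minimal sketch against a canned response (no API call is made, and the `sed` extraction is a deliberately crude, dependency-free stand-in for `jq -r .data.id`):

```shell
# Canned xurl-style response
response='{"data":{"id":"1234567890","text":"Hello world!"}}'

# Pull out data.id
id=$(printf '%s' "$response" | sed -n 's/.*"id":"\([0-9]*\)".*/\1/p')
echo "$id"   # → 1234567890
```

In practice `xurl read POST_ID | jq -r .data.id` is the more robust version of the same idea.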
---

## Common Workflows

### Post with an image

```bash
# 1. Upload the image
xurl media upload photo.jpg
# 2. Copy the media_id from the response, then post
xurl post "Check out this photo!" --media-id MEDIA_ID
```

### Reply to a conversation

```bash
# 1. Read the post to understand context
xurl read https://x.com/user/status/1234567890
# 2. Reply
xurl reply 1234567890 "Here are my thoughts..."
```

### Search and engage

```bash
# 1. Search for relevant posts
xurl search "topic of interest" -n 10
# 2. Like an interesting one
xurl like POST_ID_FROM_RESULTS
# 3. Reply to it
xurl reply POST_ID_FROM_RESULTS "Great point!"
```

### Check your activity

```bash
# See who you are
xurl whoami
# Check your mentions
xurl mentions -n 20
# Check your timeline
xurl timeline -n 20
```

### Set up multiple apps

```bash
# App credentials must already be configured manually outside agent/LLM context.
# Authenticate users on each pre-configured app
xurl auth default prod
xurl auth oauth2                 # authenticates on the prod app

xurl auth default staging
xurl auth oauth2                 # authenticates on the staging app

# Switch between them
xurl auth default prod alice     # prod app, alice user
xurl --app staging /2/users/me   # one-off request against staging
```

---

## Error Handling

- Non‑zero exit code on any error.
- API errors are printed as JSON to stdout (so you can still parse them).
- Auth errors suggest re‑running `xurl auth oauth2` or checking your tokens.
- If a command requires your user ID (like, repost, bookmark, follow, etc.), xurl will automatically fetch it via `/2/users/me`. If that fails, you'll see an auth error.
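Since every failure surfaces as a non-zero exit code, a simple bounded-retry wrapper is easy to sketch (hypothetical: `call` is a stand-in for an `xurl` invocation, and the backoff is commented out so the demo runs instantly):

```shell
# Stand-in for an xurl call that keeps failing (e.g. HTTP 429)
call() { return 1; }

attempt=0
max=3
until call || (( attempt >= max )); do
  attempt=$((attempt + 1))
  # real code would back off here: sleep $((2 ** attempt))
done
echo "$attempt"   # → 3
```

The `until cmd || limit` shape means the loop ends as soon as the call succeeds or the attempt budget is spent, whichever comes first.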
---

## Notes

- **Rate limits:** The X API enforces rate limits per endpoint. If you get a 429 error, wait and retry. Write endpoints (post, reply, like, repost) have stricter limits than read endpoints.
- **Scopes:** OAuth 2.0 tokens are requested with broad scopes. If you get a 403 on a specific action, your token may lack the required scope — re‑run `xurl auth oauth2` to get a fresh token.
- **Token refresh:** OAuth 2.0 tokens auto‑refresh when expired. No manual intervention needed.
- **Multiple apps:** Each app has its own isolated credentials and tokens. Configure credentials manually outside agent/LLM context, then switch with `xurl auth default` or `--app`.
- **Multiple accounts:** You can authenticate multiple OAuth 2.0 accounts per app and switch between them with `--username` / `-u`, or set a default with `xurl auth default APP USER`.
- **Default user:** When no `-u` flag is given, xurl uses the default user for the active app (set via `xurl auth default`). If no default user is set, it uses the first available token.
- **Token storage:** `~/.xurl` is YAML. Each app stores its own credentials and tokens. Never read this file or send it to LLM context.