GRAGO
[ an OpenClaw skill // v0.1.0 ]
// THE PROBLEM
You're running OpenClaw. Your agent is sharp — but every research task, every web fetch, every "go look that up" burns tokens. Cloud models are expensive, and those costs add up fast.
Meanwhile, you've got a Mac Mini (or any capable machine) sitting there 24/7, doing nothing. It could be running a local LLM for free. But there's a catch:
⚠ Small local LLMs can't use tools.
They can't browse the web, call APIs, or fetch live data.
They're powerful — but blind.
So your expensive cloud model keeps doing all the heavy lifting, and your hardware collects dust.
// THE FIX
Grago bridges that gap. It's an OpenClaw skill that lets your agent delegate research and fetch tasks to your local machine — for free.
Grago handles the tool work that local LLMs can't: it fetches URLs, hits APIs, reads files, and transforms data with shell scripts. Then it pipes the results directly into your local model with a focused prompt.
Your OpenClaw agent stays smart. Your local machine does the legwork. Your token bill drops.
# Your OpenClaw agent delegates to Grago:
grago fetch "https://news.ycombinator.com/" \
  --analyze "Summarize the top 5 stories" \
  --model gemma2
# Multi-source research, analyzed locally:
grago research --sources sources.yaml \
  --prompt "What are competitors charging?"
# Chain any shell command into your local model:
grago pipe \
  --fetch "curl -s https://api.example.com/stats" \
  --transform "jq .results" \
  --analyze "Flag anything unusual"
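For a feel of the research command's input, a sources.yaml might look something like this — a hypothetical sketch for illustration; the bundled templates define the real schema:

```yaml
# Hypothetical sources.yaml — see the bundled examples for the real format.
sources:
  - name: competitor-pricing
    url: https://example.com/pricing
  - name: hn-front-page
    url: https://news.ycombinator.com/
```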
// WHO IS THIS FOR?
- You use OpenClaw and your token costs are real
- You have a Mac Mini, M-series Mac, or any machine that can run Ollama
- You want your agent doing 24/7 research without a cloud bill
- You want your local models earning their keep
- You keep your data on your own hardware
// WHAT YOU GET
- grago.sh — the core fetch + pipe engine
- SKILL.md — drop-in OpenClaw skill (ready to use)
- install.sh — installs Ollama + a model automatically
- README.md — full setup and usage docs
- sources.yaml examples — research templates ready to run
- Lifetime updates via GitHub
REQUIRES: OpenClaw (any plan) + a machine capable of running Ollama
TESTED ON: Mac Mini M2, Mac Mini M4, MacBook Pro M-series
WORKS WITH: Gemma, Mistral, Llama, Qwen, GLM, and more
ONE-TIME PURCHASE. NO SUBSCRIPTION. NO CLOUD FEES.
$5.00
► GET GRAGO
After payment you'll go straight to GitHub to download, install, and start saving tokens.
// also from underclassic
LOBSTER TRAP
[ shared workspace for humans + agents // v0.1.0 ]
// THE PROBLEM
Your AI agent does work. Lots of it. Screenshots, research, code snippets, API keys — all buried in chat logs you'll never scroll back through.
There's no shared surface. No place where both of you can leave things for each other. The work disappears into the conversation.
// THE FIX
Lobster Trap is a persistent shared workspace — a real-time board where you and your agent can both read and write. Drop a screenshot, pin a note, stash a key, paste a code block.
It runs on your own hardware. Your data never leaves. WebSocket sync keeps it live between your browser and your agent's tools.
📌 Cork board — drag and drop screenshots, notes, code
🖼 Screenshots — AI-named, tagged, searchable
📝 Notes — persistent, pinnable, always there
🔑 Key vault — store and mask credentials locally
⚡ Real-time — WebSocket sync between you and your agent
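To give a feel for the agent side of that sync, here's a hypothetical sketch of pushing a note onto the board over WebSocket. The endpoint, port, and message schema below are assumptions for illustration, not the shipped protocol:

```python
# Hypothetical sketch of an agent posting to a Lobster Trap board.
# BOARD_URL and the message schema are assumptions — check the docs
# that ship with your install for the real protocol.
import asyncio
import json

BOARD_URL = "ws://localhost:8765/board"  # assumed default address


def make_note(text: str, pinned: bool = False) -> str:
    """Serialize a note message (assumed schema)."""
    return json.dumps({"type": "note", "text": text, "pinned": pinned})


async def post_note(text: str) -> None:
    """Connect to the board and drop a pinned note."""
    import websockets  # third-party: pip install websockets

    async with websockets.connect(BOARD_URL) as ws:
        await ws.send(make_note(text, pinned=True))


# Usage (requires a running board):
# asyncio.run(post_note("Research finished — summary pinned."))
```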
// WHO IS THIS FOR?
- You use OpenClaw and your agent does a lot of background work
- You want a shared surface — not just a chat log
- You want your data on your own machine
- You have a Mac Mini, VPS, or Raspberry Pi to run it on
REQUIRES: Python 3.10+ · A machine to host it on
OPTIONAL: xAI or OpenAI key for AI screenshot naming
TESTED ON: Mac Mini M2/M4, Hetzner VPS, Raspberry Pi 4
SELF-HOSTED. ONE-TIME PURCHASE. YOUR DATA, YOUR MACHINE.
$5.00
► GET LOBSTER TRAP
After payment you'll get your download immediately. Set it up on your own machine and launch your board.