From a6a521688534e33b7e9e54efd45a4973ad45191b Mon Sep 17 00:00:00 2001 From: LMKaelbot Date: Tue, 24 Feb 2026 21:44:21 +0000 Subject: [PATCH 1/4] =?UTF-8?q?docs:=20add=20Pitfalls=20&=20Patterns=20sec?= =?UTF-8?q?tion=20=E2=80=94=20race=20condition=20fix=20for=20sub-agent=20f?= =?UTF-8?q?ile=20edits?= MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit When main session + sub-agents both edit a shared task file, the edit tool's exact-match requirement causes silent failures. Solution: split into an immutable goals file (main session only) and an append-only task log (sub-agents only append, never edit). Also adds tip to keep AUTONOMOUS.md token-light by archiving completed tasks to a separate file loaded only on-demand. Verified in production over ~1 day of autonomous heartbeat workflows. --- usecases/overnight-mini-app-builder.md | 34 ++++++++++++++++++++++++++ 1 file changed, 34 insertions(+) diff --git a/usecases/overnight-mini-app-builder.md b/usecases/overnight-mini-app-builder.md index b01520b..be0ccf8 100644 --- a/usecases/overnight-mini-app-builder.md +++ b/usecases/overnight-mini-app-builder.md @@ -81,6 +81,40 @@ in real-time as you complete tasks. - For overnight app building specifically: explicitly tell it to build MVPs and not to overcomplicate. You'll wake up every morning with a new surprise. - This compounds over time — the agent learns what kinds of tasks are most helpful and adjusts. +## Pitfalls & Patterns (Learned in Production) + +### ⚠️ Race Condition: Sub-Agents Editing Shared Files + +When you run this workflow with sub-agents, both the main session and spawned sub-agents may try to update the same task file (e.g., your Kanban/AUTONOMOUS.md). This causes silent failures. + +**Why it happens:** OpenClaw's `edit` tool requires an exact `oldText` match. If a sub-agent updates a line between the time your main session reads the file and tries to edit it, the text no longer matches — the edit silently fails. 
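The failure mode is easier to see in code. Below is a minimal sketch of exact-match edit semantics — not OpenClaw's actual `edit` tool, just a hypothetical stand-in that behaves the way described above:

```python
# Minimal sketch of the race (hypothetical helper, not OpenClaw's real edit tool).
# An exact-match edit fails silently when the target text changed after it was read.
from pathlib import Path

def edit_exact(path: Path, old_text: str, new_text: str) -> bool:
    """Replace old_text with new_text only if old_text still matches exactly."""
    content = path.read_text(encoding="utf-8")
    if old_text not in content:
        return False  # silent failure: nothing matched, nothing written
    path.write_text(content.replace(old_text, new_text, 1), encoding="utf-8")
    return True

tasks = Path("AUTONOMOUS.md")
tasks.write_text("- [ ] TASK-001: Research competitors\n", encoding="utf-8")

# Main session reads the file and plans an edit against this snapshot...
planned_old = "- [ ] TASK-001: Research competitors"

# ...but a sub-agent rewrites the same line first.
tasks.write_text("- [x] TASK-001: Research competitors (done)\n", encoding="utf-8")

# The main session's edit now fails: its snapshot of the line is stale.
ok = edit_exact(tasks, planned_old, "- [x] TASK-001: Research competitors")
print(ok)  # False — the update is dropped without any error
```

The main session never learns its edit was lost, which is exactly why the fix below takes shared files out of the edit path entirely.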
+ +**The fix: split your task file into two roles:** + +1. **`AUTONOMOUS.md`** — stays small and clean. Contains only goals + open backlog. Only the main session touches this. Sub-agents never edit it. + +2. **`memory/tasks-log.md`** — append-only log. Sub-agents only ever *add new lines at the bottom*. Never edit existing lines. + +```markdown +# tasks-log.md — Completed Tasks (append-only) +# Sub-agents: always append to the END. Never edit existing lines. + +### 2026-02-24 +- ✅ TASK-001: Research competitors → research/competitors.md +- ✅ TASK-002: Draft blog post → drafts/post-1.md +``` + +This pattern is borrowed from Git's commit log: you never rewrite history, you only add new commits. It eliminates race conditions entirely and has a bonus: `AUTONOMOUS.md` stays small, so it costs fewer tokens every time it's loaded in a heartbeat. + +**Rule to give your agent:** In sub-agent spawn instructions, always include: +> "When done, append a ✅ line to `memory/tasks-log.md`. Never edit `AUTONOMOUS.md` directly." + +### 💡 Keep AUTONOMOUS.md Token-Light + +The task tracking file gets loaded on every heartbeat poll. If it grows unbounded with completed tasks, you'll burn tokens unnecessarily. + +Keep `AUTONOMOUS.md` under ~50 lines: goals (one-liners) + open backlog only. Archive everything completed to a separate file that is only read on-demand. + ## Based On Inspired by [Alex Finn](https://www.youtube.com/watch?v=UTCi_q6iuCM&t=414s) and his [video on life-changing OpenClaw use cases](https://www.youtube.com/watch?v=41_TNGDDnfQ). 
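The append-only rule from the Pitfalls section above is also easy to enforce mechanically. A sketch of a helper sub-agents could call instead of ever editing the log in place — the function name is hypothetical; only the file path follows the convention above:

```python
# Hypothetical helper for the append-only pattern: sub-agents call this
# instead of editing memory/tasks-log.md. Opening in mode "a" means each
# write lands at the current end of file, so concurrent writers can only
# add lines — existing history is never rewritten.
from pathlib import Path

LOG = Path("memory/tasks-log.md")

def log_done(task_id: str, summary: str) -> None:
    """Append one completed-task line to the shared log."""
    LOG.parent.mkdir(parents=True, exist_ok=True)
    with LOG.open("a", encoding="utf-8") as f:
        f.write(f"- ✅ {task_id}: {summary}\n")

log_done("TASK-003", "Ship landing page → site/index.html")
```

Because the helper never reads or rewrites earlier lines, there is no stale snapshot to mismatch — the exact-match race cannot occur by construction.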
From 5750847c5eb1f3c22bb32ce31f5fa3a9fef9cf50 Mon Sep 17 00:00:00 2001 From: hesamsheikh Date: Fri, 27 Feb 2026 03:02:16 +0100 Subject: [PATCH 2/4] added Discord server --- README.md | 5 ++++- 1 file changed, 4 insertions(+), 1 deletion(-) diff --git a/README.md b/README.md index 02543eb..c34156f 100644 --- a/README.md +++ b/README.md @@ -13,6 +13,10 @@ [![Awesome](https://awesome.re/badge.svg)](https://awesome.re) ![Use Cases](https://img.shields.io/badge/usecases-30-blue?style=flat-square) ![Last Update](https://img.shields.io/github/last-commit/hesamsheikh/awesome-openclaw-usecases?label=Last%20Update&style=flat-square) +[![Follow on X](https://img.shields.io/badge/Follow%20on-X-000000?style=flat-square&logo=x)](https://x.com/Hesamation) +[![Discord](https://img.shields.io/badge/Discord-Open%20Source%20AI%20Builders-5865F2?style=flat-square&logo=discord&logoColor=white)](https://discord.gg/vtJykN3t) + +

Say hello on X.

# Awesome OpenClaw Use Cases @@ -21,7 +25,6 @@ Solving the bottleneck of OpenClaw adaptation: Not ~~skills~~, but finding **way > **Warning:** OpenClaw skills and third-party dependencies referenced here may have critical security vulnerabilities. Many use cases link to community-built skills, plugins, and external repos that have **not been audited by the maintainer of this list**. Always review skill source code, check requested permissions, and avoid hardcoding API keys or credentials. You are solely responsible for your own security. - ## Social Media | Name | Description | From c997bfbf0727c5ed3ea412d3e0f6ef3d5fb4296d Mon Sep 17 00:00:00 2001 From: zychenpeng Date: Sat, 28 Feb 2026 05:19:54 +0800 Subject: [PATCH 3/4] Add use case: Pre-Build Idea Validator (idea-reality-mcp) Adds a use case for automatically scanning GitHub, HN, npm, PyPI, and Product Hunt before starting any new project. Uses the idea-reality-mcp MCP server to return a reality_signal score and top competitors. Co-Authored-By: Claude Opus 4.6 --- README.md | 1 + usecases/pre-build-idea-validator.md | 104 +++++++++++++++++++++++++++ 2 files changed, 105 insertions(+) create mode 100644 usecases/pre-build-idea-validator.md diff --git a/README.md b/README.md index c34156f..8cddaf8 100644 --- a/README.md +++ b/README.md @@ -78,6 +78,7 @@ Solving the bottleneck of OpenClaw adaptation: Not ~~skills~~, but finding **way | [AI Earnings Tracker](usecases/earnings-tracker.md) | Track tech/AI earnings reports with automated previews, alerts, and detailed summaries. | | [Personal Knowledge Base (RAG)](usecases/knowledge-base-rag.md) | Build a searchable knowledge base by dropping URLs, tweets, and articles into chat. | | [Market Research & Product Factory](usecases/market-research-product-factory.md) | Mine Reddit and X for real pain points using the Last 30 Days skill, then have OpenClaw build MVPs that solve them. 
| +| [Pre-Build Idea Validator](usecases/pre-build-idea-validator.md) | Automatically scan GitHub, HN, npm, PyPI, and Product Hunt before building anything new — stop if the space is crowded, proceed if it's open. | | [Semantic Memory Search](usecases/semantic-memory-search.md) | Add vector-powered semantic search to your OpenClaw markdown memory files with hybrid retrieval and auto-sync. | ## Finance & Trading diff --git a/usecases/pre-build-idea-validator.md b/usecases/pre-build-idea-validator.md new file mode 100644 index 0000000..cc3c6a5 --- /dev/null +++ b/usecases/pre-build-idea-validator.md @@ -0,0 +1,104 @@ +# Pre-Build Idea Validator + +Before OpenClaw starts building anything new, it automatically checks whether the idea already exists across GitHub, Hacker News, npm, PyPI, and Product Hunt — and adjusts its approach based on what it finds. + +## What It Does + +- Scans 5 real data sources (GitHub, Hacker News, npm, PyPI, Product Hunt) before any code is written +- Returns a `reality_signal` score (0-100) indicating how crowded the space is +- Shows top competitors with star counts and descriptions +- Suggests pivot directions when the space is saturated +- Works as a pre-build gate: high signal = stop and discuss, low signal = proceed + +## Pain Point + +You tell your agent "build me an AI code review tool" and it happily spends 6 hours coding. Meanwhile, 143,000+ repos already exist on GitHub — the top one has 53,000 stars. The agent never checks because you never asked, and it doesn't know to look. You only discover competitors after you've invested significant time. This pattern repeats for every new project idea. + +## Skills You Need + +- [idea-reality-mcp](https://github.com/mnemox-ai/idea-reality-mcp) — MCP server that scans real data sources and returns a competition score + +## How to Set It Up + +1. Install idea-reality-mcp: + +```bash +uvx idea-reality-mcp +``` + +2. 
Add the MCP server to your OpenClaw config: + +```json +{ + "mcpServers": { + "idea-reality": { + "command": "uvx", + "args": ["idea-reality-mcp"] + } + } +} +``` + +3. Add this to your OpenClaw agent instructions: + +```text +Before starting any new project, feature, or tool, always run idea_check first. + +Rules: +- If reality_signal > 70: STOP. Report the top 3 competitors with star counts. + Ask me if I want to proceed, pivot, or abandon. +- If reality_signal 30-70: Show me the results and pivot_hints. + Suggest a niche angle that existing projects don't cover. +- If reality_signal < 30: Proceed to build. + Mention that the space is open. +- Always show the reality_signal score and top competitors before writing any code. +``` + +## Real World Example + +**You:** "Build me a CLI tool for AI code review" + +OpenClaw runs `idea_check("AI code review tool")` and responds: + +> **reality_signal: 90/100 (very high)** +> +> Top competitors: +> 1. Gitea — 53,940 stars +> 2. reviewdog — 9,104 stars +> 3. Danger (Ruby) — 5,649 stars +> +> This space has 143,000+ related repos. The top players are mature and well-funded. +> +> **Pivot suggestions:** +> - Focus on a specific language (Rust/Go-only AI code review) +> - Target a specific framework (React/Vue component review) +> - Target a specific industry (financial/medical code compliance review) +> +> Should I proceed with a differentiated angle, or pivot to something else? + +**You:** "What about an MCP server for pre-build idea validation?" + +> **reality_signal: 8/100 (very low)** +> +> 0 direct competitors found on PyPI or npm. GitHub has some tangentially related repos but none solve this exact problem. +> +> The space is wide open. Proceeding to build. + +## Variations + +- **Deep mode for serious decisions**: Use `depth="deep"` to scan all 5 sources in parallel (GitHub + HN + npm + PyPI + Product Hunt) for major project decisions. 
+- **Batch validation**: Before a hackathon, give OpenClaw a list of 10 ideas and have it rank them by `reality_signal` — lowest score = most original opportunity. +- **Web demo first**: Try without installing at [mnemox.ai/check](https://mnemox.ai/check) to see if the workflow fits your needs. + +## Key Insights + +- This prevents the most expensive mistake in building: **solving a problem that's already been solved**. +- The `reality_signal` is based on real data (repo counts, star distributions, HN discussion volume), not LLM guessing. +- A high score doesn't mean "don't build" — it means "differentiate or don't bother." +- A low score means genuine white space exists. That's where solo builders have the best odds. + +## Related Links + +- [idea-reality-mcp GitHub](https://github.com/mnemox-ai/idea-reality-mcp) +- [Web demo](https://mnemox.ai/check) (try without installing) +- [PyPI](https://pypi.org/project/idea-reality-mcp/) From a92a9fdcaa1a04eb50753dce5ab2d6fb9d98f0ff Mon Sep 17 00:00:00 2001 From: "github-actions[bot]" Date: Sat, 28 Feb 2026 14:15:17 +0000 Subject: [PATCH 4/4] Update use case count badge to 31 --- README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 8cddaf8..d7f5f21 100644 --- a/README.md +++ b/README.md @@ -11,7 +11,7 @@
[![Awesome](https://awesome.re/badge.svg)](https://awesome.re) -![Use Cases](https://img.shields.io/badge/usecases-30-blue?style=flat-square) +![Use Cases](https://img.shields.io/badge/usecases-31-blue?style=flat-square) ![Last Update](https://img.shields.io/github/last-commit/hesamsheikh/awesome-openclaw-usecases?label=Last%20Update&style=flat-square) [![Follow on X](https://img.shields.io/badge/Follow%20on-X-000000?style=flat-square&logo=x)](https://x.com/Hesamation) [![Discord](https://img.shields.io/badge/Discord-Open%20Source%20AI%20Builders-5865F2?style=flat-square&logo=discord&logoColor=white)](https://discord.gg/vtJykN3t)