Merge branch 'main' into main

This commit is contained in:
Hesam Sheikh
2026-02-28 15:50:03 +01:00
committed by GitHub
3 changed files with 144 additions and 2 deletions

@@ -11,8 +11,12 @@
<br />
[![Awesome](https://awesome.re/badge.svg)](https://awesome.re)
![Use Cases](https://img.shields.io/badge/usecases-33-blue?style=flat-square)
![Use Cases](https://img.shields.io/badge/usecases-31-blue?style=flat-square)
![Last Update](https://img.shields.io/github/last-commit/hesamsheikh/awesome-openclaw-usecases?label=Last%20Update&style=flat-square)
[![Follow on X](https://img.shields.io/badge/Follow%20on-X-000000?style=flat-square&logo=x)](https://x.com/Hesamation)
[![Discord](https://img.shields.io/badge/Discord-Open%20Source%20AI%20Builders-5865F2?style=flat-square&logo=discord&logoColor=white)](https://discord.gg/vtJykN3t)
<p><sub><a href="https://x.com/Hesamation">Say hello on X.</a></sub></p>
</div>
# Awesome OpenClaw Use Cases
@@ -21,7 +25,6 @@ Solving the bottleneck of OpenClaw adaptation: Not ~~skills~~, but finding **way
> **Warning:** OpenClaw skills and third-party dependencies referenced here may have critical security vulnerabilities. Many use cases link to community-built skills, plugins, and external repos that have **not been audited by the maintainer of this list**. Always review skill source code, check requested permissions, and avoid hardcoding API keys or credentials. You are solely responsible for your own security.
## Social Media
| Name | Description |
@@ -78,6 +81,7 @@ Solving the bottleneck of OpenClaw adaptation: Not ~~skills~~, but finding **way
| [AI Earnings Tracker](usecases/earnings-tracker.md) | Track tech/AI earnings reports with automated previews, alerts, and detailed summaries. |
| [Personal Knowledge Base (RAG)](usecases/knowledge-base-rag.md) | Build a searchable knowledge base by dropping URLs, tweets, and articles into chat. |
| [Market Research & Product Factory](usecases/market-research-product-factory.md) | Mine Reddit and X for real pain points using the Last 30 Days skill, then have OpenClaw build MVPs that solve them. |
| [Pre-Build Idea Validator](usecases/pre-build-idea-validator.md) | Automatically scan GitHub, HN, npm, PyPI, and Product Hunt before building anything new — stop if the space is crowded, proceed if it's open. |
| [Semantic Memory Search](usecases/semantic-memory-search.md) | Add vector-powered semantic search to your OpenClaw markdown memory files with hybrid retrieval and auto-sync. |
## Finance & Trading

@@ -81,6 +81,40 @@ in real-time as you complete tasks.
- For overnight app building specifically: explicitly tell it to build MVPs and not to overcomplicate. You'll wake up every morning with a new surprise.
- This compounds over time — the agent learns what kinds of tasks are most helpful and adjusts.
## Pitfalls & Patterns (Learned in Production)
### ⚠️ Race Condition: Sub-Agents Editing Shared Files
When you run this workflow with sub-agents, both the main session and spawned sub-agents may try to update the same task file (e.g., your Kanban/AUTONOMOUS.md). This causes silent failures.
**Why it happens:** OpenClaw's `edit` tool requires an exact `oldText` match. If a sub-agent updates a line between the time your main session reads the file and tries to edit it, the text no longer matches — the edit silently fails.
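The failure mode described above can be reproduced in a few lines. This is a minimal sketch, not OpenClaw's actual `edit` implementation: the `edit` function here just models "replace `oldText` only on an exact match, otherwise do nothing."

```python
# Minimal sketch (NOT OpenClaw's real edit tool) of the read/edit race:
# an exact-match edit silently fails if the file changed since it was read.

def edit(path, old_text, new_text):
    """Replace old_text with new_text; fail silently on no exact match."""
    content = open(path).read()
    if old_text not in content:
        return False  # the race: the text changed after we read it
    with open(path, "w") as f:
        f.write(content.replace(old_text, new_text, 1))
    return True

# Main session reads the task file and plans an edit...
with open("tasks.md", "w") as f:
    f.write("- [ ] TASK-001: Research competitors\n")
planned_old = "- [ ] TASK-001: Research competitors"

# ...but a sub-agent rewrites the same line first:
with open("tasks.md", "w") as f:
    f.write("- [x] TASK-001: Research competitors\n")

# The main session's edit now fails silently:
ok = edit("tasks.md", planned_old, "- [x] TASK-001: Research competitors")
print(ok)  # False — planned_old no longer matches anything
```

The edit returns `False` and the main session's update is simply lost, which is why the fix below avoids shared edits entirely.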
**The fix: split your task file into two roles:**
1. **`AUTONOMOUS.md`** — stays small and clean. Contains only goals + open backlog. Only the main session touches this. Sub-agents never edit it.
2. **`memory/tasks-log.md`** — append-only log. Sub-agents only ever *add new lines at the bottom*. Never edit existing lines.
```markdown
# tasks-log.md — Completed Tasks (append-only)
# Sub-agents: always append to the END. Never edit existing lines.
### 2026-02-24
- ✅ TASK-001: Research competitors → research/competitors.md
- ✅ TASK-002: Draft blog post → drafts/post-1.md
```
This pattern is borrowed from Git's commit log: you never rewrite history, you only add new commits. It eliminates race conditions entirely and has a bonus: `AUTONOMOUS.md` stays small, so it costs fewer tokens every time it's loaded in a heartbeat.
**Rule to give your agent:** In sub-agent spawn instructions, always include:
> "When done, append a ✅ line to `memory/tasks-log.md`. Never edit `AUTONOMOUS.md` directly."
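The append-only discipline can be sketched as a one-function helper. `log_done` and its default path are illustrative, not part of OpenClaw: the point is that append mode (`"a"`) never needs to match existing text, so concurrent writers cannot invalidate each other.

```python
# Sketch of the append-only rule: sub-agents open the log in append mode
# and add a line at the end; they never read-modify-write existing lines.

def log_done(task_id: str, note: str, path: str = "tasks-log.md") -> None:
    """Append one completion line (a ✅ entry) to the shared log."""
    with open(path, "a") as f:
        f.write(f"- ✅ {task_id}: {note}\n")

log_done("TASK-003", "Summarize inbox")
log_done("TASK-004", "Draft weekly report")
print(open("tasks-log.md").read())
```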
### 💡 Keep AUTONOMOUS.md Token-Light
The task tracking file gets loaded on every heartbeat poll. If it grows unbounded with completed tasks, you'll burn tokens unnecessarily.
Keep `AUTONOMOUS.md` under ~50 lines: goals (one-liners) + open backlog only. Archive everything completed to a separate file that is only read on-demand.
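A token-light `AUTONOMOUS.md` might look like the sketch below (the goals and task IDs are hypothetical placeholders):

```markdown
# AUTONOMOUS.md — goals + open backlog only (keep under ~50 lines)

## Goals
- Grow the newsletter
- Ship one small tool per week

## Backlog
- [ ] TASK-014: Draft outreach email
- [ ] TASK-015: Prototype landing page
```

Everything completed moves out of this file and into the append-only log, so the per-heartbeat token cost stays flat.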
## Based On
Inspired by [Alex Finn](https://www.youtube.com/watch?v=UTCi_q6iuCM&t=414s) and his [video on life-changing OpenClaw use cases](https://www.youtube.com/watch?v=41_TNGDDnfQ).

@@ -0,0 +1,104 @@
# Pre-Build Idea Validator
Before OpenClaw starts building anything new, it automatically checks whether the idea already exists across GitHub, Hacker News, npm, PyPI, and Product Hunt — and adjusts its approach based on what it finds.
## What It Does
- Scans 5 real data sources (GitHub, Hacker News, npm, PyPI, Product Hunt) before any code is written
- Returns a `reality_signal` score (0-100) indicating how crowded the space is
- Shows top competitors with star counts and descriptions
- Suggests pivot directions when the space is saturated
- Works as a pre-build gate: high signal = stop and discuss, low signal = proceed
## Pain Point
You tell your agent "build me an AI code review tool" and it happily spends 6 hours coding. Meanwhile, 143,000+ repos already exist on GitHub — the top one has 53,000 stars. The agent never checks because you never asked, and it doesn't know to look. You only discover competitors after you've invested significant time. This pattern repeats for every new project idea.
## Skills You Need
- [idea-reality-mcp](https://github.com/mnemox-ai/idea-reality-mcp) — MCP server that scans real data sources and returns a competition score
## How to Set It Up
1. Install idea-reality-mcp:
```bash
uvx idea-reality-mcp
```
2. Add the MCP server to your OpenClaw config:
```json
{
"mcpServers": {
"idea-reality": {
"command": "uvx",
"args": ["idea-reality-mcp"]
}
}
}
```
3. Add this to your OpenClaw agent instructions:
```text
Before starting any new project, feature, or tool, always run idea_check first.
Rules:
- If reality_signal > 70: STOP. Report the top 3 competitors with star counts.
Ask me if I want to proceed, pivot, or abandon.
- If reality_signal 30-70: Show me the results and pivot_hints.
Suggest a niche angle that existing projects don't cover.
- If reality_signal < 30: Proceed to build.
Mention that the space is open.
- Always show the reality_signal score and top competitors before writing any code.
```
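The threshold rules in the instructions above amount to a three-way gate. Here is that logic as plain Python for clarity; `gate` is a sketch, and the response shape (`reality_signal` as a dict key) is an assumption about what `idea_check` returns, not the documented idea-reality-mcp schema.

```python
# Sketch of the pre-build gate from the agent instructions above.

def gate(result: dict) -> str:
    """Map a reality_signal score (0-100) to the agent's next action."""
    score = result["reality_signal"]
    if score > 70:
        return "stop"     # report top 3 competitors, ask the user
    if score >= 30:
        return "discuss"  # show results + pivot_hints, suggest a niche
    return "build"        # space is open, proceed

print(gate({"reality_signal": 90}))  # stop
print(gate({"reality_signal": 45}))  # discuss
print(gate({"reality_signal": 8}))   # build
```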
## Real World Example
**You:** "Build me a CLI tool for AI code review"
OpenClaw runs `idea_check("AI code review tool")` and responds:
> **reality_signal: 90/100 (very high)**
>
> Top competitors:
> 1. Gitea — 53,940 stars
> 2. reviewdog — 9,104 stars
> 3. Danger (Ruby) — 5,649 stars
>
> This space has 143,000+ related repos. The top players are mature and well-funded.
>
> **Pivot suggestions:**
> - Focus on a specific language (Rust/Go-only AI code review)
> - Target a specific framework (React/Vue component review)
> - Target a specific industry (financial/medical code compliance review)
>
> Should I proceed with a differentiated angle, or pivot to something else?
**You:** "What about an MCP server for pre-build idea validation?"
> **reality_signal: 8/100 (very low)**
>
> 0 direct competitors found on PyPI or npm. GitHub has some tangentially related repos but none solve this exact problem.
>
> The space is wide open. Proceeding to build.
## Variations
- **Deep mode for serious decisions**: Use `depth="deep"` to scan all 5 sources in parallel (GitHub + HN + npm + PyPI + Product Hunt) for major project decisions.
- **Batch validation**: Before a hackathon, give OpenClaw a list of 10 ideas and have it rank them by `reality_signal` — lowest score = most original opportunity.
- **Web demo first**: Try without installing at [mnemox.ai/check](https://mnemox.ai/check) to see if the workflow fits your needs.
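The batch-validation variation can be sketched as a small ranking helper. `rank_ideas` and the hard-coded scores below are illustrative; in practice `check` would be the real `idea_check` tool call rather than a lambda over canned data.

```python
# Sketch of batch validation: rank a list of ideas by reality_signal,
# lowest (most open space) first.

def rank_ideas(ideas, check):
    """Return (idea, score) pairs sorted from most to least original."""
    scored = [(idea, check(idea)["reality_signal"]) for idea in ideas]
    return sorted(scored, key=lambda pair: pair[1])

# Hypothetical scores standing in for real idea_check results:
fake_scores = {"AI code review tool": 90, "pre-build idea validator": 8}
ranking = rank_ideas(fake_scores, lambda i: {"reality_signal": fake_scores[i]})
print(ranking[0])  # the lowest-signal idea, i.e. the most open opportunity
```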
## Key Insights
- This prevents the most expensive mistake in building: **solving a problem that's already been solved**.
- The `reality_signal` is based on real data (repo counts, star distributions, HN discussion volume), not LLM guessing.
- A high score doesn't mean "don't build" — it means "differentiate or don't bother."
- A low score means genuine white space exists. That's where solo builders have the best odds.
## Related Links
- [idea-reality-mcp GitHub](https://github.com/mnemox-ai/idea-reality-mcp)
- [Web demo](https://mnemox.ai/check) (try without installing)
- [PyPI](https://pypi.org/project/idea-reality-mcp/)