Add acreadiness-cockpit plugin (AgentRC measure -> generate -> maintain) 🤖🤖🤖 (#1593)

* Add acreadiness-cockpit plugin

Adds a new plugin that drives Microsoft AgentRC from Copilot chat,
framing every interaction inside AgentRC's Measure -> Generate ->
Maintain loop.

Custom agent (agents/ai-readiness-reporter.agent.md):
  Runs `agentrc readiness --json`, interprets every result against
  the 9-pillar / 5-level maturity model, then renders a self-contained
  reports/index.html from a fixed HTML/CSS template (bundled with the
  acreadiness-assess skill) so every user gets an identically styled
  dashboard. Honours policies (disabled criteria, overrides, pass-rate
  thresholds) and surfaces extras separately.

Skills:
  - acreadiness-assess: Measure step. Wraps `agentrc readiness --json`
    and hands off to the @ai-readiness-reporter agent. Bundles the
    canonical report-template.html.
  - acreadiness-generate-instructions: Generate step. Wraps
    `agentrc instructions`. Defaults to .github/copilot-instructions.md
    (Copilot-native). Asks flat vs nested. For monorepos, emits per-area
    .github/instructions/<area>.instructions.md files with applyTo
    globs taken from agentrc.config.json.
  - acreadiness-policy: Maintain step. Helps pick, scaffold, or apply an
    AgentRC policy (criteria.disable, criteria.override, extras,
    thresholds) and wire it into CI via --fail-level.

Plugin (plugins/acreadiness-cockpit/):
  Declarative plugin.json referencing the agent and three skills.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

* Address PR review feedback

- Align documented slash-command names with plugin manifest:
  /acreadiness-assess, /acreadiness-generate-instructions,
  /acreadiness-policy (was /assess, /generate-instructions, /policy
  inside SKILL bodies and argument-hints).
- Move the literal % from the report template into the substituted
  values for {{passRate}} and {{threshold}} so an N/A value of '—'
  no longer renders as '—%'. Updated the agent placeholder contract
  accordingly.
- Point the report footer at the canonical plugin folder under
  github/awesome-copilot instead of the personal source fork.
- Add explicit HTML-escaping rules to the agent: HTML-escape every
  {{placeholder}} substitution, and replace </script with <\/script
  inside the embedded JSON block so untrusted repo content cannot
  break the markup or inject scripts.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>

---------

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
This commit is contained in:
mvanderbend-msoft
2026-05-04 06:11:14 +02:00
committed by GitHub
parent a1197525bd
commit ebd22496dd
11 changed files with 809 additions and 0 deletions


@@ -0,0 +1,46 @@
---
name: acreadiness-assess
description: 'Run the AgentRC readiness assessment on the current repository and produce a static HTML dashboard at reports/index.html. Wraps `npx github:microsoft/agentrc readiness` and hands off rendering to the @ai-readiness-reporter custom agent. Supports policies (--policy) for org-specific scoring. Use when asked to assess, audit, or score the AI readiness of a repo.'
argument-hint: "[--policy <path-or-pkg>] [--per-area] — e.g. /acreadiness-assess, /acreadiness-assess --policy ./policies/strict.json"
---
# /acreadiness-assess — AI-readiness assessment
Use this skill whenever the user asks for an **AI-readiness assessment**, a **readiness check**, an **audit**, or wants to **see how AI-ready** their repository is.
This skill is the *Measure* step in AgentRC's **Measure → Generate → Maintain** loop. The result is a self-contained HTML dashboard the user can open with `file://` or commit to the repo.
## Steps
1. **Confirm prerequisites.** Node 20+ must be on PATH. If unsure, run `node --version`.
2. **Decide on a policy** (optional but encouraged):
- If the user provided `--policy <source>`, capture it.
- Otherwise check `agentrc.config.json` for a `policies` array.
- If neither, run with no policy (built-in defaults).
- For a primer on policies, suggest the `acreadiness-policy` skill.
3. **Run the readiness scan** in the repo root with structured output:
```bash
npx -y github:microsoft/agentrc readiness --json [--policy <source>] [--per-area]
```
The `CommandResult<T>` JSON envelope is your input for the next step.
4. **Hand off to the `ai-readiness-reporter` custom agent** to interpret the JSON and produce `reports/index.html`. The agent renders via the bundled template `report-template.html` (shipped alongside this skill) so every report has an identical look & feel. The agent:
- Reads the bundled `report-template.html` and substitutes placeholders with real data.
- Inlines all CSS, ships a single static file (works under `file://`).
- Renders maturity level, overall score, grade, pass-rate vs threshold.
- Breaks down all 9 pillars across **Repo Health** (8) and **AI Setup** (1) with *what it measures*, *why it matters for AI*, *current state*, and *a specific recommendation*.
- Tags every pillar with an **AI relevance** badge (High / Medium / Low).
- Surfaces **Extras** separately (they never affect the score).
- Shows the **Active Policy** including any disabled/overridden criteria and thresholds.
- Produces a **Prioritised Remediation Plan** (🔴 Fix First / 🟡 Fix Next / 🔵 Plan).
- Embeds the raw AgentRC JSON for reuse.
5. **Tell the user where the report lives** (`reports/index.html`) and how to open it. Summarise in chat: maturity level, overall score, top three lowest pillars, and the single highest-leverage next action (almost always: run the `acreadiness-generate-instructions` skill).
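The escaping contract the agent follows in step 4 (HTML-escape every substituted placeholder; neutralise `</script` inside the embedded JSON) can be sketched like this — a minimal illustration, where the sample values are hypothetical:

```python
import html

def escape_value(value: str) -> str:
    """HTML-escape a substituted {{placeholder}} value so untrusted
    repo content cannot break the report markup or inject scripts."""
    return html.escape(str(value), quote=True)

def escape_embedded_json(raw_json: str) -> str:
    """Neutralise a premature close of the <script id="raw-data"> element:
    a literal '</script' inside the JSON would otherwise end it early."""
    return raw_json.replace("</script", "<\\/script")

# Hypothetical untrusted value coming from the scanned repository:
print(escape_value('<img src=x onerror=alert(1)>'))
print(escape_embedded_json('{"note": "</script><script>alert(1)</script>"}'))
```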
## Notes
- AgentRC also has a built-in HTML renderer (`--visual` / `--output report.html`) but its output is intentionally generic. This skill produces a tailored, opinionated dashboard via the custom agent — closer to a code review than a metrics dump.
- For CI gating, recommend `agentrc readiness --fail-level <n>` (n is a maturity level, 1–5).
- The skill never modifies repository files other than creating `reports/index.html`.


@@ -0,0 +1,227 @@
<!--
AI Readiness Report — canonical template
--------------------------------------------
This file is the single source of truth for the look & feel of the
reports/index.html output. The @ai-readiness-reporter agent MUST load
this file, substitute the {{placeholders}} with real data from
`agentrc readiness --json`, and write the result to reports/index.html.
Rules for the agent:
- Do NOT change the HTML structure, class names, CSS variables or the
inline <style> block. The template is intentionally fixed so every
consumer of this plugin gets an identical-looking report.
- Replace every {{placeholder}} with concrete data. Repeat the marked
blocks (pillar cards, plan rows, maturity rows, extra rows) for
each item. Remove blocks that don't apply (e.g. policy section if
no policy is active).
- Keep the file self-contained: no external CSS/JS, no network fonts.
- Preserve the <script type="application/json" id="raw-data"> block
and embed the compact AgentRC JSON inside it.
Placeholders used:
{{repoName}} repository name
{{date}} ISO date the report was generated
{{level}} maturity level number (1-5)
{{levelName}} maturity level name (Functional, Documented, ...)
{{overallPct}} overall readiness as integer percent
{{grade}} letter grade A-F
  {{passRate}}        pass rate with the literal % included (e.g. "87%"), or "—" if N/A
  {{threshold}}       policy pass-rate threshold with % included, or "—"
{{policyName}} active policy name (omit policy section if none)
{{policySummary}} one-paragraph summary of disabled/overridden criteria
{{rawJsonCompact}} compact JSON for embedding
{{rawJsonPretty}} pretty JSON for the <details> view
Pillar card placeholders (repeat per pillar):
{{pillarName}} {{pillarScore}} {{pillarRelevance}} (high|medium|low)
{{pillarStatus}} (good|warn|bad — drives bar + dot colour)
{{pillarWhat}} {{pillarWhyAi}} {{pillarCurrent}} {{pillarRecommendation}}
-->
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8" />
<meta name="viewport" content="width=device-width, initial-scale=1" />
<title>AI Readiness — {{repoName}}</title>
<style>
:root {
--bg:#0f1115; --panel:#161a22; --panel-2:#1d2230; --border:#262c3a;
--text:#e6e9ef; --muted:#8a93a6; --accent:#6ea8ff;
--good:#4ade80; --warn:#fbbf24; --bad:#f87171;
}
* { box-sizing: border-box; }
html,body { margin:0; background:var(--bg); color:var(--text);
font:14px/1.5 -apple-system,BlinkMacSystemFont,"Segoe UI",Roboto,sans-serif; }
a { color: var(--accent); }
header { padding: 28px 32px; border-bottom: 1px solid var(--border);
background: linear-gradient(180deg,#141823,#0f1115); }
header h1 { margin: 0 0 4px; font-size: 22px; }
header .meta { color: var(--muted); font-size: 13px; }
main { max-width: 1180px; margin: 0 auto; padding: 24px 32px 80px; }
.panel { background:var(--panel); border:1px solid var(--border);
border-radius:10px; padding:20px; margin-bottom:18px; }
.grid { display:grid; gap:16px; }
.grid.cols-3 { grid-template-columns: repeat(3, 1fr); }
.grid.cols-2 { grid-template-columns: 1fr 1fr; }
.kpi .num { font-size: 30px; font-weight: 700; }
.kpi .lbl { color: var(--muted); font-size: 11px; text-transform: uppercase; letter-spacing: .8px; }
.badge { display:inline-block; padding:3px 10px; border-radius:999px;
font-size:12px; font-weight:600; }
.lvl-1 { background:#3a1f24; color:#f87171; }
.lvl-2 { background:#3b2c1d; color:#fbbf24; }
.lvl-3 { background:#2c3119; color:#d3e85e; }
.lvl-4 { background:#1d3325; color:#4ade80; }
.lvl-5 { background:#1c2c3d; color:#6ea8ff; }
.bar { height:8px; background:var(--panel-2); border-radius:4px; overflow:hidden; }
.bar > span { display:block; height:100%; background: var(--accent); }
.bar.good > span { background: var(--good); }
.bar.warn > span { background: var(--warn); }
.bar.bad > span { background: var(--bad); }
table { width:100%; border-collapse:collapse; }
th,td { text-align:left; padding:8px 10px; border-bottom:1px solid var(--border); font-size:13px; }
th { color:var(--muted); font-weight:500; text-transform:uppercase; font-size:11px; letter-spacing:.8px; }
code { background:#0a0c11; padding:1px 6px; border-radius:4px; }
h2 { font-size:14px; color:var(--muted); text-transform:uppercase; letter-spacing:.8px; margin:0 0 12px; }
.dot { width:8px; height:8px; border-radius:50%; display:inline-block; }
.dot.good { background:var(--good); } .dot.warn { background:var(--warn); } .dot.bad { background:var(--bad); }
footer { color: var(--muted); font-size: 12px; text-align: center; padding: 20px; }
/* Pillar cards */
.pillar { background:var(--panel-2); border:1px solid var(--border);
border-radius:8px; padding:14px 16px; }
.pillar h3 { margin:0 0 6px; font-size:15px; display:flex; align-items:center; gap:10px; flex-wrap:wrap; }
.pillar .why { color:var(--muted); font-size:13px; margin:8px 0 0; }
.pillar .what { font-size:13px; margin:6px 0 0; }
.pillar .rec { font-size:13px; margin:8px 0 0; }
.rel { font-size:10px; padding:2px 8px; border-radius:999px; text-transform:uppercase; letter-spacing:.6px; font-weight:600; }
.rel.high { background:#1c2c3d; color:#6ea8ff; }
.rel.medium { background:#2c3119; color:#d3e85e; }
.rel.low { background:#262c3a; color:#8a93a6; }
</style>
</head>
<body>
<header>
<h1>AI Readiness Report</h1>
<div class="meta">
<strong>{{repoName}}</strong> · Assessed {{date}} ·
<span class="badge lvl-{{level}}">L{{level}} — {{levelName}}</span> ·
Overall <strong>{{overallPct}}%</strong> · Grade <strong>{{grade}}</strong>
<!-- if a policy is active, append: · Policy <code>{{policyName}}</code> -->
</div>
</header>
<main>
<!-- 1. What is AI Readiness? -->
<section class="panel">
<h2>What is AI Readiness?</h2>
<p>AI coding agents are only as effective as the context they receive. AgentRC measures how AI-ready a repo is across <strong>9 pillars</strong> in two categories — Repo Health and AI Setup — and maps the result to a <strong>5-level maturity model</strong>. This report is the <em>Measure</em> step in AgentRC's <em>Measure → Generate → Maintain</em> loop.</p>
<p style="color:var(--muted);font-size:13px;margin-top:8px">Each pillar carries an <strong>AI relevance</strong> rating (High / Medium / Low) so you can tell at a glance which gaps most directly affect Copilot's output and which are general engineering hygiene.</p>
</section>
<!-- 2. KPIs -->
<section class="grid cols-3">
<div class="panel kpi"><span class="lbl">Maturity</span><div class="num"><span class="badge lvl-{{level}}">L{{level}} — {{levelName}}</span></div></div>
<div class="panel kpi"><span class="lbl">Overall Score</span><div class="num">{{overallPct}}%</div><div style="color:var(--muted);font-size:12px">Grade {{grade}}</div></div>
<div class="panel kpi"><span class="lbl">Pass rate</span><div class="num">{{passRate}}</div><div style="color:var(--muted);font-size:12px">Threshold {{threshold}}</div></div>
</section>
<!-- 3. Maturity progression -->
<section class="panel">
<h2>Maturity Progression</h2>
<table>
<thead><tr><th>Level</th><th>Name</th><th>Status</th></tr></thead>
<tbody>
<!-- Render levels 5 → 1. Mark the current level with "◼ You are here". Example row:
<tr><td>L3</td><td>Standardized</td><td>◼ You are here</td></tr>
-->
</tbody>
</table>
</section>
<!-- 4. Active policy (omit this section entirely when no policy is active) -->
<section class="panel">
<h2>Active Policy</h2>
<p><code>{{policyName}}</code> — {{policySummary}}</p>
</section>
<!-- 5. Repo Health Pillars -->
<section class="panel">
<h2>Repo Health Breakdown</h2>
<div class="grid cols-2">
<!--
Repeat one .pillar block per Repo Health pillar (8 pillars):
Style, Build, Testing, Docs, Dev Environment, Code Quality, Observability, Security.
<div class="pillar">
<h3>
<span class="dot {{pillarStatus}}"></span>
{{pillarName}}
<span class="rel {{pillarRelevance}}">AI relevance: {{pillarRelevance}}</span>
<span style="margin-left:auto;color:var(--muted);font-size:13px">{{pillarScore}}%</span>
</h3>
<div class="bar {{pillarStatus}}"><span style="width:{{pillarScore}}%"></span></div>
<p class="what"><strong>What it measures:</strong> {{pillarWhat}}</p>
<p class="why"><strong>Why it matters for AI:</strong> {{pillarWhyAi}}</p>
<p class="rec"><strong>Current state:</strong> {{pillarCurrent}}</p>
<p class="rec"><strong>Recommendation:</strong> {{pillarRecommendation}}</p>
</div>
-->
</div>
</section>
<!-- 6. AI Setup Pillars -->
<section class="panel">
<h2>AI Setup Breakdown</h2>
<div class="grid cols-2">
<!-- AI Tooling pillar block — same structure as above, AI relevance is always "high". -->
</div>
</section>
<!-- 7. Extras -->
<section class="panel">
<h2>Extras (informational, do not affect score)</h2>
<table>
<thead><tr><th></th><th>Extra</th><th>Status</th></tr></thead>
<tbody>
<!-- agents-doc, pr-template, pre-commit, architecture-doc rows. Use ✅ or ◻. -->
</tbody>
</table>
</section>
<!-- 8. Prioritised Remediation Plan -->
<section class="panel">
<h2>Prioritised Remediation Plan</h2>
<h3 style="color:var(--bad)">🔴 Fix First (high impact / low effort)</h3>
<table><thead><tr><th>#</th><th>Finding</th><th>File / config</th><th>Why it matters</th></tr></thead><tbody><!-- rows --></tbody></table>
<h3 style="color:var(--warn)">🟡 Fix Next (medium impact / low effort)</h3>
<table><thead><tr><th>#</th><th>Finding</th><th>File / config</th><th>Why</th></tr></thead><tbody><!-- rows --></tbody></table>
<h3 style="color:var(--accent)">🔵 Plan (medium impact / medium effort)</h3>
<table><thead><tr><th>#</th><th>Finding</th><th>File / config</th><th>Why</th></tr></thead><tbody><!-- rows --></tbody></table>
</section>
<!-- 9. Next steps -->
<section class="panel">
<h2>Next Steps</h2>
<ol>
        <li>Generate or refresh instructions: <code>agentrc instructions --output .github/copilot-instructions.md</code> (or use the <code>acreadiness-generate-instructions</code> skill).</li>
<li>Address each item under <strong>🔴 Fix First</strong>; re-run this report to confirm score improvement.</li>
<li>Codify org standards via a JSON policy (<code>strict.json</code>, <code>ai-only.json</code>, …) and re-run with <code>--policy</code>.</li>
<li>Wire <code>agentrc readiness --fail-level &lt;n&gt;</code> into CI to prevent regressions.</li>
</ol>
</section>
<!-- 10. Raw data -->
<details class="panel">
<summary style="cursor:pointer;color:var(--muted)">Raw AgentRC JSON</summary>
<pre style="overflow:auto;font-size:11px;color:#b8c0d2">{{rawJsonPretty}}</pre>
</details>
<script type="application/json" id="raw-data">{{rawJsonCompact}}</script>
</main>
<footer>
Generated by <a href="https://github.com/github/awesome-copilot/tree/main/plugins/acreadiness-cockpit">acreadiness-cockpit</a>
· powered by <a href="https://github.com/microsoft/agentrc">microsoft/agentrc</a>.
</footer>
</body>
</html>


@@ -0,0 +1,107 @@
---
name: acreadiness-generate-instructions
description: 'Generate tailored AI agent instruction files via AgentRC instructions command. Produces .github/copilot-instructions.md (default, recommended for Copilot in VS Code) plus optional per-area .instructions.md files with applyTo globs for monorepos. Use after running /acreadiness-assess to close gaps in the AI Tooling pillar.'
argument-hint: "[--output .github/copilot-instructions.md|AGENTS.md] [--strategy flat|nested] [--areas | --area <name>] [--apply-to <glob>] [--claude-md] [--dry-run]"
---
# /acreadiness-generate-instructions — write AI agent instructions
Use this skill whenever the user wants to **create**, **regenerate**, or **refresh** their custom instructions for AI coding agents (Copilot, Claude, etc.). This is the *Generate* step in AgentRC's **Measure → Generate → Maintain** loop and the single highest-leverage action for the **AI Tooling** pillar.
## Output options
VS Code recognises several instruction file types — AgentRC generates the most common ones:
| File | Scope | When to use |
|---|---|---|
| `.github/copilot-instructions.md` | Always-on, whole workspace | **Default** — VS Code Copilot's native instruction file |
| `AGENTS.md` | Always-on, whole workspace | Multi-agent repos (Copilot + Claude + others) |
| `.github/instructions/*.instructions.md` | Scoped by `applyTo` glob | Per-area / per-language rules in monorepos |
| `CLAUDE.md` | Claude-specific | Add via `--claude-md` (nested only) |
## Strategies
- **`flat`** *(default)* — single `.github/copilot-instructions.md` at the chosen path. Simple, easy to review.
- **`nested`** — hub at `.github/copilot-instructions.md` + per-topic detail files at `.github/instructions/<topic>.instructions.md`, each with an `applyTo` glob so VS Code only loads the topic when it's relevant. Better for large or multi-stack repos.
> **Why `.github/instructions/` and not `.agents/`?** AgentRC's default nested layout writes to `.agents/`, which is the right home for *agent-agnostic* repos (Copilot + Claude + Cursor reading `AGENTS.md`). For VS Code Copilot specifically, the native location is `.github/instructions/` with `applyTo` frontmatter — that's what Copilot auto-discovers. This skill rewrites AgentRC's nested output to the VS Code-native location whenever the main output is `.github/copilot-instructions.md`. If you instead chose `--output AGENTS.md`, nested keeps AgentRC's default `.agents/` layout.
For monorepos, generate **area-scoped** instructions with `--areas`, `--area <name>`, or `--areas-only`. Areas are defined in `agentrc.config.json`. Per-area output is written as VS Code `.instructions.md` files with an `applyTo` glob (see below).
### Topic vs area `.instructions.md` files
Both end up in `.github/instructions/` but they answer different questions:
| Kind | Filename example | `applyTo` example | Where it comes from |
|---|---|---|---|
| **Topic** (nested) | `testing.instructions.md` | `**/*.{test,spec}.{ts,tsx,js}` | AgentRC `--strategy nested` topic split |
| **Area** (monorepo) | `frontend.instructions.md` | `apps/frontend/**` | `agentrc.config.json` areas + `--areas` |
You can have both at once: a nested set of topic files plus per-area files for a monorepo.
## Per-area files with `applyTo`
When the user opts into areas, emit one VS Code-native `.instructions.md` file per area at `.github/instructions/<area>.instructions.md`. Each file MUST start with frontmatter declaring the glob the rules apply to:
```markdown
---
applyTo: "apps/frontend/**"
---
# Frontend area instructions
…AgentRC-generated content for this area…
```
Workflow:
1. **Read `agentrc.config.json`** to discover declared areas and their `paths` / globs. If `paths` is missing, ask the user for the glob (e.g. `src/api/**`).
2. **Run `agentrc instructions --areas`** (or `--area <name>`) to produce the per-area body content.
3. **Wrap each area's content** in `.github/instructions/<area>.instructions.md` with the `applyTo` frontmatter taken from the area's `paths`. If the user passed `--apply-to <glob>` on a single-area call, use that glob verbatim.
4. **Leave the main file alone** — the root `.github/copilot-instructions.md` stays as the always-on instructions; `.instructions.md` files only kick in for matching paths.
Naming: lowercase, kebab-case area name. Examples: `.github/instructions/frontend.instructions.md`, `.github/instructions/api.instructions.md`, `.github/instructions/infra.instructions.md`.
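Step 3 of the workflow above can be sketched as follows — a hedged illustration in which the area name, glob, and body content are hypothetical (real globs come from `agentrc.config.json`):

```python
import tempfile
from pathlib import Path

def write_area_instructions(area: str, apply_to: str, body: str,
                            root: Path = Path(".")) -> Path:
    """Wrap AgentRC-generated area content in a VS Code-native
    .instructions.md file with an applyTo frontmatter block."""
    slug = area.lower().replace(" ", "-")          # kebab-case filename
    out_dir = root / ".github" / "instructions"
    out_dir.mkdir(parents=True, exist_ok=True)
    out = out_dir / f"{slug}.instructions.md"
    out.write_text(f'---\napplyTo: "{apply_to}"\n---\n{body}\n')
    return out

# Demo in a temporary directory so the sketch is side-effect free:
with tempfile.TemporaryDirectory() as tmp:
    out = write_area_instructions("Frontend", "apps/frontend/**",
                                  "# Frontend area instructions",
                                  root=Path(tmp))
    print(out.read_text())
```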
## Steps
1. **Pick the target file**. **Default to `.github/copilot-instructions.md`.** Switch to `AGENTS.md` only if the user mentions multi-agent / Claude / Cursor support.
2. **Always ask which strategy to use** (`flat` or `nested`) unless the user already specified one in their message or via `--strategy`. Present the trade-off briefly:
- **Flat** *(default)* — one `.github/copilot-instructions.md`. Simple, easy to review in a single PR. Best for small/medium repos with one stack.
- **Nested** — hub `.github/copilot-instructions.md` + per-topic `.github/instructions/<topic>.instructions.md` files (each with an `applyTo` glob so VS Code only loads them when relevant). Best for large or multi-stack repos. Add `--claude-md` to also emit `CLAUDE.md`.
Recommend `nested` proactively when the repo has > 5 top-level directories, multiple stacks, or already uses a monorepo tool (turbo/nx/pnpm workspaces).
3. **Detect monorepo areas** by reading `agentrc.config.json`. If areas exist, ask the user whether they want **per-area `.instructions.md` files with `applyTo`** in addition to the root file. Default to "yes" when `agentrc.config.json` declares areas.
4. **Run dry-run first** so the user can preview:
```bash
npx -y github:microsoft/agentrc instructions --output <file> --strategy <flat|nested> [--areas|--area <name>] [--claude-md] --dry-run
```
5. **Show a short summary** of what would change — files that would be created or overwritten, area count + their `applyTo` globs, model used (default `claude-sonnet-4.6`).
6. **On confirmation, run the same command without `--dry-run`** (and optionally `--force` if files already exist).
7. **Post-process layout for Copilot output**:
- **If `--output` ends in `copilot-instructions.md` and strategy is `nested`**: move/rewrite AgentRC's `.agents/<topic>.md` files to `.github/instructions/<topic>.instructions.md`. Add frontmatter to each file with an appropriate `applyTo` glob (see "Topic applyTo defaults" below). Delete the now-empty `.agents/` directory.
- **If `--areas` was used**: also write `.github/instructions/<area>.instructions.md` for every area, using each area's `paths` from `agentrc.config.json` as the `applyTo` glob (override with `--apply-to` for single-area calls).
- **If `--output AGENTS.md`** was chosen: keep AgentRC's native `.agents/` layout for nested — agent-agnostic readers expect it there.
Create the `.github/instructions/` directory if missing.
### Topic `applyTo` defaults
When promoting AgentRC's nested topic files to `.instructions.md`, use these defaults unless the user specifies otherwise:
| Topic | Default `applyTo` |
|---|---|
| `testing` | `**/*.{test,spec}.{ts,tsx,js,jsx,mjs,cjs}` |
| `style` / `code-quality` / `formatting` | `**/*.{ts,tsx,js,jsx,mjs,cjs,py,go,rs,java,kt,cs}` |
| `build` / `ci` | `**/{package.json,turbo.json,nx.json,.github/workflows/**}` |
| `docs` | `**/*.md` |
| `security` | `**` |
| anything else / hub-level | `**` |
8. **Verify** by reading the generated file(s) back and showing the user a 1-paragraph synopsis: stack detected, conventions captured, length, list of `.instructions.md` files with their globs.
9. **Suggest next steps**:
   - Re-run the `acreadiness-assess` skill to confirm the AI Tooling pillar score improved.
- If the user already has both `copilot-instructions.md` and `AGENTS.md`, recommend consolidating to a single source of truth (AgentRC flags this at maturity Level 2+).
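The topic-file promotion in step 7 can be sketched with the defaults table as a lookup — illustrative only; the topic names and globs come from the table above, everything else is an assumption:

```python
from pathlib import Path

_CODE_GLOB = "**/*.{ts,tsx,js,jsx,mjs,cjs,py,go,rs,java,kt,cs}"
TOPIC_APPLY_TO = {
    "testing": "**/*.{test,spec}.{ts,tsx,js,jsx,mjs,cjs}",
    "style": _CODE_GLOB,
    "code-quality": _CODE_GLOB,
    "formatting": _CODE_GLOB,
    "build": "**/{package.json,turbo.json,nx.json,.github/workflows/**}",
    "ci": "**/{package.json,turbo.json,nx.json,.github/workflows/**}",
    "docs": "**/*.md",
    "security": "**",
}

def promote_topic(topic_file: Path) -> tuple[str, str]:
    """Map an AgentRC .agents/<topic>.md file to its VS Code-native
    target path and default applyTo glob ('**' for anything else)."""
    topic = topic_file.stem.lower()
    glob = TOPIC_APPLY_TO.get(topic, "**")
    return f".github/instructions/{topic}.instructions.md", glob

print(promote_topic(Path(".agents/testing.md")))
```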
## Notes
- AgentRC reads your **actual code** — no templates. Output reflects detected languages, frameworks, and conventions.
- `--claude-md` (nested strategy only) also emits `CLAUDE.md`.
- VS Code applies `.instructions.md` files automatically when the active file matches `applyTo`. The root `.github/copilot-instructions.md` always loads.
- Never run this skill non-interactively in CI; instructions are part of the repo and should land via PR.


@@ -0,0 +1,96 @@
---
name: acreadiness-policy
description: 'Help the user pick, write, or apply an AgentRC policy. Policies customise readiness scoring by disabling irrelevant checks, overriding impact/level, setting pass-rate thresholds, or chaining org baselines with team overrides. Use when the user asks about strict mode, AI-only scoring, custom weights, CI gating, or wants org-wide standardisation.'
argument-hint: "[show | new <name> | apply <path-or-pkg>] — e.g. /acreadiness-policy show, /acreadiness-policy new strict-frontend"
---
# /acreadiness-policy — AgentRC policies
Use this skill when the user asks about **policies**, **strict mode**, **custom scoring**, **disabling checks**, **org standards**, or **CI gating** of readiness.
A policy is a small JSON file with three optional sections — `criteria`, `extras`, `thresholds` — that customise how AgentRC scores readiness.
## Built-in examples
AgentRC ships with three example policies in `examples/policies/`:
| Policy | What it does |
|---|---|
| `strict.json` | 100% pass rate, raises impact on key criteria |
| `ai-only.json` | Disables all repo-health checks, focuses on AI tooling |
| `repo-health-only.json` | Disables AI checks, focuses on traditional quality |
Recommend these as starting points before writing a custom policy.
## Policy schema
```jsonc
{
"name": "my-policy",
"criteria": {
"disable": ["env-example", "observability", "dependabot"],
"override": {
"readme": { "impact": "high", "level": 2 },
"lint-config": { "title": "Linter required" }
}
},
"extras": {
"disable": ["pre-commit"]
},
"thresholds": {
"passRate": 0.9
}
}
```
### Impact weights
| Impact | Weight |
|---|---|
| critical | 5 |
| high | 4 |
| medium | 3 |
| low | 2 |
| info | 0 |
`Score = 1 - (deductions / max possible weight)`. Grades: **A** ≥ 0.9, **B** ≥ 0.8, **C** ≥ 0.7, **D** ≥ 0.6, **F** < 0.6.
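A worked example of the scoring formula — the weights and grade cut-offs come from the tables above; the failing-criteria mix and the maximum possible weight are hypothetical:

```python
WEIGHTS = {"critical": 5, "high": 4, "medium": 3, "low": 2, "info": 0}

def readiness_score(failing: list[str], max_weight: int) -> float:
    """Score = 1 - (deductions / max possible weight)."""
    deductions = sum(WEIGHTS[impact] for impact in failing)
    return 1 - deductions / max_weight

def grade(score: float) -> str:
    for cutoff, letter in [(0.9, "A"), (0.8, "B"), (0.7, "C"), (0.6, "D")]:
        if score >= cutoff:
            return letter
    return "F"

# Hypothetical repo: one high- and one low-impact criterion failing,
# out of a maximum possible weight of 30:
s = readiness_score(["high", "low"], max_weight=30)
print(s, grade(s))  # 0.8 B
```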
## Sub-commands
### `show`
List policies currently in effect (from `agentrc.config.json` `policies` array, or none).
### `new <name>`
Scaffold `policies/<name>.json` with sensible defaults. Walk the user through:
1. **What to disable** — irrelevant pillars or extras for their stack (e.g. disable `observability` for a static site).
2. **What to raise** — override `impact` to `high` or `critical` for must-haves (e.g. `readme`, `codeowners`).
3. **Pass-rate threshold** — typical org baselines: `0.7` (lenient), `0.85` (standard), `1.0` (strict).
4. Reference the policy from `agentrc.config.json`:
```json
{ "policies": ["./policies/<name>.json"] }
```
### `apply <path-or-pkg>`
Run `agentrc readiness --json --policy <source>` and re-render the report by handing off to the `acreadiness-assess` skill / `ai-readiness-reporter` agent. Supports chaining:
```bash
npx -y github:microsoft/agentrc readiness --json --policy ./org-baseline.json,./team-frontend.json
```
## CI gating
Combine policies with `--fail-level` to enforce a minimum maturity level in CI:
```yaml
- run: npx -y github:microsoft/agentrc readiness --policy ./policies/strict.json --fail-level 3
```
## Advanced
JSON policies can disable, override, and set thresholds — but **cannot add new criteria**. For new detection logic, point users at AgentRC's TypeScript plugin system (`docs/dev/plugins.md`).
## Operating rules
- **Never silently disable a pillar.** If the user wants to disable `observability`, confirm and explain the trade-off.
- **Prefer overriding `impact` over disabling.** Disabling hides the gap entirely; overriding lets it still appear in the report.
- **Recommend extras stay enabled.** They cost nothing — they don't affect the score.
- **Suggest layering** — most orgs want a baseline policy + per-team overrides chained with `--policy a.json,b.json`.