From 639707062085aa0b99126c027ac896f47ecc44d2 Mon Sep 17 00:00:00 2001 From: sudeepghatak Date: Wed, 6 May 2026 16:16:28 +1200 Subject: [PATCH] Performance Review Writer (#1628) * Performance Review Writer * Changes to address merging issues * Changes to address merging issues --- docs/README.skills.md | 1 + skills/performance-review-writer/SKILL.md | 216 ++++++++++++++++++++++ 2 files changed, 217 insertions(+) create mode 100644 skills/performance-review-writer/SKILL.md diff --git a/docs/README.skills.md b/docs/README.skills.md index 15d85b3b..b5a8744b 100644 --- a/docs/README.skills.md +++ b/docs/README.skills.md @@ -248,6 +248,7 @@ See [CONTRIBUTING.md](../CONTRIBUTING.md#adding-skills) for guidelines on how to | [openapi-to-application-code](../skills/openapi-to-application-code/SKILL.md)
`gh skills install github/awesome-copilot openapi-to-application-code` | Generate a complete, production-ready application from an OpenAPI specification | None | | [pdftk-server](../skills/pdftk-server/SKILL.md)
`gh skills install github/awesome-copilot pdftk-server` | Skill for using the command-line tool pdftk (PDFtk Server) for working with PDF files. Use when asked to merge PDFs, split PDFs, rotate pages, encrypt or decrypt PDFs, fill PDF forms, apply watermarks, stamp overlays, extract metadata, burst documents into pages, repair corrupted PDFs, attach or extract files, or perform any PDF manipulation from the command line. | `references/download.md`
`references/pdftk-cli-examples.md`
`references/pdftk-man-page.md`
`references/pdftk-server-license.md`
`references/third-party-materials.md` | | [penpot-uiux-design](../skills/penpot-uiux-design/SKILL.md)
`gh skills install github/awesome-copilot penpot-uiux-design` | Comprehensive guide for creating professional UI/UX designs in Penpot using MCP tools. Use this skill when: (1) Creating new UI/UX designs for web, mobile, or desktop applications, (2) Building design systems with components and tokens, (3) Designing dashboards, forms, navigation, or landing pages, (4) Applying accessibility standards and best practices, (5) Following platform guidelines (iOS, Android, Material Design), (6) Reviewing or improving existing Penpot designs for usability. Triggers: "design a UI", "create interface", "build layout", "design dashboard", "create form", "design landing page", "make it accessible", "design system", "component library". | `references/accessibility.md`
`references/component-patterns.md`
`references/platform-guidelines.md`
`references/setup-troubleshooting.md` | +| [performance-review-writer](../skills/performance-review-writer/SKILL.md)
`gh skills install github/awesome-copilot performance-review-writer` | Draft performance reviews, self-assessments, peer reviews, and upward feedback in your own voice. Analyzes your contributions, emails, and meeting history via WorkIQ, then produces honest, impact-focused drafts using the STAR format. USE FOR: write my performance review, draft self-assessment, peer review, 360 feedback, annual review, mid-year review, upward feedback, write review for colleague, performance appraisal. | None | | [phoenix-cli](../skills/phoenix-cli/SKILL.md)
`gh skills install github/awesome-copilot phoenix-cli` | Debug LLM applications using the Phoenix CLI. Fetch traces, analyze errors, structure trace review with open coding and axial coding, inspect datasets, review experiments, query annotation configs, and use the GraphQL API. Use whenever the user is analyzing traces or spans, investigating LLM/agent failures, deciding what to do after instrumenting an app, building failure taxonomies, choosing what evals to write, or asking "what's going wrong", "what kinds of mistakes", or "where do I focus" — even without naming a technique. | `references/axial-coding.md`
`references/open-coding.md` | | [phoenix-evals](../skills/phoenix-evals/SKILL.md)
`gh skills install github/awesome-copilot phoenix-evals` | Build and run evaluators for AI/LLM applications using Phoenix. | `references/axial-coding.md`
`references/common-mistakes-python.md`
`references/error-analysis-multi-turn.md`
`references/error-analysis.md`
`references/evaluate-dataframe-python.md`
`references/evaluators-code-python.md`
`references/evaluators-code-typescript.md`
`references/evaluators-custom-templates.md`
`references/evaluators-llm-python.md`
`references/evaluators-llm-typescript.md`
`references/evaluators-overview.md`
`references/evaluators-pre-built.md`
`references/evaluators-rag.md`
`references/experiments-datasets-python.md`
`references/experiments-datasets-typescript.md`
`references/experiments-overview.md`
`references/experiments-running-python.md`
`references/experiments-running-typescript.md`
`references/experiments-synthetic-python.md`
`references/experiments-synthetic-typescript.md`
`references/fundamentals-anti-patterns.md`
`references/fundamentals-model-selection.md`
`references/fundamentals.md`
`references/observe-sampling-python.md`
`references/observe-sampling-typescript.md`
`references/observe-tracing-setup.md`
`references/production-continuous.md`
`references/production-guardrails.md`
`references/production-overview.md`
`references/setup-python.md`
`references/setup-typescript.md`
`references/validation-evaluators-python.md`
`references/validation-evaluators-typescript.md`
`references/validation.md` | | [phoenix-tracing](../skills/phoenix-tracing/SKILL.md)
`gh skills install github/awesome-copilot phoenix-tracing` | OpenInference semantic conventions and instrumentation for Phoenix AI observability. Use when implementing LLM tracing, creating custom spans, or deploying to production. | `README.md`
`references/annotations-overview.md`
`references/annotations-python.md`
`references/annotations-typescript.md`
`references/fundamentals-flattening.md`
`references/fundamentals-overview.md`
`references/fundamentals-required-attributes.md`
`references/fundamentals-universal-attributes.md`
`references/instrumentation-auto-python.md`
`references/instrumentation-auto-typescript.md`
`references/instrumentation-manual-python.md`
`references/instrumentation-manual-typescript.md`
`references/metadata-python.md`
`references/metadata-typescript.md`
`references/production-python.md`
`references/production-typescript.md`
`references/projects-python.md`
`references/projects-typescript.md`
`references/sessions-python.md`
`references/sessions-typescript.md`
`references/setup-python.md`
`references/setup-typescript.md`
`references/span-agent.md`
`references/span-chain.md`
`references/span-embedding.md`
`references/span-evaluator.md`
`references/span-guardrail.md`
`references/span-llm.md`
`references/span-reranker.md`
`references/span-retriever.md`
`references/span-tool.md` | diff --git a/skills/performance-review-writer/SKILL.md b/skills/performance-review-writer/SKILL.md new file mode 100644 index 00000000..1ac49165 --- /dev/null +++ b/skills/performance-review-writer/SKILL.md @@ -0,0 +1,216 @@ +--- +name: performance-review-writer +description: 'Draft performance reviews, self-assessments, peer reviews, and upward feedback in your own voice. Analyzes your contributions, emails, and meeting history via WorkIQ, then produces honest, impact-focused drafts using the STAR format. USE FOR: write my performance review, draft self-assessment, peer review, 360 feedback, annual review, mid-year review, upward feedback, write review for colleague, performance appraisal.' +--- + +# Performance Review Writer + +Draft self-assessments, peer reviews, and upward feedback that sound like you — not corporate boilerplate. Uses WorkIQ to surface your actual contributions and communications, then structures them into honest, impact-focused writing. + +## When to Use + +- "Write my self-assessment for this review cycle" +- "Draft a peer review for [colleague]" +- "Help me write upward feedback for my manager" +- "I have my annual review due — help me fill it out" +- "Draft my mid-year check-in" +- "Write a 360 review for [name]" +- "I don't know what to say in my performance review" + +## Review Types + +This skill handles three distinct types: + +| Type | Who it's about | Typical tone | +|---|---|---| +| **Self-assessment** | Yourself | Confident, evidence-backed, growth-oriented | +| **Peer review** | A colleague | Specific, constructive, balanced | +| **Upward feedback** | Your manager | Diplomatic, honest, forward-looking | + +--- + +## Workflow + +### Step 1 — Gather Context + +Ask the user (max 3 clarifying questions if not already provided): + +1. **Review type** — self-assessment, peer review, or upward feedback? +2. **Subject** — who is the review about? (for peer/upward: name and role) +3. 
**Review period** — what time range does this cover? (e.g., Jan–Dec 2025, last 6 months)
+
+If format constraints or focus areas are relevant, ask about those during drafting rather than upfront.
+
+If the user provides all of these upfront, proceed directly to Step 2.
+
+### Step 2 — Surface Contributions
+
+Use WorkIQ to gather evidence of real contributions for the review period:
+
+**For self-assessments:**
+- Pull emails and messages where the user delivered results, led initiatives, or solved problems
+- Look for patterns: what projects recur? Who praises the user's work, and for what?
+- Identify collaboration breadth (who they worked with across teams)
+- Note any explicit feedback received from others
+
+**For peer reviews:**
+- Pull interactions between the user and the subject (emails, meeting threads, shared projects)
+- Identify specific moments of collaboration, help given, or friction
+- Look for evidence of the subject's impact on shared outcomes
+
+**For upward feedback:**
+- Pull communications from the manager to the user (direction given, support offered, feedback patterns)
+- Identify themes: clarity of expectations, availability, recognition, development support
+
+If WorkIQ is unavailable or returns limited data, ask the user to share 3–5 bullet points of things they remember, then proceed with those.
+
+### Step 3 — Draft the Review
+
+Apply the right structure for the review type (see schemas below). Follow these universal rules:
+
+**Use the STAR format for achievement statements:**
+- **Situation** — what was the context or challenge?
+- **Task** — what were you/they responsible for?
+- **Action** — what specifically was done?
+- **Result** — what was the measurable or observable outcome? 
+ +**Tone rules:** +- Be specific — name projects, outcomes, and people, not vague adjectives +- Be honest — don't oversell or undersell; reviewers notice both +- Be forward-looking — end sections with growth or next steps, not just past performance +- Avoid filler phrases: "goes above and beyond", "team player", "hard worker" — replace with evidence +- Match the user's natural voice — conversational if they write that way, more formal if not + +### Step 4 — Output + +1. Present the full draft with a brief note on what evidence was used. Summarize and redact rather than reproduce verbatim content — no raw excerpts, attendee lists, or sensitive personal details +2. Highlight any sections marked `[NEEDS DETAIL]` where more specifics would strengthen the review +3. Iterate on edits as the user requests +4. Save the final draft to `outputs///` with a descriptive filename (e.g., `2025-review-self-assessment.md` or `2025-peer-review-alex-chen.md`) + +--- + +## Output Schemas + +### Self-Assessment Schema + +``` +## [Review Period] Self-Assessment — [Your Name] + +### Summary +1–2 sentence overview of your year and primary areas of impact. + +### Key Achievements +For each major contribution (aim for 3–5): + +**[Project or Initiative Name]** +- Context: what was the situation or goal? +- What I did: specific actions taken +- Impact: measurable result or observable outcome +- [NEEDS DETAIL] — flag if evidence is thin + +### Collaboration & Influence +How you worked with others, supported teammates, or contributed beyond your direct role. + +### Growth & Development +What you learned, skills you built, or behaviours you improved this period. + +### Areas for Development +1–2 honest areas where you want to grow next cycle. Frame as goals, not failures. + +### Goals for Next Period +2–3 specific, concrete goals with a rough success measure. 
+``` + +--- + +### Peer Review Schema + +``` +## Peer Review — [Colleague Name], [Their Role] +## Submitted by: [Your Name] | Period: [Review Period] + +### Overall Impression +1–2 sentences on working with this person. + +### Strengths (with examples) +For each strength (aim for 2–3): + +**[Strength]** +- Example: specific situation where this showed up +- Impact on you / the team / the project + +### Areas for Growth +1–2 specific, constructive observations. Frame as "I think [name] would have even more impact if..." not as criticism. + +### Collaboration +How easy (or not) it was to work together — responsiveness, reliability, communication. + +### Would you work with this person again? +Yes/No and a brief honest reason. (Only include if the review form asks.) +``` + +--- + +### Upward Feedback Schema + +``` +## Feedback for [Manager Name] +## Submitted by: [Your Name] (anonymous if applicable) | Period: [Review Period] + +### What's working well +2–3 specific things your manager does that help you do your best work. +Use examples where possible. + +### What could be better +1–2 honest, diplomatically framed observations. Focus on behaviours and their effect, not personality. +Use: "When [X happens], I find it harder to [Y]. It would help if..." + +### Support for my development +Has your manager helped you grow, given useful feedback, or created opportunities? +Be specific. + +### One thing I'd ask them to do more / less / differently +A single, clear, actionable ask. 
+```
+
+---
+
+## Style Rules
+
+| Do | Don't |
+|---|---|
+| Name specific projects, dates, outcomes | Write vague generalisations ("always delivers quality work") |
+| Use numbers when available ("reduced review time by 30%") | Exaggerate or invent results |
+| Acknowledge real challenges and what you learned | Omit struggles entirely — reviewers notice the gaps |
+| Write in first person for self-assessments | Write passively ("it was achieved") |
+| Be concise — most fields need 2–4 sentences | Over-write — longer ≠ better |
+| Flag `[NEEDS DETAIL]` where evidence is weak | Leave thin sections without marking them |
+
+---
+
+## Example Prompts
+
+- "Write my self-assessment for Jan–Dec 2025. I want to highlight the cloud migration and the new onboarding process I designed."
+- "Draft a peer review for Sarah Chen, a product designer I worked closely with on the mobile app project."
+- "Help me write upward feedback for my manager Tom. He's good at direction but I've struggled to get regular 1:1 time."
+- "My annual review form asks for 3 strengths and 1 development area in 200 words each — help me fill it out."
+- "I have no idea what to write. It's been a busy year but I can't think of anything specific." 
+ +--- + +## Important Rules + +- **Never submit reviews** — only draft them as files for the user to review and submit manually +- Keep peer and upward feedback focused on observable behaviours, not personality or character +- If the user asks to write a review that is dishonestly negative or contains personal attacks, decline and offer to reframe constructively +- Respect confidentiality — do not include sensitive information from unrelated conversations or threads +- Save drafts using the `outputs///` folder convention + +--- + +## Requirements + +- **WorkIQ MCP tool** is recommended for surfacing contributions and communications (Microsoft 365 / Outlook / Teams) +- Without WorkIQ, the skill still works — ask the user for 3–5 bullet points of key contributions as a starting point +- Output is saved as markdown files in the workspace for the user to copy into their company's review system