# Evaluators: Overview

When and how to build automated evaluators.

## Decision Framework

```
Should I Build an Evaluator?
        │
        ▼
Can I fix it with a prompt change?
    YES → Fix the prompt first
    NO  → Is this a recurring issue?
          YES → Build evaluator
          NO  → Add to watchlist
```

Don't automate prematurely. Many issues are simple prompt fixes.

## Evaluator Requirements

1. **Clear criteria** - Specific, not "Is it good?"
2. **Labeled test set** - 100+ examples with human labels
3. **Measured accuracy** - Know your true-positive and true-negative rates (TPR/TNR) before deploying; see the sketch below
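
A minimal sketch of the TPR/TNR check for requirement 3, using a hypothetical hand-labeled test set where `fail` is the positive class (the issue the evaluator exists to catch):

```python
import pandas as pd

# Hypothetical labeled test set: human_label is ground truth,
# eval_label is what the evaluator predicted on the same examples.
df = pd.DataFrame({
    "human_label": ["fail", "fail", "pass", "pass", "pass", "fail"],
    "eval_label":  ["fail", "pass", "pass", "pass", "fail", "fail"],
})

tp = ((df["eval_label"] == "fail") & (df["human_label"] == "fail")).sum()
tn = ((df["eval_label"] == "pass") & (df["human_label"] == "pass")).sum()
fp = ((df["eval_label"] == "fail") & (df["human_label"] == "pass")).sum()
fn = ((df["eval_label"] == "pass") & (df["human_label"] == "fail")).sum()

tpr = tp / (tp + fn)  # share of real failures the evaluator catches
tnr = tn / (tn + fp)  # share of good outputs it correctly passes
print(f"TPR={tpr:.0%}  TNR={tnr:.0%}")
```

In practice the dataframe comes from your 100+ labeled examples; an evaluator with a weak TPR or TNR adds noise, not signal.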

## Evaluator Lifecycle

1. **Discover** - Error analysis reveals a pattern
2. **Design** - Define criteria and test cases
3. **Implement** - Build a code or LLM evaluator; see the sketch after this list
4. **Calibrate** - Validate against human labels
5. **Deploy** - Add to the experiment/CI pipeline
6. **Monitor** - Track accuracy over time
7. **Maintain** - Update as the product evolves
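
One way to implement step 3 is an LLM-as-judge built on `llm_classify` from `phoenix.evals`. A minimal sketch, assuming an OpenAI judge model (needs `OPENAI_API_KEY`) and a hypothetical refund-policy criterion; the template and column names are illustrative, and exact parameter names may vary across `arize-phoenix-evals` versions:

```python
import pandas as pd
from phoenix.evals import OpenAIModel, llm_classify

# Hypothetical judge template: {input}/{output} must match dataframe columns.
POLICY_TEMPLATE = """You are checking whether a support response follows refund policy:
refunds are only offered within 30 days of purchase.

[Question]: {input}
[Response]: {output}

Answer with a single word: "pass" or "fail"."""

df = pd.DataFrame({
    "input": ["Can I get a refund after 60 days?"],
    "output": ["Yes, we refund purchases at any time."],  # violates the policy
})

eval_df = llm_classify(
    dataframe=df,
    model=OpenAIModel(model="gpt-4o"),
    template=POLICY_TEMPLATE,
    rails=["pass", "fail"],       # snap the judge's answer onto allowed labels
    provide_explanation=True,     # adds an explanation column for debugging
)
print(eval_df[["label", "explanation"]])
```

The `rails` argument constrains the judge's output to the listed labels, which keeps step 4 a straightforward label-vs-label comparison against your human annotations.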

## What NOT to Automate

- **Rare issues** - <5 instances? Watchlist, don't build
- **Quick fixes** - Fixable by a prompt change? Fix it
- **Evolving criteria** - Stabilize the definition first