# Evaluators: LLM Evaluators in Python
LLM evaluators use a language model to judge outputs. Use them when the evaluation criteria are subjective.
## Quick Start
```python
from phoenix.evals import ClassificationEvaluator, LLM

llm = LLM(provider="openai", model="gpt-4o")

HELPFULNESS_TEMPLATE = """Rate how helpful the response is.

<question>{{input}}</question>
<response>{{output}}</response>

"helpful" means directly addresses the question.
"not_helpful" means does not address the question.

Your answer (helpful/not_helpful):"""

helpfulness = ClassificationEvaluator(
    name="helpfulness",
    prompt_template=HELPFULNESS_TEMPLATE,
    llm=llm,
    choices={"not_helpful": 0, "helpful": 1},
)
```
## Template Variables
Use XML tags to wrap variables for clarity:
| Variable | XML Tag |
|---|---|
| `{{input}}` | `<question>{{input}}</question>` |
| `{{output}}` | `<response>{{output}}</response>` |
| `{{reference}}` | `<reference>{{reference}}</reference>` |
| `{{context}}` | `<context>{{context}}</context>` |
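For instance, a faithfulness-style template grounds the judgment in retrieved context via `{{context}}`. This is a hypothetical sketch using the `ClassificationEvaluator` API from Quick Start; the label names and wording are illustrative, not a built-in Phoenix prompt:

```python
FAITHFULNESS_TEMPLATE = """Does the response only state facts supported by the context?

<context>{{context}}</context>
<response>{{output}}</response>

"faithful" means every claim is supported by the context.
"unfaithful" means at least one claim is not supported.

Your answer (faithful/unfaithful):"""

faithfulness = ClassificationEvaluator(
    name="faithfulness",
    prompt_template=FAITHFULNESS_TEMPLATE,
    llm=llm,  # the LLM defined in Quick Start
    choices={"unfaithful": 0, "faithful": 1},
)
```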
## create_classifier (Factory)
Shorthand factory that returns a `ClassificationEvaluator`. Prefer instantiating `ClassificationEvaluator` directly when you need more parameters or customization:
```python
from phoenix.evals import create_classifier, LLM

relevance = create_classifier(
    name="relevance",
    prompt_template="""Is this response relevant to the question?

<question>{{input}}</question>
<response>{{output}}</response>

Answer (relevant/irrelevant):""",
    llm=LLM(provider="openai", model="gpt-4o"),
    choices={"relevant": 1.0, "irrelevant": 0.0},
)
```
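Both forms produce the same kind of evaluator; the factory trades configurability for brevity, which makes it a reasonable fit for one-off evaluators defined inline.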
## Input Mapping
Column names must match template variables. Rename columns or use `bind_evaluator`:
```python
# Option 1: Rename columns to match template variables
df = df.rename(columns={"user_query": "input", "ai_response": "output"})

# Option 2: Use bind_evaluator
from phoenix.evals import bind_evaluator

bound = bind_evaluator(
    evaluator=helpfulness,
    input_mapping={"input": "user_query", "output": "ai_response"},
)
```
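The bound evaluator reads `user_query` and `ai_response` directly, so the DataFrame keeps its original column names; pass `bound` to `evaluate_dataframe` like any other evaluator.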
## Running
```python
from phoenix.evals import evaluate_dataframe

results_df = evaluate_dataframe(dataframe=df, evaluators=[helpfulness])
```
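Putting the pieces together, here is a minimal end-to-end run. The example rows are hypothetical, and the exact result columns appended by `evaluate_dataframe` depend on your Phoenix version:

```python
import pandas as pd
from phoenix.evals import evaluate_dataframe

# Hypothetical data; column names match the {{input}}/{{output}} template variables.
df = pd.DataFrame(
    {
        "input": ["What is Phoenix?", "How do I export traces?"],
        "output": [
            "Phoenix is an open-source LLM observability platform.",
            "I like turtles.",
        ],
    }
)

results_df = evaluate_dataframe(dataframe=df, evaluators=[helpfulness])
print(results_df.columns)  # original columns plus per-evaluator result columns
```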
## Best Practices
- Be specific - Define exactly what pass/fail means (see the prompt sketch after this list)
- Include examples - Show concrete cases for each label
- Explanations by default - `ClassificationEvaluator` includes explanations automatically
- Study built-in prompts - See `phoenix.evals.__generated__.classification_evaluator_configs` for examples of well-structured evaluation prompts (Faithfulness, Correctness, DocumentRelevance, etc.)
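As an illustration of the first two points, here is a hypothetical template that pins down each label and adds concrete cases. The wording is a sketch, not a built-in Phoenix prompt:

```python
CONCISENESS_TEMPLATE = """Judge whether the response is concise.

<question>{{input}}</question>
<response>{{output}}</response>

"concise" means the response answers the question with no unnecessary content.
"verbose" means the response repeats itself or adds unrelated material.

Examples:
- Q: "What port does Phoenix use?" R: "6006 by default." -> concise
- Q: "What port does Phoenix use?" R: "Great question! Ports are fascinating..." -> verbose

Your answer (concise/verbose):"""

conciseness = ClassificationEvaluator(
    name="conciseness",
    prompt_template=CONCISENESS_TEMPLATE,
    llm=llm,  # the LLM defined in Quick Start
    choices={"verbose": 0, "concise": 1},
)
```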