# Setup: Python
Packages required for Phoenix evals and experiments.
## Installation
```bash
# Core Phoenix package (includes client, evals, otel)
pip install arize-phoenix

# Or install individual packages
pip install arize-phoenix-client  # Phoenix client only
pip install arize-phoenix-evals   # Evaluation utilities
pip install arize-phoenix-otel    # OpenTelemetry integration
```
## LLM Providers
For LLM-as-judge evaluators, install your provider's SDK:
```bash
pip install openai               # OpenAI
pip install anthropic            # Anthropic
pip install google-generativeai  # Google
```
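With a provider SDK installed, the provider-agnostic `LLM` wrapper (listed under Key Imports below) can be pointed at it. A minimal sketch, assuming the wrapper accepts `provider` and `model` keyword arguments and that your API key is set in the environment; verify the exact signature against your installed version:

```python
from phoenix.evals import LLM

# Assumes OPENAI_API_KEY is set in the environment.
# provider/model kwargs are assumptions about the Evals 2.0 wrapper.
llm = LLM(provider="openai", model="gpt-4o")
```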
## Validation (Optional)
```bash
pip install scikit-learn  # For TPR/TNR metrics
```
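scikit-learn is only needed if you want to validate an LLM-as-judge evaluator against hand-labeled ground truth. A sketch of the TPR/TNR calculation (the labels here are hypothetical):

```python
from sklearn.metrics import confusion_matrix

# Hypothetical binary labels: 1 = hallucinated, 0 = factual
y_true = [1, 0, 1, 1, 0, 0]  # hand labels
y_pred = [1, 0, 0, 1, 0, 1]  # evaluator output

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print(f"TPR (sensitivity): {tp / (tp + fn):.2f}")
print(f"TNR (specificity): {tn / (tn + fp):.2f}")
```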
## Quick Verify
```python
from phoenix.client import Client
from phoenix.evals import LLM, ClassificationEvaluator
from phoenix.otel import register

# All imports should work
print("Phoenix Python setup complete")
```
## Key Imports (Evals 2.0)
```python
from phoenix.client import Client
from phoenix.evals import (
    ClassificationEvaluator,   # LLM classification evaluator (preferred)
    LLM,                       # Provider-agnostic LLM wrapper
    async_evaluate_dataframe,  # Batch evaluate a DataFrame (preferred, async)
    evaluate_dataframe,        # Batch evaluate a DataFrame (sync)
    create_evaluator,          # Decorator for code-based evaluators
    create_classifier,         # Factory for LLM classification evaluators
    bind_evaluator,            # Map column names to evaluator params
    Score,                     # Score dataclass
)
from phoenix.evals.utils import to_annotation_dataframe  # Format results for Phoenix annotations
```
- **Prefer** `ClassificationEvaluator` over `create_classifier` (more parameters/customization).
- **Prefer** `async_evaluate_dataframe` over `evaluate_dataframe` (better throughput for LLM evals).
- **Do NOT** use legacy 1.0 imports: `OpenAIModel`, `AnthropicModel`, `run_evals`, `llm_classify`.
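Putting the preferred pieces together, here is a minimal end-to-end sketch: build an `LLM`-backed `ClassificationEvaluator`, run it over a DataFrame with `async_evaluate_dataframe`, and format the results for Phoenix annotations. The prompt, column names, and the `choices` score mapping are illustrative assumptions, as are the keyword argument names; check them against your installed Evals 2.0 version.

```python
import asyncio
import pandas as pd
from phoenix.evals import LLM, ClassificationEvaluator, async_evaluate_dataframe
from phoenix.evals.utils import to_annotation_dataframe

# Assumed kwargs; see the LLM Providers section above.
llm = LLM(provider="openai", model="gpt-4o")

# Hypothetical prompt template and label-to-score mapping.
evaluator = ClassificationEvaluator(
    name="hallucination",
    llm=llm,
    prompt_template=(
        "Is the answer grounded in the context?\n\n"
        "Context: {context}\nAnswer: {answer}"
    ),
    choices={"factual": 1.0, "hallucinated": 0.0},
)

# Toy data whose column names match the template placeholders.
df = pd.DataFrame(
    {
        "context": ["Phoenix is an open-source observability tool."],
        "answer": ["Phoenix is a closed-source database."],
    }
)

async def main() -> None:
    # Keyword names here are assumptions about the 2.0 API.
    results = await async_evaluate_dataframe(dataframe=df, evaluators=[evaluator])
    annotations = to_annotation_dataframe(results)
    print(annotations.head())

asyncio.run(main())
```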