# Python SDK Annotation Patterns

Add feedback to spans, traces, documents, and sessions using the Python client.

## Client Setup

```python
from phoenix.client import Client

client = Client()  # Default: http://localhost:6006
```
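If Phoenix runs somewhere other than the local default, the client can be pointed at that instance instead. A minimal sketch, assuming the client accepts `base_url` and `api_key` keyword arguments and using placeholder values throughout:

```python
from phoenix.client import Client

# Placeholder endpoint and credential for illustration; assumes the installed
# phoenix client accepts base_url/api_key keyword arguments.
client = Client(
    base_url="https://phoenix.example.com",
    api_key="YOUR_PHOENIX_API_KEY",
)
```

The annotation calls below work the same way regardless of which instance the client targets.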
## Span Annotations

Add feedback to individual spans:

```python
client.spans.add_span_annotation(
    span_id="abc123",
    annotation_name="quality",
    annotator_kind="HUMAN",
    label="high_quality",
    score=0.95,
    explanation="Accurate and well-formatted",
    metadata={"reviewer": "alice"},
    sync=True  # process synchronously so the annotation is immediately available
)
```
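To record many span annotations in one request, the batch pattern used for documents in the RAG example below also applies to spans. A minimal sketch, assuming a `log_span_annotations` method and a `SpanAnnotationData` payload analogous to the `SpanDocumentAnnotationData` used later in this document:

```python
from phoenix.client import Client
from phoenix.client.resources.spans import SpanAnnotationData  # assumed analogue of SpanDocumentAnnotationData

client = Client()

# Submit several span annotations in a single call instead of one request per span.
client.spans.log_span_annotations(
    span_annotations=[
        SpanAnnotationData(
            name="quality",
            span_id=span_id,
            annotator_kind="HUMAN",
            result={"label": label, "score": score},
        )
        for span_id, label, score in [
            ("abc123", "high_quality", 0.95),  # placeholder span IDs and results
            ("def456", "low_quality", 0.20),
        ]
    ]
)
```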
## Document Annotations

Rate individual documents in RETRIEVER spans:

```python
client.spans.add_document_annotation(
    span_id="retriever_span",
    document_position=0,  # 0-based index
    annotation_name="relevance",
    annotator_kind="LLM",
    label="relevant",
    score=0.95
)
```
## Trace Annotations

Feedback on entire traces:

```python
client.traces.add_trace_annotation(
    trace_id="trace_abc",
    annotation_name="correctness",
    annotator_kind="HUMAN",
    label="correct",
    score=1.0
)
```
## Session Annotations

Feedback on multi-turn conversations:

```python
client.sessions.add_session_annotation(
    session_id="session_xyz",
    annotation_name="user_satisfaction",
    annotator_kind="HUMAN",
    label="satisfied",
    score=0.85
)
```
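The `session_id` above must match a session ID already attached to the traced spans. One common way that happens (an instrumentation-side step, assuming the `openinference-instrumentation` package is installed) is the `using_session` context manager, sketched here with a placeholder ID and placeholder application code:

```python
from openinference.instrumentation import using_session

def run_conversation_turn() -> None:
    """Placeholder for application code that creates instrumented spans."""
    ...

# Spans created inside this block carry session.id = "session_xyz",
# so session-level annotations can later target that ID.
with using_session(session_id="session_xyz"):
    run_conversation_turn()
```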
## RAG Pipeline Example

```python
from phoenix.client import Client
from phoenix.client.resources.spans import SpanDocumentAnnotationData

client = Client()

# Document relevance (batch)
client.spans.log_document_annotations(
    document_annotations=[
        SpanDocumentAnnotationData(
            name="relevance", span_id="retriever_span", document_position=i,
            annotator_kind="LLM", result={"label": label, "score": score}
        )
        for i, (label, score) in enumerate([
            ("relevant", 0.95), ("relevant", 0.80), ("irrelevant", 0.10)
        ])
    ]
)

# LLM response quality
client.spans.add_span_annotation(
    span_id="llm_span",
    annotation_name="faithfulness",
    annotator_kind="LLM",
    label="faithful",
    score=0.90
)

# Overall trace quality
client.traces.add_trace_annotation(
    trace_id="trace_123",
    annotation_name="correctness",
    annotator_kind="HUMAN",
    label="correct",
    score=1.0
)
```
## API Reference

- [Python Client API](https://arize-phoenix.readthedocs.io/projects/client/en/latest/)