# Phoenix Tracing: Auto-Instrumentation (Python)
**Automatically create spans for LLM calls without code changes.**
## Overview
Auto-instrumentation patches supported libraries at runtime to create spans automatically. Use it for supported frameworks (LangChain, LlamaIndex, OpenAI SDK, etc.). For custom logic, see manual-instrumentation-python.md.
## Supported Frameworks
**Python:**
- LLM SDKs: OpenAI, Anthropic, Bedrock, Mistral, Vertex AI, Groq, Ollama
- Frameworks: LangChain, LlamaIndex, DSPy, CrewAI, Instructor, Haystack
- Install: `pip install openinference-instrumentation-{name}`
## Setup
**Install and enable:**
```bash
pip install arize-phoenix-otel
pip install openinference-instrumentation-openai # Add others as needed
```
```python
from phoenix.otel import register
register(project_name="my-app", auto_instrument=True) # Discovers all installed instrumentors
```
**Example:**
```python
from phoenix.otel import register
from openai import OpenAI
register(project_name="my-app", auto_instrument=True)
client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}]
)
```
Traces appear in the Phoenix UI with the model, input/output, token counts, and timing captured automatically. See the span kind files for full attribute schemas.
**Selective instrumentation** (explicit control):
```python
from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor
tracer_provider = register(project_name="my-app") # No auto_instrument
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```
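This pattern extends to multiple libraries by attaching each instrumentor to the same tracer provider. A minimal sketch, assuming the `openinference-instrumentation-langchain` package is also installed (its `LangChainInstrumentor` follows the same pattern as `OpenAIInstrumentor`):
```python
from phoenix.otel import register
from openinference.instrumentation.openai import OpenAIInstrumentor
from openinference.instrumentation.langchain import LangChainInstrumentor

tracer_provider = register(project_name="my-app")  # No auto_instrument

# Attach only the instrumentors you want; anything else stays uninstrumented
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
LangChainInstrumentor().instrument(tracer_provider=tracer_provider)
```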
## Limitations
Auto-instrumentation does NOT capture:
- Custom business logic
- Internal function calls
**Example:**
```python
def my_custom_workflow(query: str) -> str:
    preprocessed = preprocess(query)  # Not traced
    response = client.chat.completions.create(...)  # Traced (auto)
    postprocessed = postprocess(response)  # Not traced
    return postprocessed
```
**Solution:** Add manual instrumentation:
```python
@tracer.chain
def my_custom_workflow(query: str) -> str:
    preprocessed = preprocess(query)
    response = client.chat.completions.create(...)
    postprocessed = postprocess(response)
    return postprocessed
```
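The `tracer` used above is not defined in this snippet; with Phoenix it typically comes from the tracer provider returned by `register()`. A minimal sketch (project name is illustrative; see manual-instrumentation-python.md for the decorator details):
```python
from phoenix.otel import register

# register() returns a tracer provider; the tracer it hands back
# exposes the OpenInference decorators such as @tracer.chain
tracer_provider = register(project_name="my-app", auto_instrument=True)
tracer = tracer_provider.get_tracer(__name__)
```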