Mirror of https://github.com/github/awesome-copilot.git, synced 2026-04-11 18:55:55 +00:00
# Phoenix Tracing: Auto-Instrumentation (Python)

Automatically create spans for LLM calls without code changes.
## Overview

Auto-instrumentation patches supported libraries at runtime to create spans automatically. Use it for supported frameworks (LangChain, LlamaIndex, the OpenAI SDK, etc.). For custom logic, see manual-instrumentation-python.md.
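The runtime patching that auto-instrumentation performs can be illustrated with a minimal, framework-free sketch. The `FakeLLMClient`, `Span`, and `instrument` names below are invented for illustration; real OpenInference instrumentors wrap the actual SDK methods in essentially the same way:

```python
import functools
import time

# Toy span sink standing in for an OpenTelemetry tracer (illustrative only).
RECORDED_SPANS = []

class Span:
    def __init__(self, name: str):
        self.name = name
        self.attributes = {}

def instrument(method):
    """Wrap a method so each call records a span, like an instrumentor's patch."""
    @functools.wraps(method)
    def wrapper(self, *args, **kwargs):
        span = Span(name=method.__name__)
        start = time.perf_counter()
        result = method(self, *args, **kwargs)
        span.attributes["duration_s"] = time.perf_counter() - start
        span.attributes["output"] = result
        RECORDED_SPANS.append(span)
        return result
    return wrapper

class FakeLLMClient:
    """Stand-in for an SDK client; a real instrumentor patches the real class."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

# "Auto-instrumentation": patch the library class at runtime, no caller changes.
FakeLLMClient.complete = instrument(FakeLLMClient.complete)

client = FakeLLMClient()
client.complete("Hello!")      # Caller code is unchanged...
print(RECORDED_SPANS[0].name)  # ...but a span was recorded: "complete"
```

This is why enabling an instrumentor is enough: calling code never changes, only the patched library method does.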
## Supported Frameworks

Python:

- LLM SDKs: OpenAI, Anthropic, Bedrock, Mistral, Vertex AI, Groq, Ollama
- Frameworks: LangChain, LlamaIndex, DSPy, CrewAI, Instructor, Haystack

Install the matching instrumentor package for each library you use:

```shell
pip install openinference-instrumentation-{name}
```
## Setup

Install and enable:

```shell
pip install arize-phoenix-otel
pip install openinference-instrumentation-openai  # Add others as needed
```

```python
from phoenix.otel import register

register(project_name="my-app", auto_instrument=True)  # Discovers all installed instrumentors
```
Example:

```python
from openai import OpenAI
from phoenix.otel import register

register(project_name="my-app", auto_instrument=True)

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Hello!"}],
)
```
Traces appear in the Phoenix UI with model, input/output, tokens, and timing captured automatically. See the span kind files for full attribute schemas.
Selective instrumentation (explicit control):

```python
from openinference.instrumentation.openai import OpenAIInstrumentor
from phoenix.otel import register

tracer_provider = register(project_name="my-app")  # No auto_instrument
OpenAIInstrumentor().instrument(tracer_provider=tracer_provider)
```
## Limitations

Auto-instrumentation does NOT capture:

- Custom business logic
- Internal function calls

Example:

```python
def my_custom_workflow(query: str) -> str:
    preprocessed = preprocess(query)                # Not traced
    response = client.chat.completions.create(...)  # Traced (auto)
    postprocessed = postprocess(response)           # Not traced
    return postprocessed
```
Solution: add manual instrumentation with a Phoenix tracer:

```python
tracer = tracer_provider.get_tracer(__name__)  # tracer_provider returned by register()

@tracer.chain
def my_custom_workflow(query: str) -> str:
    preprocessed = preprocess(query)
    response = client.chat.completions.create(...)
    postprocessed = postprocess(response)
    return postprocessed
```
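Conceptually, a decorator like `@tracer.chain` wraps the whole function body in a span, so the workflow shows up as one traced unit with the auto-instrumented LLM call nested inside it. A minimal, self-contained sketch of that idea (the `chain` decorator and `TRACE` sink here are invented stand-ins, not the Phoenix API; the real decorator records OpenInference span attributes and exports them to a collector):

```python
import functools

TRACE = []  # Toy trace sink; a real tracer exports spans to a collector.

def chain(fn):
    """Record a span covering the whole function call, like @tracer.chain."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        span = {"name": fn.__name__, "input": args, "status": "UNSET"}
        TRACE.append(span)
        try:
            result = fn(*args, **kwargs)
            span["status"] = "OK"
            span["output"] = result
            return result
        except Exception:
            span["status"] = "ERROR"  # Errors are recorded, then re-raised
            raise
    return wrapper

@chain
def my_custom_workflow(query: str) -> str:
    return query.upper()  # Stand-in for preprocess -> LLM call -> postprocess

my_custom_workflow("hello")
print(TRACE[0]["name"], TRACE[0]["status"])  # my_custom_workflow OK
```

Because the decorator opens the span before the body runs and closes it after, pre- and post-processing steps that auto-instrumentation misses end up inside the workflow span.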