mirror of https://github.com/github/awesome-copilot.git
synced 2026-04-13 19:55:56 +00:00
chore: publish from staged
361 plugins/arize-ax/skills/arize-dataset/SKILL.md (new file)
---
name: arize-dataset
description: "INVOKE THIS SKILL when creating, managing, or querying Arize datasets and examples. Covers dataset CRUD, appending examples, exporting data, and file-based dataset creation using the ax CLI."
---

# Arize Dataset Skill

## Concepts

- **Dataset** = a versioned collection of examples used for evaluation and experimentation
- **Dataset Version** = a snapshot of a dataset at a point in time; updates can be in-place or create a new version
- **Example** = a single record in a dataset with arbitrary user-defined fields (e.g., `question`, `answer`, `context`)
- **Space** = an organizational container; datasets belong to a space

System-managed fields on examples (`id`, `created_at`, `updated_at`) are auto-generated by the server -- never include them in create or append payloads.
## Prerequisites

Proceed directly with the task — run the `ax` command you need. Do NOT check versions, env vars, or profiles upfront.

If an `ax` command fails, troubleshoot based on the error:

- `command not found` or version error → see references/ax-setup.md
- `401 Unauthorized` / missing API key → run `ax profiles show` to inspect the current profile. If the profile is missing or the API key is wrong, check `.env` for `ARIZE_API_KEY` and use it to create or update the profile via references/ax-profiles.md. If `.env` has no key either, ask the user for their Arize API key (https://app.arize.com/admin > API Keys)
- Space ID unknown → check `.env` for `ARIZE_SPACE_ID`, run `ax spaces list -o json`, or ask the user
- Project unclear → check `.env` for `ARIZE_DEFAULT_PROJECT`, ask the user, or run `ax projects list -o json --limit 100` and present the results as selectable options
## List Datasets: `ax datasets list`

Browse datasets in a space. Output goes to stdout.

```bash
ax datasets list
ax datasets list --space-id SPACE_ID --limit 20
ax datasets list --cursor CURSOR_TOKEN
ax datasets list -o json
```

### Flags

| Flag | Type | Default | Description |
|------|------|---------|-------------|
| `--space-id` | string | from profile | Filter by space |
| `--limit, -l` | int | 15 | Max results (1-100) |
| `--cursor` | string | none | Pagination cursor from the previous response |
| `-o, --output` | string | table | Output format: table, json, csv, parquet, or a file path |
| `-p, --profile` | string | default | Configuration profile |
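Cursor pagination follows one pattern regardless of the command: keep passing the `cursor` from the previous response until no cursor is returned. A minimal Python sketch of that loop, where the hypothetical `fetch_page` stands in for an `ax datasets list -o json --cursor CURSOR` invocation (the real call would shell out to `ax`):

```python
def fetch_page(cursor=None, page_size=2):
    # Hypothetical stand-in for `ax datasets list -o json --cursor CURSOR`.
    # Serves a fixed list in pages to illustrate the cursor contract.
    data = ["ds_a", "ds_b", "ds_c", "ds_d", "ds_e"]
    start = int(cursor) if cursor else 0
    page = data[start:start + page_size]
    next_cursor = str(start + page_size) if start + page_size < len(data) else None
    return {"datasets": page, "cursor": next_cursor}

def list_all():
    # Follow the pagination cursor until the server stops returning one.
    results, cursor = [], None
    while True:
        resp = fetch_page(cursor)
        results.extend(resp["datasets"])
        cursor = resp["cursor"]
        if cursor is None:
            return results

print(list_all())  # all five dataset IDs, gathered across three pages
```

The same loop works for any paginated `ax` listing; only the command behind `fetch_page` changes.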
## Get Dataset: `ax datasets get`

Quick metadata lookup -- returns dataset name, space, timestamps, and version list.

```bash
ax datasets get DATASET_ID
ax datasets get DATASET_ID -o json
```

### Flags

| Flag | Type | Default | Description |
|------|------|---------|-------------|
| `DATASET_ID` | string | required | Positional argument |
| `-o, --output` | string | table | Output format |
| `-p, --profile` | string | default | Configuration profile |

### Response fields

| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Dataset ID |
| `name` | string | Dataset name |
| `space_id` | string | Space this dataset belongs to |
| `created_at` | datetime | When the dataset was created |
| `updated_at` | datetime | Last modification time |
| `versions` | array | List of dataset versions (id, name, dataset_id, created_at, updated_at) |
## Export Dataset: `ax datasets export`

Download all examples to a file. Use `--all` for datasets larger than 500 examples (unlimited bulk export).

```bash
ax datasets export DATASET_ID
# -> dataset_abc123_20260305_141500/examples.json

ax datasets export DATASET_ID --all
ax datasets export DATASET_ID --version-id VERSION_ID
ax datasets export DATASET_ID --output-dir ./data
ax datasets export DATASET_ID --stdout
ax datasets export DATASET_ID --stdout | jq '.[0]'
```

### Flags

| Flag | Type | Default | Description |
|------|------|---------|-------------|
| `DATASET_ID` | string | required | Positional argument |
| `--version-id` | string | latest | Export a specific dataset version |
| `--all` | bool | false | Unlimited bulk export (use for datasets > 500 examples) |
| `--output-dir` | string | `.` | Output directory |
| `--stdout` | bool | false | Print JSON to stdout instead of a file |
| `-p, --profile` | string | default | Configuration profile |

**Agent auto-escalation rule:** If an export returns exactly 500 examples, the result is likely truncated — re-run with `--all` to get the full dataset.

**Export completeness verification:** After exporting, confirm the row count matches what the server reports:

```bash
# Get the server-reported count from dataset metadata
ax datasets get DATASET_ID -o json | jq '.versions[-1] | {version: .id, examples: .example_count}'

# Compare to what was exported
jq 'length' dataset_*/examples.json

# If counts differ, re-export with --all
```

Output is a JSON array of example objects. Each example has system fields (`id`, `created_at`, `updated_at`) plus all user-defined fields:

```json
[
  {
    "id": "ex_001",
    "created_at": "2026-01-15T10:00:00Z",
    "updated_at": "2026-01-15T10:00:00Z",
    "question": "What is 2+2?",
    "answer": "4",
    "topic": "math"
  }
]
```
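Because exports include the system fields, an exported file cannot be fed straight back into a create or append payload. A small Python sketch (field names taken from the export example above) that strips the server-managed fields first:

```python
import json

# Server-managed fields that must not appear in create/append payloads
SYSTEM_FIELDS = {"id", "created_at", "updated_at"}

def to_payload(exported):
    """Drop system-managed fields so exported examples can be re-created or appended."""
    return [{k: v for k, v in ex.items() if k not in SYSTEM_FIELDS} for ex in exported]

exported = json.loads("""[
  {"id": "ex_001", "created_at": "2026-01-15T10:00:00Z",
   "updated_at": "2026-01-15T10:00:00Z",
   "question": "What is 2+2?", "answer": "4", "topic": "math"}
]""")

payload = to_payload(exported)
print(json.dumps(payload))
# Pass the resulting string to: ax datasets append DATASET_ID --json '...'
```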
## Create Dataset: `ax datasets create`

Create a new dataset from a data file.

```bash
ax datasets create --name "My Dataset" --space-id SPACE_ID --file data.csv
ax datasets create --name "My Dataset" --space-id SPACE_ID --file data.json
ax datasets create --name "My Dataset" --space-id SPACE_ID --file data.jsonl
ax datasets create --name "My Dataset" --space-id SPACE_ID --file data.parquet
```

### Flags

| Flag | Type | Required | Description |
|------|------|----------|-------------|
| `--name, -n` | string | yes | Dataset name |
| `--space-id` | string | yes | Space to create the dataset in |
| `--file, -f` | path | yes | Data file: CSV, JSON, JSONL, or Parquet |
| `-o, --output` | string | no | Output format for the returned dataset metadata |
| `-p, --profile` | string | no | Configuration profile |

### Passing data via stdin

Use `--file -` to pipe data directly — no temp file needed:

```bash
echo '[{"question": "What is 2+2?", "answer": "4"}]' | ax datasets create --name "my-dataset" --space-id SPACE_ID --file -

# Or with a heredoc
ax datasets create --name "my-dataset" --space-id SPACE_ID --file - << 'EOF'
[{"question": "What is 2+2?", "answer": "4"}]
EOF
```

To add rows to an existing dataset, use `ax datasets append --json '[...]'` instead — no file needed.

### Supported file formats

| Format | Extension | Notes |
|--------|-----------|-------|
| CSV | `.csv` | Column headers become field names |
| JSON | `.json` | Array of objects |
| JSON Lines | `.jsonl` | One object per line (NOT a JSON array) |
| Parquet | `.parquet` | Column names become field names; preserves types |

**Format gotchas:**
- **CSV**: Loses type information — dates become strings, `null` becomes an empty string. Use JSON/Parquet to preserve types.
- **JSONL**: Each line is a separate JSON object. A JSON array (`[{...}, {...}]`) in a `.jsonl` file will fail — use the `.json` extension instead.
- **Parquet**: Preserves column types. Requires `pandas`/`pyarrow` to read locally: `pd.read_parquet("examples.parquet")`.
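The JSON-versus-JSONL distinction is the gotcha that trips up generated files most often. A quick Python illustration of why the same records need different serialization for `.json` and `.jsonl`:

```python
import json

records = [{"question": "What is 2+2?", "answer": "4"},
           {"question": "What is 3+3?", "answer": "6"}]

# .json: one JSON array for the whole file
as_json = json.dumps(records)

# .jsonl: one standalone JSON object per line, no surrounding array
as_jsonl = "\n".join(json.dumps(r) for r in records)

# Reading each format back requires the matching strategy
assert json.loads(as_json) == records
assert [json.loads(line) for line in as_jsonl.splitlines()] == records

# A line-by-line JSONL parser reading the array file sees a list, not an object:
first = json.loads(as_json.splitlines()[0])
print(type(first).__name__)  # list — the whole array arrives as one "line"
```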
## Append Examples: `ax datasets append`

Add examples to an existing dataset. Two input modes -- use whichever fits.

### Inline JSON (agent-friendly)

Generate the payload directly -- no temp files needed:

```bash
ax datasets append DATASET_ID --json '[{"question": "What is 2+2?", "answer": "4"}]'

ax datasets append DATASET_ID --json '[
  {"question": "What is gravity?", "answer": "A fundamental force..."},
  {"question": "What is light?", "answer": "Electromagnetic radiation..."}
]'
```

### From a file

```bash
ax datasets append DATASET_ID --file new_examples.csv
ax datasets append DATASET_ID --file additions.json
```

### To a specific version

```bash
ax datasets append DATASET_ID --json '[{"q": "..."}]' --version-id VERSION_ID
```

### Flags

| Flag | Type | Required | Description |
|------|------|----------|-------------|
| `DATASET_ID` | string | yes | Positional argument |
| `--json` | string | mutex | JSON array of example objects |
| `--file, -f` | path | mutex | Data file (CSV, JSON, JSONL, Parquet) |
| `--version-id` | string | no | Append to a specific version (default: latest) |
| `-o, --output` | string | no | Output format for the returned dataset metadata |
| `-p, --profile` | string | no | Configuration profile |

Exactly one of `--json` or `--file` is required.

### Validation

- Each example must be a JSON object with at least one user-defined field
- Maximum 100,000 examples per request

**Schema validation before append:** If the dataset already has examples, inspect its schema before appending to avoid silent field mismatches:

```bash
# Check existing field names in the dataset
ax datasets export DATASET_ID --stdout | jq '.[0] | keys'

# Verify your new data has matching field names
echo '[{"question": "..."}]' | jq '.[0] | keys'

# Both outputs should show the same user-defined fields
```

Fields are free-form: extra fields in new examples are added, and missing fields become null. However, typos in field names (e.g., `queston` vs `question`) create new columns silently -- verify spelling before appending.
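The typo hazard above can be caught mechanically by comparing key sets before appending. A small Python sketch (the `queston` misspelling is deliberate; `existing_keys` would come from the `jq '.[0] | keys'` check above):

```python
def field_mismatches(existing_keys, new_examples):
    """Return field names in the new examples that the dataset has never seen."""
    existing = set(existing_keys)
    return sorted({k for ex in new_examples for k in ex} - existing)

existing_keys = ["question", "answer"]  # from: ax datasets export ... | jq '.[0] | keys'
new_examples = [{"queston": "What is gravity?", "answer": "A fundamental force..."}]

unknown = field_mismatches(existing_keys, new_examples)
print(unknown)  # ['queston'] — likely a typo; appending would silently add a new column
```

An empty result means the new examples use only known fields; anything returned deserves a spelling check before the append runs.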
## Delete Dataset: `ax datasets delete`

```bash
ax datasets delete DATASET_ID
ax datasets delete DATASET_ID --force   # skip confirmation prompt
```

### Flags

| Flag | Type | Default | Description |
|------|------|---------|-------------|
| `DATASET_ID` | string | required | Positional argument |
| `--force, -f` | bool | false | Skip confirmation prompt |
| `-p, --profile` | string | default | Configuration profile |
## Workflows

### Find a dataset by name

Users often refer to datasets by name rather than ID. Resolve a name to an ID before running other commands:

```bash
# Find dataset ID by name
ax datasets list -o json | jq '.[] | select(.name == "eval-set-v1") | .id'

# If the list is paginated, fetch more
ax datasets list -o json --limit 100 | jq '.[] | select(.name | test("eval-set")) | {id, name}'
```

### Create a dataset from file for evaluation

1. Prepare a CSV/JSON/Parquet file with your evaluation columns (e.g., `input`, `expected_output`)
   - If generating data inline, pipe it via stdin using `--file -` (see the Create Dataset section)
2. `ax datasets create --name "eval-set-v1" --space-id SPACE_ID --file eval_data.csv`
3. Verify: `ax datasets get DATASET_ID`
4. Use the dataset ID to run experiments

### Add examples to an existing dataset

```bash
# Find the dataset
ax datasets list

# Append inline or from a file (see the Append Examples section for full syntax)
ax datasets append DATASET_ID --json '[{"question": "...", "answer": "..."}]'
ax datasets append DATASET_ID --file additional_examples.csv
```

### Download dataset for offline analysis

1. `ax datasets list` -- find the dataset
2. `ax datasets export DATASET_ID` -- download to a file
3. Parse the JSON: `jq '.[] | .question' dataset_*/examples.json`

### Export a specific version

```bash
# List versions
ax datasets get DATASET_ID -o json | jq '.versions'

# Export that version
ax datasets export DATASET_ID --version-id VERSION_ID
```

### Iterate on a dataset

1. Export the current version: `ax datasets export DATASET_ID`
2. Modify the examples locally
3. Append new rows: `ax datasets append DATASET_ID --file new_rows.csv`
4. Or create a fresh dataset: `ax datasets create --name "eval-set-v2" --space-id SPACE_ID --file updated_data.json`

### Pipe export to other tools

```bash
# Count examples
ax datasets export DATASET_ID --stdout | jq 'length'

# Extract a single field
ax datasets export DATASET_ID --stdout | jq '.[].question'

# Convert to CSV with jq
ax datasets export DATASET_ID --stdout | jq -r '.[] | [.question, .answer] | @csv'
```
## Dataset Example Schema

Examples are free-form JSON objects. There is no fixed schema -- columns are whatever fields you provide. System-managed fields are added by the server:

| Field | Type | Managed by | Notes |
|-------|------|------------|-------|
| `id` | string | server | Auto-generated UUID. Required on update, forbidden on create/append |
| `created_at` | datetime | server | Immutable creation timestamp |
| `updated_at` | datetime | server | Auto-updated on modification |
| *(any user field)* | any JSON type | user | String, number, boolean, null, nested object, array |
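The column rules scattered through this document (system-managed fields forbidden on create/append, plus the reserved columns listed under Troubleshooting) can be checked locally before the server rejects the payload. A Python pre-flight sketch built only from those rules:

```python
# Rules from this document: platform-managed and reserved column names
PLATFORM_MANAGED = {"id", "created_at", "updated_at"}
RESERVED = {"time", "count"}

def payload_errors(examples):
    """Validate a create/append payload against the column rules above."""
    errors = []
    if not examples:
        errors.append("Examples array is empty")
    for i, ex in enumerate(examples):
        if not isinstance(ex, dict):
            errors.append(f"example {i}: not a JSON object")
            continue
        for key in ex:
            if key in PLATFORM_MANAGED:
                errors.append(f"example {i}: platform-managed column {key!r}")
            elif key in RESERVED or key.startswith("source_record_"):
                errors.append(f"example {i}: reserved column {key!r}")
    return errors

ok = [{"question": "What is 2+2?", "answer": "4"}]
bad = [{"id": "ex_001", "time": 1, "question": "?"}]
print(payload_errors(ok))   # []
print(payload_errors(bad))  # platform-managed 'id' and reserved 'time' flagged
```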
## Related Skills

- **arize-trace**: Export production spans to understand what data to put in datasets
- **arize-experiment**: Run evaluations against this dataset (the usual next step)
- **arize-prompt-optimization**: Use dataset + experiment results to improve prompts

## Troubleshooting

| Problem | Solution |
|---------|----------|
| `ax: command not found` | See references/ax-setup.md |
| `401 Unauthorized` | API key is wrong, expired, or doesn't have access to this space. Fix the profile using references/ax-profiles.md. |
| `No profile found` | No profile is configured. See references/ax-profiles.md to create one. |
| `Dataset not found` | Verify the dataset ID with `ax datasets list` |
| `File format error` | Supported: CSV, JSON, JSONL, Parquet. Use `--file -` to read from stdin. |
| `platform-managed column` | Remove `id`, `created_at`, `updated_at` from create/append payloads |
| `reserved column` | Remove `time`, `count`, or any `source_record_*` field |
| `Provide either --json or --file` | Append requires exactly one input source |
| `Examples array is empty` | Ensure your JSON array or file contains at least one example |
| `not a JSON object` | Each element in the `--json` array must be a `{...}` object, not a string or number |

## Save Credentials for Future Use

See references/ax-profiles.md § Save Credentials for Future Use.
115 plugins/arize-ax/skills/arize-dataset/references/ax-profiles.md (new file)
# ax Profile Setup

Consult this when authentication fails (401, missing profile, missing API key). Do NOT run these checks proactively.

Use this when there is no profile, or a profile has incorrect settings (wrong API key, wrong region, etc.).

## 1. Inspect the current state

```bash
ax profiles show
```

Look at the output to understand what's configured:
- `API Key: (not set)` or missing → the key needs to be created/updated
- No profile output or "No profiles found" → no profile exists yet
- Connected but getting `401 Unauthorized` → the key is wrong or expired
- Connected but wrong endpoint/region → the region needs to be updated

## 2. Fix a misconfigured profile

If a profile exists but one or more settings are wrong, patch only what's broken.

**Never pass a raw API key value as a flag.** Always reference it via the `ARIZE_API_KEY` environment variable. If the variable is not already set in the shell, instruct the user to set it first, then run the command:

```bash
# If ARIZE_API_KEY is already exported in the shell:
ax profiles update --api-key "$ARIZE_API_KEY"

# Fix the region (no secret involved — safe to run directly)
ax profiles update --region us-east-1b

# Fix both at once
ax profiles update --api-key "$ARIZE_API_KEY" --region us-east-1b
```

`update` only changes the fields you specify — all other settings are preserved. If no profile name is given, the active profile is updated.

## 3. Create a new profile

If no profile exists, or if the existing profile needs to point to a completely different setup (different org, different region):

**Always reference the key via `$ARIZE_API_KEY`, never inline a raw value.**

```bash
# Requires ARIZE_API_KEY to be exported in the shell first
ax profiles create --api-key "$ARIZE_API_KEY"

# Create with a region
ax profiles create --api-key "$ARIZE_API_KEY" --region us-east-1b

# Create a named profile
ax profiles create work --api-key "$ARIZE_API_KEY" --region us-east-1b
```

To use a named profile with any `ax` command, add `-p NAME`:

```bash
ax spans export PROJECT_ID -p work
```

## 4. Getting the API key

**Never ask the user to paste their API key into the chat. Never log, echo, or display an API key value.**

If `ARIZE_API_KEY` is not already set, instruct the user to export it in their shell:

```bash
export ARIZE_API_KEY="..."  # user pastes their key here in their own terminal
```

They can find their key at https://app.arize.com/admin > API Keys. Recommend they create a **scoped service key** (not a personal user key) — service keys are not tied to an individual account and are safer for programmatic use. Keys are space-scoped — make sure they copy the key for the correct space.

Once the user confirms the variable is set, proceed with `ax profiles create --api-key "$ARIZE_API_KEY"` or `ax profiles update --api-key "$ARIZE_API_KEY"` as described above.

## 5. Verify

After any create or update:

```bash
ax profiles show
```

Confirm the API key and region are correct, then retry the original command.

## Space ID

There is no profile flag for the space ID. Save it as an environment variable:

**macOS/Linux** — add to `~/.zshrc` or `~/.bashrc`:

```bash
export ARIZE_SPACE_ID="U3BhY2U6..."
```

Then `source ~/.zshrc` (or restart the terminal).

**Windows (PowerShell):**

```powershell
[System.Environment]::SetEnvironmentVariable('ARIZE_SPACE_ID', 'U3BhY2U6...', 'User')
```

Restart the terminal for the change to take effect.

## Save Credentials for Future Use

At the **end of the session**, if the user manually provided any credentials during this conversation **and** those values were NOT already loaded from a saved profile or environment variable, offer to save them.

**Skip this entirely if:**
- The API key was already loaded from an existing profile or the `ARIZE_API_KEY` env var
- The space ID was already set via the `ARIZE_SPACE_ID` env var
- The user only used base64 project IDs (no space ID was needed)

**How to offer:** Use **AskQuestion**: *"Would you like to save your Arize credentials so you don't have to enter them next time?"* with options `"Yes, save them"` / `"No thanks"`.

**If the user says yes:**

1. **API key** — Run `ax profiles show` to check the current state. Then run `ax profiles create --api-key "$ARIZE_API_KEY"` or `ax profiles update --api-key "$ARIZE_API_KEY"` (the key must already be exported as an env var — never pass a raw key value).

2. **Space ID** — See the Space ID section above to persist it as an environment variable.
38 plugins/arize-ax/skills/arize-dataset/references/ax-setup.md (new file)
# ax CLI — Troubleshooting

Consult this only when an `ax` command fails. Do NOT run these checks proactively.

## Check version first

If `ax` is installed (i.e., the failure is not `command not found`), run `ax --version` before investigating further. The version must be `0.8.0` or higher — many errors are caused by an outdated install. If the version is too old, see **Version too old** below.
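When scripting this check, compare versions numerically rather than as strings (a plain string comparison would wrongly rank `0.10.0` below `0.8.0`). A minimal Python sketch of the comparison against the `0.8.0` floor:

```python
def version_tuple(v):
    # "0.8.0" -> (0, 8, 0); numeric comparison avoids the "0.10.0" < "0.8.0" string trap
    return tuple(int(part) for part in v.split("."))

MIN_VERSION = "0.8.0"

def is_supported(installed):
    """True if the installed ax version meets the 0.8.0 minimum."""
    return version_tuple(installed) >= version_tuple(MIN_VERSION)

print(is_supported("0.10.0"))  # True — numerically newer despite sorting lower as a string
print(is_supported("0.7.3"))   # False — upgrade required
```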
## `ax: command not found`

**macOS/Linux:**
1. Check common locations: `~/.local/bin/ax`, `~/Library/Python/*/bin/ax`
2. Install: `uv tool install arize-ax-cli` (preferred), `pipx install arize-ax-cli`, or `pip install arize-ax-cli`
3. Add to PATH if needed: `export PATH="$HOME/.local/bin:$PATH"`

**Windows (PowerShell):**
1. Check: `Get-Command ax` or `where.exe ax`
2. Common locations: `%APPDATA%\Python\Scripts\ax.exe`, `%LOCALAPPDATA%\Programs\Python\Python*\Scripts\ax.exe`
3. Install: `pip install arize-ax-cli`
4. Add to PATH: `$env:PATH = "$env:APPDATA\Python\Scripts;$env:PATH"`

## Version too old (below 0.8.0)

Upgrade: `uv tool install --force --reinstall arize-ax-cli`, `pipx upgrade arize-ax-cli`, or `pip install --upgrade arize-ax-cli`

## SSL/certificate error

- macOS: `export SSL_CERT_FILE=/etc/ssl/cert.pem`
- Linux: `export SSL_CERT_FILE=/etc/ssl/certs/ca-certificates.crt`
- Fallback: `export SSL_CERT_FILE=$(python -c "import certifi; print(certifi.where())")`

## Subcommand not recognized

Upgrade ax (see above) or use the closest available alternative.

## Still failing

Stop and ask the user for help.