model-evaluation-suite

@jeremylongshore/model-evaluation-suite by jeremylongshore
1,004 stars · 122 forks · Updated 1/18/2026

evaluating-machine-learning-models: This skill enables Claude to evaluate machine learning models with a comprehensive suite of metrics. Use it when the user requests model performance analysis, validation, or testing; it assesses accuracy, precision, recall, F1-score, and other relevant metrics. Trigger it when the user mentions "evaluate model", "model performance", "testing metrics", "validation results", or asks for a comprehensive "model evaluation".
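To make the metrics concrete, here is a minimal sketch of how accuracy, precision, recall, and F1-score are computed for a binary classifier. This is an illustration only, not the skill's actual implementation; the function name and sample labels are invented for the example.

```python
def classification_metrics(y_true, y_pred):
    """Compute accuracy, precision, recall, and F1 for binary labels (0/1)."""
    # Tally the confusion-matrix cells.
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

    accuracy = (tp + tn) / len(y_true)
    # Guard against division by zero when a class is never predicted/present.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return {"accuracy": accuracy, "precision": precision,
            "recall": recall, "f1": f1}

# Toy example: 6 samples, 4 correct predictions.
m = classification_metrics([1, 0, 1, 1, 0, 1], [1, 0, 0, 1, 1, 1])
```

In practice an evaluation suite like this one would typically lean on a library such as scikit-learn for these calculations, but the definitions above are the standard ones.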

Installation

$ skills install @jeremylongshore/model-evaluation-suite

Supported assistants: Claude Code, Cursor, Copilot, Codex, Antigravity

Details

Path: backups/skills-batch-20251204-000554/plugins/ai-ml/model-evaluation-suite/skills/model-evaluation-suite/SKILL.md
Branch: main
Scoped Name: @jeremylongshore/model-evaluation-suite

Usage

After installing, this skill will be available to your AI coding assistant.

Verify installation:

$ skills list