---
name: whisper-transcribe
description: |
  Transcribes audio and video files to text using OpenAI's Whisper CLI with contextual grounding.
  Converts audio/video to text, transcribes recordings, and creates transcripts from media files.
  Use when asked to "whisper transcribe", "transcribe audio", "convert recording to text", or "speech to text".
  Uses markdown files in the same directory as context to improve transcription accuracy for technical terms, proper nouns, and domain-specific vocabulary.
version: 1.0.0
category: media-processing
triggers:
  - whisper
  - transcribe
  - transcription
  - audio to text
  - video to text
  - speech to text
  - convert recording
  - meeting transcript
  - .mp3
  - .wav
  - .m4a
  - .mp4
  - .webm
author: Claude Code
license: MIT
tags:
  - whisper
  - transcription
  - audio
  - video
  - speech-to-text
  - context-grounding
---
# Whisper Transcribe Skill

Transcribe audio and video files to text using OpenAI's Whisper with contextual grounding from markdown files.
## Purpose

Intelligent audio/video transcription that:

- Converts media files to accurate text transcripts
- Uses markdown context files to correct technical terms, names, and jargon
- Handles various audio/video formats (mp3, wav, m4a, mp4, webm, etc.)
## When to Use

- User asks to transcribe an audio or video file
- User wants to convert a recording to text
- User mentions "whisper" in the context of transcription
- User needs meeting notes or interview transcripts
- User has media files with domain-specific terminology
## Installation

### macOS (recommended for MacBook Pro)

```bash
# Install via Homebrew (recommended)
brew install ffmpeg openai-whisper
```

### Linux (pip)

```bash
# Install ffmpeg first
sudo apt install ffmpeg       # Debian/Ubuntu
# or: sudo dnf install ffmpeg # Fedora

# Install Whisper
pip install openai-whisper
```

### Verify Installation

```bash
# The whisper CLI does not define a --version flag; --help confirms it is on PATH
whisper --help
ffmpeg -version
```
## Transcription Workflow

### Step 1: Identify Media File and Context

- Locate the audio/video file to transcribe
- Check for markdown files in the same directory (context files)
- If no context files exist, optionally create one from `assets/context-template.md`
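The context-discovery part of this step can be sketched in a few lines. `find_context_files` is a hypothetical helper, not part of the shipped script:

```python
from pathlib import Path

def find_context_files(media_path: str) -> list[Path]:
    """Return markdown context files sitting next to the media file."""
    media = Path(media_path)
    # Every .md file in the media file's directory counts as context.
    return sorted(p for p in media.parent.glob("*.md") if p.is_file())
```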
### Step 2: Run Whisper Transcription

Basic transcription:

```bash
whisper "/path/to/audio.mp3" --output_dir "/path/to/output"
```

With model selection (trade-off: speed vs. accuracy):

```bash
# Fast (less accurate)
whisper "audio.mp3" --model tiny

# Balanced (recommended)
whisper "audio.mp3" --model base

# High quality
whisper "audio.mp3" --model small

# Best quality (slower, requires more RAM)
whisper "audio.mp3" --model medium
whisper "audio.mp3" --model large
```

With language specification:

```bash
whisper "audio.mp3" --language en
```

Output format options:

```bash
whisper "audio.mp3" --output_format txt   # Plain text
whisper "audio.mp3" --output_format srt   # Subtitles
whisper "audio.mp3" --output_format vtt   # Web subtitles
whisper "audio.mp3" --output_format json  # Detailed JSON
whisper "audio.mp3" --output_format all   # All formats
```
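Grounding can also start at transcription time: the Whisper CLI accepts an `--initial_prompt` string that seeds the decoder and biases it toward the vocabulary it contains. A sketch, assuming a `context.md` with one term per bullet line:

```bash
# Collect bullet-list terms from context.md into a comma-separated string.
TERMS=$(sed -n 's/^- //p' context.md | paste -sd, -)

# Seed the decoder with that vocabulary; Whisper truncates the prompt
# to roughly its last 224 tokens.
whisper "audio.mp3" --model base --initial_prompt "Known terms: $TERMS"
```

This complements, but does not replace, the post-hoc correction pass in Step 3.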
### Step 3: Apply Context Grounding

Use the `scripts/transcribe_with_context.py` script for automated grounding, or apply corrections manually.

Automated approach (recommended):

```bash
python scripts/transcribe_with_context.py /path/to/audio.mp3
```

For manual grounding:

- Read the transcript output
- Read all `.md` files in the media file's directory
- Extract terminology, names, and technical terms from the context files
- Search the transcript for likely misrecognitions
- Apply corrections based on context

Common corrections:

- "cooler net ease" -> "Kubernetes"
- "sequel" -> "SQL"
- "post gress" -> "Postgres"
- Names: match phonetic variations to names in the context files
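The correction pass can be sketched as a case-insensitive, whole-phrase replacement. The `CORRECTIONS` map here is illustrative, assembled by hand from context files:

```python
import re

# Illustrative misrecognition -> intended-term map built from context files.
CORRECTIONS = {
    "cooler net ease": "Kubernetes",
    "sequel": "SQL",
    "post gress": "Postgres",
}

def apply_corrections(transcript: str, corrections: dict[str, str]) -> str:
    """Replace known misrecognitions, ignoring case and matching whole phrases."""
    for wrong, right in corrections.items():
        pattern = re.compile(r"\b" + re.escape(wrong) + r"\b", re.IGNORECASE)
        transcript = pattern.sub(right, transcript)
    return transcript
```

Note that blind replacement of short words like "sequel" can over-correct (a film sequel is not SQL), so review the changes before saving.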
### Step 4: Save Corrected Transcript

Save the grounded transcript with a clear filename:

```
original_filename_transcript.txt
original_filename_transcript.md
```
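A small helper can derive that name from the media path; `transcript_path` is hypothetical, shown only to pin down the naming convention:

```python
from pathlib import Path

def transcript_path(media_path: str, ext: str = "txt") -> Path:
    """Map '/dir/interview.mp3' to '/dir/interview_transcript.<ext>'."""
    media = Path(media_path)
    return media.with_name(f"{media.stem}_transcript.{ext}")
```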
## Context Files

Context files are markdown files in the same directory as the media file. They provide grounding information to improve transcription accuracy.

### What to Include in Context Files

- **People**: names of speakers, team members, interviewees
- **Technical terms**: domain-specific vocabulary, product names
- **Acronyms**: abbreviations and their expansions
- **Organizations**: company names, department names
- **Projects**: project codenames, feature names
### Context File Example

See `assets/context-template.md` for a complete template.

```markdown
# Meeting Context

## Speakers
- Richard Hightower (host)
- Jane Smith (engineering lead)

## Technical Terms
- Kubernetes (container orchestration)
- FastAPI (Python web framework)
- AlloyDB (Google Cloud database)

## Acronyms
- CI/CD - Continuous Integration/Continuous Deployment
- PR - Pull Request
```
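Turning a context file like the one above into a flat vocabulary list only requires reading the bullet lines. This `extract_terms` helper is a sketch, assuming the `- Term (note)` and `- ABBR - expansion` conventions shown:

```python
import re

def extract_terms(markdown_text: str) -> list[str]:
    """Collect the leading term from each bullet line in a context file."""
    terms = []
    for line in markdown_text.splitlines():
        line = line.strip()
        if not line.startswith("- "):
            continue
        # Drop '(note)' suffixes and ' - expansion' tails; keep the term itself.
        term = re.split(r"\s+\(|\s+-\s+", line[2:])[0].strip()
        if term:
            terms.append(term)
    return terms
```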
## Model Selection Guide

Use `base` for general use and `medium` for important recordings. See `references/whisper-options.md` for the full model comparison and all available options.

Quick reference: `tiny` (fastest) < `base` (balanced) < `small` (better) < `medium` (high) < `large` (best accuracy)

On a MacBook Pro with Apple Silicon, the `small` or `medium` models offer the best speed/accuracy balance.
## Troubleshooting

### "whisper: command not found"

```bash
# macOS
brew install openai-whisper

# Linux
pip install openai-whisper
export PATH="$HOME/.local/bin:$PATH"
```

### "ffmpeg not found"

```bash
# macOS
brew install ffmpeg

# Linux
sudo apt install ffmpeg
```

### Out of memory errors

Use a smaller model:

```bash
whisper "audio.mp3" --model tiny
```

### Slow transcription

- Use the `tiny` or `base` model for faster results
- Ensure the correct architecture is being used (Apple Silicon vs. Intel)
## Resources

### scripts/

The `scripts/transcribe_with_context.py` script automates the full workflow:

- Finds context files automatically
- Runs Whisper transcription
- Applies context-based corrections
- Saves the final transcript

Usage:

```bash
python scripts/transcribe_with_context.py /path/to/audio.mp3
```

### references/

See `references/whisper-options.md` for the complete CLI reference and advanced options.

### assets/

`assets/context-template.md` provides a template for creating context files that improve transcription accuracy.