Analyze a codebase and generate comprehensive documentation including architecture, components, interfaces, workflows, and dependencies. Creates an AI-optimized knowledge base (index.md) and can consolidate into AGENTS.md, README.md, or CONTRIBUTING.md. Use when the user wants to document a codebase, create AGENTS.md, understand system architecture, generate developer documentation, or asks to "summarize the codebase".
## Installation

After installing, this skill will be available to your AI coding assistant.

Verify installation:

```shell
npx agent-skills-cli list
```

## Skill Instructions
---
name: codebase-summary
description: Analyze a codebase and generate comprehensive documentation including architecture, components, interfaces, workflows, and dependencies. Creates an AI-optimized knowledge base (index.md) and can consolidate into AGENTS.md, README.md, or CONTRIBUTING.md. Use when the user wants to document a codebase, create AGENTS.md, understand system architecture, generate developer documentation, or asks to "summarize the codebase".
---
# Codebase Summary
Generate comprehensive codebase documentation optimized for AI assistants and developers.
## Parameters
Gather all parameters upfront in a single prompt:
| Parameter | Default | Description |
|---|---|---|
| `codebase_path` | Current directory | Path to analyze |
| `output_dir` | `.sop/summary` | Documentation output directory |
| `consolidate` | `false` | Create consolidated file at codebase root |
| `consolidate_target` | `AGENTS.md` | Target: AGENTS.md, README.md, or CONTRIBUTING.md |
| `check_consistency` | `true` | Check for cross-document inconsistencies |
| `check_completeness` | `true` | Identify documentation gaps |
| `update_mode` | `false` | Update existing docs based on git changes |
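As an illustration (not part of the skill itself), the parameter table above could be collected and validated like this; the `resolve_params` helper and its merge logic are assumptions for the sketch:

```python
# Sketch: merge user overrides onto the documented defaults and validate them.
DEFAULTS = {
    "codebase_path": ".",
    "output_dir": ".sop/summary",
    "consolidate": False,
    "consolidate_target": "AGENTS.md",
    "check_consistency": True,
    "check_completeness": True,
    "update_mode": False,
}

VALID_TARGETS = {"AGENTS.md", "README.md", "CONTRIBUTING.md"}

def resolve_params(overrides=None):
    """Return the full parameter set, rejecting unknown keys and bad targets."""
    params = {**DEFAULTS, **(overrides or {})}
    unknown = set(params) - set(DEFAULTS)
    if unknown:
        raise ValueError(f"Unknown parameters: {sorted(unknown)}")
    if params["consolidate_target"] not in VALID_TARGETS:
        raise ValueError(f"Invalid consolidate_target: {params['consolidate_target']}")
    return params
```

Gathering everything in one pass like this avoids repeated back-and-forth prompts mid-workflow.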
## Workflow

### Step 1: Setup

- Validate `codebase_path` exists
- Create `output_dir` if needed
- If `update_mode` and `index.md` exists:
  - Run `git log --oneline -20` to identify recent changes
  - Focus analysis on modified components
### Step 2: Analyze Structure

Run the structure analyzer:

```shell
python {baseDir}/scripts/analyze_structure.py "{codebase_path}" --depth 4 --output "{output_dir}/codebase_info.md"
```

Run the dependency extractor:

```shell
python {baseDir}/scripts/extract_dependencies.py "{codebase_path}" --output "{output_dir}/dependencies.md"
```
Then manually analyze:
- Identify packages, modules, major components
- Map architectural patterns (MVC, microservices, etc.)
- Find key interfaces, APIs, entry points
### Step 3: Generate Documentation

Create these files in `{output_dir}/`:

**index.md** - Primary AI context file:
- AI instructions for using the documentation
- Quick reference table mapping questions to files
- Table of contents with summaries for each file
- Brief codebase overview

**architecture.md**:
- System architecture with Mermaid `graph` diagram
- Layer descriptions
- Design patterns used
- Key design decisions with rationale

**components.md**:
- Component overview with Mermaid `classDiagram`
- Per-component: purpose, location, key files, dependencies, interface

**interfaces.md**:
- API endpoints with request/response formats
- Internal interfaces and implementations
- Error codes and handling

**data_models.md**:
- ER diagram with Mermaid `erDiagram`
- Per-model: table, fields, indexes, relationships

**workflows.md**:
- Key processes with Mermaid `sequenceDiagram`
- Step-by-step breakdowns
- Error handling

See `{baseDir}/references/documentation-templates.md` for templates.
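The quick-reference table in `index.md` could be rendered like this; the question-to-file mapping below is a hypothetical example, not the skill's actual template:

```python
# Hypothetical mapping of common questions to the generated files.
QUICK_REFERENCE = [
    ("How is the system structured?", "architecture.md"),
    ("What does each component do?", "components.md"),
    ("What APIs are exposed?", "interfaces.md"),
    ("How is data stored?", "data_models.md"),
    ("How do key processes work?", "workflows.md"),
]

def render_index(overview):
    """Render a minimal index.md with a quick-reference table."""
    lines = [
        "# Codebase Documentation Index",
        "",
        overview,
        "",
        "| Question | See |",
        "|---|---|",
    ]
    for question, target in QUICK_REFERENCE:
        lines.append(f"| {question} | [{target}]({target}) |")
    return "\n".join(lines) + "\n"
```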
### Step 4: Review

If `check_consistency`:
- Verify terminology consistency across documents
- Check cross-references are valid

If `check_completeness`:
- Identify undocumented components
- Note gaps from language/framework limitations

Save findings to `{output_dir}/review_notes.md`.
### Step 5: Consolidate (if enabled)

If `consolidate` is true:
- Create the file at the codebase root (not in `output_dir`)
- Use `consolidate_target` as the filename
- Tailor content to the target:
| Target | Focus |
|---|---|
| AGENTS.md | AI context, directory structure, coding patterns, testing |
| README.md | Project overview, installation, usage, getting started |
| CONTRIBUTING.md | Dev setup, coding standards, contribution workflow |
**Default AGENTS.md prompt:** Focus on information NOT in README.md or CONTRIBUTING.md: file purposes, directory structure, coding patterns, testing instructions, package guidance.
### Step 6: Summary

Report:
- What was documented
- Next steps for using the documentation
- How to add `index.md` to the AI assistant context
- If `update_mode`: summarize detected changes
## Output Structure

```
{consolidate_target}      # At codebase root if consolidate=true
{output_dir}/
├── index.md              # Primary AI context (read this first)
├── codebase_info.md      # Structure analysis output
├── architecture.md       # System architecture
├── components.md         # Component details
├── interfaces.md         # APIs and interfaces
├── data_models.md        # Data models
├── workflows.md          # Key workflows
├── dependencies.md       # Dependencies output
└── review_notes.md       # Review findings
```
## Progress Indicators

Provide updates:

```
Setting up...
✅ Created {output_dir}

Analyzing structure...
✅ Found X packages across Y languages
✅ Identified Z components

Generating documentation...
✅ Created index.md
✅ Generated architecture.md, components.md...

Reviewing...
✅ Consistency check complete
✅ Found N gaps documented in review_notes.md

Done!
✅ Documentation at {output_dir}
✅ Primary context file: {output_dir}/index.md
```
## Resources

- Scripts: `{baseDir}/scripts/analyze_structure.py`, `{baseDir}/scripts/extract_dependencies.py`
- Templates: `{baseDir}/references/documentation-templates.md`
