# project-documentation

Comprehensive codebase documentation generator following a layered methodology. Use this skill when scanning and documenting a codebase for the first time, when creating onboarding documentation for new developers, or when generating architecture overviews, walkthroughs, and API references. It supports README generation, architecture diagrams, entry point documentation, pattern guides, and edge case documentation.
## Installation

After installing, this skill will be available to your AI coding assistant. Verify the installation:

```shell
npx agent-skills-cli list
```

## Skill Instructions
```yaml
name: project-documentation
description: Comprehensive codebase documentation generator following a layered methodology. This skill should be used when scanning and documenting a codebase for the first time, when creating onboarding documentation for new developers, when generating architecture overviews, walkthroughs, and API references. Supports README generation, architecture diagrams, entry point documentation, pattern guides, and edge case documentation.
```
# Project Documentation
Comprehensive skill for scanning codebases and generating layered documentation to help developers get familiar quickly.
## Overview
This skill implements a six-phase documentation methodology:

1. **Understand** - Explore the codebase before documenting
2. **Structure** - Create layered documentation (high/mid/low level)
3. **Essential Docs** - Generate core documents
4. **Functions** - Document code with intent
5. **Onboarding** - Create self-paced learning materials
6. **Maintain** - Keep documentation versioned and searchable
## Documentation Workflow

### Phase 1: Understand Before Documenting

Before writing documentation, thoroughly explore the codebase:
1. **Explore Version Control History**

   ```shell
   git log --oneline -50            # Recent changes
   git log --all --oneline --graph  # Visual branch history
   git shortlog -sn                 # Top contributors
   ```
2. **Analyze Code Structure**
   - Run `tree -L 3 -I node_modules` for structure
   - Identify entry points (`main.*`, `index.*`, `app.*`)
   - Detect framework patterns (React, Express, FastAPI, etc.)
3. **Read Development Tests**
   - Tests reveal how code is intended to work
   - Failed test history shows fixed issues
   - Test structure mirrors code architecture
4. **Trace Execution Flows**
   - Follow imports from entry points
   - Map API routes to handlers
   - Document data flow patterns
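The structure-analysis steps above can be sketched in code. This is a minimal, hedged example: the entry-point stems and manifest-to-framework mappings are common conventions (assumptions), not an exhaustive registry.

```python
from pathlib import Path

# Manifest files mapped to framework names they may mention.
# Illustrative conventions only, not a complete registry.
FRAMEWORK_HINTS = {
    "package.json": ("react", "express", "next"),
    "requirements.txt": ("fastapi", "django", "flask"),
}

ENTRY_POINT_STEMS = {"main", "index", "app"}

def find_entry_points(root: str) -> list[str]:
    """List files whose name matches a conventional entry-point stem."""
    return sorted(
        str(p.relative_to(root))
        for p in Path(root).rglob("*")
        if p.is_file()
        and p.stem in ENTRY_POINT_STEMS
        and "node_modules" not in p.parts
    )

def detect_frameworks(root: str) -> list[str]:
    """Scan known manifest files for framework names they mention."""
    found: set[str] = set()
    for manifest, hints in FRAMEWORK_HINTS.items():
        path = Path(root) / manifest
        if path.is_file():
            text = path.read_text(encoding="utf-8", errors="ignore").lower()
            found.update(h for h in hints if h in text)
    return sorted(found)
```

A real scan would combine this with the git history and test inspection above; this sketch covers only the structural part.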
### Phase 2: Create Layered Documentation
Generate three documentation layers:
| Layer | Purpose | Audience | Location |
|---|---|---|---|
| High-Level | Architecture, design principles | New devs, stakeholders | `docs/architecture/` |
| Walkthrough | Flows, patterns, interactions | Contributing devs | `docs/walkthroughs/` |
| Low-Level | Functions, parameters, returns | Active maintainers | Inline + `docs/api/` |
### Phase 3: Essential Documents

Generate these using the templates in `templates/`:

- **README.md** - see `templates/readme-template.md`
- **Architecture Overview** - see `templates/architecture-template.md`
- **Walkthrough Guide** - see `templates/walkthrough-template.md`
- **API/Function Reference** - see `templates/api-reference-template.md`
- **Setup Guide** - see `templates/setup-guide-template.md`
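The template files themselves are not shown here, but instantiating one could look like the following sketch. The placeholder names (`$project_name`, etc.) are assumptions for illustration, not the real template format.

```python
from string import Template

# Stand-in for templates/readme-template.md; the actual template's
# placeholders may differ.
README_TEMPLATE = Template(
    "# $project_name\n"
    "\n"
    "$description\n"
    "\n"
    "## Setup\n"
    "\n"
    "    $setup_command\n"
)

def render_readme(project_name: str, description: str, setup_command: str) -> str:
    """Fill the README template with project-specific values."""
    return README_TEMPLATE.substitute(
        project_name=project_name,
        description=description,
        setup_command=setup_command,
    )
```

`string.Template` raises `KeyError` on missing values, which makes incomplete template data fail loudly rather than producing a README with holes.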
### Phase 4: Document Functions Effectively

When documenting individual functions:

- **Describe WHY, not just WHAT** - business assumptions, algorithm steps
- **Use meaningful names** - self-documenting code
- **Document intent** - design choices, trade-offs
- **Include examples** - expected usage patterns

Reference: `references/function-documentation.md`
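As an illustration of these points (the function and its business rule are hypothetical), a docstring that records intent rather than restating the signature:

```python
def apply_bulk_discount(subtotal: float, quantity: int) -> float:
    """Apply the tiered bulk discount to an order subtotal.

    WHY: Sales offers 10% off at 50+ units to stay competitive on
    large orders; the threshold is a business rule, not a technical
    constraint, so change it only with Sales sign-off.

    Trade-off: the discount applies to the whole subtotal rather than
    per line item, to keep invoices consistent with the legacy system.

    Example:
        >>> apply_bulk_discount(1000.0, 60)
        900.0
    """
    if quantity >= 50:
        return subtotal * 0.9
    return subtotal
```

The body is one line; the docstring carries the knowledge a newcomer cannot recover from the code alone.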
### Phase 5: Onboarding-Focused Documentation

Create self-paced onboarding materials:

- **Clear language** - avoid jargon without explanation
- **Code snippets** - illustrate concepts with examples
- **Consistent naming** - classes, functions, variables, files
- **Decision rationale** - explain coding decisions
### Phase 6: Maintainability
- Version documentation with source code
- Make documentation searchable
- Link from team communication channels
- Incrementally improve as codebase evolves
## Documentation Priority Order

Generate documentation in this order:

1. README with setup instructions (get developers running)
2. Architecture diagram showing major components
3. Entry points documentation (where code starts)
4. Core patterns used throughout the codebase
5. Key functions/modules with purpose and examples
6. Edge cases and gotchas that trip up newcomers
## Output Structure

```
docs/
├── README.md                  # Project overview
├── architecture/
│   ├── overview.md            # System architecture
│   ├── components.md          # Component descriptions
│   └── diagrams/              # Architecture diagrams
├── walkthroughs/
│   ├── entry-points.md        # Where code starts
│   ├── data-flow.md           # How data moves
│   └── patterns.md            # Recurring patterns
├── api/
│   ├── endpoints.md           # API endpoints
│   └── functions.md           # Key functions
├── setup/
│   ├── installation.md        # Installation guide
│   ├── configuration.md       # Configuration options
│   └── troubleshooting.md     # Common issues
└── onboarding/
    ├── quickstart.md          # 5-minute start
    ├── tutorials/             # Hands-on tutorials
    └── gotchas.md             # Edge cases & tips
```
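A minimal sketch for scaffolding this layout, with directory and file names taken from the tree above. The stub content written into new pages is an assumption; existing files are left untouched.

```python
from pathlib import Path

# Directories and pages from the Output Structure tree above.
DOC_DIRS = [
    "architecture/diagrams",
    "walkthroughs",
    "api",
    "setup",
    "onboarding/tutorials",
]

DOC_FILES = [
    "README.md",
    "architecture/overview.md",
    "architecture/components.md",
    "walkthroughs/entry-points.md",
    "walkthroughs/data-flow.md",
    "walkthroughs/patterns.md",
    "api/endpoints.md",
    "api/functions.md",
    "setup/installation.md",
    "setup/configuration.md",
    "setup/troubleshooting.md",
    "onboarding/quickstart.md",
    "onboarding/gotchas.md",
]

def scaffold_docs(root: str) -> None:
    """Create the docs/ skeleton, never overwriting existing files."""
    docs = Path(root) / "docs"
    for d in DOC_DIRS:
        (docs / d).mkdir(parents=True, exist_ok=True)
    for f in DOC_FILES:
        target = docs / f
        if not target.exists():
            # Stub each page with a title derived from its file name.
            target.write_text(f"# {target.stem}\n", encoding="utf-8")
```

Running this before generation gives every phase a fixed place to write into, so partial runs still produce a navigable tree.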
## Scanning Checklist

- [ ] Version control history analyzed
- [ ] Project type and framework identified
- [ ] Entry points documented
- [ ] Directory structure mapped
- [ ] Dependencies catalogued
- [ ] Key patterns identified
- [ ] Configuration files documented
- [ ] README created/updated
- [ ] Architecture overview generated
- [ ] Walkthrough guides created
- [ ] API reference generated
- [ ] Setup guide complete
- [ ] Onboarding materials ready
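The "project type and framework identified" item can be partially automated. A hedged sketch, using marker files that commonly indicate a project type (the mapping is illustrative, not exhaustive):

```python
from pathlib import Path

# Marker files commonly associated with project types; illustrative only.
PROJECT_MARKERS = {
    "package.json": "Node.js",
    "pyproject.toml": "Python",
    "requirements.txt": "Python",
    "Cargo.toml": "Rust",
    "go.mod": "Go",
    "pom.xml": "Java (Maven)",
}

def identify_project_types(root: str) -> list[str]:
    """Report which project types the repository's marker files suggest."""
    found = {
        label
        for marker, label in PROJECT_MARKERS.items()
        if (Path(root) / marker).is_file()
    }
    return sorted(found)
```

A polyglot repository will return several labels, which is itself useful input for the architecture overview.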
