---
name: implement
description: |
  Execute an implementation plan task-by-task with TDD and continuous quality checks.
  Use when asked to "implement the plan", "execute the tasks", "start building from
  the plan", or after /arc:detail has created an implementation plan ready for execution.
license: MIT
metadata:
  author: howells
---
<required_reading>
Read these reference files NOW:
- ${CLAUDE_PLUGIN_ROOT}/references/testing-patterns.md
- ${CLAUDE_PLUGIN_ROOT}/references/frontend-design.md (if UI work involved)
- ${CLAUDE_PLUGIN_ROOT}/disciplines/dispatching-parallel-agents.md
- ${CLAUDE_PLUGIN_ROOT}/disciplines/finishing-a-development-branch.md
- ${CLAUDE_PLUGIN_ROOT}/disciplines/receiving-code-review.md
</required_reading>
<rules_context>
Check for project coding rules:

Use Glob tool: `.ruler/*.md`
If .ruler/ exists, detect stack and read relevant rules:
| Check | Read from .ruler/ |
|---|---|
| Always | code-style.md |
| next.config.* exists | nextjs.md |
| react in package.json | react.md |
| tailwindcss in package.json | tailwind.md |
| .ts or .tsx files | typescript.md |
| vitest or jest in package.json | testing.md |
These rules define MUST/SHOULD/NEVER constraints. Follow them during implementation.
If .ruler/ doesn't exist:
> No coding rules found. Run /arc:rules to set up standards, or continue without rules.
Rules are optional — proceed without them if the user prefers.
</rules_context>
<process>
You are here in the arc:

```
/arc:ideate    → Design doc (on main) ✓
      ↓
/arc:detail    → Implementation plan ✓
      ↓
/arc:review    → Review (optional) ✓
      ↓
/arc:implement → Execute task-by-task ← YOU ARE HERE
```
## Phase 1: Setup
If not already in worktree:
```bash
# Check current location
git branch --show-current

# If on main/dev, create worktree
git worktree add .worktrees/<feature-name> -b feature/<feature-name>
cd .worktrees/<feature-name>
```
Install dependencies:
```bash
pnpm install  # or yarn/npm based on lockfile
```
Verify clean baseline:
```bash
pnpm test  # or relevant test command
```
If tests fail before you start → stop and ask user.
## Phase 2: Load Plan and Create Todos
Read implementation plan:
`docs/plans/YYYY-MM-DD-<topic>-implementation.md`
Create TodoWrite tasks:
One todo per task in the plan. Mark first as in_progress.
## Phase 3: Execute in Batches
Default batch size: 3 tasks
For each task:
Step 1: Mark in_progress
Update TodoWrite.
Step 2: Follow TDD cycle exactly (a minimal example follows the steps)
1. Write the test (copy from plan)
2. Run test → verify FAIL
3. Write implementation (copy from plan, adapt as needed)
4. Run test → verify PASS
5. Fix TypeScript + lint (see below)
6. Commit with message from plan
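For concreteness, a minimal red→green pair might look like this. It is a sketch only: `slugify` is a hypothetical helper, not something from your plan, and vitest is assumed as the test runner.

```typescript
// slugify.test.ts — written first; run it and verify it FAILS (red).
import { describe, expect, it } from "vitest";
import { slugify } from "./slugify"; // hypothetical helper, does not exist yet

describe("slugify", () => {
  it("lowercases and hyphenates words", () => {
    expect(slugify("Hello World")).toBe("hello-world");
  });
});
```

```typescript
// slugify.ts — the minimal implementation; re-run the test and verify it PASSES (green).
export function slugify(input: string): string {
  return input.trim().toLowerCase().replace(/\s+/g, "-");
}
```

In real tasks the test and implementation come from the plan; the point is the order — failing test first, then just enough code to pass.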
<continuous_quality>
After every implementation, before commit:
TypeScript check:
```bash
pnpm tsc --noEmit
# or: pnpm typecheck (if script exists)
```
Biome lint + format:
```bash
pnpm biome check --write .
# or: pnpm lint:fix (if script exists)
```
If issues found:
- Fix immediately
- Don't accumulate debt
- If stuck on a type issue → spawn a quick agent:
```
Task general-purpose model: haiku: "Fix TypeScript error in [file]: [error message]"
```
Why continuous:
- Catching TS errors early is easier than fixing 20 at once
- Biome auto-fix keeps code consistent
- Each commit is clean and deployable
</continuous_quality>
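To make the per-task gate concrete, the three checks could be chained in one script. This is an illustrative sketch, not part of the skill; the `scripts/check.ts` path is an assumption, and the commands should be swapped for the project's own scripts where they exist.

```typescript
// scripts/check.ts — hypothetical gate runner; the first nonzero exit aborts the chain.
import { execSync } from "node:child_process";

const steps = [
  "pnpm tsc --noEmit",          // types: catch errors now, not 20 tasks later
  "pnpm biome check --write .", // lint + format, auto-fixing where safe
  "pnpm test",                  // suite must be green before the commit
];

for (const cmd of steps) {
  console.log(`> ${cmd}`);
  execSync(cmd, { stdio: "inherit" }); // throws on failure, stopping the loop
}
```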
If test doesn't fail when expected:
- Test might be wrong
- Implementation might already exist
- Stop and ask user
If test doesn't pass after implementation: Spawn debugger agent immediately:
```
Task general-purpose model: sonnet: "Test failing unexpectedly.
Test file: [path]
Test name: [name]
Error: [error message]
Implementation: [path]
Debug and fix."
```
If debugger can't resolve after one attempt → stop and ask user
Step 3: Mark completed
Update TodoWrite.
Step 4: Checkpoint after batch
After every 3 tasks:
```
Completed:
- Task 1: [description] ✓
- Task 2: [description] ✓
- Task 3: [description] ✓

Tests passing: [X/X]

Ready for feedback before continuing?
```
Wait for user confirmation or adjustments.
## Phase 4: Quality Checkpoints
After completing data/types tasks:
- Spawn data-integrity-guardian for quick review
- Present findings as questions
Before starting UI tasks — INVOKE ARC:DESIGN FOR BUILD:
```
Skill arc:design: "Build UI components for [feature].

Aesthetic Direction (from design doc):
- Tone: [tone]
- Memorable element: [what stands out]
- Typography: [fonts]
- Color strategy: [approach]
- Motion: [philosophy]

Figma: [URL if available]
Files to create: [list from implementation plan]

Apply the aesthetic direction to every decision. Make it memorable, not generic."
```
Why invoke the skill, not just follow principles:
- The skill has creative energy and specific guidance
- It makes bold decisions, not safe ones
- It catches generic patterns as they're written, not after
Fetch Figma context:
```
mcp__figma__get_design_context: fileKey, nodeId
mcp__figma__get_screenshot: fileKey, nodeId
```
After each UI task, quick self-check:
- Would a designer call this "generic AI slop"?
- Is the memorable element actually memorable?
- Did I avoid Inter/Roboto/Arial and purple gradients?
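To ground that last check, a token module with deliberate choices might look like the sketch below. Every pick here is a hypothetical placeholder, not a prescription; the real values come from the design doc's aesthetic direction.

```typescript
// tokens.ts — hypothetical design tokens; distinctive picks instead of defaults.
export const tokens = {
  fontDisplay: '"Fraunces", Georgia, serif',          // display face: not Inter/Roboto/Arial
  fontBody: '"IBM Plex Sans", system-ui, sans-serif', // body face: chosen, not defaulted
  accent: "#0f766e",                                  // flat teal accent, no purple gradient
} as const;
```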
After completing ALL UI tasks — INVOKE ARC:DESIGN FOR REVIEW:
```
Task general-purpose model: opus: "Review the completed UI implementation.

Aesthetic Direction (from design doc):
- Tone: [tone]
- Memorable element: [what stands out]
- Typography: [fonts]
- Color strategy: [approach]

Files: [list of UI component files]
Figma: [URL if available]

Check for:
- Generic AI aesthetics (Inter, purple gradients, cookie-cutter layouts)
- Deviation from aesthetic direction
- Missing memorable moments
- Inconsistent application of design system
- Accessibility concerns
- Missing states (loading, error, empty)"
```
- Run Playwright visual test if available
- Take screenshots of key states
- Compare against Figma screenshot
- Address any review findings before proceeding
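The "missing states" item in the checklist above is the one most often skipped, so here is a minimal sketch of what the reviewer should expect to find. The component name, props, and copy are hypothetical, not from the plan.

```tsx
// FeatureList.tsx — hypothetical sketch covering loading, error, and empty states.
type FeatureListProps = {
  items: string[] | undefined;
  isLoading: boolean;
  error: Error | null;
};

export function FeatureList({ items, isLoading, error }: FeatureListProps) {
  if (isLoading) return <p aria-busy="true">Loading…</p>;
  if (error) return <p role="alert">Something went wrong: {error.message}</p>;
  if (!items || items.length === 0) return <p>Nothing here yet.</p>;
  return (
    <ul>
      {items.map((item) => (
        <li key={item}>{item}</li>
      ))}
    </ul>
  );
}
```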
### Optional: Web Interface Guidelines Review
If web-design-guidelines skill is available:
```
Skill web-design-guidelines: "Review [components] for Web Interface Guidelines compliance"
```
When implementing unfamiliar library APIs:
```
mcp__context7__resolve-library-id: "[library name]"
mcp__context7__get-library-docs: "[library ID]" topic: "[specific feature]"
```
Use current documentation to ensure correct API usage.
After completing all tasks:
- Run full test suite
- Run linting
## Phase 5: Final Quality Sweep
Always run (in parallel agents for speed):
Task general-purpose model: haiku: "Run TypeScript check (tsc --noEmit) and fix any errors"
Task general-purpose model: haiku: "Run Biome check (biome check --write .) and fix any issues"
Task general-purpose model: haiku: "Run test suite and report results"
Wait for all agents to complete. If issues found, fix before proceeding.
### Optional: React/Next.js Performance Review
For React/Next.js projects, if vercel-react-best-practices skill is available:
```
Skill vercel-react-best-practices: "Review implementation for React/Next.js performance patterns"
```
## Phase 5b: E2E Tests (If Created)
If e2e tests were created as part of this implementation:
Spawn dedicated agent to run and fix e2e tests:
```
Task Bash run_in_background: true: "Run e2e tests for the feature we just implemented. Fix any failures and iterate until all pass."
```
Why a separate agent?
- E2E tests produce verbose output (traces, screenshots, DOM snapshots)
- Fixing may require multiple iterations
- Keeps main conversation context clean
Wait for agent to complete. Review its summary of fixes applied.
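If the suite is Playwright (an assumption; adapt to the project's e2e runner), the specs the agent iterates on are shaped roughly like this sketch. The route, control label, and confirmation copy are all hypothetical.

```typescript
// e2e/feature.spec.ts — hypothetical route, labels, and copy.
import { expect, test } from "@playwright/test";

test("user can complete the new flow", async ({ page }) => {
  await page.goto("/feature");                              // assumed route
  await page.getByRole("button", { name: "Save" }).click(); // assumed control
  await expect(page.getByText("Saved")).toBeVisible();      // assumed confirmation
});
```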
## Phase 6: Expert Review (Optional)
For significant features, offer parallel review:
"Feature complete. Run expert review before PR?"
If yes, spawn in parallel (all use sonnet for balanced cost/quality):
- code-simplicity-reviewer (model: sonnet)
- architecture-strategist or domain-specific reviewer (model: sonnet)
- security-sentinel if auth/data involved (model: sonnet)
Present findings as Socratic questions (see ${CLAUDE_PLUGIN_ROOT}/references/review-patterns.md).
## Phase 7: Ship
Ensure all tests pass:
```bash
pnpm test
pnpm lint
```
Create PR:
```bash
git push -u origin feature/<feature-name>

gh pr create --title "feat: <description>" --body "$(cat <<'EOF'
## Summary
- What was built
- Key decisions

## Testing
- [x] Unit tests added
- [x] E2E tests added (if applicable)
- [x] All tests passing

## Screenshots
[Include if UI changes]

## Design Doc
[Link to design doc]

## Implementation Plan
[Link to implementation plan]
EOF
)"
```
Report to user:
- PR URL
- Summary of what was built
- Any follow-up items
Cleanup worktree (optional):
```bash
cd ..
git worktree remove .worktrees/<feature-name>
```
<when_to_stop>
STOP and ask user when:
- Test fails unexpectedly
- Implementation doesn't match plan
- Stuck after 2 debug attempts
- Plan has ambiguity
- New requirement discovered
- Security concern identified
Don't guess. Ask.
</when_to_stop>
<progress_context>
Use Read tool: docs/progress.md (first 50 lines)
Look for related ideate/detail sessions and any prior implementation attempts.
</progress_context>
<progress_append>
After completing implementation (or pausing), append to progress journal:
```markdown
## YYYY-MM-DD HH:MM — /arc:implement

**Task:** [Feature name]
**Outcome:** [Complete / In Progress (X/Y tasks) / Blocked]
**Files:** [Key files created/modified]
**Decisions:**
- [Key implementation decision]
**Next:** [PR created / Continue tomorrow / Blocked on X]

---
```
</progress_append>
<success_criteria>
Execution is complete when:
- All tasks marked completed in TodoWrite
- All tests passing
- Linting passes
- PR created
- User informed of completion
- Progress journal updated
</success_criteria>