Agent Skills
MadAppGang

lsp-hover-testing

@MadAppGang/lsp-hover-testing
1,862 · 37 forks
Updated 4/13/2026

Automated LSP hover validation for Dingo transpiler. Use when testing hover functionality, validating position mappings, checking for hover drift, or debugging LSP issues after sourcemap changes.

Installation

$ npx agent-skills-cli install @MadAppGang/lsp-hover-testing
Supported assistants: Claude Code, Cursor, Copilot, Codex, Antigravity

Details

Path: .claude/skills/lsp-hover-testing/SKILL.md
Branch: main
Scoped Name: @MadAppGang/lsp-hover-testing

Usage

After installing, this skill will be available to your AI coding assistant.

Verify installation:

npx agent-skills-cli list

Skill Instructions


---
name: lsp-hover-testing
description: Automated LSP hover validation for Dingo transpiler. Use when testing hover functionality, validating position mappings, checking for hover drift, or debugging LSP issues after sourcemap changes.
allowed-tools: Read, Grep, Glob, Bash, Write, Edit
---

LSP Hover Testing Skill

Automated headless testing of LSP hover functionality for the Dingo transpiler. Replaces manual VS Code hover checks with reproducible, CI-compatible tests.

When to Use This Skill

  • After making changes to sourcemap/position tracking code
  • When debugging hover issues reported by users
  • To validate that column/line mappings work correctly
  • Before committing changes to pkg/lsp/, pkg/sourcemap/, or pkg/transpiler/
  • To create regression tests for hover functionality

Quick Start

# Build the tools first
go build -o dingo ./cmd/dingo
go build -o editors/vscode/server/bin/dingo-lsp ./cmd/dingo-lsp
go build -o lsp-hovercheck ./cmd/lsp-hovercheck

# Run hover tests
./lsp-hovercheck --spec "ai-docs/hover-specs/*.yaml"

# Verbose output for debugging
./lsp-hovercheck --spec ai-docs/hover-specs/http_handler.yaml --verbose

Spec File Format

Create YAML specs in ai-docs/hover-specs/:

file: examples/01_error_propagation/http_handler.dingo

cases:
  - id: 1
    line: 55                    # 1-based line number
    token: userID               # Token to hover on
    occurrence: 1               # Which occurrence (default: 1)
    description: "LHS variable"
    expect:
      contains: "var userID string"      # Must contain substring
      # OR
      containsAny:                        # Any of these
        - "var userID"
        - "userID string"
      # OR
      allowAny: true                      # Accept any result (skip assertion)

Assertion Types

| Type | Description | Example |
|------|-------------|---------|
| contains | Must contain substring | contains: "func foo" |
| containsAny | Any of listed substrings | containsAny: ["func", "method"] |
| notContains | Must not contain | notContains: "error" |
| allowAny | Skip assertion, just record | allowAny: true |
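The notContains assertion does not appear in the sample spec above; a hypothetical case using it might look like the following (the line number, token, and description are made up for illustration):

```yaml
cases:
  - id: 2
    line: 20                    # hypothetical position
    token: result
    description: "Hover must not leak an error type"
    expect:
      notContains: "error"      # fail if the hover text mentions "error"
```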

Output Format

http_handler.yaml:
------------------------------------------------------------
1: works
2: works
3: expected "var r", got "func extractUserID..."
4: works

============================================================
Total: 3 passed, 1 failed

Creating New Test Specs

Step 1: Identify test positions

# Show line numbers
sed -n '50,70p' examples/01_error_propagation/http_handler.dingo | nl -ba
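If you are hunting for a single token rather than browsing a line range, grep -n gives you 1-based line numbers directly, ready for the spec's line: field. A self-contained demo (the temp file and token are illustrative, not part of the repo):

```shell
# Create a throwaway .dingo-like file just for the demo
printf 'func main() {\n\tuserID := fetch()\n\thandle(userID)\n}\n' > /tmp/demo.dingo

# -n prefixes each match with its 1-based line number
grep -n 'userID' /tmp/demo.dingo
```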

Step 2: Create spec file

cat > ai-docs/hover-specs/my_example.yaml << 'EOF'
file: examples/my_example/file.dingo

cases:
  - id: 1
    line: 10
    token: myFunction
    description: "Function name hover"
    expect:
      contains: "func myFunction"
EOF

Step 3: Run and iterate

./lsp-hovercheck --spec ai-docs/hover-specs/my_example.yaml --verbose

Debugging Failed Tests

When a test fails, check:

  1. Column position: Is the token found at the right column?
  2. Tab handling: Lines starting with tabs may have offset issues
  3. Transformed lines: Error prop lines map to different Go positions
  4. LSP readiness: Increase --retries if hover returns empty
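The tab pitfall is easy to reproduce by hand: LSP-style columns count a tab as a single character, while editors typically render it as several, so visually estimated columns drift on tab-indented lines. A small illustrative snippet (the line content is made up):

```shell
# A tab-indented line: "userID" starts at character column 1, not the
# column an editor appears to show after expanding the tab
line=$'\tuserID := fetch()'
prefix=${line%%userID*}           # everything before the first "userID"
echo "column: ${#prefix}"         # prints "column: 1"
```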

Verbose debug output

./lsp-hovercheck --spec ai-docs/hover-specs/http_handler.yaml --verbose

Shows:

  • Exact LSP request/response JSON
  • Computed column positions
  • Hover content returned

Known Limitations

VS Code vs Automated Differences

The automated test may show different results than VS Code due to:

  • Tab character handling differences
  • LSP initialization timing
  • VS Code extension preprocessing

Current Behavior (2025-12-14)

| Position Type | Automated Result | VS Code Result |
|---------------|------------------|----------------|
| Function names | Works | Works |
| Function arguments | Works | Shows function sig (bug) |
| LHS variables | Empty | Shows temp var (bug) |

File Locations

| File | Purpose |
|------|---------|
| cmd/lsp-hovercheck/ | Hover check tool source |
| ai-docs/hover-specs/ | Test specification files |
| editors/vscode/server/bin/dingo-lsp | LSP server binary |

CI Integration

Add to your CI pipeline:

- name: Build tools
  run: |
    go build -o dingo ./cmd/dingo
    go build -o editors/vscode/server/bin/dingo-lsp ./cmd/dingo-lsp
    go build -o lsp-hovercheck ./cmd/lsp-hovercheck

- name: Run hover tests
  run: ./lsp-hovercheck --spec "ai-docs/hover-specs/*.yaml"

