Automated LSP hover validation for Dingo transpiler. Use when testing hover functionality, validating position mappings, checking for hover drift, or debugging LSP issues after sourcemap changes.
## Installation

After installing, this skill will be available to your AI coding assistant.

Verify installation:

```bash
npx agent-skills-cli list
```

## Skill Instructions

```yaml
name: lsp-hover-testing
description: Automated LSP hover validation for Dingo transpiler. Use when testing hover functionality, validating position mappings, checking for hover drift, or debugging LSP issues after sourcemap changes.
allowed-tools: Read, Grep, Glob, Bash, Write, Edit
```
# LSP Hover Testing Skill
Automated headless testing of LSP hover functionality for the Dingo transpiler. Replaces manual VS Code hover checks with reproducible, CI-compatible tests.
## When to Use This Skill
- After making changes to sourcemap/position tracking code
- When debugging hover issues reported by users
- To validate that column/line mappings work correctly
- Before committing changes to `pkg/lsp/`, `pkg/sourcemap/`, or `pkg/transpiler/`
- To create regression tests for hover functionality
## Quick Start

```bash
# Build the tools first
go build -o dingo ./cmd/dingo
go build -o editors/vscode/server/bin/dingo-lsp ./cmd/dingo-lsp
go build -o lsp-hovercheck ./cmd/lsp-hovercheck

# Run hover tests
./lsp-hovercheck --spec "ai-docs/hover-specs/*.yaml"

# Verbose output for debugging
./lsp-hovercheck --spec ai-docs/hover-specs/http_handler.yaml --verbose
```
## Spec File Format

Create YAML specs in `ai-docs/hover-specs/`:

```yaml
file: examples/01_error_propagation/http_handler.dingo
cases:
  - id: 1
    line: 55                    # 1-based line number
    token: userID               # Token to hover on
    occurrence: 1               # Which occurrence (default: 1)
    description: "LHS variable"
    expect:
      contains: "var userID string"   # Must contain substring
      # OR
      containsAny:                    # Any of these
        - "var userID"
        - "userID string"
      # OR
      allowAny: true                  # Accept any result (skip assertion)
```
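When the same token appears more than once on a line, `occurrence` selects which instance to hover on. The case below is a hypothetical sketch reusing that field as documented above; the file, line, and expected text are made up for illustration:

```yaml
# Hypothetical: hover on the second occurrence of "count" on line 20,
# e.g. the right-hand-side use in `count = count + 1`.
file: examples/my_example/file.dingo
cases:
  - id: 2
    line: 20
    token: count
    occurrence: 2
    description: "RHS use of count"
    expect:
      contains: "count"
```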
## Assertion Types

| Type | Description | Example |
|---|---|---|
| `contains` | Must contain substring | `contains: "func foo"` |
| `containsAny` | Any of listed substrings | `containsAny: ["func", "method"]` |
| `notContains` | Must not contain | `notContains: "error"` |
| `allowAny` | Skip assertion, just record | `allowAny: true` |
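The spec example above covers `contains`, `containsAny`, and `allowAny`; the sketch below shows a hypothetical case using `notContains` (the file, line, and token are invented for illustration):

```yaml
# Hypothetical: fail the case if the hover text leaks a generated
# temporary variable name instead of the user-visible declaration.
file: examples/my_example/file.dingo
cases:
  - id: 3
    line: 12
    token: parseConfig
    description: "Hover must not show a generated temp variable"
    expect:
      notContains: "__tmp"
```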
## Output Format

```text
http_handler.yaml:
------------------------------------------------------------
1: works
2: works
3: expected "var r", got "func extractUserID..."
4: works
============================================================
Total: 3 passed, 1 failed
```
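To gate a script on the result, the snippet below assumes `lsp-hovercheck` exits non-zero when any case fails (the CI step later in this document relies on the same behavior, but verify it against your build):

```bash
# Assumes a non-zero exit code on any failed case.
if ./lsp-hovercheck --spec "ai-docs/hover-specs/*.yaml"; then
  echo "hover tests passed"
else
  echo "hover tests failed" >&2
  exit 1
fi
```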
## Creating New Test Specs

### Step 1: Identify test positions

```bash
# Show line numbers
sed -n '50,70p' examples/01_error_propagation/http_handler.dingo | nl -ba
```

### Step 2: Create spec file

```bash
cat > ai-docs/hover-specs/my_example.yaml << 'EOF'
file: examples/my_example/file.dingo
cases:
  - id: 1
    line: 10
    token: myFunction
    description: "Function name hover"
    expect:
      contains: "func myFunction"
EOF
```

### Step 3: Run and iterate

```bash
./lsp-hovercheck --spec ai-docs/hover-specs/my_example.yaml --verbose
```
## Debugging Failed Tests

When a test fails, check:

- Column position: Is the token found at the right column?
- Tab handling: Lines starting with tabs may have offset issues
- Transformed lines: Error prop lines map to different Go positions
- LSP readiness: Increase `--retries` if hover returns empty (see the sketch after this list)
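For empty hovers caused by LSP startup timing, a retry-heavy run might look like this. The `--retries` flag is named above, but its exact argument form (a count) is an assumption here, so check `./lsp-hovercheck --help` for the real signature:

```bash
# Assumes --retries takes a retry count; verify with --help.
./lsp-hovercheck \
  --spec ai-docs/hover-specs/http_handler.yaml \
  --retries 10 \
  --verbose
```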
### Verbose debug output

```bash
./lsp-hovercheck --spec ai-docs/hover-specs/http_handler.yaml --verbose
```
Shows:
- Exact LSP request/response JSON
- Computed column positions
- Hover content returned
## Known Limitations

### VS Code vs Automated Differences
The automated test may show different results than VS Code due to:
- Tab character handling differences
- LSP initialization timing
- VS Code extension preprocessing
### Current Behavior (2025-12-14)
| Position Type | Automated Result | VS Code Result |
|---|---|---|
| Function names | Works | Works |
| Function arguments | Works | Shows function sig (bug) |
| LHS variables | Empty | Shows temp var (bug) |
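Until the argument and LHS-variable hover bugs are fixed, those positions can stay in a spec with `allowAny` so each run still records their hover output without failing. The case below is a sketch reusing the `userID` position from the earlier spec example:

```yaml
# Sketch: record hover output for a known-buggy LHS position
# without failing the run (allowAny skips the assertion).
file: examples/01_error_propagation/http_handler.dingo
cases:
  - id: 99
    line: 55
    token: userID
    description: "LHS variable (empty in automated runs as of 2025-12-14)"
    expect:
      allowAny: true
```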
## File Locations

| File | Purpose |
|---|---|
| `cmd/lsp-hovercheck/` | Hover check tool source |
| `ai-docs/hover-specs/` | Test specification files |
| `editors/vscode/server/bin/dingo-lsp` | LSP server binary |
## CI Integration

Add to your CI pipeline:

```yaml
- name: Build tools
  run: |
    go build -o dingo ./cmd/dingo
    go build -o editors/vscode/server/bin/dingo-lsp ./cmd/dingo-lsp
    go build -o lsp-hovercheck ./cmd/lsp-hovercheck

- name: Run hover tests
  run: ./lsp-hovercheck --spec "ai-docs/hover-specs/*.yaml"
```