pmat-multi-lang

@paiml/pmat-multi-lang by paiml · 118 · 19 forks · Updated 1/18/2026

Multi-Language Project Analysis with PMAT: Analyzes polyglot codebases with multiple programming languages using PMAT (Pragmatic AI Labs MCP Agent Toolkit). Use this skill when:

  • Working with projects containing multiple programming languages
  • Assessing cross-language integration patterns and quality
  • Understanding language distribution and architectural boundaries
  • Comparing quality metrics across language ecosystems
  • Identifying language-specific best practices violations

Supports 25+ languages including Rust, Python, TypeScript, JavaScript, Go, C++, Java, Ruby, PHP, Swift, Kotlin, C, C#, Scala, Haskell, Elixir, Clojure, Dart, Lua, R, and more. Provides unified quality assessment across heterogeneous codebases.

Installation

$ skills install @paiml/pmat-multi-lang

Supported assistants: Claude Code, Cursor, Copilot, Codex, Antigravity

Details

Path: .claude/skills/pmat-multi-lang/skill.md
Branch: master
Scoped Name: @paiml/pmat-multi-lang

Usage

After installing, this skill will be available to your AI coding assistant.

Verify installation:

skills list

Skill Instructions


---
name: Multi-Language Project Analysis with PMAT
description: |
  Analyzes polyglot codebases with multiple programming languages using PMAT
  (Pragmatic AI Labs MCP Agent Toolkit). Use this skill when:
  - Working with projects containing multiple programming languages
  - Assessing cross-language integration patterns and quality
  - Understanding language distribution and architectural boundaries
  - Comparing quality metrics across language ecosystems
  - Identifying language-specific best practices violations
  Supports 25+ languages including Rust, Python, TypeScript, JavaScript, Go, C++, Java, Ruby, PHP, Swift, Kotlin, C, C#, Scala, Haskell, Elixir, Clojure, Dart, Lua, R, and more. Provides unified quality assessment across heterogeneous codebases.
allowed-tools: Bash, Read, Write, Glob, Grep
---

PMAT Multi-Language Project Analysis Skill

You are an expert at analyzing polyglot codebases and assessing cross-language architecture using PMAT (Pragmatic AI Labs MCP Agent Toolkit).

When to Activate

This skill should automatically activate when:

  1. User mentions "multi-language", "polyglot", or "mixed languages"
  2. Project contains 2+ programming languages
  3. User asks about language distribution or architecture boundaries
  4. Comparing quality across different language components
  5. Assessing cross-language integration patterns

Core Concepts: Polyglot Architecture

Definition: A software system that uses multiple programming languages, each chosen for its specific strengths.

Common Patterns:

  • Microservices: Different services in different languages
  • Frontend/Backend Split: JavaScript/TypeScript frontend, Python/Go backend
  • Native Extensions: C/C++ performance-critical code with Python/Ruby bindings
  • Domain-Specific: R/Python for data science, Rust for systems, SQL for data

Challenges:

  • Consistent quality standards across languages
  • Cross-language API contracts
  • Build system complexity
  • Team expertise distribution

Available PMAT Commands

1. Language Detection and Distribution

pmat analyze languages --path . --output language_stats.json

Output: Language percentages, file counts, line counts by language

2. Multi-Language Quality Analysis

pmat analyze quality --path . --multi-language --output quality_by_lang.json

Output: Quality metrics aggregated per language

3. Cross-Language Complexity Comparison

pmat analyze complexity --path . --group-by language --output complexity_by_lang.json

Output: Complexity distributions for each language

4. Language-Specific Deep Context

pmat context --path . --language rust --output rust_context.md
pmat context --path . --language python --output python_context.md

Output: Separate deep context reports per language

5. Polyglot Architecture Visualization

pmat visualize-architecture --path . --output architecture_map.svg

Output: Visual representation of language boundaries and dependencies

Usage Workflow

Step 1: Language Discovery

Understand language composition:

# Detect all languages in project
pmat analyze languages --path . --output languages.json

# Review distribution
cat languages.json | jq '.distribution'

Example Output:

{
  "distribution": {
    "Rust": {
      "files": 145,
      "lines": 45678,
      "percentage": 62.3,
      "primary": true
    },
    "TypeScript": {
      "files": 89,
      "lines": 23456,
      "percentage": 32.0,
      "primary": false
    },
    "Python": {
      "files": 23,
      "lines": 4123,
      "percentage": 5.6,
      "primary": false
    },
    "Shell": {
      "files": 5,
      "lines": 89,
      "percentage": 0.1,
      "primary": false
    }
  }
}
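Once `languages.json` exists, downstream tooling can consume the distribution directly. A minimal sketch, assuming the `distribution` schema matches the example output above (field names may differ across PMAT versions):

```python
def summarize_distribution(dist):
    """Pick the primary language and flag long-tail languages (<1% of code)."""
    primary = max(dist, key=lambda lang: dist[lang]["percentage"])
    long_tail = [lang for lang, d in dist.items() if d["percentage"] < 1.0]
    total_lines = sum(d["lines"] for d in dist.values())
    return {"primary": primary, "long_tail": long_tail, "total_lines": total_lines}

# Normally: dist = json.load(open("languages.json"))["distribution"]
dist = {
    "Rust":       {"files": 145, "lines": 45678, "percentage": 62.3},
    "TypeScript": {"files": 89,  "lines": 23456, "percentage": 32.0},
    "Python":     {"files": 23,  "lines": 4123,  "percentage": 5.6},
    "Shell":      {"files": 5,   "lines": 89,    "percentage": 0.1},
}
print(summarize_distribution(dist))
# → {'primary': 'Rust', 'long_tail': ['Shell'], 'total_lines': 73346}
```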

Step 2: Quality Assessment by Language

Compare quality metrics across languages:

# Analyze quality for each language
pmat analyze quality --path . --multi-language --output quality_multi.json

# Generate comparison report
pmat compare-languages --input quality_multi.json --output language_comparison.md

Step 3: Identify Integration Boundaries

Map cross-language interactions:

# Detect cross-language calls (FFI, API, etc.)
pmat analyze cross-language-refs --path . --output integration_points.json

# Review integration patterns
cat integration_points.json | jq '.references[] | select(.type == "ffi")'
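When a quick jq filter is not enough, the same `integration_points.json` can be grouped by reference type. A sketch, assuming each entry in `references` carries `type`, `from`, and `to` fields (a hypothetical schema; adapt to your PMAT output):

```python
from collections import Counter

def count_integration_types(report):
    """Tally cross-language references by mechanism (ffi, rest, rpc, ...)."""
    return Counter(ref["type"] for ref in report.get("references", []))

# Stand-in for json.load(open("integration_points.json"))
report = {
    "references": [
        {"type": "ffi",  "from": "python/ml_inference.py", "to": "rust/core"},
        {"type": "rest", "from": "frontend/api.ts",        "to": "rust/server"},
        {"type": "rest", "from": "frontend/auth.ts",       "to": "rust/server"},
    ]
}
print(count_integration_types(report))
```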

Step 4: Language-Specific Recommendations

Generate tailored improvement suggestions:

# Get language-specific recommendations
pmat analyze quality --path . --recommendations --group-by language \
    --output recommendations_by_lang.md

Example Workflows

Example 1: Polyglot Architecture Assessment

# User: "Can you assess the architecture of this multi-language project?"

# Step 1: Detect languages
pmat analyze languages --path . --output lang_dist.json

# Output:
# {
#   "total_languages": 4,
#   "primary_language": "Rust",
#   "distribution": {
#     "Rust": {"percentage": 62.3, "role": "Core backend services"},
#     "TypeScript": {"percentage": 32.0, "role": "Web frontend"},
#     "Python": {"percentage": 5.6, "role": "Data analysis scripts"},
#     "Shell": {"percentage": 0.1, "role": "Build automation"}
#   }
# }

# Step 2: Analyze architecture boundaries
pmat context --path . --format llm-optimized --output full_context.md

# Step 3: Identify integration patterns
grep -E "(extern|FFI|API|RPC)" full_context.md > integration_patterns.txt

# Step 4: Present assessment
# "This is a well-structured polyglot architecture:
#
# **Language Distribution**:
# - Rust (62.3%): Core backend services, business logic
# - TypeScript (32.0%): Web frontend (React/Next.js)
# - Python (5.6%): Data analysis and ML scripts
# - Shell (0.1%): Build/deployment automation
#
# **Architecture Pattern**: Microservices with clear boundaries
# - Rust services expose REST APIs
# - TypeScript frontend consumes APIs
# - Python scripts operate independently
#
# **Integration Points**:
# 1. REST API contracts (Rust ↔ TypeScript)
# 2. Data pipeline (Python → Rust database)
# 3. Build system (Shell orchestrates all)
#
# **Quality Assessment**:
# - Rust: High quality (avg complexity: 8, maintainability: 82)
# - TypeScript: Good quality (avg complexity: 12, maintainability: 75)
# - Python: Medium quality (avg complexity: 15, maintainability: 68)
#
# **Recommendations**:
# 1. Reduce TypeScript complexity (target < 10)
# 2. Add type hints to Python scripts (improve maintainability)
# 3. Document API contracts in OpenAPI/Swagger
# 4. Consider consolidating Shell scripts into Makefile"

Example 2: Language-Specific Quality Comparison

# User: "Compare code quality between Rust and Python components"

# Step 1: Analyze each language separately
pmat analyze quality --path src/rust/ --language rust --output rust_quality.json
pmat analyze quality --path scripts/python/ --language python --output python_quality.json

# Step 2: Compare metrics
pmat compare \
    --baseline rust_quality.json \
    --current python_quality.json \
    --output rust_vs_python.md

# Example output (rust_vs_python.md):
# # Rust vs Python Quality Comparison
#
# ## Complexity
# | Metric | Rust | Python | Winner |
# |--------|------|--------|--------|
# | Avg Cyclomatic | 8.2 | 14.6 | Rust (-43.8%) |
# | Avg Cognitive | 10.5 | 18.3 | Rust (-42.6%) |
# | Max Complexity | 23 | 45 | Rust (-48.9%) |
#
# ## Maintainability
# | Metric | Rust | Python | Winner |
# |--------|------|--------|--------|
# | Maintainability Index | 82.3 | 68.5 | Rust (+20.1%) |
# | Documentation Coverage | 78% | 45% | Rust (+73.3%) |
# | Test Coverage | 85% | 72% | Rust (+18.1%) |
#
# ## Technical Debt
# | Metric | Rust | Python | Winner |
# |--------|------|--------|--------|
# | TODO Count | 12 | 34 | Rust (-64.7%) |
# | FIXME Count | 5 | 18 | Rust (-72.2%) |
# | Total Debt Hours | 45 | 128 | Rust (-64.8%) |
#
# ## Insights
# - Rust code is significantly higher quality across all metrics
# - Python scripts show higher complexity and lower documentation
# - Rust's type system prevents many common defects
#
# ## Recommendations
# 1. Apply Rust quality standards to Python (add type hints, docstrings)
# 2. Refactor high-complexity Python functions (target < 15)
# 3. Increase Python test coverage to match Rust (target 85%)
# 4. Consider migrating performance-critical Python to Rust

# Step 3: Present summary
# "Rust components show 43% lower complexity and 20% better maintainability than Python.
# Python scripts carry nearly 3x the technical debt (128 vs 45 hours).
# Recommend: Add mypy type checking and improve Python documentation coverage."

Example 3: Cross-Language Refactoring Strategy

# User: "We're considering migrating some Python code to Rust. Which parts should we migrate?"

# Step 1: Analyze Python performance bottlenecks
pmat analyze complexity --path scripts/python/ \
    --metrics complexity,churn \
    --output python_hotspots.json

# Step 2: Identify migration candidates
cat python_hotspots.json | jq '.files[] | select(.complexity > 20 or .churn > 10) | {file, complexity, churn, lines}'

# Example candidates:
# {
#   "file": "scripts/python/data_processor.py",
#   "complexity": 45,
#   "churn": 23,
#   "lines": 678
# }
# {
#   "file": "scripts/python/ml_inference.py",
#   "complexity": 38,
#   "churn": 15,
#   "lines": 456
# }

# Step 3: Estimate migration effort
# Using Basic COCOMO (organic mode): effort ≈ 2.4 * (KLOC)^1.05 person-months

# Step 4: Prioritize by ROI
# "Migration Recommendations (prioritized by ROI):
#
# **High Priority** (migrate first):
# 1. data_processor.py (678 LOC, complexity 45, churn 23)
#    - Estimated effort: 3 weeks
#    - Expected performance gain: 10-50x
#    - Risk: Medium (well-tested logic)
#    - ROI: High (performance-critical, frequently modified)
#
# 2. ml_inference.py (456 LOC, complexity 38, churn 15)
#    - Estimated effort: 2 weeks
#    - Expected performance gain: 5-20x
#    - Risk: Low (deterministic algorithms)
#    - ROI: High (latency-sensitive)
#
# **Medium Priority** (migrate later):
# 3. validation_engine.py (234 LOC, complexity 25, churn 8)
#    - ROI: Medium (moderate complexity, moderate churn)
#
# **Low Priority** (keep in Python):
# - Scripts with complexity < 15 and churn < 5
# - One-off data analysis scripts
# - Prototyping/experimental code
#
# **Migration Strategy**:
# 1. Start with ml_inference.py (lower risk, high impact)
# 2. Expose Rust implementation via PyO3 (Python bindings)
# 3. A/B test performance in production
# 4. Gradually migrate callers to native Rust API
# 5. Deprecate Python version after 2 sprints"
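The effort estimate in Step 3 can be made concrete. A sketch using Basic COCOMO organic-mode coefficients (a = 2.4, b = 1.05, effort in person-months) and a simple ROI proxy of complexity × churn per person-month; both the proxy and the candidate numbers (taken from the hotspot output above) are illustrative:

```python
def cocomo_effort(loc, a=2.4, b=1.05):
    """Basic COCOMO (organic mode): person-months from lines of code."""
    return a * (loc / 1000) ** b

def migration_priority(candidates):
    """Rank migration candidates by a hypothetical ROI proxy: complexity * churn / effort."""
    for c in candidates:
        c["effort_pm"] = round(cocomo_effort(c["lines"]), 2)
        c["roi"] = round(c["complexity"] * c["churn"] / c["effort_pm"], 1)
    return sorted(candidates, key=lambda c: c["roi"], reverse=True)

candidates = [
    {"file": "data_processor.py", "complexity": 45, "churn": 23, "lines": 678},
    {"file": "ml_inference.py",   "complexity": 38, "churn": 15, "lines": 456},
]
for c in migration_priority(candidates):
    print(f'{c["file"]}: ~{c["effort_pm"]} person-months, ROI score {c["roi"]}')
```

At 678 LOC, data_processor.py comes out to roughly 1.6 person-months and ranks first, matching the prioritization in the example above.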

Language-Specific Quality Thresholds

Different languages have different "acceptable" complexity ranges:

| Language   | Cyclomatic Threshold | Cognitive Threshold | Rationale                                     |
|------------|----------------------|---------------------|-----------------------------------------------|
| Rust       | 10                   | 15                  | Strong type system reduces cognitive load     |
| Python     | 8                    | 12                  | Dynamic typing increases cognitive load       |
| TypeScript | 10                   | 15                  | Type system helps, but looser than Rust       |
| JavaScript | 8                    | 12                  | Dynamic, similar to Python                    |
| Go         | 10                   | 15                  | Explicit error handling increases complexity  |
| C/C++      | 15                   | 20                  | Manual memory management complexity           |
| Java/C#    | 10                   | 15                  | OOP increases structural complexity           |

Adjust recommendations based on language context.
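These per-language bars can be encoded so reviews apply the right threshold automatically. A minimal sketch using the table values above (the function name and default bar are illustrative):

```python
# Per-language (cyclomatic, cognitive) thresholds from the table above.
THRESHOLDS = {
    "rust": (10, 15), "python": (8, 12), "typescript": (10, 15),
    "javascript": (8, 12), "go": (10, 15), "c": (15, 20),
    "cpp": (15, 20), "java": (10, 15), "csharp": (10, 15),
}

def flag_function(language, cyclomatic, cognitive):
    """Return which thresholds a function exceeds for its language."""
    cyc_max, cog_max = THRESHOLDS.get(language.lower(), (10, 15))  # fallback bar
    flags = []
    if cyclomatic > cyc_max:
        flags.append(f"cyclomatic {cyclomatic} > {cyc_max}")
    if cognitive > cog_max:
        flags.append(f"cognitive {cognitive} > {cog_max}")
    return flags

print(flag_function("python", cyclomatic=14, cognitive=10))
# → ['cyclomatic 14 > 8']
```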

Cross-Language Best Practices

1. API Contract Definition

Use schema definition languages for cross-language APIs:

  • REST: OpenAPI/Swagger
  • gRPC: Protocol Buffers
  • GraphQL: Schema Definition Language (SDL)

2. Consistent Code Style

Maintain consistent style across languages:

# Rust: rustfmt
pmat format --language rust --path src/rust/

# Python: black
pmat format --language python --path scripts/python/

# TypeScript: prettier
pmat format --language typescript --path frontend/

3. Unified Quality Gates

Apply language-agnostic quality standards:

# Set quality gates for all languages
pmat quality-gate \
    --max-complexity 15 \
    --min-maintainability 65 \
    --max-debt-hours 200 \
    --apply-to-all-languages
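The same gate can be reproduced in CI by consuming PMAT's per-language JSON output directly. A sketch applying the three limits above (the metric field names are assumptions, not PMAT's documented schema):

```python
GATE = {"max_complexity": 15, "min_maintainability": 65, "max_debt_hours": 200}

def check_gate(metrics_by_lang):
    """Return one failure message per language/limit violation; empty list means pass."""
    failures = []
    for lang, m in metrics_by_lang.items():
        if m["avg_complexity"] > GATE["max_complexity"]:
            failures.append(f"{lang}: complexity {m['avg_complexity']} > {GATE['max_complexity']}")
        if m["maintainability"] < GATE["min_maintainability"]:
            failures.append(f"{lang}: maintainability {m['maintainability']} < {GATE['min_maintainability']}")
        if m["debt_hours"] > GATE["max_debt_hours"]:
            failures.append(f"{lang}: debt {m['debt_hours']}h > {GATE['max_debt_hours']}h")
    return failures

metrics = {
    "Rust":   {"avg_complexity": 8,  "maintainability": 82, "debt_hours": 45},
    "Python": {"avg_complexity": 15, "maintainability": 68, "debt_hours": 128},
}
print(check_gate(metrics) or "all gates passed")
# → all gates passed
```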

4. Language Detection Automation

Auto-detect primary language for tooling:

# Detect primary language
pmat detect-language --path . --output primary_lang.txt

# Use in CI/CD scripts
PRIMARY_LANG=$(cat primary_lang.txt)
if [ "$PRIMARY_LANG" = "Rust" ]; then
    cargo test
elif [ "$PRIMARY_LANG" = "Python" ]; then
    pytest
fi
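Where pmat is unavailable in a given CI stage, a rough fallback is to infer the primary language from file extensions. A sketch under that assumption (the extension map is partial and illustrative; pmat's detector is more accurate):

```python
from pathlib import Path
from collections import Counter

EXT_TO_LANG = {".rs": "Rust", ".py": "Python", ".ts": "TypeScript",
               ".js": "JavaScript", ".go": "Go", ".sh": "Shell"}

def detect_primary_language(root):
    """Count source files by extension under root; return the most common language."""
    counts = Counter(
        EXT_TO_LANG[p.suffix]
        for p in Path(root).rglob("*")
        if p.is_file() and p.suffix in EXT_TO_LANG
    )
    return counts.most_common(1)[0][0] if counts else None
```

This counts files rather than lines, so a handful of large files can be under-weighted relative to `pmat analyze languages`.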

Polyglot Architecture Patterns

Pattern 1: Microservices

Structure: Each service is written in its optimal language
Example:

  • User service: Go (concurrency)
  • Payment service: Rust (safety)
  • Analytics: Python (data science libs)
  • Web UI: TypeScript (React ecosystem)

PMAT Analysis:

pmat analyze-microservices --path services/ --output microservices_quality.json

Pattern 2: Monorepo

Structure: Multiple languages in a single repository
Example:

  • backend/: Rust
  • frontend/: TypeScript
  • ml/: Python
  • mobile/: Kotlin, Swift

PMAT Analysis:

pmat context --path . --monorepo-mode --output monorepo_context.md

Pattern 3: Native Extensions

Structure: Performance-critical code in C/C++/Rust, with bindings to high-level languages
Example:

  • Core: Rust (image processing)
  • Bindings: PyO3 (Python), Neon (Node.js)

PMAT Analysis:

pmat analyze-ffi --path . --output ffi_safety_report.json

Integration with Other PMAT Skills

Workflow for Polyglot Projects:

  1. pmat-multi-lang: Understand language distribution ← This skill
  2. pmat-context: Generate unified deep context
  3. pmat-quality: Assess quality per language
  4. pmat-refactor: Apply language-specific refactorings
  5. pmat-tech-debt: Track debt across languages

Reporting for Polyglot Projects

Generate comprehensive multi-language reports:

# Executive summary for polyglot projects
pmat generate-polyglot-report \
    --input language_stats.json \
    --format executive \
    --output POLYGLOT_ARCHITECTURE_REPORT.md

Report Sections:

  1. Language Distribution: Percentages, file counts, roles
  2. Quality Comparison: Metrics by language
  3. Integration Patterns: Cross-language dependencies
  4. Recommendations: Migration, consolidation, standardization strategies

Common Multi-Language Challenges

Challenge 1: Inconsistent Quality Standards

Problem: Different teams apply different quality bars
Solution: Use PMAT to enforce unified quality gates

Challenge 2: Build System Complexity

Problem: Multiple build tools (cargo, npm, pip, gradle)
Solution: Orchestrate with Makefile or Bazel

Challenge 3: Dependency Management

Problem: Language-specific package managers
Solution: Centralized dependency scanning with pmat audit-deps

Challenge 4: Code Duplication Across Languages

Problem: Business logic duplicated in different languages
Solution: Identify duplication with pmat analyze duplication --cross-language

Performance Optimization by Language

Typical Performance Characteristics:

  • Rust, C, C++: Fastest (compiled, zero-cost abstractions)
  • Go: Fast (compiled, GC overhead minimal)
  • Java, C#: Fast (JIT optimization)
  • TypeScript, JavaScript: Moderate (JIT, V8 optimization)
  • Python, Ruby: Slower (interpreted, dynamic)

Migration Strategy:

# Identify performance bottlenecks
pmat profile --path . --output perf_bottlenecks.json

# Recommend language for hot paths
# "Consider migrating data_processor.py (Python) to Rust for 10-50x speedup"

Limitations

  • Language Coverage: PMAT supports 25+ languages, but not all (e.g., COBOL and Fortran have limited support)
  • Cross-Language Analysis: Some patterns (e.g., FFI safety) require manual review
  • Build System Integration: May need language-specific tooling for full CI/CD
  • Team Expertise: Quality recommendations assume team proficiency in target languages

When NOT to Use This Skill

  • Single-Language Projects: Use language-specific skills instead
  • Prototypes: Multi-language analysis adds overhead for throwaway code
  • Non-Code Polyglot: HTML/CSS/JSON are markup and configuration formats, not programming languages

Scientific Foundation

Multi-language analysis based on:

  1. Software Architecture Metrics (Chidamber & Kemerer, 1994)
  2. Polyglot Programming (Ford et al., 2014)
  3. Cross-Language Static Analysis (Livshits & Lam, 2005)
  4. Monorepo Best Practices (Google Engineering, 2016)

Version Requirements

  • Minimum: PMAT v2.170.0
  • Recommended: Latest version for best multi-language support
  • Check version: pmat --version

Remember: Polyglot architecture is a powerful tool when used intentionally. Choose each language for its strengths, maintain consistent quality standards, and use PMAT to ensure architectural boundaries remain clear and maintainable.

More by paiml

pmat-quality

Code Quality Analysis with PMAT: Analyzes code quality, complexity, and technical debt using PMAT (Pragmatic AI Labs MCP Agent Toolkit). Use this skill when: - User mentions "code quality", "complexity", "technical debt", or "maintainability" - Reviewing code or conducting code review - Modifying or refactoring existing code files - Creating pull requests or preparing commits - Investigating performance or quality issues Supports 25+ languages including Rust, Python, TypeScript, JavaScript, Go, C++, Java, Ruby, PHP, Swift, and more. Provides cyclomatic complexity, cognitive complexity, maintainability index, dead code detection, and technical debt annotations (SATD: TODO, FIXME, HACK comments).

pmat-tech-debt

Technical Debt Tracking with PMAT: Tracks and manages technical debt using PMAT (Pragmatic AI Labs MCP Agent Toolkit). Use this skill when: - User asks about technical debt, TODO comments, or code quality issues - Planning sprint work and need to prioritize debt repayment - Conducting code audits or technical debt assessments - Tracking debt accumulation trends over time - Creating technical debt reports for stakeholders Detects SATD (Self-Admitted Technical Debt) annotations: TODO, FIXME, HACK, XXX, NOTE comments. Provides debt quantification in hours, prioritization by severity, and repayment tracking.

pmat-context

Deep Context Generation with PMAT: Generates comprehensive, LLM-optimized codebase context using PMAT (Pragmatic AI Labs MCP Agent Toolkit). Use this skill when: - Starting work on unfamiliar codebases - Onboarding to new projects or repositories - Need quick understanding of project architecture - Preparing for refactoring or feature implementation - Creating documentation or technical specifications Outputs highly compressed markdown (60-80% reduction) optimized for LLM consumption. Supports 25+ languages with architecture visualization, complexity heatmaps, and dependency graphs.

pmat-refactor

Automated Refactoring with PMAT: Provides automated refactoring suggestions and complexity reduction strategies using PMAT (Pragmatic AI Labs MCP Agent Toolkit). Use this skill when: - User requests code refactoring, optimization, or improvement - Complexity analysis reveals high-complexity functions (cyclomatic > 10) - Code review identifies maintainability issues - Technical debt needs to be addressed systematically - Preparing legacy code for modernization Supports 25+ languages with data-driven refactoring recommendations based on complexity metrics, mutation testing results, and industry best practices (Fowler's refactoring catalog).