security-scan: Scan code for OWASP vulnerabilities and security issues. Use for security-sensitive implementations.
Installation
Details
Usage
After installing, this skill will be available to your AI coding assistant.
Verify installation:
npx agent-skills-cli list
Skill Instructions
name: security-scan
description: Scan code for OWASP vulnerabilities and security issues. Use for security-sensitive implementations.
Security Scan Skill
Purpose
Identify and prevent security vulnerabilities.
OWASP Top 10 Checklist
Reference: checklists/owasp-top-10.md
A01: Broken Access Control
- Authorization on all endpoints
- Deny by default
- Rate limiting implemented
- CORS properly configured
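The "deny by default" item above can be sketched as an authorization decorator. This is a minimal illustration, not part of the skill itself; the role model and `Forbidden` exception are assumptions.

```python
from functools import wraps

class Forbidden(Exception):
    """Raised when a user lacks the required role (hypothetical error type)."""
    pass

def require_role(role):
    """Decorator: deny access unless the user explicitly holds `role`."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(user, *args, **kwargs):
            # Deny by default: a missing or empty roles list falls through to Forbidden.
            if role not in user.get("roles", []):
                raise Forbidden(f"role {role!r} required")
            return fn(user, *args, **kwargs)
        return wrapper
    return decorator

@require_role("admin")
def delete_account(user, account_id):
    return f"deleted {account_id}"
```

The key property is that access is only ever granted on an explicit match; every other path raises.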
A02: Cryptographic Failures
- Data encrypted in transit (HTTPS)
- Sensitive data encrypted at rest
- Strong algorithms used
- Keys properly managed
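Two of the items above, strong algorithms and key management, can be sketched with the standard library. `APP_API_KEY` is a hypothetical environment variable name, not something this skill defines.

```python
import hashlib
import os
import secrets

def fingerprint(data: bytes) -> str:
    # Use SHA-256 rather than MD5/SHA-1, whose collision resistance is broken.
    return hashlib.sha256(data).hexdigest()

def load_api_key() -> str:
    # Read keys from the environment rather than hardcoding them in source.
    # APP_API_KEY is an assumed variable name; falls back to a random key here
    # only so the sketch is runnable.
    return os.environ.get("APP_API_KEY") or secrets.token_urlsafe(32)
```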
A03: Injection
- Parameterized queries
- Input validation
- Output encoding
- No eval() with user input
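The parameterized-queries item can be shown with `sqlite3` and an in-memory demo table; the table and data are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'alice')")

def find_user(name: str):
    # The ? placeholder lets the driver handle escaping; never build SQL
    # by concatenating or f-string-interpolating user input.
    cur = conn.execute("SELECT id, name FROM users WHERE name = ?", (name,))
    return cur.fetchone()
```

A classic injection payload passed to `find_user` simply matches no row instead of altering the query.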
Authentication Checklist
Reference: checklists/auth-security.md
- Passwords hashed (bcrypt/argon2)
- Session properly managed
- Tokens securely stored
- Logout invalidates session
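The password-hashing item can be sketched with the stdlib's PBKDF2; in production, prefer bcrypt or argon2 via a maintained library as the checklist says. The iteration count here is illustrative.

```python
import hashlib
import hmac
import os

def hash_password(password: str, salt=None):
    # A fresh random salt per password defeats precomputed (rainbow) tables.
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(candidate, digest)
```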
Data Validation Checklist
Reference: checklists/data-validation.md
- All input validated
- Type checking enforced
- Size limits set
- Format validation done
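The four items above layer naturally into one validator. The username rules (lowercase start, 3-32 chars) are assumptions for the sketch, not rules this skill mandates.

```python
import re

# Assumed format: lowercase letter, then letters/digits/underscores, 3-32 chars total.
USERNAME_RE = re.compile(r"^[a-z][a-z0-9_]{2,31}$")

def validate_username(value) -> str:
    if not isinstance(value, str):        # type checking enforced
        raise TypeError("username must be a string")
    if len(value) > 32:                   # size limit set
        raise ValueError("username too long")
    if not USERNAME_RE.fullmatch(value):  # format validation done
        raise ValueError("invalid username format")
    return value
```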
Automated Scan Script
#!/bin/bash
# Run dependency audit
npm audit
# Run static analysis (requires eslint and eslint-plugin-security installed)
npx eslint --plugin security .
# Check for secrets
npx secretlint .
Vulnerability Severity Levels
Critical
- Remote code execution
- SQL injection
- Authentication bypass
- Sensitive data exposure
High
- Cross-site scripting (XSS)
- Cross-site request forgery (CSRF)
- Insecure deserialization
- Privilege escalation
Medium
- Information disclosure
- Missing encryption
- Weak session management
- Insufficient logging
Low
- Missing security headers
- Verbose error messages
- Outdated dependencies (no known exploits)
Security Report Format
Save to: docs/reviews/security-audit-{session}.md
Remediation Process
- Critical/High: Fix immediately, block merge
- Medium: Fix before release
- Low: Track in backlog
Best Practices
Do
- Use parameterized queries
- Validate all input
- Encode all output
- Use security headers
- Keep dependencies updated
Don't
- Hardcode secrets
- Trust user input
- Expose stack traces
- Use weak algorithms
- Skip authentication checks
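Two of the "Do" items, output encoding and security headers, can be sketched together. The header values are a conservative baseline assumption; tune them per application.

```python
import html

# Baseline security headers (illustrative values, not a mandated set).
SECURITY_HEADERS = {
    "X-Content-Type-Options": "nosniff",
    "X-Frame-Options": "DENY",
    "Strict-Transport-Security": "max-age=31536000",
    "Content-Security-Policy": "default-src 'self'",
}

def render_comment(user_input: str) -> str:
    # html.escape neutralizes script payloads before they reach the page.
    return f"<p>{html.escape(user_input)}</p>"
```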