Agent Skills
jeremylongshore

fairdb-backup-manager

@jeremylongshore/fairdb-backup-manager
jeremylongshore
2,103
284 forks
Updated 5/5/2026

Automatically manages PostgreSQL backups with pgBackRest and Wasabi S3 storage when working with FairDB databases. Activates when you request "fairdb backup manager" functionality.

Installation

$ npx agent-skills-cli install @jeremylongshore/fairdb-backup-manager

Supported assistants: Claude Code, Cursor, Copilot, Codex, Antigravity

Details

Path: backups/skills-batch-20251204-000554/plugins/devops/fairdb-operations-kit/skills/fairdb-backup-manager/SKILL.md
Branch: main
Scoped Name: @jeremylongshore/fairdb-backup-manager

Usage

After installing, this skill will be available to your AI coding assistant.

Verify installation:

npx agent-skills-cli list

Skill Instructions


---
name: fairdb-backup-manager
description: |
  Automatically manages PostgreSQL backups with pgBackRest and Wasabi S3 storage
  when working with FairDB databases. Activates when you request "fairdb backup
  manager" functionality.
allowed-tools: Read, Write, Edit, Grep, Glob, Bash
version: 1.0.0
---

FairDB Backup Manager

Purpose

I automatically handle all backup-related operations for FairDB PostgreSQL databases, including scheduling, verification, restoration, and monitoring of pgBackRest backups with Wasabi S3 storage.

Activation Triggers

I activate when you:

  • Mention "backup", "restore", "pgbackrest", or "recovery" in the context of FairDB
  • Work with PostgreSQL backup configurations
  • Need to verify backup integrity
  • Discuss disaster recovery or data protection
  • Experience data loss or corruption issues

Core Capabilities

Backup Operations

  • Configure pgBackRest with Wasabi S3
  • Execute full, differential, and incremental backups
  • Manage backup schedules and retention policies
  • Compress and encrypt backup data
  • Monitor backup health and success rates
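The configuration behind these operations can be sketched as follows. This is an illustrative pgBackRest setup for a Wasabi S3 repository, not this skill's actual configuration: the stanza name `fairdb`, the bucket name, credentials, and paths are all placeholders.

```ini
# /etc/pgbackrest/pgbackrest.conf — illustrative sketch, values are placeholders
[global]
repo1-type=s3
repo1-s3-endpoint=s3.wasabisys.com
repo1-s3-bucket=fairdb-backups
repo1-s3-region=us-east-1
repo1-s3-key=<access-key>
repo1-s3-key-secret=<secret-key>
repo1-path=/pgbackrest
repo1-retention-full=4
compress-type=zst
process-max=4

[fairdb]
pg1-path=/var/lib/postgresql/16/main
```

With a stanza in place, the three backup types map onto one command each:

```shell
pgbackrest --stanza=fairdb --type=full backup
pgbackrest --stanza=fairdb --type=diff backup
pgbackrest --stanza=fairdb --type=incr backup
```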

Restore Operations

  • Perform point-in-time recovery (PITR)
  • Restore specific databases or tables
  • Test restore procedures without impacting production
  • Validate restored data integrity
  • Document recovery time objectives (RTO)
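A point-in-time recovery of this kind typically reduces to a single pgBackRest invocation against a stopped cluster (sketch; the stanza name and target timestamp are placeholders):

```shell
# Restore to a specific point in time, then promote once the target is reached
pgbackrest --stanza=fairdb --delta --type=time \
  --target="2024-03-14 15:00:00" --target-action=promote restore
```

The `--delta` option reuses files already present in the data directory, which usually shortens restore time considerably.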

Monitoring & Verification

  • Check backup completion status
  • Verify backup integrity with test restores
  • Monitor backup size and growth trends
  • Alert on backup failures or delays
  • Generate backup compliance reports
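These checks correspond to pgBackRest's built-in commands (the stanza name `fairdb` is a placeholder):

```shell
pgbackrest --stanza=fairdb info                 # backup sets, sizes, timestamps
pgbackrest --stanza=fairdb check                # verify archiving and repository access
pgbackrest --stanza=fairdb info --output=json   # machine-readable output for alerting
```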

Automated Workflows

When activated, I will:

  1. Assess Current State

    • Check existing backup configuration
    • Review backup history and success rate
    • Identify any failed or missing backups
    • Analyze storage usage and costs
  2. Optimize Configuration

    • Adjust retention policies based on requirements
    • Configure optimal compression settings
    • Set up parallel backup processes
    • Implement incremental backup strategies
  3. Execute Operations

    • Run scheduled backups automatically
    • Perform test restores monthly
    • Clean up old backups per retention policy
    • Monitor and alert on issues
  4. Document & Report

    • Maintain backup/restore runbooks
    • Generate compliance reports
    • Track metrics and trends
    • Document recovery procedures

Integration with FairDB Commands

I work seamlessly with these FairDB commands:

  • /fairdb-setup-backup - Initial configuration
  • /fairdb-onboard-customer - Customer-specific backups
  • /fairdb-emergency-response - Disaster recovery
  • /fairdb-health-check - Backup health monitoring

Best Practices I Enforce

Backup Strategy

  • Full backups weekly (Sunday 2 AM)
  • Differential backups daily
  • Incremental backups hourly during business hours
  • WAL archiving for point-in-time recovery
  • Geographical redundancy with Wasabi regions
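Assuming pgBackRest is driven by cron, the schedule above might be expressed like this (stanza name `fairdb` and the business-hours window are placeholders to adapt):

```
# crontab for the postgres user — times mirror the strategy above
0 2 * * 0       pgbackrest --stanza=fairdb --type=full backup
0 2 * * 1-6     pgbackrest --stanza=fairdb --type=diff backup
0 9-17 * * 1-5  pgbackrest --stanza=fairdb --type=incr backup
```

WAL archiving for point-in-time recovery also requires the PostgreSQL side to push WAL segments to pgBackRest:

```
# postgresql.conf
archive_mode = on
archive_command = 'pgbackrest --stanza=fairdb archive-push %p'
```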

Security

  • AES-256 encryption for all backups
  • Secure key management
  • Access control and audit logging
  • Encrypted transport to S3
  • Immutable backup storage
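In pgBackRest terms, repository encryption is configured with the cipher options below (a sketch; the passphrase handling is an assumption). Transport to the S3 endpoint is TLS-encrypted by default, and immutability is a storage-side feature (e.g. object lock on the bucket) rather than a pgBackRest setting.

```ini
# pgbackrest.conf — client-side AES-256-CBC encryption of the repository
repo1-cipher-type=aes-256-cbc
repo1-cipher-pass=<passphrase>   # placeholder; keep this file readable only by the backup user
```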

Testing

  • Monthly restore tests
  • Quarterly disaster recovery drills
  • Automated integrity verification
  • Performance benchmarking
  • Documentation updates

Proactive Monitoring

I continuously monitor for:

  • Backup failures or delays
  • Storage capacity issues
  • Unusual backup sizes
  • Performance degradation
  • Compliance violations

Emergency Response

During data loss incidents, I:

  1. Assess the extent of data loss
  2. Identify the best recovery point
  3. Execute restore procedures
  4. Verify data integrity
  5. Document incident and recovery

Reporting

I generate these reports automatically:

  • Daily backup status summary
  • Weekly storage utilization report
  • Monthly compliance audit
  • Quarterly DR test results
  • Annual backup strategy review

Cost Optimization

I help reduce backup costs by:

  • Implementing intelligent retention policies
  • Using compression effectively
  • Cleaning up unnecessary backups
  • Optimizing transfer methods
  • Selecting appropriate storage tiers

Example Scenarios

Scenario: New Customer Onboarding

User: "Setting up database for new customer Acme Corp"
Me: I'll configure automated backups for Acme Corp:
- Creating dedicated pgBackRest stanza
- Setting up daily full backups with 30-day retention
- Configuring WAL archiving for PITR
- Scheduling test restores monthly
- Adding monitoring for backup success
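The onboarding steps above might translate into commands like these (the stanza name `acme` is hypothetical):

```shell
pgbackrest --stanza=acme stanza-create       # dedicated stanza for the customer
pgbackrest --stanza=acme check               # verify archiving and repository access
pgbackrest --stanza=acme --type=full backup  # seed the first full backup
```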

Scenario: Backup Failure Alert

System: "Backup failed for customer_db"
Me: I'm investigating the backup failure:
- Checking disk space: OK
- Verifying S3 connectivity: Failed
- Issue: Wasabi credentials expired
- Action: Rotating credentials and retrying backup
- Result: Backup completed successfully

Scenario: Recovery Request

User: "Need to restore CustomerX database to yesterday 3 PM"
Me: Initiating point-in-time recovery:
- Target time: 2024-03-14 15:00:00
- Latest backup: Full backup from 2024-03-14 02:00:00
- Applying WAL logs to target time
- Restoration complete in 12 minutes
- Data verified and customer notified

Success Metrics

I track and optimize:

  • Backup success rate (target: >99.9%)
  • Recovery time objective (target: <1 hour)
  • Recovery point objective (target: <5 minutes)
  • Storage efficiency (compression ratio >3:1)
  • Cost per GB backed up

Continuous Improvement

I learn from each operation to:

  • Refine backup schedules
  • Improve recovery procedures
  • Optimize resource usage
  • Enhance monitoring alerts
  • Update documentation
