Agent Skills
WILLOSCAR

research-pipeline-runner

@WILLOSCAR/research-pipeline-runner
421
29 forks
Updated 4/29/2026

Run this repo’s Units+Checkpoints research pipelines end-to-end (survey/brief/paper-review/evidence-review/idea/tutorial/graduate-paper), with workspaces + checkpoints. **Trigger**: run pipeline, kickoff, 继续执行, 自动跑, 写一篇, survey/brief/review/调研/教程/系统综述/审稿. **Use when**: the user wants the flow run end-to-end (create `workspaces/<name>/`, generate/execute `UNITS.csv`, stop and wait at HUMAN checkpoints). **Skip if**: the user explicitly wants units executed one at a time by hand (use `unit-executor`), or you should not auto-advance to the prose stage. **Network**: depends on the selected pipeline (arXiv/PDF/citation verification may need network; offline import supported where available). **Guardrail**: respect checkpoints (no prose without an Approve); stop and wait at HUMAN units; never create workspace artifacts in the repo root.

Installation

$ npx agent-skills-cli install @WILLOSCAR/research-pipeline-runner
Supported assistants: Claude Code, Cursor, Copilot, Codex, Antigravity

Details

Path: .codex/skills/research-pipeline-runner/SKILL.md
Branch: main
Scoped Name: @WILLOSCAR/research-pipeline-runner

Usage

After installing, this skill will be available to your AI coding assistant.

Verify installation:

npx agent-skills-cli list

Skill Instructions


name: research-pipeline-runner
description: |
  Run this repo’s Units+Checkpoints research pipelines end-to-end (survey/brief/paper-review/evidence-review/idea/tutorial/graduate-paper), with workspaces + checkpoints.
  Trigger: run pipeline, kickoff, 继续执行, 自动跑, 写一篇, survey/brief/review/调研/教程/系统综述/审稿.
  Use when: the user wants the flow run end-to-end (create workspaces/<name>/, generate/execute UNITS.csv, stop and wait at HUMAN checkpoints).
  Skip if: the user explicitly wants units executed one at a time by hand (use unit-executor), or you should not auto-advance to the prose stage.
  Network: depends on the selected pipeline (arXiv/PDF/citation verification may need network; offline import supported where available).
  Guardrail: respect checkpoints (no prose without an Approve); stop and wait at HUMAN units; never create workspace artifacts in the repo root.

Research Pipeline Runner

Goal: let a user trigger a full pipeline with one natural-language request, while keeping the run auditable (Units + artifacts + checkpoints).

This skill is coordination only:

  • semantic work is done by the relevant skills’ SKILL.md
  • scripts are deterministic helpers (scaffold/validate/compile), not the author

Inputs

  • User goal (one sentence is enough), e.g.:
    • “Write me an arxiv-survey-latex on agents”
  • Optional:
    • explicit pipeline path (e.g., pipelines/arxiv-survey-latex.pipeline.md)
    • constraints (time window, language: EN/Chinese, evidence_mode: abstract/fulltext)

Outputs

  • A workspace under workspaces/<name>/ containing:
    • STATUS.md, GOAL.md, PIPELINE.lock.md, UNITS.csv, CHECKPOINTS.md, DECISIONS.md
    • pipeline-specific artifacts (papers/outline/sections/output/latex)
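The C0 scaffold step can be sketched as a small helper. This is a hypothetical illustration, not the repo’s `scripts/pipeline.py`: it creates `workspaces/<name>/` and seeds the bookkeeping files listed above without clobbering an existing run.

```python
from pathlib import Path

# Bookkeeping files every workspace starts with (per the Outputs list above).
SEED_FILES = ["STATUS.md", "GOAL.md", "PIPELINE.lock.md",
              "UNITS.csv", "CHECKPOINTS.md", "DECISIONS.md"]

def scaffold_workspace(root: Path, name: str) -> Path:
    """Create workspaces/<name>/ with empty bookkeeping files (idempotent)."""
    ws = root / "workspaces" / name
    ws.mkdir(parents=True, exist_ok=True)
    for fname in SEED_FILES:
        f = ws / fname
        if not f.exists():  # never overwrite an in-progress run
            f.write_text("")
    return ws
```

Pipeline-specific artifact directories (papers/, outline/, sections/, …) are created later by the units that own them.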

Non-negotiables

  • Use UNITS.csv as the execution contract; one unit at a time.
  • Respect checkpoints (CHECKPOINTS.md): no long prose until required approvals are recorded in DECISIONS.md (survey default: C2).
  • Stop at HUMAN checkpoints and wait for explicit sign-off.
  • Never create workspace artifacts in the repo root; always use workspaces/<name>/.
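The prose gate in the second bullet reduces to a single check. A minimal sketch, assuming DECISIONS.md records each approval as a line containing `Approve <checkpoint>`:

```python
def prose_allowed(decisions_text: str, required: str = "C2") -> bool:
    """True only if DECISIONS.md records an explicit approval for the
    required checkpoint (survey default: C2). No approval, no prose."""
    needle = f"Approve {required}"
    return any(needle in line for line in decisions_text.splitlines())
```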

Decision tree: pick a pipeline

User goal → choose:

  • Survey/综述/调研 + Markdown draft → pipelines/arxiv-survey.pipeline.md
  • Survey/综述/调研 + PDF output → pipelines/arxiv-survey-latex.pipeline.md
  • Research brief / rapid review / 速览 → pipelines/research-brief.pipeline.md
  • Paper review / paper critique / 审稿 → pipelines/paper-review.pipeline.md
  • Evidence review / systematic review / 系统综述 → pipelines/evidence-review.pipeline.md
  • Idea finding / 选题 / 点子 / 找方向 → pipelines/idea-brainstorm.pipeline.md
  • Tutorial/教程 → pipelines/source-tutorial.pipeline.md
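The decision tree above can be sketched as keyword routing (hypothetical helper; keyword lists abbreviated, and the more specific routes are checked before the generic survey route):

```python
from typing import Optional

# Ordered (keywords, pipeline) pairs mirroring the decision tree above.
ROUTES = [
    (("brief", "rapid review", "速览"), "pipelines/research-brief.pipeline.md"),
    (("paper review", "critique", "审稿"), "pipelines/paper-review.pipeline.md"),
    (("systematic review", "系统综述"), "pipelines/evidence-review.pipeline.md"),
    (("idea", "选题", "点子", "找方向"), "pipelines/idea-brainstorm.pipeline.md"),
    (("tutorial", "教程"), "pipelines/source-tutorial.pipeline.md"),
    (("survey", "综述", "调研"), None),  # resolved below by output format
]

def pick_pipeline(goal: str, wants_pdf: bool = False) -> Optional[str]:
    g = goal.lower()
    for keywords, pipeline in ROUTES:
        if any(k in g for k in keywords):
            if pipeline is not None:
                return pipeline
            # survey route: PDF output means the LaTeX variant
            return ("pipelines/arxiv-survey-latex.pipeline.md" if wants_pdf
                    else "pipelines/arxiv-survey.pipeline.md")
    return None  # no match: ask the user for an explicit pipeline path
```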

Recommended run loop (skills-first)

  1. Initialize workspace (C0):
  • create workspaces/<name>/
  • write GOAL.md, lock pipeline (PIPELINE.lock.md), seed queries.md
  2. Execute units sequentially:
  • follow each unit’s SKILL.md to produce the declared outputs
  • only mark DONE when acceptance criteria are satisfied and outputs exist
  3. Stop at HUMAN checkpoints:
  • default survey checkpoint is C2 (scope + outline)
  • write a concise approval request in DECISIONS.md and wait
  4. Writing-stage self-loop (when drafts look thin/template-y):
  • prefer local fixes over rewriting everything:
    • writer-context-pack (C4→C5 bridge) makes packs debuggable
    • subsection-writer writes per-file units
    • writer-selfloop fixes only failing sections/*.md
    • paragraph-curator / style-harmonizer / opener-variator converge structure and de-template the prose
    • evaluation-anchor-checker is the late section-level numeric hygiene sweep before merge
    • draft-polisher removes generator voice without changing citation keys
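Steps 2–3 of the loop can be sketched as a walk over UNITS.csv. Assumed columns are `unit_id,kind,status`; the real execution contract may carry more fields:

```python
import csv
import io

def next_action(units_csv: str):
    """Walk UNITS.csv top-to-bottom, one unit at a time.
    Returns ('run', unit_id) for the first pending non-HUMAN unit,
    ('wait_human', unit_id) if the first pending unit is a HUMAN checkpoint,
    or ('finished', None) when every unit is DONE."""
    for row in csv.DictReader(io.StringIO(units_csv)):
        if row["status"] == "DONE":
            continue
        if row["kind"] == "HUMAN":
            return ("wait_human", row["unit_id"])  # stop; wait for sign-off
        return ("run", row["unit_id"])
    return ("finished", None)
```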

Strict-mode behavior (by design)

In --strict runs, several semantic C3/C4 artifacts are treated as scaffolds until explicitly marked refined. This is intentional: it prevents bootstrap JSONL from silently passing into C5 writing (a major source of hollow/templated prose).

Create these markers only after you have manually refined/spot-checked the artifacts:

  • outline/subsection_briefs.refined.ok
  • outline/chapter_briefs.refined.ok
  • outline/evidence_bindings.refined.ok
  • outline/evidence_drafts.refined.ok
  • outline/anchor_sheet.refined.ok
  • outline/writer_context_packs.refined.ok

The runner may BLOCK even if the JSONL exists; add the marker after refinement, then rerun/resume the unit.
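The strict-mode gate can be sketched as a marker check (hypothetical helper mirroring the list above): a JSONL artifact only counts as refined once its `.refined.ok` sibling exists, and any missing marker is grounds for a BLOCK.

```python
from pathlib import Path

REFINED_MARKERS = [
    "outline/subsection_briefs.refined.ok",
    "outline/chapter_briefs.refined.ok",
    "outline/evidence_bindings.refined.ok",
    "outline/evidence_drafts.refined.ok",
    "outline/anchor_sheet.refined.ok",
    "outline/writer_context_packs.refined.ok",
]

def strict_block_reasons(ws: Path) -> list:
    """Markers still missing under workspace ws; non-empty => BLOCK C5 writing."""
    return [m for m in REFINED_MARKERS if not (ws / m).exists()]
```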

  5. Finish:
  • merge → audit → (optional) LaTeX scaffold/compile

Optional CLI helpers (debug only)

  • Kickoff + run (optional; convenient, not required): python scripts/pipeline.py kickoff --topic "<topic>" --pipeline <pipeline-name> --run --strict
  • Resume: python scripts/pipeline.py run --workspace <ws> --strict
  • Approve checkpoint: python scripts/pipeline.py approve --workspace <ws> --checkpoint C2
  • Mark refined unit: python scripts/pipeline.py mark --workspace <ws> --unit-id <U###> --status DONE --note "LLM refined"

Handling common blocks

  • HUMAN approval required: summarize produced artifacts, ask for approval, then record it and resume.
  • Quality gate blocked (output/QUALITY_GATE.md exists): treat current outputs as scaffolding; refine per the unit’s SKILL.md; mark DONE; resume.
  • No network: use offline imports (papers/imports/ or arxiv-search --input).
  • Weak coverage: broaden queries or reduce/merge subsections (outline-budgeter) before writing.

Quality checklist

  • UNITS.csv statuses reflect actual outputs (no DONE without outputs).
  • No prose is written unless DECISIONS.md explicitly approves it.
  • The run stops at HUMAN checkpoints with clear next questions.
  • In strict mode, scaffold/stub outputs do not get marked DONE without refinement.
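The first checklist item can be sketched as an audit pass. This assumes hypothetical UNITS.csv columns `unit_id,status,outputs`, with `outputs` a `;`-separated list of workspace-relative paths:

```python
import csv
import io
from pathlib import Path

def bogus_done_units(units_csv: str, ws: Path) -> list:
    """Unit ids marked DONE whose declared outputs are missing on disk."""
    bad = []
    for row in csv.DictReader(io.StringIO(units_csv)):
        if row["status"] != "DONE":
            continue
        outputs = [p for p in row.get("outputs", "").split(";") if p]
        if not all((ws / p).exists() for p in outputs):
            bad.append(row["unit_id"])  # DONE without outputs: flag it
    return bad
```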

More by WILLOSCAR

subsection-polisher
421

Polish a single H3 unit file under `sections/` into survey-grade prose (de-template + contrast/eval/limitation), without changing citation keys. **Trigger**: subsection polisher, per-subsection polish, polish section file, 小节润色, 去模板, 结构化段落. **Use when**: `sections/S*.md` exists but reads rigid/template-y; you want to fix quality locally before `section-merger`. **Skip if**: subsection files are missing, evidence packs are incomplete, or `Approve C2` is not recorded. **Network**: none. **Guardrail**: do not invent facts/citations; do not add/remove citation keys; keep citations within the same H3; keep citations subsection-scoped.

writer-context-pack
421

Build per-H3 writer context packs (NO PROSE): merge briefs + evidence packs + anchor facts + allowed citations into a single deterministic JSONL, so drafting is less hollow and less brittle. **Trigger**: writer context pack, context pack, drafting pack, paragraph plan pack, 写作上下文包. **Use when**: `outline/subsection_briefs.jsonl` + `outline/evidence_drafts.jsonl` + `outline/anchor_sheet.jsonl` exist and you want to make C5 drafting easier/more consistent. **Skip if**: upstream evidence is missing or scaffolded (fix `paper-notes` / `evidence-binder` / `evidence-draft` / `anchor-sheet` first). **Network**: none. **Guardrail**: NO PROSE; do not invent facts/citations; only use citation keys present in `citations/ref.bib`.

claim-evidence-matrix
421

Build a section-by-section claim–evidence matrix (`outline/claim_evidence_matrix.md`) from the outline and paper notes. **Trigger**: claim–evidence matrix, evidence mapping, 证据矩阵, 主张-证据对齐. **Use when**: before writing prose, you need to make each subsection’s checkable claims and their evidence sources explicit (outline + paper notes ready). **Skip if**: `outline/outline.yml` or `papers/paper_notes.jsonl` is missing. **Network**: none. **Guardrail**: bullets-only (NO PROSE); every claim needs at least 2 evidence sources (or an explicitly stated exception).

evaluation-anchor-checker
421

Audit and rewrite evaluation/numeric claims to ensure they carry minimal protocol context (task + metric + constraint) and avoid underspecified model naming. **Trigger**: evaluation anchor checker, numeric claim hygiene, underspecified numbers, protocol context, 评测锚点检查, 数字断言, 指标上下文. **Use when**: before final merge/polish, or when reviewers would likely flag claims as underspecified (numbers without task/metric/budget), or `pipeline-auditor` warns about suspicious model naming. **Skip if**: evidence is too thin to justify numeric claims (route upstream to C3/C4), or you are pre-C2 (NO PROSE). **Network**: none. **Guardrail**: do not invent numbers; do not add/remove/move citation keys; if protocol context is missing, weaken/remove the numeric claim rather than guessing.