Download PDFs (when available) and extract plain text to support full-text evidence, writing `papers/fulltext_index.jsonl` and `papers/fulltext/*.txt`. **Trigger**: PDF download, fulltext, extract text, papers/pdfs, full-text extraction, download PDF. **Use when**: `queries.md` sets `evidence_mode: fulltext` (or you explicitly need full-text evidence) and you want stronger evidence for paper notes/claims. **Skip if**: `evidence_mode: abstract` (the default), or you do not want to download/extract (cost/permissions/time). **Network**: fulltext downloads usually require network access (unless you manually provide cached PDFs in `papers/pdfs/`). **Guardrail**: cache downloads in `papers/pdfs/`; do not overwrite existing extracted text by default (unless re-extraction is explicitly requested).
## Installation

After installing, this skill will be available to your AI coding assistant.

Verify installation:

```
npx agent-skills-cli list
```

## Skill Instructions
name: pdf-text-extractor
description: |
  Download PDFs (when available) and extract plain text to support full-text evidence, writing papers/fulltext_index.jsonl and papers/fulltext/*.txt.
  Trigger: PDF download, fulltext, extract text, papers/pdfs, full-text extraction, download PDF.
  Use when: queries.md sets evidence_mode: fulltext (or you explicitly need full-text evidence) and you want stronger evidence for paper notes/claims.
  Skip if: evidence_mode: abstract (the default), or you do not want to download/extract (cost/permissions/time).
  Network: fulltext downloads usually require network access (unless you manually provide cached PDFs in papers/pdfs/).
  Guardrail: cache downloads in papers/pdfs/; do not overwrite existing extracted text by default (unless re-extraction is explicitly requested).
# PDF Text Extractor
Optionally collect full-text snippets to deepen evidence beyond abstracts.
This skill is intentionally conservative: in many survey runs, abstract/snippet mode is enough and avoids heavy downloads.
## Inputs

- `papers/core_set.csv` (expects `paper_id`, `title`, and ideally `pdf_url`/`arxiv_id`/`url`)
- Optional: `outline/mapping.tsv` (to prioritize mapped papers)
## Outputs

- `papers/fulltext_index.jsonl` (one record per attempted paper)
- Side artifacts:
  - `papers/pdfs/<paper_id>.pdf` (cached downloads)
  - `papers/fulltext/<paper_id>.txt` (extracted text)
## Decision: evidence mode

`queries.md` can set `evidence_mode: "abstract" | "fulltext"`.

- `abstract` (default template): do not download; write an index that clearly records skipping.
- `fulltext`: download PDFs (when possible) and extract text to `papers/fulltext/`.
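A minimal sketch of how this decision might be read from `queries.md`. The key-on-its-own-line format shown here is an assumption about your `queries.md` layout, not the exact parser `run.py` uses; adjust the pattern if your file differs.

```python
import re

def read_evidence_mode(queries_text: str) -> str:
    """Return the evidence_mode declared in queries.md (default: 'abstract').

    Assumes the setting appears on its own line, e.g.
        evidence_mode: "fulltext"
    """
    m = re.search(r'^\s*evidence_mode:\s*"?(abstract|fulltext)"?\s*$',
                  queries_text, flags=re.MULTILINE)
    return m.group(1) if m else "abstract"

print(read_evidence_mode('topic: LLM surveys\nevidence_mode: "fulltext"\n'))  # fulltext
print(read_evidence_mode('topic: LLM surveys\n'))                             # abstract
```

Defaulting to `abstract` when the key is absent mirrors the template's conservative default: no downloads unless asked for.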
## Local PDFs Mode

When you cannot/should not download PDFs (restricted network, rate limits, no permission), provide PDFs manually and run in "local PDFs only" mode.

- PDF naming convention: `papers/pdfs/<paper_id>.pdf`, where `<paper_id>` matches `papers/core_set.csv`.
- Set `evidence_mode: "fulltext"` in `queries.md`.
- Run:

  ```
  python .codex/skills/pdf-text-extractor/scripts/run.py --workspace <ws> --local-pdfs-only
  ```

If PDFs are missing, the script writes a to-do list:

- `output/MISSING_PDFS.md` (human-readable summary)
- `papers/missing_pdfs.csv` (machine-readable list)
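The missing-PDF check itself is simple; a sketch of the idea, assuming `core_set.csv` has a `paper_id` column and cached files follow the `<paper_id>.pdf` convention:

```python
import csv, io, tempfile
from pathlib import Path

def find_missing_pdfs(core_set_csv: str, pdf_dir: Path) -> list[str]:
    """Return paper_ids from core_set.csv with no cached <paper_id>.pdf in pdf_dir."""
    rows = csv.DictReader(io.StringIO(core_set_csv))
    return [r["paper_id"] for r in rows
            if not (pdf_dir / f"{r['paper_id']}.pdf").exists()]

# Demo: one of two papers has a cached PDF, so one is reported missing.
with tempfile.TemporaryDirectory() as d:
    pdf_dir = Path(d)
    (pdf_dir / "p1.pdf").write_bytes(b"%PDF-1.4")
    core = "paper_id,title\np1,First Paper\np2,Second Paper\n"
    print(find_missing_pdfs(core, pdf_dir))  # ['p2']
```

The resulting list is what feeds `papers/missing_pdfs.csv`; how `run.py` formats it beyond the `paper_id` column is not specified here.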
## Workflow (heuristic)

1. Read `papers/core_set.csv`.
2. If `outline/mapping.tsv` exists, prioritize mapped papers first.
3. For each selected paper (fulltext mode):
   - resolve the PDF location (use `pdf_url`, else derive from `arxiv_id`/`url` when possible)
   - download to `papers/pdfs/<paper_id>.pdf` if missing
   - extract a reasonable prefix of the text to `papers/fulltext/<paper_id>.txt`
   - append/update a JSONL record in `papers/fulltext_index.jsonl` with status + stats
4. Never overwrite existing extracted text unless explicitly requested (delete the `.txt` to re-extract).
## Quality checklist

- `papers/fulltext_index.jsonl` exists and is non-empty.
- If `evidence_mode: "fulltext"`: at least a small but non-trivial subset has extracted text (strict mode blocks if extraction coverage is near-zero).
- If `evidence_mode: "abstract"`: the index records clearly reflect skip status (no downloads attempted).
## Script

### Quick Start

```
python .codex/skills/pdf-text-extractor/scripts/run.py --help
python .codex/skills/pdf-text-extractor/scripts/run.py --workspace <workspace_dir>
```

### All Options

- `--max-papers <n>`: cap the number of papers processed (can be overridden by `queries.md`)
- `--max-pages <n>`: extract at most N pages per PDF
- `--min-chars <n>`: minimum extracted characters to count as OK
- `--sleep <sec>`: delay between downloads
- `--local-pdfs-only`: do not download; only use `papers/pdfs/<paper_id>.pdf` if present

`queries.md` supports: `evidence_mode`, `fulltext_max_papers`, `fulltext_max_pages`, `fulltext_min_chars`.
### Examples

- Abstract mode (no downloads): set `evidence_mode: "abstract"` in `queries.md`, then run the script (it will emit `papers/fulltext_index.jsonl` with skip statuses).
- Fulltext mode with local PDFs only: set `evidence_mode: "fulltext"` in `queries.md`, put PDFs under `papers/pdfs/`, then run:

  ```
  python .codex/skills/pdf-text-extractor/scripts/run.py --workspace <ws> --local-pdfs-only
  ```

- Fulltext mode with a smaller budget:

  ```
  python .codex/skills/pdf-text-extractor/scripts/run.py --workspace <ws> --max-papers 20 --max-pages 4 --min-chars 1200
  ```
## Notes

- Downloads are cached under `papers/pdfs/`; extracted text is cached under `papers/fulltext/`.
- The script does not overwrite existing extracted text unless you delete the `.txt` file.
## Troubleshooting

**Issue**: no PDFs are available to download.

Fix: use `evidence_mode: abstract` (the default), or provide local PDFs under `papers/pdfs/` and rerun with `--local-pdfs-only`.

**Issue**: extracted text is empty or garbled.

Fix: try a different extraction backend if supported; otherwise mark the paper as `abstract` evidence level and avoid strong fulltext claims.
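Detecting the empty/garbled case can be automated with a cheap heuristic before deciding to fall back to abstract-level evidence. The ratio threshold below is an assumption to tune, not a value taken from `run.py`:

```python
def looks_garbled(text: str, min_ok_ratio: float = 0.5) -> bool:
    """Heuristic: flag extracted text when too few characters are
    letters/digits/whitespace (e.g. binary junk or replacement chars)."""
    if not text.strip():
        return True  # empty extraction is always a failure
    ok = sum(c.isalnum() or c.isspace() for c in text)
    return ok / len(text) < min_ok_ratio

print(looks_garbled("Attention is all you need."))  # False
print(looks_garbled("\x00\x01\ufffd\ufffd%%##"))    # True
```

Papers that fail this check can be recorded in the index with a garbled status so downstream steps know not to quote their full text.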