Reusable Claude Code skills for paper review, code review, and computational reproducibility audits
Based on OpenAIReview by Chenhao Tan and the Chicago Human-AI Institute. Deep-review an academic paper using parallel sub-agents for section-level scrutiny. Produces tiered findings (major/moderate/minor) and saves viz-compatible results.
/openaireview <path-or-arxiv-url>
Examples:
/openaireview paper.pdf
/openaireview https://arxiv.org/abs/2401.12345
/openaireview paper.tex

The skill runs a structured multi-pass review pipeline:
Automatically detects the input type (PDF, arXiv URL, .tex/.txt/.md), downloads if needed (arXiv HTML preferred, PDF fallback), parses the paper, splits it into sections, and writes a workspace to /tmp/<slug>_review/.
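The detection step can be sketched roughly as follows; `detect_input`, the regex, and the return labels are illustrative assumptions, not the skill's actual code:

```python
import re
from pathlib import Path

# Matches arXiv abs/pdf URLs with a modern identifier, e.g. 2401.12345.
ARXIV_RE = re.compile(r"arxiv\.org/(abs|pdf)/\d{4}\.\d{4,5}")

def detect_input(source: str) -> str:
    """Classify a review target: arXiv URL, local PDF, or plain
    text/LaTeX source. (Hypothetical helper for illustration.)"""
    if ARXIV_RE.search(source):
        return "arxiv"
    suffix = Path(source).suffix.lower()
    if suffix == ".pdf":
        return "pdf"
    if suffix in {".tex", ".txt", ".md"}:
        return "text"
    raise ValueError(f"unsupported input: {source}")
```

For an arXiv input the skill would then fetch the paper (HTML preferred, PDF fallback) before sectioning it into the workspace.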
Reads the complete text, including all appendices and tables, then builds a comprehensive structured summary of the paper's content.
Plans and launches 6-9 sub-agents in parallel:
Section sub-agents, one per major section or logical group.
Cross-cutting sub-agents (2-3), chosen based on what the paper needs.
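The fan-out/fan-in pattern above can be sketched with a thread pool standing in for Claude sub-agents; `review_section`, the section names, and the cross-cutting audit names are hypothetical placeholders:

```python
from concurrent.futures import ThreadPoolExecutor

def review_section(name: str) -> dict:
    # Stand-in for a sub-agent call; the real skill delegates each
    # task to a separate Claude sub-agent prompt.
    return {"section": name, "findings": []}

def run_parallel_review(sections, cross_cutting):
    """Launch one reviewer per task in parallel and collect results
    in task order."""
    tasks = sections + cross_cutting
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        return list(pool.map(review_section, tasks))

results = run_parallel_review(
    ["Introduction", "Methods", "Results", "Discussion"],
    ["statistics-audit", "reproducibility-audit"],
)
```

`pool.map` preserves input order, so findings can later be merged back section by section.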
After all sub-agents complete, the skill merges their findings into a structured review grouped by severity, with each issue classified by type (technical or logical). Results are saved to ./review_results/<slug>.json for visualization with openaireview serve.
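A saved review can be inspected with a few lines of Python. The tier keys below follow the severity tiers named above (major/moderate/minor), but the exact JSON schema is defined by openaireview, so treat the keys as an assumption:

```python
import json
from pathlib import Path

def summarize_review(path: str) -> dict:
    """Count issues per severity tier in a saved review JSON.
    Assumes top-level keys named after the severity tiers."""
    review = json.loads(Path(path).read_text())
    return {
        tier: len(review.get(tier, []))
        for tier in ("major", "moderate", "minor")
    }
```

This kind of quick summary is useful before opening the full visualization with openaireview serve.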
This skill requires additional Python scripts bundled alongside the SKILL.md. See the OpenAIReview GitHub repo for full installation instructions, or install via:
pip install openaireview