Reusable Claude Code skills for paper review, code review, and computational reproducibility audits
These are Claude Code skills — reusable prompt templates that run as slash commands inside Claude Code. They’re designed for academic economists and social scientists working with research papers and replication packages.
Claude Code is Anthropic’s AI coding agent that runs in your terminal or in the app. Skills are reusable prompt files (written in markdown) that you can invoke as slash commands — like /paper-review or /code-review — to run structured, multi-step tasks. When you type a slash command, Claude Code reads the skill file and follows its instructions, applying the frameworks and checklists defined within it to whatever file or folder you point it at.
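To make that concrete, here is a rough sketch of what a skill looks like on disk. The skill name and contents below are hypothetical (not one of the skills in this repo); the YAML frontmatter fields follow the SKILL.md format that Claude Code reads:

```bash
# Hypothetical toy skill at ~/.claude/skills/hello-review/SKILL.md
# (name, description, and instructions are illustrative only)
mkdir -p ~/.claude/skills/hello-review
cat > ~/.claude/skills/hello-review/SKILL.md <<'EOF'
---
name: hello-review
description: Toy example. Summarise a paper's abstract and list referee questions.
---
Read the file the user points you at. Summarise the abstract in three
sentences, then list three clarifying questions a referee might ask.
EOF
```

You could then run /hello-review path/to/paper.pdf and Claude Code would follow those instructions against that file.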
Also works on Codex. These are just markdown prompt files, so they work equally well in OpenAI Codex (and other agentic coding tools that read local instruction files). Drop the SKILL.md contents into a Codex AGENTS.md, save them as a Codex prompt, or paste them in directly — the frameworks and checklists are model-agnostic.
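As a minimal sketch of the Codex route (the paths are assumptions; point them at wherever your copies of the files live):

```bash
# Append a skill's instructions to a Codex AGENTS.md in the current project
cat ~/.claude/skills/paper-review/SKILL.md >> ./AGENTS.md
```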
paper-review: Review academic research papers using 5 established frameworks (Edmans’ editorial assessment, Nyhan’s peer review checklist, Humphreys’ comprehensive review, Blattman’s empirical paper guide, and Evans & Bellemare on introductions, abstracts, and conclusions).
code-review: Review a development economics project folder for replication best practices. Based on Gentzkow & Shapiro, DIME Analytics, Julian Reif, AEA Data Editor / Vilhuber, and the Social Science Data Editors Template README. Covers folder structure, master scripts, coding standards (Stata/R/Python), data management, output reproducibility, and documentation.
referee2: A systematic computational reproducibility audit protocol with 5 parallel audits: code audit, cross-language replication, directory & replication package, output automation, and econometrics. Based on Scott Cunningham’s MixtapeTools Referee 2 protocol.
backmanreview: A 6-agent pre-submission referee report targeting a specified journal. Runs spelling/grammar, internal consistency, unsupported claims, mathematics/notation, tables/figures, and contribution evaluation agents in parallel, then consolidates them into a prioritised report. Based on Claes Backman’s AI Research Feedback.
Deep-review an academic paper using parallel sub-agents for section-level scrutiny. Runs a multi-pass pipeline: first understanding the full paper, then launching 6-9 parallel sub-agents (section reviewers + cross-cutting checks), then consolidating and tiering findings as major/moderate/minor. Based on OpenAIReview by Chenhao Tan and the Chicago Human-AI Institute.
Run the full coarse.ink review pipeline locally, using your Claude Code subscription as the LLM backend instead of paying per-call API costs. Handles structure analysis, overview synthesis, per-section review, proof verification, and an editorial dedup pass on PDFs, TeX, markdown, DOCX, HTML, or EPUB. Takes 10–25 minutes; the only per-paper cost is the ~$0.05–0.15 Mistral OCR step via OpenRouter. Install with uvx --python 3.12 --from 'coarse-ink' coarse install-skills --all --force.
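The same install step, as a standalone shell command you can copy directly:

```bash
# Install the coarse.ink review skills into Claude Code (command as given above)
uvx --python 3.12 --from 'coarse-ink' coarse install-skills --all --force
```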
The easiest way to install is to give Claude Code the link to this page and ask it to install the skills:
Just paste https://lcrawfurd.github.io/claude-skills/ into Claude Code and ask it to install the skills for you.
Or manually copy each skill’s markdown file into your ~/.claude/skills/ directory:
~/.claude/skills/
├── paper-review/
│   └── SKILL.md
├── code-review/
│   └── SKILL.md
├── referee2/
│   └── SKILL.md
└── backmanreview/
    └── SKILL.md
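For example, a shell sketch of the manual copy. This assumes the four SKILL.md files sit in a local download or checkout of this repo, laid out one folder per skill; adjust the source paths to match where your copies actually are:

```bash
# Copy each skill's SKILL.md into ~/.claude/skills/<skill-name>/
# (source paths assume a local copy of the repo with one folder per skill)
for skill in paper-review code-review referee2 backmanreview; do
  mkdir -p ~/.claude/skills/"$skill"
  cp "./$skill/SKILL.md" ~/.claude/skills/"$skill"/SKILL.md
done
```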
Then invoke them as slash commands:
/paper-review path/to/paper.pdf
/code-review path/to/project/
/referee2 path/to/project/
/backmanreview QJE path/to/paper.tex