Research Practice Note  ·  Dynamic Persona Reviewer Project  ·  OpenReview
Practical Guide

Better AI-Assisted Paper Review
with Dynamic Reviewer Personas

A step-by-step workflow to simulate multiple expert perspectives on your paper using two simple prompts — no code required.

The Idea

Real reviewers are matched by expertise. Your AI reviewer should be too.

When a program chair assigns reviewers to a submission, they look for people whose prior work is directly relevant to the paper's methods, claims, and technical context. A reviewer of an efficient-attention paper, for example, brings a completely different lens than one who studies long-context reasoning or model compression.

Most AI-assisted review workflows skip this step entirely. They send a generic "review this paper" prompt to an LLM, producing a single, uniform perspective. The two-prompt workflow below replicates the expertise-matching step that makes human review panels effective.

Core idea: Use your paper's own reference list to identify 4 realistic expert reviewers, build a short persona for each, and then let each persona review your paper independently. You get four distinct technical perspectives in one session.

Step 1 — Prompt 1

Identify 4 Candidate Reviewers

Paste this prompt into a new chat with your paper PDF already attached. The model will analyze the reference list to find researchers whose prior work is most technically relevant, covering different angles of your paper.

PROMPT 1 — Reviewer Matching
I have uploaded my research paper as a PDF above. Please help me identify 4 ideal reviewers for this paper.

Your task:
1. Read through the paper's reference list, related work section, and key technical claims.
2. Identify researchers whose prior work is directly cited and covers different aspects of this paper — for example: the core method, the evaluation benchmarks, theoretical foundations, or deployment concerns.
3. Each reviewer should bring a distinct perspective. Aim for diversity across: methodology, empirical rigor, application domain, and efficiency or practicality.
4. Strictly exclude any co-authors of this paper.

For each of the 4 recommended reviewers, output:
- **Name**: the researcher's full name
- **Key cited paper**: the most relevant paper of theirs that this submission cites
- **Review angle**: a 1–2 sentence description of the technical perspective they would bring

Format your response as a numbered list.

Step 2 — Prompt 2

Build a Reviewing Persona for Each Candidate

Run this in the same conversation immediately after Prompt 1. The model will draw on its knowledge of each candidate's published research to synthesize a concise reviewing persona, focused on how they review, not just who they are.

PROMPT 2 — Persona Generation
Now, for each of the 4 reviewers you identified, write a reviewing persona I can use to simulate their feedback on my paper.

For each reviewer, your persona should describe:
- Their primary research focus and the technical areas they know deeply
- What they tend to pay close attention to when reviewing — e.g., novelty relative to prior work, strength of baselines, reproducibility, theoretical soundness, practical efficiency, or clarity of claims
- Specific questions or concerns they are likely to raise about this particular paper based on their background
- Their typical reviewing style (e.g., constructive but demanding, skeptical of incremental work, focused on empirical evidence)

Guidelines:
- The persona should describe reviewing behavior, not a biography. Focus on what they notice and what they push back on.
- Keep each persona to 150–200 words.
- Write in second person: "You are a reviewer whose expertise is in..."
- Do not include the reviewer's real name inside the persona text itself — treat the name only as a label.

Output 4 clearly labeled persona descriptions, one per reviewer.

Step 3 — Using the Personas

Review Your Paper Through 4 Independent Lenses

How to use each persona
  1. Open a new chat. Start a fresh conversation for each reviewer to avoid cross-contamination.
  2. Paste the persona as the first message. Copy the persona text from Step 2 and paste it at the top of the new conversation.
  3. Upload your paper and ask for a full review. Use a simple instruction like:
    Please review the attached paper fully. Follow the persona above when writing your review.
  4. Compare all 4 reviews side by side. Look for weaknesses that multiple reviewers flag independently — those are your highest-priority revision targets. Pay special attention to concerns that only one reviewer raises — those are likely blind spots in your current coverage of related work.
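If you prefer to script the per-persona loop rather than paste into four separate chats, the steps above can be sketched with a short, stdlib-only Python script against the OpenAI chat-completions HTTP endpoint. This is a sketch under assumptions: the model name is illustrative, `personas` is the list of persona texts from Step 2, and `paper_text` is plain text you have already extracted from your PDF (with any PDF tool). The key point it demonstrates is one fresh API call per persona, so each review stays an independent conversation.

```python
# Sketch: one independent review per persona via the OpenAI
# chat-completions HTTP API (stdlib only; no SDK required).
import json
import os
import urllib.request

API_URL = "https://api.openai.com/v1/chat/completions"
REVIEW_INSTRUCTION = (
    "Please review the attached paper fully. "
    "Follow the persona above when writing your review."
)

def build_messages(persona, paper_text):
    """Fresh message list per reviewer: persona first, then paper + request."""
    return [
        {"role": "system", "content": persona},
        {"role": "user", "content": paper_text + "\n\n" + REVIEW_INSTRUCTION},
    ]

def review_once(persona, paper_text, model="gpt-4o"):
    """Run a single persona review; model name is an assumption."""
    payload = {"model": model, "messages": build_messages(persona, paper_text)}
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + os.environ["OPENAI_API_KEY"],
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

def run_reviews(personas, paper_text):
    # One call per persona keeps the reviews cross-contamination-free.
    return [review_once(p, paper_text) for p in personas]
```

Each call starts from an empty message history, which mirrors the "open a new chat" instruction above; any chat-capable model with a long enough context window for your paper should work.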

What to look for across the four reviews

| Signal | What it means | Action |
| --- | --- | --- |
| Repeated concerns | If 3 or 4 reviewers raise the same weakness, it is a strong signal that the paper does not adequately address that point. Prioritize these in your revision. | High priority |
| Lens-specific critiques | A concern raised by only one reviewer reflects the unique perspective of that expertise area, often a gap in your related work or an underexplored baseline. | Worth addressing |
| Verdict disagreement | If different reviewers reach opposite accept/reject conclusions, your paper likely sits on the decision boundary. Strengthening the key experiments or contributions may tip the balance. | Refine your story |
| Questions for authors | Collect all "questions for authors" from the 4 reviews. If you cannot answer a question clearly in a rebuttal, add the clarification directly into the paper. | Rebuttal prep |
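Once the four reviews are saved as plain text, a quick tally can separate repeated concerns from lens-specific ones. The sketch below uses naive case-insensitive keyword matching; the keyword list is illustrative and should be adapted to the concerns your reviews actually raise.

```python
# Rough triage over the saved reviews: count how many reviews mention
# each concern keyword, then bucket concerns by how widely they recur.
def tally_concerns(reviews, keywords):
    """Case-insensitive count of reviews mentioning each keyword."""
    return {kw: sum(kw.lower() in r.lower() for r in reviews) for kw in keywords}

def triage(counts):
    """Concerns in 3+ reviews are high priority; single-review ones are lens-specific."""
    repeated = sorted(k for k, c in counts.items() if c >= 3)
    lens_specific = sorted(k for k, c in counts.items() if c == 1)
    return repeated, lens_specific
```

For example, running `tally_concerns` with keywords like `["baselines", "ablation", "novelty"]` over four reviews immediately shows which weaknesses were flagged independently and which came from a single expert lens.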

Tips

Getting the most out of this workflow

Use a model that supports file upload

ChatGPT, Claude, or Gemini with PDF support work well. The model needs to read your full paper, not just a pasted abstract. Upload the camera-ready or latest draft PDF.

Run Prompt 1 and Prompt 2 in the same session

Keeping both prompts in one conversation lets the model carry context about which papers each reviewer wrote and how they relate to your submission. Start fresh only for the individual review step (Step 3).

Check the reviewer candidates for plausibility

Before running Step 3, skim the 4 candidates the model identified. If a name is unfamiliar or seems unrelated, ask the model to explain why it chose that person, or ask it to suggest an alternative. The quality of the personas depends on the quality of the candidates.

Use the reviews as a structured checklist, not as ground truth

AI-generated reviews are not a substitute for human expert feedback. Think of them as a structured pre-submission audit: they help you find obvious gaps before real reviewers do. The most valuable output is often the list of questions — use it to strengthen your weaknesses section and rebuttal strategy.

Estimated time: ~10 minutes total for the two prompts plus persona generation. Individual review sessions take roughly 3–5 minutes each depending on paper length. Total cost with a paid API: typically under $1 for all four reviews.