A step-by-step workflow to simulate multiple expert perspectives on your paper using two simple prompts — no code required.
The Idea
When a program chair assigns reviewers to a submission, they look for people whose prior work is directly relevant to the paper's methods, claims, and technical context. A reviewer of an efficient-attention paper, for example, brings a completely different lens than one who studies long-context reasoning or model compression.
Most AI-assisted review workflows skip this step entirely. They ask a generic LLM prompt to "review this paper," producing a single, uniform perspective. The two-prompt workflow below replicates the expertise-matching step that makes human review panels effective.
Step 1 — Identify the Reviewers (Prompt 1)
Paste this prompt into a new chat with your paper PDF already attached. The model will analyze the reference list to find researchers whose prior work is most technically relevant, covering different angles of your paper.
I have uploaded my research paper as a PDF above. Please help me identify 4 ideal reviewers for this paper.

Your task:
1. Read through the paper's reference list, related work section, and key technical claims.
2. Identify researchers whose prior work is directly cited and covers different aspects of this paper — for example: the core method, the evaluation benchmarks, theoretical foundations, or deployment concerns.
3. Each reviewer should bring a distinct perspective. Aim for diversity across: methodology, empirical rigor, application domain, and efficiency or practicality.
4. Strictly exclude any co-authors of this paper.

For each of the 4 recommended reviewers, output:
- **Name**: the researcher's full name
- **Key cited paper**: the most relevant paper of theirs that this submission cites
- **Review angle**: a 1–2 sentence description of the technical perspective they would bring

Format your response as a numbered list.
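Prompt 1 pins down a strict output format, which also makes the reviewer list easy to post-process if you ever script the workflow. A minimal parsing sketch, assuming the model follows the requested `**Name**:` / `**Key cited paper**:` / `**Review angle**:` fields; the sample reply below is a placeholder with made-up names, not real model output:

```python
import re

def parse_reviewers(reply: str) -> list[dict]:
    """Split Prompt 1's numbered list into one dict per reviewer."""
    fields = {"Name": "name", "Key cited paper": "cited_paper", "Review angle": "angle"}
    reviewers, current = [], None
    for line in reply.splitlines():
        m = re.match(
            r"\s*(?:\d+\.\s*)?-?\s*\*\*(Name|Key cited paper|Review angle)\*\*:\s*(.+)",
            line,
        )
        if not m:
            continue
        field, value = m.groups()
        if field == "Name":  # each reviewer entry starts with the Name field
            current = {}
            reviewers.append(current)
        if current is not None:
            current[fields[field]] = value.strip()
    return reviewers

# Placeholder reply in the format Prompt 1 requests:
sample = """1. **Name**: Jane Doe
   - **Key cited paper**: Efficient Attention at Scale
   - **Review angle**: Focuses on the core method.
2. **Name**: John Smith
   - **Key cited paper**: Long-Context Benchmarks
   - **Review angle**: Focuses on evaluation.
"""
```

This is only a convenience for automation; for the manual chat workflow you can read the list directly.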
Step 2 — Generate the Personas (Prompt 2)
Run this in the same conversation immediately after Prompt 1. The model will use web knowledge of each candidate's research to synthesize a concise reviewing persona — focused on how they review, not just who they are.
Now, for each of the 4 reviewers you identified, write a reviewing persona I can use to simulate their feedback on my paper.

For each reviewer, your persona should describe:
- Their primary research focus and the technical areas they know deeply
- What they tend to pay close attention to when reviewing — e.g., novelty relative to prior work, strength of baselines, reproducibility, theoretical soundness, practical efficiency, or clarity of claims
- Specific questions or concerns they are likely to raise about this particular paper based on their background
- Their typical reviewing style (e.g., constructive but demanding, skeptical of incremental work, focused on empirical evidence)

Guidelines:
- The persona should describe reviewing behavior, not a biography. Focus on what they notice and what they push back on.
- Keep each persona to 150–200 words.
- Write in second person: "You are a reviewer whose expertise is in..."
- Do not include the reviewer's real name inside the persona text itself — treat the name only as a label.

Output 4 clearly labeled persona descriptions, one per reviewer.
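The workflow is designed for a chat UI, but the same two-turn exchange can be scripted. Below is a sketch, not a definitive implementation: it assumes the OpenAI Python SDK and `pypdf` for text extraction (any chat API with a large enough context works), and the model name is a placeholder.

```python
def chat_messages(paper_text: str, *turns) -> list[dict]:
    """Build a chat history: the paper text first, then alternating (role, text) turns."""
    messages = [{"role": "user",
                 "content": "I have attached my research paper below.\n\n" + paper_text}]
    messages.extend({"role": role, "content": text} for role, text in turns)
    return messages

def run_two_prompt_review(pdf_path: str, prompt1: str, prompt2: str,
                          model: str = "gpt-4o"):
    """Run Prompt 1 and Prompt 2 in a single conversation, as the workflow requires."""
    # Lazy imports: requires `pip install openai pypdf` and OPENAI_API_KEY set.
    from openai import OpenAI
    from pypdf import PdfReader

    paper = "\n".join(page.extract_text() or "" for page in PdfReader(pdf_path).pages)
    client = OpenAI()

    # Turn 1: identify the four reviewers.
    reply1 = client.chat.completions.create(
        model=model, messages=chat_messages(paper, ("user", prompt1))
    ).choices[0].message.content

    # Turn 2: same conversation, so the model keeps the reviewer context.
    reply2 = client.chat.completions.create(
        model=model,
        messages=chat_messages(paper, ("user", prompt1),
                               ("assistant", reply1), ("user", prompt2)),
    ).choices[0].message.content
    return reply1, reply2
```

To use it, paste the two prompt texts from Steps 1 and 2 into `prompt1` and `prompt2`. Replaying the full history on the second call is what keeps both prompts in one conversation.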
Step 3 — Using the Personas
For each persona, start a fresh chat, paste the persona text along with your paper, and ask for a full review. Then read the four reviews side by side:

- **High priority**: If 3 or 4 reviewers raise the same weakness, it is a strong signal that the paper does not adequately address that point. Prioritize these in your revision.
- **Worth addressing**: A concern raised by only one reviewer reflects the unique perspective of that expertise area — often a gap in your related work or an underexplored baseline.
- **Refine your story**: If different reviewers reach opposite accept/reject conclusions, your paper likely sits on the decision boundary. Strengthening the key experiments or contributions may tip the balance.
- **Rebuttal prep**: Collect all "questions for authors" from the 4 reviews. If you cannot answer a question clearly in a rebuttal, add the clarification directly into the paper.

Tips
ChatGPT, Claude, or Gemini with PDF support work well. The model needs to read your full paper, not just a pasted abstract. Upload the camera-ready or latest draft PDF.
Keeping both prompts in one conversation lets the model carry context about which papers each reviewer wrote and how they relate to your submission. Start fresh only for the individual review step (Step 3).
Before running Step 3, skim the 4 candidates the model identified. If a name is unfamiliar or seems unrelated, ask the model to explain why it chose that person, or ask it to suggest an alternative. The quality of the personas depends on the quality of the candidates.
AI-generated reviews are not a substitute for human expert feedback. Think of them as a structured pre-submission audit: they help you find obvious gaps before real reviewers do. The most valuable output is often the list of questions — use it to strengthen your weaknesses section and rebuttal strategy.