
Using AI to Support Your Research
Dr. Nadim Mahmud
Introduction
Generative AI tools have entered the research workflow in a meaningful way. Whether you are refining a research question, cleaning up a methods section, or debugging a statistical script, AI can genuinely save time and sharpen your work. But these tools are still widely misunderstood, and the gap between thoughtful use and careless use is consequential.
This guide is not a list of tools. It is a practical, workflow-oriented framework for integrating AI into your research process responsibly and effectively. The goal is to help you answer three questions: Where exactly can AI help me? How do I use it well? And what do I need to be careful about?
What AI is (and isn't) in research
The most common mistake residents make is treating AI as an oracle: something that surfaces correct answers and reliable facts on demand. It is not. It is a highly capable assistant that works best when you already understand the material well enough to evaluate what it produces.
AI is genuinely useful for
- Speeding up repetitive writing and editing tasks
- Structuring your thinking and outlining arguments
- Explaining concepts and statistical methods in plain language
- Generating candidate research questions to react to
- Writing and debugging analysis code
- Drafting and polishing prose
AI is not reliable for
- Generating correct facts without verification
- Producing real, accurate citations
- Critical appraisal of study quality
- Replacing domain expertise or clinical judgment
- Synthesizing conflicting evidence with nuance
- Identifying what is novel in your specific subfield
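Of the "genuinely useful" items, analysis code is the easiest to verify, because you can run it and check the numbers yourself. Below is a minimal sketch of the kind of function you might ask an AI to draft and then verify by hand; the 2x2 table counts are invented for illustration.

```python
# Sketch of AI-draftable analysis code: odds ratio with a 95% CI
# from a 2x2 table. Always re-derive the formula and spot-check
# the output before using anything like this in a manuscript.
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from a 2x2 table:
    a = exposed events, b = exposed non-events,
    c = unexposed events, d = unexposed non-events."""
    or_ = (a * d) / (b * c)
    se = sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR)
    lo = exp(log(or_) - z * se)
    hi = exp(log(or_) + z * se)
    return or_, lo, hi

# Hypothetical cohort: 30/70 events in exposed vs 15/85 in unexposed
print(odds_ratio_ci(30, 70, 15, 85))
```

The point is not that AI writes perfect statistics code; it is that code, unlike prose, gives you an objective way to catch its mistakes.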
AI in the Research Workflow
Each phase of the research process below shows where AI can (and cannot) help, with example prompts you can copy and adapt.
Ideation
Brainstorming and refining research questions
AI can help with
- Generate candidate research questions from a clinical observation
- Refine a broad topic into a focused PICO framework
- Identify potential gaps in the existing literature
- Explore angles you may not have considered
AI cannot replace
- Clinical judgment about which questions matter most
- Domain expertise required to evaluate feasibility
- Your mentor's guidance on what is novel vs. already answered
Example prompts
These are starting points. Adapt them to your specific study, data source, and question.
PICO refinement
I am a gastroenterology fellow interested in studying outcomes of patients with MASLD and cirrhosis who undergo bariatric surgery. Help me frame this as a focused PICO research question and suggest 3 specific, answerable variations I could pursue as a retrospective cohort study.
Gap identification
I am planning a literature review on antifungal prophylaxis in critically ill patients with liver failure. Based on what you know about this space, what are potential unanswered questions or research gaps that might be feasible to address with retrospective data?
Curated Tool List
A focused, practical list organized by category.
| Tool | Best for | Cost |
|---|---|---|
| ChatGPT | Ideation, drafting, editing, code generation, statistical explanations | Free tier |
| Claude | Long-document editing, nuanced writing feedback, complex reasoning tasks | Free tier |
| Gemini | Real-time web search integration, multimodal tasks, Google Workspace users | Free tier |
| OpenEvidence | Clinical and medical literature questions with evidence-backed answers | Free tier |
| Elicit | Extracting structured data from papers, literature synthesis, systematic review support | Free tier |
| Consensus | Getting a quick evidence-based answer to a research question | Free tier |
| NotebookLM | Analyzing your own uploaded documents, synthesizing PDFs, generating audio summaries | Free tier |
| Research Rabbit | Mapping related literature and discovering connected papers visually | Free tier |
| Scite | Understanding how a paper has been cited (supporting vs. contrasting) | Paid |
| Grammarly | Grammar, clarity, and tone editing in real time | Free tier |
| Wordtune | Rewriting and paraphrasing sentences for clarity or concision | Free tier |
| GitHub Copilot | Code completion and suggestions within a code editor (VS Code, etc.) | Paid |
Risks and Limitations
Understanding AI's limitations is not optional for a medical researcher. It is what separates thoughtful use from harmful use.
Ethics and Authorship
The norms around AI use in academic publishing are still evolving, but several principles are already well-established. Getting these right matters for your reputation and your integrity as a researcher.
AI cannot be an author
Authorship requires the ability to take accountability for the work: to stand behind it, answer for it, and accept responsibility if errors are found. AI cannot do this. The ICMJE criteria for authorship (conception, drafting, revision, approval, accountability) all require human judgment and responsibility. Do not list AI tools as authors regardless of how much they contributed to drafting.
Disclosure is increasingly required
Most major journals (including Gastroenterology, Hepatology, NEJM, JAMA, and BMJ) now require disclosure of AI use in manuscript preparation. Policies vary: some require disclosure only for AI-generated text, others for any AI use including editing assistance. Check the target journal's author guidelines before submission, and include a brief statement naming the tool and describing how it was used.
You are responsible for AI-generated content
If AI writes a sentence that contains a fabricated statistic and that sentence appears in your published paper, you are responsible. Not the AI tool. Not its developers. The standards for accuracy, scientific integrity, and attribution apply to every word in your manuscript, regardless of how it was generated.
Never input PHI into public AI tools
This is not a gray area. Inputting protected health information into ChatGPT, Claude, or any non-HIPAA-compliant tool is a HIPAA violation, regardless of whether the data appears de-identified. Use AI only with fully anonymized, aggregate summaries of your data.
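One practical way to honor this rule is to reduce row-level data to cohort-level aggregates before describing it to an AI tool. Below is a minimal sketch in Python; the field names and values are invented for illustration, and this is a workflow habit, not a substitute for your institution's privacy review.

```python
# Illustrative sketch: share only aggregate summaries with an AI
# assistant, never row-level patient records -- even records that
# appear de-identified.
from statistics import mean, median

# Hypothetical row-level cohort data. This level of detail should
# NOT be pasted into a public AI tool.
cohort = [
    {"age": 61, "meld": 18, "outcome": 1},
    {"age": 54, "meld": 22, "outcome": 0},
    {"age": 70, "meld": 15, "outcome": 1},
    {"age": 47, "meld": 25, "outcome": 0},
]

def aggregate_summary(rows):
    """Reduce row-level data to cohort-level aggregates that are
    safe to describe when asking an AI tool about your analysis."""
    return {
        "n": len(rows),
        "mean_age": round(mean(r["age"] for r in rows), 1),
        "median_meld": median(r["meld"] for r in rows),
        "event_rate": sum(r["outcome"] for r in rows) / len(rows),
    }

print(aggregate_summary(cohort))
```

A prompt built from the aggregate output ("a cohort of 4 patients, mean age 58, event rate 50%...") carries no patient-level information, yet is usually all the AI needs to help with study design or interpretation questions.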
How to Prompt Effectively
The quality of AI output depends heavily on the quality of your input. The four principles below make a consistent difference.
Provide context
Tell the AI who you are, what kind of study you're doing, and what data you have access to. The more specific context you provide, the more usable the output.
Specify the format
Ask for bullet points, a table, a numbered list, or a specific word count. Unformatted prose is harder to use directly and takes more editing.
Set constraints
Tell the AI what NOT to do: 'Do not add new claims,' 'Do not change factual content,' 'Flag anything you are uncertain about.' Constraints prevent the AI from drifting.
Iterate
Treat AI as a collaborator, not a vending machine. If the first output isn't quite right, follow up: 'Make this more concise,' 'This sentence overstates causality, rewrite it,' 'Give me three alternative versions of this paragraph.'
Weak vs. strong prompts
Compare the weak prompt below with its stronger counterpart.
Weak: Summarize this paper.
Strong: Summarize the following RCT in 5 bullet points covering: (1) study design and population, (2) primary intervention and comparator, (3) primary outcome and result, (4) key secondary outcomes, (5) major limitations. Then tell me one question I should ask before citing it. [Paste abstract]
Key Takeaways
AI is a tool, not a co-author. It produces output; you provide judgment.
Use AI most confidently for drafting, editing, explaining concepts, writing code, and structuring your thinking.
Always verify facts and citations independently. AI hallucination is real, common, and consequential.
Never input PHI or identifiable patient data into a public AI tool.
Disclose AI use per your target journal's policy. Check author guidelines before submission.
Protect your own critical thinking. If you could not explain or defend an AI-generated output without the AI, it is not ready for your manuscript.
Continue Learning
Now that you have a framework for using AI responsibly, these related modules will help you put it into practice.
Organizing Citations with Reference Managers
Learn to use Zotero, EndNote, or Mendeley to manage references and catch AI-generated citation errors.
Conducting a Literature Review
Build a rigorous search strategy that complements (and validates) AI-assisted literature tools.
How to Write a Manuscript
Understand the structure of a manuscript so you can use AI effectively to draft and edit each section.
Navigating the IRB Process
Understand research ethics requirements that no AI tool can navigate for you.