Using AI to Support Your Research

Dr. Nadim Mahmud

Introduction

Generative AI tools have entered the research workflow in a meaningful way. Whether you are refining a research question, cleaning up a methods section, or debugging a statistical script, AI can genuinely save time and sharpen your work. But these tools are still widely misunderstood, and the gap between thoughtful use and careless use is consequential.

This guide is not a list of tools. It is a practical, workflow-oriented framework for integrating AI into your research process responsibly and effectively. The goal is to help you answer three questions: Where exactly can AI help me? How do I use it well? And what do I need to be careful about?

The framing that matters most: AI augments your workflow. It does not replace scientific judgment. Every output AI produces is a starting point, not a finished product. The researchers who use AI best are the ones who understand the material well enough to evaluate what the AI gives them.

What AI is (and isn't) in research

The most common mistake residents make is treating AI as an oracle: something that surfaces correct answers and reliable facts on demand. It is not. It is a highly capable assistant that works best when you already understand the material well enough to evaluate what it produces.

AI is genuinely useful for

  • Speeding up repetitive writing and editing tasks
  • Structuring your thinking and outlining arguments
  • Explaining concepts and statistical methods in plain language
  • Generating candidate research questions to react to
  • Writing and debugging analysis code
  • Drafting and polishing prose

AI is not reliable for

  • Generating correct facts without verification
  • Producing real, accurate citations
  • Critical appraisal of study quality
  • Replacing domain expertise or clinical judgment
  • Synthesizing conflicting evidence with nuance
  • Identifying what is novel in your specific subfield

AI in the Research Workflow

The sections below walk through the phases of the research process, showing where AI can (and cannot) help, with example prompts you can copy and adapt.

Ideation

Brainstorming and refining research questions

AI can help with

  • Generating candidate research questions from a clinical observation
  • Refining a broad topic into a focused PICO framework
  • Identifying potential gaps in the existing literature
  • Exploring angles you may not have considered

AI cannot replace

  • Clinical judgment about which questions matter most
  • Domain expertise required to evaluate feasibility
  • Your mentor's guidance on what is novel vs. already answered

Relevant tools

ChatGPT, Claude, Gemini

Example prompts

These are starting points. Adapt them to your specific study, data source, and question.

PICO refinement

I am a gastroenterology fellow interested in studying outcomes of patients with MASLD and cirrhosis who undergo bariatric surgery. Help me frame this as a focused PICO research question and suggest 3 specific, answerable variations I could pursue as a retrospective cohort study.

Gap identification

I am planning a literature review on antifungal prophylaxis in critically ill patients with liver failure. Based on what you know about this space, what are potential unanswered questions or research gaps that might be feasible to address with retrospective data?

Curated Tool List

A focused, practical list. Each entry notes what the tool is best for and its cost tier.

  • ChatGPT: ideation, drafting, editing, code generation, statistical explanations (free tier)
  • Claude: long-document editing, nuanced writing feedback, complex reasoning tasks (free tier)
  • Gemini: real-time web search integration, multimodal tasks, Google Workspace users (free tier)
  • OpenEvidence: clinical and medical literature questions with evidence-backed answers (free tier)
  • Elicit: extracting structured data from papers, literature synthesis, systematic review support (free tier)
  • Consensus: getting a quick evidence-based answer to a research question (free tier)
  • NotebookLM: analyzing your own uploaded documents, synthesizing PDFs, generating audio summaries (free tier)
  • Research Rabbit: mapping related literature and discovering connected papers visually (free tier)
  • Scite: understanding how a paper has been cited, supporting vs. contrasting (paid)
  • Grammarly: grammar, clarity, and tone editing in real time (free tier)
  • Wordtune: rewriting and paraphrasing sentences for clarity or concision (free tier)
  • GitHub Copilot: code completion and suggestions within a code editor such as VS Code (paid)
A note on AI-assisted literature tools: Elicit, Consensus, OpenEvidence, and similar tools are excellent for getting oriented quickly, but they cannot replace a rigorous systematic search. For any review intended for publication, use PubMed, Scopus, or Web of Science with a documented search strategy. See the Conducting a Literature Review module for guidance on search strategy.

Risks and Limitations

Understanding AI's limitations is not optional for a medical researcher. It is what separates thoughtful use from harmful use.

Ethics and Authorship

The norms around AI use in academic publishing are still evolving, but several principles are already well-established. Getting these right matters for your reputation and your integrity as a researcher.

AI cannot be an author

Authorship requires the ability to take accountability for the work: to stand behind it, answer for it, and accept responsibility if errors are found. AI cannot do this. The ICMJE criteria for authorship (conception, drafting, revision, approval, accountability) all require human judgment and responsibility. Do not list AI tools as authors regardless of how much they contributed to drafting.

Disclosure is increasingly required

Most major journals (including Gastroenterology, Hepatology, NEJM, JAMA, and BMJ) now require disclosure of AI use in manuscript preparation. Policies vary: some require disclosure only for AI-generated text, others for any AI use including editing assistance. Check the target journal's author guidelines before submission. A typical disclosure reads:

"ChatGPT (OpenAI) was used for editing assistance in preparation of this manuscript. All content was reviewed, verified, and is the full responsibility of the authors."

You are responsible for AI-generated content

If AI writes a sentence that contains a fabricated statistic and that sentence appears in your published paper, you are responsible. Not the AI tool. Not its developers. The standards for accuracy, scientific integrity, and attribution apply to every word in your manuscript, regardless of how it was generated.

Never input PHI into public AI tools

This is not a gray area. Inputting protected health information into ChatGPT, Claude, or any non-HIPAA-compliant tool is a HIPAA violation, regardless of whether the data appears de-identified. Use AI only with fully anonymized, aggregate summaries of your data.

How to Prompt Effectively

The quality of AI output depends heavily on the quality of your input. The four principles below make a consistent difference.

Provide context

Tell the AI who you are, what kind of study you're doing, and what data you have access to. The more specific context you provide, the more usable the output.

Specify the format

Ask for bullet points, a table, a numbered list, or a specific word count. Unformatted prose is harder to use directly and takes more editing.

Set constraints

Tell the AI what NOT to do: 'Do not add new claims,' 'Do not change factual content,' 'Flag anything you are uncertain about.' Constraints prevent the AI from drifting.

Iterate

Treat AI as a collaborator, not a vending machine. If the first output isn't quite right, follow up: 'Make this more concise,' 'This sentence overstates causality, rewrite it,' 'Give me three alternative versions of this paragraph.'
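The first three principles can be applied mechanically before you ever paste a prompt into a chat window. Below is a minimal Python sketch of a hypothetical prompt-builder that assembles context, output format, and constraints into a single structured prompt; the function and field names are illustrative assumptions, not part of any real tool, and iteration (the fourth principle) still happens in the conversation itself.

```python
def build_prompt(role: str, task: str, format_spec: str, constraints: list[str]) -> str:
    """Assemble a structured prompt from the three mechanical principles:
    provide context, specify the format, and set constraints.
    (Hypothetical helper for illustration only.)"""
    lines = [
        f"Context: I am {role}.",      # principle 1: provide context
        f"Task: {task}",
        f"Format: {format_spec}",      # principle 2: specify the format
    ]
    if constraints:
        lines.append("Constraints:")   # principle 3: set constraints
        lines.extend(f"- {c}" for c in constraints)
    return "\n".join(lines)

# Example drawn from the ideation prompts earlier in this guide:
prompt = build_prompt(
    role="a gastroenterology fellow planning a retrospective cohort study",
    task="Refine my topic into a focused PICO research question.",
    format_spec="A numbered list of 3 candidate questions.",
    constraints=["Do not add new claims.", "Flag anything you are uncertain about."],
)
print(prompt)
```

The point of a template like this is not automation for its own sake; it is that writing down the role, format, and constraints forces you to decide what a usable answer would look like before you ask.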

Weak vs. strong prompts

The example below compares a weak prompt with a stronger one for the same task.

Weak prompt

Summarize this paper.

Strong prompt

Summarize the following RCT in 5 bullet points covering: (1) study design and population, (2) primary intervention and comparator, (3) primary outcome and result, (4) key secondary outcomes, (5) major limitations. Then tell me one question I should ask before citing it. [Paste abstract]

Why it matters: The stronger prompt defines the exact format, specifies which dimensions matter, and adds a critical-thinking prompt that keeps you engaged rather than passively accepting a summary.

Key Takeaways

AI is a tool, not a co-author. It produces output; you provide judgment.

Use AI most confidently for drafting, editing, explaining concepts, writing code, and structuring your thinking.

Always verify facts and citations independently. AI hallucination is real, common, and consequential.

Never input PHI or identifiable patient data into a public AI tool.

Disclose AI use per your target journal's policy. Check author guidelines before submission.

Protect your own critical thinking. If you could not explain or defend an AI-generated output without the AI, it is not ready for your manuscript.

Continue Learning

Now that you have a framework for using AI responsibly, these related modules will help you put it into practice.