Systematic Review and PRISMA
Module 1: Understanding the Structure of Research Papers and Theses
From Concept to Submission Series
Academic Writing Mastery: The Complete 2026 Guide To Research Papers, Thesis & Dissertation Writing
The systematic review is one of the most publishable pieces of research a scholar can produce. A well-executed systematic review answers a clearly defined question by synthesising all available evidence meeting specified criteria — and does so through a method rigorous enough that another researcher following the same protocol would reach the same results. That combination of comprehensiveness and replicability is exactly what journals and grant committees value.
The problem is that most researchers who attempt systematic reviews for the first time underestimate what the method requires. They conduct a thorough literature search, critically analyse what they read, and write a synthesis. That is a narrative review — a different and less rigorous method. A systematic review requires a registered protocol, a documented search strategy, a PRISMA flow diagram, an explicit quality appraisal, and a structured report. Each of these elements has specific requirements that this post covers in full.
If you are writing a research paper or thesis that involves a literature review, the question you need to answer first is: does my review need to be systematic? This post begins there.
Systematic Review, Narrative Review, Scoping Review: Which One Do You Need?
These three review types are often confused. They have different purposes, different methodological requirements, and produce different kinds of knowledge claims.
| Review type | Purpose | When to use it |
| Narrative review | Survey and discuss existing literature on a topic. Synthesises but does not aim for comprehensiveness. Author selects sources based on relevance and quality as judged by the reviewer. | Background chapters of theses and dissertations. Conceptual papers. Introductions to empirical papers. When you want to situate your work in the field, not answer a specific empirical question. |
| Systematic review | Answer a specific, pre-defined research question by identifying, appraising, and synthesising all available evidence meeting explicit inclusion criteria. Designed to minimise bias. | When you want to produce a standalone publishable review that answers an empirical question about what the evidence shows. Requires a registered protocol and full PRISMA reporting. |
| Scoping review | Map the extent and nature of existing literature on a topic. Identifies what has been studied, by whom, and what gaps exist. Does not typically include quality appraisal. | When the field is new or poorly defined and you want to understand what exists before designing a systematic review. When the question is ‘what is known’ rather than ‘what does the evidence show about X.’ |
| Meta-analysis | Statistically combines quantitative results from multiple studies to produce a pooled estimate. Requires sufficient homogeneity across studies. | When you have enough quantitative studies with comparable outcome measures to justify statistical combination. Typically follows or accompanies a systematic review. |
The distinction between narrative and systematic reviews matters because they make different claims. A narrative review says ‘the literature suggests.’ A systematic review says ‘across all available evidence meeting these criteria, the finding is.’ Journals increasingly expect authors to be clear about which they are conducting. Calling a narrative review a ‘systematic review’ without the methodology to support that claim is a form of misrepresentation that peer reviewers will identify.
The PRISMA Framework: What It Is and Why It Exists
PRISMA stands for Preferred Reporting Items for Systematic reviews and Meta-Analyses. The current version is PRISMA 2020, published simultaneously in The BMJ, PLOS Medicine, and several other journals; that cross-journal publication signals that PRISMA is not discipline-specific. It covers systematic reviews across medicine, public health, social science, psychology, education, and any other field that produces empirical research amenable to systematic synthesis.
PRISMA is a reporting standard, not a methodology. This distinction matters. PRISMA does not tell you how to design your search strategy or how to appraise study quality. It tells you what information a completed systematic review must report so that readers can assess its trustworthiness and, if needed, replicate it. The 27-item PRISMA 2020 checklist specifies exactly what must appear and where.
The PRISMA flow diagram is the most visible element of the framework. It is a four-stage visual record of how studies moved through your review process: identification (how many records you found and where), screening (how many you assessed for eligibility after deduplication), eligibility (how many full texts you assessed and how many you excluded with reasons), and inclusion (how many studies you ultimately included). Every systematic review submitted to a journal that follows PRISMA is expected to include this diagram.
Stage 1: Formulating the Research Question
A systematic review begins with a research question precise enough that you can specify exactly what evidence would answer it. Vague questions produce unmanageable reviews. The most widely used framework for structuring systematic review questions is PICO — but it has discipline-specific adaptations.
PICO and its variants
| Framework | Components and uses |
| PICO | Population / Intervention / Comparison / Outcome. Standard for clinical and health research. Example: In first-year university students (P), does structured peer mentoring (I) compared to no mentoring (C) improve retention rates (O)? |
| PICo | Population / phenomenon of Interest / Context. For qualitative research where there is no intervention or comparison group. Example: What are the experiences (I) of first-generation students (P) navigating institutional processes in Indian government colleges (C)? |
| SPIDER | Sample / Phenomenon of Interest / Design / Evaluation / Research type. Alternative to PICo for qualitative and mixed methods. |
| ECLIPSE | Expectation / Client group / Location / Impact / Professionals / SErvice. For health services and policy research. |
| PCC | Population / Concept / Context. Used for scoping reviews specifically. |
Choose the framework that fits your research type. The key test: can you specify in advance what a study would need to report in order to be eligible for inclusion? If you cannot, the question is not yet sufficiently defined.
Example: a well-formed systematic review question
In undergraduate students at Indian universities (Population), does structured peer mentoring (Intervention) compared to no peer mentoring or informal peer support (Comparison) reduce first-year dropout rates (Outcome)?
This question specifies the population precisely enough to distinguish between studies (undergraduate, Indian universities, first-year). It specifies what counts as the intervention (structured — not any peer contact). It specifies what counts as the comparison. And it specifies the outcome measure. You can now write your inclusion criteria directly from this question.
Stage 2: Writing the Protocol and Registering It
Before you begin searching, you must write and register a protocol. This is the single requirement that most researchers skip, and its absence is one of the most common reasons systematic reviews fail peer review.
The protocol specifies in advance: your research question, your inclusion and exclusion criteria, your databases and search strategy, your quality appraisal tool, your data extraction template, and your planned method of synthesis. You then register this protocol publicly before you begin. The registration creates a time-stamped record that you cannot change after seeing the data.
Why registration matters: a systematic review without a pre-registered protocol cannot rule out that the author adjusted the inclusion criteria, the search strategy, or the synthesis method after seeing what the studies reported. That is a form of bias — one that is difficult to detect and that fundamentally undermines the review’s claim to be systematic rather than selective.
Where to register
| Registry | Best for |
| PROSPERO (crd.york.ac.uk/prospero) | Health, medicine, social care, public health, education, social science. Most widely accepted. Free registration. |
| Open Science Framework (osf.io) | Any discipline. Particularly used in psychology, social science, and education. Allows full protocol upload. |
| Research Registry (researchregistry.com) | Clinical research. Includes systematic reviews. |
| INPLASY (inplasy.com) | Multidisciplinary. Fast registration turnaround. Useful when PROSPERO has long wait times. |
Indian researchers should note that PROSPERO registration is increasingly required — not merely recommended — by journals in health and social science. Several UGC-CARE listed journals now specify PROSPERO registration as a submission requirement for systematic reviews. Register before you search, not after.
Stage 3: Designing and Executing the Search Strategy
The search strategy is the methodological core of the systematic review. Its job is to identify all studies that could potentially meet your inclusion criteria across all relevant databases, with a process documented in enough detail that another researcher could replicate it exactly.
Selecting databases
No single database covers all relevant literature. For most social science and health reviews, a minimum of three to five databases is expected. For Indian-focused research, Indian databases must be included alongside international ones.
| Database | Coverage | Notes for Indian researchers |
| PubMed / MEDLINE | Medicine, public health, nursing, allied health | Essential for health-related reviews. Free access. |
| PsycINFO | Psychology, psychiatry, behavioural sciences | Required for psychology-related reviews. |
| Scopus | Multidisciplinary. Strong international coverage. | Includes many Indian journals. Subscription required. |
| Web of Science | Multidisciplinary. High-impact journal focus. | Subscription required. Strong for citation tracking. |
| ERIC | Education research globally | Free. Essential for education reviews. |
| SSRN | Social sciences, economics, law | Preprints and working papers. Free. |
| IndMED / MedIND | Indian biomedical and health literature | Essential for reviews covering Indian health research. |
| Shodhganga | Indian theses and dissertations | Free. Critical for grey literature in Indian-focused reviews. |
| Google Scholar | Broad multidisciplinary | Use for supplementary searching and grey literature, not as a primary database. |
Building the search string
A systematic search string combines your key concepts using Boolean operators (AND, OR, NOT) and database-specific controls. The search must be broad enough to capture all relevant studies and documented precisely enough to reproduce.
(“peer mentoring” OR “peer mentoring program” OR “peer support” OR “mentoring intervention”) AND (“student retention” OR “dropout” OR “attrition” OR “persistence”) AND (“university” OR “higher education” OR “college”) AND (“India” OR “Indian”)
Several practical points: use quotation marks for exact phrases; use truncation symbols (reten* captures retention, retaining, retained) where the database supports it; adapt the string for each database because syntax differs between Scopus, PubMed, and Web of Science; document the exact string used in each database, the date of search, and the number of results returned.
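The logic of a systematic search string — OR between synonyms within a concept, AND across concepts — can be made explicit in code. The sketch below is illustrative only: the concept groups are the hypothetical peer-mentoring example from this post, and the output still needs to be adapted to each database's own syntax (field tags, truncation symbols, proximity operators) before use.

```python
# Hypothetical concept groups for the peer-mentoring example question.
# Synonyms within a group are OR'd; groups are AND'd together.
concepts = {
    "intervention": ["peer mentoring", "peer mentoring program",
                     "peer support", "mentoring intervention"],
    "outcome": ["student retention", "dropout", "attrition", "persistence"],
    "setting": ["university", "higher education", "college"],
    "context": ["India", "Indian"],
}

def build_search_string(concept_groups):
    """OR together synonyms within each concept, AND across concepts."""
    blocks = []
    for terms in concept_groups.values():
        quoted = " OR ".join(f'"{t}"' for t in terms)
        blocks.append(f"({quoted})")
    return " AND ".join(blocks)

print(build_search_string(concepts))
```

Keeping the concept lists in one place makes it easy to regenerate and document the exact string used in each database, along with the search date and result count.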
Grey literature and handsearching
A search limited to indexed databases misses grey literature — reports, government documents, conference proceedings, and unpublished studies that may be methodologically sound. Failing to include grey literature biases the review toward published, positive findings (publication bias). You should also handsearch the reference lists of all included studies and conduct forward citation searches for key papers.
| Grey literature source | What it covers |
| Government websites (MoE, UGC, ICMR, ICSSR) | Policy reports, funded research outcomes, regulatory documents |
| Conference proceedings | AERA, BPS, APSE, ICSSR national conferences — unpublished or early findings |
| Institutional repositories | Shodhganga for theses; university repositories for working papers |
| WHO, World Bank, UNESCO reports | International comparative and policy research |
| ProQuest Dissertations & Theses | International theses not in Shodhganga |
Stage 4: Screening — Title/Abstract and Full Text
Screening is the process of moving from the full set of retrieved records to the studies you will include. PRISMA 2020 documents this process in the flow diagram across four stages.
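Before title/abstract screening begins, the retrieved records must be deduplicated, and the number of duplicates removed must be recorded for the flow diagram. A minimal sketch of one common approach — matching on DOI where present, otherwise on a normalised title — is below; the dict-per-record format is a hypothetical export layout, and real database exports vary.

```python
import re

def normalise(title):
    """Lowercase a title and collapse punctuation/whitespace for matching."""
    return re.sub(r"[^a-z0-9]+", " ", title.lower()).strip()

def deduplicate(records):
    """Drop records whose DOI or normalised title was already seen.

    records: list of dicts with a 'title' key and an optional 'doi' key
    (a hypothetical export format; adapt to your reference manager's export).
    """
    seen_dois, seen_titles, unique = set(), set(), []
    for rec in records:
        doi = (rec.get("doi") or "").lower()
        title = normalise(rec["title"])
        if (doi and doi in seen_dois) or title in seen_titles:
            continue  # duplicate: count it for the PRISMA diagram
        if doi:
            seen_dois.add(doi)
        seen_titles.add(title)
        unique.append(rec)
    return unique

records = [
    {"title": "Peer Mentoring and Retention", "doi": "10.1000/x1"},
    {"title": "Peer mentoring and retention.", "doi": ""},  # duplicate title
    {"title": "A different study", "doi": "10.1000/x2"},
]
print(len(deduplicate(records)))  # → 2
```

Exact-match normalisation misses duplicates with retyped or abbreviated titles, so reference managers that do fuzzy matching (or a manual pass) are still advisable; the point is that the deduplication rule, like everything else, should be documented.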
Writing inclusion and exclusion criteria
Your inclusion and exclusion criteria must be written before screening begins, derived directly from your PICO question, and specific enough to make consistent decisions. Criteria that are too vague produce inter-rater disagreements that are difficult to resolve.
| Criterion type | Example |
| Population | Include: undergraduate students in Indian universities. Exclude: postgraduate students, international students studying outside India, secondary school students. |
| Intervention | Include: structured peer mentoring programmes with defined matching, training, or meeting schedules. Exclude: informal peer support, general student support services without a peer component. |
| Comparison | Include: studies with a no-mentoring control group or comparison institution. Exclude: studies comparing two types of mentoring without a no-mentoring comparator. |
| Outcome | Include: studies that report first-year retention rate, dropout rate, or completion of first year. Exclude: studies that measure academic performance only without retention data. |
| Study design | Include: randomised controlled trials, quasi-experimental studies, cohort studies. Exclude: case studies, editorials, opinion pieces, theoretical papers. |
| Language | Include: English and Hindi. Exclude: all other languages (state this and acknowledge as a limitation). |
| Date range | Include: 2010–2025. Exclude: studies published before 2010 (rationale: prior to UGC’s current regulatory framework). |
Two-stage screening and inter-rater reliability
Screening happens in two stages. In stage one, two reviewers independently screen all titles and abstracts against the inclusion criteria, marking each record as include, exclude, or uncertain. In stage two, the same two reviewers independently assess full texts of all records not excluded at stage one.
After each stage, disagreements between the two reviewers are resolved by discussion, and if not resolved, by a third reviewer. Inter-rater reliability — the degree of agreement between reviewers — is measured using Cohen’s kappa and reported in the paper. A kappa above 0.80 indicates strong agreement; values between 0.60 and 0.80 are acceptable with explanation.
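Cohen's kappa corrects raw agreement for the agreement two reviewers would reach by chance given their individual include/exclude rates. A self-contained sketch of the calculation, using hypothetical screening decisions for ten records, is below:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two reviewers' decisions on the same records.

    rater_a, rater_b: equal-length sequences of labels
    (e.g. 'inc' / 'exc') for the same set of records.
    """
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of records where both reviewers agree.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each reviewer's marginal label rates.
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    labels = set(rater_a) | set(rater_b)
    p_e = sum((counts_a[l] / n) * (counts_b[l] / n) for l in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical title/abstract decisions for 10 records: one disagreement.
a = ['inc', 'inc', 'exc', 'exc', 'inc', 'exc', 'exc', 'inc', 'exc', 'exc']
b = ['inc', 'inc', 'exc', 'inc', 'inc', 'exc', 'exc', 'inc', 'exc', 'exc']
print(round(cohens_kappa(a, b), 2))  # → 0.8, the 'strong agreement' threshold
```

Note that kappa depends on the marginal rates, not just the raw agreement: here the reviewers agree on 90% of records, but chance agreement of 50% pulls kappa down to 0.80.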
If you are conducting a solo review — as a Master’s or PhD student without a co-reviewer — acknowledge this as a limitation and document your decision-making process in detail. Some journals will not accept solo reviews for the reason that inter-rater reliability cannot be demonstrated. Check the journal’s requirements before investing in a solo systematic review.
Stage 5: Quality Appraisal
Quality appraisal — also called critical appraisal or risk of bias assessment — evaluates the methodological quality of each included study. It answers the question: given how this study was designed and conducted, how much should we trust its findings?
Quality appraisal does not mean excluding studies that score poorly. It means documenting methodological limitations so that the synthesis can weight evidence appropriately and readers can assess the confidence they should place in the conclusions.
Choosing the right appraisal tool
| Tool | Best for |
| Cochrane Risk of Bias Tool 2 (RoB 2) | Randomised controlled trials. The standard tool for RCTs. Assesses bias across five domains. |
| ROBINS-I | Non-randomised intervention studies (quasi-experimental, cohort). Assesses bias across seven domains. |
| Newcastle-Ottawa Scale (NOS) | Observational studies — cohort and case-control. Produces a numerical quality score. |
| CASP tools (Critical Appraisal Skills Programme) | Multiple designs: qualitative studies, cohort studies, RCTs, systematic reviews. Free. Widely used in social science. |
| Mixed Methods Appraisal Tool (MMAT) | Mixed methods studies. Assesses quantitative, qualitative, and mixed components. |
| JBI Critical Appraisal Tools | Wide range of study designs including qualitative, analytical cross-sectional, case reports. Free. |
Apply the appraisal tool independently — both reviewers complete it for each included study without seeing each other’s ratings. Report the results for every included study in a risk of bias table. Do not simply state that you ‘appraised all studies for quality’ without reporting the results.
Stage 6: Data Extraction
Data extraction is the process of systematically recording the information you need from each included study. It must be done using a pre-defined extraction template — one you designed as part of your protocol before searching.
Standard data extraction fields
| Category | Specific fields to extract |
| Study identification | Author(s), year, title, journal, country, funding source |
| Participants | Sample size, age range, gender distribution, institution type, year of study, inclusion/exclusion criteria used in the study |
| Intervention | Description, duration, intensity, who delivered it, theoretical basis |
| Comparison condition | Description of control or comparator group |
| Outcomes | Primary outcome, secondary outcomes, measurement instruments, follow-up period |
| Results | Effect size or odds ratio, confidence intervals, p-values, qualitative themes if applicable |
| Study design | Design type, randomisation method, blinding, attrition rate |
| Quality appraisal score | Ratings from your chosen appraisal tool by domain |
| Notes | Anything relevant that does not fit the above categories |
Extract data independently — both reviewers work from the same included studies without sharing their extractions first. Then compare and resolve discrepancies. For quantitative data, discrepancies are often arithmetic errors or transcription differences that are straightforward to resolve by returning to the source. For qualitative data, conceptual disagreements may require more discussion.
Where data is missing or unclear, contact the original study authors. Document all contact attempts and responses. If authors do not respond and the missing data would affect your conclusions, acknowledge this as a limitation.
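A pre-defined extraction template is easiest to enforce when it exists as an actual structure both reviewers fill in. A minimal sketch is below; the field names are illustrative (yours should be fixed in the protocol before searching), and the CSV output is just one convenient format for comparing the two reviewers' extractions.

```python
from dataclasses import dataclass, asdict, fields
from typing import Optional
import csv

@dataclass
class ExtractionRecord:
    """One row of a pre-defined extraction template.

    Field names here are illustrative examples, not a standard;
    define yours in the registered protocol.
    """
    study_id: str
    authors_year: str
    country: str
    sample_size: int
    design: str
    intervention: str
    comparator: str
    primary_outcome: str
    effect_size: Optional[float] = None  # None where not reported
    risk_of_bias: str = ""
    notes: str = ""

def write_extraction_sheet(records, path):
    """Write extracted records to a CSV both reviewers can compare."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(
            f, fieldnames=[fld.name for fld in fields(ExtractionRecord)])
        writer.writeheader()
        for rec in records:
            writer.writerow(asdict(rec))
```

Because the dataclass rejects missing required fields, each reviewer is forced to record every planned item (or an explicit placeholder) for every study, which makes the later comparison of the two extraction sheets mechanical.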
Stage 7: Synthesis
Synthesis is where the evidence from individual studies is combined to answer your review question. The appropriate synthesis method depends on the studies you have included.
Quantitative synthesis: meta-analysis
If you have enough quantitative studies with comparable populations, interventions, and outcome measures, meta-analysis allows you to statistically pool results into a single estimate with confidence intervals. Meta-analysis uses a forest plot to display individual study results and the pooled estimate.
Before conducting meta-analysis, assess clinical and statistical heterogeneity — the degree to which studies differ in populations, interventions, outcomes, and design. The I² statistic measures statistical heterogeneity: values above 75% suggest high heterogeneity that may make pooling inappropriate. When heterogeneity is high, use a random effects model rather than a fixed effects model and consider subgroup analysis to explore the source of heterogeneity.
If you lack sufficient studies or homogeneity for meta-analysis, use narrative synthesis instead — but document why you chose narrative synthesis rather than meta-analysis.
Qualitative synthesis
If your included studies are qualitative, use thematic synthesis or meta-ethnography. Thematic synthesis — the most commonly used approach for qualitative systematic reviews in education and social science — involves three stages: line-by-line coding of findings from primary studies, development of descriptive themes, and generation of analytical themes that go beyond the content of the original studies to produce new interpretive claims.
Meta-ethnography is appropriate when the qualitative studies share enough conceptual material to allow translation between studies — when concepts from one study can be recognised as equivalent to, or refuting, concepts in another.
Mixed methods synthesis
When your included studies use both quantitative and qualitative methods, you need a convergent or sequential synthesis approach. Convergent synthesis collects and analyses quantitative and qualitative evidence separately, then integrates the findings at the interpretation stage. Sequential synthesis uses qualitative findings to explain or contextualise quantitative results. Document your approach and the rationale for choosing it.
The PRISMA 2020 Flow Diagram: Building It Correctly
The PRISMA flow diagram is a mandatory visual element of every systematic review. It documents the number of records at each stage of the review process. The 2020 version updates the original 2009 diagram to report records identified through databases and registers separately from those found through other methods, and adds a template for updated reviews that incorporate studies from a previous version of the review.
The four stages
| Stage | What to record |
| Identification | Records identified from each database (with numbers per database); records identified from other sources (registers, grey literature, handsearching, citation tracking). Total records before deduplication. Duplicates removed. |
| Screening | Records screened (= records after deduplication). Records excluded at title/abstract stage with reason. Records sought for retrieval. Records not retrieved. |
| Eligibility | Full-text reports assessed for eligibility. Full-text reports excluded with specific reasons and numbers for each reason. |
| Included | Studies included in the review. Reports of included studies (a single study may have multiple reports). |
Practical guidance: keep a running record of numbers from the moment you start searching. It is very difficult to reconstruct accurate PRISMA numbers after the fact, particularly the number of duplicates removed and the number of full texts not retrieved. A simple spreadsheet tracking each stage is essential.
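The running record is also easy to sanity-check mechanically, because the flow numbers must satisfy simple arithmetic at every stage. The sketch below uses hypothetical counts and illustrative field names; the point is the consistency checks, which catch the bookkeeping errors that otherwise surface only when a reviewer adds up your diagram.

```python
# Hypothetical counts kept in a running log while searching and screening.
flow = {
    "identified": 1480,            # records from all databases and sources
    "duplicates_removed": 315,
    "screened": 1165,              # = identified - duplicates_removed
    "excluded_title_abstract": 1050,
    "sought_for_retrieval": 115,   # = screened - excluded_title_abstract
    "not_retrieved": 6,
    "assessed_full_text": 109,     # = sought_for_retrieval - not_retrieved
    "excluded_full_text": 84,
    "included": 25,                # = assessed_full_text - excluded_full_text
}

def check_prisma_flow(f):
    """Verify that the counts at each flow-diagram stage add up."""
    errors = []
    if f["screened"] != f["identified"] - f["duplicates_removed"]:
        errors.append("screened != identified - duplicates removed")
    if f["sought_for_retrieval"] != f["screened"] - f["excluded_title_abstract"]:
        errors.append("sought != screened - excluded at title/abstract")
    if f["assessed_full_text"] != f["sought_for_retrieval"] - f["not_retrieved"]:
        errors.append("assessed != sought - not retrieved")
    if f["included"] != f["assessed_full_text"] - f["excluded_full_text"]:
        errors.append("included != assessed - excluded at full text")
    return errors

print(check_prisma_flow(flow) or "flow numbers are consistent")
```

Running a check like this each time the log is updated means the final diagram can be filled in directly from the spreadsheet, with no retrospective reconstruction.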
The PRISMA flow diagram template is available free at prisma-statement.org. Use it, or a faithful reproduction of it. Do not design your own version of the flow diagram — journals recognise the standard format and deviations raise questions.
The PRISMA 2020 Checklist: What Each Section Must Report
The PRISMA 2020 checklist has 27 items across seven sections. Not all items apply to every review — meta-analysis items do not apply to qualitative reviews, for instance — but every item that does apply must be addressed. Journals that require PRISMA typically ask authors to submit the completed checklist alongside the manuscript, indicating the page number where each item is addressed.
| Section | Items (abbreviated) | Common failure points |
| Title / Abstract | Identify as systematic review in title. Structured abstract following PRISMA-A. | Calling a narrative review a ‘systematic review.’ Abstract missing search or inclusion criteria details. |
| Introduction | Rationale — why is this review needed. Objectives — explicit PICO question. | Rationale too general. Research question not operationalised as PICO. |
| Methods: Protocol | Registration number and registry. Any amendments to protocol after registration. | No registration. Protocol changes not disclosed or explained. |
| Methods: Eligibility | Inclusion/exclusion criteria for studies. Rationale for each criterion. | Criteria too vague to apply consistently. No rationale for date range or language restrictions. |
| Methods: Information sources | Databases with dates searched. Other sources. | Too few databases. Grey literature not included. Search dates not reported. |
| Methods: Search | Full search strategy for at least one database. | Search string not provided. Cannot be replicated. |
| Methods: Selection | Number of reviewers, process, disagreement resolution. | Solo review without acknowledgment. Kappa not reported. |
| Methods: Data extraction | Process, template, contact with authors. | Template not described. No mention of duplicate extraction. |
| Methods: Quality appraisal | Tool used, process, use of results in synthesis. | Tool named but results not reported per study. |
| Methods: Synthesis | Methods for combining results, heterogeneity assessment, sensitivity analysis. | Meta-analysis without I² reporting. No rationale for synthesis approach. |
| Results: Study selection | Numbers at each stage, reasons for exclusion. PRISMA flow diagram. | Diagram missing. Exclusion reasons not reported. |
| Results: Study characteristics | For each included study: citation, design, participants, interventions, outcomes. | Characteristics table incomplete. Missing sample sizes. |
| Results: Quality appraisal | Results of appraisal for each study. | Appraisal conducted but results table absent. |
| Results: Synthesis | All results for each outcome. Forest plots if meta-analysis. | Results reported narratively without reference to individual study data. |
| Discussion | Main findings, limitations, conclusions. | Conclusions overstated relative to evidence quality. |
Writing the Systematic Review: Structure and Common Failures
A systematic review follows a specific reporting structure. Most journals have a preferred format, but the core elements are consistent.
| Section | Content and length guidance |
| Introduction | The gap in current evidence that the review addresses. Why a systematic review is the appropriate method. The pre-specified research question in PICO format. Approximately 400–600 words. |
| Methods | All PRISMA methods items in order: protocol registration, eligibility criteria, information sources, search strategy (one database in full), selection process, data extraction, quality appraisal, synthesis approach. Approximately 800–1200 words. |
| Results | PRISMA flow diagram. Characteristics of included studies table. Quality appraisal table. Synthesis results with reference to individual studies throughout. Approximately 1000–2000 words depending on number of included studies. |
| Discussion | Main findings in relation to the research question. Limitations (search limitations, grey literature, publication bias, quality of included studies, solo review if applicable). Implications for practice, policy, and future research. Approximately 600–900 words. |
| Conclusion | Brief restatement of the main finding and its significance. No new information. Approximately 150–250 words. |
The three most common systematic review failures
1. The search that cannot be replicated.
The most common reason systematic reviews fail peer review is that the search strategy is described in general terms (‘databases were searched using relevant terms’) rather than reported in full. Paste your exact search string for at least one database into the methods section. State the date you searched. State the number of records returned from each database.
2. Conclusions that outrun the evidence.
If the included studies are small, heterogeneous, and of variable quality, the conclusion ‘peer mentoring significantly improves retention’ overstates what the evidence supports. The appropriate conclusion is: ‘limited evidence from studies of variable quality suggests a possible positive effect of structured peer mentoring on retention, but the evidence base is insufficient to support firm conclusions.’ That sounds weaker. It is more accurate, and reviewers will accept it where they will reject the overstatement.
3. Quality appraisal conducted but not used.
Conducting quality appraisal and then ignoring its results in the synthesis is worse than not conducting it. If several of your included studies have high risk of bias in a key domain — for example, no blinding of outcome assessors — that limitation must appear in the synthesis and the discussion. ‘Despite the methodological limitations of individual studies, the direction of effect was consistent’ is a defensible conclusion. Reporting the effect size without acknowledging the quality issues is not.
FAQs
Q: What is a systematic review and how is it different from a literature review?
A systematic review answers a pre-defined research question by identifying, appraising, and synthesising all available evidence meeting explicit criteria, using a documented protocol that another researcher could replicate. A literature review surveys existing scholarship without requiring comprehensiveness, pre-registration, or quality appraisal. The critical difference is bias minimisation: a systematic review’s protocol prevents selective inclusion. Calling a literature review a systematic review is a misrepresentation that peer reviewers will identify.
Q: What is PRISMA and why is it required?
PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) is a reporting standard specifying what a completed systematic review must report. PRISMA 2020 has 27 items covering title, abstract, methods, results, and discussion. Journals require it to ensure systematic reviews report sufficient methodological detail to be evaluated. PRISMA does not prescribe how to design your search — it requires that your search strategy, including the exact search string, be reported in full.
Q: Why is registering a systematic review protocol important?
Registration at PROSPERO, OSF, or INPLASY before you begin searching creates a time-stamped public record of your planned methods. This prevents adjusting inclusion criteria, search strategy, or synthesis approach after seeing the data — a form of bias that undermines the claim to be systematic. Many journals now require a PROSPERO registration number as a submission condition. For Indian researchers, UGC-CARE listed health and social science journals increasingly make registration mandatory.
Q: What is a PRISMA flow diagram?
A mandatory visual element of every systematic review documenting how studies moved through the review process across four stages: Identification (records found in each database and other sources); Screening (records assessed after deduplication); Eligibility (full texts assessed, exclusions by reason); and Included (final studies). The template is available free at prisma-statement.org. Keep running record counts from the moment you begin searching — these numbers are very difficult to reconstruct retrospectively.
Q: How many databases should you search for a systematic review?
A minimum of three to five databases. For health and social science reviews: PubMed, Scopus, Web of Science, and one discipline-specific database (PsycINFO, ERIC, SSRN). For India-focused reviews, IndMED and Shodhganga must be included alongside international databases — omitting Indian databases is a methodological gap Indian reviewers will flag. Grey literature (government reports, conference proceedings) must also be searched to avoid publication bias.
Author
Dr. Rekha Khandelwal, a legal scholar and academic writing expert, is the founder of AspirixWriters. She has extensive experience in guiding students and researchers in writing research papers, theses, and dissertations with clarity and originality. Her work focuses on ethical AI-assisted writing, structured research, and making academic writing simple and effective for learners worldwide.
References
- Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., … & Moher, D. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71. https://doi.org/10.1136/bmj.n71
- Higgins, J. P. T., Thomas, J., Chandler, J., Cumpston, M., Li, T., Page, M. J., & Welch, V. A. (Eds.). (2022). Cochrane Handbook for Systematic Reviews of Interventions (Version 6.3). Cochrane. www.training.cochrane.org/handbook
- Thomas, J., & Harden, A. (2008). Methods for the thematic synthesis of qualitative research in systematic reviews. BMC Medical Research Methodology, 8(1), 45.
- Gough, D., Oliver, S., & Thomas, J. (Eds.). (2017). An Introduction to Systematic Reviews (2nd ed.). Sage.
- Boell, S. K., & Cecez-Kecmanovic, D. (2015). On being ‘systematic’ in literature reviews in IS. Journal of Information Technology, 30(2), 161–173.
- Tricco, A. C., Lillie, E., Zarin, W., et al. (2018). PRISMA Extension for Scoping Reviews (PRISMA-ScR). Annals of Internal Medicine, 169(7), 467–473.
- PRISMA Statement and resources: prisma-statement.org
- PROSPERO International Prospective Register of Systematic Reviews: crd.york.ac.uk/prospero
- INFLIBNET Shodhganga: shodhganga.inflibnet.ac.in
Next: Legal Research Methods: A Complete Guide To Doctrinal, Empirical And Comparative Legal Research
Next in Series
- Complete Guide: The Academic Writing Process: Complete Guide from First Draft to Submission (2026)(Module 2)
- Complete Guide: Research Methodologies: Complete Guide to Quantitative, Qualitative, Mixed Methods & Legal Research (2026) (Module 3)
- Complete Guide: Data Analysis and Results Presentation: Complete Guide for Quantitative, Qualitative & Legal Research (2026) (Module 4)
- Complete Guide: Organization and Academic Tone: Complete Guide to Professional Scholarly Writing (2026) (Module 5)
- Complete Guide: Peer Review and Publication: Complete Guide from Submission to Acceptance (2026) (Module 6)
- Complete Guide: AI Tools in Academic Research: Opportunities, Ethics, and Best Practices (2026) (Module 7)
- Complete Guide: Grant Writing and Research Funding: Complete Guide to Finding Money for Your Research (2026) (Module 8)
- Complete Guide: Academic Career Development: Complete Guide to Building Your Professional Life in Research (2026) (Module 9)
- Complete Guide: Research Ethics and the IRB Process: Complete Guide to Doing Research Responsibly (2026) (Module 10)