Systematic Review | Definition, Example & Guide
Published on June 15, 2022 by Shaun Turney. Revised on June 22, 2023.
A systematic review is a type of review that uses repeatable methods to find, select, and synthesize all available evidence. It answers a clearly formulated research question and explicitly states the methods used to arrive at the answer.
For example, Boyle and colleagues conducted a systematic review answering the question “What is the effectiveness of probiotics in reducing eczema symptoms and improving quality of life in patients with eczema?”
In this context, a probiotic is a health product that contains live microorganisms and is taken by mouth. Eczema is a common skin condition that causes red, itchy skin.
Table of contents
- What is a systematic review?
- Systematic review vs. meta-analysis
- Systematic review vs. literature review
- Systematic review vs. scoping review
- When to conduct a systematic review
- Pros and cons of systematic reviews
- Step-by-step example of a systematic review
- Other interesting articles
- Frequently asked questions about systematic reviews
A review is an overview of the research that’s already been completed on a topic.
What makes a systematic review different from other types of reviews is that the research methods are designed to reduce bias . The methods are repeatable, and the approach is formal and systematic:
- Formulate a research question
- Develop a protocol
- Search for all relevant studies
- Apply the selection criteria
- Extract the data
- Synthesize the data
- Write and publish a report
Although multiple sets of guidelines exist, the Cochrane Handbook for Systematic Reviews is among the most widely used. It provides detailed guidelines on how to complete each step of the systematic review process.
Systematic reviews are most commonly used in medical and public health research, but they can also be found in other disciplines.
Systematic reviews typically answer their research question by synthesizing all available evidence and evaluating the quality of the evidence. Synthesizing means bringing together different information to tell a single, cohesive story. The synthesis can be narrative ( qualitative ), quantitative , or both.
Systematic reviews often quantitatively synthesize the evidence using a meta-analysis . A meta-analysis is a statistical analysis, not a type of review.
A meta-analysis is a technique to synthesize results from multiple studies. It’s a statistical analysis that combines the results of two or more studies, usually to estimate an effect size .
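As a minimal sketch of the quantitative idea, the snippet below pools effect sizes from three studies using fixed-effect (inverse-variance) weighting, the simplest meta-analytic model. The effect sizes and standard errors are invented for illustration and are not taken from any review discussed here.

```python
import math

def fixed_effect_pool(effects, ses):
    """Fixed-effect (inverse-variance) pooling of study effect sizes.

    effects: per-study effect estimates (e.g., log odds ratios)
    ses: their standard errors
    Returns the pooled effect and its standard error.
    """
    # Each study is weighted by the inverse of its variance,
    # so more precise studies contribute more to the pooled estimate.
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical effect sizes (log odds ratios) from three studies
effects = [-0.4, -0.2, -0.3]
ses = [0.20, 0.10, 0.15]

pooled, pooled_se = fixed_effect_pool(effects, ses)
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(f"pooled effect = {pooled:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```

Real meta-analyses usually also report heterogeneity statistics and may use a random-effects model instead; this sketch only shows how individual results combine into a single summary estimate.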
A literature review is a type of review that uses a less systematic and formal approach than a systematic review. Typically, an expert in a topic will qualitatively summarize and evaluate previous work, without using a formal, explicit method.
Although literature reviews are often less time-consuming and can be insightful or helpful, they have a higher risk of bias and are less transparent than systematic reviews.
Similar to a systematic review, a scoping review is a type of review that tries to minimize bias by using transparent and repeatable methods.
However, a scoping review isn’t a type of systematic review. The most important difference is the goal: rather than answering a specific question, a scoping review explores a topic. The researcher tries to identify the main concepts, theories, and evidence, as well as gaps in the current research.
Sometimes scoping reviews are an exploratory preparation step for a systematic review, and sometimes they are a standalone project.
A systematic review is a good choice of review if you want to answer a question about the effectiveness of an intervention , such as a medical treatment.
To conduct a systematic review, you’ll need the following:
- A precise question , usually about the effectiveness of an intervention. The question needs to be about a topic that’s previously been studied by multiple researchers. If there’s no previous research, there’s nothing to review.
- If you’re doing a systematic review on your own (e.g., for a research paper or thesis ), you should take appropriate measures to ensure the validity and reliability of your research.
- Access to databases and journal archives. Often, your educational institution provides you with access.
- Time. A professional systematic review is a time-consuming process: it will take the lead author about six months of full-time work. If you’re a student, you should narrow the scope of your systematic review and stick to a tight schedule.
- Bibliographic, word-processing, spreadsheet, and statistical software . For example, you could use EndNote, Microsoft Word, Excel, and SPSS.
Systematic reviews have many pros .
- They minimize research bias by considering all available evidence and evaluating each study for bias.
- Their methods are transparent , so they can be scrutinized by others.
- They’re thorough : they summarize all available evidence.
- They can be replicated and updated by others.
Systematic reviews also have a few cons .
- They’re time-consuming .
- They’re narrow in scope : they only answer the precise research question.
The 7 steps for conducting a systematic review are explained below, with a running example.
Step 1: Formulate a research question
Formulating the research question is probably the most important step of a systematic review. A clear research question will:
- Allow you to more effectively communicate your research to other researchers and practitioners
- Guide your decisions as you plan and conduct your systematic review
A good research question for a systematic review has four components, which you can remember with the acronym PICO :
- Population(s) or problem(s)
- Intervention(s)
- Comparison(s)
- Outcome(s)
You can rearrange these four components to write your research question:
- What is the effectiveness of I versus C for O in P ?
Sometimes, you may want to include a fifth component, the type of study design . In this case, the acronym is PICOT .
- Type of study design(s)
In the eczema example, Boyle and colleagues specified:
- The population of patients with eczema
- The intervention of probiotics
- In comparison to no treatment, placebo , or non-probiotic treatment
- The outcome of changes in participant-, parent-, and doctor-rated symptoms of eczema and quality of life
- Randomized control trials, a type of study design
Their research question was:
- What is the effectiveness of probiotics versus no treatment, a placebo, or a non-probiotic treatment for reducing eczema symptoms and improving quality of life in patients with eczema?
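The rearrangement of PICO components into a question can be sketched as a simple fill-in template. The component values below come from the Boyle et al. example in the text; the function name itself is just illustrative.

```python
def pico_question(population, intervention, comparison, outcome):
    """Fill the PICO template: 'effectiveness of I versus C for O in P?'"""
    return (f"What is the effectiveness of {intervention} versus "
            f"{comparison} for {outcome} in {population}?")

q = pico_question(
    population="patients with eczema",
    intervention="probiotics",
    comparison="no treatment, a placebo, or a non-probiotic treatment",
    outcome="reducing eczema symptoms and improving quality of life",
)
print(q)
```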
Step 2: Develop a protocol
A protocol is a document that contains your research plan for the systematic review. This is an important step because having a plan allows you to work more efficiently and reduces bias.
Your protocol should include the following components:
- Background information : Provide the context of the research question, including why it’s important.
- Research objective (s) : Rephrase your research question as an objective.
- Selection criteria: State how you’ll decide which studies to include or exclude from your review.
- Search strategy: Discuss your plan for finding studies.
- Analysis: Explain what information you’ll collect from the studies and how you’ll synthesize the data.
If you’re a professional seeking to publish your review, it’s a good idea to bring together an advisory committee . This is a group of about six people who have experience in the topic you’re researching. They can help you make decisions about your protocol.
It’s highly recommended to register your protocol. Registering your protocol means submitting it to a database such as PROSPERO or ClinicalTrials.gov .
Step 3: Search for all relevant studies
Searching for relevant studies is the most time-consuming step of a systematic review.
To reduce bias, it’s important to search for relevant studies very thoroughly. Your strategy will depend on your field and your research question, but sources generally fall into these four categories:
- Databases: Search multiple databases of peer-reviewed literature, such as PubMed or Scopus . Think carefully about how to phrase your search terms and include multiple synonyms of each word. Use Boolean operators if relevant.
- Handsearching: In addition to searching the primary sources using databases, you’ll also need to search manually. One strategy is to scan relevant journals or conference proceedings. Another strategy is to scan the reference lists of relevant studies.
- Gray literature: Gray literature includes documents produced by governments, universities, and other institutions that aren’t published by traditional publishers. Graduate student theses are an important type of gray literature, which you can search using the Networked Digital Library of Theses and Dissertations (NDLTD) . In medicine, clinical trial registries are another important type of gray literature.
- Experts: Contact experts in the field to ask if they have unpublished studies that should be included in your review.
At this stage of your review, you won’t read the articles yet. Simply save any potentially relevant citations using bibliographic software, such as Scribbr’s APA or MLA Generator .
Boyle and colleagues searched the following sources:
- Databases: EMBASE, PsycINFO, AMED, LILACS, and ISI Web of Science
- Handsearch: Conference proceedings and reference lists of articles
- Gray literature: The Cochrane Library, the metaRegister of Controlled Trials, and the Ongoing Skin Trials Register
- Experts: Authors of unpublished registered trials, pharmaceutical companies, and manufacturers of probiotics
Step 4: Apply the selection criteria
Applying the selection criteria is a three-person job. Two of you will independently read the studies and decide which to include in your review based on the selection criteria you established in your protocol . The third person’s job is to break any ties.
To increase inter-rater reliability , ensure that everyone thoroughly understands the selection criteria before you begin.
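A common way to quantify inter-rater reliability between two screeners is Cohen's kappa, which corrects raw agreement for agreement expected by chance. The sketch below computes it from scratch; the include/exclude decisions are invented for illustration.

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical decisions."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    labels = sorted(set(rater_a) | set(rater_b))
    # Observed agreement: proportion of items both raters labeled the same
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal label frequencies
    expected = sum(
        (rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels
    )
    return (observed - expected) / (1 - expected)

# Hypothetical screening decisions from two reviewers for 10 abstracts
a = ["include", "exclude", "include", "exclude", "exclude",
     "include", "exclude", "exclude", "include", "exclude"]
b = ["include", "exclude", "exclude", "exclude", "exclude",
     "include", "exclude", "exclude", "include", "exclude"]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```

Values near 1 indicate near-perfect agreement; low or negative values suggest the selection criteria need to be clarified before screening continues.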
If you’re writing a systematic review as a student for an assignment, you might not have a team. In this case, you’ll have to apply the selection criteria on your own; you can mention this as a limitation in your paper’s discussion.
You should apply the selection criteria in two phases:
- Based on the titles and abstracts : Decide whether each article potentially meets the selection criteria based on the information provided in the abstracts.
- Based on the full texts: Download the articles that weren’t excluded during the first phase. If an article isn’t available online or through your library, you may need to contact the authors to ask for a copy. Read the articles and decide which articles meet the selection criteria.
It’s very important to keep a meticulous record of why you included or excluded each article. When the selection process is complete, you can summarize what you did using a PRISMA flow diagram .
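A meticulous record also makes the PRISMA flow-diagram counts straightforward to derive. The sketch below tallies them from a hypothetical screening log; the log entries, field names, and numbers are invented for illustration.

```python
# Hypothetical screening log: one entry per record found in the search,
# noting the phase at which it was excluded (None = included in the review).
screening_log = [
    {"id": 1, "excluded_at": None,             "reason": None},
    {"id": 2, "excluded_at": "title/abstract", "reason": "wrong population"},
    {"id": 3, "excluded_at": "full text",      "reason": "no control group"},
    {"id": 4, "excluded_at": "title/abstract", "reason": "not an RCT"},
    {"id": 5, "excluded_at": None,             "reason": None},
]

identified = len(screening_log)
excluded_ta = sum(r["excluded_at"] == "title/abstract" for r in screening_log)
assessed_full_text = identified - excluded_ta
excluded_ft = sum(r["excluded_at"] == "full text" for r in screening_log)
included = assessed_full_text - excluded_ft

print(f"records identified:        {identified}")
print(f"excluded (title/abstract): {excluded_ta}")
print(f"full texts assessed:       {assessed_full_text}")
print(f"excluded (full text):      {excluded_ft}")
print(f"studies included:          {included}")
```

Because each exclusion carries a recorded reason, the same log can also produce the per-reason exclusion counts that PRISMA asks for at the full-text stage.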
Next, Boyle and colleagues found the full texts for each of the remaining studies. Boyle and Tang read through the articles to decide if any more studies needed to be excluded based on the selection criteria.
When Boyle and Tang disagreed about whether a study should be excluded, they discussed it with Varigos until the three researchers came to an agreement.
Step 5: Extract the data
Extracting the data means collecting information from the selected studies in a systematic way. There are two types of information you need to collect from each study:
- Information about the study’s methods and results . The exact information will depend on your research question, but it might include the year, study design , sample size, context, research findings , and conclusions. If any data are missing, you’ll need to contact the study’s authors.
- Your judgment of the quality of the evidence, including risk of bias .
You should collect this information using forms. You can find sample forms in The Registry of Methods and Tools for Evidence-Informed Decision Making and the Grading of Recommendations, Assessment, Development and Evaluations Working Group .
Extracting the data is also a three-person job. Two people should do this step independently, and the third person will resolve any disagreements.
They also collected data about possible sources of bias, such as how the study participants were randomized into the control and treatment groups.
Step 6: Synthesize the data
Synthesizing the data means bringing together the information you collected into a single, cohesive story. There are two main approaches to synthesizing the data:
- Narrative ( qualitative ): Summarize the information in words. You’ll need to discuss the studies and assess their overall quality.
- Quantitative : Use statistical methods to summarize and compare data from different studies. The most common quantitative approach is a meta-analysis , which allows you to combine results from multiple studies into a summary result.
Generally, you should use both approaches together whenever possible. If you don’t have enough data, or the data from different studies aren’t comparable, then you can take just a narrative approach. However, you should justify why a quantitative approach wasn’t possible.
Boyle and colleagues also divided the studies into subgroups, such as studies about babies, children, and adults, and analyzed the effect sizes within each group.
Step 7: Write and publish a report
The purpose of writing a systematic review article is to share the answer to your research question and explain how you arrived at this answer.
Your article should include the following sections:
- Abstract : A summary of the review
- Introduction : Including the rationale and objectives
- Methods : Including the selection criteria, search method, data extraction method, and synthesis method
- Results : Including results of the search and selection process, study characteristics, risk of bias in the studies, and synthesis results
- Discussion : Including interpretation of the results and limitations of the review
- Conclusion : The answer to your research question and implications for practice, policy, or research
To verify that your report includes everything it needs, you can use the PRISMA checklist .
Once your report is written, you can publish it in a systematic review database, such as the Cochrane Database of Systematic Reviews , and/or in a peer-reviewed journal.
If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.
- Student’s t -distribution
- Normal distribution
- Null and Alternative Hypotheses
- Chi square tests
- Confidence interval
- Quartiles & Quantiles
- Cluster sampling
- Stratified sampling
- Data cleansing
- Reproducibility vs Replicability
- Peer review
- Prospective cohort study
- Implicit bias
- Cognitive bias
- Placebo effect
- Hawthorne effect
- Hindsight bias
- Affect heuristic
- Social desirability bias
A literature review is a survey of scholarly sources (such as books, journal articles, and theses) related to a specific topic or research question .
It is often written as part of a thesis, dissertation , or research paper , in order to situate your work in relation to existing knowledge.
A literature review is a survey of credible sources on a topic, often used in dissertations , theses, and research papers . Literature reviews give an overview of knowledge on a subject, helping you identify relevant theories and methods, as well as gaps in existing research. Literature reviews are set up similarly to other academic texts , with an introduction , a main body, and a conclusion .
An annotated bibliography is a list of source references that has a short description (called an annotation ) for each of the sources. It is often assigned as part of the research process for a paper .
A systematic review is secondary research because it uses existing research. You don’t collect new data yourself.
Turney, S. (2023, June 22). Systematic Review | Definition, Example & Guide. Scribbr. Retrieved November 14, 2023, from https://www.scribbr.com/methodology/systematic-review/
Examples of systematic reviews
Examples from several disciplines are listed below.
For more information about how to conduct and write reviews, please see the Guidelines section of this guide.
- Health & Medicine
- Social sciences
- Vibration and bubbles: a systematic review of the effects of helicopter retrieval on injured divers. (2018).
- Nicotine effects on exercise performance and physiological responses in nicotine‐naïve individuals: a systematic review. (2018).
- Association of total white cell count with mortality and major adverse events in patients with peripheral arterial disease: A systematic review. (2014).
- Do MOOCs contribute to student equity and social inclusion? A systematic review 2014–18. (2020).
- Interventions in Foster Family Care: A Systematic Review. (2020).
- Determinants of happiness among healthcare professionals between 2009 and 2019: a systematic review. (2020).
- Systematic review of the outcomes and trade-offs of ten types of decarbonization policy instruments. (2021).
- A systematic review on Asian's farmers' adaptation practices towards climate change. (2018).
- Are concentrations of pollutants in sharks, rays and skates (Elasmobranchii) a cause for concern? A systematic review. (2020).
- Last Updated: Aug 17, 2023 3:31 PM
- URL: https://libguides.jcu.edu.au/systematic-review
Systematic Reviews and Meta Analysis
Systematic review Q & A
What is a systematic review?
A systematic review is a guided filtering and synthesis of all available evidence addressing a specific, focused research question, generally about a specific intervention or exposure. The use of standardized, systematic methods and pre-selected eligibility criteria reduces the risk of bias in identifying, selecting, and analyzing relevant studies. A well-designed systematic review includes clear objectives, pre-selected criteria for identifying eligible studies, an explicit methodology, a thorough and reproducible search of the literature, an assessment of the validity or risk of bias of each included study, and a systematic synthesis, analysis, and presentation of the findings of the included studies. A systematic review may include a meta-analysis.
For details about carrying out systematic reviews, see the Guides and Standards section of this guide.
Is my research topic appropriate for systematic review methods?
A systematic review is best deployed to test a specific hypothesis about a healthcare or public health intervention or exposure. By focusing on a single intervention or a few specific interventions for a particular condition, the investigator can ensure a manageable results set. Moreover, examining a single intervention or a small set of related interventions, exposures, or outcomes will simplify the assessment of studies and the synthesis of the findings.
Systematic reviews are poor tools for hypothesis generation: for instance, determining what interventions have been used to increase the awareness and acceptability of a vaccine, or investigating the ways that predictive analytics have been used in health care management. In the first case, we don't know what interventions to search for, so we would have to screen all the articles about awareness and acceptability. In the second, there is no agreed-upon set of methods that make up predictive analytics, and health care management is far too broad. The search will necessarily be incomplete, vague, and very large all at the same time. In most cases, reviews without clearly and exactly specified populations, interventions, exposures, and outcomes will produce results sets that quickly outstrip the resources of a small team and offer no consistent way to assess and synthesize findings from the studies that are identified.
If not a systematic review, then what?
You might consider performing a scoping review . This framework allows iterative searching over a reduced number of data sources and no requirement to assess individual studies for risk of bias. The framework includes built-in mechanisms to adjust the analysis as the work progresses and more is learned about the topic. A scoping review won't help you limit the number of records you'll need to screen (broad questions lead to large results sets) but may give you means of dealing with a large set of results.
This tool can help you decide what kind of review is right for your question.
Can my student complete a systematic review during her summer project?
Probably not. Systematic reviews are a lot of work. Between creating the protocol, building and running a quality search, collecting all the papers, evaluating the studies that meet the inclusion criteria, and extracting and analyzing the summary data, a well-done review can require dozens to hundreds of hours of work spanning several months. Moreover, a systematic review requires subject expertise, statistical support, and a librarian to help design and run the search. Be aware that librarians sometimes have queues for their search time; it may take several weeks to complete and run a search. All guidelines for carrying out systematic reviews also recommend that at least two subject experts screen the studies identified in the search, and the first round of screening can consume 1 hour per screener for every 100-200 records. A systematic review is a labor-intensive team effort.
How can I know if my topic has already been reviewed?
Before starting out on a systematic review, check to see if someone has done it already. In PubMed you can use the systematic review subset to limit your results to a broad group of papers that is enriched for systematic reviews. You can invoke the subset by selecting it from the Article Types filters to the left of your PubMed results, or you can append AND systematic[sb] to your search. For example:
"neoadjuvant chemotherapy" AND systematic[sb]
The systematic review subset is very noisy, however. To quickly focus on systematic reviews (knowing that you may be missing some), simply search for the word systematic in the title:
"neoadjuvant chemotherapy" AND systematic[ti]
Any PRISMA-compliant systematic review will be captured by this method since including the words "systematic review" in the title is a requirement of the PRISMA checklist. Cochrane systematic reviews do not include 'systematic' in the title, however. It's worth checking the Cochrane Database of Systematic Reviews independently.
You can also search for protocols that will indicate that another group has set out on a similar project. Many investigators will register their protocols in PROSPERO , a registry of review protocols. Other published protocols as well as Cochrane Review protocols appear in the Cochrane Methodology Register, a part of the Cochrane Library .
- Last Updated: Oct 26, 2023 2:31 PM
- URL: https://guides.library.harvard.edu/meta-analysis
How to Write a Systematic Review of the Literature
- Texas Tech University, Lubbock, TX, USA
- University of Florida, Gainesville, FL, USA
- PMID: 29283007
- DOI: 10.1177/1937586717747384
This article provides a step-by-step approach to conducting and reporting systematic literature reviews (SLRs) in the domain of healthcare design and discusses some of the key quality issues associated with SLRs. An SLR, as the name implies, is a systematic way of collecting, critically evaluating, integrating, and presenting findings from across multiple research studies on a research question or topic of interest. An SLR provides a way to assess the quality level and magnitude of existing evidence on a question or topic of interest. It offers a broader and more accurate level of understanding than a traditional literature review. A systematic review adheres to standardized methodologies/guidelines in systematic searching, filtering, reviewing, critiquing, interpreting, synthesizing, and reporting of findings from multiple publications on a topic/domain of interest. The Cochrane Collaboration is the most well-known and widely respected global organization producing SLRs within the healthcare field and a standard to follow for any researcher seeking to write a transparent and methodologically sound SLR. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), like the Cochrane Collaboration, was created by an international network of health-based collaborators and provides the framework for SLRs to ensure methodological rigor and quality. The PRISMA statement is an evidence-based guide consisting of a checklist and flowchart intended to be used as tools for authors seeking to write SLRs and meta-analyses.
Keywords: evidence based design; healthcare design; systematic literature review.
- MeSH terms: Evidence-Based Medicine* / organization & administration; Research Design*; Systematic Reviews as Topic*
- Int J Prev Med
How to Write a Systematic Review: A Narrative Review
Ali Hasanpour Dehkordi
Social Determinants of Health Research Center, Shahrekord University of Medical Sciences, Shahrekord, Iran
1 Health Information Technology Research Center, Student Research Committee, Department of Medical Library and Information Sciences, School of Management and Medical Information Sciences, Isfahan University of Medical Sciences, Isfahan, Iran
Hanan A. Ibrahim
2 Department of International Relations, College of Law, Bayan University, Erbil, Kurdistan, Iraq
3 MSc in Biostatistics, Health Promotion Research Center, Iran University of Medical Sciences, Tehran, Iran
Reza Ghanei Gheshlagh
4 Spiritual Health Research Center, Research Institute for Health Development, Kurdistan University of Medical Sciences, Sanandaj, Iran
In recent years, the number of published systematic reviews, both worldwide and in Iran, has been increasing. These studies are an important resource for answering evidence-based clinical questions and assist health policy-makers and students who want to identify evidence gaps in published research. Systematic review studies, with or without meta-analysis, synthesize all available evidence from studies focused on the same research question. In this study, the steps of a systematic review, such as designing and identifying the research question, searching for qualified published studies, extracting and synthesizing the information that pertains to the research question, and interpreting the results, are presented in detail. This will be helpful to all interested researchers.
A systematic review, as its name suggests, is a systematic way of collecting, evaluating, integrating, and presenting findings from several studies on a specific question or topic.[ 1 ] A systematic review is research that, by identifying and combining evidence, answers the research question based on an assessment of all relevant studies.[ 2 , 3 ] Among the most important reasons for conducting systematic reviews are to identify, assess, and interpret available research; to identify effective and ineffective health-care interventions; to provide integrated documentation to support decision-making; and to identify the gaps between studies.[ 4 ]
Review studies critique the latest scientific information about a particular topic. In these studies, the terms review, systematic review, and meta-analysis are often used interchangeably. A systematic review is done in one of two ways: quantitative (meta-analysis) or qualitative. In a meta-analysis, the results of two or more studies evaluating, say, health interventions are combined to measure the effect of treatment, while in the qualitative method, the findings of other studies are combined without using statistical methods.[ 5 ]
Since 1999, various guidelines, including QUORUM, MOOSE, STROBE, CONSORT, and QUADAS, have been introduced for reporting meta-analyses, but recently the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) statement has gained widespread popularity.[ 6 , 7 , 8 , 9 ] The systematic review process based on the PRISMA statement includes four steps: formulating the research questions and defining the eligibility criteria, identifying all relevant studies, extracting and synthesizing data, and deducing and presenting results (the answers to the research questions).[ 2 ]
Systematic Review Protocol
Systematic reviews start with a protocol. The protocol is the researcher's road map, outlining the goals, methodology, and outcomes of the research. Many journals advise writers to use the PRISMA statement to write the protocol.[ 10 ] The PRISMA checklist includes 27 items related to the content of a systematic review and meta-analysis, covering the abstract, methods, results, discussion, and funding sources.[ 11 ] PRISMA helps writers improve their systematic review and meta-analysis reports. Reviewers and editors of medical journals acknowledge that while PRISMA may not be used as a tool to assess methodological quality, it does help them publish a better study article [ Figure 1 ].[ 12 ]
Figure 1: Screening process and article selection according to the PRISMA guidelines
The main step in designing the protocol is to define the main objectives of the study and provide some background information. Before starting a systematic review, it is important to verify that your study is not a duplicate; therefore, it is necessary to search published research in PROSPERO and the Cochrane Database of Systematic Reviews. Sometimes it is better to search four databases for related systematic reviews that have already been published (PubMed, Web of Science, Scopus, Cochrane), published systematic review protocols (PubMed, Web of Science, Scopus, Cochrane), systematic review protocols that have been registered but not yet published (PROSPERO, Cochrane), and finally related published articles (PubMed, Web of Science, Scopus, Cochrane). The goal is to reduce duplicate research and keep systematic reviews up to date.[ 13 ]
Writing a research question is the first step in a systematic review; it summarizes the main goal of the study.[ 14 ] The research question determines which types of studies should be included in the analysis (quantitative, qualitative, mixed methods, overviews of reviews, or other studies). Sometimes a research question may be broken down into several more detailed questions.[ 15 ] A vague question (such as: is walking helpful?) makes it hard for the researcher to stay focused on the collected studies or analyze them appropriately.[ 16 ] On the other hand, if the research question is rigid and restrictive (e.g., is walking for 43 minutes 3 times a week better than walking for 38 minutes 4 times a week?), there may not be enough studies in the area to answer it, and the generalizability of the findings to other populations will be reduced.[ 16 , 17 ] A good systematic review question should include the PICOS components: population (P), intervention (I), comparison (C), outcome (O), and setting (S).[ 18 ] Depending on the purpose of the study, the control in clinical trials or pre-post studies can take the place of C.[ 19 ]
Search and identify eligible texts
After clarifying the research question and before searching the databases, it is necessary to specify the search methods, article screening procedure, eligibility checks, checking of references in eligible studies, data extraction, and data analysis. This helps researchers ensure that potential biases in study selection are minimized.[ 14 , 17 ] The protocol should also specify which published and unpublished literature will be searched, how and by what mechanism the search will be conducted, and what the inclusion and exclusion criteria are.[ 4 ] First, all studies are searched and collected according to predefined keywords; then the title, abstract, and full text are screened for relevance by the authors.[ 13 ] By screening articles based on their titles, researchers can quickly decide whether to retain or remove an article. If more information is needed, the abstract is also reviewed. In the next step, the full text of each article is reviewed to identify relevant articles, and the reason for excluding each removed article is reported.[ 20 ] Finally, it is recommended that the process of searching, selecting, and screening articles be reported as a flowchart.[ 21 ] As the volume of published research grows, finding up-to-date and relevant information becomes more difficult.[ 22 ]
Currently, there is no specific guideline as to which databases should be searched, which database is best, or how many should be searched; overall, it is advisable to search broadly. Because no database covers all health topics, using several databases is recommended.[ 23 ] According to the A MeaSurement Tool to Assess Systematic Reviews (AMSTAR) scale, at least two databases should be searched in systematic reviews and meta-analyses, although more comprehensive and accurate results can be obtained by increasing the number of databases searched.[ 24 ] The type of database to be searched depends on the review question. For example, for a clinical trial study, it is recommended that Cochrane, multi-regional clinical trials (mRCTs), and the International Clinical Trials Registry Platform be searched.[ 25 ]
For example, MEDLINE, a product of the National Library of Medicine in the United States of America, focuses on peer-reviewed articles on biomedical and health issues, while Embase covers the broad field of pharmacology and conference abstracts. CINAHL is a great resource for nursing and health research, and PsycINFO is a great database for psychology, psychiatry, counseling, addiction, and behavioral problems. National and regional databases can also be used to search related articles.[ 26 , 27 ] In addition, searching conference proceedings and gray literature helps address the file-drawer problem (negative studies that may not be published).[ 26 ] If a systematic review is carried out on articles from a particular country or region, the databases of that region or country should also be investigated. For example, Iranian researchers can use national databases such as the Scientific Information Database and MagIran. A comprehensive search that identifies the maximum number of existing studies minimizes selection bias. In the search process, the available databases should be used as much as possible, since many databases overlap.[ 17 ] Searching 12 databases (PubMed, Scopus, Web of Science, EMBASE, GHL, VHL, Cochrane, Google Scholar, ClinicalTrials.gov, mRCTs, POPLINE, and SIGLE) covers all articles published in the field of medicine and health.[ 25 ] Some have suggested using reference management software to more easily identify and remove duplicate articles retrieved from several different databases.[ 20 ] At least one search strategy should be presented in the article.[ 21 ]
The methodological quality assessment of articles is a key step in a systematic review that helps identify systematic errors (bias) in results and interpretations. In systematic reviews, unlike other review types, quality assessment or risk-of-bias assessment is required. Several tools are currently available to assess article quality, although the overall scores of these tools may not provide sufficient information on the strengths and weaknesses of the studies.[ 28 ] At least two reviewers should independently evaluate the quality of the articles; in case of disagreement, a third author should examine the article, or the two researchers should resolve the disagreement through discussion. Some believe that quality assessment should be done in a blinded fashion, with the journal name, title, authors, and institutions removed.[ 29 ]
There are several quality assessment tools, such as Sack's quality assessment (1988),[ 30 ] the overview quality assessment questionnaire (1991),[ 31 ] the Critical Appraisal Skills Programme (CASP),[ 32 , 34 ] AMSTAR (2007),[ 33 ] the National Institute for Health and Care Excellence checklists,[ 35 ] and the Joanna Briggs Institute System for the Unified Management, Assessment and Review of Information checklists.[ 30 , 36 ] It is worth mentioning that no single tool assesses the quality of all types of studies; each is more applicable to certain types. The STROBE tool is often used to check the quality of observational articles. It reviews the title and abstract (item 1), introduction (items 2 and 3), methods (items 4–12), findings (items 13–17), discussion (items 18–21), and funding (item 22). Eighteen items apply to all articles, while four items (6, 12, 14, and 15) apply only in certain situations.[ 9 ] The quality of interventional articles is often evaluated with the JADAD tool, which consists of three sections: randomization (2 points), blinding (2 points), and an account of all patients (1 point).[ 29 ]
At this stage, the researchers extract the necessary information from the selected articles. Elamin believes that reviewing titles and abstracts and extracting data are key steps in the review process, often carried out independently by two members of the research team, with the results then compared.[ 37 ] This step aims to prevent selection bias, and it is recommended that the agreement between the two researchers (kappa coefficient) be reported.[ 26 ] Although data collection forms may differ across systematic reviews, they all include information such as first author, year of publication, sample size, target population, region, and outcome. The purpose of data synthesis is to collect the findings of eligible studies, evaluate their strength, and summarize the results. Different analysis frameworks can be used for data synthesis, such as meta-ethnography, meta-analysis, or thematic synthesis.[ 38 ] Finally, after quality assessment, data analysis is conducted. The first step is a descriptive evaluation of each study, with the findings presented in tabular form; reviewing this table can clarify how to combine and analyze the various studies.[ 28 ] The data synthesis approach depends on the nature of the research question and of the primary studies.[ 39 ] After assessing bias and summarizing the data, it is decided whether the synthesis will be quantitative or qualitative. In the case of conceptual heterogeneity (systematic differences in study design, population, and interventions), the generalizability of the findings is reduced and a meta-analysis is not appropriate. A meta-analysis allows estimation of the effect size, which is reported as an odds ratio, relative risk, hazard ratio, prevalence, correlation, sensitivity, specificity, or incidence with a confidence interval.[ 26 ]
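As an illustration of the agreement statistic mentioned above, Cohen's kappa for two screeners' include/exclude decisions can be computed as follows. This is a minimal sketch; the screening decisions are invented for illustration.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement by chance, from each rater's marginal frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical screening of 10 abstracts: "I" = include, "E" = exclude
rater_a = ["I", "I", "E", "E", "I", "E", "E", "I", "E", "E"]
rater_b = ["I", "E", "E", "E", "I", "E", "E", "I", "E", "I"]
kappa = cohens_kappa(rater_a, rater_b)
```

Values of kappa near 1 indicate near-perfect agreement; values near 0 indicate agreement no better than chance.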
Estimation of the effect size in systematic review and meta-analysis studies varies according to the type of studies entered into the analysis. Unlike the mean, prevalence, or incidence, for the odds ratio, relative risk, and hazard ratio it is the logarithm of the statistic, together with its logarithmic standard error, that must be combined [ Table 1 ].
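For example, the log odds ratio and its standard error can be obtained from a study's 2×2 table as follows. This is a sketch with hypothetical counts; the standard error formula is the usual Woolf approximation.

```python
import math

def log_odds_ratio(a, b, c, d):
    """Log odds ratio and its standard error from a 2x2 table:
    a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return log_or, se

# Hypothetical study: 20/80 cases among exposed, 10/90 among unexposed
log_or, se = log_odds_ratio(20, 80, 10, 90)
# Compute the 95% CI on the log scale, then back-transform to the OR scale
or_low = math.exp(log_or - 1.96 * se)
or_high = math.exp(log_or + 1.96 * se)
```

It is these `log_or` and `se` values, not the raw odds ratios, that are pooled across studies.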
Effect size in systematic review and meta-analysis
OR = odds ratio; RR = relative risk; RCT = randomized controlled trial; PPV = positive predictive value; NPV = negative predictive value; PLR = positive likelihood ratio; NLR = negative likelihood ratio; DOR = diagnostic odds ratio
Interpreting and presenting results (answers to research questions)
A systematic review ends with the interpretation of results. At this stage, the results are summarized and conclusions are presented to improve clinical and therapeutic decision-making. A systematic review, with or without meta-analysis, provides the best available evidence in the hierarchy of evidence-based practice.[ 14 ] Meta-analysis can provide explicit conclusions. Conceptually, meta-analysis combines the results of two or more studies with a similar intervention and similar outcomes. Instead of a simple average of the results of the various studies, a weighted average is reported, so that studies with larger sample sizes carry more weight. Two models can be used to combine the results: fixed effect and random effects. The fixed-effect model assumes that the parameter of interest is identical across studies; the random-effects model assumes that the parameter is distributed across studies, with each study estimating part of that distribution. The random-effects model therefore offers a more conservative estimate.[ 40 ]
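The weighted average described here is usually computed with inverse-variance weights, under which more precise (typically larger) studies count for more. The sketch below shows the fixed-effect case only; the effect sizes and variances are invented for illustration.

```python
def fixed_effect_pool(effects, variances):
    """Inverse-variance (fixed-effect) pooled estimate and its standard error."""
    weights = [1.0 / v for v in variances]          # precision of each study
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se = (1.0 / sum(weights)) ** 0.5                # SE of the pooled estimate
    return pooled, se

# Three hypothetical studies (e.g., log odds ratios and their variances)
effects = [0.50, 0.30, 0.70]
variances = [0.04, 0.01, 0.09]
pooled, se = fixed_effect_pool(effects, variances)
# The pooled estimate is pulled toward the most precise study (variance 0.01)
```

A random-effects model would additionally add a between-study variance component (e.g., the DerSimonian–Laird estimate) to each study's variance before weighting, widening the confidence interval.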
Three approaches can be used to assess homogeneity: (1) the forest plot, (2) Cochran's Q test (Chi-squared), and (3) Higgins' I 2 statistic. In the forest plot, greater overlap between confidence intervals indicates greater homogeneity. For the Q statistic, a P value less than 0.1 indicates that heterogeneity exists and a random-effects model should be used.[ 41 ] The I 2 index ranges from 0 to 100%; values around 25%, 50%, and 75% indicate low, moderate, and high heterogeneity, respectively.[ 26 , 42 ] The results of a meta-analysis are presented graphically in a forest plot, which shows the statistical weight of each study with a 95% confidence interval and the standard error of the mean.[ 40 ]
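The Q statistic and the I² index can be computed directly from the study effects and their variances, as in the sketch below (illustrative data; I² is truncated at zero by convention).

```python
def heterogeneity(effects, variances):
    """Cochran's Q and Higgins' I^2 (%) under the fixed-effect model."""
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    # Q: weighted sum of squared deviations from the pooled estimate
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    # I^2: proportion of variability beyond what chance (df) would explain
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

q, i2 = heterogeneity([0.50, 0.30, 0.70], [0.04, 0.01, 0.09])
```

Under the null hypothesis of homogeneity, Q follows a chi-squared distribution with `df` degrees of freedom, which yields the P value referred to in the text.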
The importance of meta-analyses and systematic reviews in informing clinical and policy decisions is ever-increasing. Nevertheless, they are prone to publication bias, which occurs when positive or significant results are preferentially published.[ 43 ] Song maintains that studies reporting results in a certain direction, or strong correlations, may be more likely to be published than those that do not.[ 44 ] In addition, when searching for meta-analyses, gray literature (e.g., dissertations, conference abstracts, or book chapters) and unpublished studies may be missed. Moreover, meta-analyses based only on published studies may exaggerate effect-size estimates; as a result, patients may be exposed to harmful or ineffective treatments.[ 44 , 45 ] However, some tests can help detect expected negative results that are missing from a review due to publication bias.[ 46 ] Publication bias can also be reduced by searching for unpublished data.
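One widely used test of the funnel-plot asymmetry associated with publication bias is Egger's regression, which regresses each study's standardized effect on its precision; an intercept far from zero suggests small-study effects. The sketch below computes only the intercept (no significance test) on invented data.

```python
def egger_intercept(effects, ses):
    """Intercept of Egger's regression: (effect / SE) ~ (1 / SE).
    An intercept far from zero suggests small-study effects
    such as publication bias."""
    y = [e / s for e, s in zip(effects, ses)]   # standardized effects
    x = [1.0 / s for s in ses]                  # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # Ordinary least-squares slope and intercept
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x
    )
    return my - slope * mx

# If every study estimates the same effect regardless of its size,
# the funnel plot is symmetric and the intercept is (near) zero
intercept = egger_intercept([0.5, 0.5, 0.5, 0.5], [0.1, 0.2, 0.3, 0.4])
```

In practice the intercept's standard error and a t-test would also be reported, and the test has low power with few studies.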
Systematic reviews and meta-analyses have certain advantages; some of the most important ones are as follows: examining differences in the findings of different studies, summarizing results from various studies, increased accuracy of estimating effects, increased statistical power, overcoming problems related to small sample sizes, resolving controversies from disagreeing studies, increased generalizability of results, determining the possible need for new studies, overcoming the limitations of narrative reviews, and making new hypotheses for further research.[ 47 , 48 ]
Despite the importance of systematic reviews, the author may face numerous problems in searching, screening, and synthesizing data. A systematic review requires extensive access to databases and journals, which can be costly for nonacademic researchers.[ 13 ] Also, when applying the inclusion and exclusion criteria, reviewers' inevitable preconceptions may intrude, and the criteria may be interpreted differently by different reviewers.[ 49 ] Lee notes several disadvantages of these studies, the most significant of which are: a research field cannot be summarized by one number, publication bias, heterogeneity, combining unrelated things, vulnerability to subjectivity, failure to account for all confounders, comparison of variables that are not comparable, focusing only on main effects, and possible inconsistency with the results of randomized trials.[ 47 ] Different types of programs are available to perform meta-analysis. Among the most commonly used are general statistical packages, including SAS, SPSS, R, and Stata; with their flexible commands, meta-analyses can be easily run and the results readily plotted, although several of these programs are expensive. An alternative is software designed specifically for meta-analysis, such as MetaWin, RevMan, and Comprehensive Meta-Analysis; these, however, may accept only a few data formats and offer limited control over the graphical display of findings. Another alternative is Microsoft Excel; although not free, it is installed on many computers.[ 20 , 50 ]
A systematic review study is a powerful and valuable tool for answering research questions, generating new hypotheses, and identifying areas where there is a lack of tangible knowledge. A systematic review study provides an excellent opportunity for researchers to improve critical assessment and evidence synthesis skills.
All authors contributed equally to this work.
Financial support and sponsorship
Conflicts of interest.
There are no conflicts of interest.
Systematic Reviews: Home
Created by health science librarians.
What is a Systematic Review?
- Step 1: Complete Pre-Review Tasks
- Step 2: Develop a Protocol
- Step 3: Conduct Literature Searches
- Step 4: Manage Citations
- Step 5: Screen Citations
- Step 6: Assess Quality of Included Studies
- Step 7: Extract Data from Included Studies
- Step 8: Write the Review
There are many types of literature reviews.
Before beginning a systematic review, consider whether it is the best type of review for your question, goals, and resources. The table below compares a few different types of reviews to help you decide which is best for you.
- Scoping Review Guide For more information about scoping reviews, refer to the UNC HSL Scoping Review Guide.
- UNC HSL's Simplified, Step-by-Step Process Map A PDF file of the HSL's Systematic Review Process Map.
The average systematic review takes 1,168 hours to complete. ¹ A librarian can help you speed up the process.
Systematic reviews follow established guidelines and best practices to produce high-quality research. Librarian involvement in systematic reviews is offered at two levels. In Tier 1, the librarian collaborates with researchers in a consultative manner. In Tier 2, the librarian is an active member of your research team and a co-author on your review. The roles and expectations of librarians vary with the level of involvement desired; examples of these differences are outlined in the table below.
Researchers are conducting systematic reviews in a variety of disciplines. If your focus is on a topic other than health sciences, you may want to also consult the resources below to learn how systematic reviews may vary in your field. You can also contact a librarian for your discipline with questions.
- EPPI-Centre methods for conducting systematic reviews The EPPI-Centre develops methods and tools for conducting systematic reviews, including reviews for education, public and social policy.
- Collaboration for Environmental Evidence (CEE) CEE seeks to promote and deliver evidence syntheses on issues of greatest concern to environmental policy and practice as a public service
- Siddaway AP, Wood AM, Hedges LV. How to Do a Systematic Review: A Best Practice Guide for Conducting and Reporting Narrative Reviews, Meta-Analyses, and Meta-Syntheses. Annu Rev Psychol. 2019 Jan 4;70:747-770. doi: 10.1146/annurev-psych-010418-102803. A resource for psychology systematic reviews, which also covers qualitative meta-syntheses and meta-ethnographies
- The Campbell Collaboration
- Guidelines for Performing Systematic Literature Reviews in Software Engineering The objective of this report is to propose comprehensive guidelines for systematic literature reviews appropriate for software engineering researchers, including PhD students.
Sport, Exercise, & Nutrition
- Application of systematic review methodology to the field of nutrition by Tufts Evidence-based Practice Center Publication Date: 2009
- Systematic Reviews and Meta-Analysis — Open & Free (Open Learning Initiative) The course follows guidelines and standards developed by the Campbell Collaboration, based on empirical evidence about how to produce the most comprehensive and accurate reviews of research
- Systematic Reviews by David Gough, Sandy Oliver & James Thomas Publication Date: 2020
- Updating systematic reviews by University of Ottawa Evidence-based Practice Center Publication Date: 2007
- Last Updated: Oct 17, 2023 9:42 AM
- URL: https://guides.lib.unc.edu/systematic-reviews
How to conduct systematic literature reviews in management research: a guide in 6 steps and 14 decisions
- Review Paper
- Open access
- Published: 12 May 2023
- volume 17 , pages 1899–1933 ( 2023 )
- Philipp C. Sauer ORCID: orcid.org/0000-0002-1823-0723 1 &
- Stefan Seuring ORCID: orcid.org/0000-0003-4204-9948 2
Systematic literature reviews (SLRs) have become a standard tool in many fields of management research but are often considerably less stringently presented than other pieces of research. The resulting lack of replicability of the research and conclusions has spurred a vital debate on the SLR process, but related guidance is scattered across a number of core references and is overly centered on the design and conduct of the SLR, while failing to guide researchers in crafting and presenting their findings in an impactful way. This paper offers an integrative review of the widely applied and most recent SLR guidelines in the management domain. The paper adopts a well-established six-step SLR process and refines it by sub-dividing the steps into 14 distinct decisions: (1) from the research question, via (2) characteristics of the primary studies, (3) to retrieving a sample of relevant literature, which is then (4) selected and (5) synthesized so that, finally (6), the results can be reported. Guided by these steps and decisions, prior SLR guidelines are critically reviewed, gaps are identified, and a synthesis is offered. This synthesis elaborates mainly on the gaps while pointing the reader toward the available guidelines. The paper thereby avoids reproducing existing guidance but critically enriches it. The 6 steps and 14 decisions provide methodological, theoretical, and practical guidelines along the SLR process, exemplifying them via best-practice examples and revealing their temporal sequence and main interrelations. The paper guides researchers in the process of designing, executing, and publishing a theory-based and impact-oriented SLR.
The application of systematic or structured literature reviews (SLRs) has developed into an established approach in the management domain (Kraus et al. 2020 ), with 90% of management-related SLRs published within the last 10 years (Clark et al. 2021 ). Such reviews help to condense knowledge in the field and point to future research directions, thereby enabling theory development (Fink 2010 ; Koufteros et al. 2018 ). SLRs have become an established method by now (e.g., Durach et al. 2017 ; Koufteros et al. 2018 ). However, many SLR authors struggle to efficiently synthesize and apply review protocols and justify their decisions throughout the review process (Paul et al. 2021 ) since only a few studies address and explain the respective research process and the decisions to be taken in this process. Moreover, the available guidelines do not form a coherent body of literature but focus on the different details of an SLR, while a comprehensive and detailed SLR process model is lacking. For example, Seuring and Gold ( 2012 ) provide some insights into the overall process, focusing on content analysis for data analysis without covering the practicalities of the research process in detail. Similarly, Durach et al. ( 2017 ) address SLRs from a paradigmatic perspective, offering a more foundational view covering ontological and epistemological positions. Durach et al. ( 2017 ) emphasize the philosophy of science foundations of an SLR. Although somewhat similar guidelines for SLRs might be found in the wider body of literature (Denyer and Tranfield 2009 ; Fink 2010 ; Snyder 2019 ), they often take a particular focus and are less geared toward explaining and reflecting on the single choices being made during the research process. The current body of SLR guidelines leaves it to the reader to find the right links among the guidelines and to justify their inconsistencies. 
This is critical since a vast number of SLRs are conducted by early-stage researchers who likely struggle to synthesize the existing guidance and best practices (Fisch and Block 2018 ; Kraus et al. 2020 ), leading to the frustration of authors, reviewers, editors, and readers alike.
Filling these gaps is critical in our eyes since researchers conducting literature reviews form the foundation of any kind of further analysis to position their research into the respective field (Fink 2010 ). So-called “systematic literature reviews” (e.g., Davis and Crombie 2001 ; Denyer and Tranfield 2009 ; Durach et al. 2017 ) or “structured literature reviews” (e.g., Koufteros et al. 2018 ; Miemczyk et al. 2012 ) differ from nonsystematic literature reviews in that the analysis of a certain body of literature becomes a means in itself (Kraus et al. 2020 ; Seuring et al. 2021 ). Although two different terms are used for this approach, the related studies refer to the same core methodological references that are also cited in this paper. Therefore, we see them as identical and abbreviate them as SLR.
There are several guidelines on such reviews already, which have been developed outside the management area (e.g. Fink 2010 ) or with a particular focus on one management domain (e.g., Kraus et al. 2020 ). SLRs aim at capturing the content of the field at a point in time but should also aim at informing future research (Denyer and Tranfield 2009 ), making follow-up research more efficient and productive (Kraus et al. 2021 ). Such standalone literature reviews would and should also prepare subsequent empirical or modeling research, but usually, they require far more effort and time (Fisch and Block 2018 ; Lim et al. 2022 ). To achieve this preparation, SLRs can essentially a) describe the state of the literature, b) test a hypothesis based on the available literature, c) extend the literature, and d) critique the literature (Xiao and Watson 2019 ). Beyond guiding the next incremental step in research, SLRs “may challenge established assumptions and norms of a given field or topic, recognize critical problems and factual errors, and stimulate future scientific conversations around that topic” (Kraus et al. 2022 , p. 2578). Moreover, they have the power to answer research questions that are beyond the scope of individual empirical or modeling studies (Snyder 2019 ) and to build, elaborate, and test theories beyond this single study scope (Seuring et al. 2021 ). These contributions of an SLR may be highly influential and therefore underline the need for high-quality planning, execution, and reporting of their process and details.
Regardless of the individual aims of standalone SLRs, their numbers have exponentially risen in the last two decades (Kraus et al. 2022 ) and almost all PhD or large research project proposals in the management domain include such a standalone SLR to build a solid foundation for their subsequent work packages. Standalone SLRs have thus become a key part of management research (Kraus et al. 2021 ; Seuring et al. 2021 ), which is also underlined by the fact that there are journals and special issues exclusively accepting standalone SLRs (Kraus et al. 2022 ; Lim et al. 2022 ).
However, SLRs require a commitment that is often comparable to an additional research process or project. Hence, SLRs should not be taken as a quick solution, as a simplistic, descriptive approach would usually not yield a publishable paper (see also Denyer and Tranfield 2009 ; Kraus et al. 2020 ).
Furthermore, as with other research techniques, SLRs are based on the rigorous application of rules and procedures, as well as on ensuring the validity and reliability of the method (Fisch and Block 2018 ; Seuring et al. 2021 ). In effect, there is a need to ensure “the same level of rigour to reviewing research evidence as should be used in producing that research evidence in the first place” (Davis and Crombie 2001 , p.1). This rigor holds for all steps of the research process, such as establishing the research question, collecting data, analyzing it, and making sense of the findings (Durach et al. 2017 ; Fink 2010 ; Seuring and Gold 2012 ). However, there is a high degree of diversity where some would be justified, while some papers do not report the full details of the research process. This lack of detail contrasts with an SLR’s aim of creating a valid map of the currently available research in the reviewed field, as critical information on the review’s completeness and potential reviewer biases cannot be judged by the reader or reviewer. This further impedes later replications or extensions of such reviews, which could provide longitudinal evidence of the development of a field (Denyer and Tranfield 2009 ; Durach et al. 2017 ). Against this observation, this paper addresses the following question:
Which decisions need to be made in an SLR process, and what practical guidelines can be put forward for making these decisions?
Answering this question, the key contributions of this paper are fourfold: (1) identifying the gaps in existing SLR guidelines, (2) refining the SLR process model by Durach et al. ( 2017 ) through 14 decisions, (3) synthesizing and enriching guidelines for these decisions, exemplifying the key decisions by means of best practice SLRs, and (4) presenting and discussing a refined SLR process model.
In some cases, we point to examples from operations and supply chain management. However, they illustrate the purposes discussed in the respective sections. We carefully checked that the arguments held for all fields of management-related research, and multiple examples from other fields of management were also included.
2 Identification of the need for an enriched process model, including a set of sequential decisions and their interrelations
In line with the exponential increase in SLR papers (Kraus et al. 2022 ), multiple SLR guidelines have recently been published. Since 2020, we have found a total of 10 papers offering guidelines on SLRs and other reviews for the field of management in general or some of its sub-fields. These guidelines are of double interest to this paper since we aim to complement them to fill the gap identified in the introduction while minimizing the doubling of efforts. Table 1 lists the 10 most recent guidelines and highlights their characteristics, research objectives, contributions, and how our paper aims to complement these previous contributions.
The sheer number and diversity of guideline papers, as well as the relevance expressed in them, underline the need for a comprehensive and exhaustive process model. At the same time, the guidelines take specific foci on, for example, updating earlier guidelines to new technological potentials (Kraus et al. 2020 ), clarifying the foundational elements of SLRs (Kraus et al. 2022 ) and proposing a review protocol (Paul et al. 2021 ) or the application and development of theory in SLRs (Seuring et al. 2021 ). Each of these foci fills an entire paper, while the authors acknowledge that much more needs to be considered in an SLR. Working through these most recent guidelines, it becomes obvious that the common paper formats in the management domain create a tension for guideline papers between elaborating on a) the SLR process and b) the details, options, and potentials of individual process steps.
Our analysis in Table 1 evidences that there are a number of rich contributions on aspect b), while the aspect a) of SLR process models has not received the same attention despite the substantial confusion of authors toward them (Paul et al. 2021 ). In fact, only two of the most recent guidelines approach SLR process models. First, Kraus et al. ( 2020 ) incrementally extended the 20-year-old Tranfield et al. ( 2003 ) three-stage model into four stages. A little later, Paul et al. ( 2021 ) proposed a three-stage (including six sub-stages) SPAR-4-SLR review protocol. It integrates the PRISMA reporting items (Moher et al. 2009 ; Page et al. 2021 ) that originate from clinical research to define 14 actions stating what items an SLR in management needs to report for reasons of validity, reliability, and replicability. Almost naturally, these 14 reporting-oriented actions mainly relate to the first SLR stage of “assembling the literature,” which accounts for nine of the 14 actions. Since this protocol is published in a special issue editorial, its presentation and elaboration are somewhat limited by the already mentioned word count limit. Nevertheless, the SPAR-4-SLR protocol provides a very useful checklist for researchers that enables them to include all data required to document the SLR and to avoid confusion from editors, reviewers, and readers regarding SLR characteristics.
Beyond Table 1 , Durach et al. ( 2017 ) synthesized six common SLR “steps” that differ only marginally in the delimitation of one step to another from the sub-stages of the previously mentioned SLR processes. In addition, Snyder ( 2019 ) proposed a process comprising four “phases” that take more of a bird’s perspective in addressing (1) design, (2) conduct, (3) analysis, and (4) structuring and writing the review. Moreover, Xiao and Watson ( 2019 ) proposed only three “stages” of (1) planning, (2) conducting, and (3) reporting the review that combines the previously mentioned conduct and the analysis and defines eight steps within them. Much in line with the other process models, the final reporting stage contains only one of the eight steps, leaving the reader somewhat alone in how to effectively craft a manuscript that contributes to the further development of the field.
In effect, the mentioned SLR processes differ only marginally, while the systematic nature of actions in the SPAR-4-SLR protocol (Paul et al. 2021 ) can be seen as a reporting must-have within any of the mentioned SLR processes. The similarity of the SLR processes is, however, also evident in the fact that they leave open how the SLR analysis can be executed, enriched, and reflected to make a contribution to the reviewed field. In contrast, this aspect is richly described in the other guidelines that do not offer an SLR process, leading us again toward the tension for guideline papers between elaborating on a) the SLR process and b) the details, options, and potentials of each process step.
To help (prospective) SLR authors successfully navigate this tension of existing guidelines, it is thus the ambition of this paper to adopt a comprehensive SLR process model along which an SLR project can be planned, executed, and written up in a coherent way. To enable this coherence, 14 distinct decisions are defined, reflected, and interlinked, which have to be taken across the different steps of the SLR process. At the same time, our process model aims to actively direct researchers to the best practices, tips, and guidance that previous guidelines have provided for individual decisions. We aim to achieve this by means of an integrative review of the relevant SLR guidelines, as outlined in the following section.
3 Methodology: an integrative literature review of guidelines for systematic literature reviews in management
It might seem intuitive to contribute to the debate on the “gold standard” of systematic literature reviews (Davis et al. 2014 ) by conducting a systematic review ourselves. However, there are different types of reviews aiming for distinctive contributions. Snyder ( 2019 ) distinguished between a) systematic, b) semi-systematic, and c) integrative (or critical) reviews, which aim for i) (mostly quantitative) synthesis and comparison of prior (primary) evidence, ii) an overview of the development of a field over time, and iii) a critique and synthesis of prior perspectives to reconceptualize or advance them. Each review team needs to position itself in such a typology of reviews to define the aims and scope of the review. To do so and structure the related research process, we adopted the four generic steps for an (integrative) literature review by Snyder ( 2019 )—(1) design, (2) conduct, (3) analysis, and (4) structuring and writing the review—on which we report in the remainder of this section. Since the last step is a very practical one that, for example, asks, “Is the contribution of the review clearly communicated?” (Snyder 2019 ), we will focus on the presentation of the method applied to the initial three steps:
(1) Regarding the design, we see the need for this study emerging from our experience in reviewing SLR manuscripts, in supervising PhD students who, almost by default, need to prepare an SLR, and in recurring discussions on certain decisions in both processes. These discussions regularly left some blank or blurry spaces (see Table 1) that induced substantial uncertainty regarding critical decisions in the SLR process (Paul et al. 2021). To address this gap, we aim to synthesize prior guidance and critically enrich it, thus adopting an integrative approach for reviewing existing SLR guidance in the management domain (Snyder 2019).
(2) To conduct the review, we started by collecting the literature that provides guidance on the individual parts of an SLR. We built on a sample of 13 regularly cited or very recent papers in the management domain. We started with core articles that we have successfully used to publish SLRs in top-tier OSCM journals, such as Tranfield et al. (2003) and Durach et al. (2017), and checked their references as well as papers citing these publications. The search focus was defined by the following criteria: the articles needed to a) provide original methodological guidance for SLRs, either by adding new aspects of guidance or by synthesizing existing guidance into more valid guidelines, and b) focus on the management domain. Building on the nature of a critical or integrative review, which does not require a full or representative sample (Snyder 2019), we limited the sample to the papers displayed in Table 2 that build the core of the currently applied SLR guidelines. In effect, we found 11 technical papers and two SLRs of SLRs (Carter and Washispack 2018; Seuring and Gold 2012). From the latter, we mainly analyzed the discussion and conclusion sections that explicitly develop guidance on conducting SLRs.
(3) For analyzing these papers, we first adopted the six-step SLR process proposed by Durach et al. (2017, p. 70), which they define as applicable to any “field, discipline or philosophical perspective”. The contrast between the six-step SLR process used for the analysis and the four-step process applied by ourselves may seem surprising but is justified by our use of an integrative approach, which differs mainly in how pertinent literature is retrieved and selected; since this part is key to SLRs, it needs to be part of the analysis framework.
While deductively coding the sample papers against Durach et al.’s (2017) guidance in the six steps, we inductively built a set of 14 decisions, presented in the right columns of Table 2, that need to be made in any SLR. These decisions build a second and more detailed level of analysis, for which the single guidelines were coded as giving low, medium, or high levels of detail (see Table 3). This helped us identify the gaps in the current guidance papers and guided our way in presenting, critically discussing, and enriching the literature. In effect, we see that almost all guidelines touch on the same issues and try to give a comprehensive overview. However, this results in multiple guidelines that all lack the space to go into detail, while only a few guidelines focus on filling a gap in the process. It is our ambition with this analysis to identify the gaps in the guidelines, thereby identifying a precise need for refinement, and to offer a first step into this refinement. Following advice from the literature sample, the coding was conducted by the entire author team (Snyder 2019; Tranfield et al. 2003), including discursive alignment of interpretations (Seuring and Gold 2012). This supported the reliability and validity of the analysis by reducing within-study and expectancy bias (Durach et al. 2017), while replicability was supported by reporting the review sample and the coding results in Table 3 (Carter and Washispack 2018).
(4) Regarding the writing of the review, we only point to the unusual structure of presenting the method without a preceding theory section, followed directly by the findings. This structure is motivated by the nature of the integrative review: its findings simultaneously represent what would otherwise be the “state of the art,” “literature review,” or “conceptualization” section of a paper.
4 Findings of the integrative review: presentation, critical discussion, and enrichment of prior guidance
4.1 The overall research process for a systematic literature review
Even within our sample of only 13 guidelines, there are four distinct suggestions for structuring the SLR process. One of the earliest SLR process models was proposed by Tranfield et al. (2003), encompassing the three stages of (1) planning the review, (2) conducting the review, and (3) reporting and dissemination. Snyder (2019) proposed the four steps employed in this study: (1) design, (2) conduct, (3) analysis, and (4) structuring and writing the review. Borrowing from content analysis guidelines, Seuring and Gold (2012) defined four steps: (1) material collection, (2) descriptive analysis, (3) category selection, and (4) material evaluation. Most recently, Kraus et al. (2020) proposed four steps: (1) planning the review, (2) identifying and evaluating studies, (3) extracting and synthesizing data, and (4) disseminating the review findings. Most comprehensively, Durach et al. (2017) condensed prior process models into their generic six steps for an SLR. Adding the process models of Snyder (2019) and Seuring and Gold (2012) to the four papers covered in Durach et al.’s (2017) SLR process review, we support their conclusion that the six steps defined are generally applicable. Consequently, these six steps form the backbone of our coding scheme, as shown in the left column of Table 2 and described in the middle column.
As stated in Sect. 3, we synthesized the review papers against these six steps but found that the papers take substantially different foci, providing rich details for some steps while largely bypassing others. To capture this heterogeneity and better operationalize the SLR process, we inductively introduced the right column, identifying 14 decisions to be made. These decisions are all elaborated in the reviewed papers, but to substantially different extents, as the detailed coding results in Table 3 underline.
Mapping Table 3 for potential gaps in the existing guidelines, we identified six decisions for which we found only low- to medium-level details, while high-detail elaboration was missing. These six decisions, illustrated in Fig. 1, belong to three steps: 1: defining the research question, 5: synthesizing the literature, and 6: reporting the results. This result underscores our critique of the currently unbalanced guidance, which is, on the one hand, detailed on determining the required characteristics of primary studies (Step 2), retrieving a sample of potentially relevant literature (Step 3), and selecting the pertinent literature (Step 4). On the other hand, authors, especially PhD students, are left without substantial guidance on the steps critical to publication. Instead, they are called “to go one step further … and derive meaningful conclusions” (Fisch and Block 2018, p. 105) without further operationalization of how this can be achieved; this is why, for example, “meet the editor” conference sessions regularly cause frustration among PhD students when editors call for “new,” “bold,” and “relevant” research. Filling the gaps in the six decisions with best practice examples and practical experience is the main focus of this study’s contribution. The other eight decisions are synthesized with references to the guidelines that are, in our eyes, most helpful and relevant for the respective step.
Fig. 1 The 6 steps and 14 decisions of the SLR process
4.2 Step 1: defining the research question
When initiating a research project, researchers make three key decisions.
Decision 1 considers the essential task of establishing a relevant and timely research question. Despite the importance of this decision, which determines large parts of the further decisions (Snyder 2019; Tranfield et al. 2003), we find only scattered guidance in the literature. Hence, how can a research topic be specified to allow a strong literature review that is neither too narrow nor too broad? The latter is the danger in meta-reviews (i.e., reviews of reviews) (Aguinis et al. 2020; Carter and Washispack 2018; Kache and Seuring 2014): even though the method would be robust, the findings would not be novel. In line with Carter and Washispack (2018), there should always be room for new reviews, yet over time, they must move from a descriptive overview of a field further into depth and provide detailed analyses of constructs. Clark et al. (2021) provided a detailed but very specific reflection on how they crafted a research question for an SLR, noting that revisiting the research question multiple times throughout the SLR process helps to move forward coherently and efficiently. More generically, Kraus et al. (2020) listed six key contributions of an SLR that can guide the definition of the research question. Finally, Snyder (2019) suggested moving from existing SLRs into more detail and specified two main avenues for crafting an SLR research question: either investigating the relationship among multiple effects or the effect of (a) specific variable(s), or mapping the evidence regarding a certain research area. For the latter, we see three possible approaches, starting with a focus on certain industries. Examples are analyses of the food industry (Beske et al. 2014), retailing (Wiese et al. 2012), mining and minerals (Sauer and Seuring 2017), perishable product supply chains (Lusiantoro et al. 2018), and traceability, exemplified by the apparel industry (Garcia-Torres et al. 2019).
A second opportunity would be to assess the status of research in a geographical area that composes an interesting context from a research perspective, such as sustainable supply chain management (SSCM) in Latin America (Fritz and Silva 2018). Yet, this has to be justified explicitly, so that the geographical focus is not taken as a reason per se (e.g., Crane et al. 2016). A third variant addresses emerging issues, such as SCM in a base-of-the-pyramid setting (Khalid and Seuring 2019), the use of blockchain technology (Wang et al. 2019), or digital transformation (Hanelt et al. 2021). These approaches limit the reviewed field to enable a more contextualized analysis in which the novelty, continued relevance, or unjustified underrepresentation of the context can be used to specify a research gap and related research question(s). This also impacts the following decisions, as shown below.
Decision 2 concerns the option for a theoretical approach (i.e., the adoption of an inductive, abductive, or deductive approach) to theory building through the literature review. The review of previous guidance on this delivers an interesting observation. On the one hand, there are early elaborations on systematic reviews, realist synthesis, meta-synthesis, and meta-analysis by Tranfield et al. ( 2003 ) that are borrowing from the origins of systematic reviews in medical research. On the other hand, recent management-related guidelines largely neglect details of related decisions, but point out that SLRs are a suitable tool for theory building (Kraus et al. 2020 ). Seuring et al. ( 2021 ) set out to fill this gap and provided substantial guidance on how to use theory in SLRs to advance the field. To date, the option for a theoretical approach is only rarely made explicit, leaving the reader often puzzled about how advancement in theory has been crafted and impeding a review’s replicability (Seuring et al. 2021 ). Many papers still leave related choices in the dark (e.g., Rhaiem and Amara 2021 ; Rojas-Córdova et al. 2022 ) and move directly from the introduction to the method section.
In Decision 3, researchers need to adopt a theoretical framework (Durach et al. 2017) or at least a theoretical starting point, depending on the most appropriate theoretical approach (Seuring et al. 2021). Here, we find substantial guidance by Durach et al. (2017), who underline the value of adopting a theoretical lens to investigate SCM phenomena and the literature. Moreover, the choice of a theoretical anchor enables a consistent definition and operationalization of the constructs used to analyze the reviewed literature (Durach et al. 2017; Seuring et al. 2021). Hence, it is beneficial to provide some upfront definitions clarifying the key terminology used in the subsequent paper, as Devece et al. (2019) do when introducing their terminology on coopetition. As a practical hint beyond the elaborations of prior guidance papers: when taking up established constructs in a deductive analysis (Decision 2), the question remains whether these constructs can yield interesting findings.
Here, it is relevant to specify what kind of analysis is aimed for in the SLR, where three approaches might be distinguished (i.e., bibliometric analysis, meta-analysis, and content analysis–based studies). Briefly distinguishing them, the core difference is how many papers can be analyzed with the respective method. Bibliometric analysis (Donthu et al. 2021) usually relies on software, such as Biblioshiny, to create figures on citations and co-citations. These figures enable the interpretation of large datasets in which several hundred papers can be analyzed in an automated manner. This allows for distinguishing among different research clusters, thereby following a more inductive approach. This is contrasted by meta-analysis (e.g., Leuschner et al. 2013), where often a comparatively smaller number of papers is analyzed (86 in the respective case) but with a high number of observations (more than 17,000). The aim is to test for statistically significant correlations among single constructs, which requires that the related constructs and items be precisely defined (i.e., a clearly deductive approach to the analysis).
Content analysis is the third instrument frequently applied to data analysis, where an inductive or deductive approach might be taken (Seuring et al. 2021). Content-based analysis (see Decision 9 in Sect. 4.6; Seuring and Gold 2012) is a labor-intensive step and can hardly be changed ex post. This also implies that only a certain number of papers might be analyzed (see Decision 7 in Sect. 4.5). It is advisable to adopt a wider set of constructs for the analysis, stemming even from multiple established frameworks, since it is difficult to predict which constructs and items might yield interesting insights. Hence, coding a more comprehensive set of items and dropping some in the process is less problematic than starting an analysis all over again for additional constructs and items. However, in the process of content analysis, such an iterative process might be required to improve the meaningfulness of the data and findings (Seuring and Gold 2012). A recent example of such an approach can be found in Khalid and Seuring (2019), building on the conceptual frameworks for SSCM of Carter and Rogers (2008), Seuring and Müller (2008), and Pagell and Wu (2009). This allows for an in-depth analysis of how SSCM constructs are inherently referred to in base-of-the-pyramid-related research. The core criticism and limitation of such an approach is the random and subjectively biased selection of frameworks for the purpose of analysis.
Beyond the aforementioned SLR methods, some reviews, similar to the one used here, apply a critical review approach. This is, however, nonsystematic, and not an SLR; thus, it is beyond the scope of this paper. Interested readers can nevertheless find some guidance on critical reviews in the available literature (e.g., Kraus et al. 2022 ; Snyder 2019 ).
4.3 Step 2: determining the required characteristics of primary studies
After setting the stage for the review, it is essential to determine which literature is to be reviewed in Decision 4. This topic is discussed by almost all existing guidelines and will thus only briefly be discussed here. Durach et al. ( 2017 ) elaborated in great detail on defining strict inclusion and exclusion criteria that need to be aligned with the chosen theoretical framework. The relevant units of analysis need to be specified (often a single paper, but other approaches might be possible) along with suitable research methods, particularly if exclusively empirical studies are reviewed or if other methods are applied. Beyond that, they elaborated on potential quality criteria that should be applied. The same is considered by a number of guidelines that especially draw on medical research, in which systematic reviews aim to pool prior studies to infer findings from their total population. Here, it is essential to ensure the exclusion of poor-quality evidence that would lower the quality of the review findings (Mulrow 1987 ; Tranfield et al. 2003 ). This could be ensured by, for example, only taking papers from journals listed on the Web of Science or Scopus or journals listed in quartile 1 of Scimago ( https://www.scimagojr.com/ ), a database providing citation and reference data for journals.
The selection of relevant publication years should again follow the purpose of the study defined in Step 1. As such, there might be a justified interest in wide coverage of publication years if a historical perspective is taken. Alternatively, more contemporary developments or the analysis of very recent issues can justify the selection of very few years of publication (e.g., Kraus et al. 2022). Again, it is hard to specify a certain time period to be covered, but if the development of a field is to be analyzed, a five-year period might be a typical lower threshold. On current topics, there is often a trend of rising publication numbers. Such a trend suggests the rising relevance of a topic; however, it should be treated with caution: the total number of papers published per annum has increased substantially in recent years, which might itself account for the recently heightened number of papers on a certain topic.
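The caveat above can be made concrete with a small sketch: before reading a rising annual count of topic papers as rising relevance, normalize it by the field's total annual output. All numbers below are invented for illustration.

```python
# Hypothetical publication counts, invented for demonstration only.
topic_counts = {2018: 12, 2019: 18, 2020: 30, 2021: 45}        # papers on the topic
total_counts = {2018: 40_000, 2019: 48_000, 2020: 60_000, 2021: 90_000}  # all papers in the field

# The topic's share of total output is a better relevance signal than raw counts.
shares = {year: topic_counts[year] / total_counts[year] for year in topic_counts}

for year in sorted(shares):
    print(f"{year}: {topic_counts[year]:3d} papers, share of field output = {shares[year]:.4%}")
```

In this invented example, the raw count nearly quadruples while the topic's share of the field grows far more modestly, illustrating why raw counts alone can mislead.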
4.4 Step 3: retrieving a sample of potentially relevant literature
After defining the required characteristics of the literature to be reviewed, the literature needs to be retrieved based on two decisions. Decision 5 concerns the suitable literature sources and databases that need to be defined. Web of Science and Scopus are two typical options found in many of the examples mentioned already (see also the detailed guidance by Paul and Criado (2020) as well as Paul et al. (2021)). These databases aggregate many management journals, and a typical argument for turning to the Web of Science database is the inclusion of impact factors, as they indicate a certain minimum quality of the journal (Sauer and Seuring 2017). Additionally, Google Scholar is increasingly mentioned as a usable search engine, often providing higher numbers of search results than the mentioned databases (e.g., Pearce 2018). However, these results often entail duplicates of articles from multiple sources or versions of the same article, as well as articles in predatory journals (Paul et al. 2021). Therefore, we concur with Paul et al. (2021), who underline the quality assurance mechanisms in Web of Science and Scopus, making them preferred databases for the literature search. From a practical perspective, it needs to be mentioned that SLRs in management mainly rely on databases that are not free to use. To address this limitation, Pearce (2018) provided a list of 20 search engines that are free of charge and elaborated on their advantages and disadvantages. Due to the individual limitations of the databases, it is advisable to use a combination of them (Kraus et al. 2020, 2022) and build a consolidated sample by screening the papers found for duplicates, as regularly done in SLRs.
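Consolidating hits from several databases into one duplicate-free sample can be sketched as follows. This is a hedged illustration, not part of the cited guidelines: the record titles are invented, and a real project would prefer matching on DOIs where available rather than on normalized titles.

```python
def normalize(title: str) -> str:
    """Lowercase and strip punctuation/whitespace so near-identical titles match."""
    return "".join(ch for ch in title.lower() if ch.isalnum())

# Invented search results from two databases; one record appears in both.
web_of_science = ["Sustainable Supply Chains: A Review", "Blockchain in SCM"]
scopus = ["Sustainable supply chains: a review", "Multi-tier supplier management"]

seen, consolidated = set(), []
for title in web_of_science + scopus:
    key = normalize(title)
    if key not in seen:  # keep only the first occurrence of each title
        seen.add(key)
        consolidated.append(title)

print(consolidated)
```

Documenting how many records each database contributed and how many duplicates were removed also supports the replicability that the guidelines call for.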
This decision also includes the choice of the types of literature to be analyzed. Typically, journal papers are selected, ensuring that the collected papers are peer-reviewed and have thus undergone an academic quality management process. Meanwhile, conference papers are usually avoided since they are often less mature and not checked for quality (e.g., Seuring et al. 2021 ). Nevertheless, for emerging topics, it might be too restrictive to consider only peer-reviewed journal articles and limit the literature to only a few references. Analyzing such rapidly emerging topics is relevant for timely and impact-oriented research and might justify the selection of different sources. Kraus et al. ( 2020 ) provided a discussion on the use of gray literature (i.e., nonacademic sources), and Sauer ( 2021 ) provided an example of a review of sustainability standards from a management perspective to derive implications for their application by managers on the one hand and for enhancing their applicability on the other hand.
Another popular way to limit the review sample is the restriction to a certain list of journals (Kraus et al. 2020; Snyder 2019). While this is sometimes favored by highly ranked journals, Carter and Washispack (2018), for example, found that many pertinent papers are not necessarily published in journals within the field. Webster and Watson (2002) quite tellingly cited a reviewer labeling the selection of top journals as an unjustified excuse for not investigating the full body of relevant literature. Both aforementioned guidelines thus discourage the restriction to particular journals, a guidance that we fully support.
However, there is an argument to be made for excluding certain lower-ranked journals. This can be done, for example, by using Scimago Journal quartiles (www.scimagojr.com, last accessed 13 April 2023) and restricting the sample to journals in the first quartile (e.g., Yavaprabhas et al. 2022). Other papers (e.g., Kraus et al. 2021; Rojas-Córdova et al. 2022) use certain journal quality lists to limit their sample. However, we argue for a careful check by the authors, against the topic reviewed, of what would be included and excluded.
Decision 6 entails the definition of search terms and a search string to be applied in the database just chosen. The search terms should reflect the aims of the review and the exclusion criteria that might be derived from the unit of analysis and the theoretical framework (Durach et al. 2017; Snyder 2019). Overall, two approaches to keywords can be observed. First, some guides suggest using synonyms of the key terms of interest (e.g., Durach et al. 2017; Kraus et al. 2020) in order to build a wide baseline sample that will be condensed in the next step. This is especially helpful if multiple terms together delimit a field or if different synonymous terms are used in parallel in different fields or journals. Empirical journals in supply chain management, for example, use the term “multiple supplier tiers” (e.g., Tachizawa and Wong 2014), while modeling journals in the same field label this as “multiple supplier echelons” (e.g., Brandenburg and Rebs 2015). Second, in some cases, single keywords are appropriate for capturing a central aspect or construct of a field if the single keyword has a global meaning tying the field together. This approach is especially relevant to the study of relatively broad terms, such as “social media” (Lim and Rasul 2022). However, it might result in very high numbers of publications found and therefore requires a purposeful combination with other search criteria, such as specific journals (Kraus et al. 2021; Lim et al. 2021), publication dates, article types, research methods, or keywords covering the domains to which the search is to be narrowed.
Since SLRs are often required to move into detail or review the intersections of relevant fields, we recommend building groups of keywords (single terms or multiple synonyms) for each field to be connected, coupled via Boolean operators. To determine when a point of saturation for a keyword group is reached, one can monitor the increase in papers found in a database when adding another synonym. Once the increase levels off or drops to zero, saturation is reached (Sauer and Seuring 2017). The keywords themselves can be derived from the keyword lists of influential publications in the field, while attention should be paid to potential synonyms in neighboring fields (Carter and Washispack 2018; Durach et al. 2017; Kraus et al. 2020).
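The coupling of keyword groups via Boolean operators can be sketched programmatically: synonyms within a group are joined with OR, and the groups themselves are intersected with AND. The group names and terms below are hypothetical examples, not a prescribed search string, and real databases differ in their exact query syntax.

```python
# Hypothetical keyword groups (synonyms within each group); invented examples.
groups = {
    "field": ["supply chain", "logistics"],
    "scope": ["multi-tier", "multiple tiers", "multiple echelons"],
    "theme": ["sustainab*"],
}

# OR joins synonyms within a group; AND intersects the fields to be connected.
search_string = " AND ".join(
    "(" + " OR ".join(f'"{term}"' for term in terms) + ")"
    for terms in groups.values()
)
print(search_string)
```

Rebuilding the string after each added synonym, and recording the resulting hit count, also gives a simple log from which the saturation point described above can be read off.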
4.5 Step 4: selecting the pertinent literature
The inclusion and exclusion criteria (Decision 6) are typically applied in Decision 7 in a two-stage process: first to the title, abstract, and keywords of an article, and second to the full text of the remaining articles (see also Kraus et al. 2020; Snyder 2019). Beyond this, Durach et al. (2017) underlined that the pertinence of a publication regarding the unit of analysis and the theoretical framework needs to be critically evaluated in this step to avoid bias in the review analysis. Moreover, Carter and Washispack (2018) requested the publication of the included and excluded sources to ensure the replicability of Steps 3 and 4. This can easily be done as an online supplement to the eventually published review article.
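A minimal sketch of documenting this two-stage selection, so that the included and excluded sources can later be published as a supplement, might look as follows. The record structure, field names, and exclusion reasons are invented for illustration.

```python
# Hypothetical screening log: one entry per retrieved record.
# full_text_pass is None when a record never reached the full-text stage.
records = [
    {"id": 1, "title_abstract_pass": True,  "full_text_pass": True,  "reason": ""},
    {"id": 2, "title_abstract_pass": True,  "full_text_pass": False, "reason": "unit of analysis not matched"},
    {"id": 3, "title_abstract_pass": False, "full_text_pass": None,  "reason": "off-topic"},
]

# Stage 1 (title/abstract/keywords) and stage 2 (full text) must both be passed.
included = [r["id"] for r in records if r["title_abstract_pass"] and r["full_text_pass"]]
excluded = [(r["id"], r["reason"]) for r in records if r["id"] not in included]

print("included:", included)
print("excluded:", excluded)
```

Keeping the exclusion reason alongside each record makes the eventual supplement self-explanatory to editors, reviewers, and readers.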
Nevertheless, the question remains: How many papers justify a literature review? While it is hard to specify how many papers comprise a body of literature, there might be certain thresholds for which Kraus et al. ( 2020 ) provide a useful discussion. As a rough guide, more than 50 papers would usually make a sound starting point (see also Paul and Criado 2020 ), while there are SLRs on emergent topics, such as multitier supply chain management, where 39 studies were included (Tachizawa and Wong 2014 ). An SLR on “learning from innovation failures” builds on 36 papers (Rhaiem and Amara 2021 ), which we would see as the lower threshold. However, such a low number should be an exception, and anything lower would certainly trigger the following question: Why is a review needed? Meanwhile, there are also limits on how many papers should be reviewed. While there are cases with 191 (Seuring and Müller 2008 ), 235 (Rojas-Córdova et al. 2022 ), or up to nearly 400 papers reviewed (Spens and Kovács 2006 ), these can be regarded as upper thresholds. Over time, similar topics seem to address larger datasets.
4.6 Step 5: synthesizing the literature
Before synthesizing the literature, Decision 8 considers the selection of a data extraction tool, for which we found surprisingly little guidance. Some guidance is given on the use of cloud storage to enable remote teamwork (Clark et al. 2021). Beyond this, we found that SLRs have often been compiled with marked and commented PDFs or printed papers, accompanied by tables (Kraus et al. 2020) or Excel sheets (see also the process tips by Clark et al. 2021). Such a sheet tabulates the single codes derived from the theoretical framework (Decision 3) against the single papers to be reviewed (Decision 7), with marked cells signaling the representation of a particular code in a particular paper. While the frequency distribution of the codes is easily compiled from this data tool, the related content needs to be looked up in the papers in a tedious back-and-forth process. Beyond that, we would strongly recommend using data analysis software, such as MAXQDA or NVivo. Such programs enable the import of literature in PDF format and the automatic or manual coding of text passages, their comparison, and tabulation. Moreover, there is a permanent and editable reference from each coded text passage to its code. This enables a very quick compilation of content summaries or statistics for single codes and the identification of qualitative and quantitative links between codes and papers.
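The spreadsheet-style coding matrix described above can be sketched with plain data structures: papers in rows, codes in columns, and a mark where a code appears in a paper. The paper names and codes below are invented; dedicated QDA software (MAXQDA, NVivo) additionally keeps the link back to the coded text passages, which this sketch does not.

```python
from collections import Counter

# Hypothetical coding results: which codes (constructs) appear in which paper.
coding = {
    "Paper A": ["traceability", "supplier selection"],
    "Paper B": ["traceability"],
    "Paper C": ["supplier selection", "risk management"],
}

# Frequency distribution of codes across the sample, as easily compiled
# from the tabulated data as from an Excel sheet.
frequencies = Counter(code for codes in coding.values() for code in codes)
print(frequencies.most_common())
```

From the same structure, one can also derive which papers share a code, a stepping stone toward the contingency analysis discussed under Decision 10.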
All the mentioned data extraction or data processing tools require a license and are therefore not free of cost. While many researchers may benefit from national or institutional subscriptions to these services, others may not. As a potential alternative, Pearce (2018) proposed a set of free and open-source software (FOSS), including an elaboration on how these tools can be combined to perform an SLR. He also highlighted that both free and proprietary solutions have advantages and disadvantages that are worth weighing for those who are not provided with the required tools by their employers or other institutions they are members of. The same applies to the literature databases used for the literature acquisition in Decision 5 (Pearce 2018).
Moreover, there is a link to Step 1, Decision 3, where bibliometric reviews and meta-analyses were mentioned. These methods, which are alternatives to content analysis–based approaches, have specific demands, so specific tools would be appropriate, such as the Biblioshiny software or VOSviewer. As we will point out for all decisions, there is a high degree of interdependence among the steps and decisions made.
Decision 9 concerns conducting the data analysis, such as coding against (pre-defined) constructs; in most cases, SLRs rely on content analysis for this purpose. Seuring and Gold ( 2012 ) elaborated in detail on its characteristics and application in SLRs. As that paper explains the process of qualitative content analysis in detail, we avoid repetition here and offer only a summary. Since different ways exist to conduct a content analysis, it is all the more important to explain and justify, for example, the choice of an inductive or deductive approach (see Decision 2). In several cases, analytic variables are introduced ad hoc, without a theory-based introduction of the related constructs. However, to ensure the validity and replicability of the review (see Decision 11), it is necessary to explicitly define all the variables and codes used to analyze and synthesize the reviewed material (Durach et al. 2017 ; Seuring and Gold 2012 ). To build a valid framework as the SLR outcome, it is vital to ensure that the constructs used for the data analysis are sufficiently defined, mutually exclusive, and collectively exhaustive. For a meta-analysis, the predefined constructs and items would demand quantitative coding so that the resulting data could be analyzed using statistical software tools such as SPSS or R (e.g., Xiao and Watson 2019 ). Pointing to bibliometric analysis again, the respective software would be used for data analysis, yielding various figures and paper clusters, which would then require interpretation (e.g., Donthu et al. 2021 ; Xiao and Watson 2019 ).
Decision 10, on conducting subsequent statistical analysis, considers follow-up analyses of the coding results. Again, this is linked to the chosen SLR method: a bibliometric analysis will require a different statistical analysis than a content analysis–based SLR (e.g., Lim et al. 2022 ; Xiao and Watson 2019 ). Beyond the use of content analysis and the qualitative interpretation of its results, applying contingency analysis offers the opportunity to quantitatively assess the links among constructs and items. It provides insights into which items are correlated with each other, without implying causality. The interpretation of the findings must therefore supply the causal reasoning behind the correlations between the constructs and items, based on sound argument and on linking the findings to theory. For SLRs, there have recently been two kinds of applications of contingency analysis, differentiated by their unit of analysis. De Lima et al. ( 2021 ) used the entire paper as the unit of analysis, deriving correlations from whether two constructs were used together in one paper. This is, of course, open to the critique of whether the constructs really represent correlated content. Moving a level deeper, Tröster and Hiete ( 2018 ) used single text passages on one aspect, argument, or thought as the unit of analysis. Such an approach is immune to this critique and can yield more valid statistical support for thematic analysis. Another recent methodological contribution employing the same contingency analysis–based approach was made by Siems et al. ( 2021 ), whose analysis employs constructs from SSCM and dynamic capabilities. Employing four subsets of data (i.e., two time periods each in the food and automotive industries), they showed that the method allows distinguishing among time frames as well as among industries.
However, the unit of analysis must be precisely explained so that the reader can comprehend it. Both examples use contingency analysis to identify under-researched topics and develop them into research directions, whose formulation represents a particular aim of an SLR (Paul and Criado 2020 ; Snyder 2019 ). Other statistical tools might also be applied, such as cluster analysis. Interestingly, Brandenburg and Rebs ( 2015 ) applied both contingency and cluster analyses; since the contingency analysis did not yield usable results, they opted for cluster analysis. In effect, Brandenburg and Rebs ( 2015 ) added analytical depth to their analysis of model types in SSCM by clustering them against the main analytical categories of the content analysis. In any case, the application of statistical tools needs to fit the study purpose (Decision 1) and the literature sample (Decision 7), just as in their more conventional applications (e.g., in empirical research processes).
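The paper-level variant of contingency analysis can be illustrated with a small sketch. The 0/1 coding vectors below are invented, and a standard chi-square test of independence is used merely as one possible operationalization of "assessing the links among constructs":

```python
# Hedged sketch of paper-level contingency analysis: do two constructs
# co-occur in the same papers more often than chance would suggest?
# The 0/1 coding vectors for ten hypothetical papers are invented.
import numpy as np
from scipy.stats import chi2_contingency

construct_a = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])  # 1 = coded in paper
construct_b = np.array([1, 1, 0, 1, 0, 1, 0, 0, 1, 1])

# 2x2 contingency table: joint presence/absence across the paper sample
table = np.array([
    [int(np.sum((construct_a == 1) & (construct_b == 1))),
     int(np.sum((construct_a == 1) & (construct_b == 0)))],
    [int(np.sum((construct_a == 0) & (construct_b == 1))),
     int(np.sum((construct_a == 0) & (construct_b == 0)))],
])

chi2, p, dof, expected = chi2_contingency(table)
print(table)    # observed co-occurrence counts
print(chi2, p)  # test statistic and p-value (no causal claim implied)
```

Even a significant association only flags a correlation; as argued above, the causal reasoning behind it must come from theory, and at the paper level the critique remains that co-occurrence in one paper need not mean correlated content.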
Decision 11 regards the additional consideration of validity and reliability criteria and emphasizes the need to explain and justify the single steps of the research process (Seuring and Gold 2012 ), much in line with other types of research (Davis and Crombie 2001 ). This is critical to establishing the quality of the review but is neglected in many submitted manuscripts. In our review, we find rich guidance on this decision, to which we direct readers (see Table 3 ). In particular, Durach et al. ( 2017 ) provide an entire section on biases and on what needs to be considered and reported about them. Moreover, Snyder ( 2019 ) regularly reflects on these issues in her elaborations. This rich guidance covers how to ensure the quality of the individual steps of the review process, such as sampling, study inclusion and exclusion, coding, and synthesizing, as well as more practical issues, including team composition and teamwork organization, which are discussed in some guidelines (e.g., Clark et al. 2021 ; Kraus et al. 2020 ). We only want to underline that the potential biases are, of course, to be seen in conjunction with Decisions 2, 3, 4, 5, 6, 7, 9, and 10. These decisions and the elaboration by Durach et al. ( 2017 ) should provide ample points of reflection that, however, many SLR manuscripts fail to address.
4.7 Step 6: reporting the results
In the final step, there are three decisions on which there is surprisingly little guidance, although reviews often fail in this critical part of the process (Kraus et al. 2020 ). The reviewed guidelines focus almost exclusively on the presentation of the findings, while hardly any guidance is given on the overall paper structure or the key content to be reported.
Consequently, the first choice to be made in Decision 12 regards the paper structure. We suggest following the five-part logic of typical research papers (see also Fisch and Block 2018 ) and explain below only the few points in which review papers differ from other papers.
(1) Introduction: While the introduction would follow a conventional logic of problem statement, research question, contribution, and outline of the paper (see also Webster and Watson 2002 ), the next parts might depend on the theoretical choices made in Decision 2.
(2) Literature review section: If a deductive logic is taken, the paper usually has a conventional flow. After the introduction, the literature review section covers the theoretical background and the choice of constructs and variables for the analysis (De Lima et al. 2021 ; Dieste et al. 2022 ). To avoid confusing this section with the systematic review itself, it can also be labeled more closely after the reviewed object.
If an inductive approach is applied, it might be challenging to present the theoretical basis up front, as the codes emerge only from analyzing the material. In this case, the theory section might be rather short, concentrating on defining the core concepts or terms used, for example, in the keyword-based search for papers. The latter approach is exemplified by the study at hand, which presents a short review of the available literature in the introduction and the first part of the findings. Note, however, that we perform not a systematic but an integrative review, which allows for more freedom and creativity (Snyder 2019 ).
(3) Method section: This section should cover the steps and follow the logic presented in this paper or any of the reviewed guidelines so that the choices made during the research process are transparently disclosed (Denyer and Tranfield 2009 ; Paul et al. 2021 ; Xiao and Watson 2019 ). In particular, the search for papers and their selection require a sound explanation of each step taken, including reasons for the delimitation of the final paper sample. A stage often not covered in sufficient detail is the data analysis (Seuring and Gold 2012 ). It, too, needs to be outlined so that the reader can comprehend how sense has been made of the collected material. Overall, the demands on SLR papers are similar to those on case studies, survey papers, or almost any piece of empirical research: each step of the research process needs to be comprehensively described, including Decisions 4–10. This comprehensiveness must also extend to measures of validity and reliability (see Decision 11) or other suitable measures of rigor in the research process, since these are a critical issue in literature reviews (Durach et al. 2017 ). Inductively conducted reviews in particular are prone to subjective influences and thus require sound reporting of design choices and their justification.
(4) Findings: The findings typically start with a descriptive analysis of the literature covered, such as journals, distribution across years, or (empirical) methods applied (Tranfield et al. 2003 ). For modeling-related reviews, classifying papers by the modeling approach chosen is standard, but this classification can often also serve as an analytic category that provides detailed insights. The descriptive analysis should be kept short, since a paper presenting only descriptive findings will not be of great interest to other researchers due to its limited contribution (Snyder 2019 ). Nevertheless, there are opportunities to provide interesting findings in the descriptive analysis. Beyond a mere description of single distributions, such as the distribution of methods used in the sample, authors should combine analytical categories to derive more detailed insights (see also Tranfield et al. 2003 ). The distribution of methods used might well be combined with the years of publication to identify and characterize different phases in the development of a field of research or its maturity. Moreover, there could be value in analyzing the theories applied in the review sample (e.g., Touboulic and Walker 2015 ; Zhu et al. 2022 ) and in reflecting on the interplay of different qualitative and quantitative methods in spurring the theoretical development of the reviewed field. This could yield detailed insights into methodological as well as theoretical gaps, and we would suggest explicitly linking the findings of such analyses to the research directions that an SLR typically provides. This link could make the research directions much more tangible by giving researchers a clear indication of how to follow up on the findings, as done, for example, by Maestrini et al. ( 2017 ) or Dieste et al. ( 2022 ).
In contrast to these examples of an actionable research agenda, a typical weakness of premature SLR manuscripts is that they call rather superficially for more research on the various aspects they reviewed but remain silent on how exactly this could be achieved.
We would thus like to encourage future SLR authors to systematically investigate the potential to combine two categories of descriptive analysis to move this section of the findings to a higher level of quality, interest, and relevance. The same can, of course, be done with the thematic findings, which comprise the second part of this section.
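As a minimal illustration of combining two categories of descriptive analysis, the following sketch cross-tabulates the methods applied against the years of publication; all sample data are invented:

```python
# Illustrative sketch: combining two descriptive categories of an SLR sample,
# here the empirical method applied and the year of publication, to trace
# phases in a field's development. All sample data are invented.
import pandas as pd

sample = pd.DataFrame({
    "year":   [2015, 2015, 2017, 2017, 2019, 2019, 2021, 2021],
    "method": ["case study", "survey", "case study", "modeling",
               "survey", "modeling", "modeling", "survey"],
})

# The cross-tabulation shows how the methodological profile shifts over time
methods_by_year = pd.crosstab(sample["method"], sample["year"])
print(methods_by_year)
```

The resulting table moves beyond two separate frequency lists: a shift of mass across its rows over the year columns is precisely the kind of phase pattern discussed above.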
Moving on to the thematic findings, we reach Decision 13 on the presentation of the refined theoretical framework and the discussion of its contents. A first step might present the frequencies of the codes or constructs applied in the analysis, which allows the reader to understand which topics are relevant. If a rather small body of literature is analyzed, tables showing which paper has been coded for which construct might help improve the transparency of the research process. Tables or other forms of visualization might also help to organize the many codes soundly (see also Durach et al. 2017 ; Paul and Criado 2020 ; Webster and Watson 2002 ). These findings then lead to interpretation, for which it is necessary to extract meaning from the body of literature and present it accordingly (Snyder 2019 ). To do so, it should go without saying that the researchers refer back to Decisions 1, 2, and 3 taken in Step 1 and their justifications. These typically identify the research gap to be filled; yet, after the lengthy process of the SLR, authors often fail to step back from the coding results and put them into a larger perspective against the research gap defined in Decision 1 (see also Clark et al. 2021 ). To support this, it is certainly helpful to illustrate the findings in a figure or graph that presents the links among the constructs and items and adds causal reasoning to them (Durach et al. 2017 ; Paul and Criado 2020 ), such as the three figures by Seuring and Müller ( 2008 ) or other examples by De Lima et al. ( 2021 ) or Tipu ( 2022 ). This presentation should condense the arguments made in the assessed literature but should also chart the course for future research. It is these parts of the paper that are decisive for a strong SLR paper.
Moreover, some guidelines identify concept-centric synthesis as the most fruitful way of synthesizing the findings (Clark et al. 2021 ; Fisch and Block 2018 ; Webster and Watson 2002 ). As in the previous sentence, the presentation of the review findings is then centered on a content element or concept (here, "concept-centric synthesis"), accompanied by a reference to all or the most relevant literature in which the concept is evident. In contrast, as Webster and Watson ( 2002 ) observed, author-centric synthesis discusses individual papers and what they have done and found (just like this sentence here). They added that this approach fails to synthesize larger samples. We note that we used the latter approach in some places in this paper; however, this is to actively refer the reader to those studies, as they stand out from our relatively small sample. Beyond this, we want to link back to Decision 3, the selection of a theoretical framework and constructs. These constructs, or the parts of a framework, can also serve to structure the findings section by using them as headlines for subsections (Seuring et al. 2021 ).
Last but not least, there might even be cases where core findings and relationships are opposed and alternative perspectives presented. This is certainly challenging to argue for but worthwhile in order to drive the reviewed field forward. A related example is the paper by Zhu et al. ( 2022 ), who challenged the current debate at the intersection of blockchain applications and supply chain management and pointed to the limited use of theoretical foundations in related analyses.
(5) Discussion and Conclusion: The discussion needs to explain the contribution the paper makes to the extant literature, that is, which previous findings or hypotheses are supported or contradicted and which aspects of the findings are particularly interesting for the future development of the reviewed field. This is in line with the content required in the discussion sections of any other paper type. A typical structure might point to the contribution and put it into perspective with already existing research. Further, limitations should be addressed on both the theoretical and methodological sides. This elaboration of the limitations can be coupled with the considerations of the validity and reliability of the study in Decision 11. The implications for future research are a core aim of an SLR (Clark et al. 2021 ; Mulrow 1987 ; Snyder 2019 ) and should be addressed in a further part of the discussion section. Recently, a growing number of literature reviews have also provided research questions for future research that provide a very concrete and actionable output of the SLR (e.g. Dieste et al. 2022 ; Maestrini et al. 2017 ). Moreover, we would like to reiterate our call to clearly link the research implications to the SLR findings, which helps the authors craft more tangible research directions and helps the reader to follow the authors’ interpretation. Literature review papers are usually not strongly positioned toward managerial implications, but even these implications might be included.
As in any paper, the conclusion should provide an answer to the research question put forward in the introduction, thereby closing the cycle of arguments made in the paper.
Although all the work seems to be done once the paper is written and the contribution is fleshed out, there is still one major decision to be made. Decision 14 concerns the identification of an appropriate journal for submission. Despite the popularity of the SLR method, a rising number of journals explicitly limit the number of SLRs they publish. Moreover, only two guidelines elaborate on this decision, underlining the need for the following considerations.
Although it might seem most attractive to submit the paper to the highest-ranking journal covering the reviewed topic, we point to two critical, review-related choices made during the research process that influence whether the paper fits a certain outlet:
The theoretical foundation of the SLR (Decision 3) usually relates to certain journals in which it is published or discussed. If a deductive approach was taken, the journals in which the foundational papers were published might be suitable, since the review potentially contributes to the further validation or refinement of the frameworks. Overall, we need to keep in mind that a paper must add to an ongoing discussion in the journal, which can be based on the theoretical framework or, as shown below, on the reviewed papers.
Appropriate journals for publication can also be derived from the analyzed journal papers (Decision 7) (see also Paul and Criado 2020 ). Submitting to a journal that features prominently in the sample allows the paper to link directly to the theoretical debate in that outlet. This choice is identifiable in most of the papers mentioned in this paper and is often illustrated in the descriptive analysis.
If the journal chosen for submission is neither related to the theoretical foundation nor strongly represented in the body of literature analyzed, an explicit justification in the paper itself might be needed. Alternatively, an explanation might be provided in the letter to the editor when submitting the paper. Without such a statement, the likelihood of the manuscript entering the review process and passing it is rather low. Finally, we refer readers interested in the specificities of the publication-related review process for SLRs to Webster and Watson ( 2002 ), who elaborated on this for Management Information Systems Quarterly.
5 Discussion and conclusion
Critically reviewing the currently available SLR guidelines in the management domain, this paper synthesizes 14 key decisions to be made and reported across the SLR research process. Guidance is presented for each decision, including tasks that assist in making sound choices to complete the research process and make meaningful contributions. Applying these guidelines should improve the rigor and robustness of many review papers and thus enhance their contributions. Moreover, some practical hints and best-practice examples are provided on issues that inexperienced authors regularly struggle to present in a manuscript (Fisch and Block 2018 ) and that thus frustrate reviewers, readers, editors, and authors alike.
Strikingly, the review of prior guidelines reported in Table 3 revealed their focus on the technical details that need to be reported in any SLR. Consequently, our discipline has come a long way in crafting search strings and inclusion and exclusion criteria and in elaborating on the validity and reliability of an SLR. Nevertheless, critical areas remain underdeveloped, such as the identification of relevant research gaps and questions, data extraction tools, the analysis of the findings, and a meaningful and interesting reporting of the results. Our study contributes to filling these gaps by providing operationalized guidance to SLR authors, especially early-stage researchers who craft SLRs at the outset of their research journeys. At the same time, we need to underline that our paper is, of course, not the only useful reference for SLR authors. Instead, readers are invited to find further guidance on the many aspects to consider in an SLR in the references we provide within the single decisions, as well as in Tables 1 and 2 . The tables also identify the strengths of other guidelines, which our paper does not aim to replace but to connect and extend at selected points, especially in SLR Steps 5 and 6.
The findings repeatedly underline the interconnection of the 14 decisions identified and discussed in this paper. We thus support Tranfield et al. ( 2003 ), who called for a flexible approach to the SLR combined with clear reporting of all design decisions and reflection on their impacts. In line with the guidance synthesized in this review, and especially with Durach et al. ( 2017 ), we also present a refined framework in Figs. 1 and 2 . It refines the original six-step SLR process by Durach et al. ( 2017 ) in three ways:
Enriched six-step process including the core interrelations of the 14 decisions
First, we subdivided the six steps into 14 decisions to enhance the operationalization of the process and enable closer guidance (see Fig. 1 ). Second, we added a temporal sequence to Fig. 2 by positioning the decisions from left to right according to this sequence. This positioning is based on systematically reflecting on whether one decision must be finished before the next begins: where this is the case, the following decision moves to the right; where it is not, the decisions are positioned below each other. Turning to Fig. 2 , it becomes evident that Step 2, "determining the required characteristics of primary studies," and Step 3, "retrieving a sample of potentially relevant literature," including their Decisions 4–6, can be conducted iteratively. While this contrasts with the strict division of the six steps by Durach et al. ( 2017 ), it supports other guidance that suggests running pilot studies to iteratively define the literature sample, its sources, and its characteristics (Snyder 2019 ; Tranfield et al. 2003 ; Xiao and Watson 2019 ). While this insight might suggest merging Steps 2 and 3, we refrain from this superficial change and from building yet another SLR process model. Instead, we prefer to add detail and depth to Durach et al.'s ( 2017 ) model.
(Decisions: D1: specifying the research gap and related research question, D2: opting for a theoretical approach, D3: defining the core theoretical framework and constructs, D4: specifying inclusion and exclusion criteria, D5: defining sources and databases, D6: defining search terms and crafting a search string, D7: including and excluding literature for detailed analysis and synthesis, D8: selecting data extraction tool(s), D9: coding against (pre-defined) constructs, D10: conducting a subsequent (statistical) analysis (optional), D11: ensuring validity and reliability, D12: deciding on the structure of the paper, D13: presenting a refined theoretical framework and discussing its contents, and D14: deriving an appropriate journal from the analyzed papers).
This is also done through the third refinement, which highlights which previous or later decisions need to be considered within each single decision. Such a consideration moves beyond the mere temporal sequence of steps and decisions, which does not reflect the full complexity of the SLR process. Instead, its focus is on the need to align, for example, the conduct of the data analysis (Decision 9) with the theoretical approach (Decision 2) and consequently to ensure that the chosen theoretical framework and constructs (Decision 3) are sufficiently defined for the data analysis (i.e., mutually exclusive and collectively exhaustive). The mentioned interrelations are displayed in Fig. 2 by means of directed arrows from one decision to another; the underlying explanations can be found in the earlier sections, where the text on each decision names the decisions it impacts. Overall, it is unsurprising that the vast majority of interrelations run from earlier to later steps and decisions (displayed through arrows below the diagonal of decisions), while only a few are inverse.
Combining the first refinement of the original framework (defining the 14 decisions) and the third refinement (revealing the main interrelations among the decisions) underlines the contribution of this study in two main ways. First, the centrality of ensuring validity and reliability (Decision 11) is underlined. It becomes evident that considerations of validity and reliability are central to the overall SLR process since all steps before the writing of the paper need to be revisited in iterative cycles through Decision 11. Any lack of related considerations will most likely lead to reviewer critique, putting the SLR publication at risk. On the positive side of this centrality, we also found substantial guidance on this issue. In contrast, as evidenced in Table 3 , there is a lack of prior guidance on Decisions 1, 8, 10, 12, 13, and 14, which this study is helping to fill. At the same time, these underexplained decisions are influenced by 14 of the 44 (32%) incoming arrows in Fig. 2 and influence the other decisions in 6 of the 44 (14%) instances. These interrelations among decisions to be considered when crafting an SLR were scattered across prior guidelines, lacked in-depth elaborations, and were hardly explicitly related to each other. Thus, we hope that our study and the refined SLR process model will help enhance the quality and contribution of future SLRs.
The data generated during this research are summarized in Table 3 , and the analyzed papers are publicly available. They are clearly identified in Table 3 and the reference list.
Aguinis H, Ramani RS, Alabduljader N (2020) Best-practice recommendations for producers, evaluators, and users of methodological literature reviews. Organ Res Methods. https://doi.org/10.1177/1094428120943281
Beske P, Land A, Seuring S (2014) Sustainable supply chain management practices and dynamic capabilities in the food industry: a critical analysis of the literature. Int J Prod Econ 152:131–143. https://doi.org/10.1016/j.ijpe.2013.12.026
Brandenburg M, Rebs T (2015) Sustainable supply chain management: a modeling perspective. Ann Oper Res 229:213–252. https://doi.org/10.1007/s10479-015-1853-1
Carter CR, Rogers DS (2008) A framework of sustainable supply chain management: moving toward new theory. Int Jnl Phys Dist Logist Manage 38:360–387. https://doi.org/10.1108/09600030810882816
Carter CR, Washispack S (2018) Mapping the path forward for sustainable supply chain management: a review of reviews. J Bus Logist 39:242–247. https://doi.org/10.1111/jbl.12196
Clark WR, Clark LA, Raffo DM, Williams RI (2021) Extending Fisch and Block's (2018) tips for a systematic review in management and business literature. Manag Rev Q 71:215–231. https://doi.org/10.1007/s11301-020-00184-8
Crane A, Henriques I, Husted BW, Matten D (2016) What constitutes a theoretical contribution in the business and society field? Bus Soc 55:783–791. https://doi.org/10.1177/0007650316651343
Davis J, Mengersen K, Bennett S, Mazerolle L (2014) Viewing systematic reviews and meta-analysis in social research through different lenses. Springerplus 3:511. https://doi.org/10.1186/2193-1801-3-511
Davis HTO, Crombie IK (2001) What is a systematic review? http://vivrolfe.com/ProfDoc/Assets/Davis%20What%20is%20a%20systematic%20review.pdf . Accessed 22 February 2019
De Lima FA, Seuring S, Sauer PC (2021) A systematic literature review exploring uncertainty management and sustainability outcomes in circular supply chains. Int J Prod Res. https://doi.org/10.1080/00207543.2021.1976859
Denyer D, Tranfield D (2009) Producing a systematic review. In: Buchanan DA, Bryman A (eds) The Sage handbook of organizational research methods. Sage Publications Ltd, Thousand Oaks, CA, pp 671–689
Devece C, Ribeiro-Soriano DE, Palacios-Marqués D (2019) Coopetition as the new trend in inter-firm alliances: literature review and research patterns. Rev Manag Sci 13:207–226. https://doi.org/10.1007/s11846-017-0245-0
Dieste M, Sauer PC, Orzes G (2022) Organizational tensions in industry 4.0 implementation: a paradox theory approach. Int J Prod Econ 251:108532. https://doi.org/10.1016/j.ijpe.2022.108532
Donthu N, Kumar S, Mukherjee D, Pandey N, Lim WM (2021) How to conduct a bibliometric analysis: an overview and guidelines. J Bus Res 133:285–296. https://doi.org/10.1016/j.jbusres.2021.04.070
Durach CF, Kembro J, Wieland A (2017) A new paradigm for systematic literature reviews in supply chain management. J Supply Chain Manag 53:67–85. https://doi.org/10.1111/jscm.12145
Fink A (2010) Conducting research literature reviews: from the internet to paper, 3rd edn. SAGE, Los Angeles
Fisch C, Block J (2018) Six tips for your (systematic) literature review in business and management research. Manag Rev Q 68:103–106. https://doi.org/10.1007/s11301-018-0142-x
Fritz MMC, Silva ME (2018) Exploring supply chain sustainability research in Latin America. Int Jnl Phys Dist Logist Manag 48:818–841. https://doi.org/10.1108/IJPDLM-01-2017-0023
Garcia-Torres S, Albareda L, Rey-Garcia M, Seuring S (2019) Traceability for sustainability: literature review and conceptual framework. Supp Chain Manag 24:85–106. https://doi.org/10.1108/SCM-04-2018-0152
Hanelt A, Bohnsack R, Marz D, Antunes Marante C (2021) A systematic review of the literature on digital transformation: insights and implications for strategy and organizational change. J Manag Stud 58:1159–1197. https://doi.org/10.1111/joms.12639
Kache F, Seuring S (2014) Linking collaboration and integration to risk and performance in supply chains via a review of literature reviews. Supp Chain Mnagmnt 19:664–682. https://doi.org/10.1108/SCM-12-2013-0478
Khalid RU, Seuring S (2019) Analyzing base-of-the-pyramid research from a (sustainable) supply chain perspective. J Bus Ethics 155:663–686. https://doi.org/10.1007/s10551-017-3474-x
Koufteros X, Mackelprang A, Hazen B, Huo B (2018) Structured literature reviews on strategic issues in SCM and logistics: part 2. Int Jnl Phys Dist Logist Manage 48:742–744. https://doi.org/10.1108/IJPDLM-09-2018-363
Kraus S, Breier M, Dasí-Rodríguez S (2020) The art of crafting a systematic literature review in entrepreneurship research. Int Entrep Manag J 16:1023–1042. https://doi.org/10.1007/s11365-020-00635-4
Kraus S, Mahto RV, Walsh ST (2021) The importance of literature reviews in small business and entrepreneurship research. J Small Bus Manag. https://doi.org/10.1080/00472778.2021.1955128
Kraus S, Breier M, Lim WM, Dabić M, Kumar S, Kanbach D, Mukherjee D, Corvello V, Piñeiro-Chousa J, Liguori E, Palacios-Marqués D, Schiavone F, Ferraris A, Fernandes C, Ferreira JJ (2022) Literature reviews as independent studies: guidelines for academic practice. Rev Manag Sci 16:2577–2595. https://doi.org/10.1007/s11846-022-00588-8
Leuschner R, Rogers DS, Charvet FF (2013) A meta-analysis of supply chain integration and firm performance. J Supply Chain Manag 49:34–57. https://doi.org/10.1111/jscm.12013
Lim WM, Rasul T (2022) Customer engagement and social media: revisiting the past to inform the future. J Bus Res 148:325–342. https://doi.org/10.1016/j.jbusres.2022.04.068
Lim WM, Yap S-F, Makkar M (2021) Home sharing in marketing and tourism at a tipping point: what do we know, how do we know, and where should we be heading? J Bus Res 122:534–566. https://doi.org/10.1016/j.jbusres.2020.08.051
Lim WM, Kumar S, Ali F (2022) Advancing knowledge through literature reviews: ‘what’, ‘why’, and ‘how to contribute.’ Serv Ind J 42:481–513. https://doi.org/10.1080/02642069.2022.2047941
Lusiantoro L, Yates N, Mena C, Varga L (2018) A refined framework of information sharing in perishable product supply chains. Int J Phys Distrib Logist Manag 48:254–283. https://doi.org/10.1108/IJPDLM-08-2017-0250
Maestrini V, Luzzini D, Maccarrone P, Caniato F (2017) Supply chain performance measurement systems: a systematic review and research agenda. Int J Prod Econ 183:299–315. https://doi.org/10.1016/j.ijpe.2016.11.005
Miemczyk J, Johnsen TE, Macquet M (2012) Sustainable purchasing and supply management: a structured literature review of definitions and measures at the dyad, chain and network levels. Supply Chain Manag 17:478–496. https://doi.org/10.1108/13598541211258564
Moher D, Liberati A, Tetzlaff J, Altman DG (2009) Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med 6:e1000097. https://doi.org/10.1371/journal.pmed.1000097
Mukherjee D, Lim WM, Kumar S, Donthu N (2022) Guidelines for advancing theory and practice through bibliometric research. J Bus Res 148:101–115. https://doi.org/10.1016/j.jbusres.2022.04.042
Mulrow CD (1987) The medical review article: state of the science. Ann Intern Med 106:485–488. https://doi.org/10.7326/0003-4819-106-3-485
Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, Shamseer L, Tetzlaff JM, Akl EA, Brennan SE, Chou R, Glanville J, Grimshaw JM, Hróbjartsson A, Lalu MM, Li T, Loder EW, Mayo-Wilson E, McDonald S, McGuinness LA, Stewart LA, Thomas J, Tricco AC, Welch VA, Whiting P, Moher D (2021) The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. J Clin Epidemiol 134:178–189. https://doi.org/10.1016/j.jclinepi.2021.03.001
Pagell M, Wu Z (2009) Building a more complete theory of sustainable supply chain management using case studies of 10 exemplars. J Supply Chain Manag 45:37–56. https://doi.org/10.1111/j.1745-493X.2009.03162.x
Paul J, Criado AR (2020) The art of writing literature review: What do we know and what do we need to know? Int Bus Rev 29:101717. https://doi.org/10.1016/j.ibusrev.2020.101717
Paul J, Lim WM, O’Cass A, Hao AW, Bresciani S (2021) Scientific procedures and rationales for systematic literature reviews (SPAR-4-SLR). Int J Consum Stud. https://doi.org/10.1111/ijcs.12695
Pearce JM (2018) How to perform a literature review with free and open source software. Pract Assess Res Eval 23:1–13
Rhaiem K, Amara N (2021) Learning from innovation failures: a systematic review of the literature and research agenda. Rev Manag Sci 15:189–234. https://doi.org/10.1007/s11846-019-00339-2
Rojas-Córdova C, Williamson AJ, Pertuze JA, Calvo G (2022) Why one strategy does not fit all: a systematic review on exploration–exploitation in different organizational archetypes. Rev Manag Sci. https://doi.org/10.1007/s11846-022-00577-x
Sauer PC (2021) The complementing role of sustainability standards in managing international and multi-tiered mineral supply chains. Resour Conserv Recycl 174:105747. https://doi.org/10.1016/j.resconrec.2021.105747
Sauer PC, Seuring S (2017) Sustainable supply chain management for minerals. J Clean Prod 151:235–249. https://doi.org/10.1016/j.jclepro.2017.03.049
Seuring S, Gold S (2012) Conducting content-analysis based literature reviews in supply chain management. Supply Chain Manag 17:544–555. https://doi.org/10.1108/13598541211258609
Seuring S, Müller M (2008) From a literature review to a conceptual framework for sustainable supply chain management. J Clean Prod 16:1699–1710. https://doi.org/10.1016/j.jclepro.2008.04.020
Seuring S, Yawar SA, Land A, Khalid RU, Sauer PC (2021) The application of theory in literature reviews: illustrated with examples from supply chain management. Int J Oper Prod Manag 41:1–20. https://doi.org/10.1108/IJOPM-04-2020-0247
Siems E, Land A, Seuring S (2021) Dynamic capabilities in sustainable supply chain management: an inter-temporal comparison of the food and automotive industries. Int J Prod Econ 236:108128. https://doi.org/10.1016/j.ijpe.2021.108128
Snyder H (2019) Literature review as a research methodology: an overview and guidelines. J Bus Res 104:333–339. https://doi.org/10.1016/j.jbusres.2019.07.039
Spens KM, Kovács G (2006) A content analysis of research approaches in logistics research. Int J Phys Distrib Logist Manag 36:374–390. https://doi.org/10.1108/09600030610676259
Tachizawa EM, Wong CY (2014) Towards a theory of multi-tier sustainable supply chains: a systematic literature review. Supply Chain Manag 19:643–663. https://doi.org/10.1108/SCM-02-2014-0070
Tipu SAA (2022) Organizational change for environmental, social, and financial sustainability: a systematic literature review. Rev Manag Sci 16:1697–1742. https://doi.org/10.1007/s11846-021-00494-5
Touboulic A, Walker H (2015) Theories in sustainable supply chain management: a structured literature review. Int J Phys Distrib Logist Manag 45:16–42. https://doi.org/10.1108/IJPDLM-05-2013-0106
Tranfield D, Denyer D, Smart P (2003) Towards a methodology for developing evidence-informed management knowledge by means of systematic review. Br J Manag 14:207–222. https://doi.org/10.1111/1467-8551.00375
Tröster R, Hiete M (2018) Success of voluntary sustainability certification schemes: a comprehensive review. J Clean Prod 196:1034–1043. https://doi.org/10.1016/j.jclepro.2018.05.240
Wang Y, Han JH, Beynon-Davies P (2019) Understanding blockchain technology for future supply chains: a systematic literature review and research agenda. Supply Chain Manag 24:62–84. https://doi.org/10.1108/SCM-03-2018-0148
Webster J, Watson RT (2002) Analyzing the past to prepare for the future: writing a literature review. MIS Q 26:xiii–xxiii
Wiese A, Kellner J, Lietke B, Toporowski W, Zielke S (2012) Sustainability in retailing: a summative content analysis. Int J Retail Distrib Manag 40:318–335. https://doi.org/10.1108/09590551211211792
Xiao Y, Watson M (2019) Guidance on conducting a systematic literature review. J Plan Educ Res 39:93–112. https://doi.org/10.1177/0739456X17723971
Yavaprabhas K, Pournader M, Seuring S (2022) Blockchain as the “trust-building machine” for supply chain management. Ann Oper Res. https://doi.org/10.1007/s10479-022-04868-0
Zhu Q, Bai C, Sarkis J (2022) Blockchain technology and supply chains: the paradox of the atheoretical research discourse. Transp Res Part E Logist Transp Rev 164:102824. https://doi.org/10.1016/j.tre.2022.102824
Open Access funding enabled and organized by Projekt DEAL.
Authors and affiliations.
EM Strasbourg Business School, Université de Strasbourg, HuManiS UR 7308, 67000, Strasbourg, France
Philipp C. Sauer
Chair of Supply Chain Management, Faculty of Economics and Management, The University of Kassel, Kassel, Germany
The article is based on the idea and extensive experience of SS. The literature search and data analysis were mainly performed by PCS, supported by SS, before the manuscript was written and revised in a joint effort by both authors.
Correspondence to Stefan Seuring .
Conflict of interest.
The authors have no competing interests to declare that are relevant to the content of this article.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .
Reprints and Permissions
About this article
Sauer, P.C., Seuring, S. How to conduct systematic literature reviews in management research: a guide in 6 steps and 14 decisions. Rev Manag Sci 17, 1899–1933 (2023). https://doi.org/10.1007/s11846-023-00668-3
Received : 29 September 2022
Accepted : 17 April 2023
Published : 12 May 2023
Issue Date : July 2023
DOI : https://doi.org/10.1007/s11846-023-00668-3
- Research process
- Structured literature review
- Systematic literature review
Literature Reviews within a Scholarly Work
Literature Reviews as a Scholarly Work
Literature reviews summarize and analyze what has been written on a particular topic and identify gaps or disagreements in the scholarly work on that topic.
Within a scholarly work, the literature review situates the current work within the larger scholarly conversation and emphasizes how that particular scholarly work contributes to the conversation on the topic. The literature review portion may be as brief as a few paragraphs focusing on a narrow topic area.
When writing this type of literature review, it's helpful to start by identifying sources most relevant to your research question. A citation tracking database such as Web of Science can also help you locate seminal articles on a topic and find out who has more recently cited them. See "Your Literature Search" for more details.
A literature review may itself be a scholarly publication and provide an analysis of what has been written on a particular topic without contributing original research. These types of literature reviews can serve to help keep people updated on a field as well as helping scholars choose a research topic to fill gaps in the knowledge on that topic. Common types include:
Systematic literature reviews follow specific procedures in some ways similar to setting up an experiment to ensure that future scholars can replicate the same steps. They are also helpful for evaluating data published over multiple studies. Thus, these are common in the medical field and may be used by healthcare providers to help guide diagnosis and treatment decisions. Cochrane Reviews are one example of this type of literature review.
When a systematic review is not feasible, a semi-systematic review can help synthesize research on a topic or how a topic has been studied in different fields (Snyder 2019). Rather than focusing on quantitative data, this review type identifies themes, theoretical perspectives, and other qualitative information related to the topic. These types of reviews can be particularly helpful for a historical topic overview, for developing a theoretical model, and for creating a research agenda for a field (Snyder 2019). As with systematic reviews, a search strategy must be developed before conducting the review.
An integrative review is less systematic and can be helpful for developing a theoretical model or reconceptualizing a topic. As Snyder (2019) notes, "This type of review often requires a more creative collection of data, as the purpose is usually not to cover all articles ever published on the topic but rather to combine perspectives and insights from different fields or research traditions" (p. 336).
Source: Snyder, H. (2019). Literature review as a research methodology: An overview and guidelines. Journal of Business Research, 104, 333–339. https://doi.org/10.1016/j.jbusres.2019.07.039
- Systematic Review
- Published: 15 November 2023
Clinical characteristics indexing genetic differences in bipolar disorder – a systematic review
- Hanna M. van Loo ORCID: orcid.org/0000-0002-9282-8053 1 ,
- Ymkje Anna de Vries ORCID: orcid.org/0000-0003-4580-4873 2 ,
- Jacob Taylor 3 , 4 , 5 ,
- Luka Todorovic ORCID: orcid.org/0000-0002-6903-6517 1 , 2 ,
- Camille Dollinger 6 &
- Kenneth S. Kendler ORCID: orcid.org/0000-0001-8689-6570 7
Molecular Psychiatry (2023)
- Bipolar disorder
Bipolar disorder is a heterogeneous condition with a varied clinical presentation. While progress has been made in identifying genetic variants associated with bipolar disorder, most common genetic variants have not yet been identified. More detailed phenotyping (beyond diagnosis) may increase the chance of finding genetic variants. Our aim was therefore to identify clinical characteristics that index genetic differences in bipolar disorder.
We performed a systematic review of all genome-wide molecular genetic, family, and twin studies investigating familial/genetic influences on the clinical characteristics of bipolar disorder. We performed an electronic database search of PubMed and PsycInfo until October 2022. We reviewed titles/abstracts of 2693 unique records and full texts of 391 reports, identifying 445 relevant analyses from 142 different reports. These reports described 199 analyses from family studies, 183 analyses from molecular genetic studies and 63 analyses from other types of studies. We summarized the overall evidence per phenotype considering study quality, power, and number of studies.
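The record-handling funnel described above (database hits, deduplication to unique records, full-text review, included analyses) can be sketched as a small pipeline. The `Record` fields, the deduplication keys, and the sample data below are illustrative assumptions, not the authors' actual procedure or data.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    """One bibliographic record returned by a database search (hypothetical fields)."""
    doi: str
    title: str

def dedupe(records):
    """Collapse duplicates across databases: match on DOI when present,
    otherwise on a normalized title."""
    seen, unique = set(), []
    for r in records:
        key = r.doi.lower() if r.doi else r.title.strip().lower()
        if key not in seen:
            seen.add(key)
            unique.append(r)
    return unique

# Illustrative mini-corpus: the same report retrieved from two databases.
hits = [
    Record("10.1/abc", "Probiotics for eczema"),
    Record("10.1/abc", "Probiotics for eczema"),  # duplicate hit
    Record("", "Genetics of bipolar onset"),
]
unique = dedupe(hits)
print(len(hits), "records,", len(unique), "unique")  # 3 records, 2 unique
```

Screening then proceeds on `unique` only, so the counts reported at each stage of the funnel stay comparable across databases.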
We found moderate to strong evidence for a positive association of age at onset, subtype (bipolar I versus bipolar II), psychotic symptoms and manic symptoms with familial/genetic risk of bipolar disorder. Sex was not associated with overall genetic risk but could indicate qualitative genetic differences. Assessment of genetically relevant clinical characteristics of patients with bipolar disorder can be used to increase the phenotypic and genetic homogeneity of the sample in future genetic studies, which may yield more power, increase specificity, and improve understanding of the genetic architecture of bipolar disorder.
The funding organizations had no role in study design, data collection, analysis, or interpretation, or in the decision to submit the manuscript. Supported in part by the Stanley Center for Psychiatric Research. HL was supported by a VENI grant from the Talent Program of the Netherlands Organization of Scientific Research (NWO-ZonMW 09150161810021).
Authors and affiliations.
Department of Psychiatry and Interdisciplinary Center Psychopathology and Emotion regulation, University of Groningen, University Medical Center Groningen, Groningen, the Netherlands
Hanna M. van Loo & Luka Todorovic
Department of Child and Adolescent Psychiatry, University of Groningen, University Medical Center Groningen, Groningen, The Netherlands
Ymkje Anna de Vries & Luka Todorovic
Department of Psychiatry, Brigham and Women’s Hospital, Boston, MA, USA
Stanley Center for Psychiatric Research, Broad Institute of MIT and Harvard, Cambridge, MA, USA
Program in Medical and Population Genetics, Broad Institute of MIT and Harvard, Cambridge, MA, USA
Department of Epidemiology, Harvard T.H. Chan School of Public Health, Boston, MA, USA
Virginia Institute for Psychiatric and Behavioral Genetics and Department of Psychiatry, Virginia Commonwealth University, Richmond, VA, USA
Kenneth S. Kendler
HL, YV, JT, and KK contributed to the conception and design of the study. HL, YV, JT, LT, and CD contributed to the literature search and data extraction. HL, YV, and JT contributed to the analysis of the results. HL wrote the first draft of the manuscript, which was critically revised by all other authors. All authors approved the final draft of the manuscript.
Correspondence to Hanna M. van Loo .
The authors declare no competing interests.
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Supplementary material
Supplementary tables
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article.
van Loo, H.M., de Vries, Y.A., Taylor, J. et al. Clinical characteristics indexing genetic differences in bipolar disorder – a systematic review. Mol Psychiatry (2023). https://doi.org/10.1038/s41380-023-02297-4
Received : 01 May 2023
Revised : 29 September 2023
Accepted : 06 October 2023
Published : 15 November 2023
DOI : https://doi.org/10.1038/s41380-023-02297-4
- Systematic review
- Open access
- Published: 30 October 2023
Application of the Expert Recommendations for Implementing Change (ERIC) compilation of strategies to health intervention implementation in low- and middle-income countries: a systematic review
- Kathryn L. Lovero ORCID: orcid.org/0000-0001-6067-8663 1 na1 ,
- Christopher G. Kemp 2 na1 ,
- Bradley H. Wagenaar 3 , 4 ,
- Ali Giusto 5 ,
- M. Claire Greene 6 ,
- Byron J. Powell 7 , 8 , 9 &
- Enola K. Proctor 7 , 8
Implementation Science volume 18, Article number: 56 (2023)
The Expert Recommendations for Implementing Change (ERIC) project developed a compilation of implementation strategies intended to standardize their reporting and evaluation. Little is known about the application of ERIC in low- and middle-income countries (LMICs). We systematically reviewed the literature on the use and specification of ERIC strategies for health intervention implementation in LMICs to identify gaps and inform future research.
We searched for peer-reviewed articles published through March 2023 in any language that (1) were conducted in an LMIC and either (2) cited seminal ERIC articles or (3) mentioned ERIC in the title or abstract. Two co-authors independently screened all titles, abstracts, and full-text articles, then abstracted study, intervention, and implementation strategy characteristics of the included studies.
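The dual independent screening step can be sketched as follows. The record IDs and decisions are invented, and the disagreement rule is a generic assumption (the review does not specify here how the two co-authors resolved conflicts).

```python
# Two reviewers' independent include/exclude decisions, keyed by record ID.
reviewer_a = {"r1": True, "r2": False, "r3": True}
reviewer_b = {"r1": True, "r2": True, "r3": True}

# Records both reviewers agreed to include, and disagreements to resolve.
agreed_include = [r for r in reviewer_a if reviewer_a[r] and reviewer_b[r]]
needs_consensus = [r for r in reviewer_a if reviewer_a[r] != reviewer_b[r]]
print(agreed_include)   # ['r1', 'r3']
print(needs_consensus)  # ['r2']
```

Only the `needs_consensus` records would go to discussion or a third reviewer; the rest proceed directly to abstraction.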
The final sample included 60 studies describing research from all world regions, with over 30% published in the final year of our review period. Most studies took place in healthcare settings (n = 52, 86.7%), while 11 (18.2%) took place in community settings and four (6.7%) at the policy level. Across studies, 548 distinct implementation strategies were identified, with a median of six strategies (range 1–46) per study. Most studies (n = 32, 53.3%) explicitly matched the implementation strategies used to the ERIC compilation. Among those that did, 64 (87.3%) of the 73 ERIC strategies were represented. Many of the unused strategies were those targeting systems- or policy-level barriers. Nearly 85% of strategies included some component of specification, though most specified only the action (75.2%), actor (57.3%), and action target (60.8%). A minority of studies employed randomized trials or high-quality quasi-experimental designs; only one study evaluated implementation strategy effectiveness.
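Summary statistics of this kind (median and range of strategies per study, share of studies meeting a criterion) are simple to compute with the standard library. The counts below are invented stand-ins for the review's data, shown only to make the quantities concrete.

```python
import statistics

# Hypothetical number of ERIC strategies reported by each included study.
strategies_per_study = [1, 3, 6, 6, 8, 12, 46]

median_n = statistics.median(strategies_per_study)
low, high = min(strategies_per_study), max(strategies_per_study)
print(f"median {median_n}, range {low}-{high}")  # median 6, range 1-46

# Share of studies that explicitly matched their strategies to ERIC.
matched = [True, True, False, True, False, True, True]
pct = 100 * sum(matched) / len(matched)
print(f"{pct:.1f}% matched")  # 71.4% matched
```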
While ERIC use in LMICs is growing rapidly, its application has been inconsistent, and it has rarely been used to test strategy effectiveness. Research in LMICs must evaluate the impact of implementation strategies on outcomes, and the strategies that are tested must be specified in enough detail to allow comparison across contexts. Finally, strategies targeting policy-, systems-, and community-level determinants should be further explored.
Contributions to the literature
The ERIC compilation of implementation strategies has been widely adopted in high-income settings, but its usage and relevance across low- and middle-income countries (LMICs) have not been systematically explored.
This systematic review demonstrates that ERIC use is increasing in LMICs. Most individual ERIC strategies have been applied, though few targeted organizational- and policy-level change.
ERIC application was inconsistent and strategies were only partially specified; just one study tested strategy effectiveness.
Findings point to a need for training and resources to support specification and testing of implementation strategies in LMICs to build the evidence base on implementation strategy effectiveness across diverse settings.
The past two decades have been marked by rapid growth in the field of implementation science to address the large research-to-practice gap across contexts and health areas [ 1 ]. In recent years, the field’s focus has shifted from defining barriers and facilitators of implementing evidence-based practices to identifying strategies that effectively address and overcome these barriers. Implementation strategies are generally defined as the approaches or techniques used to enhance the adoption, implementation, sustainment, and scale-up of an evidence-based practice [ 2 , 3 ]. These strategies vary in complexity and can target determinants at the intervention-, patient-, provider-, organization-, community-, policy-, and funding levels [ 4 , 5 ].
Though the evidence base on implementation strategies is growing, current data on strategy effectiveness are mixed, with high variation in strategy effects observed across studies and outcomes [ 6 , 7 , 8 , 9 , 10 , 11 ]. Several factors may contribute to this variation. Certain strategies may simply be insufficient to improve implementation outcomes across contexts; alternatively, strategies may not have been appropriately matched to the contextual determinants or tailored to the setting [ 12 ]. However, reporting on implementation strategies often lacks the information needed to determine why a strategy was or was not effective, such as how the strategy was selected, adapted, and operationalized, and whether it was carried out as intended [ 7 , 13 , 14 , 15 , 16 , 17 ]. As such, calls for consistent, detailed reporting of implementation strategies have emerged in tandem with calls for increased research on strategy effectiveness [ 2 , 3 , 18 , 19 , 20 ].
To aid reporting efforts, the field has developed taxonomies of implementation strategies [ 21 , 22 ] and a methodology for specifying them [ 3 ]. One such taxonomy, from the Expert Recommendations for Implementing Change (ERIC) project, built upon a narrative review of the literature [ 23 ] and used a modified Delphi process to develop a compilation of 73 discrete implementation strategies [ 21 ], which can be further grouped into nine thematic clusters [ 24 ]. The ERIC compilation has become the most commonly used strategy taxonomy in the field of implementation science, with over 3000 citations. It has enabled a standardized language for naming implementation strategies that has been used to characterize implementation efforts in both prospective and retrospective analyses [ 25 , 26 , 27 , 28 , 29 , 30 ]. To support standardized strategy specification, Proctor et al. [ 3 ] developed guidelines to help stakeholders operationalize strategies along specific domains: the strategy's actor, action, action target, temporality, dose, implementation outcome affected, and justification. These specification guidelines are consistent with the Patient-Centered Outcomes Research Institute's Standards for Studies of Complex Interventions [ 31 ], and they have the potential not only to improve our understanding of implementation strategy mechanisms but also to clarify the parameters required for replication in other research and practice settings.
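The specification domains from Proctor et al.'s guidelines can be thought of as a simple record type: a strategy is fully specified when every domain is reported. The field names below follow the domains listed in the text; the example values and the completeness check are hypothetical illustrations, not content from the guidelines themselves.

```python
from dataclasses import dataclass, fields

@dataclass
class StrategySpecification:
    """One implementation strategy, specified along Proctor et al.'s domains."""
    name: str              # ERIC strategy name
    actor: str             # who enacts the strategy
    action: str            # what is done
    action_target: str     # who/what the action is directed at
    temporality: str       # when, relative to the implementation effort
    dose: str              # frequency and intensity
    outcome_affected: str  # implementation outcome expected to change
    justification: str     # rationale for selecting the strategy

# A hypothetical, fully specified strategy.
spec = StrategySpecification(
    name="Conduct ongoing training",
    actor="district supervisor",
    action="deliver refresher sessions on the clinical protocol",
    action_target="frontline health workers",
    temporality="monthly during the first year of implementation",
    dose="one 2-hour session per month",
    outcome_affected="fidelity",
    justification="baseline assessment showed low protocol knowledge",
)

# Flag any empty domains, mirroring the review's specification audit.
missing = [f.name for f in fields(spec) if not getattr(spec, f.name)]
print("fully specified" if not missing else f"missing: {missing}")  # fully specified
```

Abstracting each included study into records like this is one way to make the per-domain percentages reported in the results directly computable.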
In low- and middle-income countries (LMICs), implementation science has become a key tool for bridging a research-to-practice gap that is larger than in high-income countries [ 32 , 33 ]. As such, identifying effective implementation strategies for efficient, effective delivery of evidence-based practices is imperative in these settings. In recent years, formal implementation research in LMICs has expanded as funding sources increasingly recognize the utility of this work. There is now a growing evidence base for using implementation frameworks [ 34 , 35 ] and measures [ 36 , 37 ] in LMICs, and specific determinants of implementation in these settings have been identified [ 38 ]. However, little is known about effective implementation strategies in these settings [ 39 ].
The purpose of the present study is to report on the application of the ERIC compilation of implementation strategies in LMICs and provide recommendations to the field for improving its application moving forward. Our aims are twofold: (i) to systematically review the literature on the use of ERIC strategies in LMICs, including which specific strategies have been included in the research, how they were selected, how the strategies were used (i.e., specification and targeted intervention/health condition/population), and how they were adapted; and (ii) to assess evidence for the effectiveness of specific ERIC implementation strategies in LMICs.
We registered our systematic review protocol in the International Prospective Register of Systematic Reviews (PROSPERO # CRD42021268374) and followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines [ 40 ]. See Additional file 1 for the completed PRISMA checklist.
We searched PubMed, CINAHL, PsycINFO, EMBASE, SCOPUS, and Web of Science through March 27, 2023, to identify original peer-reviewed research in any language that cited (1) the original implementation strategies compilation paper [ 23 ], (2) the ERIC compilation [ 21 ], or (3) the strategy categorization [ 24 ], or that mentioned ‘Expert Recommendations for Implementing Change’ or ‘ERIC’ in the title or abstract, and that took place within an LMIC. LMIC classification was determined based on World Bank criteria [ 41 ]. The full search strategy for all databases is presented in Additional file 2 .
Covidence was used to remove duplicate studies and to conduct study screening [ 42 ]. Two authors from a team of five (KL, CK, CG, AG, and BW) independently screened all titles, abstracts, and full-text articles, noting reasons for exclusion during full-text review. Studies passed the title/abstract screening stage if the title or abstract referenced the implementation of a health-related intervention and if it was possible that the study had been conducted in an LMIC. Studies passed the full-text screening stage if all of the above criteria were met and the study described the use of the ERIC compilation in implementation strategy selection, development, or classification (i.e., manuscripts that cited ERIC—for example, in the introduction or discussion—without indicating its application to the study strategies were excluded). Discrepancies in eligibility assessments were resolved through discussion until consensus was reached.
Five authors (KL, CK, CG, AG, and BW) independently piloted a structured abstraction form, with two studies each, using a shared Google Sheets spreadsheet; all co-authors reviewed, critiqued, and approved the form. One of two authors (CK or KL) then abstracted the study, intervention, and implementation strategy characteristics for the remaining studies (Additional file 3 ), while the other author verified each abstraction; any disagreements were resolved through discussion.
At the study level, we abstracted study settings, objectives, design, and methods; whether the manuscript reported a study protocol or study results; any implementation research frameworks used; years of data collection; study populations; implementation outcomes reported; patient health and other outcomes reported; study limitations; and conclusions or lessons learned. We noted the types of independent variables represented in each study (i.e., intervention, implementation strategy, or context) based on which were systematically varied. Within each study, we also collected intervention names, intervention descriptions, associated health conditions, and target populations.
We then abstracted the discrete implementation strategies used and described in each study. At the implementation strategy level, we included descriptions of each strategy and noted whether the strategies were explicitly mapped to the ERIC strategy compilation or to other strategy taxonomies (e.g., the Behavior Change Wheel [ 43 ]). We then abstracted each of the components of implementation strategy specification [ 3 ]: actor, action, action target, temporality, dose, implementation outcome(s) affected, and justification. We also noted any description of the hypothesized mechanism of action [ 19 , 44 ], any description of adaptations to the implementation strategy [ 18 , 45 ], and any assessment of implementation strategy fidelity [ 46 ]. Finally, we noted whether implementation strategy effect estimates were reported. Risk of bias was not assessed because only one study evaluated strategy effectiveness, and thus no meta-analysis of effectiveness was conducted.
Percentages were calculated for all categorical variables; these were used to summarize study, intervention, and implementation strategy characteristics. Quantitative meta-analysis of study findings was not possible given the heterogeneity in research questions and outcomes as well as the insufficient numbers of studies evaluating implementation strategy effects.
The database search yielded 659 articles, of which 441 were duplicates. We screened the remaining 218 titles and abstracts and excluded 88, leaving 130 for full-text review. Of these, 70 were excluded: 41 did not use ERIC (i.e., they only cited ERIC in the manuscript introduction or discussion, without application to the present study), 12 did not take place in an LMIC, 11 did not meet multiple inclusion criteria, 5 were not peer-reviewed, and 1 was not primary research (Fig. 1 ).
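As a quick arithmetic check, the screening flow reduces to simple subtraction at each stage. The following minimal sketch (variable names are ours; the counts are those reported above) reproduces the stage totals:

```python
# Reproduce the PRISMA screening arithmetic reported above.
identified = 659                      # records from the database search
duplicates = 441
screened = identified - duplicates    # titles/abstracts screened

excluded_at_screening = 88
full_text = screened - excluded_at_screening

# Full-text exclusions, by reason
full_text_exclusions = {
    "did not use ERIC": 41,
    "not conducted in an LMIC": 12,
    "did not meet multiple inclusion criteria": 11,
    "not peer-reviewed": 5,
    "not primary research": 1,
}
included = full_text - sum(full_text_exclusions.values())

print(screened, full_text, included)  # 218 130 60
```

Each stage total matches the counts reported in the text and in Fig. 1.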
Fig. 1 PRISMA 2020 flow diagram of the systematic review
The final sample included 60 studies (Table 1 ; see Additional file 3 for individual study characteristics), all published in English. The first study using ERIC in an LMIC was published in 2016, and the number of studies using ERIC in LMICs increased over time, from 1 in 2016 to 20 in 2022. Studies included data collected in all six WHO regions, with the majority conducted in the African region ( n = 36, 60.0%). Most studies focused on healthcare settings ( n = 52, 86.7%), while 11 (18.3%) focused on community settings and four (6.7%) on policy-level settings. The most common health conditions targeted were infectious diseases ( n = 19, 31.7%), maternal and child health ( n = 10, 16.7%), and mental health and substance use ( n = 10, 16.7%). Two studies did not focus on a specific health condition, but rather on strategies for implementing clinical trial recruitment and building a national research framework.
Nearly three-quarters of studies described empirical research ( n = 43, 71.7%), with the remainder being protocols for studies not yet completed ( n = 17, 28.3%). Study populations included patients ( n = 31, 51.7%), providers ( n = 43, 71.7%), policymakers ( n = 14, 23.3%), community members ( n = 8, 13.3%), and researchers ( n = 6, 10.0%); 34 (56.7%) studies included more than one of these populations. Nineteen (31.7%) articles described formative implementation strategy design and nine (15.0%) described retrospective strategy specification. The majority ( n = 36, 60%) of studies included an impact evaluation, the most common designs being quasi-experimental with no control ( n = 13, 21.7%) and cluster randomized controlled trials ( n = 9, 15.0%). Of the 17 studies that included a control, n = 9 tested the intervention and n = 7 tested the implementation strategy. Implementation frameworks were used in 47 (78.3%) studies, with 36 (60.0%) citing implementation determinant frameworks, 24 (40.0%) citing evaluation frameworks, and seven (11.7%) citing a process framework. A total of 44 (73.3%) studies evaluated implementation outcomes, most commonly adoption ( n = 29), acceptability ( n = 27), and fidelity ( n = 26); 34 (56.7%) studies evaluated multiple implementation outcomes. Under half of studies ( n = 25, 41.7%) evaluated health outcomes.
Across the 60 studies, 548 strategies were proposed, of which 282 (51.5%) were planned for future delivery and 266 (48.5%) had already been delivered within the study. The total number of strategies described per study ranged from 1 to 46 (median = 6). Despite all 60 studies referencing the use of ERIC, just 32 (53.3%) explicitly matched the implementation strategies used in the study to specific ERIC strategies (Table 2 ). One study described strategy development as guided by other frameworks in addition to ERIC [ 47 , 48 , 49 ]; all other studies cited only ERIC as the guiding framework. The most commonly utilized ERIC strategies were (1) conduct educational meetings ( n = 16, 2.9% of all 548 strategies proposed across studies); (2) audit and provide feedback ( n = 15, 2.7%); (3) assess for readiness and identify barriers and facilitators ( n = 13, 2.4%); and (4) build a coalition ( n = 12, 2.2%). Of the 73 ERIC strategies, 9 (12.3%) were not cited at all: altering patient/consumer fees, changing liability laws, developing disincentives, making billing easier, preparing patients/consumers to be active participants, starting a dissemination organization, using capitated payments, using other payment schemes, and visiting other sites. Strategies from all ERIC strategy categories were described, though 5 of 9 (55.6%) strategies in the Utilize Financial Strategies category were not used.
Eight (13.3%) studies did not include any component of strategy specification (i.e., they named strategies but did not describe them at all), representing 88 (16.1%) of the 548 strategies described across studies. Among the strategies that were specified (Table 3 ), the most commonly described components were action ( n = 412, 75.2%), action target ( n = 333, 60.8%), and actor ( n = 314, 57.3%). The study team itself comprised the majority of actors specified for implementation strategies, accounting for 185 (58.9%) of the 314 actors specified; action targets were most commonly healthcare providers, accounting for 196 (58.9%) of the 333 action targets specified. The least commonly specified components were fidelity ( n = 17, 3.1%), adaptation ( n = 100, 18.2%), action mechanism ( n = 129, 23.5%), and targeted implementation outcomes ( n = 148, 27.1%). Only 12 (20.0%) studies described adaptation of implementation strategies: two (3.3%) described the use of implementation mapping and one (1.7%) the use of human-centered design to guide adaptation, while nine (15.0%) described more general stakeholder engagement without the use of a specific framework for the process. Only one (0.5%) strategy was tested for independent effects: Gachau et al. found that audit and feedback significantly improved 24 of 34 indicators of pediatric guideline adherence in Kenya [ 50 ].
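Note that the tallies above mix two denominators: component percentages are computed against all 548 strategies, while the actor and action-target breakdowns use the component-specific subtotal. A minimal sketch (variable names are ours, not from the review) reproducing the reported figures:

```python
# Percentages in the specification tallies use two different denominators.
TOTAL_STRATEGIES = 548  # strategies described across the 60 studies

# Specification components, counted across all 548 strategies
components = {
    "action": 412,
    "action target": 333,
    "actor": 314,
    "fidelity": 17,
    "adaptation": 100,
    "action mechanism": 129,
}
pct = {name: round(100 * n / TOTAL_STRATEGIES, 1) for name, n in components.items()}
# e.g., pct["action"] -> 75.2, pct["actor"] -> 57.3, pct["fidelity"] -> 3.1

# Subgroup breakdowns use the component's own subtotal as the denominator
study_team_actor_pct = round(100 * 185 / components["actor"], 1)         # 58.9
provider_target_pct = round(100 * 196 / components["action target"], 1)  # 58.9
```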
In the present systematic review, we identified 60 studies that cited the use of ERIC strategies in LMIC settings. These studies included data from all WHO regions and focused on diverse health issues, with over 35% published in the final year of our review period, indicating a growing application of ERIC in LMICs. However, just over half explicitly matched the implementation strategies employed to an ERIC strategy, and 16% of strategies did not include any components of strategy specification. Moreover, a minority of studies employed randomized trials or high-quality quasi-experimental designs with controls, and just one study evaluated implementation strategy effectiveness.
While nearly half of the included studies did not explicitly match the implementation strategies used to a specific ERIC strategy, several notable points arose from the ERIC strategies that were reported. First, a wide variety of ERIC strategies were identified, with 88% of the 73 ERIC strategies represented. This suggests that almost all ERIC strategies can be applied in LMIC contexts. Yet several strategies that seem critical to implementation in LMIC contexts—including capturing and sharing local knowledge, conducting local needs assessments, providing local technical assistance, and tailoring strategies—were rarely cited in the included studies. This may be because these strategies were considered irrelevant or incompatible [ 35 ], redundant with other ERIC strategies [ 34 ], or part of routine processes for implementing health interventions [ 51 , 52 , 53 ] rather than discrete implementation strategies. Additionally, other ERIC strategies rarely, if ever, cited included those requiring systems-level changes—such as changing fees/incentives/billing, changing policy, and enhancing local data systems and analysis—despite data suggesting that organizational- and policy-level strategies are effective [ 54 ] and a critical component of closing the research-to-practice gap [ 55 ]. Finally, a number of the unused strategies related to privatized health systems (e.g., alter consumer fees, change liability laws, use other payment schemes), which are less common in LMICs than centralized, public health systems; these strategies may not be applicable in LMIC contexts or may require adaptation. Further research is needed to explore why certain ERIC strategies have not been used in LMICs and whether additional implementation strategies relevant to LMIC settings are missing from the ERIC compilation.
Strengthening the evidence base for implementation strategies requires that their operationalization be reported in detail and that modifications to implementation strategies be systematically documented. While the included studies’ lack of ERIC strategy matching complicates interpretation of specific ERIC strategies’ applicability to LMICs in this review, strategy matching itself may not be a requirement for understanding if and how certain implementation strategies are effective. However, implementation strategy components need to be specified in a way that allows their replication in research and practice and comparison across contexts [ 56 , 57 ]. Among the studies reviewed here, all but eight included some type of strategy specification. However, most strategies only included specification of their action (75%), action target (61%), and actor (57%). Strategy details such as temporality and dose are necessary to replicate strategies in further testing. Moreover, mechanisms of action and fidelity, described for just 24% and 3% of strategies, respectively, are required for generating theory and selecting strategies that appropriately target contextual determinants [ 19 ]. Also concerning is that few strategies (18%) included a description of their adaptation process, which is likely necessary to meet the specific needs of the study context [ 2 , 58 , 59 , 60 ]. Of the strategies that did include specification, most relied heavily on providers or research teams as actors and providers as action targets. This echoes our finding that few strategies addressed the organizational and policy levels and further highlights the dearth of research on implementation strategies that support population-level health improvements, as previously observed in high-income [ 61 ] and LMIC settings [ 39 ]. Moreover, the reliance on research team members as strategy actors threatens the generalizability of the implementation strategies used in these studies to real-world settings.
Finally, though we had hoped to assess ERIC strategy effectiveness in LMICs, just one strategy was evaluated in the studies included in our review. This is likely related to few studies defining strategy justification, mechanisms, and targeted implementation outcomes in their research. Instead, most studies examined implementation outcomes without directly linking them to an identified determinant and a theory of change, precluding evaluation of an individual strategy’s impact. Research teams may have lacked the resources to conduct a randomized controlled trial to rigorously test a strategy, as fewer than 20% of studies employed this design. However, while randomized controlled trials are the gold standard for research on intervention effectiveness, research on implementation strategies can successfully employ alternative designs, including interrupted time series, factorial, adaptive, and rollout designs [ 62 , 63 ]. These designs can provide more flexibility and feasibility in resource-limited settings while maximizing external validity [ 64 , 65 ].
The present study has several limitations. First, we focused only on articles that used the ERIC compilation to inform study strategies, not on more broadly defined implementation strategies. Therefore, our results may not be generalizable to all implementation strategy research in LMICs. We chose to focus on ERIC strategies because it is the taxonomy most commonly used in the field of implementation science, with over 3300 citations for the original [ 23 ] and refined [ 21 ] ERIC strategy papers, and because it was developed through expert review of existing strategy compilations and reviews. While we recognize that this biases our sample toward research connected to non-LMIC academics, we note that around two-thirds of included studies were published in the final 2 years of our review period, suggesting that the use of ERIC is becoming more widespread in LMICs and highlighting the need to understand how to promote improved application and testing of these strategies. Second, we only included articles published in peer-reviewed journals. Owing to factors such as publication cost and the language of these journals, this likely biased our findings toward international and well-funded research. Further research is needed to explore non-ERIC strategy application in LMICs across languages and in the grey literature. Third, as we included study protocols in our sample, we cannot say with certainty that all proposed strategies will be applied in the research phase. However, the objective of this review was to provide a comprehensive description of the current state of a rapidly growing field (e.g., 35% of included papers were published in the last year, and 28% of studies were still in the protocol phase), and including study protocols allowed us to capture the most recent data. Finally, as many strategies employed were not matched to ERIC by the study authors, we were unable to draw conclusions about which ERIC strategies may be most relevant in LMICs.
However, the strategies that were matched indicate that individual ERIC strategies are applicable in LMICs.
This systematic review demonstrated the broad and growing use of the ERIC strategy taxonomy in LMICs, alongside inconsistency in its application and very limited testing of ERIC strategy effectiveness. Moving forward, we offer the following recommendations to promote the development of implementation strategies that can more rapidly close the research-to-practice gap in LMICs. First, research in LMICs must move beyond merely describing strategies to evaluating their effects on implementation outcomes. Moreover, strategies that are tested need to be better specified so that their effectiveness can be compared across studies and contexts and their mechanisms of change understood. Researchers should also consider reporting how a strategy would be deployed under routine, non-research conditions, to promote application beyond the study period. Finally, strategies targeting policy, organizational, and community-level determinants should be explored to encourage change that supports scale-up and sustainability of individual, research-based implementation efforts in LMICs. To catalyze these lines of research, there is a need for greater capacity-building so that researchers in LMICs can gain training in implementation research. This training should directly involve and/or emphasize methods for engaging diverse local stakeholders, such as policymakers and community members, who may be better situated to develop and implement strategies at the systems level. Research funders, governments, and other implementers should consider encouraging work that includes each of these components.
Availability of data and materials
All articles included in this systematic review are publicly available. The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
Abbreviations
LMIC: Low- and middle-income countries
ERIC: Expert Recommendations for Implementing Change
WHO: World Health Organization
Shelton RC, Lee M, Brotzman LE, Wolfenden L, Nathan N, Wainberg ML. What is dissemination and implementation science?: An introduction and opportunities to advance behavioral medicine and public health globally. Int J Behav Med. 2020;27(1):3–20.
Powell BJ, Fernandez ME, Williams NJ, Aarons GA, Beidas RS, Lewis CC, et al. Enhancing the impact of implementation strategies in healthcare: a research agenda. Front Public Health. 2019;7:3.
Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8(1):139.
Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. 2011;38(1):4–23.
Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50.
Ariyo P, Zayed B, Riese V, Anton B, Latif A, Kilpatrick C, et al. Implementation strategies to reduce surgical site infections: A systematic review. Infect Control Hosp Epidemiol. 2019;40(3):287–300.
Baker R, Camosso-Stefinovic J, Gillies C, Shaw EJ, Cheater F, Flottorp S, et al. Tailored interventions to address determinants of practice. Cochrane Database Syst Rev. 2015;(4).
Grimshaw JM, Eccles MP, Lavis JN, Hill SJ, Squires JE. Knowledge translation of research findings. Implement Sci. 2012;7(1):50.
Jones LK, Tilberry S, Gregor C, Yaeger LH, Hu Y, Sturm AC, et al. Implementation strategies to improve statin utilization in individuals with hypercholesterolemia: a systematic review and meta-analysis. Implement Sci. 2021;16(1):40.
Mills KT, Obst KM, Shen W, Molina S, Zhang H-J, He H, et al. Comparative effectiveness of implementation strategies for blood pressure control in hypertensive patients: a systematic review and meta-analysis. Ann Intern Med. 2018;168(2):110–20.
Powell BJ, Proctor EK, Glass JE. A Systematic Review of Strategies for Implementing Empirically Supported Mental Health Interventions. Res Soc Work Pract. 2014;24(2):192–212.
Powell BJ, Beidas RS, Lewis CC, Aarons GA, McMillen JC, Proctor EK, et al. Methods to improve the selection and tailoring of implementation strategies. J Behav Health Serv Res. 2017;44(2):177–94.
Lewis CC, Boyd MR, Walsh-Bailey C, Lyon AR, Beidas R, Mittman B, et al. A systematic review of empirical studies examining mechanisms of implementation in health. Implement Sci. 2020;15(1):21.
Nadeem E, Olin SS, Hill LC, Hoagwood KE, Horwitz SM. Understanding the components of quality improvement collaboratives: a systematic literature review. Milbank Q. 2013;91(2):354–94.
Prior M, Guerin M, Grimmer-Somers K. The effectiveness of clinical guideline implementation strategies–a synthesis of systematic review findings. J Eval Clin Pract. 2008;14(5):888–97.
Varsi C, Solberg Nes L, Kristjansdottir OB, Kelders SM, Stenberg U, Zangi HA, et al. Implementation Strategies to Enhance the Implementation of eHealth Programs for Patients With Chronic Illnesses: Realist Systematic Review. J Med Internet Res. 2019;21(9):e14255.
Wensing M, Bosch M, Grol R. Selecting, tailoring, and implementing knowledge translation interventions. Knowl Transl Health Care. 2009;94:113.
Haley AD, Powell BJ, Walsh-Bailey C, Krancari M, Gruß I, Shea CM, et al. Strengthening methods for tracking adaptations and modifications to implementation strategies. BMC Med Res Methodol. 2021;21(1):1–12.
Lewis CC, Klasnja P, Powell B, Tuzzio L, Jones S, Walsh-Bailey C, et al. From classification to causality: advancing understanding of mechanisms of change in implementation science. Front Public Health. 2018;6:136.
Wilson PM, Sales A, Wensing M, Aarons GA, Flottorp S, Glidewell L, et al. Enhancing the reporting of implementation research. Implement Sci. 2017;12:13.
Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10(1):21.
Slaughter SE, Zimmermann GL, Nuspl M, Hanson HM, Albrecht L, Esmail R, et al. Classification schemes for knowledge translation interventions: a practical resource for researchers. BMC Med Res Methodol. 2017;17(1):161.
Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, et al. A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev. 2012;69(2):123–57.
Waltz TJ, Powell BJ, Matthieu MM, Damschroder LJ, Chinman MJ, Smith JL, et al. Use of concept mapping to characterize relationships among implementation strategies and assess their feasibility and importance: results from the Expert Recommendations for Implementing Change (ERIC) study. Implement Sci. 2015;10(1):1–8.
Boyd MR, Powell BJ, Endicott D, Lewis CC. A method for tracking implementation strategies: an exemplar implementing measurement-based care in community behavioral health clinics. Behav Ther. 2018;49(4):525–37.
Bunger AC, Powell BJ, Robertson HA, MacDowell H, Birken SA, Shea C. Tracking implementation strategies: a description of a practical approach and early findings. Health Res Policy Syst. 2017;15(1):1–12.
Huynh AK, Hamilton AB, Farmer MM, Bean-Mayberry B, Stirman SW, Moin T, et al. A pragmatic approach to guide implementation evaluation research: strategy mapping for complex interventions. Front Public Health. 2018;6:134.
Perry CK, Damschroder LJ, Hemler JR, Woodson TT, Ono SS, Cohen DJ. Specifying and comparing implementation strategies across seven large implementation interventions: a practical application of theory. Implement Sci. 2019;14(1):1–13.
Yakovchenko V, Miech EJ, Chinman MJ, Chartier M, Gonzalez R, Kirchner JE, et al. Strategy configurations directly linked to higher hepatitis C virus treatment starts: an applied use of configurational comparative methods. Med Care. 2020;58(5).
Yakovchenko V, Morgan TR, Chinman MJ, Powell BJ, Gonzalez R, Park A, et al. Mapping the road to elimination: a 5-year evaluation of implementation strategies associated with hepatitis C treatment in the veterans health administration. BMC Health Serv Res. 2021;21(1):1348.
Patient-Centered Outcomes Research Institute. Standards for Studies of Complex Interventions. 2019. Available from: https://www.pcori.org/research/about-our-research/research-methodology/pcori-methodology-standards .
Patel V, Saxena S, Lund C, Thornicroft G, Baingana F, Bolton P, et al. The Lancet Commission on global mental health and sustainable development. Lancet. 2018;392(10157):1553–98.
Theobald S, Brandes N, Gyapong M, El-Saharty S, Proctor E, Diaz T, et al. Implementation research: new imperatives and opportunities in global health. Lancet. 2018;392(10160):2214–28.
Lovero KL, Dos Santos PF, Adam S, Bila C, Fernandes ME, Kann B, et al. Leveraging stakeholder engagement and virtual environments to develop a strategy for implementation of adolescent depression services integrated within primary care clinics of Mozambique. Front Public Health. 2022;10.
Means AR, Kemp CG, Gwayi-Chore M-C, Gimbel S, Soi C, Sherr K, et al. Evaluating and optimizing the consolidated framework for implementation research (CFIR) for use in low- and middle-income countries: a systematic review. Implement Sci. 2020;15(1):17.
Aldridge LR, Kemp CG, Bass JK, Danforth K, Kane JC, Hamdani SU, et al. Psychometric performance of the Mental Health Implementation Science Tools (mhIST) across six low- and middle-income countries. Implement Sci Commun. 2022;3(1):54.
Haroz EE, Bolton P, Nguyen AJ, Lee C, Bogdanov S, Bass J, et al. Measuring implementation in global mental health: validation of a pragmatic implementation science measure in eastern Ukraine using an experimental vignette design. BMC Health Serv Res. 2019;19(1):262.
Esponda GM, Hartman S, Qureshi O, Sadler E, Cohen A, Kakuma R. Barriers and facilitators of mental health programmes in primary care in low-income and middle-income countries. Lancet Psychiatry. 2019.
Wagenaar BH, Hammett WH, Jackson C, Atkins DL, Belus JM, Kemp CG. Implementation outcomes and strategies for depression interventions in low- and middle-income countries: a systematic review. Glob Ment Health (Cambridge, England). 2020;7:e7.
Page MJ, McKenzie JE, Bossuyt PM, Boutron I, Hoffmann TC, Mulrow CD, et al. The PRISMA 2020 statement: an updated guideline for reporting systematic reviews. Syst Rev. 2021;10(1):1–11.
The World Bank. World Bank Open Data. Washington, DC: The World Bank; 2022. Available from: https://data.worldbank.org/ .
Covidence systematic review software. Melbourne, Australia: Veritas Health Innovation. Available at www.covidence.org .
Michie S, Atkins L, West R. The behaviour change wheel: a guide to designing interventions. 1st ed. Great Britain: Silverback Publishing; 2014.
Geng EH, Baumann AA, Powell BJ. Mechanism mapping to advance research on implementation strategies. PLoS Med. 2022;19(2):e1003918.
Miller CJ, Barnett ML, Baumann AA, Gutner CA, Wiltsey-Stirman S. The FRAME-IS: a framework for documenting modifications to implementation strategies in healthcare. Implement Sci. 2021;16(1):36.
Akiba CF, Powell BJ, Pence BW, Nguyen MXB, Golin C, Go V. The case for prioritizing implementation strategy fidelity measurement: benefits and challenges. Transl Behav Med. 2021;12(2):335–42.
Cane J, O’Connor D, Michie S. Validation of the theoretical domains framework for use in behaviour change and implementation research. Implement Sci. 2012;7(1):37.
Merrill MD. First principles of instruction. In: Reigeluth CM, Carr-Chellman A, editors. Instructional-design theories and models: building a common knowledge base (Vol. III). New York: Routledge; 2009.
Michie S, van Stralen MM, West R. The behaviour change wheel: A new method for characterising and designing behaviour change interventions. Implement Sci. 2011;6(1):42.
Gachau S, Ayieko P, Gathara D, Mwaniki P, Ogero M, Akech S, et al. Does audit and feedback improve the adoption of recommended practices? Evidence from a longitudinal observational study of an emerging clinical network in Kenya. BMJ Glob Health. 2017;2(4):e000468.
Fernandez ME, ten Hoor GA, van Lieshout S, Rodriguez SA, Beidas RS, Parcel G, et al. Implementation mapping: using intervention mapping to develop implementation strategies. Front Public Health. 2019;7:158.
Applied Mental Health Research (AMHR) Group. Design, implementation, monitoring, and evaluation of mental health and psychosocial assistance programs for trauma survivors in low resource countries: a user's manual for researchers and program implementers. United States: Johns Hopkins University Bloomberg School of Public Health; 2013.
Peters DH, Tran NT, Adam T. Implementation research in health: a practical guide: World Health Organization. 2013.
Sarkies MN, Bowles K-A, Skinner EH, Haas R, Lane H, Haines TP. The effectiveness of research implementation strategies for promoting evidence-informed policy and management decisions in healthcare: a systematic review. Implement Sci. 2017;12(1):132.
Purtle J, Nelson KL, Counts NZ, Yudell M. Population-based approaches to mental health: history, strategies, and evidence. Annu Rev Public Health. 2020;41:201–21.
Kirchner JE, Smith JL, Powell BJ, Waltz TJ, Proctor EK. Getting a clinical innovation into practice: an introduction to implementation strategies. Psychiatry Res. 2020;283:112467.
Michie S, Fixsen D, Grimshaw JM, Eccles MP. Specifying and reporting complex behaviour change interventions: the need for a scientific method. BioMed Central; 2009. 1–6.
Eisman AB, Hutton DW, Prosser LA, Smith SN, Kilbourne AM. Cost-effectiveness of the Adaptive Implementation of Effective Programs Trial (ADEPT): approaches to adopting implementation strategies. Implement Sci. 2020;15(1):109.
Geng EH, Mody A, Powell BJ. On-the-Go Adaptation of Implementation Approaches and Strategies in Health: Emerging Perspectives and Research Opportunities. Annu Rev Public Health. 2023;44(1):21–36.
Quanbeck A, Brown RT, Zgierska AE, Jacobson N, Robinson JM, Johnson RA, et al. A randomized matched-pairs study of feasibility, acceptability, and effectiveness of systems consultation: a novel implementation strategy for adopting clinical guidelines for opioid prescribing in primary care. Implement Sci. 2018;13(1):1–13.
Colquhoun HL, Squires JE, Kolehmainen N, Fraser C, Grimshaw JM. Methods for designing interventions to change healthcare professionals’ behaviour: a systematic review. Implement Sci. 2017;12(1):30.
Brown CH, Curran G, Palinkas LA, Aarons GA, Wells KB, Jones L, et al. An overview of research and evaluation designs for dissemination and implementation. Annu Rev Public Health. 2017;38(1):1–22.
Handley MA, Lyles CR, McCulloch C, Cattamanchi A. Selecting and improving quasi-experimental designs in effectiveness and implementation research. Annu Rev Public Health. 2018;39(1):5–25.
Mazzucca S, Tabak RG, Pilar M, Ramsey AT, Baumann AA, Kryzer E, et al. Variation in research designs used to test the effectiveness of dissemination and implementation strategies: a review. Frontiers in Public Health. 2018;6.
Wagenaar BH, Sherr K, Fernandes Q, Wagenaar AC. Using routine health information systems for well-designed health evaluations in low-and middle-income countries. Health Policy Plan. 2016;31(1):129–35.
KLL, ACG, and MGC were supported by a grant from the National Institute of Mental Health (K01 MH120285, K23 MH128742, K01MH129572). BJP was supported in part through grants from the National Institute of Mental Health (R25MH080916, P50MH126219, R01MH124914), National Institute on Alcohol Abuse and Alcoholism (R01AA030480), National Institute on Drug Abuse (R01DA047876, P50DA054072), National Institute of Child Health and Human Development (R01HD103902), National Cancer Institute (P50CA19006, R01CA262325), National Heart, Lung, and Blood Institute (U24HL154426, R01HL157255), and the Agency for Healthcare Research and Quality (R13HS025632). Funders did not play any role in the study design, data collection, analysis, interpretation, or writing the manuscript.
Kathryn L. Lovero and Christopher G. Kemp contributed equally to this work.
Authors and Affiliations
Department of Sociomedical Sciences, Columbia University Mailman School of Public Health, New York, NY, USA
Kathryn L. Lovero
Department of International Health, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, USA
Christopher G. Kemp
Department of Global Health, University of Washington, Seattle, WA, USA
Bradley H. Wagenaar
Department of Epidemiology, University of Washington, Seattle, WA, USA
Department of Psychiatry, Columbia University Irving Medical Center, New York State Psychiatric Institute, New York, NY, USA
Program On Forced Migration and Health, Heilbrunn Department of Population and Family Health, Columbia University Mailman School of Public Health, New York, NY, USA
M. Claire Greene
Brown School, Center for Mental Health Services Research, Washington University in St. Louis, St. Louis, MO, USA
Byron J. Powell & Enola K. Proctor
Center for Dissemination & Implementation, Institute for Public Health, Washington University in St. Louis, St. Louis, MO, USA
Division of Infectious Diseases, John T. Milliken Department of Medicine, School of Medicine, Washington University in St. Louis, St. Louis, MO, USA
Byron J. Powell
KLL, CGK, MCG, BHW, and AG conceived of the review. CGK and KLL contributed to the study design; assisted with article screening, data extraction, and interpretation of results; and drafted the manuscript. MCG, BHW, and AG contributed to the study design; assisted with article screening, data extraction, and interpretation of results; and provided feedback on manuscript drafts. BJP and EKP provided guidance on study design and interpretation of results as well as feedback on manuscript drafts. All authors read and approved the final manuscript.
Correspondence to Kathryn L. Lovero .
Ethics approval and consent to participate
Consent for publication
Competing interests
The authors declare that they have no competing interests.
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Additional file 1.
PRISMA 2020 checklist.
Additional file 2.
ERIC LMIC search protocol 2023.07.15R2.
Additional file 3.
Study descriptive information.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
About this article
Cite this article.
Lovero, K.L., Kemp, C.G., Wagenaar, B.H. et al. Application of the Expert Recommendations for Implementing Change (ERIC) compilation of strategies to health intervention implementation in low- and middle-income countries: a systematic review. Implementation Sci 18, 56 (2023). https://doi.org/10.1186/s13012-023-01310-2
Received: 14 March 2023
Accepted: 02 October 2023
Published: 30 October 2023
DOI: https://doi.org/10.1186/s13012-023-01310-2
- Implementation strategy
- Strategy specification
- Global health
SYSTEMATIC REVIEW article
Machine learning-based model for predicting inpatient mortality in adults with traumatic brain injury: a systematic review and meta-analysis
- 1 Fujian, China
- 2 The Second Affiliated Hospital of Fujian Medical University, China
The final, formatted version of the article will be published soon.
Background and objective: Predicting mortality from traumatic brain injury facilitates early, data-driven treatment decisions. A growing number of studies have used machine learning to predict mortality from traumatic brain injury, and the aim of this study was to conduct a meta-analysis of machine learning models for predicting mortality from traumatic brain injury.

Methods: This systematic review and meta-analysis included searches of PubMed, Web of Science, and Embase from inception to June 2023, supplemented by manual searches of study references and review articles. Data were analyzed using Stata 16.0 software. This study is registered with PROSPERO (CRD2023440875).

Results: A total of 14 studies were included. The studies showed significant differences in overall sample, model type, and model validation. The predictive models performed well, with a pooled AUC of 0.90 (95% CI: 0.87 to 0.92).

Conclusion: This study highlights the strong predictive capabilities of machine learning models in determining mortality following traumatic brain injury. However, the optimal machine learning modelling approach has not yet been identified.
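A pooled AUC with a 95% confidence interval, as reported in this abstract, is typically obtained by inverse-variance meta-analysis. The sketch below illustrates generic DerSimonian-Laird random-effects pooling; the per-study AUCs and standard errors are hypothetical placeholders, not data from the 14 included studies, and this is not the authors' Stata code.

```python
import math

# Hypothetical per-study AUCs and standard errors (illustrative only;
# not the values extracted in this review).
aucs = [0.88, 0.92, 0.86, 0.91]
ses = [0.02, 0.03, 0.04, 0.02]

# Fixed-effect inverse-variance weights and pooled estimate.
w = [1 / se**2 for se in ses]
auc_fe = sum(wi * a for wi, a in zip(w, aucs)) / sum(w)

# DerSimonian-Laird estimate of between-study variance (tau^2),
# truncated at zero when heterogeneity Q is below its degrees of freedom.
k = len(aucs)
q = sum(wi * (a - auc_fe) ** 2 for wi, a in zip(w, aucs))
c = sum(w) - sum(wi**2 for wi in w) / sum(w)
tau2 = max(0.0, (q - (k - 1)) / c)

# Random-effects weights, pooled AUC, and 95% CI.
w_re = [1 / (se**2 + tau2) for se in ses]
auc_re = sum(wi * a for wi, a in zip(w_re, aucs)) / sum(w_re)
se_re = math.sqrt(1 / sum(w_re))
ci = (auc_re - 1.96 * se_re, auc_re + 1.96 * se_re)
print(f"Pooled AUC: {auc_re:.3f} (95% CI: {ci[0]:.3f} to {ci[1]:.3f})")
```

With these placeholder inputs the between-study variance truncates to zero, so the random-effects estimate coincides with the fixed-effect one; with more heterogeneous studies, tau^2 would widen the interval.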
Keywords: Traumatic Brain Injury, machine learning, Mortality predictor, Meta-analysis, Inpatient mortality
Received: 30 Aug 2023; Accepted: 30 Oct 2023.
Copyright: © 2023 Wu, Lai, Huang, Lin, Chen and Huang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) or licensor are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.
* Correspondence: Mx. Shu Lin, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, 362000, Fujian Province, China
Systematic Review Literature Reviews Samples For Students
23 samples of this type
If you're seeking a possible way to streamline writing a Literature Review about Systematic Review, WowEssays.com paper writing service just might be able to help you out.
For starters, you should skim our vast collection of free samples that cover a wide variety of Systematic Review Literature Review topics and showcase the best academic writing practices. Once you feel that you've figured out the major principles of content structuring and drawn actionable insights from these expertly written Literature Review samples, composing your own academic work should go much more smoothly.
However, you might still find yourself in a situation when even using top-notch Systematic Review Literature Reviews doesn't let you get the job accomplished on time. In that case, you can get in touch with our writers and ask them to craft a unique Systematic Review paper according to your custom specifications. Buy college research paper or essay now!
Systematic Review Literature Review Examples
Systematic Review
Example Of Complications In Removable Implant Prosthesis Fabrication, Delivery And Function Literature Review
Free Based On Current Evidence How Often Should Oral Cancer Patient Visit Their Dental Literature Review Example
Massage Aid And Fibromyalgia Literature Review
Yoga Therapy Reduces Depression Symptom In Adult Patients With Depression Literature Review
Introduction And Background
PICOT Question Literature Review Examples
"Shaving Versus Clipping: Which Preoperative Hair Removal Method Reduces The Risk Of Surgical Site Infection"
Draw Topic & Writing Ideas From This Literature Review On Child Abuse And Neglect
Expertly Crafted Literature Review On Research Appraisal And Literature Review
2a. Appraising Quantitative And Qualitative Research
Good Example Of Prevention Of Ventilation Associated Pneumonia Literature Review
Chronic Conditions Prevention And Management Literature Review Examples
Diabetes Mellitus Type 2: A Literature Critique
The Relation Between American Board Of Orthodontics' Discrepancy Index And Treatment Time Literature Review Examples
Introduction
Over the past several decades, considerable efforts have been made to develop reliable and standardized measurement tools in orthodontics. Quantitative indices like the peer assessment rating (PAR) and the objective grading system (OGS) have been successfully used to assess the outcomes of orthodontic treatment, but these are limited to occlusal aspects only (Cangialosi, 2004).
Literature Review On Critical Appraisal - Wright Et Al
In Wright et al.'s "Systematic Review of Antihypertensive therapies: Does the evidence assist in choosing a first-line drug?" the researchers review 38 different trials in order to determine if a first-line drug is identified in the realm of antihypertension drugs. According to the Critical Appraisal Skills Programme (CASP) appraisal tool, the researchers provide a valid study that can provide clear policy and practice change in the field of hypertension research and treatment.
The Origin Of Zika Virus: Free Sample Literature Review To Follow
Introduction
Expertly Written Literature Review On Complications In Fixed Implant Prosthesis Fabrication, Delivery, And Function To Follow
Sample Literature Review On A Critical Review Of The Literature Evaluating The Uptake Of HPV Vaccination Among Adolescent Girls In The UK
Good Effects Of Iron Deficiency Anemia In Pregnancy For Mother And Baby Literature Review Example
Iron Supplementation During Pregnancy
Draw Topic & Writing Ideas From This Literature Review On Nursing Practices And Findings In Prevention Of Ventilated Associated Pneumonia
Effect Of Physical Activity In Preventing/Treating Cardiovascular Disease Literature Review Samples
Good Example Of Barriers Of Implementation TQM Approach In HCOs Literature Review
Introduction And Review
Psychodynamic Theory And Psychodynamic Therapy Literature Review Sample
Good Literature Review On Psychosocial Factors In Health
Example Of Literature Review On Literary Analysis In International Relations And Comparative Politics
Mobile Devices And Applications Literature Review Sample
Literature Review On Mobile Healthcare