
How to Write a Peer Review


When you write a peer review for a manuscript, what should you include in your comments? What should you leave out? And how should the review be formatted?

This guide provides quick tips for writing and organizing your reviewer report.

Review Outline

Use an outline for your reviewer report so it’s easy for the editors and author to follow. This will also help you keep your comments organized.

Think about structuring your review like an inverted pyramid. Put the most important information at the top, followed by details and examples in the center, and any additional points at the very bottom.


Here’s how your outline might look:

1. Summary of the research and your overall impression

In your own words, summarize what the manuscript claims to report. This shows the editor how you interpreted the manuscript and will highlight any major differences in perspective between you and the other reviewers. Give an overview of the manuscript’s strengths and weaknesses. Think about this as your “take-home” message for the editors. End this section with your recommended course of action.

2. Discussion of specific areas for improvement

It’s helpful to divide this section into two parts: one for major issues and one for minor issues. Within each section, you can talk about the biggest issues first or go systematically figure-by-figure or claim-by-claim. Number each item so that your points are easy to follow (this will also make it easier for the authors to respond to each point). Refer to specific lines, pages, sections, or figure and table numbers so the authors (and editors) know exactly what you’re talking about.

Major vs. minor issues

What’s the difference between a major and minor issue? Major issues should consist of the essential points the authors need to address before the manuscript can proceed. Make sure you focus on what is fundamental for the current study. In other words, it’s not helpful to recommend additional work that would be considered the “next step” in the study. Minor issues are still important but typically will not affect the overall conclusions of the manuscript. Here are some examples of what might go in the “minor” category:

  • Missing references (but depending on what is missing, this could also be a major issue)
  • Technical clarifications (e.g., the authors should clarify how a reagent works)
  • Data presentation (e.g., the authors should present p-values differently)
  • Typos, spelling, grammar, and phrasing issues

3. Any other points

Confidential comments for the editors.

Some journals have a space for reviewers to enter confidential comments about the manuscript. Use this space to mention concerns about the submission that you’d want the editors to consider before sharing your feedback with the authors, such as concerns about ethical guidelines or language quality. Any serious issues should be raised directly and immediately with the journal as well.

This section is also where you will disclose any potentially competing interests, and mention whether you’re willing to look at a revised version of the manuscript.

Do not use this space to critique the manuscript, since comments entered here will not be passed along to the authors.  If you’re not sure what should go in the confidential comments, read the reviewer instructions or check with the journal first before submitting your review. If you are reviewing for a journal that does not offer a space for confidential comments, consider writing to the editorial office directly with your concerns.

Get this outline in a template

Giving Feedback

Giving feedback is hard. Giving effective feedback can be even more challenging. Remember that your ultimate goal is to discuss what the authors would need to do in order to qualify for publication. The point is not to nitpick every piece of the manuscript. Your focus should be on providing constructive and critical feedback that the authors can use to improve their study.

If you’ve ever had your own work reviewed, you already know that it’s not always easy to receive feedback. Follow the golden rule: Write the type of review you’d want to receive if you were the author. Even if you decide not to identify yourself in the review, you should write comments that you would be comfortable signing your name to.

In your comments, use phrases like “the authors’ discussion of X” instead of “your discussion of X.” This will depersonalize the feedback and keep the focus on the manuscript instead of the authors.

General guidelines for effective feedback

Do

  • Justify your recommendation with concrete evidence and specific examples.
  • Be specific so the authors know what they need to do to improve.
  • Be thorough. This might be the only time you read the manuscript.
  • Be professional and respectful. The authors will be reading these comments too.
  • Remember to say what you liked about the manuscript!


Don’t

  • Recommend additional experiments or  unnecessary elements that are out of scope for the study or for the journal criteria.
  • Tell the authors exactly how to revise their manuscript—you don’t need to do their work for them.
  • Use the review to promote your own research or hypotheses.
  • Focus on typos and grammar. If the manuscript needs significant editing for language and writing quality, just mention this in your comments.
  • Submit your review without proofreading it and checking everything one more time.

Before and After: Sample Reviewer Comments

Keeping in mind the guidelines above, how do you put your thoughts into words? Here are some sample “before” and “after” reviewer comments:

✗ Before

“The authors appear to have no idea what they are talking about. I don’t think they have read any of the literature on this topic.”

✓ After

“The study fails to address how the findings relate to previous research in this area. The authors should rewrite their Introduction and Discussion to reference the related literature, especially recently published work such as Darwin et al.”

✗ Before

“The writing is so bad, it is practically unreadable. I could barely bring myself to finish it.”

✓ After

“While the study appears to be sound, the language is unclear, making it difficult to follow. I advise the authors to work with a writing coach or copyeditor to improve the flow and readability of the text.”

✗ Before

“It’s obvious that this type of experiment should have been included. I have no idea why the authors didn’t use it. This is a big mistake.”

✓ After

“The authors are off to a good start; however, this study requires additional experiments, particularly [type of experiment]. Alternatively, the authors should include more information that clarifies and justifies their choice of methods.”

Suggested Language for Tricky Situations

You might find yourself in a situation where you’re not sure how to explain the problem or provide feedback in a constructive and respectful way. Here is some suggested language for common issues you might experience.

What you think: The manuscript is fatally flawed. What you could say: “The study does not appear to be sound” or “the authors have missed something crucial”.

What you think: You don’t completely understand the manuscript. What you could say: “The authors should clarify the following sections to avoid confusion…”

What you think: The technical details don’t make sense. What you could say: “The technical details should be expanded and clarified to ensure that readers understand exactly what the researchers studied.”

What you think: The writing is terrible. What you could say: “The authors should revise the language to improve readability.”

What you think: The authors have over-interpreted the findings. What you could say: “The authors aim to demonstrate [XYZ]; however, the data do not fully support this conclusion. Specifically…”

What does a good review look like?

Check out the peer review examples at F1000 Research to see how other reviewers write up their reports and give constructive feedback to authors.

Time to Submit the Review!

Be sure you turn in your report on time. Need an extension? Tell the journal so that they know what to expect. If you need a lot of extra time, the journal might need to contact other reviewers or notify the author about the delay.

Tip: Building a relationship with an editor

You’ll be more likely to be asked to review again if you provide high-quality feedback and if you turn in the review on time. Especially if it’s your first review for a journal, it’s important to show that you are reliable. Prove yourself once and you’ll get asked to review again!


Step by Step Guide to Reviewing a Manuscript

When you receive an invitation to peer review, you should be sent a copy of the paper's abstract to help you decide whether you wish to do the review. Try to respond to invitations promptly - it will prevent delays. It is also important at this stage to declare any potential Conflict of Interest.

The structure of the review report varies between journals. Some follow an informal structure, while others have a more formal approach.

" Number your comments!!! " (Jonathon Halbesleben, former Editor of Journal of Occupational and Organizational Psychology)

Informal Structure

Many journals don't provide criteria for reviews beyond asking for your 'analysis of merits'. In this case, you may wish to familiarize yourself with examples of other reviews done for the journal, which the editor should be able to provide or, as you gain experience, rely on your own evolving style.

Formal Structure

Other journals require a more formal approach. Sometimes they will ask you to address specific questions in your review via a questionnaire. Or they might want you to rate the manuscript on various attributes using a scorecard. Often you can't see these until you log in to submit your review. So when you agree to the work, it's worth checking for any journal-specific guidelines and requirements. If there are formal guidelines, let them direct the structure of your review.

In Both Cases

Whether specifically required by the reporting format or not, you should expect to compile comments to authors and possibly confidential ones to editors only.

The First Read-Through

Following the invitation to review, you will have received the article abstract, so you should already understand the aims, key data and conclusions of the manuscript. If you don't, make a note now that you need to give feedback on how to improve those sections.

The first read-through is a skim-read. It will help you form an initial impression of the paper and get a sense of whether your eventual recommendation will be to accept or reject the paper.

Keep a pen and paper handy when skim-reading.

Try to bear in mind the following questions - they'll help you form your overall impression:

  • What is the main question addressed by the research? Is it relevant and interesting?
  • How original is the topic? What does it add to the subject area compared with other published material?
  • Is the paper well written? Is the text clear and easy to read?
  • Are the conclusions consistent with the evidence and arguments presented? Do they address the main question posed?
  • If the author is disagreeing significantly with the current academic consensus, do they have a substantial case? If not, what would be required to make their case credible?
  • If the paper includes tables or figures, what do they add to the paper? Do they aid understanding or are they superfluous?

While you should read the whole paper, making the right choice of what to read first can save time by flagging major problems early on.

Editors say, " Specific recommendations for remedying flaws are VERY welcome ."

Examples of possibly major flaws include:

  • Drawing a conclusion that is contradicted by the author's own statistical or qualitative evidence
  • The use of a discredited method
  • Ignoring a process that is known to have a strong influence on the area under study

If experimental design features prominently in the paper, first check that the methodology is sound - if not, this is likely to be a major flaw.

You might examine:

  • The sampling in analytical papers
  • The sufficient use of control experiments
  • The precision of process data
  • The regularity of sampling in time-dependent studies
  • The validity of questions, the use of a detailed methodology and the data analysis being done systematically (in qualitative research)
  • That qualitative research extends beyond the author's opinions, with sufficient descriptive elements and appropriate quotes from interviews or focus groups

Major Flaws in Information

If methodology is less of an issue, it's often a good idea to look at the data tables, figures or images first. Especially in science research, it's all about the information gathered. If there are critical flaws in this, it's very likely the manuscript will need to be rejected. Such issues include:

  • Insufficient data
  • Unclear data tables
  • Contradictory data that either are not self-consistent or disagree with the conclusions
  • Confirmatory data that adds little, if anything, to current understanding - unless strong arguments for such repetition are made

If you find a major problem, note your reasoning and clear supporting evidence (including citations).

After the initial read and using your notes, including those of any major flaws you found, draft the first two paragraphs of your review - the first summarizing the research question addressed and the second the contribution of the work. If the journal has a prescribed reporting format, this draft will still help you compose your thoughts.

The First Paragraph

This should state the main question addressed by the research and summarize the goals, approaches, and conclusions of the paper. It should:

  • Help the editor properly contextualize the research and add weight to your judgement
  • Show the author what key messages are conveyed to the reader, so they can be sure they are achieving what they set out to do
  • Focus on successful aspects of the paper so the author gets a sense of what they've done well

The Second Paragraph

This should provide a conceptual overview of the contribution of the research. So consider:

  • Is the paper's premise interesting and important?
  • Are the methods used appropriate?
  • Do the data support the conclusions?

After drafting these two paragraphs, you should be in a position to decide whether this manuscript is seriously flawed and should be rejected (see the next section). Or whether it is publishable in principle and merits a detailed, careful read through.

Even if you are coming to the opinion that an article has serious flaws, make sure you read the whole paper. This is very important because you may find some really positive aspects that can be communicated to the author. This could help them with future submissions.

A full read-through will also make sure that any initial concerns are indeed correct and fair. After all, you need the context of the whole paper before deciding to reject. If you still intend to recommend rejection, see the section "When recommending rejection."

Once the paper has passed your first read and you've decided the article is publishable in principle, one purpose of the second, detailed read-through is to help prepare the manuscript for publication. You may still decide to recommend rejection following a second reading.

" Offer clear suggestions for how the authors can address the concerns raised. In other words, if you're going to raise a problem, provide a solution ." (Jonathon Halbesleben, Editor of Journal of Occupational and Organizational Psychology)

Preparation

To save time and simplify the review:

  • Don't rely solely upon inserting comments on the manuscript document - make separate notes
  • Try to group similar concerns or praise together
  • If using a review program to note directly onto the manuscript, still try grouping the concerns and praise in separate notes - it helps later
  • Note line numbers of text upon which your notes are based - this helps you find items again and also aids those reading your review

Now that you have completed your preparations, you're ready to spend an hour or so reading carefully through the manuscript.

As you're reading through the manuscript for a second time, you'll need to keep in mind the argument's construction and the clarity of the language and content.

With regard to the argument’s construction, you should identify:

  • Any places where the meaning is unclear or ambiguous
  • Any factual errors
  • Any invalid arguments

You may also wish to consider:

  • Does the title properly reflect the subject of the paper?
  • Does the abstract provide an accessible summary of the paper?
  • Do the keywords accurately reflect the content?
  • Is the paper an appropriate length?
  • Are the key messages short, accurate and clear?

Not every submission is well written. Part of your role is to make sure that the text’s meaning is clear.

Editors say, " If a manuscript has many English language and editing issues, please do not try and fix it. If it is too bad, note that in your review and it should be up to the authors to have the manuscript edited ."

If the article is difficult to understand, you should have rejected it already. However, if the language is poor but you understand the core message, see if you can suggest improvements to fix the problem:

  • Are there certain aspects that could be communicated better, such as parts of the discussion?
  • Should the authors consider resubmitting to the same journal after language improvements?
  • Would you consider looking at the paper again once these issues are dealt with?

On Grammar and Punctuation

Your primary role is judging the research content. Don't spend time polishing grammar or spelling. Editors will make sure that the text is at a high standard before publication. However, if you spot grammatical errors that affect clarity of meaning, then it's important to highlight these. Expect to suggest such amendments - it's rare for a manuscript to pass review with no corrections.

A 2010 study of nursing journals found that 79% of recommendations by reviewers were influenced by grammar and writing style (Shattel et al., 2010).

1. The Introduction

A well-written introduction:

  • Sets out the argument
  • Summarizes recent research related to the topic
  • Highlights gaps in current understanding or conflicts in current knowledge
  • Establishes the originality of the research aims by demonstrating the need for investigations in the topic area
  • Gives a clear idea of the target readership, why the research was carried out and the novelty and topicality of the manuscript

Originality and Topicality

Originality and topicality can only be established in the light of recent authoritative research. For example, it's impossible to argue that there is a conflict in current understanding by referencing articles that are 10 years old.

Authors may make the case that a topic hasn't been investigated in several years and that new research is required. This point is only valid if researchers can point to recent developments in data gathering techniques or to research in indirectly related fields that suggest the topic needs revisiting. Clearly, authors can only do this by referencing recent literature. Obviously, where older research is seminal or where aspects of the methodology rely upon it, then it is perfectly appropriate for authors to cite some older papers.

Editors say, "Is the report providing new information; is it novel or just confirmatory of well-known outcomes ?"

It's common for the introduction to end by stating the research aims. By this point you should already have a good impression of them - if the explicit aims come as a surprise, then the introduction needs improvement.

2. Materials and Methods

Academic research should be replicable, repeatable and robust - and follow best practice.

Replicable Research

This makes sufficient use of:

  • Control experiments
  • Repeated analyses
  • Repeated experiments

These are used to make sure observed trends are not due to chance and that the same experiment could be repeated by other researchers - and result in the same outcome. Statistical analyses will not be sound if methods are not replicable. Where research is not replicable, the paper should be recommended for rejection.

Repeatable Methods

These give enough detail so that other researchers are able to carry out the same research. For example, equipment used or sampling methods should all be described in detail so that others could follow the same steps. Where methods are not detailed enough, it's usual to ask for the methods section to be revised.

Robust Research

This has enough data points to make sure the data are reliable. If there are insufficient data, it might be appropriate to recommend revision. You should also consider whether there is any in-built bias not nullified by the control experiments.

Best Practice

During these checks you should keep in mind best practice:

  • Standard guidelines were followed (e.g. the CONSORT Statement for reporting randomized trials)
  • The health and safety of all participants in the study was not compromised
  • Ethical standards were maintained

If the research fails to reach relevant best practice standards, it's usual to recommend rejection. What's more, you don't then need to read any further.

3. Results and Discussion

This section should tell a coherent story - What happened? What was discovered or confirmed?

Certain patterns of good reporting need to be followed by the author:

  • They should start by describing in simple terms what the data show
  • They should make reference to statistical analyses, such as significance or goodness of fit
  • Once described, they should evaluate the trends observed and explain the significance of the results to wider understanding. This can only be done by referencing published research
  • The outcome should be a critical analysis of the data collected

Discussion should always, at some point, gather all the information together into a single whole. Authors should describe and discuss the overall story formed. If there are gaps or inconsistencies in the story, they should address these and suggest ways future research might confirm the findings or take the research forward.

4. Conclusions

This section is usually no more than a few paragraphs and may be presented as part of the results and discussion, or in a separate section. The conclusions should reflect upon the aims - whether they were achieved or not - and, just like the aims, should not be surprising. If the conclusions are not evidence-based, it's appropriate to ask for them to be re-written.

5. Information Gathered: Images, Graphs and Data Tables

If you find yourself looking at a piece of information from which you cannot discern a story, then you should ask for improvements in presentation. This could be an issue with titles, labels, statistical notation or image quality.

Where information is clear, you should check that:

  • The results seem plausible, in case there is an error in data gathering
  • The trends you can see support the paper's discussion and conclusions
  • There are sufficient data. For example, in studies carried out over time are there sufficient data points to support the trends described by the author?

You should also check whether images have been edited or manipulated to emphasize the story they tell. This may be appropriate but only if authors report on how the image has been edited (e.g. by highlighting certain parts of an image). Where you feel that an image has been edited or manipulated without explanation, you should highlight this in a confidential comment to the editor in your report.

6. List of References

You will need to check referencing for accuracy, adequacy and balance.

Where a cited article is central to the author's argument, you should check the accuracy and format of the reference - and bear in mind different subject areas may use citations differently. Otherwise, it's the editor’s role to exhaustively check the reference section for accuracy and format.

You should consider if the referencing is adequate:

  • Are important parts of the argument poorly supported?
  • Are there published studies that show similar or dissimilar trends that should be discussed?
  • If a manuscript only uses half the citations typical in its field, this may be an indicator that referencing should be improved - but don't be guided solely by quantity
  • References should be relevant, recent and readily retrievable

Check for a well-balanced list of references that is:

  • Helpful to the reader
  • Fair to competing authors
  • Not over-reliant on self-citation
  • Gives due recognition to the initial discoveries and related work that led to the work under assessment

You should be able to evaluate whether the article meets the criteria for balanced referencing without looking up every reference.

7. Plagiarism

By now you will have a deep understanding of the paper's content - and you may have some concerns about plagiarism.

Identified Concern

If you find - or already knew of - a very similar paper, this may be because the author overlooked it in their own literature search. Or it may be because it is very recent or published in a journal slightly outside their usual field.

You may feel you can advise the author how to emphasize the novel aspects of their own study, so as to better differentiate it from similar research. If so, you may ask the author to discuss their aims and results, or modify their conclusions, in light of the similar article. Of course, the research similarities may be so great that they render the work unoriginal and you have no choice but to recommend rejection.

"It's very helpful when a reviewer can point out recent similar publications on the same topic by other groups, or that the authors have already published some data elsewhere ." (Editor feedback)

Suspected Concern

If you suspect plagiarism, including self-plagiarism, but cannot recall or locate exactly what is being plagiarized, notify the editor of your suspicion and ask for guidance.

Most editors have access to software that can check for plagiarism.

Editors are not out to police every paper, but when plagiarism is discovered during peer review it can be properly addressed ahead of publication. If plagiarism is discovered only after publication, the consequences are worse for both authors and readers, because a retraction may be necessary.

For detailed guidelines see COPE's Ethical guidelines for reviewers and Wiley's Best Practice Guidelines on Publishing Ethics.

8. Search Engine Optimization (SEO)

After the detailed read-through, you will be in a position to advise whether the title, abstract and key words are optimized for search purposes. In order to be effective, good SEO terms will reflect the aims of the research.

A clear title and abstract will improve the paper's search engine rankings and will influence whether the user finds and then decides to navigate to the main article. The title should contain the relevant SEO terms early on. This has a major effect on the impact of a paper, since it helps it appear in search results. A poor abstract can then lose the reader's interest and undo the benefit of an effective title - whilst the paper's abstract may appear in search results, the potential reader may go no further.

So ask yourself, while the abstract may have seemed adequate during earlier checks, does it:

  • Do justice to the manuscript in this context?
  • Highlight important findings sufficiently?
  • Present the most interesting data?

Editors say, " Does the Abstract highlight the important findings of the study ?"

If there is a formal report format, remember to follow it. This will often comprise a range of questions followed by comment sections. Try to answer all the questions. They are there because the editor felt that they are important. If you're following an informal report format, you could structure your report in three sections: summary, major issues, minor issues.

Summary

  • Give positive feedback first. Authors are more likely to read your review if you do so. But don't overdo it if you will be recommending rejection
  • Briefly summarize what the paper is about and what the findings are
  • Try to put the findings of the paper into the context of the existing literature and current knowledge
  • Indicate the significance of the work and if it is novel or mainly confirmatory
  • Indicate the work's strengths, its quality and completeness
  • State any major flaws or weaknesses and note any special considerations. For example, if previously held theories are being overlooked

Major Issues

  • Are there any major flaws? State what they are and what the severity of their impact is on the paper
  • Has similar work already been published without the authors acknowledging this?
  • Are the authors presenting findings that challenge current thinking? Is the evidence they present strong enough to prove their case? Have they cited all the relevant work that would contradict their thinking and addressed it appropriately?
  • If major revisions are required, try to indicate clearly what they are
  • Are there any major presentational problems? Are figures & tables, language and manuscript structure all clear enough for you to accurately assess the work?
  • Are there any ethical issues? If you are unsure it may be better to disclose these in the confidential comments section

Minor Issues

  • Are there places where meaning is ambiguous? How can this be corrected?
  • Are the correct references cited? If not, which should be cited instead/also? Are citations excessive, limited, or biased?
  • Are there any factual, numerical or unit errors? If so, what are they?
  • Are all tables and figures appropriate, sufficient, and correctly labelled? If not, say which are not

Your review should ultimately help the author improve their article. So be polite, honest and clear. You should also try to be objective and constructive, not subjective and destructive.

You should also:

  • Write clearly, so that you can be understood by people whose first language is not English
  • Avoid complex or unusual words, especially ones that would even confuse native speakers
  • Number your points and refer to page and line numbers in the manuscript when making specific comments
  • If you have been asked to only comment on specific parts or aspects of the manuscript, you should indicate clearly which these are
  • Treat the author's work the way you would like your own to be treated

Most journals give reviewers the option to provide some confidential comments to editors. Often this is where editors will want reviewers to state their recommendation - see the next section - but otherwise this area is best reserved for communicating malpractice such as suspected plagiarism, fraud, unattributed work, unethical procedures, duplicate publication, bias or other conflicts of interest.

However, this doesn't give reviewers permission to 'backstab' the author. Authors can't see this feedback and are unable to give their side of the story unless the editor asks them to. So in the spirit of fairness, write comments to editors as though authors might read them too.

Reviewers should check the preferences of individual journals as to where they want review decisions to be stated. In particular, bear in mind that some journals will not want the recommendation included in any comments to authors, as this can cause editors difficulty later - see Section 11 for more advice about working with editors.

You will normally be asked to indicate your recommendation (e.g. accept, reject, revise and resubmit, etc.) from a fixed-choice list and then to enter your comments into a separate text box.

Recommending Acceptance

If you're recommending acceptance, give details outlining why, and note any areas that could still be improved. Don't just give a short, cursory remark such as 'great, accept'. See Improving the Manuscript.

Recommending Revision

Where improvements are needed, a recommendation for major or minor revision is typical. You may also choose to state whether you opt in or out of the post-revision review too. If recommending revision, state specific changes you feel need to be made. The author can then reply to each point in turn.

Some journals offer the option to recommend rejection with the possibility of resubmission – this is most relevant where substantial, major revision is necessary.

What can reviewers do to help? "Be clear in their comments to the author (or editor) which points are absolutely critical if the paper is given an opportunity for revision." (Jonathon Halbesleben, Editor of Journal of Occupational and Organizational Psychology)

Recommending Rejection

If recommending rejection or major revision, state this clearly in your review (and see the next section, 'When recommending rejection').

Where manuscripts have serious flaws you should not spend any time polishing the review you've drafted or give detailed advice on presentation.

Editors say, " If a reviewer suggests a rejection, but her/his comments are not detailed or helpful, it does not help the editor in making a decision ."

In your recommendations for the author, you should:

  • Give constructive feedback describing ways that they could improve the research
  • Keep the focus on the research and not the author. This is an extremely important part of your job as a reviewer
  • Avoid saving your harshest criticism for confidential comments to the editor while being only polite and encouraging to the author - otherwise the author may not understand why their manuscript has been rejected, won't get feedback on how to improve their research, and may be prompted to appeal

Remember to give constructive criticism even if recommending rejection. This helps developing researchers improve their work and explains to the editor why you felt the manuscript should not be published.

" When the comments seem really positive, but the recommendation is rejection…it puts the editor in a tough position of having to reject a paper when the comments make it sound like a great paper ." (Jonathon Halbesleben, Editor of Journal of Occupational and Organizational Psychology)

Visit our Wiley Author Learning and Training Channel for expert advice on peer review.

Watch the video, Ethical considerations of Peer Review

How to write a good scientific review article

Affiliation.

  • 1 The FEBS Journal Editorial Office, Cambridge, UK.
  • PMID: 35792782
  • DOI: 10.1111/febs.16565

Literature reviews are valuable resources for the scientific community. With research accelerating at an unprecedented speed in recent years and more and more original papers being published, review articles have become increasingly important as a means to keep up to date with developments in a particular area of research. A good review article provides readers with an in-depth understanding of a field and highlights key gaps and challenges to address with future research. Writing a review article also helps to expand the writer's knowledge of their specialist area and to develop their analytical and communication skills, amongst other benefits. Thus, the importance of building review-writing into a scientific career cannot be overstated. In this instalment of The FEBS Journal's Words of Advice series, I provide detailed guidance on planning and writing an informative and engaging literature review.

© 2022 Federation of European Biochemical Societies.


  • CAREER FEATURE
  • 04 December 2020
  • Correction 09 December 2020

How to write a superb literature review

Andy Tay is a freelance writer based in Singapore.


Literature reviews are important resources for scientists. They provide historical context for a field while offering opinions on its future trajectory. Creating them can provide inspiration for one’s own research, as well as some practice in writing. But few scientists are trained in how to write a review — or in what constitutes an excellent one. Even picking the appropriate software to use can be an involved decision (see ‘Tools and techniques’). So Nature asked editors and working scientists with well-cited reviews for their tips.


doi: https://doi.org/10.1038/d41586-020-03422-x

Interviews have been edited for length and clarity.

Updates & Corrections

Correction 09 December 2020 : An earlier version of the tables in this article included some incorrect details about the programs Zotero, Endnote and Manubot. These have now been corrected.




What Is Peer Review? | Types & Examples

Published on December 17, 2021 by Tegan George. Revised on June 22, 2023.

Peer review, sometimes referred to as refereeing , is the process of evaluating submissions to an academic journal. Using strict criteria, a panel of reviewers in the same subject area decides whether to accept each submission for publication.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

There are various types of peer review. The main difference between them is to what extent the authors, reviewers, and editors know each other’s identities. The most common types are:

  • Single-blind review
  • Double-blind review
  • Triple-blind review
  • Collaborative review
  • Open review

Relatedly, peer assessment is a process where your peers provide you with feedback on something you’ve written, based on a set of criteria or benchmarks from an instructor. They then give constructive feedback, compliments, or guidance to help you improve your draft.


Many academic fields use peer review, largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the manuscript. For this reason, academic journals are among the most credible sources you can refer to.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure.

Peer assessment is often used in the classroom as a pedagogical tool. Both receiving feedback and providing it are thought to enhance the learning process, helping students think critically and collaboratively.


Depending on the journal, there are several types of peer review.

Single-blind peer review

The most common type of peer review is single-blind (or single anonymized) review. Here, the names of the reviewers are not known by the author.

While this gives the reviewers the ability to give feedback without the possibility of interference from the author, there has been substantial criticism of this method in the last few years. Many argue that single-blind reviewing can lead to poaching or intellectual theft or that anonymized comments cause reviewers to be too harsh.

Double-blind peer review

In double-blind (or double anonymized) review, both the author and the reviewers are anonymous.

Arguments for double-blind review highlight that this mitigates any risk of prejudice on the side of the reviewer, while protecting the nature of the process. In theory, it also leads to manuscripts being published on merit rather than on the reputation of the author.

Triple-blind peer review

While triple-blind (or triple anonymized) review—where the identities of the author, reviewers, and editors are all anonymized—does exist, it is difficult to carry out in practice.

Proponents of adopting triple-blind review for journal submissions argue that it minimizes potential conflicts of interest and biases. However, ensuring anonymity is logistically challenging, and current editing software is not always able to fully anonymize everyone involved in the process.

In collaborative review, authors and reviewers interact with each other directly throughout the process. However, the identity of the reviewer is not known to the author. This gives all parties the opportunity to resolve any inconsistencies or contradictions in real time, and provides them a rich forum for discussion. It can mitigate the need for multiple rounds of editing and minimize back-and-forth.

Collaborative review can be time- and resource-intensive for the journal, however. For these collaborations to occur, there has to be a set system in place, often a technological platform, with staff monitoring and fixing any bugs or glitches.

Lastly, in open review, all parties know each other’s identities throughout the process. Often, open review can also include feedback from a larger audience, such as an online forum, or reviewer feedback included as part of the final published product.

While many argue that greater transparency prevents plagiarism or unnecessary harshness, there is also concern about the quality of future scholarship if reviewers feel they have to censor their comments.

In general, the peer review process includes the following steps:

  • First, the author submits the manuscript to the editor.
  • The editor then decides whether to:
  • Reject the manuscript and send it back to the author, or
  • Send it onward to the selected peer reviewer(s)
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

The peer review process

In an effort to be transparent, many journals are now disclosing who reviewed each article in the published product. There are also increasing opportunities for collaboration and feedback, with some journals allowing open communication between reviewers and authors.

It can seem daunting at first to conduct a peer review or peer assessment. If you’re not sure where to start, there are several best practices you can use.

Summarize the argument in your own words

Summarizing the main argument helps the author see how their argument is interpreted by readers, and gives you a jumping-off point for providing feedback. If you’re having trouble doing this, it’s a sign that the argument needs to be clearer, more concise, or worded differently.

If the author sees that you’ve interpreted their argument differently than they intended, they have an opportunity to address any misunderstandings when they get the manuscript back.

Separate your feedback into major and minor issues

It can be challenging to keep feedback organized. One strategy is to start out with any major issues and then flow into the more minor points. It’s often helpful to keep your feedback in a numbered list, so the author has concrete points to refer back to.

Major issues typically consist of any problems with the style, flow, or key points of the manuscript. Minor issues include spelling errors, citation errors, or other smaller, easy-to-apply feedback.

Tip: Try not to focus too much on the minor issues. If the manuscript has a lot of typos, consider making a note that the author should address spelling and grammar issues, rather than going through and fixing each one.

The best feedback you can provide is anything that helps them strengthen their argument or resolve major stylistic issues.

Give the type of feedback that you would like to receive

No one likes being criticized, and it can be difficult to give honest feedback without sounding overly harsh or critical. One strategy you can use here is the “compliment sandwich,” where you “sandwich” your constructive criticism between two compliments.

Be sure you are giving concrete, actionable feedback that will help the author submit a successful final draft. While you shouldn’t tell them exactly what they should do, your feedback should help them resolve any issues they may have overlooked.

As a rule of thumb, your feedback should be:

  • Easy to understand
  • Constructive


Below is a brief example research summary of the kind you might be asked to review.

Influence of phone use on sleep

Studies show that teens from the US are getting less sleep than they were a decade ago (Johnson, 2019). On average, teens only slept for 6 hours a night in 2021, compared to 8 hours a night in 2011. Johnson mentions several potential causes, such as increased anxiety, changed diets, and increased phone use.

The current study focuses on the effect phone use before bedtime has on the number of hours of sleep teens are getting.

For this study, a sample of 300 teens was recruited using social media, such as Facebook, Instagram, and Snapchat. The first week, all teens were allowed to use their phone the way they normally would, in order to obtain a baseline.

The sample was then divided into 3 groups:

  • Group 1 was not allowed to use their phone before bedtime.
  • Group 2 used their phone for 1 hour before bedtime.
  • Group 3 used their phone for 3 hours before bedtime.

All participants were asked to go to sleep around 10 p.m. to control for variation in bedtime. In the morning, their Fitbit showed the number of hours they’d slept. They kept track of these numbers themselves for 1 week.

Two independent t tests were used in order to compare Group 1 and Group 2, and Group 1 and Group 3. The first t test showed no significant difference (p > .05) between the number of hours for Group 1 (M = 7.8, SD = 0.6) and Group 2 (M = 7.0, SD = 0.8). The second t test showed a significant difference (p < .01) between the average number of hours for Group 1 (M = 7.8, SD = 0.6) and Group 3 (M = 6.1, SD = 1.5).

This shows that teens sleep fewer hours a night if they use their phone for over an hour before bedtime, compared to teens who use their phone for 0 to 1 hours.
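To make the statistical analysis in this example concrete, here is a minimal Python sketch of the two independent-samples t tests the summary describes. The group sizes and simulated data are assumptions for illustration only (the example provides no raw data), so the computed statistics will not necessarily match the values quoted above.

```python
# Illustrative sketch only: group sizes and data are hypothetical, simulated to
# roughly match the means and SDs reported in the example summary above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulated nightly sleep hours for three hypothetical groups of 100 teens each
group1 = rng.normal(loc=7.8, scale=0.6, size=100)  # no phone before bedtime
group2 = rng.normal(loc=7.0, scale=0.8, size=100)  # 1 hour of phone use
group3 = rng.normal(loc=6.1, scale=1.5, size=100)  # 3 hours of phone use

# Two independent-samples t tests, as described in the example
t12, p12 = stats.ttest_ind(group1, group2)
t13, p13 = stats.ttest_ind(group1, group3)

print(f"Group 1 vs Group 2: t = {t12:.2f}, p = {p12:.4f}")
print(f"Group 1 vs Group 3: t = {t13:.2f}, p = {p13:.4f}")
```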

Peer review is an established and hallowed process in academia, dating back hundreds of years. It provides various fields of study with metrics, expectations, and guidance to ensure published work is consistent with predetermined standards.

  • Protects the quality of published research

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. Any content that raises red flags for reviewers can be closely examined in the review stage, preventing plagiarized or duplicated research from being published.

  • Gives you access to feedback from experts in your field

Peer review represents an excellent opportunity to get feedback from renowned experts in your field and to improve your writing through their feedback and guidance. Experts with knowledge about your subject matter can give you feedback on both style and content, and they may also suggest avenues for further research that you hadn’t yet considered.

  • Helps you identify any weaknesses in your argument

Peer review acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process. This way, you’ll end up with a more robust, more cohesive article.

While peer review is a widely accepted metric for credibility, it’s not without its drawbacks.

  • Reviewer bias

The more transparent double-blind system is not yet very common, which can lead to bias in reviewing. A common criticism is that an excellent paper by a new researcher may be declined, while an objectively lower-quality submission by an established researcher would be accepted.

  • Delays in publication

The thoroughness of the peer review process can lead to significant delays in publishing time. Research that was current at the time of submission may not be as current by the time it’s published. There is also high risk of publication bias , where journals are more likely to publish studies with positive findings than studies with negative findings.

  • Risk of human error

By its very nature, peer review carries a risk of human error. In particular, falsification often cannot be detected, given that reviewers would have to replicate entire experiments to ensure the validity of results.


Peer review is a process of evaluating submissions to an academic journal. Utilizing rigorous criteria, a panel of reviewers in the same subject area decide whether to accept each submission for publication. For this reason, academic journals are often considered among the most credible sources you can use in a research project– provided that the journal itself is trustworthy and well-regarded.

In general, the peer review process follows the following steps:

  • First, the author submits the manuscript to the editor.
  • The editor then decides whether to:
  • Reject the manuscript and send it back to the author, or
  • Send it onward to the selected peer reviewer(s)
  • Next, the peer review process occurs. The reviewer provides feedback, addressing any major or minor issues with the manuscript, and gives their advice regarding what edits should be made.
  • Lastly, the edited manuscript is sent back to the author. They input the edits and resubmit it to the editor for publication.

Peer review can stop obviously problematic, falsified, or otherwise untrustworthy research from being published. It also represents an excellent opportunity to get feedback from renowned experts in your field. It acts as a first defense, helping you ensure your argument is clear and that there are no gaps, vague terms, or unanswered questions for readers who weren’t involved in the research process.

Peer-reviewed articles are considered a highly credible source due to the stringent process they go through before publication.

Many academic fields use peer review , largely to determine whether a manuscript is suitable for publication. Peer review enhances the credibility of the published manuscript.

However, peer review is also common in non-academic settings. The United Nations, the European Union, and many individual nations use peer review to evaluate grant applications. It is also widely used in medical and health-related fields as a teaching or quality-of-care measure. 

A credible source should pass the CRAAP test  and follow these guidelines:

  • The information should be up to date and current.
  • The author and publication should be a trusted authority on the subject you are researching.
  • The sources the author cited should be easy to find, clear, and unbiased.
  • For a web source, the URL and layout should signify that it is trustworthy.



Ten Simple Rules for Writing a Literature Review

Marco Pautasso

1 Centre for Functional and Evolutionary Ecology (CEFE), CNRS, Montpellier, France

2 Centre for Biodiversity Synthesis and Analysis (CESAB), FRB, Aix-en-Provence, France

Literature reviews are in great demand in most scientific fields. Their need stems from the ever-increasing output of scientific publications [1] . For example, compared to 1991, in 2008 three, eight, and forty times more papers were indexed in Web of Science on malaria, obesity, and biodiversity, respectively [2] . Given such mountains of papers, scientists cannot be expected to examine in detail every single new paper relevant to their interests [3] . Thus, it is both advantageous and necessary to rely on regular summaries of the recent literature. Although recognition for scientists mainly comes from primary research, timely literature reviews can lead to new synthetic insights and are often widely read [4] . For such summaries to be useful, however, they need to be compiled in a professional way [5] .

When starting from scratch, reviewing the literature can require a titanic amount of work. That is why researchers who have spent their career working on a certain research issue are in a perfect position to review that literature. Some graduate schools are now offering courses in reviewing the literature, given that most research students start their project by producing an overview of what has already been done on their research issue [6] . However, it is likely that most scientists have not thought in detail about how to approach and carry out a literature review.

Reviewing the literature requires the ability to juggle multiple tasks, from finding and evaluating relevant material to synthesising information from various sources, from critical thinking to paraphrasing, evaluating, and citation skills [7] . In this contribution, I share ten simple rules I learned working on about 25 literature reviews as a PhD and postdoctoral student. Ideas and insights also come from discussions with coauthors and colleagues, as well as feedback from reviewers and editors.

Rule 1: Define a Topic and Audience

How to choose which topic to review? There are so many issues in contemporary science that you could spend a lifetime of attending conferences and reading the literature just pondering what to review. On the one hand, if you take several years to choose, several other people may have had the same idea in the meantime. On the other hand, only a well-considered topic is likely to lead to a brilliant literature review [8] . The topic must at least be:

  • interesting to you (ideally, you should have come across a series of recent papers related to your line of work that call for a critical summary),
  • an important aspect of the field (so that many readers will be interested in the review and there will be enough material to write it), and
  • a well-defined issue (otherwise you could potentially include thousands of publications, which would make the review unhelpful).

Ideas for potential reviews may come from papers providing lists of key research questions to be answered [9] , but also from serendipitous moments during desultory reading and discussions. In addition to choosing your topic, you should also select a target audience. In many cases, the topic (e.g., web services in computational biology) will automatically define an audience (e.g., computational biologists), but that same topic may also be of interest to neighbouring fields (e.g., computer science, biology, etc.).

Rule 2: Search and Re-search the Literature

After having chosen your topic and audience, start by checking the literature and downloading relevant papers. Five pieces of advice here:

  • keep track of the search terms you use (so that your search can be replicated [10] ; a minimal example of a search log is sketched after this list),
  • keep a list of papers whose pdfs you cannot access immediately (so as to retrieve them later with alternative strategies),
  • use a paper management system (e.g., Mendeley, Papers, Qiqqa, Sente),
  • define early in the process some criteria for exclusion of irrelevant papers (these criteria can then be described in the review to help define its scope), and
  • do not just look for research papers in the area you wish to review, but also seek previous reviews.
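
To make the search replicable, it helps to keep a structured log of every query you run. The short sketch below shows one possible way to do this in Python; the file name, the field names, and the example query are illustrative assumptions, not something prescribed in this article.

    import csv
    import os
    from datetime import date

    LOG_FILE = "search_log.csv"  # assumed file name, purely for illustration
    FIELDS = ["date", "database", "query", "filters", "hits"]

    def log_search(database, query, filters, hits):
        """Append one literature search to the log, adding a header row for a new file."""
        new_file = not os.path.exists(LOG_FILE)
        with open(LOG_FILE, "a", newline="", encoding="utf-8") as f:
            writer = csv.DictWriter(f, fieldnames=FIELDS)
            if new_file:
                writer.writeheader()
            writer.writerow({
                "date": date.today().isoformat(),
                "database": database,
                "query": query,
                "filters": filters,
                "hits": hits,
            })

    # Example entry (the query and hit count are invented for illustration):
    log_search("Web of Science",
               'TS=("plant disease*" AND "climate change")',
               "2003-2013; English only",
               412)

A spreadsheet or a note in your reference manager serves the same purpose; what matters is recording the database, the exact query, any filters applied, and the date, so the search can be reported and repeated.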

The chances are high that someone will already have published a literature review (Figure 1), if not exactly on the issue you are planning to tackle, at least on a related topic. If there are already a few or several reviews of the literature on your issue, my advice is not to give up, but to carry on with your own literature review,

  • discussing in your review the approaches, limitations, and conclusions of past reviews,
  • trying to find a new angle that has not been covered adequately in the previous reviews, and
  • incorporating new material that has inevitably accumulated since their appearance.

[Figure 1 omitted. Caption: The bottom-right situation (many literature reviews but few research papers) is not just a theoretical situation; it applies, for example, to the study of the impacts of climate change on plant diseases, where there appear to be more literature reviews than research studies [33].]

When searching the literature for pertinent papers and reviews, the usual rules apply:

  • be thorough,
  • use different keywords and database sources (e.g., DBLP, Google Scholar, ISI Proceedings, JSTOR Search, Medline, Scopus, Web of Science), and
  • look at who has cited past relevant papers and book chapters.

Rule 3: Take Notes While Reading

If you read the papers first, and only afterwards start writing the review, you will need a very good memory to remember who wrote what, and what your impressions and associations were while reading each single paper. My advice is, while reading, to start writing down interesting pieces of information, insights about how to organize the review, and thoughts on what to write. This way, by the time you have read the literature you selected, you will already have a rough draft of the review.

Of course, this draft will still need much rewriting, restructuring, and rethinking to obtain a text with a coherent argument [11] , but you will have avoided the danger posed by staring at a blank document. Be careful when taking notes to use quotation marks if you are provisionally copying verbatim from the literature. It is advisable then to reformulate such quotes with your own words in the final draft. It is important to be careful in noting the references already at this stage, so as to avoid misattributions. Using referencing software from the very beginning of your endeavour will save you time.

Rule 4: Choose the Type of Review You Wish to Write

After having taken notes while reading the literature, you will have a rough idea of the amount of material available for the review. This is probably a good time to decide whether to go for a mini- or a full review. Some journals are now favouring the publication of rather short reviews focusing on the last few years, with a limit on the number of words and citations. A mini-review is not necessarily a minor review: it may well attract more attention from busy readers, although it will inevitably simplify some issues and leave out some relevant material due to space limitations. A full review will have the advantage of more freedom to cover in detail the complexities of a particular scientific development, but may then be left in the pile of the very important papers “to be read” by readers with little time to spare for major monographs.

There is probably a continuum between mini- and full reviews. The same point applies to the dichotomy of descriptive vs. integrative reviews. While descriptive reviews focus on the methodology, findings, and interpretation of each reviewed study, integrative reviews attempt to find common ideas and concepts from the reviewed material [12] . A similar distinction exists between narrative and systematic reviews: while narrative reviews are qualitative, systematic reviews attempt to test a hypothesis based on the published evidence, which is gathered using a predefined protocol to reduce bias [13] , [14] . When systematic reviews analyse quantitative results in a quantitative way, they become meta-analyses. The choice between different review types will have to be made on a case-by-case basis, depending not just on the nature of the material found and the preferences of the target journal(s), but also on the time available to write the review and the number of coauthors [15] .

Rule 5: Keep the Review Focused, but Make It of Broad Interest

Whether your plan is to write a mini- or a full review, it is good advice to keep it focused [16], [17]. Including material just for the sake of it can easily lead to reviews that are trying to do too many things at once. The need to keep a review focused can be problematic for interdisciplinary reviews, where the aim is to bridge the gap between fields [18]. If you are writing a review on, for example, how epidemiological approaches are used in modelling the spread of ideas, you may be inclined to include material from both parent fields, epidemiology and the study of cultural diffusion. This may be necessary to some extent, but in this case a focused review would only deal in detail with those studies at the interface between epidemiology and the spread of ideas.

While focus is an important feature of a successful review, this requirement has to be balanced with the need to make the review relevant to a broad audience. This square may be circled by discussing the wider implications of the reviewed topic for other disciplines.

Rule 6: Be Critical and Consistent

Reviewing the literature is not stamp collecting. A good review does not just summarize the literature, but discusses it critically, identifies methodological problems, and points out research gaps [19] . After having read a review of the literature, a reader should have a rough idea of:

  • the major achievements in the reviewed field,
  • the main areas of debate, and
  • the outstanding research questions.

It is challenging to achieve a successful review on all these fronts. A solution can be to involve a set of complementary coauthors: some people are excellent at mapping what has been achieved, some others are very good at identifying dark clouds on the horizon, and some have instead a knack for predicting where solutions are going to come from. If your journal club has exactly this sort of team, then you should definitely write a review of the literature! In addition to critical thinking, a literature review needs consistency, for example in the choice of passive vs. active voice and present vs. past tense.

Rule 7: Find a Logical Structure

Like a well-baked cake, a good review has a number of telling features: it is worth the reader's time, timely, systematic, well written, focused, and critical. It also needs a good structure. With reviews, the usual subdivision of research papers into introduction, methods, results, and discussion does not work or is rarely used. However, a general introduction of the context and, toward the end, a recapitulation of the main points covered and take-home messages make sense also in the case of reviews. For systematic reviews, there is a trend towards including information about how the literature was searched (database, keywords, time limits) [20] .

How can you organize the flow of the main body of the review so that the reader will be drawn into and guided through it? It is generally helpful to draw a conceptual scheme of the review, e.g., with mind-mapping techniques. Such diagrams can help recognize a logical way to order and link the various sections of a review [21] . This is the case not just at the writing stage, but also for readers if the diagram is included in the review as a figure. A careful selection of diagrams and figures relevant to the reviewed topic can be very helpful to structure the text too [22] .

Rule 8: Make Use of Feedback

Reviews of the literature are normally peer-reviewed in the same way as research papers, and rightly so [23] . As a rule, incorporating feedback from reviewers greatly helps improve a review draft. Having read the review with a fresh mind, reviewers may spot inaccuracies, inconsistencies, and ambiguities that had not been noticed by the writers due to rereading the typescript too many times. It is however advisable to reread the draft one more time before submission, as a last-minute correction of typos, leaps, and muddled sentences may enable the reviewers to focus on providing advice on the content rather than the form.

Feedback is vital to writing a good review, and should be sought from a variety of colleagues, so as to obtain a diversity of views on the draft. This may lead in some cases to conflicting views on the merits of the paper, and on how to improve it, but such a situation is better than the absence of feedback. A diversity of feedback perspectives on a literature review can help identify where the consensus view stands in the landscape of the current scientific understanding of an issue [24] .

Rule 9: Include Your Own Relevant Research, but Be Objective

In many cases, reviewers of the literature will have published studies relevant to the review they are writing. This could create a conflict of interest: how can reviewers report objectively on their own work [25] ? Some scientists may be overly enthusiastic about what they have published, and thus risk giving too much importance to their own findings in the review. However, bias could also occur in the other direction: some scientists may be unduly dismissive of their own achievements, so that they will tend to downplay their contribution (if any) to a field when reviewing it.

In general, a review of the literature should neither be a public relations brochure nor an exercise in competitive self-denial. If a reviewer is up to the job of producing a well-organized and methodical review, which flows well and provides a service to the readership, then it should be possible to be objective in reviewing one's own relevant findings. In reviews written by multiple authors, this may be achieved by assigning the review of the results of a coauthor to different coauthors.

Rule 10: Be Up-to-Date, but Do Not Forget Older Studies

Given the progressive acceleration in the publication of scientific papers, today's reviews of the literature need awareness not just of the overall direction and achievements of a field of inquiry, but also of the latest studies, so as not to become out-of-date before they have been published. Ideally, a literature review should not identify as a major research gap an issue that has just been addressed in a series of papers in press (the same applies, of course, to older, overlooked studies (“sleeping beauties” [26] )). This implies that literature reviewers would do well to keep an eye on electronic lists of papers in press, given that it can take months before these appear in scientific databases. Some reviews declare that they have scanned the literature up to a certain point in time, but given that peer review can be a rather lengthy process, a full search for newly appeared literature at the revision stage may be worthwhile. Assessing the contribution of papers that have just appeared is particularly challenging, because there is little perspective with which to gauge their significance and impact on further research and society.

Inevitably, new papers on the reviewed topic (including independently written literature reviews) will appear from all quarters after the review has been published, so that there may soon be the need for an updated review. But this is the nature of science [27] – [32] . I wish everybody good luck with writing a review of the literature.

Acknowledgments

Many thanks to M. Barbosa, K. Dehnen-Schmutz, T. Döring, D. Fontaneto, M. Garbelotto, O. Holdenrieder, M. Jeger, D. Lonsdale, A. MacLeod, P. Mills, M. Moslonka-Lefebvre, G. Stancanelli, P. Weisberg, and X. Xu for insights and discussions, and to P. Bourne, T. Matoni, and D. Smith for helpful comments on a previous draft.

Funding Statement

This work was funded by the French Foundation for Research on Biodiversity (FRB) through its Centre for Synthesis and Analysis of Biodiversity data (CESAB), as part of the NETSEED research project. The funders had no role in the preparation of the manuscript.


How to Write an Article Review

Last Updated: September 8, 2023 Fact Checked

This article was co-authored by Jake Adams. Jake Adams is an academic tutor and the owner of Simplifi EDU, a Santa Monica, California based online tutoring business offering learning resources and online tutors for academic subjects K-College, SAT & ACT prep, and college admissions applications. With over 14 years of professional tutoring experience, Jake is dedicated to providing his clients the very best online tutoring experience and access to a network of excellent undergraduate and graduate-level tutors from top colleges all over the nation. Jake holds a BS in International Business and Marketing from Pepperdine University. There are 13 references cited in this article, which can be found at the bottom of the page.

An article review is both a summary and an evaluation of another writer's article. Teachers often assign article reviews to introduce students to the work of experts in the field. Experts also are often asked to review the work of other professionals. Understanding the main points and arguments of the article is essential for an accurate summation. Logical evaluation of the article's main theme, supporting arguments, and implications for further research is an important element of a review . Here are a few guidelines for writing an article review.

Education specialist Alexander Peterman recommends: "In the case of a review, your objective should be to reflect on the effectiveness of what has already been written, rather than writing to inform your audience about a subject."

Things You Should Know

  • Read the article very closely, and then take time to reflect on your evaluation. Consider whether the article effectively achieves what it set out to.
  • Write out a full article review by completing your intro, summary, evaluation, and conclusion. Don't forget to add a title, too!
  • Proofread your review for mistakes (like grammar and usage), while also cutting down on needless information. [1]

Preparing to Write Your Review

Step 1 Understand what an article review is.

  • Article reviews present more than just an opinion. You will engage with the text to create a response to the scholarly writer's ideas. You will respond to and use ideas, theories, and research from your studies. Your critique of the article will be based on proof and your own thoughtful reasoning.
  • An article review only responds to the author's research. It typically does not provide any new research. However, if you are correcting misleading or otherwise incorrect points, some new data may be presented.
  • An article review both summarizes and evaluates the article.

Step 2 Think about the organization of the review article.

  • Summarize the article. Focus on the important points, claims, and information.
  • Discuss the positive aspects of the article. Think about what the author does well, good points she makes, and insightful observations.
  • Identify contradictions, gaps, and inconsistencies in the text. Determine if there is enough data or research included to support the author's claims. Find any unanswered questions left in the article.

Step 3 Preview the article.

  • Make note of words or issues you don't understand and questions you have.
  • Look up terms or concepts you are unfamiliar with, so you can fully understand the article. Read about concepts in-depth to make sure you understand their full context.

Step 4 Read the article closely.

  • Pay careful attention to the meaning of the article. Make sure you fully understand the article. The only way to write a good article review is to understand the article.

Step 5 Put the article into your own words.

  • With either method, make an outline of the main points made in the article and the supporting research or arguments. It is strictly a restatement of the main points of the article and does not include your opinions.
  • After putting the article in your own words, decide which parts of the article you want to discuss in your review. You can focus on the theoretical approach, the content, the presentation or interpretation of evidence, or the style. You will always discuss the main issues of the article, but you can sometimes also focus on certain aspects. This comes in handy if you want to focus the review towards the content of a course.
  • Review the summary outline to eliminate unnecessary items. Erase or cross out the less important arguments or supplemental information. Your revised summary can serve as the basis for the summary you provide at the beginning of your review.

Step 6 Write an outline of your evaluation.

  • What does the article set out to do?
  • What is the theoretical framework or assumptions?
  • Are the central concepts clearly defined?
  • How adequate is the evidence?
  • How does the article fit into the literature and field?
  • Does it advance the knowledge of the subject?
  • How clear is the author’s writing?

Don’t: include superficial opinions or your personal reaction.

Do: pay attention to your biases, so you can overcome them.

Writing the Article Review

Step 1 Come up with...

Step 2 Cite the article.

  • For example, in MLA, a citation may look like: Duvall, John N. "The (Super)Marketplace of Images: Television as Unmediated Mediation in DeLillo's White Noise ." Arizona Quarterly 50.3 (1994): 127-53. Print. [10]

Step 3 Identify the article.

  • For example: The article, "Condom use will increase the spread of AIDS," was written by Anthony Zimmerman, a Catholic priest.

Step 4 Write the introduction....

  • Your introduction should only be 10-25% of your review.
  • End the introduction with your thesis. Your thesis should address the above issues. For example: Although the author has some good points, his article is biased and contains some misinterpretation of data from others’ analysis of the effectiveness of the condom.

Step 5 Summarize the article.

  • Use direct quotes from the author sparingly.
  • Review the summary you have written. Read over your summary many times to ensure that your words are an accurate description of the author's article.

Step 6 Write your critique.

  • Support your critique with evidence from the article or other texts.
  • The summary portion is very important for your critique. You must make the author's argument clear in the summary section for your evaluation to make sense.
  • Remember, this is not where you say if you liked the article or not. You are assessing the significance and relevance of the article.
  • Use a topic sentence and supportive arguments for each opinion. For example, you might address a particular strength in the first sentence of the opinion section, followed by several sentences elaborating on the significance of the point.

Step 7 Conclude the article review.

  • This should only be about 10% of your overall essay.
  • For example: This critical review has evaluated the article "Condom use will increase the spread of AIDS" by Anthony Zimmerman. The arguments in the article show the presence of bias, prejudice, argumentative writing without supporting details, and misinformation. These points weaken the author’s arguments and reduce his credibility.

Step 8 Proofread.

  • Make sure you have identified and discussed the 3-4 key issues in the article.


  • https://writing.wisc.edu/handbook/grammarpunct/proofreading/
  • https://libguides.cmich.edu/writinghelp/articlereview
  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4548566/
  • Jake Adams. Academic Tutor & Test Prep Specialist. Expert Interview. 24 July 2020.
  • https://guides.library.queensu.ca/introduction-research/writing/critical
  • https://www.iup.edu/writingcenter/writing-resources/organization-and-structure/creating-an-outline.html
  • https://writing.umn.edu/sws/assets/pdf/quicktips/titles.pdf
  • https://owl.purdue.edu/owl/research_and_citation/mla_style/mla_formatting_and_style_guide/mla_works_cited_periodicals.html
  • https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4548565/
  • https://writingcenter.uconn.edu/wp-content/uploads/sites/593/2014/06/How_to_Summarize_a_Research_Article1.pdf
  • https://www.uis.edu/learning-hub/writing-resources/handouts/learning-hub/how-to-review-a-journal-article
  • https://writingcenter.unc.edu/tips-and-tools/editing-and-proofreading/

About This Article

Jake Adams

If you have to write an article review, read through the original article closely, taking notes and highlighting important sections as you read. Next, rewrite the article in your own words, either in a long paragraph or as an outline. Open your article review by citing the article, then write an introduction which states the article’s thesis. Next, summarize the article, followed by your opinion about whether the article was clear, thorough, and useful. Finish with a paragraph that summarizes the main points of the article and your opinions.


How to Write a Best Review Paper to Get More Citation

Review Paper Writing Guide

Dr. Sowndarya Somasundaram


Are you new to academia? Do you want to learn how to write a good review paper and gain in-depth knowledge of your domain? You are in the right place. In this article, you will learn how to write a strong review paper through a step-by-step, systematic procedure, with a sample review article format, so that your work earns more citations.

A review paper, or literature review, is a thorough, analytical examination of previously published literature. It also provides an overview of current research on a particular topic, presented in chronological order.

  • The main objective of writing a review paper is to evaluate the existing data or results, which can be done through analysis, modeling, classification, comparison, and summary.  
  • Review papers can help to identify research gaps and to explore potential areas in a particular field.
  • They help researchers draw new conclusions from already published works.
  • Any scholar, researcher, or scientist who wants to carry out research on a specific theme first reads the review articles relevant to that research area to understand the research gaps and arrive at a problem statement.
  • Writing a review article provides clarity, novelty, and contribution to the area of research, and it demands in-depth understanding of the subject and a well-structured arrangement of discussions and arguments.
  • Some journals publish only review papers, and they do not accept research articles. It is important to check the journal submission guidelines.

The difference between a review paper and a research paper is presented below.

6 Types of Review Papers

Review papers can be classified into six main categories based on their theme, as shown in the figure below.

[Figure omitted: Types of Review Papers]

The purpose of a review paper is to assess a particular research question or a theoretical or practical approach, providing readers with in-depth knowledge and a state-of-the-art understanding of the research area.

The purpose of a review paper can vary based on its specific type and the research needs it addresses.

  • Provide a unified, collective overview of the current state of knowledge on a specific research topic and provide an inclusive foundation on a research theme.
  • Identify ambiguities and contradictions in existing results or data.
  • Highlight the existing methodological approaches, research techniques, and unique perceptions.
  • Develop theoretical outlines to resolve issues in and build on published research.
  • Discuss research gaps and future perspectives.

A good review paper needs to meet three important criteria (Palmatier et al., 2017):

  • First, the area of research should be suitable for writing a review paper so that the author finds sufficient published literature.
  • Second, the review should be written with suitable literature, detailed discussion, sufficient data/results to support the interpretation, and a persuasive language style.
  • Third, a completed review paper should provide readers with substantial new and innovative insights based on a comparison of the published works.

Review papers are widely read by researchers, and this helps the author get more citations. So, it is important to learn how to write a review paper and find a journal in which to publish it.


The systematic procedural steps to write the best review paper are as follows:

Select a suitable area in your research field, formulate clear objectives, and prepare the specific research hypotheses that are to be explored.

Designing your research work is an important step for any researcher. Based on the objectives, develop a clear methodology or protocol for conducting the review.

Thorough analysis and understanding of different published works help the author to identify suitable and relevant data/results that will be used to write the paper.

The degree of analysis used to evaluate the collected data varies with the extent of the review. Examining trends, patterns, ideas, comparisons, and relationships among the studies provides deeper knowledge of that area of research.

Interpretation of results is very important for a good review paper. The author should present the discussion systematically and without ambiguity. The results can be presented in descriptive form, tables, and figures. New insights should be discussed in depth and in line with the fundamentals of the topic. Finally, the author is expected to present the limitations of the existing studies along with future perspectives.


Sample Review Article Format

Title, Abstract, and Keywords

Write an effective and suitable title, abstract, and keywords relevant to your review paper. This will maximize the visibility of your paper online so that readers can find your work. Your title and abstract should be clear, concise, appropriate, and informative.

Introduction

Present a detailed introduction to the published research, in chronological order and in your own words. Don’t merely summarize the published literature. The introduction should encourage the readers to read your paper.

Discussion of the Critical Issues

Make sure you present a critical discussion, not a descriptive summary of the topic. If there is contradictory research in your area, be sure to include an element of debate and present both sides of the argument. A good review paper can resolve the conflict between contradictory works.

Conclusion and Future Perspectives

The written review paper should achieve your objectives. Hence, it should leave the reader with a clear understanding of two questions:

What can they take away from the review paper?

What still requires further investigation in the research area?

Addressing the second question can include making suggestions for future work on the theme as part of your conclusion.

Acknowledgment

The authors can include a brief acknowledgment of any financial, instrumentation, or academic support received for the research work.

References

Citing references at appropriate places in the article is necessary to avoid plagiarism. Each journal has its own referencing style, and the references need to be listed at the end of the manuscript. The number of references in a review paper is usually higher than in a research paper.

I hope this article will give you a clear idea of how to write a review paper. Please give your valuable comments.



Writing a Literature Review


A literature review is a document or section of a document that collects key sources on a topic and discusses those sources in conversation with each other (also called synthesis ). The lit review is an important genre in many disciplines, not just literature (i.e., the study of works of literature such as novels and plays). When we say “literature review” or refer to “the literature,” we are talking about the research ( scholarship ) in a given field. You will often see the terms “the research,” “the scholarship,” and “the literature” used mostly interchangeably.

Where, when, and why would I write a lit review?

There are a number of different situations where you might write a literature review, each with slightly different expectations; different disciplines, too, have field-specific expectations for what a literature review is and does. For instance, in the humanities, authors might include more overt argumentation and interpretation of source material in their literature reviews, whereas in the sciences, authors are more likely to report study designs and results in their literature reviews; these differences reflect these disciplines’ purposes and conventions in scholarship. You should always look at examples from your own discipline and talk to professors or mentors in your field to be sure you understand your discipline’s conventions, for literature reviews as well as for any other genre.

A literature review can be a part of a research paper or scholarly article, usually falling after the introduction and before the research methods sections. In these cases, the lit review just needs to cover scholarship that is important to the issue you are writing about; sometimes it will also cover key sources that informed your research methodology.

Lit reviews can also be standalone pieces, either as assignments in a class or as publications. In a class, a lit review may be assigned to help students familiarize themselves with a topic and with scholarship in their field, get an idea of the other researchers working on the topic they’re interested in, find gaps in existing research in order to propose new projects, and/or develop a theoretical framework and methodology for later research. As a publication, a lit review usually is meant to help make other scholars’ lives easier by collecting and summarizing, synthesizing, and analyzing existing research on a topic. This can be especially helpful for students or scholars getting into a new research area, or for directing an entire community of scholars toward questions that have not yet been answered.

What are the parts of a lit review?

Most lit reviews use a basic introduction-body-conclusion structure; if your lit review is part of a larger paper, the introduction and conclusion pieces may be just a few sentences while you focus most of your attention on the body. If your lit review is a standalone piece, the introduction and conclusion take up more space and give you a place to discuss your goals, research methods, and conclusions separately from where you discuss the literature itself.

Introduction:

  • An introductory paragraph that explains what your working topic and thesis is
  • A forecast of key topics or texts that will appear in the review
  • Potentially, a description of how you found sources and how you analyzed them for inclusion and discussion in the review (more often found in published, standalone literature reviews than in lit review sections in an article or research paper)

Body:

  • Summarize and synthesize: Give an overview of the main points of each source and combine them into a coherent whole
  • Analyze and interpret: Don’t just paraphrase other researchers – add your own interpretations where possible, discussing the significance of findings in relation to the literature as a whole
  • Critically evaluate: Mention the strengths and weaknesses of your sources
  • Write in well-structured paragraphs: Use transition words and topic sentences to draw connections, comparisons, and contrasts.

Conclusion:

  • Summarize the key findings you have taken from the literature and emphasize their significance
  • Connect it back to your primary research question

How should I organize my lit review?

Lit reviews can take many different organizational patterns depending on what you are trying to accomplish with the review. Here are some examples:

  • Chronological : The simplest approach is to trace the development of the topic over time, which helps familiarize the audience with the topic (for instance if you are introducing something that is not commonly known in your field). If you choose this strategy, be careful to avoid simply listing and summarizing sources in order. Try to analyze the patterns, turning points, and key debates that have shaped the direction of the field. Give your interpretation of how and why certain developments occurred (as mentioned previously, this may not be appropriate in your discipline — check with a teacher or mentor if you’re unsure).
  • Thematic : If you have found some recurring central themes that you will continue working with throughout your piece, you can organize your literature review into subsections that address different aspects of the topic. For example, if you are reviewing literature about women and religion, key themes can include the role of women in churches and the religious attitude towards women.
  • Methodological : If your sources come from disciplines or fields that use a variety of research methods, you can organize them by the methods used, for example:
  • Qualitative versus quantitative research
  • Empirical versus theoretical scholarship
  • Divide the research by sociological, historical, or cultural sources
  • Theoretical : In many humanities articles, the literature review is the foundation for the theoretical framework. You can use it to discuss various theories, models, and definitions of key concepts. You can argue for the relevance of a specific theoretical approach or combine various theorical concepts to create a framework for your research.

What are some strategies or tips I can use while writing my lit review?

Any lit review is only as good as the research it discusses; make sure your sources are well-chosen and your research is thorough. Don’t be afraid to do more research if you discover a new thread as you’re writing. More info on the research process is available in our "Conducting Research" resources .

As you’re doing your research, create an annotated bibliography ( see our page on this type of document ). Much of the information used in an annotated bibliography can be used also in a literature review, so you’ll be not only partially drafting your lit review as you research, but also developing your sense of the larger conversation going on among scholars, professionals, and any other stakeholders in your topic.

Usually you will need to synthesize research rather than just summarizing it. This means drawing connections between sources to create a picture of the scholarly conversation on a topic over time. Many student writers struggle to synthesize because they feel they don’t have anything to add to the scholars they are citing; here are some strategies to help you:

  • It often helps to remember that the point of these kinds of syntheses is to show your readers how you understand your research, to help them read the rest of your paper.
  • Writing teachers often say synthesis is like hosting a dinner party: imagine all your sources are together in a room, discussing your topic. What are they saying to each other?
  • Look at the in-text citations in each paragraph. Are you citing just one source for each paragraph? This usually indicates summary only. When you have multiple sources cited in a paragraph, you are more likely to be synthesizing them (not always, but often).

The most interesting literature reviews are often written as arguments (again, as mentioned at the beginning of the page, this is discipline-specific and doesn’t work for all situations). Often, the literature review is where you can establish your research as filling a particular gap or as relevant in a particular way. You have some chance to do this in your introduction in an article, but the literature review section gives a more extended opportunity to establish the conversation in the way you would like your readers to see it. You can choose the intellectual lineage you would like to be part of and whose definitions matter most to your thinking (mostly humanities-specific, but this goes for sciences as well). In addressing these points, you argue for your place in the conversation, which tends to make the lit review more compelling than a simple reporting of other sources.

COMMUNICATION IN THE BIOLOGICAL SCIENCES Department of Biology

LITERATURE REVIEW PAPER

WHAT IS A REVIEW PAPER?


The purpose of a review paper is to succinctly review recent progress in a particular topic. Overall, the paper summarizes the current state of knowledge of the topic. It creates an understanding of the topic for the reader by discussing the findings presented in recent research papers .

A review paper is not a "term paper" or book report . It is not merely a report on some references you found. Instead, a review paper synthesizes the results from several primary literature papers to produce a coherent argument about a topic or focused description of a field.

Examples of scientific reviews can be found in:

  • Current Opinion in Cell Biology
  • Current Opinion in Genetics & Development
  • Annual Review of Plant Physiology and Plant Molecular Biology
  • Annual Review of Physiology
  • Trends in Ecology & Evolution

You should read articles from one or more of these sources to get examples of how your paper should be organized.

Scientists commonly use reviews to communicate with each other and the general public. There are a wide variety of review styles from ones aimed at a general audience (e.g., Scientific American ) to those directed at biologists within a particular subdiscipline (e.g., Annual Review of Physiology ).

A key aspect of a review paper is that it provides the evidence for a particular point of view in a field. Thus, a large focus of your paper should be a description of the data that support or refute that point of view. In addition, you should inform the reader of the experimental techniques that were used to generate the data.

The emphasis of a review paper is interpreting the primary literature on the subject.  You need to read several original research articles on the same topic and make your own conclusions about the meanings of those papers.


HOW TO WRITE THE PAPER

Overview of the Paper: Your paper should consist of four general sections:

Review articles contain neither a materials and methods section nor an abstract.

Organizing the Paper: Use topic headings. Do not use a topic heading that reads, "Body of the paper." Instead the topic headings should refer to the actual concepts or ideas covered in that section.


  • Systematic review
  • Open access
  • Published: 19 February 2024

‘It depends’: what 86 systematic reviews tell us about what strategies to use to support the use of research in clinical practice

  • Annette Boaz   ORCID: orcid.org/0000-0003-0557-1294 1 ,
  • Juan Baeza 2 ,
  • Alec Fraser   ORCID: orcid.org/0000-0003-1121-1551 2 &
  • Erik Persson 3  

Implementation Science, volume 19, Article number: 15 (2024)


The gap between research findings and clinical practice is well documented and a range of strategies have been developed to support the implementation of research into clinical practice. The objective of this study was to update and extend two previous reviews of systematic reviews of strategies designed to implement research evidence into clinical practice.

We developed a comprehensive systematic literature search strategy based on the terms used in the previous reviews to identify studies that looked explicitly at interventions designed to turn research evidence into practice. The search was performed in June 2022 in four electronic databases: Medline, Embase, Cochrane and Epistemonikos. We searched from January 2010 up to June 2022 and applied no language restrictions. Two independent reviewers appraised the quality of included studies using a quality assessment checklist. To reduce the risk of bias, papers were excluded following discussion between all members of the team. Data were synthesised using descriptive and narrative techniques to identify themes and patterns linked to intervention strategies, targeted behaviours, study settings and study outcomes.

We identified 32 reviews conducted between 2010 and 2022. The reviews are mainly of multi-faceted interventions ( n  = 20) although there are reviews focusing on single strategies (ICT, educational, reminders, local opinion leaders, audit and feedback, social media and toolkits). The majority of reviews report strategies achieving small impacts (normally on processes of care). There is much less evidence that these strategies have shifted patient outcomes. Furthermore, a lot of nuance lies behind these headline findings, and this is increasingly commented upon in the reviews themselves.

Combined with the two previous reviews, 86 systematic reviews of strategies to increase the implementation of research into clinical practice have been identified. We need to shift the emphasis away from isolating individual and multi-faceted interventions to better understanding and building more situated, relational and organisational capability to support the use of research in clinical practice. This will involve drawing on a wider range of research perspectives (including social science) in primary studies and diversifying the types of synthesis undertaken to include approaches such as realist synthesis which facilitate exploration of the context in which strategies are employed.


Contribution to the literature

Considerable time and money is invested in implementing and evaluating strategies to increase the implementation of research into clinical practice.

The growing body of evidence is not providing the anticipated clear lessons to support improved implementation.

Instead what is needed is better understanding and building more situated, relational and organisational capability to support the use of research in clinical practice.

This would involve a more central role in implementation science for a wider range of perspectives, especially from the social, economic, political and behavioural sciences and for greater use of different types of synthesis, such as realist synthesis.

Introduction

The gap between research findings and clinical practice is well documented and a range of interventions has been developed to increase the implementation of research into clinical practice [ 1 , 2 ]. In recent years researchers have worked to improve the consistency in the ways in which these interventions (often called strategies) are described to support their evaluation. One notable development has been the emergence of Implementation Science as a field focusing explicitly on “the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice” ([ 3 ] p. 1). The work of implementation science focuses on closing, or at least narrowing, the gap between research and practice. One contribution has been to map existing interventions, identifying 73 discrete strategies to support research implementation [ 4 ] which have been grouped into 9 clusters [ 5 ]. The authors note that they have not considered the evidence of effectiveness of the individual strategies and that a next step is to understand better which strategies perform best in which combinations and for what purposes [ 4 ]. Other authors have noted that there is also scope to learn more from other related fields of study such as policy implementation [ 6 ] and to draw on methods designed to support the evaluation of complex interventions [ 7 ].

The increase in activity designed to support the implementation of research into practice and improvements in reporting provided the impetus for an update of a review of systematic reviews of the effectiveness of interventions designed to support the use of research in clinical practice [ 8 ] which was itself an update of the review conducted by Grimshaw and colleagues in 2001. The 2001 review [ 9 ] identified 41 reviews considering a range of strategies including educational interventions, audit and feedback, computerised decision support to financial incentives and combined interventions. The authors concluded that all the interventions had the potential to promote the uptake of evidence in practice, although no one intervention seemed to be more effective than the others in all settings. They concluded that combined interventions were more likely to be effective than single interventions. The 2011 review identified a further 13 systematic reviews containing 313 discrete primary studies. Consistent with the previous review, four main strategy types were identified: audit and feedback; computerised decision support; opinion leaders; and multi-faceted interventions (MFIs). Nine of the reviews reported on MFIs. The review highlighted the small effects of single interventions such as audit and feedback, computerised decision support and opinion leaders. MFIs claimed an improvement in effectiveness over single interventions, although effect sizes remained small to moderate and this improvement in effectiveness relating to MFIs has been questioned in a subsequent review [ 10 ]. In updating the review, we anticipated a larger pool of reviews and an opportunity to consolidate learning from more recent systematic reviews of interventions.

Methods

This review updates and extends our previous review of systematic reviews of interventions designed to implement research evidence into clinical practice. To identify potentially relevant peer-reviewed research papers, we developed a comprehensive systematic literature search strategy based on the terms used in the Grimshaw et al. [ 9 ] and Boaz, Baeza and Fraser [ 8 ] overview articles. To ensure optimal retrieval, our search strategy was refined with support from an expert university librarian, considering the ongoing improvements in the development of search filters for systematic reviews since our first review [ 11 ]. We also wanted to include technology-related terms (e.g. apps, algorithms, machine learning, artificial intelligence) to find studies that explored interventions based on the use of technological innovations as mechanistic tools for increasing the use of evidence in practice (see Additional file 1 : Appendix A for full search strategy).

The search was performed in June 2022 in the following electronic databases: Medline, Embase, Cochrane and Epistemonikos. We searched for articles published since the 2011 review. We searched from January 2010 up to June 2022 and applied no language restrictions. Reference lists of relevant papers were also examined.

We uploaded the results to EPPI-Reviewer, a web-based tool that facilitated semi-automation of the screening process and removal of duplicate studies. We made particular use of a priority screening function to reduce screening workload and avoid ‘data deluge’ [ 12 ]. Through machine learning, one reviewer screened a smaller number of records ( n  = 1200) to train the software to predict whether a given record was more likely to be relevant or irrelevant, thus pulling the relevant studies towards the beginning of the screening process. This automation did not replace manual work but helped the reviewer to identify eligible studies more quickly. During the selection process, we included studies that looked explicitly at interventions designed to turn research evidence into practice. Studies were included if they met the following pre-determined inclusion criteria:

The study was a systematic review

Search terms were reported

Focused on the implementation of research evidence into practice

The methodological quality of the included studies was assessed as part of the review

Study populations included healthcare providers and patients.

The EPOC taxonomy [ 13 ] was used to categorise the strategies. The EPOC taxonomy has four domains: delivery arrangements, financial arrangements, governance arrangements and implementation strategies. The implementation strategies domain includes 20 strategies targeted at healthcare workers. Numerous EPOC strategies were assessed in the review, including educational strategies, local opinion leaders, reminders, ICT-focused approaches and audit and feedback. Some strategies that did not fit easily within the EPOC categories were also included: social media strategies, toolkits and multi-faceted interventions (MFIs) (see Table 2). Some systematic reviews included comparisons of different interventions while other reviews compared one type of intervention against a control group. Outcomes related to improvements in health care processes or patient well-being. Numerous individual study types (RCT, CCT, BA, ITS) were included within the systematic reviews.

We excluded papers that:

Focused on changing patient rather than provider behaviour

Had no demonstrable outcomes

Made unclear or no reference to research evidence

The last of these criteria was sometimes difficult to judge, and there was considerable discussion amongst the research team as to whether the link between research evidence and practice was sufficiently explicit in the interventions analysed. As we discussed in the previous review [ 8 ], in the field of healthcare the principle of evidence-based practice is widely acknowledged, and tools to change behaviour, such as guidelines, are often seen as an implicit codification of evidence, despite the fact that this is not always the case.

Reviewers employed a two-stage process to select papers for inclusion. First, all titles and abstracts were screened by one reviewer to determine whether the study met the inclusion criteria. Two papers [ 14 , 15 ] were identified that fell just before the 2010 cut-off. As they were not identified in the searches for the first review [ 8 ], they were included and progressed to assessment. Each paper was rated as include, exclude or maybe. The full texts of 111 relevant papers were assessed independently by at least two authors. To reduce the risk of bias, papers were excluded following discussion between all members of the team. Thirty-two papers met the inclusion criteria and proceeded to data extraction. The study selection procedure is documented in a PRISMA literature flow diagram (see Fig. 1). We were able to include French, Spanish and Portuguese papers in the selection, reflecting the language skills within the study team, but none of the papers identified met the inclusion criteria. Other non-English language papers were excluded.

Figure 1: PRISMA flow diagram. Source: authors

One reviewer extracted data on strategy type, number of included studies, location, target population, effectiveness and scope of impact from the included studies. Two reviewers then independently read each paper and noted key findings and broad themes of interest, which were then discussed amongst the wider authorial team. Two independent reviewers appraised the quality of included studies using a Quality Assessment Checklist based on Oxman and Guyatt [ 16 ] and Francke et al. [ 17 ]. Each study was assigned a quality score ranging from 1 (extensive flaws) to 7 (minimal flaws) (see Additional file 2 : Appendix B). All disagreements were resolved through discussion. Studies were not excluded from this updated overview on the basis of methodological quality, as we aimed to reflect the full extent of current research into this topic.

The extracted data were synthesised using descriptive and narrative techniques to identify themes and patterns in the data linked to intervention strategies, targeted behaviours, study settings and study outcomes.

Results

Thirty-two studies were included in the systematic review. Table 1 provides a detailed overview of the included systematic reviews, comprising reference, strategy type, quality score, number of included studies, location, target population, effectiveness and scope of impact (see Table 1 at the end of the manuscript). Overall, the quality of the studies was high. Twenty-three studies scored 7, six studies scored 6, one study scored 5, one study scored 4 and one study scored 3. The primary focus of the review was on reviews of effectiveness studies, but a small number of reviews did include data from a wider range of methods, including qualitative studies, which added to the analysis in the papers [ 18 , 19 , 20 , 21 ]. The majority of reviews report strategies achieving small impacts (normally on processes of care). There is much less evidence that these strategies have shifted patient outcomes. In this section, we discuss the different EPOC-defined implementation strategies in turn. Interestingly, we found only two ‘new’ approaches in this review that did not fit into the existing EPOC approaches: a review focused on the use of social media and a review considering toolkits. In addition to single interventions, we also discuss multi-faceted interventions, which were the most common intervention approach overall. A summary is provided in Table 2.

Educational strategies

The overview identified three systematic reviews focusing on educational strategies. Grudniewicz et al. [ 22 ] explored the effectiveness of printed educational materials on primary care physician knowledge, behaviour and patient outcomes and concluded they were not effective in any of these aspects. Koota, Kääriäinen and Melender [ 23 ] focused on educational interventions promoting evidence-based practice among emergency room/accident and emergency nurses and found that interventions involving face-to-face contact led to significant or highly significant effects on patient benefits and emergency nurses’ knowledge, skills and behaviour. Interventions using written self-directed learning materials also led to significant improvements in nurses’ knowledge of evidence-based practice. Although the quality of the studies was high, the review primarily included small studies with low response rates, and many of them relied on self-assessed outcomes; consequently, the strength of the evidence for these outcomes is modest. Wu et al. [ 20 ] questioned if educational interventions aimed at nurses to support the implementation of evidence-based practice improve patient outcomes. Although based on evaluation projects and qualitative data, their results also suggest that positive changes on patient outcomes can be made following the implementation of specific evidence-based approaches (or projects). The differing positive outcomes for educational strategies aimed at nurses might indicate that the target audience is important.

Local opinion leaders

Flodgren et al. [ 24 ] was the only systematic review focusing solely on opinion leaders. The review found that local opinion leaders alone, or in combination with other interventions, can be effective in promoting evidence‐based practice, but this varies both within and between studies and the effect on patient outcomes is uncertain. The review found that, overall, any intervention involving opinion leaders probably improves healthcare professionals’ compliance with evidence-based practice, but that this varies within and across studies. However, how opinion leaders had an impact could not be determined because insufficient details were provided, illustrating that reporting specific details in published studies is important if effective methods of increasing evidence-based practice are to be spread across a system. The usefulness of this review is questionable because it cannot provide evidence of what makes an effective opinion leader, whether teams of opinion leaders or a single opinion leader are most effective, or the most effective methods used by opinion leaders.

Reminders

Pantoja et al. [ 26 ] was the only systematic review included in the overview focusing solely on manually generated reminders delivered on paper. The review explored how these affected professional practice and patient outcomes. It concluded that manually generated reminders delivered on paper as a single intervention probably led to small to moderate increases in adherence to clinical recommendations and could be used as a single quality improvement intervention. However, the authors indicated that this intervention would make little or no difference to patient outcomes. The authors state that such a low-tech intervention may be useful in low- and middle-income countries where paper records are more likely to be the norm.

ICT-focused approaches

The three ICT-focused reviews [ 14 , 27 , 28 ] showed mixed results. Jamal, McKenzie and Clark [ 14 ] explored the impact of health information technology on the quality of medical and health care, examining the impact of electronic health records, computerised provider order entry and decision support systems. These showed a positive improvement in adherence to evidence-based guidelines but not in patient outcomes. The number of studies included in the review was low, so a conclusive recommendation could not be reached on this basis. Similarly, Brown et al. [ 28 ] found that technology-enabled knowledge translation interventions may improve the knowledge of health professionals, but all eight studies raised concerns of bias. The De Angelis et al. [ 27 ] review was more promising, reporting that ICT can be a good way of disseminating clinical practice guidelines but concluding that it is unclear which type of ICT method is the most effective.

Audit and feedback

Sykes, McAnuff and Kolehmainen [ 29 ] examined whether audit and feedback were effective in dementia care and concluded that it remains unclear which ingredients of audit and feedback are successful, as the reviewed papers illustrated large variations in the effectiveness of interventions using audit and feedback.

Non-EPOC listed strategies: social media, toolkits

There were two new (non-EPOC listed) intervention types identified in this review compared to the 2011 review, fewer than anticipated. We categorised a third, ‘care bundles’ [ 36 ], as a multi-faceted intervention because of how it was described in practice, and a fourth, ‘Technology Enhanced Knowledge Transfer’ [ 28 ], as an ICT-focused approach. The first new strategy was identified in Bhatt et al.’s [ 30 ] systematic review of the use of social media for the dissemination of clinical practice guidelines. They reported that the use of social media resulted in a significant improvement in knowledge of, and compliance with, evidence-based guidelines compared with more traditional methods. They noted that a wide selection of different healthcare professionals and patients engaged with this type of social media, and that its global reach may be significant for low- and middle-income countries. This review was also noteworthy for developing a simple stepwise method for using social media for the dissemination of clinical practice guidelines. However, it is debatable whether social media can be classified as an intervention or just a different way of delivering an intervention. For example, the review discussed involving opinion leaders and patient advocates through social media. However, this was a small review that included only five studies, so further research in this new area is needed. Yamada et al. [ 31 ] draw on 39 studies to explore the application of toolkits, 18 of which had toolkits embedded within larger KT interventions and 21 of which evaluated toolkits as standalone interventions. The individual component strategies of the toolkits were highly variable, though the authors suggest that they align most closely with educational strategies. The authors conclude that toolkits, as either standalone strategies or as part of MFIs, hold some promise for facilitating evidence use in practice, but caution that the quality of many of the included primary studies is considered weak, limiting these findings.

Multi-faceted interventions

The majority of the systematic reviews ( n  = 20) reported on more than one intervention type. Some of these systematic reviews focus exclusively on multi-faceted interventions, whilst others compare different single or combined interventions aimed at achieving similar outcomes in particular settings. While these two approaches are often described in a similar way, they are actually quite distinct from each other, as the former report how multiple strategies may be strategically combined in pursuance of an agreed goal, whilst the latter report how different strategies may be incidentally used in sometimes contrasting settings in the pursuance of similar goals. Ariyo et al. [ 35 ] helpfully summarise five key elements often found in effective MFI strategies in LMICs, which may also be transferable to HICs. First, effective MFIs encourage a multi-disciplinary approach, acknowledging the roles played by different professional groups in collectively incorporating evidence-informed practice. Second, they utilise leadership, drawing on a wide set of clinical and non-clinical actors including managers and even government officials. Third, multiple types of educational practices are utilised, including input from patients as stakeholders in some cases. Fourth, protocols, checklists and bundles are used, most effectively when local ownership is encouraged. Finally, most MFIs included an emphasis on monitoring and evaluation [ 35 ]. In contrast, other studies offer little information about the nature of the different MFI components of included studies, which makes it difficult to extrapolate much learning from them in relation to why or how MFIs might affect practice (e.g. [ 28 , 38 ]). Ultimately, context matters, which some review authors argue makes it difficult to say with real certainty whether single or MFI strategies are superior (e.g. [ 21 , 27 ]). Taking all the systematic reviews together, we may conclude that MFIs appear to be more likely to generate positive results than single interventions (e.g. [ 34 , 45 ]), though other reviews should make us cautious (e.g. [ 32 , 43 ]).

While multi-faceted interventions still seem to be more effective than single-strategy interventions, there were important distinctions between how the results of reviews of MFIs are interpreted in this review as compared to the previous reviews [ 8 , 9 ], reflecting greater nuance and debate in the literature. This was particularly noticeable where the effectiveness of MFIs was compared to single strategies, reflecting developments widely discussed in previous studies [ 10 ]. We found that most systematic reviews are bounded by their clinical, professional, spatial, system, or setting criteria and often seek to draw out implications for the implementation of evidence in their areas of specific interest (such as nursing or acute care). Frequently, this means combining all relevant studies to explore the respective foci of each systematic review. Therefore, most reviews we categorised as MFIs actually include highly variable numbers and combinations of intervention strategies and highly heterogeneous original study designs. This makes it impossible to apply statistical analyses of the type used by Squires et al. [ 10 ] on the three reviews in their paper. Further, it also makes extrapolating findings and commenting on broad themes complex and difficult. This may suggest that future research should shift its focus from merely examining ‘what works’ to ‘what works where and what works for whom’, perhaps pointing to the value of realist approaches to these complex review topics [ 48 , 49 ] and other more theory-informed approaches [ 50 ].

Some reviews have a relatively small number of studies (i.e. fewer than 10) and the authors are often understandably reluctant to engage with wider debates about the implications of their findings. Other larger studies do engage in deeper discussions about internal comparisons of findings across included studies and also contextualise these in wider debates. Some of the most informative studies (e.g. [ 35 , 40 ]) move beyond EPOC categories and contextualise MFIs within wider systems thinking and implementation theory. This distinction between MFIs and single interventions can actually be very useful as it offers lessons about the contexts in which individual interventions might have bounded effectiveness (i.e. educational interventions for individual change). Taken as a whole, this may also then help in terms of how and when to conjoin single interventions into effective MFIs.

In the two previous reviews, a consistent finding was that MFIs were more effective than single interventions [ 8 , 9 ]. However, like Squires et al. [ 10 ], this overview is more equivocal on this important issue. There are four points which may help account for the differences in findings in this regard. Firstly, the diversity of the systematic reviews in terms of clinical topic or setting is an important factor. Secondly, there is heterogeneity of the studies within the included systematic reviews themselves. Thirdly, there is a lack of consistency with regard to the definition of MFIs and the strategies included within them. Finally, there are epistemological differences across the papers and the reviews. This means that the results that are presented depend on the methods used to measure, report, and synthesise them. For instance, some reviews highlight that education strategies can be useful to improve provider understanding but, without wider organisational or system-level change, may struggle to deliver sustained transformation [ 19 , 44 ].

It is also worth highlighting the importance of the theory of change underlying the different interventions. Where authors of the systematic reviews draw on theory, there is space to discuss and explain findings. We note a distinction between theoretical and atheoretical systematic review discussion sections. Atheoretical reviews tend to present acontextual findings (for instance, one study found very positive results for one intervention, and this gets highlighted in the abstract), whilst theoretically informed reviews attempt to contextualise and explain patterns within the included studies. Theory-informed systematic reviews seem more likely to offer more profound and useful insights (see [ 19 , 35 , 40 , 43 , 45 ]). We find that the most insightful systematic reviews of MFIs engage in theoretical generalisation: they attempt to go beyond the data of individual studies and discuss the wider implications of the findings of the studies within their reviews, drawing on implementation theory. At the same time, they highlight the active role of context and the wider relational and system-wide issues linked to implementation. It is these types of investigations that can help providers further develop evidence-based practice.

This overview has identified a small but insightful set of papers that interrogate and help theorise why, how, for whom, and in which circumstances it might be the case that MFIs are superior (see [ 19 , 35 , 40 ] once more). At the level of this overview, and in most of the systematic reviews included, it appears to be the case that MFIs struggle with the question of attribution. In addition, there are other important elements that are often unmeasured or unreported (e.g. costs of the intervention; see [ 40 ]). Finally, the stronger systematic reviews [ 19 , 35 , 40 , 43 , 45 ] engage with systems issues, human agency and context [ 18 ] in a way that was not evident in the systematic reviews identified in the previous reviews [ 8 , 9 ]. The earlier reviews lacked any theory of change that might explain why MFIs might be more effective than single interventions; some systematic reviews now do this, which enables them to conclude that sometimes single interventions can still be more effective.

As Nilsen et al. ([ 6 ] p. 7) note ‘Study findings concerning the effectiveness of various approaches are continuously synthesized and assembled in systematic reviews’. We may have gone as far as we can in understanding the implementation of evidence through systematic reviews of single and multi-faceted interventions and the next step would be to conduct more research exploring the complex and situated nature of evidence used in clinical practice and by particular professional groups. This would further build on the nuanced discussion and conclusion sections in a subset of the papers we reviewed. This might also support the field to move away from isolating individual implementation strategies [ 6 ] to explore the complex processes involving a range of actors with differing capacities [ 51 ] working in diverse organisational cultures. Taxonomies of implementation strategies do not fully account for the complex process of implementation, which involves a range of different actors with different capacities and skills across multiple system levels. There is plenty of work to build on, particularly in the social sciences, which currently sits at the margins of debates about evidence implementation (see for example, Normalisation Process Theory [ 52 ]).

There are several changes that we have identified in this overview of systematic reviews in comparison to the review we published in 2011 [ 8 ]. A consistent and welcome finding is that the overall quality of the systematic reviews themselves appears to have improved between the two reviews, although this is not reflected upon in the papers. This is exhibited through better, clearer reporting of the mechanics of the reviews, alongside greater attention to, and deeper description of, how potential biases in included papers are discussed. Additionally, there is an increased, but still limited, inclusion of original studies conducted in low- and middle-income countries as opposed to just high-income countries. Importantly, we found that many of these systematic reviews are attuned to, and comment upon, the contextual distinctions of pursuing evidence-informed interventions in health care settings in different economic contexts. Furthermore, systematic reviews included in this updated article cover a wider set of clinical specialities (both within and beyond hospital settings) and focus on a wider set of healthcare professions, discussing the similarities, differences and inter-professional challenges faced therein, compared to the earlier reviews. These wider ranges of studies highlight that a particular intervention or group of interventions may work well for one professional group but be ineffective for another. This diversity of study settings allows us to consider the important role context (in its many forms) plays in implementing evidence into practice. Examining the complex and varied context of health care will help us address what Nilsen et al. ([ 6 ] p. 1) described as ‘society’s health problems [that] require research-based knowledge acted on by healthcare practitioners together with implementation of political measures from governmental agencies’. This will help implementation science move ‘beyond a success or failure perspective towards improved analysis of variables that could explain the impact of the implementation process’ ([ 6 ] p. 2).

This review brings together 32 papers considering individual and multi-faceted interventions designed to support the use of evidence in clinical practice. The majority of reviews report strategies achieving small impacts (normally on processes of care). There is much less evidence that these strategies have shifted patient outcomes. Combined with the two previous reviews, 86 systematic reviews of strategies to increase the implementation of research into clinical practice have been conducted. As a whole, this substantial body of knowledge struggles to tell us more about the use of individual interventions and MFIs than ‘it depends’. To really move forwards in addressing the gap between research evidence and practice, we may need to shift the emphasis away from isolating individual and multi-faceted interventions towards better understanding and building more situated, relational and organisational capability to support the use of research in clinical practice. This will involve drawing on a wider range of perspectives, especially from the social, economic, political and behavioural sciences, in primary studies and diversifying the types of synthesis undertaken to include approaches such as realist synthesis, which facilitate exploration of the context in which strategies are employed. Harvey et al. [ 53 ] suggest that when context is likely to be critical to implementation success, there is a range of primary research approaches (participatory research, realist evaluation, developmental evaluation, ethnography, quality/rapid cycle improvement) that are likely to be appropriate and insightful. While these approaches often form part of implementation studies in the form of process evaluations, they are usually relatively small scale in relation to implementation research as a whole. As a result, the findings often do not make it into the subsequent systematic reviews. This review provides further evidence that we need to bring qualitative approaches in from the periphery to play a central role in many implementation studies and subsequent evidence syntheses. It would be helpful for systematic reviews, at the very least, to include more detail about the interventions and their implementation in terms of how and why they worked.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

BA: Before and after study

CCT: Controlled clinical trial

EPOC: Effective Practice and Organisation of Care

HICs: High-income countries

ICT: Information and Communications Technology

ITS: Interrupted time series

KT: Knowledge translation

LMICs: Low- and middle-income countries

RCT: Randomised controlled trial

References

Grol R, Grimshaw J. From best evidence to best practice: effective implementation of change in patients’ care. Lancet. 2003;362:1225–30. https://doi.org/10.1016/S0140-6736(03)14546-1.

Green LA, Seifert CM. Translation of research into practice: why we can’t “just do it.” J Am Board Fam Pract. 2005;18:541–5. https://doi.org/10.3122/jabfm.18.6.541 .

Eccles MP, Mittman BS. Welcome to Implementation Science. Implement Sci. 2006;1:1–3. https://doi.org/10.1186/1748-5908-1-1 .

Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:2–14. https://doi.org/10.1186/s13012-015-0209-1 .

Waltz TJ, Powell BJ, Matthieu MM, Damschroder LJ, et al. Use of concept mapping to characterize relationships among implementation strategies and assess their feasibility and importance: results from the Expert Recommendations for Implementing Change (ERIC) study. Implement Sci. 2015;10:1–8. https://doi.org/10.1186/s13012-015-0295-0 .

Nilsen P, Ståhl C, Roback K, et al. Never the twain shall meet? - a comparison of implementation science and policy implementation research. Implementation Sci. 2013;8:2–12. https://doi.org/10.1186/1748-5908-8-63 .

Rycroft-Malone J, Seers K, Eldh AC, et al. A realist process evaluation within the Facilitating Implementation of Research Evidence (FIRE) cluster randomised controlled international trial: an exemplar. Implementation Sci. 2018;13:1–15. https://doi.org/10.1186/s13012-018-0811-0 .

Boaz A, Baeza J, Fraser A, European Implementation Score Collaborative Group (EIS). Effective implementation of research into practice: an overview of systematic reviews of the health literature. BMC Res Notes. 2011;4:212. https://doi.org/10.1186/1756-0500-4-212 .

Grimshaw JM, Shirran L, Thomas R, Mowatt G, Fraser C, Bero L, et al. Changing provider behavior – an overview of systematic reviews of interventions. Med Care. 2001;39(8 Suppl 2):II2–45.

Squires JE, Sullivan K, Eccles MP, et al. Are multifaceted interventions more effective than single-component interventions in changing health-care professionals’ behaviours? An overview of systematic reviews. Implement Sci. 2014;9:1–22. https://doi.org/10.1186/s13012-014-0152-6 .

Salvador-Oliván JA, Marco-Cuenca G, Arquero-Avilés R. Development of an efficient search filter to retrieve systematic reviews from PubMed. J Med Libr Assoc. 2021;109:561–74. https://doi.org/10.5195/jmla.2021.1223 .

Thomas JM. Diffusion of innovation in systematic review methodology: why is study selection not yet assisted by automation? OA Evid Based Med. 2013;1:1–6.

Effective Practice and Organisation of Care (EPOC). The EPOC taxonomy of health systems interventions. EPOC Resources for review authors. Oslo: Norwegian Knowledge Centre for the Health Services; 2016. epoc.cochrane.org/epoc-taxonomy . Accessed 9 Oct 2023.

Jamal A, McKenzie K, Clark M. The impact of health information technology on the quality of medical and health care: a systematic review. Health Inf Manag. 2009;38:26–37. https://doi.org/10.1177/183335830903800305 .

Menon A, Korner-Bitensky N, Kastner M, et al. Strategies for rehabilitation professionals to move evidence-based knowledge into practice: a systematic review. J Rehabil Med. 2009;41:1024–32. https://doi.org/10.2340/16501977-0451 .

Oxman AD, Guyatt GH. Validation of an index of the quality of review articles. J Clin Epidemiol. 1991;44:1271–8. https://doi.org/10.1016/0895-4356(91)90160-b .

Francke AL, Smit MC, de Veer AJ, et al. Factors influencing the implementation of clinical guidelines for health care professionals: a systematic meta-review. BMC Med Inform Decis Mak. 2008;8:1–11. https://doi.org/10.1186/1472-6947-8-38 .

Jones CA, Roop SC, Pohar SL, et al. Translating knowledge in rehabilitation: systematic review. Phys Ther. 2015;95:663–77. https://doi.org/10.2522/ptj.20130512 .

Scott D, Albrecht L, O’Leary K, Ball GDC, et al. Systematic review of knowledge translation strategies in the allied health professions. Implement Sci. 2012;7:1–17. https://doi.org/10.1186/1748-5908-7-70 .

Wu Y, Brettle A, Zhou C, Ou J, et al. Do educational interventions aimed at nurses to support the implementation of evidence-based practice improve patient outcomes? A systematic review. Nurse Educ Today. 2018;70:109–14. https://doi.org/10.1016/j.nedt.2018.08.026 .

Yost J, Ganann R, Thompson D, Aloweni F, et al. The effectiveness of knowledge translation interventions for promoting evidence-informed decision-making among nurses in tertiary care: a systematic review and meta-analysis. Implement Sci. 2015;10:1–15. https://doi.org/10.1186/s13012-015-0286-1 .

Grudniewicz A, Kealy R, Rodseth RN, Hamid J, et al. What is the effectiveness of printed educational materials on primary care physician knowledge, behaviour, and patient outcomes: a systematic review and meta-analyses. Implement Sci. 2015;10:2–12. https://doi.org/10.1186/s13012-015-0347-5 .

Koota E, Kääriäinen M, Melender HL. Educational interventions promoting evidence-based practice among emergency nurses: a systematic review. Int Emerg Nurs. 2018;41:51–8. https://doi.org/10.1016/j.ienj.2018.06.004 .

Flodgren G, O’Brien MA, Parmelli E, et al. Local opinion leaders: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2019. https://doi.org/10.1002/14651858.CD000125.pub5 .

Arditi C, Rège-Walther M, Durieux P, et al. Computer-generated reminders delivered on paper to healthcare professionals: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2017. https://doi.org/10.1002/14651858.CD001175.pub4 .

Pantoja T, Grimshaw JM, Colomer N, et al. Manually-generated reminders delivered on paper: effects on professional practice and patient outcomes. Cochrane Database Syst Rev. 2019. https://doi.org/10.1002/14651858.CD001174.pub4 .

De Angelis G, Davies B, King J, McEwan J, et al. Information and communication technologies for the dissemination of clinical practice guidelines to health professionals: a systematic review. JMIR Med Educ. 2016;2:e16. https://doi.org/10.2196/mededu.6288 .

Brown A, Barnes C, Byaruhanga J, McLaughlin M, et al. Effectiveness of technology-enabled knowledge translation strategies in improving the use of research in public health: systematic review. J Med Internet Res. 2020;22:e17274. https://doi.org/10.2196/17274 .

Sykes MJ, McAnuff J, Kolehmainen N. When is audit and feedback effective in dementia care? A systematic review. Int J Nurs Stud. 2018;79:27–35. https://doi.org/10.1016/j.ijnurstu.2017.10.013 .

Bhatt NR, Czarniecki SW, Borgmann H, et al. A systematic review of the use of social media for dissemination of clinical practice guidelines. Eur Urol Focus. 2021;7:1195–204. https://doi.org/10.1016/j.euf.2020.10.008 .

Yamada J, Shorkey A, Barwick M, Widger K, et al. The effectiveness of toolkits as knowledge translation strategies for integrating evidence into clinical care: a systematic review. BMJ Open. 2015;5:e006808. https://doi.org/10.1136/bmjopen-2014-006808 .

Afari-Asiedu S, Abdulai MA, Tostmann A, et al. Interventions to improve dispensing of antibiotics at the community level in low and middle income countries: a systematic review. J Glob Antimicrob Resist. 2022;29:259–74. https://doi.org/10.1016/j.jgar.2022.03.009 .

Boonacker CW, Hoes AW, Dikhoff MJ, Schilder AG, et al. Interventions in health care professionals to improve treatment in children with upper respiratory tract infections. Int J Pediatr Otorhinolaryngol. 2010;74:1113–21. https://doi.org/10.1016/j.ijporl.2010.07.008 .

Al Zoubi FM, Menon A, Mayo NE, et al. The effectiveness of interventions designed to increase the uptake of clinical practice guidelines and best practices among musculoskeletal professionals: a systematic review. BMC Health Serv Res. 2018;18:2–11. https://doi.org/10.1186/s12913-018-3253-0 .

Ariyo P, Zayed B, Riese V, Anton B, et al. Implementation strategies to reduce surgical site infections: a systematic review. Infect Control Hosp Epidemiol. 2019;3:287–300. https://doi.org/10.1017/ice.2018.355 .

Borgert MJ, Goossens A, Dongelmans DA. What are effective strategies for the implementation of care bundles on ICUs: a systematic review. Implement Sci. 2015;10:1–11. https://doi.org/10.1186/s13012-015-0306-1 .

Cahill LS, Carey LM, Lannin NA, et al. Implementation interventions to promote the uptake of evidence-based practices in stroke rehabilitation. Cochrane Database Syst Rev. 2020. https://doi.org/10.1002/14651858.CD012575.pub2 .

Pedersen ER, Rubenstein L, Kandrack R, Danz M, et al. Elusive search for effective provider interventions: a systematic review of provider interventions to increase adherence to evidence-based treatment for depression. Implement Sci. 2018;13:1–30. https://doi.org/10.1186/s13012-018-0788-8 .

Jenkins HJ, Hancock MJ, French SD, Maher CG, et al. Effectiveness of interventions designed to reduce the use of imaging for low-back pain: a systematic review. CMAJ. 2015;187:401–8. https://doi.org/10.1503/cmaj.141183 .

Bennett S, Laver K, MacAndrew M, Beattie E, et al. Implementation of evidence-based, non-pharmacological interventions addressing behavior and psychological symptoms of dementia: a systematic review focused on implementation strategies. Int Psychogeriatr. 2021;33:947–75. https://doi.org/10.1017/S1041610220001702 .

Noonan VK, Wolfe DL, Thorogood NP, et al. Knowledge translation and implementation in spinal cord injury: a systematic review. Spinal Cord. 2014;52:578–87. https://doi.org/10.1038/sc.2014.62 .

Albrecht L, Archibald M, Snelgrove-Clarke E, et al. Systematic review of knowledge translation strategies to promote research uptake in child health settings. J Pediatr Nurs. 2016;31:235–54. https://doi.org/10.1016/j.pedn.2015.12.002 .

Campbell A, Louie-Poon S, Slater L, et al. Knowledge translation strategies used by healthcare professionals in child health settings: an updated systematic review. J Pediatr Nurs. 2019;47:114–20. https://doi.org/10.1016/j.pedn.2019.04.026 .

Bird ML, Miller T, Connell LA, et al. Moving stroke rehabilitation evidence into practice: a systematic review of randomized controlled trials. Clin Rehabil. 2019;33:1586–95. https://doi.org/10.1177/0269215519847253 .

Goorts K, Dizon J, Milanese S. The effectiveness of implementation strategies for promoting evidence informed interventions in allied healthcare: a systematic review. BMC Health Serv Res. 2021;21:1–11. https://doi.org/10.1186/s12913-021-06190-0 .

Zadro JR, O’Keeffe M, Allison JL, Lembke KA, et al. Effectiveness of implementation strategies to improve adherence of physical therapist treatment choices to clinical practice guidelines for musculoskeletal conditions: systematic review. Phys Ther. 2020;100:1516–41. https://doi.org/10.1093/ptj/pzaa101 .

Van der Veer SN, Jager KJ, Nache AM, et al. Translating knowledge on best practice into improving quality of RRT care: a systematic review of implementation strategies. Kidney Int. 2011;80:1021–34. https://doi.org/10.1038/ki.2011.222 .

Pawson R, Greenhalgh T, Harvey G, et al. Realist review–a new method of systematic review designed for complex policy interventions. J Health Serv Res Policy. 2005;10(Suppl 1):21–34. https://doi.org/10.1258/1355819054308530.

Rycroft-Malone J, McCormack B, Hutchinson AM, et al. Realist synthesis: illustrating the method for implementation research. Implementation Sci. 2012;7:1–10. https://doi.org/10.1186/1748-5908-7-33 .

Johnson MJ, May CR. Promoting professional behaviour change in healthcare: what interventions work, and why? A theory-led overview of systematic reviews. BMJ Open. 2015;5:e008592. https://doi.org/10.1136/bmjopen-2015-008592 .

Metz A, Jensen T, Farley A, Boaz A, et al. Is implementation research out of step with implementation practice? Pathways to effective implementation support over the last decade. Implement Res Pract. 2022;3:1–11. https://doi.org/10.1177/26334895221105585 .

May CR, Finch TL, Cornford J, Exley C, et al. Integrating telecare for chronic disease management in the community: What needs to be done? BMC Health Serv Res. 2011;11:1–11. https://doi.org/10.1186/1472-6963-11-131 .

Harvey G, Rycroft-Malone J, Seers K, Wilson P, et al. Connecting the science and practice of implementation – applying the lens of context to inform study design in implementation research. Front Health Serv. 2023;3:1–15. https://doi.org/10.3389/frhs.2023.1162762 .

Acknowledgements

The authors would like to thank Professor Kathryn Oliver for her support in planning the review, Professor Steve Hanney for reading and commenting on the final manuscript and the staff at LSHTM library for their support in planning and conducting the literature search.

This study was supported by LSHTM’s Research England QR strategic priorities funding allocation and the National Institute for Health and Care Research (NIHR) Applied Research Collaboration South London (NIHR ARC South London) at King’s College Hospital NHS Foundation Trust. Grant number NIHR200152. The views expressed are those of the author(s) and not necessarily those of the NIHR, the Department of Health and Social Care or Research England.

Author information

Authors and affiliations

Health and Social Care Workforce Research Unit, The Policy Institute, King’s College London, Virginia Woolf Building, 22 Kingsway, London, WC2B 6LE, UK

Annette Boaz

King’s Business School, King’s College London, 30 Aldwych, London, WC2B 4BG, UK

Juan Baeza & Alec Fraser

Federal University of Santa Catarina (UFSC), Campus Universitário Reitor João Davi Ferreira Lima, Florianópolis, SC, 88.040-900, Brazil

Erik Persson

Contributions

AB led the conceptual development and structure of the manuscript. EP conducted the searches and data extraction. All authors contributed to screening and quality appraisal. EP and AF wrote the first draft of the methods section. AB, JB and AF performed result synthesis and contributed to the analyses. AB wrote the first draft of the manuscript and incorporated feedback and revisions from all other authors. All authors revised and approved the final manuscript.

Corresponding author

Correspondence to Annette Boaz.

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Appendix A.

Additional file 2: Appendix B.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Boaz, A., Baeza, J., Fraser, A. et al. ‘It depends’: what 86 systematic reviews tell us about what strategies to use to support the use of research in clinical practice. Implementation Sci 19, 15 (2024). https://doi.org/10.1186/s13012-024-01337-z

Received: 01 November 2023

Accepted: 05 January 2024

Published: 19 February 2024

DOI: https://doi.org/10.1186/s13012-024-01337-z


Keywords

  • Implementation
  • Interventions
  • Clinical practice
  • Research evidence
  • Multi-faceted


Effect of exercise for depression: systematic review and network meta-analysis of randomised controlled trials

Linked editorial

Exercise for the treatment of depression

  • Michael Noetel , senior lecturer 1 ,
  • Taren Sanders , senior research fellow 2 ,
  • Daniel Gallardo-Gómez , doctoral student 3 ,
  • Paul Taylor , deputy head of school 4 ,
  • Borja del Pozo Cruz , associate professor 5 6 ,
  • Daniel van den Hoek , senior lecturer 7 ,
  • Jordan J Smith , senior lecturer 8 ,
  • John Mahoney , senior lecturer 9 ,
  • Jemima Spathis , senior lecturer 9 ,
  • Mark Moresi , lecturer 4 ,
  • Rebecca Pagano , senior lecturer 10 ,
  • Lisa Pagano , postdoctoral fellow 11 ,
  • Roberta Vasconcellos , doctoral student 2 ,
  • Hugh Arnott , masters student 2 ,
  • Benjamin Varley , doctoral student 12 ,
  • Philip Parker , pro vice chancellor research 13 ,
  • Stuart Biddle , professor 14 15 ,
  • Chris Lonsdale , deputy provost 13
  • 1 School of Psychology, University of Queensland, St Lucia, QLD 4072, Australia
  • 2 Institute for Positive Psychology and Education, Australian Catholic University, North Sydney, NSW, Australia
  • 3 Department of Physical Education and Sport, University of Seville, Seville, Spain
  • 4 School of Health and Behavioural Sciences, Australian Catholic University, Strathfield, NSW, Australia
  • 5 Department of Clinical Biomechanics and Sports Science, University of Southern Denmark, Odense, Denmark
  • 6 Biomedical Research and Innovation Institute of Cádiz (INiBICA) Research Unit, University of Cádiz, Spain
  • 7 School of Health and Behavioural Sciences, University of the Sunshine Coast, Petrie, QLD, Australia
  • 8 School of Education, University of Newcastle, Callaghan, NSW, Australia
  • 9 School of Health and Behavioural Sciences, Australian Catholic University, Banyo, QLD, Australia
  • 10 School of Education, Australian Catholic University, Strathfield, NSW, Australia
  • 11 Australian Institute of Health Innovation, Macquarie University, Macquarie Park, NSW, Australia
  • 12 Children’s Hospital Westmead Clinical School, University of Sydney, Westmead, NSW, Australia
  • 13 Australian Catholic University, North Sydney, NSW, Australia
  • 14 Centre for Health Research, University of Southern Queensland, Springfield, QLD, Australia
  • 15 Faculty of Sport and Health Science, University of Jyvaskyla, Jyvaskyla, Finland
  • Correspondence to: M Noetel m.noetel{at}uq.edu.au (or @mnoetel on Twitter)
  • Accepted 15 January 2024

Objective To identify the optimal dose and modality of exercise for treating major depressive disorder, compared with psychotherapy, antidepressants, and control conditions.

Design Systematic review and network meta-analysis.

Methods Screening, data extraction, coding, and risk of bias assessment were performed independently and in duplicate. Bayesian arm based, multilevel network meta-analyses were performed for the primary analyses. Quality of the evidence for each arm was graded using the confidence in network meta-analysis (CINeMA) online tool.

Data sources Cochrane Library, Medline, Embase, SPORTDiscus, and PsycINFO databases.

Eligibility criteria for selecting studies Any randomised trial with exercise arms for participants meeting clinical cut-offs for major depression.

Results 218 unique studies with a total of 495 arms and 14 170 participants were included. Compared with active controls (eg, usual care, placebo tablet), moderate reductions in depression were found for walking or jogging (n=1210, κ=51, Hedges’ g −0.62, 95% credible interval −0.80 to −0.45), yoga (n=1047, κ=33, g −0.55, −0.73 to −0.36), strength training (n=643, κ=22, g −0.49, −0.69 to −0.29), mixed aerobic exercises (n=1286, κ=51, g −0.43, −0.61 to −0.24), and tai chi or qigong (n=343, κ=12, g −0.42, −0.65 to −0.21). The effects of exercise were proportional to the intensity prescribed. Strength training and yoga appeared to be the most acceptable modalities. Results appeared robust to publication bias, but only one study met the Cochrane criteria for low risk of bias. As a result, confidence in accordance with CINeMA was low for walking or jogging and very low for other treatments.

Conclusions Exercise is an effective treatment for depression, with walking or jogging, yoga, and strength training more effective than other exercises, particularly when intense. Yoga and strength training were well tolerated compared with other treatments. Exercise appeared equally effective for people with and without comorbidities and with different baseline levels of depression. To mitigate expectancy effects, future studies could aim to blind participants and staff. These forms of exercise could be considered alongside psychotherapy and antidepressants as core treatments for depression.

Systematic review registration PROSPERO CRD42018118040.


Introduction

Major depressive disorder is a leading cause of disability worldwide 1 and has been found to lower life satisfaction more than debt, divorce, and diabetes 2 and to exacerbate comorbidities, including heart disease, 3 anxiety, 4 and cancer. 5 Although people with major depressive disorder often respond well to drug treatments and psychotherapy, 6 7 many are resistant to treatment. 8 In addition, access to treatment for many people with depression is limited, with only 51% treatment coverage for high income countries and 20% for low and lower-middle income countries. 9 More evidence based treatments are therefore needed.

Exercise may be an effective complement or alternative to drugs and psychotherapy. 10 11 12 13 14 In addition to mental health benefits, exercise also improves a range of physical and cognitive outcomes. 15 16 17 Clinical practice guidelines in the US, UK, and Australia recommend physical activity as part of treatment for depression. 18 19 20 21 But these guidelines do not provide clear, consistent recommendations about dose or exercise modality. British guidelines recommend group exercise programmes 20 21 and offer general recommendations to increase any form of physical activity, 21 the American Psychiatric Association recommends any dose of aerobic exercise or resistance training, 20 and Australian and New Zealand guidelines suggest a combination of strength and vigorous aerobic exercises, with at least two or three bouts weekly. 19

Authors of guidelines may find it hard to provide consistent recommendations on the basis of existing, mainly pairwise, meta-analyses, that is, analyses assessing a specific modality versus a specific comparator in a distinct group of participants. 12 13 22 These meta-analyses have come under scrutiny for pooling heterogeneous treatments and heterogeneous comparisons, leading to ambiguous effect estimates. 23 Reviews also face the opposite problem, excluding exercise treatments such as yoga, tai chi, and qigong because grouping them with strength training might be inappropriate. 23 Overviews of reviews have tried to deal with this problem by combining pairwise meta-analyses on individual treatments. A recent such overview found no differences between exercise modalities. 13 Comparing effect sizes between different pairwise meta-analyses can also lead to confusion because of differences in analytical methods used between meta-analyses, such as the choice of control to use as the referent. Network meta-analyses are a better way to precisely quantify differences between interventions as they simultaneously model the direct and indirect comparisons between interventions. 24
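
As a rough, simplified illustration of the indirect-comparison logic that network meta-analysis generalises (a sketch for orientation only, not taken from this paper): if treatments B and C have each been compared with a common comparator A, an indirect estimate of B versus C can be formed from the two direct estimates.

```latex
% Simplified sketch of an indirect comparison via a common comparator A.
% d_{XY} denotes the pooled effect of treatment X relative to Y (e.g. Hedges' g).
\hat{d}_{BC}^{\text{indirect}} = \hat{d}_{BA} - \hat{d}_{CA},
\qquad
\operatorname{Var}\bigl(\hat{d}_{BC}^{\text{indirect}}\bigr)
  = \operatorname{Var}\bigl(\hat{d}_{BA}\bigr) + \operatorname{Var}\bigl(\hat{d}_{CA}\bigr)
```

A full network meta-analysis combines such indirect estimates with any available head-to-head (direct) evidence across the whole network of trials, which is what allows every pair of treatments to be compared on a common scale even when few trials compare them directly.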

Network meta-analyses have been used to compare different types of psychotherapy and pharmacotherapy for depression. 6 25 26 For exercise, they have shown that dose and modality influence outcomes for cognition, 16 back pain, 15 and blood pressure. 17 Two network meta-analyses explored the effects of exercise on depression: one among older adults 27 and the other for mental health conditions. 28 Because of the inclusion criteria and search strategies used, these reviews might have been under-powered to explore moderators such as dose and modality (κ=15 and κ=71, respectively). To resolve conflicting findings in existing reviews, we comprehensively searched randomised trials on exercise for depression to ensure our review was adequately powered to identify the optimal dose and modality of exercise. For example, a large overview of reviews found effects on depression to be proportional to intensity, with vigorous exercise appearing to be better, 13 but a later meta-analysis found no such effects. 22 We explored whether recommendations differ based on participants’ sex, age, and baseline level of depression.

Given the challenges presented by behaviour change in people with depression, 29 we also identified autonomy support or behaviour change techniques that might improve the effects of intervention. 30 Behaviour change techniques such as self-monitoring and action planning have been shown to influence the effects of physical activity interventions in adults (>18 years) 31 and older adults (>60 years) 32 with differing effectiveness of techniques in different populations. We therefore tested whether any intervention components from the behaviour change technique taxonomy were associated with higher or lower intervention effects. 30 Other meta-analyses found that physical activity interventions work better when they provide people with autonomy (eg, choices, invitational language). 33 Autonomy is not well captured in the taxonomy for behaviour change technique. We therefore tested whether effects were stronger in studies that provided more autonomy support to patients. Finally, to understand the mechanism of intervention effects, such as self-confidence, affect, and physical fitness, we collated all studies that conducted formal mediation analyses.

Methods

Our findings are presented according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses-Network Meta-analyses (PRISMA-NMA) guidelines (see supplementary file, section S0; all supplementary files, data, and code are also available at https://osf.io/nzw6u/ ). 34 We amended our analysis strategy after registering our review; these changes were to better align with new norms established by the Cochrane Comparing Multiple Interventions Methods Group. 35 These norms were introduced between the publication of our protocol and the preparation of this manuscript. The largest change was using the confidence in network meta-analysis (CINeMA) 35 online tool instead of the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) guidelines and adopting methods to facilitate assessments—for example, instead of using an omnibus test for all treatments, we assessed publication bias for each treatment compared with active controls. We also modelled acceptability (through dropout rate), which was not predefined but was adopted in response to a reviewer’s comment.

Eligibility criteria

To be eligible for inclusion, studies had to be randomised controlled trials that included exercise as a treatment for depression and included participants who met the criteria for major depressive disorder, either clinician diagnosed or identified through participant self-report as exceeding established clinical thresholds (eg, scored >13 on the Beck depression inventory-II). 36 Studies could meet these criteria when all the participants had depression or when the study reported depression outcomes for a subgroup of participants with depression at the start of the study.

We defined exercise as “planned, structured and repetitive bodily movement done to improve or maintain one or more components of physical fitness.” 37 Unlike recent reviews, 12 22 we included studies with more than one exercise arm and multifaceted interventions (eg, health and exercise counselling) as long as they contained a substantial exercise component. These trials could be included because network meta-analysis methods allow such interventions to be grouped into homogeneous nodes. Unlike the most recent Cochrane review, 12 we also included participants with physical comorbidities such as arthritis and participants with postpartum depression, because the Diagnostic and Statistical Manual of Mental Disorders, fifth edition, removed the postpartum onset specifier after that analysis was completed. 23 Studies were excluded if interventions were shorter than one week, if depression was not reported as an outcome, or if data were insufficient to calculate an effect size for each arm. Any comparison condition was included, allowing us to quantify the effects against established treatments (eg, selective serotonin reuptake inhibitors (SSRIs), cognitive behavioural therapy), active control conditions (usual care, placebo tablet, stretching, educational control, and social support), or waitlist control conditions. Published and unpublished studies were included, with no restrictions on language.

Information sources

We adapted the search strategy from the most recent Cochrane review, 12 adding keywords for yoga, tai chi, and qigong, as they met our definition of exercise. We conducted database searches, without filters or date limits, in the Cochrane Library via CENTRAL, SPORTDiscus via EBSCO, and Medline, Embase, and PsycINFO via Ovid. Searches were conducted on 17 December 2018 and 7 August 2020 and last updated on 3 June 2023 (see supplementary file section S1 for full search strategies). We also assessed the full texts of all studies included in two systematic reviews of exercise for depression. 12 22

Study selection and data collection

To select studies, we removed duplicate records in Covidence 38 and then screened each title and abstract independently and in duplicate. Conflicts were resolved through discussion or consultation with a third reviewer. The same methods were used for full text screening.

We used the Extraction 1.0 randomised controlled trial data extraction forms in Covidence. 38 Data were extracted independently and in duplicate, with conflicts resolved through discussion with a third reviewer.

For each study, we extracted a description of the interventions, including the frequency, intensity, type, and time of each exercise intervention. Using the Compendium of Physical Activities, 39 we calculated the energy expenditure dose of exercise for each arm as metabolic equivalent of task minutes per week (MET-min/week). Two authors coded each exercise arm against the behaviour change technique taxonomy version 1 30 for techniques explicitly described in the methods. They also rated the level of autonomy offered to participants on a scale from 1 (no choice) to 10 (full autonomy). We also extracted descriptions of the other arms within the randomised trials, including other treatment or control conditions; participants’ age, sex, comorbidities, and baseline severity of depressive symptoms; and each trial’s location and whether or not the trial was funded.
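To illustrate the dose calculation described above, a minimal sketch in R follows; the MET value, session length, and weekly frequency are hypothetical examples rather than values extracted in the review.

# Weekly energy expenditure dose = MET value of the activity x minutes per session x sessions per week
met_min_per_week <- function(met_value, minutes_per_session, sessions_per_week) {
  met_value * minutes_per_session * sessions_per_week
}

# Example: brisk walking (approximately 4.3 METs in the Compendium), 30 minutes per session, 3 sessions a week
met_min_per_week(4.3, 30, 3)  # 387 MET-min/week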

Risk of bias in individual studies

We used Cochrane’s risk of bias tool for randomised controlled trials. 40 Risk of bias was rated independently and in duplicate, with conflicts resolved through discussion with a third reviewer.

Summary measures and synthesis

For main and moderation analyses, we used bayesian arm based multilevel network meta-analysis models. 41 All network meta-analytical approaches allow users to assess the effects of treatments against a range of comparisons. The bayesian arm based models additionally allowed us to assess the influence of hypothesised moderators, such as intensity, dose, age, and sex. Many network meta-analyses use contrast based methods, comparing post-test scores between study arms. 41 Arm based meta-analyses instead describe the population-averaged absolute effect size for each treatment arm (ie, each arm’s change score). 41 As a result, our summary measure was the standardised mean change from baseline, calculated as a standardised mean difference with a small sample correction (Hedges’ g). In keeping with the norms of the included studies, effect sizes describe treatment effects on depression, such that larger negative numbers represent stronger effects on symptoms. Following National Institute for Health and Care Excellence guidelines, 42 we standardised change scores for different depression scales (eg, Beck depression inventory, Hamilton depression rating scale) using an internal reference standard for each scale (the average of pooled standard deviations at baseline for that scale across the studies in our meta-analysis). Because depression scores generally show regression to the mean, even in control conditions, we present effect sizes as improvements beyond active control conditions. This convention makes our results comparable with existing, contrast based meta-analyses.
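As a rough illustration of this summary measure, the sketch below computes a standardised mean change with a Hedges-type small sample correction; the input values and the exact correction formula are illustrative assumptions, not the authors' code.

# Standardised mean change from baseline, divided by a scale specific reference SD,
# with a small sample correction; negative values indicate improvement in depression.
hedges_g_change <- function(mean_pre, mean_post, sd_ref, n) {
  smc <- (mean_post - mean_pre) / sd_ref     # standardised mean change
  j <- 1 - 3 / (4 * (n - 1) - 1)             # small sample correction factor
  j * smc
}

# Example: a 6 point drop on the Beck depression inventory, reference SD of 8, n = 40
hedges_g_change(mean_pre = 25, mean_post = 19, sd_ref = 8, n = 40)  # about -0.74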

Active control conditions (usual care, placebo tablet, stretching, educational control, and social support) were grouped to increase power for moderation analyses, for parsimony in the network graph, and because they all showed similar arm based pooled effect sizes (Hedges’ g between −0.93 and −1.00 for all, with no statistically significant differences). We separated waitlist control from these active control conditions because it typically shows poorer effects in treatment for depression. 43

Bayesian meta-analyses were conducted in R 44 using the brms package. 45 We preregistered informative priors based on the distributional parameters of our meta-analytical model. 46 We nested effects within arms to manage dependency between multiple effect sizes from the same participants. 46 For example, if one study reported two self-reported measures of depression, or reported both self-report and clinician rated depression, we nested these effect sizes within the arm to account for both pieces of information while controlling for dependency between effects. 46 Finally, we compared absolute effect sizes against a standardised minimum clinically important difference, 0.5 standard deviations of the change score. 47 From our data, this corresponded to a large change in before and after scores (Hedges’ g −1.16), a moderate change compared with waitlist control (g −0.55), or a small benefit when compared with active controls (g −0.20). For credibility assessments comparing exercise modalities, we used the netmeta package 48 and CINeMA. 49 We also used netmeta to model acceptability, comparing the odds ratio for drop-out rate in each arm.
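A minimal sketch of such an arm based model in brms is shown below, assuming a data frame dat with columns g (standardised mean change), se_g (its standard error), treatment, study, and arm; the column names, prior, and sampler settings are illustrative rather than the authors' exact specification.

library(brms)

# Arm based model: one pooled mean per treatment, with effect sizes nested in arms within studies
fit <- brm(
  g | se(se_g) ~ 0 + treatment + (1 | study) + (1 | study:arm),
  data = dat,
  family = gaussian(),
  prior = prior(normal(0, 1), class = "b"),   # weakly informative prior on treatment means
  chains = 4, iter = 4000, cores = 4
)

summary(fit)   # posterior means and 95% credible intervals for each treatment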

Additional analyses

All prespecified moderation and sensitivity analyses were performed. We moderated for participant characteristics, including participants’ sex, age, baseline symptom severity, and presence or absence of comorbidities; duration of the intervention (weeks); weekly dose of the intervention; duration between completion of treatment and measurement, to test robustness to remission (in response to a reviewer’s suggestion); amount of autonomy provided in the exercise prescription; and presence of each behaviour change technique. As preregistered, we moderated for behaviour change techniques in three ways: through meta-regression, including all behaviour change techniques simultaneously for primary analysis; including one behaviour change technique at a time (using 99% credible intervals to somewhat control for multiple comparisons) in exploratory analyses; and through meta-analytical classification and regression trees (metaCART), which allowed for interactions between moderating variables (eg, if goal setting combined with feedback had synergistic effects). 50 We conducted sensitivity analyses for risk of bias, assessing whether studies with low versus unclear or high risk of bias on each domain showed statistically significant differences in effect sizes.
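The sketch below illustrates the one-moderator-at-a-time approach, assuming a hypothetical indicator column bct_goal_setting in the same data frame; 99% credible intervals are requested as described above.

# Exploratory meta-regression adding one behaviour change technique at a time
fit_mod <- brm(
  g | se(se_g) ~ 0 + treatment + bct_goal_setting + (1 | study) + (1 | study:arm),
  data = dat, family = gaussian(),
  chains = 4, iter = 4000, cores = 4
)
fixef(fit_mod, probs = c(0.005, 0.995))   # 99% credible intervals for the moderator effect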

Credibility assessment

To assess the credibility of each comparison against active control, we used CINeMA. 35 49 This online tool was designed by the Cochrane Comparing Multiple Interventions Methods Group as an adaptation of GRADE for network meta-analyses. 35 In line with recommended guidelines, for each comparison we made judgements for within study bias, reporting bias, indirectness, imprecision, heterogeneity, and incoherence. As with GRADE, we initially considered the evidence for each comparison to show high confidence and then downgraded on the basis of concerns in each domain, as follows:

Within study bias — Comparisons were downgraded when most of the studies providing direct evidence for the comparison were at unclear or high risk of bias.

Reporting bias — Publication bias was assessed in three ways. For each comparison with at least 10 studies, 51 we created funnel plots, including estimates of effect sizes after removing studies with statistically significant findings (ie, worst case estimates) 52 ; calculated an s value, representing how strong publication bias would need to be to nullify meta-analytical effects 52 ; and conducted a multilevel Egger’s regression test, which is indicative of small study bias. Because these tests are not recommended for comparisons with fewer than 10 studies, 51 those comparisons were rated as showing “some concerns.”
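A simplified, single level sketch of two of these checks is shown below, assuming vectors yi (effect sizes) and vi (sampling variances) for the studies informing one comparison; the multilevel version used in the review and the s value calculation are not reproduced here.

library(metafor)

res <- rma(yi = yi, vi = vi, method = "REML")   # random effects model for one comparison
funnel(res)                                     # funnel plot to inspect asymmetry
regtest(res, model = "rma")                     # Egger-type regression test for small study effects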

Indirectness — Our primary population of interest was adults with major depression. Studies were considered to be indirect if they focused on one sex only (>90% male or female), participants with comorbidities (eg, heart disease), adolescents and young adults (14-20 years), or older adults (>60 years). We flagged these studies as showing some concerns if one of these factors was present, and as “major concerns” if two of these factors were present. Evidence from comparisons was classified as some concerns or major concerns using majority rating for studies directly informing the comparison.

Imprecision — As per CINeMA, we used the clinically important difference of Hedges’ g=0.2 to define a zone of equivalence, within which differences were not considered clinically significant (−0.2<g<0.2). Comparisons were flagged as some concerns for imprecision if the bounds of the 95% credible interval extended into that zone, and as major concerns if the bounds extended beyond the other side of the zone of equivalence (such that effects could be harmful).
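Read literally, this rule could be sketched as below, assuming effects are coded so that negative values favour the treatment; this is an illustration of the decision logic rather than CINeMA's implementation.

rate_imprecision <- function(lower, upper, zone = 0.2) {
  # lower and upper are the bounds of the 95% credible interval (negative favours treatment)
  if (upper <= -zone) return("no concerns: whole interval indicates a clinically important benefit")
  if (upper <= zone) return("some concerns: interval extends into the zone of equivalence")
  "major concerns: interval extends beyond the zone, so the effect could be harmful"
}

rate_imprecision(-0.80, -0.46)   # no concerns
rate_imprecision(-0.65, -0.05)   # some concerns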

Heterogeneity — Prediction intervals account for heterogeneity differently from credible intervals. 35 As a result, CINeMA accounts for heterogeneity by assessing whether the prediction intervals and the credible intervals lead to different conclusions about clinical significance (using the same zone of equivalence as for imprecision). Comparisons were flagged as some concerns if the prediction interval crossed into, or out of, the zone of equivalence once (eg, from helpful to no meaningful effect), and as major concerns if the prediction interval crossed the zone twice (eg, from helpful to harmful).

Incoherence — Incoherence assesses whether the network meta-analysis provides similar estimates when using direct evidence (eg, randomised controlled trials of strength training versus SSRIs) compared with indirect evidence (eg, randomised controlled trials in which either strength training or SSRIs are compared with waitlist control). Incoherence provides some evidence that the network may violate the assumption of transitivity: that the only systematic difference between arms is the treatment, not other confounders. We assessed incoherence using two methods: firstly, a global design-by-treatment interaction test to assess incoherence across the whole network, 35 49 and, secondly, separating indirect from direct evidence (SIDE method) for each comparison through netsplitting to see whether differences between those effect estimates were statistically significant. We flagged comparisons as some concerns if either no direct comparisons were available or direct and indirect evidence gave different conclusions about clinical significance (eg, from helpful to no meaningful effect, as for imprecision and heterogeneity). We classified comparisons as major concerns if the direct and indirect evidence changed the sign of the effect or changed both limits of the credible interval. 35 49

Patient and public involvement

We discussed the aims and design of this study with members of the public, including those who had experienced depression. Several of our authors have experienced major depressive episodes, but beyond that we did not include patients in the conduct of this review.

Study selection

The PRISMA flow diagram outlines the study selection process ( fig 1 ). We used two previous reviews to identify potentially eligible studies for inclusion. 12 22 Database searches identified 18 658 possible studies. After 5505 duplicates had been removed, two reviewers independently screened 13 115 titles and abstracts. After screening, two reviewers independently reviewed 1738 full text articles. Supplementary file section S2 shows the consensus reasons for exclusion. A total of 218 unique studies described in 246 reports were included, totalling 495 arms and 14 170 participants. Supplementary file section S3 lists the references and characteristics of the included studies.

Fig 1

Flow of studies through review

Network geometry

As preregistered, we removed nodes with fewer than 100 participants. Using this filter, most interventions had comparisons with at least four other nodes in the network geometry ( fig 2 ). The result of the global design-by-treatment interaction test was not statistically significant, supporting the assumption of transitivity (χ²=94.92, df=75, P=0.06). When netsplitting was applied to all possible combinations in the network, we found statistically significant incoherence between direct and indirect evidence for two of the 120 comparisons (SSRI v waitlist control; cognitive behavioural therapy v tai chi or qigong). Overall, we found little statistical evidence that the model violated the assumption of transitivity. Qualitative differences were, however, found for participant characteristics between different arms (see supplementary file, section S4). For example, some interventions appeared to be prescribed more often to people with severe depression (eg, 7/16 studies using SSRIs) compared with other interventions (eg, 1/15 studies using aerobic exercise combined with therapy). Similarly, some interventions appeared more likely to be prescribed to older adults (eg, mean age: tai chi=59 v dance=31) or women (eg, per cent female: dance=88% v cycling=53%). Given that plausible mechanisms exist for these systematic differences (eg, the popularity of tai chi among older adults), 53 there are reasons to believe that allocation to treatment arms would be less than perfectly random. We factored these biases into our certainty estimates through indirectness ratings.
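For readers wanting to reproduce these checks, a minimal netmeta sketch follows, assuming a pairwise data frame pw with columns TE, seTE, treat1, treat2, and studlab; the column names and settings are illustrative rather than the authors' exact code.

library(netmeta)

nm <- netmeta(TE, seTE, treat1, treat2, studlab,
              data = pw, sm = "SMD", reference.group = "Active control")

decomp.design(nm)   # global design-by-treatment interaction test for incoherence
netsplit(nm)        # compares direct and indirect evidence for each comparison (SIDE)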

Fig 2

Network geometry indicating number of participants in each arm (size of points) and number of comparisons between arms (thickness of lines). SSRI=selective serotonin reuptake inhibitor

Risk of bias within studies

Supplementary file section S5 provides the risk of bias ratings for each study. Few studies explicitly blinded participants and staff ( fig 3 ). As a result, the overall risk of bias for most studies was unclear or high, and effect sizes could include expectancy effects, among other biases. Sensitivity analyses did not suggest that effect sizes were influenced by any risk of bias criterion, although the credible intervals for these comparisons were wide (see supplementary file, section S6). Nevertheless, certainty ratings for all treatment arms were downgraded owing to the high risk of bias in the studies informing each comparison.

Fig 3

Risk of bias summary plot showing percentage of included studies judged to be low, unclear, or high risk across Cochrane criteria for randomised trials

Synthesis of results

Supplementary file section S7 presents a forest plot of Hedges’ g values for each study. Figure 4 shows the predicted effects of each treatment compared with active controls. Compared with active controls, large reductions in depression were found for dance (n=107, κ=5, Hedges’ g −0.96, 95% credible interval −1.36 to −0.56) and moderate reductions for walking or jogging (n=1210, κ=51, g −0.63, −0.80 to −0.46), yoga (n=1047, κ=33, g=−0.55, −0.73 to −0.36), strength training (n=643, κ=22, g=−0.49, −0.69 to −0.29), mixed aerobic exercises (n=1286, κ=51, g=−0.43, −0.61 to −0.25), and tai chi or qigong (n=343, κ=12, g=−0.42, −0.65 to −0.21). Moderate, clinically meaningful effects were also present when exercise was combined with SSRIs (n=268, κ=11, g=−0.55, −0.86 to −0.23) or aerobic exercise was combined with psychotherapy (n=404, κ=15, g=−0.54, −0.76 to −0.32). All these treatments were significantly stronger than the standardised minimum clinically important difference compared with active control (g=−0.20), equating to an absolute g value of −1.16. Dance, exercise combined with SSRIs, and walking or jogging were the treatments most likely to perform best when modelling the surface under the cumulative ranking curve ( fig 4 ). For acceptability, the odds of participants dropping out of the study were lower for strength training (n=247, direct evidence κ=6, odds ratio 0.55, 95% credible interval 0.31 to 0.99) and yoga (n=264, κ=5, 0.57, 0.35 to 0.94) than for active control. The rate of dropouts was not significantly different from active control in any other arms (see supplementary file, section S8).

Fig 4

Predicted effects of different exercise modalities on major depression compared with active controls (eg, usual care), with 95% credible intervals. The estimate of effects for the active control condition was a before and after change of Hedges’ g of −0.95 (95% credible interval −1.10 to −0.79), n=3554, κ =113. Colour represents SUCRA from most likely to be helpful (dark purple) to least likely to be helpful (light purple). SSRI=selective serotonin reuptake inhibitor; SUCRA=surface under the cumulative ranking curve

Consistent with other meta-analyses, effects were moderate for cognitive behavioural therapy alone (n=712, κ=20, g=−0.55, −0.75 to −0.37) and small for SSRIs (n=432, κ=16, g=−0.26, −0.50 to −0.01) compared with active controls ( fig 4 ). These estimates are comparable to those of reviews that focused directly on psychotherapy (g=−0.67, −0.79 to −0.56) 7 or pharmacotherapy (g=−0.30, −0.34 to −0.26). 25 However, our review was not designed to find all studies of these treatments, so these estimates should not supersede those from systematic reviews focused directly on them.

Despite the large number of studies in the network, confidence in the effects was low ( fig 5 ). This was largely due to the high within study bias described in the risk of bias summary plot. Reporting bias was also difficult to assess robustly because direct comparisons with active control were often provided by fewer than 10 studies. Many studies focused on one sex only, older adults, or people with comorbidities, so most arms had some concerns about indirectness. Credible intervals were seldom wide enough to change decision making, so concerns about imprecision were few. Heterogeneity did plausibly change some conclusions about clinical significance. Few studies showed problematic incoherence, meaning direct and indirect evidence usually agreed. Overall, walking or jogging was rated with low confidence, and other modalities with very low confidence.

Fig 5

Summary table for credibility assessment using confidence in network meta-analysis (CINeMA). SSRI=selective serotonin reuptake inhibitor

Moderation by participant characteristics

The optimal modality appeared to be moderated by age and sex. Compared with models that only included exercise modality (R²=0.65), R² was higher for models that included interactions with sex (R²=0.71) and age (R²=0.69). R² showed no substantial increase for models including baseline depression (R²=0.67) or comorbidities (R²=0.66; see supplementary file, section S9).

Effects appeared larger for women than men for strength training and cycling ( fig 6 ). Effects appeared to be larger for men than women when prescribing yoga, tai chi, and aerobic exercise alongside psychotherapy. Yoga and aerobic exercise alongside psychotherapy appeared more effective for older participants than younger people ( fig 7 ). Strength training appeared more effective when prescribed to younger participants than older participants. Some estimates were associated with substantial uncertainty because some modalities were not well studied in some groups (eg, tai chi for younger adults), and mean age of the sample was only available for 71% of the studies.

Fig 6

Effects of interventions versus active control on depression (lower is better) by sex. Shading represents 95% credible intervals

Fig 7

Effects of interventions versus active control on depression (lower is better) by age. Shading represents 95% credible intervals

Moderation by intervention and design characteristics

Across modalities, a clear dose-response curve was observed for the intensity of exercise prescribed ( fig 8 ). Although light physical activity (eg, walking, hatha yoga) still provided clinically meaningful effects (g=−0.58, −0.82 to −0.33), expected effects were stronger for vigorous exercise (eg, running, interval training; g=−0.74, −1.10 to −0.38). This finding did not appear to be due to increased weekly energy expenditure: credible intervals were wide, meaning the dose-response curve for MET-min prescribed per week was unclear (see supplementary file, section S10). Weak evidence suggested that shorter interventions (eg, 10 weeks: g=−0.53, −0.71 to −0.35) worked somewhat better than longer ones (eg, 30 weeks: g=−0.37, −0.79 to 0.03), with wide credible intervals again indicating high uncertainty (see supplementary file, section S11). We also moderated for the lag between the end of treatment and the measurement of the outcome. We found no indication that participants were likely to relapse within the measurement period (see supplementary file, section S12); effects remained steady whether measured directly after the intervention (g=−0.59, −0.80 to −0.39) or up to six months later (g=−0.63, −0.87 to −0.40).
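A non-linear dose-response term of this kind could be sketched in brms as below, assuming a hypothetical intensity_met column holding the average METs of the prescribed exercise; the smooth term and settings are illustrative, not the authors' exact model.

# Dose-response for exercise intensity modelled with a smooth term
fit_dose <- brm(
  g | se(se_g) ~ s(intensity_met) + (1 | study) + (1 | study:arm),
  data = dat, family = gaussian(),
  chains = 4, iter = 4000, cores = 4
)
conditional_effects(fit_dose, effects = "intensity_met")   # plot the fitted dose-response curve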

Fig 8

Dose-response curve for intensity (METs) across exercise modalities compared with active control. METs=metabolic equivalents of task

Supplementary file section S13 provides coding for the behaviour change techniques and autonomy for each exercise arm. None of the behaviour change techniques significantly moderated overall effects. Contrary to expectations, studies describing a level of participant autonomy (ie, choice over frequency, intensity, type, or time) tended to show weaker effects (g=−0.28, −0.78 to 0.23) than those that did not (g=−0.75, −1.17 to −0.33; see supplementary file, section S14). This effect was consistent whether or not we included studies that used physical activity counselling (usually high autonomy).

Use of group exercise appeared to moderate the effects: although the overall effects were similar for individual (g=−1.10, −1.57 to −0.64) and group exercise (g=−1.16, −1.61 to −0.73), some interventions were better delivered in groups (yoga) and some were better delivered individually (strength training, mixed aerobic exercise; see supplementary file, section S15).

As preregistered, we tested whether study funding moderated effects. Models that included whether a study was funded explained more variance (R²=0.70) than models that included treatment alone (R²=0.65). Funded studies showed stronger effects (g=−1.01, −1.19 to −0.82) than unfunded studies (g=−0.77, −1.09 to −0.46). We also moderated for the type of measure (self-report v clinician report). This did not explain a substantial amount of variance in the outcome (R²=0.66).

Sensitivity analyses

Evidence of publication bias was found for the overall estimates of exercise on depression compared with active controls, although not enough to nullify effects. The multilevel Egger’s test was statistically significant (F(1,98)=23.93, P<0.001). Funnel plots showed asymmetry, but pooled effects remained statistically significant when including only studies with non-significant results (see supplementary file, section S16). No amount of publication bias would be sufficient to shrink effects to zero (s value=not possible). To reduce effects below clinical significance thresholds, studies with statistically significant results would need to be reported 58 times more frequently than studies with non-significant results.

Qualitative synthesis of mediation effects

Only a few of the studies used explicit mediation analyses to test hypothesised mechanisms of action. 54 55 56 57 58 59 One study found that both aerobic exercise and yoga led to decreased depression because participants ruminated less. 54 The same study found that the effects of aerobic exercise (but not yoga) were mediated by increased acceptance. 54 “Perceived hassles” and awareness were not statistically significant mediators. 54 Another study found that the effects of yoga were mediated by increased self-compassion, but not by rumination, self-criticism, tolerance of uncertainty, body awareness, body trust, mindfulness, or attentional biases. 55 One study found that the effects of an aerobic exercise intervention were not mediated by long term physical activity, but instead by exercise specific affect regulation (eg, self-control for exercise). 57 Another study found that neither exercise self-efficacy nor depression coping self-efficacy mediated the effects of aerobic exercise. 56 Effects of aerobic exercise were not mediated by the N2 amplitude from electroencephalography, hypothesised as a neural correlate of cognitive control deficits. 58 Increased physical activity did not appear to mediate the effects of physical activity counselling on depression. 59 It is difficult to draw strong conclusions about mechanisms on the basis of this small number of low powered studies.

Summary of evidence

In this systematic review and meta-analysis of randomised controlled trials, exercise showed moderate effects on depression compared with active controls, whether delivered alone or in combination with other established treatments such as cognitive behaviour therapy. In isolation, the most effective exercise modalities were walking or jogging, yoga, strength training, and dancing. Although walking or jogging was effective for both men and women, strength training was more effective for women, and yoga or qigong was more effective for men. Yoga was somewhat more effective among older adults, and strength training was more effective among younger people. The benefits of exercise tended to be proportional to the intensity prescribed, with vigorous activity being better. Benefits were similar across different weekly doses, comorbidities, and baseline levels of depression. Although confidence in many of the results was low, treatment guidelines may be overly conservative in conditionally recommending exercise only as a complementary or alternative treatment for patients in whom psychotherapy or pharmacotherapy is either ineffective or unacceptable. 60 Instead, guidelines for depression ought to include prescriptions for exercise, consider adapting the modality to participants’ characteristics, and recommend more vigorous intensity exercise.

Our review did not uncover clear causal mechanisms, but the trends in the data are useful for generating hypotheses. It is unlikely that any single causal mechanism explains all the findings in the review. Instead, we hypothesise that a combination of social interaction, 61 mindfulness or experiential acceptance, 62 increased self-efficacy, 33 immersion in green spaces, 63 neurobiological mechanisms, 64 and acute positive affect 65 combine to generate outcomes. Meta-analyses have found each of these factors to be associated with decreases in depressive symptoms, but no single treatment covers all mechanisms. Some may more directly promote mindfulness (eg, yoga), be more social (eg, group exercise), be conducted in green spaces (eg, walking), provide a more positive affect (eg, the “runner’s high”), or be more conducive to acute adaptations that may increase self-efficacy (eg, strength training). 66 Exercise modalities such as running may satisfy many of these mechanisms, but they are unlikely to directly promote the mindful self-awareness provided by yoga and qigong; those forms of exercise are often practised in groups with explicit mindfulness but seldom have the fast, objective feedback loops that improve self-efficacy. Adequately powered studies testing multiple mediators could shift the focus from whether exercise helps depression to why it helps. We argue that understanding these mechanisms of action is important for personalising prescriptions and better understanding effective treatments.

Our review included more studies than many existing reviews on exercise for depression. 13 22 27 28 As a result, we were able to combine the strengths of various approaches to exercise and to draw more nuanced and precise conclusions. For example, even taking conservative estimates (ie, the least favourable end of the credible interval), practitioners can expect patients to experience clinically significant effects from walking, running, yoga, qigong, strength training, and mixed aerobic exercise. Because we simultaneously assessed more than 200 studies, credible intervals were narrower than those in most existing meta-analyses. 13 We were also able to explore non-linear relationships between outcomes and moderators, such as frequency, intensity, and time. These analyses supported some existing findings—for example, our study and the study by Heissel et al 22 both found that shorter interventions had stronger effects, at least for six months; our study and the study by Singh et al 13 both found that effects were stronger with vigorous intensity exercise compared with light and moderate exercise. However, most existing reviews found various treatment modalities to be equally effective. 13 27 In our review, some types of exercise had stronger effect sizes than others. We attribute this to the study level data available in a network meta-analysis compared with an overview of reviews 24 and to the higher power compared with meta-analyses that included fewer studies. 22 28 Overviews of reviews can more easily cover a wider range of participants, interventions, and outcomes, but they also risk double counting randomised trials that are included in separate meta-analyses. They often include heterogeneous studies without having as much control over moderation analyses (eg, Singh et al included studies covering both prevention and treatment 13 ). Some of those reviews grouped interventions such as yoga with heterogeneous interventions such as stretching and qigong. 13 This practice of combining different interventions makes it harder to interpret meta-analytical estimates. We used methods that enabled us to separately analyse the effects of these treatment modalities. In so doing, we found that these interventions do have different effects, with yoga showing strong effects and stretching being better described as an active control condition. Network meta-analyses revealed the same phenomenon with psychotherapy: researchers once concluded there was a dodo bird verdict, whereby “everybody has won, and all must have prizes,” 67 until network meta-analyses showed some interventions were robustly more effective than others. 6 26

Predictors of acceptability and outcomes

We found evidence to suggest good acceptability of yoga and strength training, although study dropout is an imperfect proxy for adherence. Participants may complete the study without doing any exercise, or may continue exercising but drop out of the study for other reasons. Nevertheless, these are useful data when considering adherence.

Behaviour change techniques, which are designed to increase adherence, did not meaningfully moderate the effect sizes from exercise. This may be due to several factors. It may be that the modality explains most of the variance between effects, such that behaviour change techniques (eg, presence or absence of feedback) did not provide a meaningful contribution. Many forms of exercise potentially contain therapeutic benefits beyond just energy expenditure. These characteristics of a modality may be more influential than coexisting behaviour change techniques. Alternatively, researchers may have used behaviour change techniques such as feedback or goal setting without explicitly reporting them in the study methods. Given the inherent challenges of behaviour change among people with depression, 29 and the difficulty in forecasting which strategies are likely to be effective, 68 we see the identification of effective techniques as important.

We did find that autonomy, as described in the methods of included studies, predicted effects, but in the opposite direction to our hypothesis: more autonomy was associated with weaker effects. Physical activity counselling, which usually provides a great deal of patient autonomy, showed among the smallest effect sizes in our meta-analysis. Higher autonomy ratings were associated with weaker outcomes regardless of whether physical activity counselling was included in the model. One explanation for these data is that people with depression benefit from the clear direction and accountability of a standardised prescription. When given more freedom, the low self-efficacy that is symptomatic of depression may stop patients from setting an appropriate level of challenge (eg, they may be less likely to choose vigorous exercise). Alternatively, participants may already have exercised their autonomy by self-selecting into trials of exercise modalities they enjoyed, or those that fit their social circumstances; after choosing something aligned with their values, additional autonomy within the trial may not have been helpful. Either way, these data should be interpreted with caution. Our judgement of the autonomy provided in the methods may not reflect how much autonomy support patients actually felt. A patient’s perceived autonomy is likely determined by a range of factors not described in the methods (eg, the social environment created by those delivering the programme, or their social identity), so studies that rely on patient reports of the motivational climate are likely to be more reliable. 33 Our findings reiterate the importance of considering these patient reports in future research on exercise for depression.

Our findings suggest that practitioners could advocate exercise for most patients. Those patients may benefit from guidance on intensity (ie, vigorous) and on types of exercise that appear to work well (eg, walking, running, mixed aerobic exercise, strength training, yoga, tai chi, qigong) and to be well tolerated (eg, strength training and yoga). If social determinants permit, 66 engaging in group exercise or structured programmes could provide support and guidance to achieve better outcomes. Health services may consider offering these programmes as an alternative or adjuvant treatment for major depression. Specifically, although confidence in the evidence for exercise is weaker than for cognitive behavioural therapy, the effect sizes seem comparable, so exercise may be an alternative for patients who prefer not to engage in psychotherapy. Previous reviews of people with mild to moderate depression have found similar effects for exercise, SSRIs, or the two combined. 13 14 In contrast, we found some forms of exercise to have stronger effects than SSRIs alone. This finding is likely related to the greater power of our review (n=14 170) compared with previous reviews (eg, n=2551), 14 and to our ability to better account for heterogeneity in exercise prescriptions. Exercise may therefore be considered a viable alternative to drug treatment. We also found evidence that exercise increases the effects of SSRIs, so offering exercise may act as an adjuvant for those already taking drugs. We agree with consensus statements that professionals should still account for patients’ values, preferences, and constraints, ensuring shared decision making around what best suits the patient. 66 Our review provides data to help inform that decision.

Strengths, limitations, and future directions

Based on our findings, dance appears to be a promising treatment for depression, with large effects compared with other interventions in our review. However, the small number of studies, the low number of participants, and biases in the study designs prevent us from recommending dance more strongly. Given that most research on this intervention has involved young women (88% female participants, mean age 31 years), future research should assess the generalisability of the effects to other populations, using robust experimental designs.

The studies we found may be subject to a range of experimental biases. In particular, researchers seldom blinded participants or the staff delivering the intervention to the study’s hypotheses. Blinding for exercise interventions may be harder than for drugs 23 ; however, future studies could attempt to blind participants and staff to the study’s hypotheses to avoid expectancy effects. 69 Some of our ratings are for studies published before the proliferation of reporting checklists, so our risk of bias judgements may be too conservative 23 ; for example, before CONSORT, few authors explicitly described how they generated a random sequence. 23 Similarly, we planned to use the Cochrane risk of bias (RoB) 1 tool 40 so we could use the most recent Cochrane review of exercise and depression 12 to calibrate our raters, and because RoB 2 had not yet been published. 70 Although assessments of bias between the two tools are generally comparable, 71 the RoB 1 tool can be more conservative when assessing open label studies with subjective outcomes (eg, unblinded studies with self-reported measures of depression). 71 As a result, future reviews should consider using the latest risk of bias tool, which may lead to different assessments of bias in included studies.

Most of the main findings in this review appear robust to risks from publication bias. Specifically, pooled effect sizes decreased when accounting for risk of publication bias, but no degree of publication bias could nullify effects. We did not exclude grey literature, but our search strategy was not designed to systematically search grey literature or trial registries. Doing so can detect additional eligible studies 72 and reveal the numbers of completed studies that remain unpublished. 73 Future reviews should consider more systematic searches for this kind of literature to better quantify and mitigate risk of publication bias.

Similarly, our review was able to integrate evidence that directly compared exercise with other treatment modalities such as SSRIs or psychotherapy, while also informing estimates using indirect evidence (eg, comparing the relative effects of strength training and SSRIs when each was tested against a waitlist control). Our review did not, however, include all possible sources of indirect evidence. Network meta-analyses exist that focus directly on psychotherapy, 7 pharmacotherapy, 25 and the two combined, 6 for treating depression. Those reviews include more than 500 studies comparing psychological or drug interventions with controls. Harmonising the findings of those reviews with ours would provide stronger data on indirect effects.

Our review found some interesting moderators by age and sex, but these were at the study level rather than the individual level—that is, rather than being able to determine whether women engaging in a strength intervention benefit more than men, we could only conclude that studies with more women showed larger effects than studies with fewer women. These studies may have been tailored towards women, so effects may be subject to confounding, as both sex and intervention may have changed. The same applies to age, where studies on older adults were likely adapted specifically to this age group. These between study differences may explain the heterogeneity in the effects of interventions, and confounding means our moderators for age and sex should be interpreted cautiously. Future reviews should consider individual patient data meta-analyses to allow more detailed assessments of participant level moderators.

Finally, for many modalities, the evidence is derived from small trials (eg, the median number of participants in walking or jogging arms was 17). In addition to reducing risks of bias, primary research may benefit from deconstruction designs or from larger, head-to-head comparisons of exercise modalities to better identify what works best for each patient.

Clinical and policy implications

Our findings support the inclusion of exercise, particularly vigorous intensity exercise, in clinical practice guidelines for depression. Doing so may help bridge the gap in treatment coverage by increasing the range of first line options for patients and health systems. 9 Globally, efforts have been made to reduce the stigma associated with seeking treatment for depression. 74 Exercise may support this effort by providing patients with treatment options that carry less stigma. In low resource or funding constrained settings, group exercise interventions may provide relatively low cost alternatives for patients with depression and for health systems. When possible, ideal treatment may involve individualised care within a multidisciplinary team, where exercise professionals could take responsibility for ensuring the prescription is safe, personalised, challenging, and supported. In addition, those delivering psychotherapy may want to direct some time towards tackling cognitive and behavioural barriers to exercise. Exercise professionals might need to be trained in the management of depression (eg, managing risk) and to be mindful of the scope of their practice while providing support to deal with this major cause of disability.

Conclusions

Depression imposes a considerable global burden. Many exercise modalities appear to be effective treatments, particularly walking or jogging, strength training, and yoga, but confidence in many of the findings was low. We found preliminary data that may help practitioners tailor interventions to individuals (eg, yoga for older men, strength training for younger women). The World Health Organization recommends physical activity for everyone, including those with chronic conditions and disabilities, 75 but not everyone can access treatment easily. Many patients may have physical, psychological, or social barriers to participation. Still, some interventions with few costs, side effects, or pragmatic barriers, such as walking and jogging, are effective across people with different personal characteristics, severities of depression, and comorbidities. Those who are able may want to choose more intense exercise in a structured environment to further decrease depression symptoms. Health systems may want to provide these treatments as alternatives or adjuvants to other established interventions (cognitive behaviour therapy, SSRIs), while also attenuating the risks to physical health associated with depression. 3 Therefore, effective exercise modalities could be considered alongside those interventions as core treatments for depression.

What is already known on this topic

Depression is a leading cause of disability, and exercise is often recommended alongside first line treatments such as pharmacotherapy and psychotherapy

Treatment guidelines and previous reviews disagree on how to prescribe exercise to best treat depression

What this study adds

Various exercise modalities are effective (walking, jogging, mixed aerobic exercise, strength training, yoga, tai chi, qigong) and well tolerated (especially strength training and yoga)

Effects appeared proportional to the intensity of exercise prescribed and were stronger for group exercise and interventions with clear prescriptions

Preliminary evidence suggests interactions between types of exercise and patients’ personal characteristics

Ethics statements

Ethical approval.

Not required.

Acknowledgments

We thank Lachlan McKee for his assistance with data extraction. We also thank Juliette Grosvenor and another librarian (anonymous) for their review of our search strategy.

Contributors: MN led the project, drafted the manuscript, and is the guarantor. MN, TS, PT, MM, BdPC, PP, SB, and CL drafted the initial study protocol. MN, TS, PT, BdPC, DvdH, JS, MM, RP, LP, RV, HA, and BV conducted screening, extraction, and risk of bias assessment. MN, JS, and JM coded methods for behaviour change techniques. MN and DGG conducted statistical analyses. PP, SB, and CL provided supervision and mentorship. All authors reviewed and approved the final manuscript. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

Funding: None received.

Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/disclosure-of-interest/ and declare: no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.

Data sharing: Data and code for reproducing analyses are available on the Open Science Framework ( https://osf.io/nzw6u/ ).

The lead author (MN) affirms that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned (and, if relevant, registered) have been explained.

Dissemination to participants and related patient and public communities: We plan to disseminate the findings of this study to lay audiences through mainstream and social media.

Provenance and peer review: Not commissioned; externally peer reviewed.

This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/ .

  • ↵ World Health Organization. Depression. 2020 [cited 2020 Mar 12]. https://www.who.int/news-room/fact-sheets/detail/depression
  • ↵ Birkjær M, Kaats M, Rubio A. Wellbeing adjusted life years: A universal metric to quantify the happiness return on investment. Happiness Research Institute; 2020. https://www.happinessresearchinstitute.com/waly-report
  • Jacobson NC ,
  • Pinquart M ,
  • Duberstein PR
  • Cuijpers P ,
  • Karyotaki E ,
  • Vinkers CH ,
  • Cipriani A ,
  • Furukawa TA
  • Strawbridge R ,
  • Marwood L ,
  • Santomauro D ,
  • Collins PY ,
  • Generaal E ,
  • Lawlor DA ,
  • Cooney GM ,
  • Recchia F ,
  • Miller CT ,
  • Mundell NL ,
  • Gallardo-Gómez D ,
  • Del Pozo-Cruz J ,
  • Álvarez-Barbosa F ,
  • Alfonso-Rosa RM ,
  • Del Pozo Cruz B
  • Salcher-Konrad M ,
  • ↵ National Collaborating Centre for Mental Health (UK). Depression: The Treatment and Management of Depression in Adults (Updated Edition). Leicester (UK): British Psychological Society; https://www.ncbi.nlm.nih.gov/pubmed/22132433
  • Bassett D ,
  • ↵ American Psychiatric Association. Practice Guideline for the Treatment of Patients with Major Depressive Disorder. Third Edition. Washington, DC: American Psychiatric Association; 2010. 87 p. https://psychiatryonline.org/pb/assets/raw/sitewide/practice_guidelines/guidelines/mdd-1410197717630.pdf
  • ↵ NICE. Depression in adults: treatment and management. [cited 2023 Mar 13]. National Institute for Health and Care Excellence; 2022 https://www.nice.org.uk/guidance/ng222/resources
  • Heissel A ,
  • Brokmeier LL ,
  • Ekkekakis P
  • ↵ Chaimani A, Caldwell DM, Li T, Higgins JPT, Salanti G. Undertaking network meta-analyses. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, et al., editors. Cochrane Handbook for Systematic Reviews of Interventions. Cochrane; 2022. www.training.cochrane.org/handbook
  • Furukawa TA ,
  • Salanti G ,
  • Miller KJ ,
  • Gonçalves-Bradley DC ,
  • Areerob P ,
  • Hennessy D ,
  • Mesagno C ,
  • Glowacki K ,
  • Duncan MJ ,
  • Gainforth H ,
  • Richardson M ,
  • Johnston M ,
  • Abraham C ,
  • Whittington C ,
  • McAteer J ,
  • French DP ,
  • Olander EK ,
  • Chisholm A ,
  • Mc Sharry J
  • Ntoumanis N ,
  • Prestwich A ,
  • Caldwell DM ,
  • Nikolakopoulou A ,
  • Higgins JPT ,
  • Papakonstantinou T ,
  • Caspersen CJ ,
  • Powell KE ,
  • Christenson GM
  • ↵ Veritas Health Innovation. Covidence systematic review software. Melbourne, Australia; 2023. www.covidence.org
  • Ainsworth BE ,
  • Haskell WL ,
  • Herrmann SD ,
  • Altman DG ,
  • Gøtzsche PC ,
  • Cochrane Bias Methods Group ,
  • Cochrane Statistical Methods Group
  • Hodges JS ,
  • ↵ Dias S, Welton NJ, Sutton AJ, Ades AE. NICE DSU technical support document 2: a generalised linear modelling framework for pairwise and network meta-analysis of randomised controlled trials. In: National Institute for Health and Care Excellence (NICE), editor. NICE Decision Support Unit Technical Support Documents. London: Citeseer; 2011. https://www.ncbi.nlm.nih.gov/books/NBK310366/
  • Faltinsen E ,
  • Todorovac A ,
  • Staxen Bruun L ,
  • ↵ R Core Team. R: A language and environment for statistical computing. Vienna, Austria: R Foundation for Statistical Computing; 2022. https://www.R-project.org/
  • Hengartner MP ,
  • Balduzzi S ,
  • Dusseldorp E ,
  • Sterne JAC ,
  • Sutton AJ ,
  • Ioannidis JPA ,
  • Mathur MB ,
  • VanderWeele TJ
  • Leung LYL ,
  • La Rocque CL ,
  • Mazurka R ,
  • Stuckless TJR ,
  • Harkness KL
  • Vollbehr NK ,
  • Hoenders HJR ,
  • Bartels-Velthuis AA ,
  • Zeibig JM ,
  • Seiffer B ,
  • Ehmann PJ ,
  • Alderman BL
  • Bombardier CH ,
  • Gibbons LE ,
  • ↵ American Psychological Association. Clinical practice guideline for the treatment of depression across three age cohorts. American Psychological Association; 2019. https://www.apa.org/depression-guideline/
  • van Straten A ,
  • Reynolds CF 3rd .
  • Johannsen M ,
  • Nissen ER ,
  • Lundorff M ,
  • Coventry PA ,
  • Schuch FB ,
  • Deslandes AC ,
  • Gosmann NP ,
  • Fleck MP de A
  • Saunders DH ,
  • Phillips SM
  • Teychenne M ,
  • Hunsley J ,
  • Di Giulio G
  • Milkman KL ,
  • Hecksteden A ,
  • Savović J ,
  • ↵ Richter B, Hemmingsen B. Comparison of the Cochrane risk of bias tool 1 (RoB 1) with the updated Cochrane risk of bias tool 2 (RoB 2). Cochrane; 2021. Report No.: 1. https://community.cochrane.org/sites/default/files/uploads/inline-files/RoB1_2_project_220529_BR%20KK%20formatted.pdf
  • Chandler J ,
  • Lefebvre C ,
  • Glanville J ,
  • Briscoe S ,
  • Coronado-Montoya S ,
  • Kwakkenbos L ,
  • Steele RJ ,
  • Turner EH ,
  • Angermeyer MC ,
  • van der Auwera S ,
  • Schomerus G
  • Al-Ansari SS ,

Published on 21.2.2024 in Vol 26 (2024)

Effects of eHealth Interventions on 24-Hour Movement Behaviors Among Preschoolers: Systematic Review and Meta-Analysis

Authors of this article:

  • Shan Jiang 1 , MSc
  • Johan Y Y Ng 1 , PhD
  • Kar Hau Chong 2 , PhD
  • Bo Peng 1 , MSc
  • Amy S Ha 1 , PhD

1 Department of Sports Science and Physical Education, The Chinese University of Hong Kong, Hong Kong, China (Hong Kong)

2 School of Health and Society and Early Start, Faculty of the Arts, Social Sciences and Humanities, University of Wollongong, Wollongong, Australia

Corresponding Author:

Amy S Ha, PhD

Department of Sports Science and Physical Education

The Chinese University of Hong Kong

G05 Kwok Sports Building, Shatin, N.T.

China (Hong Kong)

Phone: 852 3943 6083

Email: [email protected]

Background: The high prevalence of unhealthy movement behaviors among young children remains a global public health issue. eHealth is considered a cost-effective approach that holds great promise for enhancing health and related behaviors. However, previous research on eHealth interventions aimed at promoting behavior change has primarily focused on adolescents and adults, leaving a limited body of evidence specifically pertaining to preschoolers.

Objective: This review aims to examine the effectiveness of eHealth interventions in promoting 24-hour movement behaviors, specifically focusing on improving physical activity (PA) and sleep duration and reducing sedentary behavior among preschoolers. In addition, we assessed the moderating effects of various study characteristics on intervention effectiveness.

Methods: In February 2023, we searched 6 electronic databases (PubMed, Ovid, SPORTDiscus, Scopus, Web of Science, and Cochrane Central Register of Controlled Trials) for experimental studies with a randomization procedure that examined the effectiveness of eHealth interventions on 24-hour movement behaviors among preschoolers aged 2 to 6 years. The study outcomes included PA, sleep duration, and sedentary time. A meta-analysis was conducted to assess the pooled effect using a random-effects model, and subgroup analyses were conducted to explore the potential moderating effects of factors such as intervention duration, intervention type, and risk of bias (ROB). The included studies underwent a rigorous ROB assessment using the Cochrane ROB tool. Moreover, the certainty of evidence was evaluated using the GRADE (Grading of Recommendations Assessment, Development, and Evaluation) approach.
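As a rough illustration of this analytic approach, the sketch below pools per-study effect sizes with a random-effects model using R's metafor package, assuming a hypothetical data frame es with columns yi (Hedges g), vi (sampling variance), and duration (a subgroup label); it is not the authors' code.

library(metafor)

pooled <- rma(yi = yi, vi = vi, data = es, method = "REML")   # random-effects pooled Hedges g
summary(pooled)

# Subgroup (moderator) analysis, eg, by intervention duration category
rma(yi = yi, vi = vi, mods = ~ duration, data = es, method = "REML")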

Results: Of the 7191 identified records, 19 (0.26%) were included in the systematic review. The meta-analysis comprised a sample of 2971 preschoolers, derived from 13 of the included studies. Compared with the control group, eHealth interventions significantly increased moderate to vigorous PA (Hedges g=0.16, 95% CI 0.03-0.30; P=.02) and total PA (Hedges g=0.37, 95% CI 0.02-0.72; P=.04). In addition, eHealth interventions significantly reduced sedentary time (Hedges g=−0.15, 95% CI −0.27 to −0.02; P=.02) and increased sleep duration (Hedges g=0.47, 95% CI 0.18-0.75; P=.002) immediately after the intervention. However, no significant moderating effects were observed for any of the variables assessed (P>.05). The quality of evidence was rated as “moderate” for moderate to vigorous intensity PA and sedentary time outcomes and “low” for sleep outcomes.

Conclusions: eHealth interventions may be a promising strategy to increase PA, improve sleep, and reduce sedentary time among preschoolers. To effectively promote healthy behaviors in early childhood, it is imperative for future studies to prioritize the development of rigorous comparative trials with larger sample sizes. In addition, researchers should thoroughly examine the effects of potential moderators. There is also a pressing need to comprehensively explore the long-term effects resulting from these interventions.

Trial Registration: PROSPERO CRD42022365003; http://tinyurl.com/3nnfdwh3

Introduction

Physical activity (PA), sedentary behavior (SB), and sleep are integrated as “24-hour movement behaviors” owing to their collective effect on daily movement patterns. The 24-hour movement paradigm acknowledges that these behaviors can be categorized according to their intensity levels across a full day. This encompasses a diverse range of activities, including sleep; SB (eg, screen time, reclining, or lying down); and light, moderate, or vigorous PA [ 1 ]. Globally, the “24-hour movement behaviors” paradigm has been recognized and adopted into movement guidelines [ 2 ]. In 2020, the World Health Organization (WHO) released guidelines on PA and SB that incorporate all 3 movement behaviors [ 3 ]. The health benefits of engaging in PA, getting the recommended amount of sleep, and reducing sedentary time are well documented. Recent reviews have shown positive associations between PA and sleep and a wide range of child outcomes related to mental health, cognition, and cardiometabolic health [ 4 - 6 ]. In addition, different domains of SB can have varying health effects. For instance, non–screen-based sedentary activities such as reading or studying have been associated with favorable cognitive development in children [ 7 ]. Conversely, screen-based sedentary time, also referred to as “screen time,” has been found to have adverse effects on health-related outcomes [ 8 ]. Moreover, prior research has indicated that imbalances in 24-hour movement behaviors—specifically, elevated sedentary screen time coupled with diminished levels of PA and sleep—could potentially increase the risk of depression [ 9 ] and result in poor health-related quality of life [ 10 ]. Because time in a day is finite, any change in one of these movement behaviors may lead to a compensatory increase or decrease in one or both of the other behaviors.

However, failure to achieve healthy levels of 24-hour movement behaviors in early childhood remains one of the most critical global public health challenges [ 11 , 12 ]. According to the WHO guidelines [ 3 ], preschool children are recommended to engage in at least 180 minutes of daily PA, with 60 minutes dedicated to moderate to vigorous PA (MVPA). In addition, they should ensure sufficient sleep, ranging from 10 to 13 hours, while limiting sedentary recreational screen time to no more than 60 minutes per day. Unfortunately, a significant proportion of preschoolers do not meet the PA guidelines, with adherence below 50% across studies [ 13 ]. Furthermore, previous studies have consistently demonstrated that preschoolers exceed the screen time recommendations set by the WHO. A comprehensive meta-analysis of 44 studies revealed that only 35.6% of children aged between 2 and 5 years met the guideline of limiting daily screen time to 1 hour [ 8 ]. Moreover, when examining the integration of 24-hour movement behaviors, another meta-analysis discovered that only 13% of children worldwide adhere to all 3 behavior guidelines [ 14 ].

The preschool years play a crucial role in laying the foundation for long-term physical health and overall well-being [ 15 , 16 ]. Improving PA levels, minimizing SB, and prioritizing quality sleep in young children have multiple benefits, including positively influencing their physical fitness [ 17 , 18 ], promoting the development of motor and cognitive skills [ 19 , 20 ], and preventing childhood obesity [ 21 ] and associated health issues [ 14 , 22 , 23 ]. Several studies have shown that these healthy behavior patterns can shape lifelong habits that extend from childhood through adolescence and into adulthood [ 5 , 24 ].

Although these statistics are concerning, attempts to address the issue through various interventions have yielded inconsistent findings [ 25 - 28 ]. For instance, a meta-analysis of PA intervention studies involving preschoolers revealed only small to moderate effects in enhancing PA, suggesting room for improvement in achieving the desired outcomes [ 29 ]. In a meta-analysis conducted by Fangupo et al [ 30 ], no intervention effect was observed on daytime sleep duration for young children. Interestingly, earlier research has also elucidated overflow effects stemming from interventions focusing on a specific behavior, impacting other behaviors that were not the primary target. A systematic review highlighted that interventions aimed at enhancing PA in children aged <5 years led to a reduction in screen time by approximately 32 minutes [ 31 ]. It is crucial to understand that as time is finite, the durations dedicated to PA, sedentary time, and sleep are interconnected within 24 hours. Thus, we need effective interventions for preschool children that holistically address all components of 24-hour movement behaviors.

eHealth broadly refers to a diverse array of information and communication technologies used to facilitate the delivery of health care [ 32 , 33 ]. The rapid evolution of digitalization in recent decades has led to the widespread adoption of eHealth in interventions [ 28 , 34 ]. Recent reviews [ 35 - 38 ] suggest that with the global proliferation of eHealth interventions, health promotion via these platforms is evolving to become more accessible and user-friendly, garnering acceptance among adolescents and adults. Previous reviews have underscored the effectiveness of these digital platforms in enhancing various movement behavior outcomes across diverse age groups, including children aged 6 to 12 years [ 39 ], adolescents [ 40 ], adults [ 41 ], and older adults [ 42 ]. Specifically, a meta-analysis indicated that eHealth interventions have successfully promoted PA among individuals with noncommunicable diseases [ 43 ]. Another review showed that computer, mobile, and wearable technologies have the potential to mitigate sedentary time effectively [ 41 ]. Previous studies have targeted different participant groups to investigate the impact of eHealth on sleep outcomes. Deng et al [ 44 ] conducted a meta-analysis demonstrating that eHealth interventions for adults with insomnia are effective in improving sleep and can be considered a promising treatment. Nevertheless, a review focusing on healthy adolescents found no school-based eHealth interventions focusing on sleep outcomes [ 45 ].

Indeed, child-centered strategies such as gamification are used in some digital apps and have been shown to encourage children’s PA [ 46 - 48 ]. A considerable body of work has addressed the pivotal role of parental influence and role modeling in cultivating healthy lifestyle habits in children [ 49 , 50 ]. Physical literacy, a multidimensional concept encompassing various aspects of PA such as the affective, physical, cognitive, and behavioral dimensions, plays a vital role in enhancing PA engagement [ 51 ]. Ha et al [ 52 ] conducted a web-based parent-focused intervention, revealing that enhancing parents’ physical literacy can effectively support children’s participation in PA. By understanding and promoting physical literacy, parents can provide valuable support to their children, fostering a lifelong commitment to healthy and active lifestyles. Although eHealth interventions offer promise, there are conflicting findings regarding their impact, especially when they are parent supported and targeted at young children. A previous meta-analysis examining eHealth interventions targeted at parents found no significant impact on children’s BMI, and none of its included studies involved children aged <5 years [ 50 ]. Similarly, a recent systematic review observed that eHealth interventions aimed at parents showed no significant effectiveness in enhancing PA levels in young children [ 53 ]. However, digital device use among young children has become widespread. For instance, studies conducted in England (the United Kingdom), Estonia, and the United States have reported that, on average, 83% of children aged 5 years use a digital device at least once a week [ 54 ]. Research also revealed that in the United States, approximately three-fourths of children had their own mobile device by the age of 4 years, and nearly all children (96.6%) used mobile devices [ 55 ]. Consequently, there is an urgent need to harness the potential of digital platforms and explore whether they can effectively deliver interventions to preschoolers [ 56 ].

Previous research examining the effectiveness of eHealth behavior change interventions among preschoolers is scarce. Although a systematic review found a significant effect of digital health interventions on the PA of preschoolers [ 53 ], that review did not include sedentary time and sleep in its inclusion criteria, could not draw conclusive statements owing to the insufficient number of studies, and did not use quantitative methods to synthesize the evidence on the effectiveness of eHealth interventions. To our knowledge, no systematic review or meta-analysis has distinctly investigated the effects of eHealth interventions on 24-hour movement behaviors in preschoolers or the factors that may influence their implementation. Therefore, the aims of this study were (1) to assess the effectiveness of eHealth interventions on 24-hour movement behaviors (improving PA and sleep duration and decreasing sedentary time) and (2) to examine the moderating effects of study characteristics (eg, intervention duration, intervention type, and outcome measurement tools) on intervention effectiveness.

Methods

This review was registered with PROSPERO (CRD42022365003) and conducted in accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [ 57 ].

Eligibility Criteria

This review included trials with a randomization procedure that examined the outcomes of interventions using information and communication technology. These interventions targeted at least 1 movement behavior in preschool children aged 2 to 6 years. Studies were excluded if (1) the control group received an intervention using eHealth technology or (2) the study was published in a language other than English. Full details are provided in Multimedia Appendix 1 [ 58 ].

Search and Selection

The following databases were systematically searched from inception to February 08, 2023: PubMed, Ovid, SPORTDiscus, Scopus, Web of Science, and Cochrane Central Register of Controlled Trials. We used the search terms “eHealth,” “Physical activity,” “Sedentary behavior,” “Sleep,” “preschooler,” and their Medical Subject Headings terms. The complete search strategy is described in Multimedia Appendix 2 [ 59 - 61 ]. A manual search of the reference lists of the included publications was performed to identify additional eligible studies for potential inclusion. Two independent reviewers (SJ and BP) screened the titles and abstracts and subsequent full-text articles for eligibility. Discrepancies that emerged during the selection process were effectively resolved through a discussion involving 3 authors (SJ, BP, and JYYN).

Data Extraction

A comprehensive data extraction form was developed (SJ) and refined (SJ and BP) based on the Cochrane Handbook for Systematic Reviews of Interventions [ 62 ]. Extracted information included bibliographic details (authors, title, journal, and year); study details (country, design, and retention rate); participants’ characteristics (number of children and demographics); intervention details (type [parent supported, teacher led, or child centered], theoretical basis, duration, delivery tool, and intensity); comparison details (sample size and activity type); outcomes (behavioral variables with baseline and postintervention means and SDs); and measurement tools. Regarding the categorization of intervention types, we established a clear classification. Specifically, in child-centered interventions, children are the direct beneficiaries, participating autonomously with little guidance from guardians; this can be accomplished using an exergaming system or purpose-designed mobile health games. In parent-supported interventions, parents are involved in educational programs and instructions that improve their knowledge of preschoolers’ healthy movement behaviors. A teacher-led intervention involves teachers supervising preschoolers’ PA during school time or leading structured PA sessions aimed at improving health indicators. For data that were incomplete or absent from the main text, we contacted the respective authors by email.

Risk of Bias

The included studies were assessed for risk of bias (ROB) using the revised Cochrane ROB2 tool [ 63 ]. The following domains of bias were assessed for each study: selection (random sequence generation and allocation concealment), performance and detection (masking of participants, personnel, and assessors), deviations from intended interventions, missing outcome data, measurement of the outcome, appropriateness of analysis (selection of the reported outcome), and bias arising from period and carryover effects (for crossover studies) [ 63 ]. The studies were ranked as low risk, some concerns, or high risk for each domain. The ROB was evaluated independently by 2 authors (SJ and BP). Any discrepancies were resolved through discussion with a third author (JYYN).

Outcomes and Data Synthesis

The outcomes targeted any of the following movement behaviors: PA (MVPA and total PA), sedentary time (screen time and sitting time), or sleep duration. Meta-analysis was conducted in R (version 4.2.1; R Foundation for Statistical Computing) using the meta, metafor, and metareg packages [ 64 ]. A random-effects model (Hartung-Knapp method) was used to calculate pooled estimates (Hedges g, a type of standardized mean difference) to account for variations in participants and measurement methods of movement behavior outcomes [ 65 ]. Multimedia Appendix 3 [ 63 - 65 ] describes the processing of missing data. Hedges g values and their corresponding variances were calculated using the pre- and postintervention mean scores and SDs. However, if studies reported only the changes from baseline to postintervention or if there were significant differences in their baseline data [ 59 - 61 ], we used the within-group differences in means and their SDs for the intervention and control groups to calculate the effect size. Values of 0.2, 0.5, and 0.8 represent small, moderate, and large effect sizes, respectively. A positive effect size indicated a beneficial effect for the intervention group compared with the control group. The between-study heterogeneity of the synthesized effect sizes was examined using the Cochran Q test and the I² statistic. I² values of 25%, 50%, and 75% indicated low, moderate, and high levels of heterogeneity, respectively. Subgroup analyses were conducted based on the following factors: (1) intervention duration (0-3 months vs >3 months), (2) type of intervention (child centered, parent focused, or teacher led), (3) type of outcome measurement tool (objective vs self-reported), and (4) ROB (low risk, some concerns, or high risk).
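To make the pooling step concrete, the snippet below is a minimal sketch in R using the metafor package named above. The 5 studies and all of their means, SDs, sample sizes, and labels are hypothetical values invented for illustration; they are not data extracted in this review, and the modeling choices shown are only one reasonable reading of the procedure described above.

```r
# Minimal illustrative sketch (R, metafor package); all study data are hypothetical.
library(metafor)

dat <- data.frame(
  study = c("Trial A", "Trial B", "Trial C", "Trial D", "Trial E"),
  # postintervention means, SDs, and sample sizes (intervention vs control)
  m1i = c(55.0, 48.3, 62.1, 50.5, 44.2), sd1i = c(11.5, 9.8, 12.0, 10.6, 9.1), n1i = c(60, 120, 45, 80, 34),
  m2i = c(50.2, 47.5, 58.4, 49.0, 42.8), sd2i = c(12.0, 10.1, 11.8, 10.2, 9.4), n2i = c(58, 118, 44, 78, 35)
)

# Hedges g (bias-corrected standardized mean difference) and its sampling variance
es <- escalc(measure = "SMD",
             m1i = m1i, sd1i = sd1i, n1i = n1i,
             m2i = m2i, sd2i = sd2i, n2i = n2i,
             data = dat)

# Random-effects model with the Knapp-Hartung adjustment (test = "knha");
# the summary reports the pooled g, its 95% CI, the Cochran Q test, and I^2
res <- rma(yi, vi, data = es, slab = study, method = "REML", test = "knha")
summary(res)

# Forest plot of study-level and pooled effects
forest(res)
```

Subgroup analyses by categorical study characteristics could be run analogously by passing a factor (eg, intervention type) to the mods argument of rma().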

Furthermore, we performed meta-regression analyses to examine the impact of potential moderators on the overall effect size. Potential moderators included the variables specified in the subgroup analyses and 2 continuous variables (sample size and intervention length). These variables were selected based on existing evidence that highlights their significant moderating effects on eHealth interventions targeting movement behaviors [ 53 , 66 , 67 ]. Sensitivity analyses were performed using the leave-one-out method. Publication bias was visualized using funnel plots and quantified using the Egger test, for which P <.05 indicates significant publication bias [ 68 ].
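Continuing the illustrative sketch above, and under the same assumptions (hypothetical studies and an invented moderator), the lines below show how a meta-regression, a leave-one-out sensitivity analysis, and an Egger-type regression test of funnel plot asymmetry could be carried out with metafor; this is a sketch of the general workflow, not the review's actual analysis code.

```r
# Continues the illustrative metafor sketch above; moderator values are hypothetical.
es$duration_wk <- c(12, 24, 6, 16, 8)   # invented intervention lengths in weeks

# Meta-regression: does intervention length moderate the pooled effect?
res_mod <- rma(yi, vi, mods = ~ duration_wk, data = es, method = "REML", test = "knha")
summary(res_mod)

# Leave-one-out sensitivity analysis: re-estimate the pooled effect,
# omitting one study at a time
leave1out(res)

# Publication bias: funnel plot plus Egger-type regression test
# (P < .05 suggests funnel plot asymmetry)
funnel(res)
regtest(res)
```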

Quality Assessment of the Overall Evidence

The GRADE (Grading of Recommendations Assessment, Development, and Evaluation) criteria were used to assess the certainty of evidence for the effect of eHealth interventions on the targeted outcomes [ 69 , 70 ]. The GRADE assessment was completed using GRADEpro, and the quality of evidence was classified as high (≥4 points overall), moderate (3 points), low (2 points), or very low (≤1 point) [ 70 ].

Results

Study Selection

The database search yielded 7140 records, with an additional 51 records identified from the reference lists of relevant systematic reviews. A total of 64 full-text articles were screened, and 45 were excluded; the reasons for exclusion are listed in Multimedia Appendix 4 . A total of 19 studies reporting the effectiveness of interventions on movement behaviors were included in the systematic review [ 17 , 59 - 61 , 71 - 85 ], and 13 studies were included in the meta-analysis [ 59 - 61 , 76 - 85 ]. The PRISMA flowchart of the study selection process is shown in Figure 1 , and the PRISMA checklists are provided in Multimedia Appendices 5 and 6 .


Study Characteristics

The study characteristics are described in Table 1 . In the 19 studies, 2971 preschoolers from 6 regions were included. A total of 18 studies were conducted in high-income countries, and only 1 study was conducted in an upper middle–income country, according to the World Bank classification ( Multimedia Appendix 7 ) [ 86 ]. Most included studies were conducted in or after 2017. Regarding study design, 16 studies were 2-arm randomized controlled trials (RCTs): 11 used a parallel-group design [ 17 , 59 - 61 , 71 - 74 , 76 , 77 , 84 ], 2 were cluster RCTs [ 82 , 83 ], 2 were pilot RCTs [ 79 , 81 ], and 1 was a crossover study [ 85 ]. The remaining 3 studies were 2-arm experimental studies with a randomization procedure [ 75 , 78 , 80 ]. Sample sizes ranged from 34 to 617 preschoolers. The study details are presented in Multimedia Appendices 8 and 9 [ 59 - 61 , 76 - 85 ].

a I: intervention.

b C: control.

c ECEC: early childhood education and care.

d PA: physical activity.

e SB: sedentary behavior.

f mHealth: mobile health.

g MINISTOP: mobile-based intervention intended to stop obesity in preschoolers.

h FMS: fundamental movement skills.

Intervention Details

The included studies used various eHealth delivery channels for the intervention. Seven studies used smartphone apps [ 59 - 61 , 74 ] or social media (Facebook and WhatsApp) [ 75 , 80 , 82 ]; 3 studies used an exergaming program [ 17 , 73 , 85 ]; 3 studies used the internet, with interventions including informational websites [ 83 , 84 ] and tablet computers [ 72 ]; and the remaining studies used text messages and telephone calls to deliver exercise reminders and motivational messages encouraging persistence [ 71 , 76 - 79 , 81 ].

The intervention duration ranged from 1 week [ 78 ] to 36 months [ 77 ]. Seven studies had interventions that lasted >3 months [ 59 , 61 , 71 , 76 , 77 , 80 , 82 ]. Only 3 studies included a follow-up assessment after the intervention, with durations of 6 weeks [ 84 ], 3 months [ 72 ], and 6 months [ 60 ]. Regarding intervention types, 12 studies were parent supported [ 59 - 61 , 71 , 72 , 75 - 77 , 79 - 81 , 84 ], 3 were teacher led [ 78 , 82 , 83 ], and 4 involved eHealth interventions directed at children [ 17 , 73 , 74 , 85 ].

The comparison groups included a waitlist control group (n=4) [ 74 , 79 , 81 , 84 ], education as usual (n=7) [ 17 , 59 , 75 , 78 , 80 , 82 , 85 ], and an additional non-eHealth intervention (n=8) [ 59 - 61 , 71 - 73 , 76 , 77 ]. A total of 14 studies targeted PA [ 17 , 59 - 61 , 72 - 75 , 77 , 78 , 80 , 81 , 83 , 85 ], 12 studies targeted SB [ 59 - 61 , 71 , 76 - 80 , 82 , 84 , 85 ], and 4 studies targeted sleep duration [ 71 , 76 , 81 , 84 ]. Notably, no studies examined all 3 movement behaviors.

Meta-Analyses

Meta-analyses demonstrated that eHealth interventions produced significant improvements in MVPA (Hedges g =0.16, 95% CI 0.03-0.30; P =.02; 7/13, 54%) and total PA (Hedges g =0.37, 95% CI 0.02-0.72; P =.04; 2/13, 15%), as shown in Figure 2 A [ 77 , 78 , 80 - 83 , 85 ]. For SB outcomes, another meta-analysis showed a significant decrease (Hedges g =−0.15, 95% CI −0.27 to −0.02; P =.02; 8/13, 62%), as shown in Figure 2 B [ 76 - 80 , 82 , 84 , 85 ]. Finally, meta-analysis also showed that there were significant improvements in sleep duration (Hedges g =0.47, 95% CI 0.18-0.75; P <.01; 3/13, 23%), as shown in Figure 2 C [ 76 , 81 , 84 ].

The 3 studies from the mobile-based intervention intended to stop obesity in preschoolers (MINISTOP) project reported only the difference in pre-to-post change [ 60 , 61 , 76 ]; owing to this heterogeneity, pooling them with the other included studies was deemed inappropriate. We therefore analyzed the MINISTOP studies separately and presented the findings using a forest plot. This pooled analysis indicated no significant change in MVPA (Hedges g =−0.03, 95% CI −0.15 to 0.09; P =.66; 3/6, 50%; Multimedia Appendix 10 [ 59 - 61 , 76 - 85 ]) between the intervention and control groups. Similarly, no statistically significant effect on SB was observed immediately after the intervention (Hedges g =0.02, 95% CI −0.13 to 0.16; P =.83; 3/6, 50%; Multimedia Appendix 10 ). All the results showed negligible heterogeneity ( I² =0).


Subgroup Analyses and Meta-Regression

Table 2 shows the subgroup analysis and meta-regression results of MVPA and sedentary time according to study characteristics. No significant moderating effects were observed for any of the variables assessed ( P >.05). The complete results of the subgroup analyses are presented in Multimedia Appendix 11 [ 59 - 61 , 76 - 85 ].

a MVPA: moderate to vigorous physical activity.

b N/A: not applicable.

c Teacher-focused studies served as the reference group.

Sensitivity Analyses and Publication Bias

Sensitivity analysis indicated that no individual study had an excessive influence on the results. The omitted meta-analytic estimates were not significantly different from those associated with the combined analysis, and all estimates were within the 95% CI. Forest plots of the sensitivity analysis for MVPA, sedentary time, and sleep are summarized in Multimedia Appendix 12 [ 59 - 61 , 76 - 85 ]. The significance of Egger’s test results provided evidence for asymmetry of the funnel plots (MVPA: t 5 =3.27; P =.02; Multimedia Appendix 13 ; sedentary time: t 6 =−3.37; P =.02; Multimedia Appendix 14 ). However, we could not distinguish chance from true asymmetry using the funnel plot asymmetry test because <10 studies were included in our meta-analysis [ 86 ].

ROB of Studies

Multimedia Appendix 15 [ 59 - 61 , 76 - 85 ] summarizes the overall ROB assessment for all the included papers. Six studies were considered to have a low ROB [ 59 , 74 , 76 , 77 , 79 , 85 ], and the remaining 13 were considered to have some concerns regarding the ROB [ 17 , 60 , 61 , 71 - 73 , 75 , 78 , 80 - 84 ]. Furthermore, 7 studies did not disclose their randomization methods clearly [ 17 , 72 , 75 , 78 , 80 , 82 , 83 ], so they were rated as having some concerns about random sequence generation. All studies were rated as having a low risk for the measurement of outcomes based on the use of objective measurement tools or reliable questionnaires in each study. Four studies were rated as having some concerns regarding reporting bias because neither published study protocols nor registered trial records were available [ 72 , 75 , 78 , 80 ].

Quality of the Evidence

The GRADE scores are shown in Multimedia Appendix 16 , and we deemed the overall quality of evidence to be moderate to low. The quality of evidence for MVPA and sedentary time outcomes was rated as “moderate,” considering the low ROB, absence of heterogeneity in participants’ outcomes, and high precision in results. As eHealth interventions are often combined with other intervention approaches, all outcomes were rated as indirect. The evidence for total PA and sleep was graded as “low” owing to high imprecision arising from the small sample sizes.

Discussion

Principal Findings

This study systematically reviewed the effectiveness of eHealth interventions targeting 24-hour movement behaviors among preschool-aged children. Most studies assessed interventions aimed at increasing PA and decreasing SB. Few studies targeted sleep, and no studies have addressed a combination of all 24-hour movement behaviors. Overall, these studies showed trends supporting the effectiveness of eHealth interventions in increasing PA and sleep duration and reducing sedentary time immediately after the intervention; however, only short-term effects were found, and all trials were judged to be of low to moderate quality.

This review demonstrates a small positive effect of eHealth interventions targeting increases in preschoolers’ MVPA (Hedges g =0.16) and total PA (Hedges g =0.37) immediately after the intervention. One possible explanation could be that eHealth interventions, while providing new opportunities for PA, might not be sufficient to result in significant overall activity increases. This might require expanding activity opportunities, extending new activity options, and enhancing broader activity strategies to achieve substantial benefits. Our findings echo the argument made in a previous study of young children that PA interventions had a small effect on MVPA [ 87 ]. Another meta-analysis found a positive impact of PA interventions with small to moderate effects on total PA (Hedges g =0.44) and moderate effects on MVPA (Hedges g =0.51) [ 29 ]. There is no conclusive explanation as to why MVPA and total PA showed smaller effects in our study, but this could be attributed to most interventions thus far concentrating on devising PA programs of diverse intensities (low-intensity PA, MVPA, and total PA; eg, outdoor active play and structured gross motor activity sessions in childcare environments) without distinct intensity-specific objectives. Moreover, our results are consistent with previous review findings that digital platforms can potentially increase PA among preschoolers [ 53 ]. Hence, future interventions should aim to optimize their effectiveness in increasing PA among young children. In addition, further research is warranted to investigate the mechanisms of the changes associated with these PA outcomes. This will help enhance the size and sustainability of the effects observed in eHealth interventions.

We found no significant improvement in MVPA for mobile app interventions (MINISTOP project). This is in contrast to a review of studies focusing on mobile apps and technologies, which highlighted the significant potential to enhance PA [ 88 ]. It is worth noting that the MINISTOP project aimed to reduce obesity as its primary outcome rather than targeting MVPA. In addition, studies concentrating solely on educating parents without implementing direct interventions for children have not achieved the desired enhancements in MVPA. Thus, we cannot draw conclusions about mobile apps because few intervention studies have used these means of communication for young children and their guardians. Given the small number of studies included in our meta-analysis, the positive, negative, and null findings of the individual studies may have attenuated the results. Thus, considering the popularity and cost-effectiveness of mobile apps in the new generation, future research should investigate the potential of using emerging and novel technologies, such as mobile health, for preschoolers.

Our meta-analysis suggests that eHealth interventions may be an effective strategy for decreasing sedentary time in preschoolers, although the magnitude of the effect was small (Hedges g =−0.15) and short term. Nonetheless, the significance should not be understated, given that many studies indicate that reduced sedentary time during childhood correlates with improved physical and mental health outcomes in subsequent years [ 16 , 21 , 89 ]. In the subgroup analysis, the effect of eHealth interventions on sedentary time varied depending on whether accelerometer or questionnaire measures were used. The questionnaire measures yielded higher levels of sedentary time, although this difference was not statistically significant. This observation aligns with findings from the existing literature, suggesting that questionnaire-based assessments tend to overestimate the actual sedentary time. For a more accurate evaluation of the impact of eHealth interventions, future research should consider using device-based measurement methods [ 90 ].

Interestingly, most eHealth interventions aimed to increase children’s PA and reduce sedentary time with parental support. Previous research has shown that parental and family involvement were among the key intervention components that encouraged significant improvement in children’s health behaviors and a decrease in sedentary time [ 91 , 92 ]. Likewise, Ha et al [ 49 ] found that parents’ physical literacy predicts children’s values toward PA, and concurrent interventions that target enhancing parents’ physical literacy for PA in the family context may be more effective in raising children’s PA values. However, our subgroup analysis showed no significant improvements in MVPA or reductions in sedentary time with the parent-supported interventions. This result also aligns with a prior review indicating that parent-directed digital interventions were ineffective in improving PA [ 53 ]. In that review, 8 studies, all published before 2020, primarily used digital platforms to convey health information and education to parents. Notably, in the wake of the COVID-19 pandemic, there has been a marked increase in research centered on leveraging technology to improve children’s PA over the past 3 years [ 93 ]. Furthermore, the discourse regarding the comparative value of targeting either parents or children exclusively is not a novel debate within intervention research. In contrast to that review, our study featured a larger sample size and included a quantitative analysis of effect sizes in the interventions. These insights indicate that prevailing eHealth interventions, even with parental support, may fail to effectively engage preschoolers. Recognizing the reciprocal dynamics between parents and young children, and how these dynamics influence children’s PA and SB, can offer insights for refining digital interventions. Therefore, preliminary research is imperative to comprehensively understand parents’ perceptions, attitudes, and driving factors.

Intervention duration is also an essential component for conducting acceptable and highly effective interventions. Another subgroup analysis found that interventions with a duration of <3 months had a greater effect on PA and sedentary time than those with a duration of >3 months, although the difference was not statistically significant. This notion is corroborated by another systematic review, which demonstrated the difficulty in sustaining long-term behavior change, potentially attributed to the diminishing effects of behavior change interventions mediated by digital technology [ 41 ].

The meta-analysis, involving 3 studies, revealed an immediate improvement in sleep duration following the intervention. Previous research has extensively examined the influence of sleep duration during the preschool years on physical, cognitive, and psychosocial development. For instance, the systematic review by Chaput et al [ 6 ] involving 25 studies revealed a correlation between shorter sleep duration and diminished emotion regulation in children aged 0 to 4 years. Recent findings also suggest that maintaining an extended sleep duration during the early preschool stages is significant for subsequent behavioral outcomes [ 24 ]. However, few studies have focused on effective interventions to improve sleep outcomes [ 45 , 94 ]. Consequently, further research is warranted to explore the impact of eHealth interventions on sleep outcomes among preschoolers.

There is increasing awareness of the interconnected and interdependent nature of 24-hour movement behaviors [ 14 ]. However, none of the studies in our review specifically investigated the intervention effects on all 3 movement behaviors. Generally, conventional analytical methods do not adequately consider these indicators during analysis. Therefore, future research should explore alternative approaches, such as compositional analyses, to attain a more profound comprehension of whether an optimal equilibrium is present among SB, light PA, MVPA, and sleep [ 90 , 95 , 96 ]. Furthermore, most studies in our review examined the immediate postintervention effect. Consequently, insights into the enduring nature of alterations in 24-hour movement behaviors remain elusive. Further studies should include long-term follow-up assessments. In addition, it would be interesting to obtain more insights into the feasibility of incorporating wearable devices and apps into the design of eHealth interventions. This information could inform the design of wearables and apps that effectively enhance PA, diminish sedentary time, and enhance sleep, thereby maximizing their impact on public health. Moreover, the overall quality of the interventions was suboptimal, lacking thorough descriptions or proper execution in areas such as randomization, blinded outcome assessment, valid measurement of 24-hour movement behaviors, and adjusted differences between groups. In our meta-analysis, we observed that lower-quality studies exhibited a more pronounced positive impact on the targeted outcomes. Thus, it is essential to interpret the results cautiously, recognizing that there could be an overestimation of the effect of eHealth interventions in studies of lower quality owing to potential bias. This mirrors the findings from previous reviews of eHealth interventions for childhood PA [ 53 ] and behavior change interventions among adolescents [ 45 ].

Strengths and Limitations

This systematic review has several strengths. First, this study is the first meta-analysis to quantitatively assess the effects of previously conducted RCTs using eHealth interventions on 24-hour movement behaviors in preschoolers. Second, the review was conducted rigorously, encompassing comprehensive search terms and an extensive systematic search strategy. We focused on robust evidence from RCTs, assessed the quality of evidence using the GRADE approach, and adhered to a preregistered protocol. This meticulous approach reduces heterogeneity and provides a more precise estimation of the effects.

Nonetheless, several limitations of our study should be noted. First, the quality of the studies included in this review was generally low, and many lacked rigorous study designs. Second, the small number of studies identified over the decade spanned by this meta-analysis underscores the nascent state of this research domain, even considering significant technological advancements and their widespread acceptance. Third, although we systematically screened relevant electronic databases to identify studies, the search was restricted to studies published in English. Finally, the lack of evidence regarding sustained effects beyond the immediate postintervention period underscores the need for extended follow-up. Future studies should strive to elucidate strategies for maintaining intervention effects across preschoolers’ developmental trajectories.

Future Research and Implications

This study highlights several significant avenues for future research. First, further research is warranted to develop eHealth interventions that yield larger effect sizes and higher quality, specifically in identifying effective 24-hour movement behaviors. It is worth noting that none of the eligible eHealth interventions addressed the comprehensive integration of 24-hour movement behaviors in preschoolers, despite the increasing recognition of the interdependence between PA, SB, and sleep. Second, many studies were conducted in Western and high-income countries, prompting the need for further exploration of the effectiveness of eHealth behavior change interventions in other country settings. Third, our study’s focus was primarily on the quantitative aspects of 24-hour movement behaviors, warranting future studies to also delve into the qualitative facets, such as motor skills and sleep quality. In addition, it is crucial to recognize the pivotal role of objective measurement tools in comprehending movement behaviors among young children. Given the sporadic and unstructured nature of preschoolers’ activities, it becomes challenging for parents and teachers to accurately discern shifts in MVPA and SB, even if they have occurred. This highlights the importance of using objective measurement tools for precise insights into these behaviors. Finally, future research in this field should prioritize broadening the focus and incorporate additional dimensions, such as physical, affective, and cognitive indicators. This approach may promote the holistic development of young children and contribute to advancements in the field of health outcomes. By considering these dimensions, researchers can also gain a comprehensive understanding of the various factors that influence children’s overall well-being and physical literacy development.

Given the multifaceted nature of intervention moderators, further research is warranted to establish optimal patterns of daily movement behaviors and to gain deeper insights into the mechanisms underlying change when addressing the amalgamation of 24-hour movement behaviors in preschoolers. Indeed, future interventions should also draw from the effective behavior change techniques used in single-behavior eHealth interventions and apply them to interventions targeting multiple healthy movement behaviors. Moreover, collaborative engagement with parents and teachers throughout both the developmental and implementation phases of these interventions will play a pivotal role in their success. In addition, capitalizing on emerging and novel technologies may offer a valuable avenue to enhance the effectiveness and feasibility of these interventions.

Conclusions

The findings suggest that eHealth interventions may hold promise in improving 24-hour movement behaviors, particularly by increasing PA, improving sleep duration, and reducing sedentary time among preschoolers. However, these effects were relatively modest and transient and were observed primarily immediately after the intervention. Furthermore, the overall quality of the evidence was rated as moderate to low. As a result, there is a pressing need for rigorous and high-quality research endeavors to develop eHealth interventions capable of effectively enhancing both the quantity and quality of 24-hour movement behaviors simultaneously. These interventions should strive to maintain their effects over extended periods.

Acknowledgments

The authors of this study would like to express their sincere gratitude to the authors who responded to their emails and generously provided detailed information and data regarding their studies. Their cooperation has been instrumental in advancing this study.

Data Availability

The data sets generated and analyzed during this study are available from the corresponding author on reasonable request.

Authors' Contributions

SJ drafted the manuscript. SJ, ASH, and JYYN were responsible for the concept and design of the study. SJ and BP screened all abstracts and full texts, extracted all data, performed the risk of bias assessment, and conducted the quality assessment. SJ performed the statistical analyses. SJ, JYYN, KHC, and ASH critically revised the manuscript for important intellectual content. All authors participated in developing the review’s methodology, contributed to multiple manuscript drafts, and gave their approval for the final version.

Conflicts of Interest

None declared.

Eligibility criteria for study inclusion.

Search strategy.

Missing data processing.

Excluded studies.

PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) checklist.

PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) abstract checklist.

Number of studies included per country and income economy.

Summary of intervention details in the included studies.

Characteristics of the included studies including physical activity, sedentary behavior, and sleep outcomes.

Forest plot of the mobile-based intervention intended to stop obesity in preschoolers (MINISTOP) results.

Forest plots of the subgroup analyses of moderate to vigorous physical activity and sedentary behavior.

Sensitivity analysis.

Publication bias funnel plot for moderate to vigorous physical activity.

Publication bias funnel plot for sedentary behavior.

Risk of bias.

GRADE (Grading of Recommendations Assessment, Development, and Evaluation) assessment results.

  • Shirazipour CH, Raines C, Diniz MA, Salvy SJ, Haile RW, Freedland SJ, et al. The 24-hour movement paradigm: an integrated approach to the measurement and promotion of daily activity in cancer clinical trials. Contemp Clin Trials Commun. Apr 2023;32:101081. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • WHO guidelines on physical activity and sedentary behaviour. World Health Organization. 2020. URL: https://www.who.int/publications/i/item/9789240015128 [accessed 2024-01-18]
  • Guidelines on physical activity, sedentary behaviour and sleep for children under 5 years of age. World Health Organization. 2019. URL: https://iris.who.int/bitstream/handle/10665/311664/9789241550536-eng.pdf?sequence=1 [accessed 2024-01-18]
  • Kuzik N, Poitras VJ, Tremblay MS, Lee EY, Hunter S, Carson V. Systematic review of the relationships between combinations of movement behaviours and health indicators in the early years (0-4 years). BMC Public Health. Nov 20, 2017;17(Suppl 5):849. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Rollo S, Antsygina O, Tremblay MS. The whole day matters: understanding 24-hour movement guideline adherence and relationships with health indicators across the lifespan. J Sport Health Sci. Dec 2020;9(6):493-510. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Chaput JP, Gray CE, Poitras VJ, Carson V, Gruber R, Birken CS, et al. Systematic review of the relationships between sleep duration and health indicators in the early years (0-4 years). BMC Public Health. Nov 20, 2017;17(Suppl 5):855. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hu R, Zheng H, Lu C. The association between sedentary screen time, non-screen-based sedentary time, and overweight in Chinese preschool children: a cross-sectional study. Front Pediatr. Dec 2021;9:767608. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • McArthur BA, Volkova V, Tomopoulos S, Madigan S. Global prevalence of meeting screen time guidelines among children 5 years and younger: a systematic review and meta-analysis. JAMA Pediatr. Apr 01, 2022;176(4):373-383. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • da Costa BG, Chaput JP, Lopes MV, Malheiros LE, Silva KS. Movement behaviors and their association with depressive symptoms in Brazilian adolescents: a cross-sectional study. J Sport Health Sci. Mar 2022;11(2):252-259. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Del Pozo-Cruz B, Perales F, Parker P, Lonsdale C, Noetel M, Hesketh KD, et al. Joint physical-activity/screen-time trajectories during early childhood: socio-demographic predictors and consequences on health-related quality-of-life and socio-emotional outcomes. Int J Behav Nutr Phys Act. Jul 08, 2019;16(1):55. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Okely AD, Kariippanon KE, Guan H, Taylor EK, Suesse T, Cross PL, et al. Global effect of COVID-19 pandemic on physical activity, sedentary behaviour and sleep among 3- to 5-year-old children: a longitudinal study of 14 countries. BMC Public Health. May 17, 2021;21(1):940. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Okely AD, Reilly JJ, Tremblay MS, Kariippanon KE, Draper CE, El Hamdouchi A, et al. Cross-sectional examination of 24-hour movement behaviours among 3- and 4-year-old children in urban and rural settings in low-income, middle-income and high-income countries: the SUNRISE study protocol. BMJ Open. Oct 25, 2021;11(10):e049267. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Tucker P. The physical activity levels of preschool-aged children: a systematic review. Early Child Res Q. Oct 2008;23(4):547-558. [ FREE Full text ] [ CrossRef ]
  • Feng J, Zheng C, Sit CH, Reilly JJ, Huang WY. Associations between meeting 24-hour movement guidelines and health in the early years: a systematic review and meta-analysis. J Sports Sci. Nov 28, 2021;39(22):2545-2557. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Feng J, Huang WY, Reilly JJ, Wong SH. Compliance with the WHO 24-h movement guidelines and associations with body weight status among preschool children in Hong Kong. Appl Physiol Nutr Metab. Oct 2021;46(10):1273-1278. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Rodriguez-Ayllon M, Cadenas-Sánchez C, Estévez-López F, Muñoz NE, Mora-Gonzalez J, Migueles JH, et al. Role of physical activity and sedentary behavior in the mental health of preschoolers, children and adolescents: a systematic review and meta-analysis. Sports Med. Sep 16, 2019;49(9):1383-1410. [ CrossRef ] [ Medline ]
  • Gao Z, Lee JE, Zeng N, Pope ZC, Zhang Y, Li X. Home-based exergaming on preschoolers' energy expenditure, cardiovascular fitness, body mass index and cognitive flexibility: a randomized controlled trial. J Clin Med. Oct 21, 2019;8(10):1745. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Wong RS, Tung KT, Chan BN, Ho FK, Rao N, Chan KL, et al. Early-life activities mediate the association between family socioeconomic status in early childhood and physical fitness in early adolescence. Sci Rep. Jan 07, 2022;12(1):81. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Zeng N, Ayyub M, Sun H, Wen X, Xiang P, Gao Z. Effects of physical activity on motor skills and cognitive development in early childhood: a systematic review. Biomed Res Int. 2017;2017:2760716. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Carson V, Hunter S, Kuzik N, Wiebe SA, Spence JC, Friedman A, et al. Systematic review of physical activity and cognitive development in early childhood. J Sci Med Sport. Jul 2016;19(7):573-578. [ CrossRef ] [ Medline ]
  • Talarico R, Janssen I. Compositional associations of time spent in sleep, sedentary behavior and physical activity with obesity measures in children. Int J Obes (Lond). Aug 2018;42(8):1508-1514. [ CrossRef ] [ Medline ]
  • Cliff DP, McNeill J, Vella SA, Howard SJ, Santos R, Batterham M, et al. Adherence to 24-hour movement guidelines for the early years and associations with social-cognitive development among Australian preschool children. BMC Public Health. Nov 20, 2017;17(Suppl 5):857. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Christian H, Murray K, Trost SG, Schipperijn J, Trapp G, Maitland C, et al. Meeting the Australian 24-hour movement guidelines for the early years is associated with better social-emotional development in preschool boys. Prev Med Rep. Jun 2022;27:101770. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Guerlich K, Avraam D, Cadman T, Calas L, Charles MA, Elhakeem A, et al. Sleep duration in preschool age and later behavioral and cognitive outcomes: an individual participant data meta-analysis in five European cohorts. Eur Child Adolesc Psychiatry. Jan 2024;33(1):167-177. [ CrossRef ] [ Medline ]
  • Johnstone A, Hughes AR, Martin A, Reilly JJ. Utilising active play interventions to promote physical activity and improve fundamental movement skills in children: a systematic review and meta-analysis. BMC Public Health. Jun 26, 2018;18(1):789. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Lee AM, Chavez S, Bian J, Thompson LA, Gurka MJ, Williamson VG, et al. Efficacy and effectiveness of mobile health technologies for facilitating physical activity in adolescents: scoping review. JMIR Mhealth Uhealth. Feb 12, 2019;7(2):e11847. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Morgan EH, Schoonees A, Sriram U, Faure M, Seguin-Fowler RA. Caregiver involvement in interventions for improving children's dietary intake and physical activity behaviors. Cochrane Database Syst Rev. Jan 05, 2020;1(1):CD012547. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Hammersley ML, Wyse RJ, Jones RA, Okely AD, Wolfenden L, Eckermann S, et al. Telephone and web-based delivery of healthy eating and active living interventions for parents of children aged 2 to 6 years: mixed methods process evaluation of the time for healthy habits translation trial. J Med Internet Res. May 26, 2022;24(5):e35771. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Gordon ES, Tucker P, Burke SM, Carron AV. Effectiveness of physical activity interventions for preschoolers: a meta-analysis. Res Q Exerc Sport. Sep 2013;84(3):287-294. [ CrossRef ] [ Medline ]
  • Fangupo LJ, Haszard JJ, Reynolds AN, Lucas AW, McIntosh DR, Richards R, et al. Do sleep interventions change sleep duration in children aged 0-5 years? A systematic review and meta-analysis of randomised controlled trials. Sleep Med Rev. Oct 2021;59:101498. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Downing KL, Hnatiuk JA, Hinkley T, Salmon J, Hesketh KD. Interventions to reduce sedentary behaviour in 0-5-year-olds: a systematic review and meta-analysis of randomised controlled trials. Br J Sports Med. Mar 06, 2018;52(5):314-321. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Oh H, Rizo C, Enkin M, Jadad A. What is eHealth (3): a systematic review of published definitions. J Med Internet Res. Feb 24, 2005;7(1):e1. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Boogerd EA, Arts T, Engelen LJ, van de Belt TH. "What is eHealth": time for an update? JMIR Res Protoc. Mar 12, 2015;4(1):e29. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Liu S, Li J, Wan DY, Li R, Qu Z, Hu Y, et al. Effectiveness of eHealth self-management interventions in patients with heart failure: systematic review and meta-analysis. J Med Internet Res. Sep 26, 2022;24(9):e38697. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Brown HE, Atkin AJ, Panter J, Wong G, Chinapaw MJ, van Sluijs EM. Family-based interventions to increase physical activity in children: a systematic review, meta-analysis and realist synthesis. Obes Rev. Apr 2016;17(4):345-360. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • He Z, Wu H, Yu F, Fu J, Sun S, Huang T, et al. Effects of smartphone-based interventions on physical activity in children and adolescents: systematic review and meta-analysis. JMIR Mhealth Uhealth. Feb 01, 2021;9(2):e22601. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Bonvicini L, Pingani I, Venturelli F, Patrignani N, Bassi MC, Broccoli S, et al. Effectiveness of mobile health interventions targeting parents to prevent and treat childhood obesity: systematic review. Prev Med Rep. Oct 2022;29:101940. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Western MJ, Armstrong ME, Islam I, Morgan K, Jones UF, Kelson MJ. The effectiveness of digital interventions for increasing physical activity in individuals of low socioeconomic status: a systematic review and meta-analysis. Int J Behav Nutr Phys Act. Nov 09, 2021;18(1):148. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Lau PW, Lau EY, Wong DP, Ransdell L. A systematic review of information and communication technology-based interventions for promoting physical activity behavior change in children and adolescents. J Med Internet Res. Jul 13, 2011;13(3):e48. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Rose T, Barker M, Maria Jacob C, Morrison L, Lawrence W, Strömmer S, et al. A systematic review of digital interventions for improving the diet and physical activity behaviors of adolescents. J Adolesc Health. Dec 2017;61(6):669-677. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Stephenson A, McDonough SM, Murphy MH, Nugent CD, Mair JL. Using computer, mobile and wearable technology enhanced interventions to reduce sedentary behaviour: a systematic review and meta-analysis. Int J Behav Nutr Phys Act. Aug 11, 2017;14(1):105. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Yerrakalva D, Yerrakalva D, Hajna S, Griffin S. Effects of mobile health app interventions on sedentary time, physical activity, and fitness in older adults: systematic review and meta-analysis. J Med Internet Res. Nov 28, 2019;21(11):e14343. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Duan Y, Shang B, Liang W, Du G, Yang M, Rhodes RE. Effects of eHealth-based multiple health behavior change interventions on physical activity, healthy diet, and weight in people with noncommunicable diseases: systematic review and meta-analysis. J Med Internet Res. Feb 22, 2021;23(2):e23786. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Deng W, M J J van der Kleij R, Shen H, Wei J, Brakema EA, Guldemond N, et al. eHealth-based psychosocial interventions for adults with insomnia: systematic review and meta-analysis of randomized controlled trials. J Med Internet Res. Mar 14, 2023;25:e39250. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Champion KE, Parmenter B, McGowan C, Spring B, Wafford QE, Gardner LA, et al. Health4Life team. Effectiveness of school-based eHealth interventions to prevent multiple lifestyle risk behaviours among adolescents: a systematic review and meta-analysis. Lancet Digit Health. Sep 2019;1(5):e206-e221. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Johnson D, Deterding S, Kuhn KA, Staneva A, Stoyanov S, Hides L. Gamification for health and wellbeing: a systematic review of the literature. Internet Interv. Nov 2016;6:89-106. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Kozak AT, Buscemi J, Hawkins MA, Wang ML, Breland JY, Ross KM, et al. Technology-based interventions for weight management: current randomized controlled trial evidence and future directions. J Behav Med. Feb 2017;40(1):99-111. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Benzing V, Schmidt M. Exergaming for children and adolescents: strengths, weaknesses, opportunities and threats. J Clin Med. Nov 08, 2018;7(11):422. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Ha AS, Jia J, Ng FF, Ng JY. Parent’s physical literacy enhances children’s values towards physical activity: a serial mediation model. Psychol Sport Exerc. Nov 2022;63:102297. [ FREE Full text ] [ CrossRef ]
  • Hammersley ML, Jones RA, Okely AD. Parent-focused childhood and adolescent overweight and obesity eHealth interventions: a systematic review and meta-analysis. J Med Internet Res. Jul 21, 2016;18(7):e203. [ FREE Full text ] [ CrossRef ] [ Medline ]
  • Cornish K, Fox G, Fyfe T, Koopmans E, Pousette A, Pelletier CA. Understanding physical literacy in the context of health: a rapid scoping review. BMC Public Health. Oct 19, 2020;20(1):1569. [ FREE Full text ] [ CrossRef ] [ Medline ]


Scientists aghast at bizarre AI rat with huge genitals in peer-reviewed article

It's unclear how such egregiously bad images made it through peer review.

Beth Mole - Feb 15, 2024 11:16 pm UTC

An actual laboratory rat, who is intrigued.

Dismay and scorn ripped through scientists' social media networks Thursday as several egregiously bad AI-generated figures circulated from a peer-reviewed article recently published in a reputable journal. Those figures—which the authors acknowledge in the article's text were made by Midjourney—are all uninterpretable. They contain gibberish text and, most strikingly, one includes an image of a rat with grotesquely large and bizarre genitals, as well as a text label of "dck."

AI-generated Figure 1 of the paper. This image is supposed to show spermatogonial stem cells isolated, purified, and cultured from rat testes.

The article in question is titled "Cellular functions of spermatogonial stem cells in relation to JAK/STAT signaling pathway," which was authored by three researchers in China, including the corresponding author Dingjun Hao of Xi’an Honghui Hospital. It was published online Tuesday in the journal Frontiers in Cell and Developmental Biology.

Frontiers did not immediately respond to Ars' request for comment, but we will update this post with any response.

Figure 2 is supposed to be a diagram of the JAK-STAT signaling pathway.

But the rat's package is far from the only problem. Figure 2 is less graphic but equally mangled. While it's intended to be a diagram of a complex signaling pathway, it instead is a jumbled mess. One scientific integrity expert questioned whether it provided an overly complicated explanation of "how to make a donut with colorful sprinkles." Like the first image, the diagram is rife with nonsense text and baffling images. Figure 3 is no better, offering a collage of small circular images that are densely annotated with gibberish. The image is supposed to provide visual representations of how the signaling pathway from Figure 2 regulates the biological properties of spermatogonial stem cells.

Some scientists online questioned whether the article's text was also AI-generated. One user noted that AI detection software determined that it was likely to be AI-generated; however, as Ars has reported previously, such software is unreliable.

Figure 3 is supposed to show the regulation of biological properties of spermatogonial stem cells by JAK/STAT signaling pathway.

The images, while egregious examples, highlight a growing problem in scientific publishing. A scientist's success relies heavily on their publication record, with a large volume of publications, frequent publishing, and articles appearing in top-tier journals, all of which earn scientists more prestige. The system incentivizes less-than-scrupulous researchers to push through low-quality articles, which, in the era of AI chatbots, could potentially be generated with the help of AI. Researchers worry that the growing use of AI will make published research less trustworthy. As such, research journals have recently set new authorship guidelines for AI-generated text to try to address the problem. But for now, as the Frontiers article shows, there are clearly some gaps.
