Interactive PowerPoint Presentation about Clinical Trials

  • Format: Lessons
  • Language(s): English
  • Target audience: Further education, self-directed learning
  • Difficulty: Introductory

Key Concepts addressed

  • 2-1a Comparison groups should be similar
  • 2-1d People should not know which treatment they get
  • 1-1a Treatments can harm
  • 1-2e Comparisons are needed to identify treatment effects

An interactive PowerPoint presentation for people thinking about participating in a clinical trial or interested in learning about them.

The European Communication on Research Awareness Needs (ECRAN) Project has created two lay-friendly resources explaining clinical trials for people who want to know more about them:

  • a 5-minute animated film available in 23 languages, and
  • an interactive PowerPoint presentation for people considering whether to participate in clinical trials.

In this video, Iain Chalmers introduces the slides, which can be downloaded below.

The slide presentation uses a PowerPoint slide show file format with macros enabled.

  • The PPS file (PowerPoint not needed)
  • The PPT file (PowerPoint needed)

“Clinical trials” is a PowerPoint slide presentation for patients or a lay audience. The presentation covers:

  • What are clinical trials?
  • Why are clinical trials important?
  • Are you considering enrolling in a clinical trial?
  • The clinical trial process
  • Informed consent
  • Rights and protections
  • Trials registers

The format works well as a self-directed presentation or as a teaching aid. Interactive self-test questions along the way make sure that you’ve been paying attention!

Another great feature of this resource is that it is pitched at the right level: it doesn’t overload you with too much information, yet the authors have managed to cover all the bases.



Introduction to Clinical Research Methodology

Oct 16, 2013



Presentation Transcript

Introduction to Clinical Research Methodology • Introduction • Overview of the Scientific Method • Criteria Supporting the Causal Nature of an Association • Outline of Available Research Designs

From The Book of Daniel, Chapter One 12 Try thy servants, I beseech thee, ten days; and let them give us pulse (leguminous plants) to eat and water to drink... 13 Then let our countenances be looked upon before thee; and the countenances of the youths who eat of the king’s food... 14 So, he harkened unto them and tried them in this matter, and tried them ten days... 15 And at the end of ten days their countenances appeared fairer, and they were fatter in the flesh, than all of the youths that did eat of the king’s food.

Galen, Second Century All who drink of this remedy recover in a short time, except those whom it does not help, who all die. Therefore it is obvious that it fails only in incurable cases.

Lind’s Treatise on Scurvy, Part 1 ... I took twelve patients... (with) scurvy... Their cases were as similar as I could have them... They lay together in one place and had one diet common to all. Two of these were ordered each a quart of cyder a day. Two others took twenty-five drops of elixir of vitriol three times a day upon an empty stomach. Two others took two spoonfuls of vinegar three times a day... Two of the worst patients were put upon a course of seawater. Of this they drank half a pint every day. Two others had each two oranges and one lemon given them every day. C.P. Stewart and D. Guthrie, Eds. Edinburgh University Press, 1953.

Lind’s Treatise on Scurvy, Part 2 The two remaining patients took an electuary recommended by a hospital surgeon, made of garlic, mustard, balsam of Peru and myrrh. The consequence was that the most sudden and visible good effects were perceived from the use of oranges and lemons; one of those who had taken them being at the end of six days fit for duty. The other was the best recovered of any in his condition and was appointed nurse to the rest of the sick. C.P. Stewart and D. Guthrie, Eds. Edinburgh University Press, 1953.

Definition: Epidemiology • Epidemiology is the study of the distribution and determinants of health and disease in populations, and is the basic science underlying much of public health and preventive medicine.

Definition: Clinical Epidemiology • Clinical Epidemiology extends the principles of epidemiology to the critical evaluation of diagnostic and therapeutic modalities in clinical practice.

Definition: Biostatistics • Biostatistics is concerned with the development of statistical theory and methods, and their application to the biomedical sciences.

Types of Associations between Factors Under Study • None (independent) • Artifactual association (spurious or false association) • Chance (unsystematic variation) • Bias (systematic variation) • Indirect association • Causal association (direct association)

Overview of the Scientific Method: Study Sample → Statistical Inference → Conclusion About a Population (Association) → Biological Inference → Conclusion About Scientific Theory (Causation)

Criteria Supporting the Causal Nature of an Association • Coherence with existing information • Time sequence • Specificity • Consistency • Strength • Quantitative strength • Dose-response relationship • Study design

Options in Research Design • Analytic Studies • Experimental Study • Prospective Cohort Study • Retrospective Cohort Study • Case-Control Study • Descriptive Studies • Analyses of Secular Trends • Case Series • Case Reports

Case Report • Definition • A clinical description of a single patient • Use • Hypothesis generation • Limitation • Generalizability - patient may be atypical

Case Series • Definition • A clinical description of a number of patients with a disease • Use • Characterization of the illness • Limitation • No control group: cannot determine which factors in the description are unique to the illness.

Analysis of Secular Trends • Definition • A study comparing geographic and/or time trends of an illness to trends in risk factors • Use • Rapid and easy support for or disproof of hypotheses • Limitation • Cannot differentiate among those hypotheses consistent with the data

Case-Control Study • Definition • A study comparing diseased patients to non-diseased patients, looking for differences in risk factors • Use • The study of any number of risk factors or etiologies for a single disease, especially a relatively rare disease • Limitation • Certain specific biases must be avoided, e.g., historically obtained data must be complete and accurate

Cohort Study • Definition • A study comparing patients with a risk factor/exposure to others without the risk factor/exposure for differences in outcome. • Use • The study of any of a number of outcomes from a single risk factor/exposure • Limitation • Prolonged and costly

Case-Control Studies vs. Cohort Studies: the 2 × 2 table

                          Disease present (cases)   Disease absent (controls)
  Factor present (exposed)          A                          B
  Factor absent (not exposed)       C                          D

Case-control studies sample by disease status (the columns); cohort studies sample by exposure status (the rows).
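The 2 × 2 table above maps directly onto the two standard effect measures: the odds ratio (AD/BC), which is what a case-control study can estimate, and the relative risk, which requires the incidence figures a cohort study provides. A minimal sketch in Python; the counts are hypothetical, chosen only to illustrate the arithmetic:

```python
def odds_ratio(a, b, c, d):
    """Odds ratio for a 2x2 table: (A/C) / (B/D) = AD / BC.
    The measure a case-control study can estimate, since it samples
    by disease status (columns of the table)."""
    return (a * d) / (b * c)

def risk_ratio(a, b, c, d):
    """Relative risk: incidence in exposed / incidence in unexposed.
    Valid in cohort studies, which sample by exposure status (rows)."""
    return (a / (a + b)) / (c / (c + d))

# Hypothetical counts: 30 exposed cases, 70 exposed controls,
# 10 unexposed cases, 90 unexposed controls.
print(odds_ratio(30, 70, 10, 90))  # (30*90)/(70*10) ≈ 3.86
print(risk_ratio(30, 70, 10, 90))  # (30/100)/(10/100) = 3.0
```

For a rare disease the odds ratio approximates the relative risk, which is why case-control studies of rare diseases remain informative despite not measuring incidence.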

Prospective vs. Retrospective Studies [timeline diagram: a prospective study follows subjects forward in time toward the events under study; a retrospective study looks back in time at events that have already occurred]

Experimental Study • Definition • A study in which the risk factor/exposure of interest is controlled by the investigator; randomization is generally used • Use • Most convincing demonstration of causality • Limitation • Logistic and ethical difficulties in its application to human studies

Options in Research Design-1 • Analytic Studies • Experimental Study • Prospective Cohort Study • Retrospective Cohort Study • Case-Control Study • Descriptive Studies • Analyses of Secular Trends • Case Series • Case Reports

Options in Research Design - 2 • Options in Directionality • Case-Control (case-history, case-referent, retrospective, trohoc) study • Cohort Study (follow-up, prospective) • Experimental Study (intervention trial, clinical trial) • Options in Timing • Retrospective study (retrolective, historical, non-concurrent) • Prospective study (prolective) • Cross-sectional study

Options in Research Design • Experimental Study • Prospective Cohort Study • Retrospective Cohort Study • Case-Control Study • Analyses of Secular Trends • Case Series • Case Reports

Sir Austin Bradford Hill: “All scientific work is incomplete -- whether it be observational or experimental. All scientific work is liable to be upset or modified by advancing knowledge. That does not confer upon us a freedom to ignore the knowledge we already have, or to postpone the action that it appears to demand at a given time. Who knows, asked Robert Browning, but the world may end tonight? True, but on available evidence most of us make ready to commute on the 8:30 the next day.” Proceedings of the Royal Society of Medicine, 1965;58:295.


Pediatr Investig. v.3(4); 2019 Dec

Clinical research study designs: The essentials

Ambika G. Chidambaram, Maureen Josephson

Children's Hospital of Philadelphia, Philadelphia, Pennsylvania, USA

In clinical research, our aim is to design a study that can derive a valid and meaningful scientific conclusion using appropriate statistical methods. The conclusions derived from a research study can either improve health care or result in inadvertent harm to patients. Hence, a well‐designed clinical research study is required, one that rests on a strong foundation of detailed methodology and is governed by ethical clinical principles. The purpose of this review is to provide readers an overview of the basic study designs and their applicability in clinical research.

Introduction

In clinical research, our aim is to design a study, which would be able to derive a valid and meaningful scientific conclusion using appropriate statistical methods that can be translated to the “real world” setting. 1 Before choosing a study design, one must establish aims and objectives of the study, and choose an appropriate target population that is most representative of the population being studied. The conclusions derived from a research study can either improve health care or result in inadvertent harm to patients. Hence, this requires a well‐designed clinical research study that rests on a strong foundation of a detailed methodology and is governed by ethical principles. 2

From an epidemiological standpoint, there are two major types of clinical study designs, observational and experimental. 3 Observational studies are hypothesis‐generating studies, and they can be further divided into descriptive and analytic. Descriptive observational studies provide a description of the exposure and/or the outcome, and analytic observational studies provide a measurement of the association between the exposure and the outcome. Experimental studies, on the other hand, are hypothesis testing studies. They involve an intervention that tests the association between the exposure and the outcome. Each study design is different, so it is important to choose the design that most appropriately answers the question in mind and provides the most valuable information. We will review each study design in detail (Figure 1).

Figure 1 Overview of clinical research study designs

Observational study designs

Observational studies ask the following questions: what, who, where, and when. Many study designs fall under the umbrella of observational study designs, including case reports, case series, ecologic studies, cross‐sectional studies, cohort studies and case‐control studies (Figure 2).

Figure 2 Classification of observational study designs

Case reports and case series

Every now and then during clinical practice, we come across a case with an atypical or ‘out of the norm’ clinical presentation. Such a presentation is usually described in a case report, which provides a detailed and comprehensive description of the case. 4 It is one of the earliest forms of research and gives the investigator an opportunity to describe the observations that make a case unique. No inferences can be drawn from a case report, and its findings cannot be generalized to the population, which is a limitation. More often than not, a series of case reports describing an atypical presentation in a group of patients makes up a case series. This in turn raises the question of a new disease entity and prompts the investigator to pursue mechanistic investigations further. However, in a case series the cases are not compared to subjects without the manifestations, so it cannot determine which factors in the description are unique to the new disease entity.

Ecologic study

Ecological studies are observational studies that describe the characteristics of a population group, that is, characteristics attributed to all individuals within the group. For example, Prentice et al 5 measured the incidence of breast cancer and per capita intake of dietary fat, and found that higher per capita intake of dietary fat was correlated with an increased incidence of breast cancer. But the study cannot tell us which specific subjects with breast cancer had a higher dietary intake of fat. Thus, one limitation of ecologic study designs is that the characteristics are attributed to the whole group, so individual characteristics remain unknown.

Cross‐sectional study

Cross‐sectional studies are study designs used to evaluate an association between an exposure and an outcome at the same point in time. They can be classified as either descriptive or analytic, depending on the question being answered by the investigator. Since cross‐sectional studies collect information at a single point in time, they provide an opportunity to measure the prevalence of the exposure or the outcome. For example, a cross‐sectional study design was adopted to estimate the global need for palliative care for children, based on a representative sample of countries from all regions of the world and all World Bank income groups. 6 The limitation of the cross‐sectional study design is that a temporal association cannot be established, as the information is collected at a single point in time. If a study involves a questionnaire, the investigator can ask questions about the onset of symptoms or risk factors in relation to the onset of disease, which helps in obtaining a temporal sequence between the exposure and the outcome. 7
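Because a cross-sectional study observes everyone at one point in time, its natural estimate is a point prevalence: cases divided by sample size. A minimal sketch of that calculation, with a simple normal-approximation 95% confidence interval; the survey numbers are hypothetical:

```python
import math

def prevalence_ci(cases, n, z=1.96):
    """Point prevalence from a single cross-sectional sample,
    with a normal-approximation (Wald) confidence interval,
    clipped to the valid [0, 1] range."""
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)          # standard error of a proportion
    return p, max(0.0, p - z * se), min(1.0, p + z * se)

# Hypothetical survey: 120 of 2000 respondents have the outcome today.
p, lower, upper = prevalence_ci(120, 2000)
print(f"prevalence = {p:.3f}, 95% CI ({lower:.3f}, {upper:.3f})")
```

Note that this is prevalence, not incidence: without follow-up over time, a cross-sectional design cannot say how many new cases arise, which is exactly the temporal limitation described above.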

Case‐control study

Case‐control studies are study designs that compare two groups, the subjects with disease (cases) and the subjects without disease (controls), looking for differences in risk factors. 8 This design is used to study risk factors or etiologies for a disease, especially a relatively rare disease. Thus, case‐control studies can also be hypothesis testing studies; they can suggest a causal relationship but cannot prove it. They are less expensive and less time‐consuming than cohort studies (described in section “Cohort study”). An example of a case‐control study was performed in Pakistan, evaluating the risk factors for neonatal tetanus. The investigators retrospectively reviewed a defined cohort for cases with and without neonatal tetanus 9 and found a strong association between the application of ghee (clarified butter) and neonatal tetanus. Although this suggests a causal relationship, cause cannot be proven by this methodology (Figure 3).

Figure 3 Case‐control study design

One of the limitations of case‐control studies is that they cannot estimate the prevalence of a disease accurately, since only a proportion of the cases and controls are studied. Case‐control studies are also prone to biases such as recall bias, as the subjects provide information from memory: the subjects with disease are more likely to remember the presence of risk factors than the subjects without disease.

One aspect that is often overlooked is the selection of cases and controls. It is important to select the cases and controls appropriately to obtain a meaningful and scientifically sound conclusion, and this can be achieved by implementing matching. Matching is defined by Gordis et al as ‘the process of selecting the controls so that they are similar to the cases in certain characteristics such as age, race, sex, socioeconomic status and occupation’. 7 This helps identify risk factors or probable etiologies that are not due to differences between the cases and controls.

Cohort study

Cohort studies are study designs that compare two groups, the subjects with an exposure/risk factor and the subjects without it, for differences in the incidence of the outcome/disease. Most often, cohort study designs are used to study outcome(s) of a single exposure/risk factor. Thus, cohort studies can also be hypothesis testing studies; they can suggest a causal relationship between an exposure and a proposed outcome but cannot establish it (Figure 4).

Figure 4 Cohort study design

Cohort studies can be classified as prospective or retrospective. 7 Prospective cohort studies follow subjects from the presence of the risk factor/exposure to the development of the disease/outcome; this can take years, so such studies are time consuming and expensive. Retrospective cohort studies, on the other hand, identify a population with and without the risk factor/exposure from past records and then assess whether the disease/outcome had developed by the time of the study. Thus, the study designs for prospective and retrospective cohort studies are similar, as both compare populations with and without the exposure/risk factor for development of the outcome/disease.

Cohort studies are typically chosen as a study design when the suspected exposure is known and rare, and the incidence of disease/outcome in the exposure group is suspected to be high. The choice between prospective and retrospective cohort study design would depend on the accuracy and reliability of the past records regarding the exposure/risk factor.

Some of the biases observed with cohort studies include selection bias and information bias. Some individuals who have the exposure may refuse to participate in the study or would be lost to follow‐up, and in those instances, it becomes difficult to interpret the association between an exposure and outcome. Also, if the information is inaccurate when past records are used to evaluate for exposure status, then again, the association between the exposure and outcome becomes difficult to interpret.

Case‐control studies based within a defined cohort

Case‐control studies based within a defined cohort are a form of study design that combines some features of a cohort study and a case‐control study. In this design, all the baseline information (interviews, surveys, blood or urine specimens) is collected before the onset of disease, and the cohort is then followed for onset of disease. One advantage of this design is that it eliminates recall bias, as the information regarding risk factors is collected before the onset of disease. Case‐control studies based within a defined cohort can be further classified into two types: the nested case‐control study and the case‐cohort study.

Nested case‐control study

A nested case‐control study consists of defining a cohort with suspected risk factors and assigning a control within a cohort to the subject who develops the disease. 10 Over a period, cases and controls are identified and followed as per the investigator's protocol. Hence, the case and control are matched on calendar time and length of follow‐up. When this study design is implemented, it is possible for the control that was selected early in the study to develop the disease and become a case in the latter part of the study.

Case‐cohort Study

A case‐cohort study is similar to a nested case‐control study except that there is a defined sub‐cohort that forms the group of individuals without the disease (controls), and the cases are not matched on calendar time or length of follow‐up with the controls. 11 With these modifications, it is possible to compare different disease groups with the same sub‐cohort group of controls, and matching between case and control is eliminated. However, these differences need to be accounted for during analysis of the results.

Experimental study design

The basic concept of experimental study design is to study the effect of an intervention. In this study design, the risk factor/exposure of interest/treatment is controlled by the investigator. Therefore, these are hypothesis testing studies and can provide the most convincing demonstration of evidence for causality. As a result, the design of the study requires meticulous planning and resources to provide an accurate result.

Experimental study designs can be classified into two groups: controlled (with comparison) and uncontrolled (without comparison). 1 In an uncontrolled study, the outcome is directly attributed to the treatment received in one group. This fails to establish whether the outcome was truly due to the intervention or due to chance. A controlled design avoids this by including a group that does not receive the intervention (the control group) alongside the group that does (the intervention/experimental group), and therefore provides a more accurate and valid conclusion.
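Since the defining feature of an experimental study is that the investigator controls the exposure, the simplest mechanism for doing so fairly is randomization. A minimal sketch of simple 1:1 allocation; the subject IDs and fixed seed are illustrative only (real trials typically use block or stratified randomization to guarantee balance):

```python
import random

def randomize(subject_ids, seed=42):
    """Simple 1:1 randomization: shuffle the subject list with a
    seeded RNG, then split into two arms of (near-)equal size."""
    rng = random.Random(seed)       # fixed seed makes the allocation reproducible
    ids = list(subject_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"intervention": ids[:half], "control": ids[half:]}

arms = randomize(range(1, 21))      # 20 hypothetical subjects
print(len(arms["intervention"]), len(arms["control"]))  # 10 10
```

Because assignment depends only on the random shuffle, known and unknown confounders are expected to balance across the two arms, which is what makes the experimental design the most convincing demonstration of causality.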

Experimental study designs can be divided into three broad categories: clinical trials, community trials, and field trials. The specifics of each study design are explained below (Figure 5).

Figure 5 Experimental study designs

Clinical trial

Clinical trials, also known as therapeutic trials, involve subjects with disease who are placed in different treatment groups. They are considered the gold standard approach for epidemiological research. One of the earliest clinical trials was performed by James Lind in 1747 on sailors with scurvy. 12 Lind divided twelve scorbutic sailors into six groups of two. Each group received the same diet, supplemented with a quart of cider (group 1), twenty‐five drops of elixir of vitriol, i.e. sulfuric acid (group 2), two spoonfuls of vinegar (group 3), half a pint of seawater (group 4), two oranges and one lemon (group 5), or a spicy paste plus a drink of barley water (group 6). The group that ate two oranges and one lemon showed the most sudden and visible clinical improvement and was declared fit for duty at the end of 6 days. Lind's findings were not accepted in his time, but a repetition 47 years later across an entire fleet of ships produced similar results, and in 1795 lemon juice was made a required part of sailors' diets. Clinical trials can thus be used to evaluate new therapies, such as a new drug or new indication, a new drug combination, a new surgical procedure or device, a new dosing schedule or mode of administration, or a new preventive therapy.

While designing a clinical trial, it is important to select a sample that is representative of the general population, so that the results obtained from the study can be generalized to the population from which the sample was drawn. It is equally important to select appropriate endpoints. Endpoints need to be well‐defined, reproducible, clinically relevant and achievable. Endpoint types include continuous, ordinal, rates and time‐to‐event, and they are typically classified as primary, secondary or tertiary. 2 An ideal endpoint is a purely clinical outcome, for example cure or survival, but trials built around such endpoints become very long and expensive. Therefore, surrogate endpoints that are biologically related to the ideal endpoint are often used instead. Surrogate endpoints need to be reproducible, easily measured, related to the clinical outcome, affected by treatment and observable earlier than the clinical outcome. 2

Clinical trials are further divided into randomized clinical trials, non‐randomized clinical trials, cross‐over clinical trials and factorial clinical trials.

Randomized clinical trial

A randomized clinical trial is also known as a parallel‐group randomized trial or randomized controlled trial. Randomized clinical trials involve randomizing subjects with similar characteristics into two (or more) groups: one group receives the intervention/experimental therapy and the other receives a placebo (or the standard of care). 13 Randomization is typically performed using computer software, manually, or by other methods. Because subjects have been randomized to their respective groups with similar baseline characteristics, the outcomes and efficacy of the intervention/experimental therapy can be measured without selection bias. This study design is considered the gold standard for epidemiological research. However, it is generally not applicable to rare and serious disease processes, as it would be unethical to treat that group with a placebo. Please see section “Randomization” for a detailed explanation of randomization and placebo.

Non‐randomized clinical trial

A non‐randomized clinical trial selects controls without randomization. With this type of study design a pattern is usually adopted, such as selecting subjects and controls on certain days of the week. Depending on the approach adopted, the selection of subjects becomes predictable; the resulting bias in the selection of subjects and controls calls the validity of the results into question.

Historically controlled studies can be considered a subtype of non‐randomized clinical trial. In this subtype, controls are drawn from the past, such as from medical records and published literature. 1 The advantages of this design include being cost‐effective, time‐saving and easily accessible. However, because it depends on data already collected from different sources, the information obtained may be inaccurate or unreliable and may lack uniformity and completeness. Though historically controlled studies may be easier to conduct, these disadvantages need to be taken into account while designing a study.

Cross‐over clinical trial

In a cross‐over clinical trial, two groups undergo the same intervention/experiment at different time periods of the study; each group serves as the control while the other group is undergoing the intervention/experiment. 14 Depending on the intervention/experiment, a ‘washout’ period is recommended to eliminate residual effects of the intervention/experiment when the experimental group transitions to being the control group. The outcomes of the intervention/experiment therefore need to be reversible; this design is not possible if, for example, the subject is undergoing a surgical procedure.

Factorial trial

A factorial trial design is adopted when the researcher wishes to test two different drugs with independent effects on the same population. Typically, the population is divided into 4 groups: the first receives drug A, the second drug B, the third both drugs A and B, and the fourth neither drug. The outcomes for drug A are then assessed by comparing the groups that received drug A (alone or with drug B) against those that did not (drug B alone or neither drug), and similarly for drug B. 15 The advantages of this design are that it saves time and allows two different drugs to be studied in the same population at the same time. However, it is not applicable if the two drugs or interventions overlap in their modes of action or effects, as the results obtained could not then be attributed to a particular drug or intervention.
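As an illustrative sketch (ours, not from the source), the 2 × 2 group structure of a factorial trial can be enumerated in a few lines of Python; the arm labels and drug names are hypothetical:

```python
from itertools import product

# Enumerate the four arms of a 2x2 factorial design:
# every combination of receiving drug A (yes/no) and drug B (yes/no).
arms = [{"drug_A": a, "drug_B": b} for a, b in product([True, False], repeat=2)]

# To assess drug A, compare all subjects who received A (alone or with B)
# against all who did not (B alone or neither drug). This comparison is
# only valid when the two drugs act independently, as the text notes.
received_A = [arm for arm in arms if arm["drug_A"]]
no_A = [arm for arm in arms if not arm["drug_A"]]
```

The same pooling applies symmetrically to drug B, which is why a factorial trial answers two questions with one study population.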

Community trial

Community trials, also known as cluster‐randomized trials, involve groups of individuals with and without disease who are assigned to different intervention/experiment groups. Groups of individuals from a certain area, such as a town or city, or from a certain setting, such as a school or college, undergo the same intervention/experiment. 16 Results are thereby obtained at a larger scale, but the design cannot account for inter‐individual and intra‐individual variability.

Field trial

Field trials, also known as preventive or prophylactic trials, place subjects without the disease into different preventive intervention groups. 16 A hypothetical example of a field trial would be to randomly assign a healthy population to groups, provide one group with an intervention such as a vitamin, and follow through to measure certain outcomes. The subjects are then monitored over a period of time for the occurrence of a particular disease process.

Overview of methodologies used within a study design

Randomization

Randomization is a well‐established methodology adopted in research to prevent bias due to subject selection, which may affect the result of the intervention/experiment being studied. It is one of the fundamental principles of experimental study design and ensures scientific validity. It prevents anyone from predicting which group a subject will be assigned to and therefore removes selection bias from the final results. It also ensures comparability between groups, as most baseline characteristics are similar after randomization, which allows the results for the intervention/experiment group to be interpreted without bias.

There are various ways to randomize, from something as simple as a ‘flip of a coin’ to computer software and statistical methods. Three types of randomization are commonly described: simple randomization, block randomization and stratified randomization.

Simple randomization

In simple randomization, subjects are allocated to experiment/intervention groups at random with a constant probability. That is, if there are two groups A and B, each subject has a 0.5 probability of being allocated to either group. This can be performed in multiple ways, from something as simple as a ‘flip of a coin’ to using random number tables. 17 The advantage of this methodology is that it eliminates selection bias. The disadvantage is that imbalances can arise both in the number of subjects allocated to each group and in the prognostic factors between groups, which makes it more challenging in studies with a small sample size.
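As a minimal sketch of the idea (ours, not from the source), simple randomization with probability 0.5 per subject might look like this in Python; the function name and arm labels are hypothetical:

```python
import random

def simple_randomize(n_subjects, seed=None):
    """Allocate each subject independently to arm 'A' or 'B' with
    probability 0.5 -- the computational equivalent of a coin flip."""
    rng = random.Random(seed)
    return [rng.choice(["A", "B"]) for _ in range(n_subjects)]

allocation = simple_randomize(10, seed=42)
# Note: nothing forces the two arms to end up the same size. With a small
# sample, an imbalance such as 7 vs 3 is quite possible, which is the
# disadvantage discussed above.
```

Fixing a seed makes the allocation sequence reproducible for audit, without making it predictable to anyone who does not hold the seed.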

Block randomization

In block randomization, subjects with similar characteristics are grouped into blocks. The aim of block randomization is to balance the number of subjects allocated to each experiment/intervention group. For example, if there are four subjects in each block, two of the four will be randomly allotted to each group, so each block contributes two subjects to one group and two to the other. 17 The disadvantage of this methodology is that a component of predictability remains in the selection of subjects, and prognostic factors are not randomized. However, it helps to keep the experiment/intervention groups balanced.
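A hypothetical permuted‐block implementation of the example above (blocks of 4, two arms), offered as a sketch rather than a validated randomization tool:

```python
import random

def block_randomize(n_subjects, block_size=4, seed=None):
    """Permuted-block randomization: each block contains an equal number of
    'A' and 'B' assignments in random order, so after every complete block
    the two arms are exactly balanced."""
    if block_size % 2 != 0:
        raise ValueError("block_size must be even for two arms")
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_subjects:
        # Build one balanced block (e.g. A A B B) and shuffle its order.
        block = ["A"] * (block_size // 2) + ["B"] * (block_size // 2)
        rng.shuffle(block)
        allocation.extend(block)
    return allocation[:n_subjects]

alloc = block_randomize(12, block_size=4, seed=1)
# With 12 subjects and blocks of 4, each arm receives exactly 6 subjects.
```

The residual predictability the text mentions is visible here: once three assignments in a block are known, the fourth is determined.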

Stratified randomization

In stratified randomization, subjects are first divided into strata defined by covariates. 18 For example, a prognostic factor such as age can be used as a covariate, and the study population is then randomized to experiment/intervention groups within each age group. The advantage of this methodology is that it ensures comparability between experiment/intervention groups and thus makes the analysis of results more efficient. However, the covariates must be measured and determined before the randomization process, and the sample size helps determine how many strata can be chosen for a study.
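A sketch of stratified randomization, assuming an age band as the pre‐measured covariate; the subject data, age threshold, and function names are hypothetical illustrations, not part of the source:

```python
import random

def stratified_randomize(subjects, stratum_of, seed=None):
    """Stratified randomization: partition subjects by a covariate measured
    before randomization, then balance arms within each stratum using
    permuted blocks of 2 (one 'A' and one 'B' per block)."""
    rng = random.Random(seed)
    strata = {}
    for subj in subjects:
        strata.setdefault(stratum_of(subj), []).append(subj)
    allocation = {}
    for members in strata.values():
        arms = []
        while len(arms) < len(members):
            block = ["A", "B"]
            rng.shuffle(block)
            arms.extend(block)
        for subj, arm in zip(members, arms):
            allocation[subj] = arm
    return allocation

# Hypothetical subjects as (id, age) pairs, stratified at age 50.
subjects = [("s1", 25), ("s2", 31), ("s3", 62), ("s4", 68)]
alloc = stratified_randomize(subjects, lambda s: s[1] < 50, seed=7)
# Each age stratum contributes one subject to each arm.
```

Note that the covariate function must be computable before allocation, reflecting the requirement in the text that covariates be measured before the randomization process.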

Blinding

Blinding is a methodology adopted in a study design whereby information about group allocation is intentionally withheld from the subject participants, investigators and/or data analysts. 19 The purpose of blinding is to decrease the influence that knowledge of being in a particular group can have on the study result. There are 3 forms of blinding: single‐blinded, double‐blinded and triple‐blinded. 1 In single‐blinded studies, the subject participants are not told which group they have been allocated to, while the investigator and data analyst are aware of the allocation. In double‐blinded studies, both the study participants and the investigator are unaware of the group allocation. Double‐blinded studies are typically used in clinical trials to test the safety and efficacy of drugs. In triple‐blinded studies, the subject participants, investigators and data analysts are all unaware of the group allocation. Triple‐blinded studies are more difficult and expensive to design, but the results exclude confounding effects arising from knowledge of group allocation.

Blinding is especially important in studies where subjective responses are the outcomes, because such responses can be modified by knowledge of the group one is in. For example, subjects allocated to the non‐intervention group may not feel better because they know they are not getting the treatment, or an investigator may pay more attention to the group receiving treatment, potentially affecting the final results. However, certain treatments cannot be blinded, such as surgeries, or interventions whose effect must be assessed openly, such as quitting smoking.

Placebo

A placebo is defined in the Merriam‐Webster dictionary as ‘an inert or innocuous substance used especially in controlled experiments testing the efficacy of another substance (such as a drug)’. 20 A placebo is typically used in a clinical research study to evaluate the safety and efficacy of a drug/intervention, and is especially useful if the outcome measured is subjective. In clinical drug trials, a placebo is typically a preparation that resembles the drug to be tested in characteristics such as color, size, shape and taste, but without the active substance. This makes it possible to measure the effects of merely taking a drug, such as pain relief, against the effects of the drug with the active substance. If the effect is positive, for example an improvement in mood/pain, it is called the placebo effect; if the effect is negative, for example a worsening of mood/pain, it is called the nocebo effect. 21

The ethics of placebo‐controlled studies is complex and remains debated in the medical research community. According to the Declaration of Helsinki statement on the use of placebo, released in October 2013, “The benefits, risks, burdens and effectiveness of a new intervention must be tested against those of the best proven intervention(s), except in the following circumstances:

Where no proven intervention exists, the use of placebo, or no intervention, is acceptable; or

Where for compelling and scientifically sound methodological reasons the use of any intervention less effective than the best proven one, the use of placebo, or no intervention is necessary to determine the efficacy or safety of an intervention and the patients who receive any intervention less effective than the best proven one, placebo, or no intervention will not be subject to additional risks of serious or irreversible harm as a result of not receiving the best proven intervention.

Extreme care must be taken to avoid abuse of this option”. 22

Hence, while designing a research study, both the scientific validity and ethical aspects of the study will need to be thoroughly evaluated.

Bias

Bias has been defined as “any systematic error in the design, conduct or analysis of a study that results in a mistaken estimate of an exposure's effect on the risk of disease”. 23 There are multiple types of bias; in this review we focus on selection bias, information bias and observer bias. Selection bias is a systematic error committed while selecting subjects for the study. It affects the external validity of the study if the study subjects are not representative of the population being studied, in which case the results of the study will not be generalizable. It affects the internal validity of the study if the selection of study subjects in each group is influenced by certain factors, for example by the treatment assigned to the group. Ways to decrease selection bias include selecting a study population that is representative of the population being studied, and randomization (discussed in section “Randomization”).

Information bias is a systematic error committed while obtaining data from the study subjects. It can take the form of recall bias when subjects are required to remember certain events from the past; typically, subjects with the disease remember such events better than subjects without the disease. Observer bias is a systematic error in which the study investigator is influenced by certain characteristics of the group, for example paying closer attention to the group receiving the treatment than to the group not receiving it, which may influence the results of the study. One way to decrease observer bias is blinding (discussed in section “Blinding”).

Thus, while designing a study it is important to take measures to limit bias as much as possible so that the scientific validity of the study results is preserved.

Overview of drug development in the United States of America

Having reviewed the various study designs, we note that clinical trials form a major part of drug development. In the United States, the Food and Drug Administration (FDA) plays an important role in getting a drug approved for clinical use, through a robust process involving four different phases before a drug can be made available to the public.

Phase I is conducted to determine a safe dose. The study subjects are normal volunteers and/or subjects with the disease of interest, and the sample size is typically small, not more than 30 subjects. The primary endpoints are toxicity and adverse events.

Phase II is conducted to evaluate the safety of the dose selected in Phase I, to collect preliminary information on efficacy, and to determine factors needed to plan a randomized controlled trial. The study subjects have the disease of interest, and the sample size is small but larger than in Phase I (40–100 subjects). The primary endpoint is the measure of response.

Phase III is conducted as the definitive trial to prove efficacy and establish the safety of a drug. Phase III studies are randomized controlled trials and, depending on the drug being studied, can be placebo‐controlled, equivalence, superiority or non‐inferiority trials. The study subjects have the disease of interest, and the sample size is typically large, on the order of 300 to 3000 subjects.

Phase IV is performed after a drug is approved by the FDA and is also called the post‐marketing clinical trial. This phase is conducted to evaluate new indications, to determine safety and efficacy in long‐term follow‐up, and to assess new dosing regimens. It helps to detect rare adverse events that would not be picked up during Phase III studies while reducing the delay in releasing the drug to the market; it therefore depends heavily on voluntary reporting of side effects and/or adverse events by physicians, non‐physicians and drug companies. 2

We have discussed various clinical research study designs in this comprehensive review. Though many designs are available, one must consider the ethical aspects of each study. Hence, each study requires thorough review of its protocol by the institutional review board before approval and implementation.


Chidambaram AG, Josephson M. Clinical research study designs: The essentials . Pediatr Invest . 2019; 3 :245‐252. 10.1002/ped4.12166 [ CrossRef ] [ Google Scholar ]

NIMH Clinical Research Toolbox

The NIMH Clinical Research Toolbox serves as an information repository for NIMH staff and the clinical research community, particularly those receiving NIMH funding. The Toolbox contains resources such as NIH and NIMH policy and guidance documents, templates, sample forms, links to additional resources, and other materials to assist clinical investigators in the development and conduct of high-quality clinical research studies.

Use of these templates and forms is optional; the resources can be used as-is or customized to serve study team needs. In cases where institutions provide research teams with institution-specific templates and forms for clinical research documentation, NIMH expects researchers to follow their institutional policies for document use. Nevertheless, the materials on this page can be consulted to assure that study teams are meeting NIMH expectations.

Protocol Templates


NIMH encourages investigators to consider using one of the protocol templates below when developing a clinical research protocol. In cases where an institutional review board (IRB) has a recommended or required protocol template, reviewing the documents included below is still suggested as there may be sections that a study team may opt to include in an effort to develop a comprehensive research protocol.

NIH has developed a Clinical e-Protocol Writing Tool  to support the collaborative writing and review of protocols for behavioral and social sciences research involving humans, and of phase 2 and 3 clinical trial protocols that require a Food and Drug Administration (FDA) Investigational New Drug (IND) or Investigational Device Exemption (IDE) Application.

NIH-FDA Phase 2 and 3 IND/IDE Clinical Trial Protocol Template  

This clinical trial protocol template is a suggested format for Phase 2 and 3 clinical trials funded by NIH that are being conducted under a FDA IND or IDE Application.

Investigators for such trials are encouraged to use this template when developing protocols for NIH-funded clinical trial(s). This template may also be useful to others developing phase 2 and 3 IND/IDE clinical trials.

NIH Behavioral and Social Clinical Trials Template  

This clinical trial protocol template is a suggested format for behavioral or psychosocial clinical trials funded by NIH. Investigators for such studies are encouraged to use this template when developing protocols for NIH-funded clinical trial(s). This template may also be useful to others developing behavioral or psychosocial research studies.


NIMH Clinical Manual of Procedures (MOP) Template [Word]

This template provides a recommended structure for developing consistent instructions on study procedure implementation and data collection across participant and clinical site activities. It details the study’s organization, operations, procedures, data management, and quality control.

NIMH Clinical Monitoring Plan Template [Word]

This template provides a recommended structure for a plan to conduct internal or independent review of Good Clinical Practices (GCP), human subject safety, and data integrity throughout the lifecycle of a study.

Informed Consent Materials

Often study teams will be provided with informed consent form templates and guidance on requirements for the informed consent process by their institutions. Below are additional guidance and materials to support a thorough informed consent process.

Sample NDA Informed Consent Language

The NIMH Data Archive (NDA) receives de-identified human subjects data collected from hundreds of research projects across many scientific domains, and makes these data available to enable collaborative science. This NDA sample informed consent language for data sharing can be adapted when using one of the NDA platforms.

Regulatory Document Checklists by Study Type

The following checklists are intended to help the investigator community identify a set of core documents to be organized within a single study-specific folder, kept electronically, in hard copy, or in a mixture of both formats. NIMH encourages study teams to verify what additional documents, or alternative formats of the documents in the checklists, their institution and IRB require.

NIMH Regulatory Document Checklist for non-Clinical Trial Human Subjects Research [Word]

Study teams can use this checklist to compile essential documents for the conduct of an NIMH-funded human subjects research study that does not meet the NIH definition of a clinical trial.

NIMH Regulatory Document Checklist for Clinical Trials without Investigational Product [Word]

Study teams can use this checklist to compile essential documents for the conduct of an NIMH-funded, NIH-defined clinical trial that does not involve an investigational drug or device.

NIMH Regulatory Document Checklist for Human Subjects Research Clinical Trials with Investigational Product not under an FDA IND/IDE [Word]

Study teams can use this checklist to compile essential documents for the conduct of an NIMH-funded, NIH-defined clinical trial with an investigational drug or device that is not under an FDA IND or IDE.

NIMH Regulatory Document Checklist for a Study under an FDA IND or IDE [Word]

Study teams can use this checklist to compile essential documents for the conduct of an NIMH-funded, NIH-defined clinical trial or non-clinical-trial study with an investigational drug or device under an FDA IND or IDE.

Necessary Documents for Reportable Events

NIMH Reportable Events Log Template [Word]

This document provides a log template for documenting reportable events. The types of events that require reporting may vary by institution, IRB, sponsor, state, and other factors.

NIMH Study-Wide Protocol Deviation Log Template [Word]

This document provides a log template for tracking all protocol deviations/violations across a study.

NIMH Subject-Specific Protocol Deviation Log Template [Word]

This document provides a log template for tracking subject-specific protocol deviations/violations. If captured electronically, subject-specific deviation logs can be exported into a study-wide deviation log.

NIMH Study-Wide Adverse Events (AE) Log Template [Word]

This document provides a log template for tracking all adverse events (AEs), including serious adverse events (SAEs), across a study.

NIMH Subject-Specific Adverse Event (AE) Log Template [Word]

This document provides a log template for tracking adverse events (AEs), including serious adverse events (SAEs), for each subject. If captured electronically, subject-specific AE logs can be exported into an electronic study-wide AE log.
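Where subject-specific AE logs are kept electronically, the export into a study-wide log amounts to concatenating the per-subject rows with a subject identifier attached. The Python sketch below illustrates this; the column names and CSV layout are invented for illustration and are not an NIMH-specified format.

```python
import csv
import io

# Illustrative per-subject AE logs (in practice, one CSV file per subject).
# Column names here are assumptions for this sketch only.
subject_logs = {
    "S001": "ae_term,onset_date,serious\nheadache,2024-01-10,no\n",
    "S002": "ae_term,onset_date,serious\nnausea,2024-01-12,no\nfall,2024-02-01,yes\n",
}

def merge_ae_logs(logs):
    """Combine subject-specific AE logs into one study-wide log,
    prefixing each row with its subject ID."""
    study_wide = []
    for subject_id, text in sorted(logs.items()):
        for row in csv.DictReader(io.StringIO(text)):
            study_wide.append({"subject_id": subject_id, **row})
    return study_wide

merged = merge_ae_logs(subject_logs)
```

Keeping the per-subject logs as the source of truth and regenerating the study-wide view on demand avoids double data entry.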

Necessary Documents for Studies with Pharmacy/Investigational Product

FDA Form 1572 Statement of Investigator  

This FDA form should be signed by the investigator prior to study initiation to provide certain information to the sponsor and to assure that the investigator will comply with FDA regulations governing the conduct of a clinical investigation of an investigational drug or biologic.

NIMH Investigational Product (IP) Management Standard Operating Procedure (SOP) Template [Word]

This document provides a sample standard operating procedures (SOP) template to document how investigational product (IP) will be received, stored, monitored, labeled, dispensed, and destroyed.

NIMH Investigational Product Storage Temperature Log Template [Word]

This document provides a log template for recording daily storage temperatures for investigational product (IP).

NIMH Master Investigational Product Dispensing and Accountability Log Template [Word]

This document provides a log template for capturing all investigational product (IP) dispensed to and returned by participants for the duration of the study.

NIMH Subject-Specific Investigational Product Dispensation and Accountability Log Template [Word]

This document provides a log template for capturing all investigational product (IP) dispensed to an individual participant and returned by that participant. This log is typically placed in each subject’s study binder (study blind is maintained, if applicable).

Screening and Enrollment Logs and Materials

NIMH Participant Pre-Screening Log Template [Word]

This document provides a log template for all potential participants who have completed initial screening procedures (e.g., phone screens or internet screening surveys, typically conducted before written informed consent is signed). This log should capture the number of participants eligible for an official screening visit, as well as the number ineligible, with the reasons for ineligibility listed.
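When the pre-screening log is kept electronically, the counts it should capture can be tallied mechanically. The Python sketch below illustrates the idea; the record fields and ineligibility reasons are invented for illustration, not an NIMH-specified format.

```python
from collections import Counter

# Illustrative pre-screening records; field names are assumptions
# for this sketch only.
prescreens = [
    {"id": "P01", "eligible": True,  "reason": ""},
    {"id": "P02", "eligible": False, "reason": "outside age range"},
    {"id": "P03", "eligible": False, "reason": "excluded medication"},
    {"id": "P04", "eligible": False, "reason": "outside age range"},
    {"id": "P05", "eligible": True,  "reason": ""},
]

def summarize_prescreening(records):
    """Return the counts the log should capture: how many potential
    participants are eligible for an official screening visit, and
    the tallied ineligibility reasons for the rest."""
    eligible = sum(1 for r in records if r["eligible"])
    reasons = Counter(r["reason"] for r in records if not r["eligible"])
    return eligible, reasons

eligible_n, ineligible_reasons = summarize_prescreening(prescreens)
```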

NIMH Participant Enrollment Log Template [Word]

This document provides a log template for chronologically documenting the participants who have been enrolled in the study.

NIMH Inclusion/Exclusion Checklist Template [Word]

This document provides a sample checklist to customize according to protocol-specific eligibility criteria. A qualified and appropriately delegated study team member should sign and date the checklist to confirm eligibility once all criteria have been assessed. If criteria are assessed on different visit dates, the checklist should be reformatted to reflect which criteria are assessed at which visits, and who is responsible for assessing them.

NIMH Documentation of Informed Consent Template [Word]

This document provides a sample form template for documenting the informed consent process.

Additional Participant Tracking Logs and Materials

NIMH Concomitant Medication Log Template [Word]

This document provides a log template for recording each participant’s medications throughout the study. This log is typically reviewed at all subject study visits and is located in each participant’s study binder.

NIMH Research Sample Inventory/Tracking Log [Word]

This document provides a log template for tracking the collection and storage of research samples.

Staff Training and Administrative Tracking Logs and Materials

NIMH Good Clinical Practice (GCP) Training Log Template [Word]

This document provides a log template for documenting completion of Good Clinical Practice (GCP) training requirements. Note: all NIH-funded investigators and staff who are involved in the conduct, oversight, or management of clinical trials should be trained in Good Clinical Practice (GCP), consistent with the principles of the International Council for Harmonisation (ICH) E6(R2) guideline. Individual institutions may require GCP training regardless of funding source or clinical trial status.

NIMH Study Training Log Template [Word]

This document provides a log template for documenting staff trainings for study-specific procedures (e.g., diagnostic interview administration, study protocol adherence, phlebotomy, outcome measures, OSHA Bloodborne Pathogens).

NIMH Delegation of Authority Log Template [Word]

This document can be used to record all study staff members’ significant study-related duties, as delegated by the Principal Investigator (PI). Most studies opt to use a log format, such as the Delegation of Authority log, because it captures study staff on one page and includes space to document the addition or removal of specific study tasks for individual staff members.

NIMH Monitoring Visit Log Template [Word]

This document is typically completed by the clinical site monitor to document dates and purpose of clinical site monitoring visits.

NIMH Note to File (NTF) Template [Word]

This document provides a sample template for generating notes-to-file, which are written to acknowledge a discrepancy or problem with the study’s conduct, or for other administrative purposes (such as to document where study materials are stored).

On-Site Monitoring

Although NIMH expects grantees to provide adequate oversight of their clinical research, NIMH Program Officials may require additional on-site monitoring conducted by NIMH staff. Clinical monitoring helps ensure that the rights and well-being of human subjects are protected; that the reported clinical research study data are accurate, complete, and verifiable; and that the conduct of the study is in compliance with the study protocol, Good Clinical Practice (GCP), and the regulations of applicable agencies.

The NIMH Clinical Research Education, Support, and Training (CREST) Program provides ongoing educational and technical support from NIMH staff for clinical research project grants selected for consultation and/or site visits. The CREST Program aims to ensure that reported clinical research study data are accurate, complete, and verifiable; that the conduct of the study complies with the study protocol, Good Clinical Practice (GCP), and the regulations of applicable agencies; and that the rights and well-being of human subjects are protected, in accordance with 45 CFR 46 (Protection of Human Subjects) and, as applicable, 21 CFR part 50 (Protection of Human Subjects).

To promote clinical research that is compliant with GCP and human subject regulations, the CREST Program includes phone conversations, email consultation, and/or site visit(s) from NIMH staff, as needed, to assess and provide written feedback and recommendations on planned or ongoing clinical research protocols. Documents relating to the conduct of the clinical research, such as current IRB approved protocols, informed consent documents, source documents, and drug accountability records, as applicable, may be reviewed for compliance with applicable Federal regulations, and institutional and IRB policies.

Research project grants selected for inclusion in the CREST Program might include clinical research studies with “significantly-greater-than-minimal risk” to subjects (e.g., an intervention or invasive procedure with high potential for serious adverse events; see NIMH Risk-Based Monitoring Guidance); a study intervention under an FDA Investigational New Drug (IND) application or Investigational Device Exemption (IDE); or other studies identified by NIMH staff that may benefit from inclusion in CREST. CREST is separate and distinct from “for cause” audits of clinical research. Research grants may be included in CREST at any time during the study lifecycle, although projects are generally identified and selected for the program at the initiation of the grant.

NIMH Clinical Research Education Support and Training (CREST) Program Overview

This page provides a description of the NIMH CREST Program’s purpose, process for inclusion, and operating procedures.

Site Visits

NIMH Clinical Research Education, Support, and Training Program (CREST): Comprehensive Visit Report Template [Word]

This template provides a recommended structure for a CREST site visit report, as well as a sample matrix of regulatory criteria that CREST monitors look at while at site initiation visits (SIVs), interim monitoring visits (IMVs) and close out visits (COVs). It is to be used as a starting point for preparing for a CREST site visit or for writing a site visit report.

NIMH CREST Site Initiation Visit (SIV) Sample Agenda [Word]

This document provides a sample site initiation visit agenda to be customized by the Principal Investigator (PI) and site monitor prior to the visit.

Human Subjects Research

This section provides resources, including policy and guidance documents related to the conduct of human subject research. The resources included below represent those frequently of interest to NIMH investigators, specifically: overviews of human subject research, data and safety monitoring, human subject risk, reportable events, and recruitment. There are numerous other NIH webpages devoted to human subjects research; see Research Involving Human Subjects  , NIH Human Subjects Policies and Guidance  , and New Human Subjects and Clinical Trial Information Form  .

Human Subject Regulations Decision Charts 

The Office for Human Research Protections (OHRP) has developed graphic aids to help guide investigators in deciding if an activity is research involving human subjects that must be reviewed by an IRB under the requirements of the U.S. Department of Health and Human Services (HHS) regulations ( 45 CFR 46  ).

Human Subjects in Research: Things to Consider

This NIMH webpage presents items which investigators should pay particular attention to when proposing to use human subjects in NIMH-funded studies.

Human Subjects Risk

NIMH Guidance on Risk-Based Monitoring

This NIMH guidance clarifies risk-level definitions and NIMH’s monitoring expectations for mitigating those risks. It will assist study teams in determining the level of data and safety monitoring to establish for a study, based on the probability and magnitude of anticipated harm and discomfort.
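As a rough illustration of risk-based monitoring logic, the sketch below maps rated probability and magnitude of anticipated harm to a monitoring tier. The ratings, tiers, and mapping are hypothetical simplifications, not NIMH’s actual rubric; consult the guidance itself for the real definitions.

```python
def monitoring_level(probability_of_harm, magnitude_of_harm):
    """Map anticipated probability and magnitude of harm (each rated
    "low"/"moderate"/"high") to a monitoring tier. Both the ratings and
    the mapping are illustrative assumptions, not NIMH's actual rubric."""
    rank = {"low": 0, "moderate": 1, "high": 2}
    # Conservative rule for this sketch: the worse of the two ratings
    # drives the tier.
    score = max(rank[probability_of_harm], rank[magnitude_of_harm])
    return ["minimal risk",
            "greater than minimal risk",
            "significantly greater than minimal risk"][score]
```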

The policies, guidance, and documentation in this section outline NIMH expectations for data and safety monitoring of clinical trials. For human subject research that does not meet the criteria for NIH clinical trial designation, investigators may still include a data and safety monitoring plan (DSMP), for example in studies that pose significant risk to participants. The initial links below apply to all NIMH-funded clinical trials, while the second section provides documentation for clinical trials under the oversight of an NIMH-constituted data and safety monitoring board (DSMB).

All Clinical Trials

NIMH Policy Governing the Monitoring of Clinical Trials

This NIMH policy outlines NIH and NIMH expectations for data and safety monitoring of clinical trials. This policy also assures that the NIMH is notified by NIMH-funded researchers in a timely manner of all directives emanating from monitoring activities.

Guidance for Developing a Data and Safety Monitoring Plan for Clinical Trials Sponsored by NIMH

This guidance was created to aid investigators developing a data and safety monitoring plan (DSMP) to ensure the safety of research participants and to protect the validity and integrity of study data in clinical trials supported by NIMH. This guidance applies to data and safety monitoring for all NIMH-supported clinical trials (including grants, cooperative agreements, and contracts).

NIMH Policy Governing Independent Safety Monitors and Independent Data and Safety Monitoring Boards

This policy establishes expectations for the monitoring of NIMH-supported clinical trials by Independent Safety Monitors (ISMs) and/or independent data and safety monitoring boards (DSMBs) to assure the safety of research participants, regulatory compliance, and data integrity.

Trials Reviewed by an NIMH-Constituted DSMB

The materials below are for studies designated for review by an NIMH-constituted DSMB. Study teams developing materials for a study-constituted independent DSMB may benefit from reviewing the data report template and the protocol amendment memo.

NIMH Clinical Trials Operations Branch Liaison Orientation Letter [Word]

This letter provides an orientation to working with the NIMH Clinical Trials Operations Branch, which supports study teams reporting to the NIMH DSMB.

NIMH DSMB Reporting Guide Full Report Template [PDF]

This template provides a recommended structure for data reports used for DSMB review and oversight. The report template includes standard data tables. Study teams are encouraged to utilize this template as a starting point, and use, remove, and/or modify the existing tables as appropriate for the study under review.

NIMH DSMB Amendment Memo Template [Word]

This template may be used when submitting a study protocol or consent document amendment to the NIMH DSMB.

NIMH Reportable Events Policy

This policy outlines the expectations of NIMH-funded researchers relating to the submission of reportable events (i.e., Adverse Events (AEs); Serious Adverse Events (SAEs); Unanticipated Problems Involving Risks to Subjects or Others; protocol violations; non-compliance (serious or continuing); suspensions or terminations by monitoring entities (e.g., an Institutional Review Board (IRB) or Independent Safety Monitor (ISM)); and suspensions or terminations by regulatory agencies (e.g., the Office for Human Research Protections (OHRP) or the Food and Drug Administration (FDA))).

( For associated documentation, see: Guidance on Regulatory Documents and Associated Case Report Forms )

NIMH Policy for the Recruitment of Participants in Clinical Research

This policy is intended to support effective and efficient recruitment of participants into all NIMH-funded extramural clinical research studies proposing to enroll 150 or more subjects per study, and into all clinical trials, regardless of size.

NIMH Recruitment of Participants in Clinical Research Policy

This policy outlines NIMH expectations regarding the establishment of recruitment plans and milestones for overall study enrollment, and as appropriate, recruitment plans for females and males, members of racial and ethnic minority groups, and children, as well as recruitment reporting.

Frequently Asked Questions (FAQ) about Recruitment Milestone Reporting (RMR)

This NIMH FAQ document provides responses to several of the most common questions surrounding RMR.

Points to Consider about Recruitment and Retention While Preparing a Clinical Research Study

These “points to consider” are meant to serve as a resource as investigators plan a clinical research study and an NIMH grant application. The document also outlines common barriers that can affect clinical recruitment and retention.

Additional Resources and Trainings

Conducting Research with Participants at Elevated Risk for Suicide: Considerations for Researchers

This web document is intended to support the development of NIMH research grant applications in suicide research, including those related to clinical course, risk and detection, and interventions and implementation, as well as to support research conduct that is safe, ethical and feasible.

Under the NIH Good Clinical Practice (GCP) policy, all NIH-funded clinical investigators and clinical trial staff who are involved in the design, conduct, oversight, or management of clinical trials are required to be trained in GCP. Below are links to some GCP courses that meet NIH GCP training expectations.

Good Clinical Practice for Social and Behavioral Research – E-Learning Course 

The NIH Office of Behavioral and Social Sciences Research (OBSSR) offers a self-paced Good Clinical Practice (GCP) training course with nine video modules. Learners complete knowledge checks and exercises throughout the course.

National Institute of Allergy and Infectious Diseases (NIAID) GCP Learning Center

NIAID has created a self-paced Good Clinical Practice (GCP) training course that includes four modules. These modules educate the learner on the history of human subject research, the regulatory framework, planning human subject research, and conducting human subject research.

National Drug Abuse Treatment (NDAT) Clinical Trials Network  

This NDAT course includes 12 modules based on International Council for Harmonisation (ICH) Good Clinical Practice (GCP) and the Code of Federal Regulations (CFR) for clinical research studies in the U.S. The course is self-paced and takes approximately six hours to complete.

The following notices and links present NIMH expectations and tools for data sharing.

Data Sharing Expectations for NIMH-Funded Clinical Trials 

This notice establishes NIMH’s data sharing expectations, including the request to include a detailed data sharing plan as part of grant applications.

Data Harmonization 

This notice encourages investigators in the mental health research community to utilize data collection protocols using a common set of tools and resources to facilitate the sharing, comparison, and integration of data from multiple sources.

NIMH Data Archive 

The NIMH Data Archive is an informatics platform for the sharing of de-identified human subject data from all clinical research funded by the NIMH.

Educational Materials

The following educational materials are provided to support the training of NIMH-funded clinical research investigators and staff.

Good Clinical Practices (GCP) for NIMH-Sponsored Studies [PowerPoint]

This training presentation defines Good Clinical Practice (GCP) and describes its application in NIMH-funded research. Topics include: investigator responsibilities, training and qualifications, resources and staffing, delegation of responsibilities, informed consent, documentation and storage of data, assessment and reporting, protocol adherence, drug accountability, adverse events/unanticipated problems and noncompliance. Note that this presentation does not replace the Good Clinical Practice (GCP) training required for NIH funded investigators.

Good Documentation Practices for NIMH-Sponsored Studies [PowerPoint]

This training presentation provides an overview of good documentation practices to follow throughout the duration of NIMH-funded research. The presentation defines and gives examples of good documentation practices.

Introduction to Site-Level Quality Management for NIMH-Sponsored Studies [PowerPoint]

This training presentation provides an overview of the process of establishing and ensuring the quality of processes, data, and documentation associated with clinical research activities. Quality Management (QM) is defined in relationship to site-level documentation, processes, and activities. Tools that are available to support site-level QM are also described.

NIMH Clinical Monitoring and Clinical Research Education, Support, and Training Program (CREST) Overview [PowerPoint]

This training presentation provides an overview of clinical monitoring, the types of site monitoring visits, what takes place during those visits, and follow-up activities. The presentation specifically describes the NIMH Clinical Research Education, Support, and Training (CREST) Program: its goals, study portfolio selection process, and standard procedures.

Additional NIMH Links and Contacts:

  • Office of Clinical Research
  • Clinical Trials Operations Branch (CTOB)
  • NIMH Clinical Research Policies, Guidance, and Resources
  • Human Research Protection Branch (HRPB)


Clinical Trials Day

Clinical Trials Day presentation: free Google Slides theme and PowerPoint template

May 20th is a great opportunity to speak about the importance of clinical trials in medicine, because it’s Clinical Trials Day! The day is celebrated worldwide and promoted with speeches, events, ads, and presentations like this one. This template features a modern, technology-inspired design in the blue tones that represent medicine. We have included plenty of editable resources and visual elements to help you share your ideas. Download it and celebrate the improvements that clinical trials have brought to medicine!

Features of this template

  • 100% editable and easy to modify
  • 35 different slides to impress your audience
  • Contains easy-to-edit graphics such as graphs, maps, tables, timelines and mockups
  • Includes 500+ icons and Flaticon’s extension for customizing your slides
  • Designed to be used in Google Slides and Microsoft PowerPoint
  • 16:9 widescreen format suitable for all types of screens
  • Includes information about fonts, colors, and credits of the free resources used


Open access | Published: 19 February 2024

Selection, optimization and validation of ten chronic disease polygenic risk scores for clinical implementation in diverse US populations

Niall J. Lennon, Leah C. Kottyan, Christopher Kachulis, Noura S. Abul-Husn, Josh Arias, Gillian Belbin, Jennifer E. Below, Sonja I. Berndt, Wendy K. Chung, James J. Cimino, Ellen Wright Clayton, John J. Connolly, David R. Crosslin, Ozan Dikilitas, Digna R. Velez Edwards, QiPing Feng, Marissa Fisher, Robert R. Freimuth, Tian Ge, The GIANT Consortium, The All of Us Research Program, Joseph T. Glessner, Adam S. Gordon, Candace Patterson, Hakon Hakonarson, Maegan Harden, Margaret Harr, Joel N. Hirschhorn, Clive Hoggart, Li Hsu, Marguerite R. Irvin, Gail P. Jarvik, Elizabeth W. Karlson, Atlas Khan, Amit Khera, Krzysztof Kiryluk, Iftikhar Kullo, Katie Larkin, Nita Limdi, Jodell E. Linder, Ruth J. F. Loos, Yuan Luo, Edyta Malolepsza, Teri A. Manolio, Lisa J. Martin, Li McCarthy, Elizabeth M. McNally, James B. Meigs, Tesfaye B. Mersha, Jonathan D. Mosley, Anjene Musick, Bahram Namjou, Nihal Pai, Lorenzo L. Pesce, Ulrike Peters, Josh F. Peterson, Cynthia A. Prows, Megan J. Puckelwartz, Heidi L. Rehm, Dan M. Roden, Elisabeth A. Rosenthal, Robb Rowley, Konrad Teodor Sawicki, Daniel J. Schaid, Roelof A. J. Smit, Johanna L. Smith, Jordan W. Smoller, Minta Thomas, Hemant Tiwari, Diana M. Toledo, Nataraja Sarma Vaitinadin, David Veenstra, Theresa L. Walunas, Zhe Wang, Wei-Qi Wei, Chunhua Weng, Georgia L. Wiesner, Xianyong Yin & Eimear E. Kenny

Nature Medicine, volume 30, pages 480–487 (2024)


  • Clinical genetics
  • Risk factors

Polygenic risk scores (PRSs) have improved in predictive performance, but several challenges remain to be addressed before PRSs can be implemented in the clinic, including reduced predictive performance of PRSs in diverse populations, and the interpretation and communication of genetic results to both providers and patients. To address these challenges, the National Human Genome Research Institute-funded Electronic Medical Records and Genomics (eMERGE) Network has developed a framework and pipeline for return of a PRS-based genome-informed risk assessment to 25,000 diverse adults and children as part of a clinical study. From an initial list of 23 conditions, ten were selected for implementation based on PRS performance, medical actionability and potential clinical utility, including cardiometabolic diseases and cancer. Standardized metrics were considered in the selection process, with additional consideration given to strength of evidence in African and Hispanic populations. We then developed a pipeline for clinical PRS implementation (score transfer to a clinical laboratory, validation and verification of score performance), and used genetic ancestry to calibrate PRS mean and variance, utilizing genetically diverse data from 13,475 participants of the All of Us Research Program cohort to train and test model parameters. Finally, we created a framework for regulatory compliance and developed a PRS clinical report for return to providers and for inclusion in an additional genome-informed risk assessment. The initial experience from eMERGE can inform the approach needed to implement PRS-based testing in diverse clinical settings.

Polygenic risk scores (PRSs) aggregate the effects of many genetic risk variants and can be used to predict an individual’s genetic predisposition to a disease or phenotype 1 . PRSs are being calculated and disseminated at a prodigious rate 1 , 2 , but their development and application to clinical care, particularly among ancestrally diverse individuals, present substantial challenges 3 , 4 , 5 . Incorporation of genomic risk information has the potential to improve risk estimation and management 4 , 6 , particularly at younger ages 7 . Clinical use of PRSs may ultimately prevent disease or enable its detection at earlier, more treatable stages 7 , 8 , 9 , 10 . Improved estimation of risk may also enable targeting of preventive or therapeutic interventions to those most likely to benefit from them while avoiding unnecessary testing or overtreatment 10 , 11 . However, clinical use of Eurocentric PRSs in diverse patient samples risks exacerbating existing health disparities 12 , 13 , 14 .
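In its simplest form, a PRS aggregates many variants as a weighted sum of an individual’s risk-allele dosages, with weights taken from effect estimates. The toy sketch below illustrates the arithmetic; the variant IDs, weights, and dosages are invented for illustration, and real scores use thousands to millions of variants.

```python
# Toy PRS calculation: weighted sum of risk-allele dosages (0, 1, or 2
# copies per variant). Variant IDs and per-allele effect weights are
# invented for this sketch.
effect_weights = {"rs0001": 0.12, "rs0002": -0.05, "rs0003": 0.30}

def polygenic_risk_score(dosages, weights):
    """Sum dosage * per-allele effect weight over the scored variants;
    variants missing from the individual's genotype contribute zero."""
    return sum(weights[v] * dosages.get(v, 0) for v in weights)

individual = {"rs0001": 2, "rs0002": 1, "rs0003": 0}
prs = polygenic_risk_score(individual, effect_weights)
```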

PRSs for individual conditions are typically generated from summary statistics derived from genome-wide association studies (GWASs), which are themselves derived from populations in which individuals of European ancestry are heavily overrepresented 12 . Such scores have been shown to have limited prediction accuracy with increasing genetic distance from European populations 12 , 15 . PRSs can be improved if developed and validated using multiancestry cohorts 16 . Clinical and environmental data combined with monogenic and polygenic risk measurements can improve risk prediction, as demonstrated in ref. 17 and other studies 18 . Approaches for combining genomic and nongenomic information, optimizing models for populations of diverse genetic ancestry and across age groups, and conveying this information to clinicians and patients have yet to be developed and applied in clinical care. Various forms of PRSs are available to consumers through commercial platforms such as 23andMe, Myriad Genetics (riskScore), Allelica, Ambry Genetics, and others, and several noncommercial studies have explored the clinical use of PRSs in direct-to-participant models 19 , 20 , 21 ; however, there is limited information on the clinical implementation considerations of returning PRSs across multiple phenotypes in a primary care setting 20 . Even before assessing the ability of PRSs to improve health outcomes, reduce risk and enhance clinical care, large multicenter prospective pragmatic studies are needed to assess how patients and care providers interact with and respond to PRSs in a primary care setting 22 .

The Electronic Medical Records and Genomics (eMERGE) Network is a multicenter consortium established in 2007 to conduct genomic research in biobanks with electronic medical records 23 , 24 . In 2020, eMERGE embarked on a study of genomic risk assessment and management in 5,000 children and 20,000 adults of diverse ancestry, beginning with efforts to identify and validate published PRSs across multiple race-ethnic groups (and inferred genetic ancestries) in ten common diseases with complex genetic etiologies. The study plans for 25,000 individuals (aged 3–75 years) to be recruited from general healthcare system populations. Six of the ten recruitment sites are committed to recruiting an ‘enhanced diversity cohort’, meaning that their enrollment will target 75% of enrolled individuals belonging to a racial or ethnic minority or medically underserved population, whereas the remainder of clinical sites will target 35% minority participants 22 . Enrollment is not targeted to individuals with specific conditions, although individuals with prevalent conditions can be included. For this prospective, pragmatic study, the primary outcome being measured is the number of new healthcare actions after return of the genome-informed risk assessment. This paper describes (1) identification, selection and optimization of the PRSs that are included in the study; (2) calibration of ancestry for PRS estimation using a modified method developed for eMERGE; (3) development and launch of clinical reporting tools; and (4) an overview of the first 2,500 samples processed as part of the study.

PRS auditing and evaluation

To select the PRSs for clinical implementation, the Network conducted a multistage process to evaluate proposed scores (Fig. 1 ). An initial set of 23 conditions was selected based on considerations including relevance to population health (condition prevalence and heritability), strength of evidence for PRS performance, clinical expertise in the eMERGE Network, and data availability that would facilitate validation of the PRS in diverse populations. These conditions were abdominal aortic aneurysm, age-related macular degeneration, asthma, atopic dermatitis, atrial fibrillation, bone mineral density, breast cancer, Crohn’s disease, chronic kidney disease, colorectal cancer, coronary heart disease, depression, hypercholesterolemia, hypertension, ischemic stroke, lupus, nonalcoholic fatty liver disease, obesity, primary open angle glaucoma, prostate cancer, rheumatoid arthritis, type 1 diabetes and type 2 diabetes.

figure 1

a, Timeline and process for selection, evaluation, optimization, transfer, validation and implementation of the clinical PRS test pipeline. Dashed lines represent pivotal moments in the progression of the project, with the duration between these events indicated in months (mo) above the blue arrow. Numbers in white represent the number of conditions being examined at each stage and their fates. The list of ten conditions on the right-hand side indicates the conditions that were implemented in the clinical pipeline for this study. b, Overview of the eMERGE PRS process. Participant DNA is genotyped using the Illumina Global Diversity Array, which assesses 1.8 million sites. Genotyping data are phased and imputed with a reference panel derived from the 1000 Genomes Project. For each participant, raw PRSs are calculated for each condition (PRS raw). Each participant's genetic ancestry is algorithmically determined in the projection step. For each condition, an ancestry calibration model is applied to each participant's z-scores based on model parameters derived from the All of Us Research Program (Calibration) and an adjusted z-score is calculated (PRS adjusted). Participants whose adjusted scores cross the predefined threshold for high PRS are identified and a pdf report is generated. The report is electronically signed after data review by a clinical laboratory director and delivered to the study portal for return to the clinical sites.

Network sites completed a comprehensive literature review of the 23 proposed conditions and the corresponding PRSs. A summary of the features of the PRS for each of the final conditions chosen is shown in Supplementary Table 1 . The collated information covered analytic viability (a description of covariates and of the age and ancestry effects of the original PRS model); feasibility (access to sufficiently diverse validation datasets in terms of genetic ancestry and age, as well as condition prevalence and relevance to preventive care); potential clinical actionability (existing screening or treatment strategies, and the magnitude (odds ratio) of risk in the high-risk group); and translatability (expected public health impact across diverse populations). Candidate PRSs were restricted to those that were either previously validated and published (journal or preprint) or for which there was sufficient information to develop and/or optimize new PRSs, which could then be validated.

In auditing and evaluating evidence of PRS performance, the eMERGE steering committee considered PRSs for conditions that could be implemented in pediatric and/or adult populations, and for diseases with a range of age of onset (0 to >65 years of age). We considered published single nucleotide polymorphism (SNP)-based heritability estimates available for ten of the 23 conditions, ranging from 3% to 58%. The majority of PRSs under consideration aimed to identify individuals at high risk for disease; however, PRSs to predict disease severity and drug response were also considered. Two of the conditions, breast cancer and prostate cancer, were only considered for implementation in individuals whose biological sex was female or male, respectively. As the eMERGE Network plans to enroll >50% participants from underrepresented groups (including racial and ethnic minority groups; people with lower socioeconomic status; underserved rural communities; sexual and gender minority groups) 25 , emphasis was placed on PRSs that were already available for, or could be developed and validated in, diverse population groups.

To define population groups, study-level population descriptors were first extracted from published literature, preprints or information shared directly by collaborators on the data used to develop, optimize and/or validate PRSs. Methods for defining population groups across studies included self-report, extraction from health system data and analysis of genetic ancestry. We designated four population groups: European ancestry (that is, study population descriptors included European, European-American or other European descent diaspora groups), African (that is, African, African American or other African descent diaspora groups), Hispanic (that is, Hispanic, Latina/o/x or those who have origins in countries in the Caribbean and Latin America) and Asian (that is, South Asian, East Asian, South-East Asian, Asian-American or other diaspora Asian groups).
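The grouping step amounts to mapping each published population descriptor onto one of the four working groups. A toy sketch of that mapping follows; the descriptor strings are illustrative examples only, not the curated vocabulary actually used.

```python
# Illustrative mapping from study-level population descriptors to the four
# working groups; not an exhaustive or authoritative vocabulary.
DESCRIPTOR_TO_GROUP = {
    "European": "European ancestry",
    "European-American": "European ancestry",
    "African": "African",
    "African American": "African",
    "Hispanic": "Hispanic",
    "Latino": "Hispanic",
    "South Asian": "Asian",
    "East Asian": "Asian",
    "Asian-American": "Asian",
}

def assign_group(descriptor):
    """Return the working population group, or None if unmapped.

    In practice, unmapped descriptors would be resolved by manual curation
    rather than dropped.
    """
    return DESCRIPTOR_TO_GROUP.get(descriptor)
```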

Thirteen conditions were considered but not selected for clinical implementation (Fig. 1 ). For the six conditions dropped from consideration in August 2020, the driving factors were low disease prevalence across ancestral groups (age-related macular degeneration), lack of availability of diverse genetic datasets for validation (primary open angle glaucoma, rheumatoid arthritis and Crohn's disease) and the lack of a validated algorithm to identify patients and controls based upon electronic health record (EHR) data (bone mineral density). In March 2021, five additional conditions were dropped from consideration for clinical implementation based upon incomplete development and validation of multiancestry PRSs (depression, ischemic stroke), the low predictive value of candidate PRSs (hypertension, nonalcoholic fatty liver disease) and ethical considerations around returning results for a condition with low population prevalence (lupus).

Conditions not prioritized for implementation continued on a ‘developmental’ pathway for further refinement. Each of the 12 conditions that were selected to move forward from the March 2021 review was assigned a ‘lead’ and ‘co-lead’ site, which worked together to develop, validate and transfer the score to the clinical laboratory for instantiation and Clinical Laboratory Improvement Amendments (CLIA) validation. Assignment of leads was based on site preference, expertise and distribution of workload.

Selection, optimization and validation

A systematic framework was developed to evaluate the performance of the remaining 12 PRSs, in accordance with best practices outlined in ref. 26 . An in-depth evaluation matrix of the 12 chosen conditions can be found in Supplementary Table 2 . The Network carefully considered a variety of strategies to optimize PRS generalizability and portability. The Network prioritized validation across four ancestries with an emphasis on African and Hispanic ancestry due to their underrepresentation in genetic research and projected representation within the study cohort. We determined that a PRS was validated if the odds ratios were statistically significant in a minimum of two and up to four ancestral populations: African/African American, Asian, European ancestry, and Hispanic/Latino. The PRS Working Group members conducted an extensive scoping exercise to identify suitable datasets of multiple ancestries for disease-specific PRS validation. These included datasets from early phases of eMERGE (2007–2019) as well as external datasets such as the UK Biobank and Million Veteran Program. These larger population-level databases had the advantage of large sample sizes and less case–control ascertainment bias (though other sources of bias can still be an issue; see ‘Discussion’). A standardized set of questions was addressed by the disease leads that included the source of discovery and validation datasets, the availability of multiancestry validation datasets, the availability of cross-ancestry PRSs (that is, PRS models that were developed and validated in more than one genetic ancestry), proposed percentile thresholds for identifying high-risk status, model discrimination (AUC) and effect sizes (odds ratios) associated with high-risk versus not high-risk status (Supplementary Table 2 ). For seven of the 12 candidate scores, no further optimization of the original model was performed.
For five scores, an additional optimization effort was undertaken to further refine the score performance in multiple ancestries. Details of the optimization can be found in Supplementary Table 3 . A specific score optimization was applied for chronic kidney disease. This optimization consisted of adding the effect of APOL1 risk genotypes to a polygenic component, which has been found to improve risk predictions in African ancestry cohorts 27 .
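The validation criterion above (statistically significant odds ratios for high-PRS versus not-high-PRS status within an ancestral population) can be sketched as a simple 2×2 odds-ratio calculation with a Woolf log-scale confidence interval. The counts below are illustrative, not real study data.

```python
import math

def odds_ratio_ci(case_high, case_not, ctrl_high, ctrl_not, z=1.96):
    """Odds ratio of disease for high-PRS vs not-high-PRS individuals,
    with a Woolf (log-scale normal approximation) 95% confidence interval."""
    or_ = (case_high * ctrl_not) / (case_not * ctrl_high)
    se = math.sqrt(1 / case_high + 1 / case_not + 1 / ctrl_high + 1 / ctrl_not)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative counts for one ancestry stratum (not real study data)
or_, lo, hi = odds_ratio_ci(case_high=120, case_not=880,
                            ctrl_high=300, ctrl_not=8700)
validated_in_stratum = lo > 1.0  # CI excludes 1 in this stratum
```

Under the Network's rule, a score would count as validated if this check succeeded in at least two of the four ancestral populations.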

For the final selection of PRSs to be included in the prospective clinical study, the steering committee considered the score performance summaries (presented by condition leads) in addition to the actionable and measurable recommendations relevant for return, for each condition, in the prospective cohort. Abdominal aortic aneurysm was removed from the clinical pathway in June 2021 based on the inability to extract a critical risk factor (smoking status) from the EHR and a relatively low disease prevalence in Asian and Hispanic populations. Colorectal cancer was removed in June 2021 because the development and validation of the PRS was not complete for all the ancestral groups (Fig. 1 ). For the ten remaining phenotypes, the prospective pragmatic study required a small number of measurable primary clinical recommendations per phenotype so that the utility of the PRS in changing physician and patient behavior could be measured. These recommendations can be found in Supplementary Tables 2 and 4 of ref. 22 .

Population-based z-score calibration

In this study, the focus is on integration and implementation of validated PRSs in clinical practice rather than novel PRS development. Ultimately, the Network opted to balance generalizability and feasibility by validating and returning cross-ancestry PRSs. However, even with cross-ancestry scores, differences remain in the distribution of z-scores for the PRSs across genetic ancestries that can result in inconsistent categorization of individuals into ‘high’ or ‘not high’ polygenic risk categories for a given condition 28 . To address this, the Network chose to develop methods to genetically infer each participant’s ancestry and calibrate the distribution of resulting z-scores through a population-based calibration model 28 , 29 (see below). An alternative would have been to apply existing PRSs in available samples of different ancestries and derive ancestry-specific effect estimates. However, returning ancestry-specific risk estimates is challenging in real-world implementations, as it would require self-reporting of ancestry by patients (who may not be able to provide this with accuracy) and developing multiple ancestry-specific reports for each health condition. In addition, such PRSs would be problematic to return to patients of mixed ancestry.

PRSs often have different means and standard deviations for individuals from different genetic ancestries. While some of these differences could be due to true biological differences in risk, they also result from allele frequency and linkage disequilibrium structure differences between populations 30 . This problem is more acute when a PRS is calculated for an individual whose ancestry does not match the ancestries used to develop the PRS. A clinically implemented PRS test that returns disease risk estimates, therefore, must be adjusted to account for these differences in ancestral background. A calibration method based on principal component analysis (PCA), initially described in ref. 28 , was modified to model both the variances and means of scores as ancestry dependent, whereas the previous method (Methods ) modeled only the means as dependent on ancestry. This modification was necessary because some conditions exhibited highly ancestry-dependent variance, which would have led to many more or fewer participants of certain ancestries receiving a ‘high PRS’ determination than intended. One option considered for creating and training the calibration model was to enroll and process a representative number of participants and then pause the return of results while the model was trained and the older data reprocessed. This stop–start approach was deemed suboptimal. Instead, the model was fit, with permission, to a portion of the All of Us (AoU) Research Program ( https://www.researchallofus.org/ ) cohort genotyping data, which allowed for continuous return of results to eMERGE participants once the study began. Of note, the All of Us Research Program cohorts selected for both training and testing the calibration model exhibited high degrees of genetic admixture, which would be expected to accurately reflect the study enrollment population.
Importantly, because no ancestry group is homogeneous, comparing individuals directly to other individuals in their assigned population group can induce a dependence between admixture fraction and PRS. This dependence is removed by the described PCA calibration method, and the resulting calibrated PRSs are independent of admixture fraction. More details about the ancestry calibration can be found in Methods .
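As a rough illustration of the calibration idea (and only that), the sketch below fits the mean and the residual variance of raw scores as linear functions of a single principal component in a training set, then standardizes a new score against the ancestry-dependent mean and variance. The study's actual model uses multiple PCs and is trained on All of Us genotype data; everything here is a deliberately simplified toy.

```python
import math

def fit_calibration(pc1, scores):
    """Fit mean and residual variance of raw PRSs as linear functions of PC1.

    One-PC toy version of the described model; returns
    (mean_intercept, mean_slope, var_intercept, var_slope).
    """
    n = len(pc1)
    mx, my = sum(pc1) / n, sum(scores) / n
    sxx = sum((x - mx) ** 2 for x in pc1)
    b_mean = sum((x - mx) * (y - my) for x, y in zip(pc1, scores)) / sxx
    a_mean = my - b_mean * mx
    # Model the variance by regressing squared residuals on PC1
    resid2 = [(y - (a_mean + b_mean * x)) ** 2 for x, y in zip(pc1, scores)]
    mr = sum(resid2) / n
    b_var = sum((x - mx) * (r - mr) for x, r in zip(pc1, resid2)) / sxx
    a_var = mr - b_var * mx
    return a_mean, b_mean, a_var, b_var

def adjusted_z(raw, pc1, params):
    """Standardize a raw score against its ancestry-dependent mean/variance."""
    a_mean, b_mean, a_var, b_var = params
    var = max(a_var + b_var * pc1, 1e-8)  # guard against non-positive variance
    return (raw - (a_mean + b_mean * pc1)) / math.sqrt(var)
```

Because both the mean and the variance vary continuously with the PCs, an admixed individual receives a calibration appropriate to their position in PC space rather than a discrete ancestry label.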

Transfer and implementation

Once the final ten conditions had been selected, condition leads worked with computational scientists at the clinical laboratory (Clinical Research Sequencing Platform, LLC at the Broad Institute) to transfer the PRS models and create the sample and data-processing workflow (Fig. 2 ). Condition-specific models were run with outputs from the lab’s genotyping (Illumina Global Diversity Array (GDA)), phasing (Eagle2 (ref. 31 ) https://github.com/poruloh/Eagle ) and imputation (Minimac4 (ref. 32 ) https://genome.sph.umich.edu/wiki/Minimac4 ) pipelines to assess genomic site representation (see Methods for more information on the architecture and components of the pipeline). Several rounds of iteration between the clinical laboratory and condition leads followed in which any issues with the pipeline were resolved and the effect of genomic site missingness was assessed (Table 1 ). The final version of the implemented models was returned to the condition leads to recalculate effect sizes in the validation cohorts.

figure 2

‘High-PRS threshold’ represents the percentile that is deemed to be the cutoff for a specific condition above which a high-PRS result is reported for that condition. Odds ratios are reported as the mean odds ratios (square dot) associated with having a score above the specified threshold, compared to having a score below the specified threshold, along with 95% confidence intervals (CIs), shown in the whiskers. The number of case and control samples used to derive these odds ratios and CIs for each condition can be found in Supplementary Table 2 . Note that the odds ratio for obesity is not reported here, as it will be published by the Genetic Investigation of ANthropometric Traits consortium (Smit et al., manuscript in preparation). ‘Number of SNPs’ represents the number of sites included in each score. ‘Age ranges for return’ indicates the participant ages at which a PRS is calculated for a given condition. AFIB, atrial fibrillation; BC, breast cancer; CKD, chronic kidney disease; CHD, coronary heart disease; HC, hypercholesterolemia; PC, prostate cancer; T1D, type 1 diabetes; T2D, type 2 diabetes.

Finally, as part of the implementation of the PRS pipelines as a clinical test in a CLIA laboratory, a validation study was performed (see Methods for a detailed description; Table 1 summarizes some of the results). Briefly, this study leveraged 70 reference cell lines from diverse ancestry groups (Coriell), for which 30× whole-genome sequencing data were generated to form a variant truth set against which the technical accuracy and reproducibility of imputation and PRS calling were assessed. A second sample set of 20 matched donor blood and saliva specimens was procured to assess the performance of the pipeline with different input materials. A set of three samples, each with six replicates, was run end-to-end through the wet lab and analytical pipelines as an assessment of reproducibility. As a verification of the clinical validity of the scores, cohorts of cases for eight of the ten conditions were created using the eMERGE phase III imputed dataset (available at https://anvil.terra.bio/#workspaces/anvil-datastorage/AnVIL_eMERGE_GWAS/data (registration required)). PRS performance measures were calculated to confirm associations between scores and conditions. Due to limitations in the eMERGE phase III imputation (no chromosome X, different imputation pipeline), the odds ratios from this analysis were not included in the final reports; rather, the odds ratios calculated in the condition-specific validation cohorts (using the final clinical lab pipeline) were used (Fig. 2 and Table 1 ). A validation report was created for each condition. This report was reviewed and approved by the Laboratory Director in compliance with CLIA regulations for the development of a laboratory-developed test. Personnel were trained on laboratory and analytical procedures, and standard operating procedures were implemented. Data review metrics were established, sample pass/fail criteria were defined, and order and report data-transfer pipelines were built as described in ref. 22 .
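The core of a truth-set comparison like the one above can be thought of as per-site genotype concordance between the array/imputation pipeline and the sequencing truth data. The sketch below is a simplification (a real clinical validation also stratifies by variant class, dosage and ancestry, and accounts for no-calls explicitly); the site names and genotypes are invented.

```python
def concordance(truth, called):
    """Fraction of shared sites where the pipeline genotype matches the
    WGS truth genotype; sites missing from either set are excluded here."""
    shared = [site for site in truth if site in called]
    if not shared:
        return None
    matches = sum(1 for site in shared if truth[site] == called[site])
    return matches / len(shared)

# Invented genotypes for illustration
truth  = {"s1": "0/1", "s2": "1/1", "s3": "0/0", "s4": "0/1"}
called = {"s1": "0/1", "s2": "1/1", "s3": "0/1"}  # s4 was not called
rate = concordance(truth, called)  # 2 of 3 shared sites match
```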

Creation of pipeline for report creation, review, sign-out and release

A software pipeline was built to facilitate the data review and clinical report generation. Reports were created both as documents (in pdf format) and as structured data (in JSON format; a sample report is included in the Supplementary Information ). Automated rules for case triage were built into the PRS calculation and reporting pipeline to account for differences in return based on age and sex at birth for certain conditions. For instance, the PRS for breast cancer is only calculated for participants who report sex at birth as female; similarly, prostate cancer scores are only generated for participants who report sex at birth as male. Age-related restrictions were similarly coded into the pipeline to account for study policies on return. Data review by an appropriately qualified, trained individual is required for high-complexity clinical testing. In the PRS clinical pipeline, this review takes the form of a set of metrics that are exposed by the pipeline to the reviewer. These include a z-score range for each condition (passing samples will have a score of −5 < z < +5), a PCA plot per batch against a reference sample set (a visual representation of outlier samples), monitoring of the z-score range for each control per condition (one control on each plate; NA12878) and flagging of any samples with multiple ‘high risk’ results for further review.
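Two of the review rules described above, the z-score range check and the flagging of samples with multiple high-risk results, are simple enough to sketch directly. The field names and data structure below are illustrative, not the laboratory's actual code.

```python
def review_flags(sample):
    """Return data-review flags for one sample.

    Mirrors two checks described in the text: per-condition z-scores must
    fall within (-5, +5), and samples with multiple 'high risk' results are
    routed for additional review. Field names here are illustrative.
    """
    flags = []
    for condition, z in sample["z_scores"].items():
        if not -5 < z < 5:
            flags.append(f"{condition}: z-score {z} outside (-5, 5)")
    n_high = sum(1 for hit in sample["high_prs"].values() if hit)
    if n_high >= 2:
        flags.append(f"{n_high} high-risk results; route for additional review")
    return flags

# Illustrative sample: one out-of-range z-score, a single high-PRS result
sample = {
    "z_scores": {"T2D": 1.4, "CHD": 6.2, "AFIB": -0.3},
    "high_prs": {"T2D": False, "CHD": True, "AFIB": False},
}
flags = review_flags(sample)
```

A flagged sample would go to the human reviewer rather than proceeding automatically to report generation.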

Each participant’s sample is also run on an orthogonal fingerprinting assay (Fluidigm Biomark) that creates a genotype-based fingerprint for that DNA aliquot. Infinium genotyping data are compared to this fingerprint as a primary check of sample chain-of-custody fidelity and to preclude sample or plate swaps during lab processing. Reviewed and approved data for a participant are processed into a clinical report. The text and format of this report were created during an iterative review process by consortium work groups. For this pragmatic clinical implementation study, two results are returned to participants: ‘high risk’ or ‘not high risk’ based on the PRS 22 . In the clinical report, a qualitative framework has been developed to indicate for which condition(s), if any, a participant has been determined to have a high PRS. Quantitative values (z-scores) are not included for any condition in the main results panel. For breast cancer and CHD, the z-score is presented in another section of the report for inclusion in integrated score models for those conditions. For breast cancer specifically, the provided z-score is used with the BOADICEA 33 model to generate an integrated risk that is included in the genome-informed risk assessment (GIRA), as described in ref. 22 .

Overview of the first 2,500 clinical samples processed

Between the launch in July 2022 and May 2023, 2,500 participants were processed through the clinical PRS pipeline (representing ∼ 10% of the proposed cohort). Of the first 2,500 participants processed, 64.5% (1,612) indicated sex at birth as female, while 35.5% (886) indicated male. Median age at sample collection was 51 years (range: 3 years to 75 years). Participants self-reported race/ancestry, with 32.8% (820) identifying as ‘White (for example, English, European, French, German, Irish, Italian, Polish, etc.)’; 32.8% (820) identified as ‘Black, African American or African (for example, African American, Ethiopian, Haitian, Jamaican, Nigerian, Somali, etc.)’; 25.4% (636) identified as ‘Hispanic, Latino or Spanish (for example, Colombian, Cuban, Dominican, Mexican or Mexican American, Puerto Rican, Salvadoran, etc.)’; 5% (124) identified as ‘Asian (for example, Asian, Indian, Chinese, Filipino, Japanese, Korean, Vietnamese, etc.)’; 1.5% (38) identified as American Indian or Alaska Native (for example, Aztec, Blackfeet Tribe, Mayan, Navajo Nation, Native Village of Barrow (Utqiagvik) Inupiat Traditional Government, Nome Eskimo Community, etc.); 0.9% (22) identified as Middle Eastern or North African (for example, Algerian, Egyptian, Iranian, Lebanese, Moroccan, Syrian, etc.); 0.8% (21) selected ‘None of these fully describe [me_or_my_child]’; 0.7% (17) selected ‘Prefer not to answer’; 0.1% (2) participants had incomplete data. A summary of the performance of the first 2,500 samples and resulting high-PRS metrics are shown in Fig. 3 . In the first 2,500 participants, we identified 515 participants (20.6%) with a high PRS for one of the ten conditions, 64 participants (2.6%) had a high PRS for two conditions, and two participants (0.08%) had a high PRS for three conditions. The remaining 1,919 participants had no high PRS found. High-PRS participants spanned the spectrum of genetic ancestry when projected onto principal component space (Fig. 3 ). 
Observed numbers of high-PRS assessments were largely consistent with the corresponding expected numbers. The P values in Fig. 3c are two-sided P values, which are calculated taking into account both the finite size of the eMERGE cohort and the finite size of the training data used to estimate the ancestry adjustment parameters. The P values are further adjusted for multiple hypothesis testing using the Holm–Šidák procedure 34 .
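The observed-versus-expected comparison and its multiplicity correction can be sketched as follows. The expected count is simply the number of scored participants times the calibrated tail probability, and the Holm–Šidák step-down procedure adjusts the raw P values; the simulation used to obtain the raw P values themselves is omitted, and the numbers in the test are illustrative.

```python
def expected_high(n_scored, tail_fraction):
    """Expected number of high-PRS calls if the calibrated tail probability
    holds, e.g. a 95th-percentile threshold implies tail_fraction = 0.05."""
    return n_scored * tail_fraction

def holm_sidak(pvals):
    """Holm-Sidak step-down adjustment of a list of raw two-sided P values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    adjusted = [0.0] * m
    running_max = 0.0
    for rank, i in enumerate(order):
        adj = min(1.0 - (1.0 - pvals[i]) ** (m - rank), 1.0)
        running_max = max(running_max, adj)  # enforce monotonicity
        adjusted[i] = running_max
    return adjusted
```

The step-down structure means the smallest raw P value faces the harshest correction (exponent m), with the penalty relaxing for each subsequent hypothesis.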

figure 3

a, PCA of ancestry indicating participants with a result of ‘high PRS’ for any condition (red dots) compared to participants who did not have a high PRS identified (gray dots). b, Summary of the number of high-risk conditions found per participant. c, Observed numbers of high PRS called per condition compared to the expected numbers of high PRS per condition. P values are two-sided P values calculated by simulation to account for the uncertainty in the All of Us (AoU)-derived ancestry calibration parameters due to the finite size of the AoU training cohort, and further adjusted for multiple hypothesis testing using the Holm–Šidák procedure. Note that not all participants are scored for every condition, owing to age and sex at birth filters.

While the predictive performance of PRSs has improved substantially in recent years, challenges remain in ensuring that PRSs are applicable and effective in diverse populations. In particular, the vast majority of GWASs have focused on individuals of European ancestry, and the predictive accuracy of PRSs declines with increasing genetic distance from the discovery population 5 , 30 , 35 . This risks exacerbating existing health disparities, as clinical use of Eurocentric PRSs in diverse patient samples may not accurately reflect disease risk in non-European populations. To address these challenges, the eMERGE Network has conducted a multistage process to evaluate and optimize PRS selection, development and validation. The Network has prioritized conditions with high prevalence and heritability, existing literature, clinical actionability and the potential for health disparities, and has developed strategies to optimize PRS generalizability and portability across diverse populations. In particular, the Network has emphasized performance across four major ancestry groups (African, Asian, European, Hispanic, as reflected by self-identified race/ethnicity) and has developed a pipeline for clinical PRS implementation, a framework for regulatory compliance and a PRS clinical report.

The potential impact of PRS-based risk assessment in clinical practice is substantial. By enabling targeted interventions and preventative measures, PRS-based risk assessment has the potential to reduce the burden of a range of conditions 22 . Moreover, the development of PRS-based risk assessment in diverse populations has the potential to reduce health disparities by ensuring that clinical use of PRSs accurately reflects disease risk in diverse populations.

However, challenges remain in the successful implementation of PRS-based risk assessment in clinical practice. Participation bias in training or validation datasets that do not accurately represent the broader population, for example the UK Biobank, can lead to skewed results and reduced generalizability in PRS test development 36 . Other challenges include concerns about genetic determinism, the potential for stigmatization and the need for robust regulatory frameworks to ensure that PRS-based risk assessment is deployed safely and effectively. Challenges also remain in healthcare provider and patient understanding and interpretation of PRS results and in how to communicate these results effectively. Additionally, one of the biggest challenges is the implementation of effective disease prevention strategies after the return of results: returning results will not yield benefit without effective disease prevention or early detection strategies. The eMERGE Network’s work provides a blueprint for addressing these challenges, but ongoing research and evaluation will be necessary to ensure that PRS-based risk assessment is implemented in a responsible and effective manner. While this study will not resolve all of these challenges and questions, the results from the 25,000 subjects in the eMERGE study will add to existing risk stratification data for modeling harms and benefits over patient lifetimes.

Future groups developing, transferring and implementing PRSs in a clinical setting could build upon the eMERGE experience. Slightly less than half of the phenotypes originally considered for PRS development were carried through to clinical implementation, for varying reasons, suggesting that a moderately high number of phenotypes with measurable genetic contributions will be appropriate for PRS-based clinical tools. Thresholds for returning a ‘high risk’ PRS were identified by each phenotype working group based in part upon the statistical significance of the difference between the ‘high-risk’ and ‘not high-risk’ groups. Future studies might consider standardizing the analyses and methods used to define these thresholds. Additionally, to have more clinical utility, an individual’s PRS-based risk would be calculated as an age-based absolute risk. While data for these risk assessments are available for some phenotypes (for example, cardiovascular disease and cancer), age of onset data are lacking for many clinically important phenotypes. Finally, standards, guidance and best practices for the integration of PRSs into clinical processes are yet to be developed. The experiences of eMERGE and other groups will be a foundation for ongoing integration of polygenic risk prediction into clinical care settings.

In conclusion, the eMERGE Network’s work in PRS development represents an important step forward in the implementation of PRS-based risk assessment (in combination with other risk estimates from monogenic testing and family history) in clinical practice.

Consent and ethical approval

The study was conducted in accordance with the Declaration of Helsinki, and the central institutional regulatory board protocol was approved by the Ethics Committee of Vanderbilt University. All eMERGE participants provide consent using a global primary consent and a site-specific consent. Minors acknowledge study participation by signing an assent (if local policy dictates), and the child’s parent/guardian signs a parental permission form. The Vanderbilt University Medical Center Coordinating Center is the institutional review board of record (no. 211043) for the Network’s single institutional review board, approved in July 2021.

For the All of Us Research Program, informed consent for all participants is conducted in person or through an eConsent platform that includes primary consent, Health Insurance Portability and Accountability Act authorization for research EHRs and consent for return of genomic results. The protocol was reviewed by the Institutional Review Board (IRB) of the All of Us Research Program. The All of Us Institutional Review Board follows the regulations and guidance of the National Institutes of Health Office for Human Research Protections for all studies, ensuring that the rights and welfare of research participants are overseen and protected uniformly.

Clinical trials registration

The eMERGE genomic risk assessment study is a prospective, interventional clinical trial registered with clinicaltrials.gov (identifier NCT05277116 ). The purpose of the study is to determine whether providing a GIRA will impact clinical actions taken by providers and patients to manage disease risk, and the propensity of participants to develop a disease reported in the GIRA. For this prospective, pragmatic study, the primary outcome being measured is the number of new healthcare actions after return of the genome-informed risk assessment. The number of new healthcare actions will be measured using electronic health record data and participant-reported outcomes through a REDCap survey. Prespecified actions include a condition-specific composite of new encounters, clinical orders or specialty referrals for clinical evaluation associated with the condition(s), placed by a provider within six months of result disclosure.

Secondary outcomes are the number of newly diagnosed conditions after return of the genome-informed risk assessment and the number of risk-reducing interventions after return of the genome-informed risk assessment (time frame: six months and 12 months post return of results to participant).

Population group definition

In the score auditing and evaluation phase, condition leads cataloged population groups used in the development or validation of given scores from available publications, preprints or information shared directly from collaborators. Across the initial list of evaluated scores, methods for defining population groups included self-reporting, extraction from health system data and/or analysis of genetic ancestry. In the optimization phase, populations were defined using either computational analysis alone or both computational analysis and self-reported ancestry, as indicated in Supplementary Table 3 . For creation of the training model for PRS ancestry calibration, populations were computationally determined as described in ‘PRS ancestry calibration overview’ below.

Populations that are underserved and more frequently experience health disparities include racial and ethnic minority groups; people with lower socioeconomic status; underserved rural communities; sexual and gender minority groups; and people with disabilities 25 .

Analytical and technical validation studies

Broad imputation pipeline overview.

We developed an imputation pipeline that takes as input a variant call format (VCF) file generated from a genotyping microarray and imputes genotypes at additional sites across the genome. The pipeline architecture and composition are based on the widely used University of Michigan Imputation Server, which uses the Eagle software ( https://github.com/poruloh/Eagle ) for phasing and Minimac4 ( https://genome.sph.umich.edu/wiki/Minimac4 ) for imputation. The pipeline uses a curated version of the 1000 Genomes Project panel (1KG, www.internationalgenome.org ) as the reference panel. Additional details on the imputation pipeline can be found at https://broadinstitute.github.io/warp/docs/Pipelines/Imputation_Pipeline/README .

Broad curated 1KG reference panel

During the validation process, we determined that some sites in the 1KG reference panel were incorrectly genotyped compared to the sites in matching whole genome sequencing data. To increase the accuracy of imputation and PRS scoring, we curated the original panel by removing sites that were likely incorrectly genotyped, based on comparing allele frequencies to those reported in gnomAD v.2 ( https://gnomad.broadinstitute.org/ ). Documentation of this curation can be found at https://broadinstitute.github.io/warp/docs/Pipelines/Imputation_Pipeline/references_overview , and a publicly available version of the panel is hosted at the following Google Cloud location (accessible via the gsutil utility): gs://broad-gotc-test-storage/imputation/1000G_reference_panel/.
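The curation idea above (drop panel sites whose allele frequency deviates sharply from gnomAD) can be sketched as follows. The 0.1 absolute-difference threshold and the dict-based inputs are illustrative assumptions, not the pipeline's actual parameters.

```python
# Sketch of reference-panel curation: keep only sites with a gnomAD record
# whose panel allele frequency is close to the gnomAD allele frequency.
# Threshold and data layout are assumptions for illustration.

def curate_panel(panel_af, gnomad_af, max_af_diff=0.1):
    """panel_af/gnomad_af: dicts mapping site ID -> allele frequency.
    Returns the site IDs to retain in the curated panel."""
    keep = []
    for site, af in panel_af.items():
        ref_af = gnomad_af.get(site)
        if ref_af is not None and abs(af - ref_af) <= max_af_diff:
            keep.append(site)
    return keep
```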

Selection of a reference panel for imputation as an input to a PRS is an important consideration. Some reference panels (for example, Trans-Omics for Precision Medicine (TOPMed)) contain more samples than the default used in our pipeline (that is, 1KG), which leads to more variants being imputed. The question is whether this would materially change the PRSs calculated from samples imputed with the TOPMed panel. Computational access to this panel is restricted (and local download prohibited), so it was deemed infeasible to implement in our clinical production environment. The performance of a non-eMERGE PRS (for CHD; ref. 28 ) using the two different reference panels was therefore determined for 20 GDA saliva specimens and for 42 AoU array v.1 specimens. The cohort was imputed both by the Broad imputation pipeline with curated 1KG as the reference panel and on the TOPMed imputation server with TOPMed as the reference panel. Imputed arrays were scored by the PRS pipeline.

The PRS percentiles computed with each method are highly concordant for both cohorts. The Pearson correlation coefficient is 0.996 for both cohorts, and the P value of the Welch two-sample t-test is 0.93 and 0.85 for the GDA and AoU v.1 cohorts, respectively (indicating no statistically significant difference between the methods).
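The two statistics used for this comparison can be reproduced on synthetic data as below; the percentile values are invented for illustration, and only the test machinery (Pearson correlation, Welch two-sample t-test) matches the analysis described above.

```python
# Concordance check between PRS percentiles from two imputation panels,
# on synthetic data: Pearson correlation plus Welch's two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
prs_1kg = rng.uniform(0, 100, size=42)               # percentiles, panel A (synthetic)
prs_topmed = prs_1kg + rng.normal(0, 1.0, size=42)   # nearly identical, panel B

r, _ = stats.pearsonr(prs_1kg, prs_topmed)
t, p = stats.ttest_ind(prs_1kg, prs_topmed, equal_var=False)  # Welch's t-test
```

A high `r` with a large `p` is the pattern reported above: the two panels rank samples the same way and their score distributions are statistically indistinguishable.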

Performance verification of the imputation pipeline

Imputation accuracy was determined for 42 specimens that were processed through a genotyping microarray (AoU v.1 array, the precursor to the commercial Global Diversity Array) and imputed with curated 1KG as the reference panel; corresponding deep-coverage (>30×) PCR-free whole genome sequencing data were used as a truth call set to calculate sensitivity and specificity. The arrays were also imputed on the Michigan Imputation Server with 1KG as the reference panel.

Within the cohort, four ancestries were represented: non-Finnish European, East Asian, South Asian (SAS) and African (AFR). Broad imputation pipeline sensitivity is >97% for SNPs and >95% for insertions/deletions (INDELs) for all ancestries. Similarly, specificity from the Broad imputation pipeline is >99% for SNPs and >98% for INDELs. See Extended Data Table 1 for a table of results. Results were highly concordant with those returned by the remote server at Michigan.
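Sensitivity and specificity against a WGS truth set can be computed per site as below. Genotypes are coded as alt-allele dosages (0/1/2); treating any non-zero dosage as a positive call is an assumption about the counting convention, not the pipeline's exact definition.

```python
# Illustrative sensitivity/specificity of imputed genotypes against a
# truth call set. A "positive" is any non-hom-ref call (dosage > 0);
# this coding convention is an assumption for illustration.

def sens_spec(truth, imputed):
    """truth/imputed: parallel lists of alt-allele dosages per site."""
    tp = fp = tn = fn = 0
    for t, i in zip(truth, imputed):
        t_pos, i_pos = t > 0, i > 0
        if t_pos and i_pos:
            tp += 1
        elif not t_pos and i_pos:
            fp += 1
        elif not t_pos and not i_pos:
            tn += 1
        else:
            fn += 1
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity
```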

Performance evaluation of different input material types

To assess the performance of specimens derived from both saliva and whole blood, a set of 20 matched blood and saliva pairs was run through the GDA genotyping process, and the resulting VCFs were imputed using the Broad pipeline for comparison against results for matched blood-derived whole genome data. The Pearson correlations of sensitivity and specificity between blood- and saliva-derived samples are both equal to 100%. For the same pairs, the Welch two-sample t-test statistic is 0.997 and 0.987, respectively. There is no significant difference between the different input sample types.

Imputation repeatability and reproducibility

Imputation pipeline repeatability was assessed by repeating imputation of a cohort of 1,000 Global Screening Array arrays ten times over the course of two weeks; results were 100% concordant. Imputation pipeline precision (reproducibility) was also tested on technical replicates. Three individual samples derived from saliva were each genotyped six times, followed by imputation in a cohort of all saliva-derived samples. In each set of technical replicates, all pairs and all variants in each pair were compared (a total of 45 pairs for which genotypes were compared). Reproducibility is measured using Jaccard scores. ‘Reproducibility over variants’ was calculated only over sites where at least one of the two replicates in a pair calls a non-hom-ref genotype and was found to be 99.91% (95% CI 99.89–99.93) for SNPs and 99.87% (95% CI 99.85–99.90) for INDELs. ‘Reproducibility over all sites’ was calculated over all genotyped sites, including sites genotyped as hom-ref in both replicates, and was found to be 100% (95% CI 100–100) for both SNPs and INDELs.
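The 'reproducibility over variants' metric and the pairwise comparison of replicates can be sketched as follows; the genotype coding (alt-allele counts, 0 = hom-ref) and the exact pairing logic are illustrative assumptions.

```python
# Sketch of 'reproducibility over variants': among sites where at least one
# replicate in a pair has a non-hom-ref call, the fraction of sites with
# identical genotypes. Genotypes are alt-allele counts; 0 means hom-ref.
from itertools import combinations

def reproducibility_over_variants(rep_a, rep_b):
    informative = [(a, b) for a, b in zip(rep_a, rep_b) if a > 0 or b > 0]
    if not informative:
        return 1.0
    return sum(a == b for a, b in informative) / len(informative)

def all_pairs(replicates):
    """Score every pairwise combination of technical replicates."""
    return [reproducibility_over_variants(x, y)
            for x, y in combinations(replicates, 2)]
```

With six replicates per sample, `all_pairs` yields 15 comparisons per sample, matching the 45 total pairs across the three samples described above.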

Imputation performance as a function of variant frequency

Because we expect accuracy to be affected by the frequency of a variant in the population (rare variants are less likely to be in the reference panel and are therefore imputed less accurately), we further subdivided the performance assessment by allele frequency on two cohorts: 42 AoU v.1 arrays and 20 blood–saliva pairs of GDA arrays. Accuracy of imputation as a function of population allele frequency performed as expected, with rare variants imputed less accurately. Imputation is more accurate for variants that are more frequently observed in the population (≥0.1 allele frequency (AF)).

Impact of genotyping array call rate on imputation performance

The impact of call rate on imputation was assessed by generating a downsampled series of 42 arrays, each with call rates of 90%, 95%, 97% and 98%. Pearson correlation values for SNPs and INDELs were calculated across bins of allele frequency, assessed against gnomAD common variants (AF > 0.1), for the cohorts with downsampled call rates. Call rates below 95% were found to produce suboptimal results: at this rate, the mean R² dosage score for sites with AF ≥ 0.1 was 0.98 (95% CI 0.98–0.98) for both SNPs and INDELs, compared to 0.99 for call rates of 97% and 98%.

Impact of imputation batch size on performance

The batch-size effect of the imputation pipeline was assessed by imputing and analyzing arrays in one cohort of 1,000 samples (randomly chosen), ten cohorts of 100 (nonoverlapping subsets of the 1,000 cohort) and ten cohorts of ten (nonoverlapping subsets of one of the 100-sample cohorts). Pearson correlations of dosage scores were calculated across allele-frequency bins (assessed against gnomAD) for smaller cohorts versus larger cohorts. The data show that imputation is highly correlated across batch sizes, with batches of as few as ten samples producing acceptable performance. The mean R² correlation of dosage scores for sites with allele frequency ≥0.1 is above 0.97 in all cases for both SNPs and INDELs, and increases to 0.98 for the larger studied cohorts. Increasing batch size produces very slight improvements in imputation, but these are not significant, and the choice of imputation batch size (at or above ten samples) can be made on practical and operational grounds.

Broad PRS pipeline overview

The PRS pipeline begins by calculating a raw score using plink2 ( https://www.cog-genomics.org/plink/2.0/ ). For each condition, effect alleles and weights are defined for a set of genomic sites stored in a weights file. At each site, the effect allele dosage observed in the imputed VCF is multiplied by the effect weight in the weights file. The raw score is the sum of these products over all the specified sites.
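The raw-score arithmetic described above, in miniature: at each site, multiply the effect-allele dosage by the effect weight and sum the products. Site IDs and weights below are invented for illustration; in production, plink2's scoring functionality performs this calculation.

```python
# Raw PRS = sum over weight-file sites of (effect-allele dosage x weight).
# Missing sites contribute zero here; handling of missing sites in the real
# pipeline may differ (this is an assumption for the sketch).

def raw_prs(dosages, weights):
    """dosages: dict site ID -> effect-allele dosage from the imputed VCF.
    weights: dict site ID -> effect weight from the weights file."""
    return sum(dosages.get(site, 0.0) * w for site, w in weights.items())
```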

Validation of technical and analytical performance of the PRS pipeline

For each of the ten conditions chosen by the consortium for clinical return, a validation study was performed to assess the technical and analytical performance as well as to verify the association between score and disease risk. See Extended Data Table 2 for a summary of the validation measures.

PRS pipeline accuracy

Accuracy of the pipeline was determined by calculating the Pearson correlation between PRSs calculated from 70 specimens imputed from GDA array data and PRSs of corresponding deep-coverage PCR-free whole genome sequencing data (used as a truth call set).

Input material performance

Accuracy of PRS scoring when different sample types (blood or saliva) are used as inputs was determined by comparing the PRSs from matched blood and saliva pairs collected from 20 individuals.

PRS pipeline repeatability

PRS pipeline repeatability was assessed by running the pipeline on the same dataset of 70 imputed GDA arrays ten times over the course of two weeks (without call caching). Scores generated from the different processing runs were compared to determine if there are any differences observed for a given PRS when the pipeline is run at different times.

PRS pipeline reproducibility

PRS pipeline precision (reproducibility) was assessed using three samples, each run six times end-to-end and then compared in a pairwise manner. The z-score standard deviation is used as a measure of variability.

PRS site representation

The SNP weight sites that are not called during genotyping or imputation were determined. These are sites not present in the intersection of an imputed GDA array and the reference panel. Ideally, all sites required for PRS calculation are present either as genotyped or imputed sites; however, in practice, a small number of sites are not present due to differences in the data used to create the score and the specific array and imputation reference panel used in this study.

Performance verification using eMERGE I–III cohort

A cohort of samples with known phenotypic information was used to verify the relationship between PRS as determined by our pipeline and disease risk. For conditions where cases and controls could be identified in the eMERGE I–III cohort, we determined performance using metrics outlined in the ClinGen working group recommendations 26 . Specifically, we determined the PRS distributions for cases and controls, we examined the impact of ancestry adjustment on the distributions and we examined the relationship between observed and predicted risk. An example of this analysis (for T2D) is shown below.

The T2D weight file used for PRSs in this validation report comes from a GWAS by Ge et al. 29 , who reported that individuals in the top 2% of PRSs in the population have an increased risk of developing T2D.

The T2D cohort in the eMERGE I–III dataset consisted of 19,145 cases and 68,823 control samples. The mean adjusted PRS for case samples was 0.435, while the mean for control samples was −0.042. Individuals with higher adjusted PRSs tend to be more likely to develop disease (see Extended Data Fig. 1 for a histogram of T2D PRSs in cases and controls).

There are some limitations to this analysis: (1) the eMERGE I–III dataset used for this analysis was generated from different array platforms and was imputed with a different pipeline, including a different version of the 1KG reference panel than the one currently implemented; (2) the eMERGE I–III imputed dataset does not include variants from chromosomes X or Y. For these reasons, the PRS disease association analysis represents a verification of the clinical validation performed by eMERGE condition leads rather than a quantitative measure of the impact of the score on risk. The clinical associations (odds ratios) reported on the clinical report for each condition were independently determined by eMERGE disease-specific expert teams.

Validation of pipeline and ancestry adjustment in original case–control cohorts

The final pipeline was made available to computational scientists at each of the eMERGE disease-specific expert teams who had access to appropriate case–control cohorts. These groups confirmed the performance of the final pipeline on their cohorts. The odds ratios for each condition that are reported on the clinical reports come from these cohorts rather than the eMERGE cohort for the reasons described above.

PRS ancestry calibration overview

PCA method description.

For a PRS, which is a sum of SNP effects (linear weights), the central limit theorem states that the distribution of scores in a homogeneous population will tend towards a normal distribution as the number of SNPs becomes large. When two different homogeneous populations are randomly mixed, the additive property of the PRS leads the resulting distribution to be similarly normally distributed, with mean and variance depending on the means and variances of the original homogeneous populations 37 , 38 . We can therefore model the distribution of the PRS as being normally distributed, with mean and variance being functions of genetic ancestry. Practically, we implement this as

$$\mathrm{prs} \sim \mathcal{N}(\mu, \sigma^2) \quad (1)$$

$$\mu = \alpha_0 + \sum_{k=1}^{K} \alpha_k \,\mathrm{PC}_k \quad (2)$$

$$\sigma = \beta_0 + \sum_{k=1}^{K} \beta_k \,\mathrm{PC}_k \quad (3)$$

with genetic ancestry being represented by projection into principal component (PC) space 39 . The α and β parameters are found by jointly fitting them to a cohort of training data. This fit is performed by minimizing the negative log likelihood:

$$-\log L = \sum_i \left[ \log \sigma_i + \frac{(\mathrm{prs}_i - \mu_i)^2}{2\sigma_i^2} \right] \quad (4)$$

where i runs over the individuals in the training cohort, prs_i is the ith individual’s raw PRS, and μ_i and σ_i are calculated using equations (2) and (3) above by projecting the ith individual into PC space. Note that, due to the simplicity of the model, overfitting is unlikely to be a problem, so no regularization or other overfitting-avoidance technique is implemented. An individual’s PRS z-score can then be calculated as

$$z = \frac{\mathrm{prs} - \mu}{\sigma} \quad (5)$$

where μ and σ have again been calculated from the specific individual’s projection into PC space. In this way, once the model has been trained, the z-score calculation is fully defined by the fitted model parameters, and z-scores can be calculated without needing additional access to the original training cohort.
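A minimal sketch of this calibration, under stated assumptions: the PRS is modeled as normal with mean and standard deviation linear in the PCs, the α/β parameters are fit by minimizing the negative log likelihood, and new samples are z-scored from the fitted parameters alone. The linear form for σ, the optimizer choice and all data are illustrative assumptions, not the production implementation.

```python
# Ancestry calibration sketch: fit mu(PC) and sigma(PC) by maximum
# likelihood, then z-score individuals without the training cohort.
import numpy as np
from scipy.optimize import minimize

def fit_calibration(prs, pcs):
    """prs: (n,) raw scores; pcs: (n, k) PC projections.
    Returns fitted (alpha, beta), each of length k + 1 (intercept first)."""
    X = np.column_stack([np.ones(len(prs)), pcs])  # intercept + PCs
    k = X.shape[1]

    def nll(params):
        mu = X @ params[:k]
        sigma = X @ params[k:]
        if np.any(sigma <= 0):       # guard: s.d. must stay positive
            return np.inf
        return np.sum(np.log(sigma) + (prs - mu) ** 2 / (2 * sigma ** 2))

    x0 = np.zeros(2 * k)
    x0[k] = prs.std()                # start sigma at the marginal s.d.
    res = minimize(nll, x0, method="Nelder-Mead",
                   options={"maxiter": 10000, "xatol": 1e-8, "fatol": 1e-8})
    return res.x[:k], res.x[k:]

def z_score(prs, pcs, alpha, beta):
    """z-score one individual from its raw PRS and PC projection."""
    x = np.concatenate([[1.0], pcs])
    return (prs - x @ alpha) / (x @ beta)
```

Applied to its own training cohort, the fit should yield z-scores close to standard normal, which is the calibration property the method targets.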

Generating trained models from All of Us data

Generating the trained models consisted of three steps: (1) selecting the training cohort; (2) imputation of the training cohort; and (3) training the models on the training cohort. A test cohort was also generated to test the performance of the training.

Ancestry-balanced training and test cohorts were generated by subsampling from an initial cohort of around 100,000 All of Us samples. For the purposes of balancing the cohort, each sample was assigned to one of the five 1KG super populations. Principal component analysis was first performed on a random subset of 20,000 samples. The 1KG samples were projected onto these principal components, and a support vector machine was trained on 1KG to predict ancestry. The support vector machine was then used to assign 108,000 AoU samples to one of the five 1KG super populations. A balanced training cohort was selected based on these predicted ancestries, and principal components were recalculated using this balanced training cohort. A similarly balanced test cohort was selected based on ancestries estimated from projection on the training set PCs. The resulting breakdown of the cohorts by estimated ancestry is shown in Extended Data Table 3 .
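The reference-projection and SVM-assignment step above can be sketched on synthetic data: samples with known super-population labels are clustered in "PC" space, an SVM is trained on them, and unlabeled cohort samples are assigned to the five groups. The 2-D coordinates, cluster centers and sample counts are invented stand-ins for real genotype PCs.

```python
# Sketch of ancestry assignment for cohort balancing: train an SVM on
# labeled reference samples in PC space, then predict super-population
# labels for an unlabeled cohort. All coordinates are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
labels = ["AFR", "AMR", "EAS", "EUR", "SAS"]
centers = np.array([[0, 0], [4, 0], [0, 4], [4, 4], [8, 0]], dtype=float)

# "1KG"-like reference: 100 samples per population, clustered in PC space
ref_X = np.vstack([c + rng.normal(0, 0.3, size=(100, 2)) for c in centers])
ref_y = np.repeat(labels, 100)
svm = SVC(kernel="rbf").fit(ref_X, ref_y)

# assign an unlabeled cohort; a balanced subset per predicted label
# would then be selected for training
cohort = np.vstack([c + rng.normal(0, 0.3, size=(50, 2)) for c in centers])
predicted = svm.predict(cohort)
```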

Both the training and testing cohorts include a number of individuals with highly admixed ancestry. Admixture was quantified using the tool Admixture 40 with five ancestral populations. The resulting admixtures of each cohort are shown in Extended Data Fig. 2 , and the most common admixed ancestries in each cohort are summarized in Extended Data Table 4 .

Each cohort was imputed using the imputation pipeline described above, with 1KG as the reference panel. By keeping the imputation pipeline identical to the pipeline used for the eMERGE dataset, and because the AoU dataset uses the same GDA array as the eMERGE dataset, any potential biases resulting from differing data production and processing methods were removed. The training cohort was scored for each of the ten conditions, and model parameters were fit by minimizing the negative log likelihood as described. The test cohort was then used to evaluate the generalizability of these model parameters.

Performance on test cohort

Extended Data Fig. 3 illustrates the distribution of calibrated z-scores in the test cohort using the parameters fit in the training cohort. As can be seen, all ancestries show the intended standard normal distribution of calibrated scores.

One of the main improvements of this method over previous methods is the inclusion of an ancestry-dependent variance in addition to the ancestry-dependent mean. The importance of this is shown for the hypercholesterolemia PRS in Extended Data Fig. 4 . The variance of this score differs significantly across ancestries, so a method that fits only the mean of the distribution as ancestry dependent can produce z-score distributions that are attenuated towards zero or expanded away from zero for some ancestries. By also treating the variance as ancestry dependent, this method yields z-score distributions that are more standardized across ancestries.

In addition to improving calibration across ancestries, this method can improve calibration within ancestries, particularly for highly admixed individuals. An example of this can be seen in Extended Data Fig. 5 . Because no ancestry group is homogeneous, when individuals are compared directly to other individuals in their assigned population group, a dependence between admixture fraction and PRS can result. This dependence is removed by the described PCA calibration method, and the resulting calibrated PRSs are independent of admixture fraction.

Reporting summary

Further information on the research design is available in the Nature Portfolio Reporting Summary linked to this article.

Data availability

Underlying data used to verify the performance of the PRS pipeline are available in dbGaP ( https://www.ncbi.nlm.nih.gov/projects/gap/cgi-bin/study.cgi?study_id=phs001584.v1.p1 ). De-identified data relating to trial participants will be available through dbGaP ( https://www.ncbi.nlm.nih.gov/gap/ ) and the AnVIL platform ( https://anvil.terra.bio/ ), as an interim analysis in 2024 and as a final dataset at the end of the study, expected in 2026. Information (sites and weights) on the implemented scores can be found at https://github.com/broadinstitute/eMERGE-implemented-PRS-models-Lennon-et-al and on the UCSC browser at https://genome.ucsc.edu/s/Max/emerge . Additionally, PGS Catalog IDs for most of the implemented scores are indicated in Supplementary Table 3 .

Code availability

Code used in this work to create and operate the imputation and PRS pipelines is hosted at https://github.com/broadinstitute/palantir-workflows/tree/main/ImputationPipeline . Code for the PRS ancestry calibration can also be found in the AoU demonstration workspace https://workbench.researchallofus.org/workspaces/aou-rw-bef5bf62/demopolygenicriskscoregeneticancestrycalibration/data (open access but researcher registration required).

Lambert, S. A. et al. The Polygenic Score Catalog as an open database for reproducibility and systematic evaluation. Nat. Genet. 53 , 420–425 (2021).


Lewis, C. M. & Vassos, E. Polygenic risk scores: from research tools to clinical instruments. Genome Med. 12 , 44 (2020).


Polygenic Risk Score Task Force of the International Common Disease Alliance. Responsible use of polygenic risk scores in the clinic: potential benefits, risks and gaps. Nat. Med. 27 , 1876–1884 (2021).


Torkamani, A., Wineinger, N. E. & Topol, E. J. The personal and clinical utility of polygenic risk scores. Nat. Rev. Genet. 19 , 581–590 (2018).

Duncan, L. et al. Analysis of polygenic risk score usage and performance in diverse human populations. Nat. Commun. 10 , 3328 (2019).


Hurson, A. N. et al. Prospective evaluation of a breast-cancer risk model integrating classical risk factors and polygenic risk in 15 cohorts from six countries. Int. J. Epidemiol. 50 , 1897–1911 (2022).


Inouye, M. et al. Genomic risk prediction of coronary artery disease in 480,000 adults: implications for primary prevention. J. Am. Coll. Cardiol. 72 , 1883–1893 (2018).

Guo, F. et al. Polygenic risk score for defining personalized surveillance intervals after adenoma detection and removal at colonoscopy. Clin. Gastroenterol. Hepatol. 21 , 210–219.e11 (2023).

Fantus, R. J. & Helfand, B. T. Germline genetics of prostate cancer: time to incorporate genetics into early detection tools. Clin. Chem. 65 , 74–79 (2019).

Pharoah, P. D. P., Antoniou, A. C., Easton, D. F. & Ponder, B. A. J. Polygenes, risk prediction, and targeted prevention of breast cancer. N. Engl. J. Med. 358 , 2796–2803 (2008).

Willoughby, A., Andreassen, P. R. & Toland, A. E. Genetic testing to guide risk-stratified screens for breast cancer. J. Pers. Med 9 , 15 (2019).

Martin, A. R. et al. Clinical use of current polygenic risk scores may exacerbate health disparities. Nat. Genet. 51 , 584–591 (2019).


Mars, N. et al. Systematic comparison of family history and polygenic risk across 24 common diseases. Am. J. Hum. Genet. 109 , 2152–2162 (2022).

Wang, Z. et al. The value of rare genetic variation in the prediction of common obesity in European ancestry populations. Front. Endocrinol. 13 , 863893 (2022).

Ruan, Y. et al. Improving polygenic prediction in ancestrally diverse populations. Nat. Genet. 54 , 573–580 (2022).

Márquez-Luna, C., Loh, P.-R., South Asian Type 2 Diabetes (SAT2D) Consortium & Price, A. L. Multiethnic polygenic risk scores improve risk prediction in diverse populations. Genet. Epidemiol. 41 , 811–823 (2017).

Hujoel, M. L. A., Loh, P.-R., Neale, B. M. & Price, A. L. Incorporating family history of disease improves polygenic risk scores in diverse populations. Cell Genom. 2 , 100152 (2022).

Elliott, J. et al. Predictive accuracy of a polygenic risk score-enhanced prediction model vs a clinical risk score for coronary artery disease. JAMA 323 , 636–645 (2020).

Folkersen, L. et al. Impute.me: an open-source, non-profit tool for using data from direct-to-consumer genetic testing to calculate and interpret polygenic risk scores. Front. Genet. 11 , 515901 (2020).


Hao, L. et al. Development of a clinical polygenic risk score assay and reporting workflow. Nat. Med. 28 , 1006–1013 (2022).

Vassy, J. L. et al. Cardiovascular disease risk assessment using traditional risk factors and polygenic risk scores in the million veteran program. JAMA Cardiol. 8 , 564–574 (2023).

Linder, J. E. et al. Returning integrated genomic risk and clinical recommendations: the eMERGE study. Genet. Med. 25 , 100006 (2023).

McCarty, C. A. et al. The eMERGE Network: a consortium of biorepositories linked to electronic medical records data for conducting genomic studies. BMC Med. Genomics 4 , 13 (2011).

Gottesman, O. et al. The Electronic Medical Records and Genomics (eMERGE) Network: past, present, and future. Genet. Med. 15 , 761–771 (2013).

NIMHD. Overview. https://www.nimhd.nih.gov/about/overview/ . Accessed 11 Dec 2023.

Wand, H. et al. Improving reporting standards for polygenic scores in risk prediction studies. Nature 591 , 211–219 (2021).

Khan, A. et al. Genome-wide polygenic score to predict chronic kidney disease across ancestries. Nat. Med. 28 , 1412–1420 (2022).

Khera, A. V. et al. Genome-wide polygenic scores for common diseases identify individuals with risk equivalent to monogenic mutations. Nat. Genet. 50 , 1219–1224 (2018).

Ge, T. et al. Development and validation of a trans-ancestry polygenic risk score for type 2 diabetes in diverse populations. Genome Med. 14 , 70 (2022).

Martin, A. R. et al. Human demographic history impacts genetic risk prediction across diverse populations. Am. J. Hum. Genet. 100 , 635–649 (2017).

Loh, P.-R. et al. Reference-based phasing using the Haplotype Reference Consortium panel. Nat. Genet. 48 , 1443–1448 (2016).

Howie, B., Fuchsberger, C., Stephens, M., Marchini, J. & Abecasis, G. R. Fast and accurate genotype imputation in genome-wide association studies through pre-phasing. Nat. Genet. 44 , 955–959 (2012).

Lee, A. et al. BOADICEA: a comprehensive breast cancer risk prediction model incorporating genetic and nongenetic risk factors. Genet. Med. 21 , 1708–1718 (2019).

Holm, S. A simple sequentially rejective multiple test procedure. Scand. J. Stat. 6 , 65–70 (1979).

Ding, Y. et al. Large uncertainty in individual polygenic risk score estimation impacts PRS-based risk stratification. Nat. Genet. 54 , 30–39 (2022).

Schoeler, T. et al. Participation bias in the UK Biobank distorts genetic associations and downstream analyses. Nat. Hum. Behav. 7 , 1216–1227 (2023).

Privé, F. et al. Portability of 245 polygenic scores when derived from the UK Biobank and applied to 9 ancestry groups from the same cohort. Am. J. Hum. Genet. 109 , 12–23 (2022).

Ding, Y. et al. Polygenic scoring accuracy varies across the genetic ancestry continuum. Nature 618 , 774–781 (2023).

Scutari, M., Mackay, I. & Balding, D. Using genetic distance to infer the accuracy of genomic prediction. PLoS Genet. 12 , e1006288 (2016).

Alexander, D. H., Novembre, J. & Lange, K. Fast model-based estimation of ancestry in unrelated individuals. Genome Res. 19 , 1655–1664 (2009).


Acknowledgements

We thank the past and future participants of the eMERGE Network projects. We thank M. O’Reilly for help with figure creation. We thank our All of Us Research Program colleagues, A. Ramirez, S. Lim, B. Mapes, A. Green and A. Musick, for providing their support and input throughout the ancestry calibration demonstration project lifecycle. We also thank the All of Us Science Committee and All of Us Steering Committee for their efforts evaluating and finalizing the approved demonstration projects. The All of Us Research Program would not be possible without the partnership of contributions made by its participants. To learn more about the All of Us Research Program’s research data repository, please visit https://www.researchallofus.org/ . This phase of the eMERGE Network was initiated and funded by the National Human Genome Research Institute through the following grants: U01HG011172 (Cincinnati Children’s Hospital Medical Center); U01HG011175 (Children’s Hospital of Philadelphia); U01HG008680 (Columbia University); U01HG011176 (Icahn School of Medicine at Mount Sinai); U01HG008685 (Mass General Brigham); U01HG006379 (Mayo Clinic); U01HG011169 (Northwestern University); U01HG011167 (University of Alabama at Birmingham); U01HG008657 (University of Washington); U01HG011181 (Vanderbilt University Medical Center); U01HG011166 (Vanderbilt University Medical Center serving as the Coordinating Center). 
The All of Us Research Program is supported by the National Institutes of Health, Office of the Director: Regional Medical Centers: 1 OT2 OD026549; 1 OT2 OD026554; 1 OT2 OD026557; 1 OT2 OD026556; 1 OT2 OD026550; 1 OT2 OD 026552; 1 OT2 OD026548; 1 OT2 OD026551; 1 OT2 OD026555; IAA#: AOD 16037; Federally Qualified Health Centers: 75N98019F01202; Data and Research Center: 1 OT2 OD35404; Biobank: 1 U24 OD023121; The Participant Center: U24 OD023176; Participant Technology Systems Center: 1 OT2 OD030043; Community Partners: 1 OT2 OD025277; 3 OT2 OD025315; 1 OT2 OD025337; 1 OT2 OD025276.

Author information

These authors contributed equally: Niall J. Lennon, Leah C. Kottyan.

Full lists of members and their affiliations appear in the Supplementary Information.

Authors and Affiliations

Broad Institute of MIT and Harvard, Cambridge, MA, USA

Niall J. Lennon, Christopher Kachulis, Marissa Fisher, Joel Hirschhorn, Candace Patterson, Maegan Harden, Joel N. Hirschhorn, Amit Khera, Katie Larkin, Edyta Malolepsza, Li McCarthy, Nihal Pai, Heidi L. Rehm & Diana M. Toledo

Cincinnati Children’s Hospital Medical Center, University of Cincinnati, Cincinnati, OH, USA

Leah C. Kottyan, Lisa J. Martin, Tesfaye B. Mersha, Bahram Namjou & Cynthia A. Prows

Icahn School of Medicine at Mount Sinai, New York, NY, USA

Noura S. Abul-Husn, Gillian Belbin, Clive Hoggart, Roelof A. J. Smit, Zhe Wang & Eimear E. Kenny

National Human Genome Research Institute, National Institutes of Health, Bethesda, MD, USA

Josh Arias, Sonja I. Berndt, Sonja Berndt, Teri A. Manolio & Robb Rowley

Vanderbilt University Medical Center, Nashville, TN, USA

Jennifer E. Below, Ellen Wright Clayton, Digna R. Velez Edwards, QiPing Feng, Jodell E. Linder, Jonathan D. Mosley, Josh F. Peterson, Dan M. Roden, Nataraja Sarma Vaitinadin, Wei-Qi Wei & Georgia L. Wiesner

Columbia University, New York, NY, USA

Wendy K. Chung, Atlas Khan, Krzysztof Kiryluk & Chunhua Weng

University of Alabama at Birmingham, Birmingham, AL, USA

James J. Cimino, Marguerite R. Irvin, Nita Limdi & Hemant Tiwari

Children’s Hospital of Philadelphia, Philadelphia, PA, USA

John J. Connolly, Joseph T. Glessner, Hakon Hakonarson & Margaret Harr

Tulane University, New Orleans, LA, USA

David R. Crosslin

University of Washington, Seattle, WA, USA

David R. Crosslin, Gail P. Jarvik, Elisabeth A. Rosenthal & David Veenstra

Mayo Clinic, Rochester, MN, USA

Ozan Dikilitas, Robert R. Freimuth, Iftikhar Kullo, Daniel J. Schaid & Johanna L. Smith

Mass General Brigham, Boston, MA, USA

Tian Ge, Elizabeth W. Karlson, James B. Meigs & Jordan W. Smoller

Northwestern University, Evanston, IL, USA

Adam S. Gordon, Yuan Luo, Elizabeth M. McNally, Lorenzo L. Pesce, Megan J. Puckelwartz, Konrad Teodor Sawicki & Theresa L. Walunas

Boston Children’s Hospital, Boston, MA, USA

Joel Hirschhorn & Joel N. Hirschhorn

Fred Hutchinson Cancer Center, Seattle, WA, USA

Li Hsu, Ulrike Peters & Minta Thomas

Novo Nordisk Foundation Center for Basic Metabolic Research, Faculty of Health and Medical Sciences, University of Copenhagen, Copenhagen, Denmark

Ruth Loos & Ruth J. F. Loos

The Charles Bronfman Institute for Personalized Medicine, Icahn School of Medicine at Mount Sinai, New York, NY, USA

National Institutes of Health, Bethesda, MD, USA

Anjene Musick

Nanjing Medical University, Nanjing, China

Xianyong Yin

The GIANT Consortium

  • Sonja Berndt
  • Joel Hirschhorn
  • Ruth Loos
  • Roelof A. J. Smit
  • Xianyong Yin

The All of Us Research Program

Contributions.

N.J.L. and L.C.K. contributed equally. N.J.L., L.C.K., D.R.C., O.D., T.G., J.T.G., H.H., L.H., E.W.K., R.L., E.M.M., J.B.M., B.N., R.A.J.S. and E.E.K. were responsible for PRS development. N.J.L., L.C.K., D.R.C., O.D., T.G., J.T.G., A.S.G., H.H., L.H., E.W.K., J.E.L., R.L., Y.L., E.M., L.M., J.B.M., B.N., L.L.P., J.F.P., M.J.P., R.R., K.T.S., R.A.J.S., J.L.S., C.W., W.-Q.W. and E.E.K. conducted PRS evaluation. PRS selection, optimization and validation was done by N.J.L., L.C.K., D.R.C., Q.F., O.D., T.G., J.T.G., A.S.G., H.H., L.H., E.W.K., J.E.L., R.L., Y.L., E.M., L.M., B.N., L.L.P., M.J.P., R.R., K.T.S., R.A.J.S., J.L.S., G.L.W., C.W., W.-Q.W. and E.E.K. Population-based z-score calibration was done by N.J.L., C.K., T.G., B.N. and E.E.K. N.J.L., L.C.K., C.K., J.J. Connolly, D.R.C., T.G., E.M., B.N., M.J.P., R.A.J.S. and E.E.K. were responsible for PRS transfer and implementation. Assessment of the first 2,500 participants was done by N.J.L., C.K., Q.F., B.N. and E.E.K. N.J.L., L.C.K., C.K., J.E.L., T.A.M., J.W.S., J.F.P. and E.E.K. wrote the first draft of the paper. All authors reviewed the paper.

Corresponding author

Correspondence to Niall J. Lennon.

Ethics declarations

Competing interests.

N.S.A.-H. is an employee and equity holder of 23andMe; serves as a scientific advisory board member for Allelica, Inc; received personal fees from Genentech Inc, Allelica Inc, and 23andMe; received research funding from Akcea Therapeutics; and was previously employed by Regeneron Pharmaceuticals. E.E.K. received personal fees from Illumina Inc, 23andMe and Regeneron Pharmaceuticals and serves as a scientific advisory board member for Encompass Bioscience, Foresite Labs and Galateo Bio. J.N.H. has equity in Camp4 Therapeutics and has been a consultant to Amgen, AstraZeneca, Cytokinetics, PepGen, Pfizer and Tenaya Therapeutics and is the founder of Ikaika Therapeutics. J.F.P. is a paid consultant for Natera Inc. A. Khera is an employee of Verve Therapeutics. N.L. received personal fees from Illumina Inc and is a scientific advisory board member for FYR Diagnostics. J.F.P. is a consultant for Myome. D.V. is a consultant for Illumina and has grant support from GeneDx. T.L.W. has grant funding from Gilead Sciences, Inc. The other authors declare no competing interests.

Peer review

Peer review information.

Nature Medicine thanks Cathryn Lewis, Lili Milani, Bjarni Vilhjálmsson and the other, anonymous, reviewer(s) for their contribution to the peer review of this work. Primary Handling Editor: Anna Maria Ranzoni, in collaboration with the Nature Medicine team.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Extended data

Extended Data Fig. 1 Case-control PRS histograms.

Histograms of T2D PRS scores for case and control samples in the eMERGE I-III dataset.

Extended Data Fig. 2 Representation of the genetic ancestry admixture composition of both the Test and Training cohorts.

The x-axis represents individuals within the cohorts and the color-coding highlights the proportion of genetic admixture observed.

Extended Data Fig. 3 Calibrated z-scores.

The distributions of calibrated z-scores in the test cohort when the training cohort parameters are applied.

Extended Data Fig. 4 Hypercholesterolemia PRS calibrated z-scores of training cohort.

Note the improvement when an ancestry-dependent variance is used over a constant-variance method.

Extended Data Fig. 5 PRS z-score as a function of African Admixture Fraction, for individuals of African ancestry.

In the ‘Bucketing’ method, a z-score is calculated by comparing to the mean and variance of all individuals of African ancestry in the cohort. The ‘PCA Calibrated’ method is the method described above. Note the dependence on admixture fraction in the ‘Bucketing’ method, which has been removed in the ‘PCA Calibrated’ method.
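The contrast between the two approaches can be sketched in a few lines of code. The sketch below is our own illustration, not the authors' implementation: it assumes the 'PCA Calibrated' method models the PRS mean and variance as linear functions of genetic-ancestry principal components, so the resulting z-score no longer tracks admixture fraction; all function names and the exact linear form are assumptions.

```python
import numpy as np

def bucketed_zscore(raw, bucket_scores):
    # 'Bucketing': z-score against the mean and variance of one discrete
    # ancestry group, ignoring within-group admixture variation.
    return (raw - bucket_scores.mean()) / bucket_scores.std()

def fit_pca_calibration(scores, pcs):
    # Fit the PRS mean, and then the residual variance, as linear
    # functions of ancestry PCs (an assumed model form for illustration).
    X = np.column_stack([np.ones(len(scores)), pcs])
    beta_mean, *_ = np.linalg.lstsq(X, scores, rcond=None)
    resid = scores - X @ beta_mean
    beta_var, *_ = np.linalg.lstsq(X, resid ** 2, rcond=None)
    return beta_mean, beta_var

def pca_calibrated_zscore(raw, pc, beta_mean, beta_var):
    # Standardise one individual's raw score using the ancestry-dependent
    # mean and variance predicted from their PC coordinates.
    x = np.concatenate([[1.0], np.atleast_1d(pc)])
    mu = x @ beta_mean
    var = max(float(x @ beta_var), 1e-12)  # guard against non-positive variance
    return (raw - mu) / np.sqrt(var)
```

On simulated data where the raw score drifts with a PC, the calibrated z-score is approximately uncorrelated with that PC, which is the property the figure illustrates.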

Supplementary information

Supplementary information.

Sample clinical report and list of consortia members.

Reporting Summary

Supplementary Table 1.

Supplementary Tables 1–3 (tabs in a single worksheet).

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article.

Lennon, N.J., Kottyan, L.C., Kachulis, C. et al. Selection, optimization and validation of ten chronic disease polygenic risk scores for clinical implementation in diverse US populations. Nat Med 30, 480–487 (2024). https://doi.org/10.1038/s41591-024-02796-z

Received: 25 May 2023

Accepted: 02 January 2024

Published: 19 February 2024

Issue Date: February 2024

DOI: https://doi.org/10.1038/s41591-024-02796-z

  • Systematic review
  • Open access
  • Published: 19 February 2024

‘It depends’: what 86 systematic reviews tell us about what strategies to use to support the use of research in clinical practice

  • Annette Boaz (ORCID: 0000-0003-0557-1294),
  • Juan Baeza,
  • Alec Fraser (ORCID: 0000-0003-1121-1551) &
  • Erik Persson

Implementation Science volume 19, Article number: 15 (2024)

The gap between research findings and clinical practice is well documented and a range of strategies have been developed to support the implementation of research into clinical practice. The objective of this study was to update and extend two previous reviews of systematic reviews of strategies designed to implement research evidence into clinical practice.

We developed a comprehensive systematic literature search strategy based on the terms used in the previous reviews to identify studies that looked explicitly at interventions designed to turn research evidence into practice. The search was performed in June 2022 in four electronic databases: Medline, Embase, Cochrane and Epistemonikos. We searched from January 2010 up to June 2022 and applied no language restrictions. Two independent reviewers appraised the quality of included studies using a quality assessment checklist. To reduce the risk of bias, papers were excluded following discussion between all members of the team. Data were synthesised using descriptive and narrative techniques to identify themes and patterns linked to intervention strategies, targeted behaviours, study settings and study outcomes.

We identified 32 reviews conducted between 2010 and 2022. The reviews are mainly of multi-faceted interventions ( n  = 20) although there are reviews focusing on single strategies (ICT, educational, reminders, local opinion leaders, audit and feedback, social media and toolkits). The majority of reviews report strategies achieving small impacts (normally on processes of care). There is much less evidence that these strategies have shifted patient outcomes. Furthermore, a lot of nuance lies behind these headline findings, and this is increasingly commented upon in the reviews themselves.

Combined with the two previous reviews, 86 systematic reviews of strategies to increase the implementation of research into clinical practice have been identified. We need to shift the emphasis away from isolating individual and multi-faceted interventions to better understanding and building more situated, relational and organisational capability to support the use of research in clinical practice. This will involve drawing on a wider range of research perspectives (including social science) in primary studies and diversifying the types of synthesis undertaken to include approaches such as realist synthesis which facilitate exploration of the context in which strategies are employed.

Peer Review reports

Contribution to the literature

Considerable time and money is invested in implementing and evaluating strategies to increase the implementation of research into clinical practice.

The growing body of evidence is not providing the anticipated clear lessons to support improved implementation.

What is needed instead is a better understanding of, and greater effort to build, situated, relational and organisational capability to support the use of research in clinical practice.

This would involve a more central role in implementation science for a wider range of perspectives, especially from the social, economic, political and behavioural sciences and for greater use of different types of synthesis, such as realist synthesis.

Introduction

The gap between research findings and clinical practice is well documented and a range of interventions has been developed to increase the implementation of research into clinical practice [ 1 , 2 ]. In recent years researchers have worked to improve the consistency in the ways in which these interventions (often called strategies) are described to support their evaluation. One notable development has been the emergence of Implementation Science as a field focusing explicitly on “the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice” ([ 3 ] p. 1). The work of implementation science focuses on closing, or at least narrowing, the gap between research and practice. One contribution has been to map existing interventions, identifying 73 discrete strategies to support research implementation [ 4 ] which have been grouped into 9 clusters [ 5 ]. The authors note that they have not considered the evidence of effectiveness of the individual strategies and that a next step is to understand better which strategies perform best in which combinations and for what purposes [ 4 ]. Other authors have noted that there is also scope to learn more from other related fields of study such as policy implementation [ 6 ] and to draw on methods designed to support the evaluation of complex interventions [ 7 ].

The increase in activity designed to support the implementation of research into practice and improvements in reporting provided the impetus for an update of a review of systematic reviews of the effectiveness of interventions designed to support the use of research in clinical practice [ 8 ] which was itself an update of the review conducted by Grimshaw and colleagues in 2001. The 2001 review [ 9 ] identified 41 reviews considering a range of strategies, including educational interventions, audit and feedback, computerised decision support, financial incentives and combined interventions. The authors concluded that all the interventions had the potential to promote the uptake of evidence in practice, although no one intervention seemed to be more effective than the others in all settings. They concluded that combined interventions were more likely to be effective than single interventions. The 2011 review identified a further 13 systematic reviews containing 313 discrete primary studies. Consistent with the previous review, four main strategy types were identified: audit and feedback; computerised decision support; opinion leaders; and multi-faceted interventions (MFIs). Nine of the reviews reported on MFIs. The review highlighted the small effects of single interventions such as audit and feedback, computerised decision support and opinion leaders. MFIs claimed an improvement in effectiveness over single interventions, although effect sizes remained small to moderate, and this improvement in effectiveness relating to MFIs has been questioned in a subsequent review [ 10 ]. In updating the review, we anticipated a larger pool of reviews and an opportunity to consolidate learning from more recent systematic reviews of interventions.

This review updates and extends our previous review of systematic reviews of interventions designed to implement research evidence into clinical practice. To identify potentially relevant peer-reviewed research papers, we developed a comprehensive systematic literature search strategy based on the terms used in the Grimshaw et al. [ 9 ] and Boaz, Baeza and Fraser [ 8 ] overview articles. To ensure optimal retrieval, our search strategy was refined with support from an expert university librarian, considering the ongoing improvements in the development of search filters for systematic reviews since our first review [ 11 ]. We also wanted to include technology-related terms (e.g. apps, algorithms, machine learning, artificial intelligence) to find studies that explored interventions based on the use of technological innovations as mechanistic tools for increasing the use of evidence into practice (see Additional file 1 : Appendix A for full search strategy).

The search was performed in June 2022 in the following electronic databases: Medline, Embase, Cochrane and Epistemonikos. We searched for articles published since the 2011 review. We searched from January 2010 up to June 2022 and applied no language restrictions. Reference lists of relevant papers were also examined.

We uploaded the results using EPPI-Reviewer, a web-based tool that facilitated semi-automation of the screening process and removal of duplicate studies. We made particular use of a priority screening function to reduce screening workload and avoid ‘data deluge’ [ 12 ]. Through machine learning, one reviewer screened a smaller number of records ( n  = 1200) to train the software to predict whether a given record was more likely to be relevant or irrelevant, thus pulling the relevant studies towards the beginning of the screening process. This automation did not replace manual work but helped the reviewer to identify eligible studies more quickly. During the selection process, we included studies that looked explicitly at interventions designed to turn research evidence into practice. Studies were included if they met the following pre-determined inclusion criteria:

The study was a systematic review

Search terms were included

Focused on the implementation of research evidence into practice

The methodological quality of the included studies was assessed as part of the review
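The priority-screening step described above can be illustrated with a minimal relevance ranker. This is not EPPI-Reviewer's actual algorithm, which is not detailed here; it is a hypothetical naive Bayes sketch in which the batch of manually screened records trains a model that then sorts the remaining records so that likely-relevant ones surface first.

```python
from collections import Counter
import math

def train_relevance_model(labelled):
    # labelled: list of (text, is_relevant) pairs from the manually
    # screened training batch.
    rel, irr = Counter(), Counter()
    n_rel = n_irr = 0
    for text, is_relevant in labelled:
        tokens = text.lower().split()
        if is_relevant:
            rel.update(tokens)
            n_rel += 1
        else:
            irr.update(tokens)
            n_irr += 1
    vocab_size = len(set(rel) | set(irr))
    return rel, irr, n_rel, n_irr, vocab_size

def relevance_score(model, text):
    # Log-odds of relevance under naive Bayes with add-one smoothing.
    rel, irr, n_rel, n_irr, v = model
    score = math.log((n_rel + 1) / (n_irr + 1))
    t_rel, t_irr = sum(rel.values()), sum(irr.values())
    for tok in text.lower().split():
        score += math.log((rel[tok] + 1) / (t_rel + v))
        score -= math.log((irr[tok] + 1) / (t_irr + v))
    return score

def prioritise(model, unscreened):
    # Present the records most likely to be relevant first, pulling
    # eligible studies towards the beginning of manual screening.
    return sorted(unscreened, key=lambda t: relevance_score(model, t), reverse=True)
```

As in the review's workflow, such a ranker does not replace manual screening; it only reorders the queue so the reviewer reaches eligible studies sooner.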

Study populations included healthcare providers and patients. The EPOC taxonomy [ 13 ] was used to categorise the strategies. The EPOC taxonomy has four domains: delivery arrangements, financial arrangements, governance arrangements and implementation strategies. The implementation strategies domain includes 20 strategies targeted at healthcare workers. Numerous EPOC strategies were assessed in the review including educational strategies, local opinion leaders, reminders, ICT-focused approaches and audit and feedback. Some strategies that did not fit easily within the EPOC categories were also included. These were social media strategies and toolkits, and multi-faceted interventions (MFIs) (see Table  2 ). Some systematic reviews included comparisons of different interventions while other reviews compared one type of intervention against a control group. Outcomes related to improvements in health care processes or patient well-being. Numerous individual study types (RCT, CCT, BA, ITS) were included within the systematic reviews.
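As a rough illustration of how the reviewed strategies sit within (and outside) the EPOC taxonomy, the mapping can be encoded as a small lookup. The domain and strategy labels follow the text above; the dictionary layout and function are our own illustrative additions.

```python
# Strategies examined in this review, grouped by EPOC domain.
# Only the implementation strategies domain is populated here, since
# that is where the reviewed interventions fall.
EPOC_DOMAINS = {
    "delivery arrangements": [],
    "financial arrangements": [],
    "governance arrangements": [],
    "implementation strategies": [
        "educational strategies",
        "local opinion leaders",
        "reminders",
        "ICT-focused approaches",
        "audit and feedback",
    ],
}

# Categories used in the review that do not fit the EPOC taxonomy.
NON_EPOC = ["social media", "toolkits", "multi-faceted interventions (MFIs)"]

def categorise(strategy):
    # Return the EPOC domain for a strategy, or flag it as non-EPOC.
    for domain, strategies in EPOC_DOMAINS.items():
        if strategy in strategies:
            return domain
    return "non-EPOC" if strategy in NON_EPOC else "unknown"
```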

We excluded papers that:

Focused on changing patient rather than provider behaviour

Had no demonstrable outcomes

Made unclear or no reference to research evidence

The last of these criteria was sometimes difficult to judge, and there was considerable discussion amongst the research team as to whether the link between research evidence and practice was sufficiently explicit in the interventions analysed. As we discussed in the previous review [ 8 ], in the field of healthcare the principle of evidence-based practice is widely acknowledged, and tools to change behaviour such as guidelines are often seen as an implicit codification of evidence, even though this is not always the case.

Reviewers employed a two-stage process to select papers for inclusion. First, all titles and abstracts were screened by one reviewer to determine whether the study met the inclusion criteria. Two papers [ 14 , 15 ] were identified that fell just before the 2010 cut-off. As they were not identified in the searches for the first review [ 8 ], they were included and progressed to assessment. Each paper was rated as include, exclude or maybe. The full texts of 111 relevant papers were assessed independently by at least two authors. To reduce the risk of bias, papers were excluded following discussion between all members of the team. Thirty-two papers met the inclusion criteria and proceeded to data extraction. The study selection procedure is documented in a PRISMA literature flow diagram (see Fig.  1 ). We were able to include French, Spanish and Portuguese papers in the selection, reflecting the language skills in the study team, but none of the papers identified met the inclusion criteria. Other non-English language papers were excluded.

figure 1

PRISMA flow diagram. Source: authors

One reviewer extracted data on strategy type, number of included studies, locale, target population, effectiveness and scope of impact from the included studies. Two reviewers then independently read each paper and noted key findings and broad themes of interest, which were then discussed amongst the wider authorial team. Two independent reviewers appraised the quality of included studies using a Quality Assessment Checklist based on Oxman and Guyatt [ 16 ] and Francke et al. [ 17 ]. Each study was assigned a quality score ranging from 1 (extensive flaws) to 7 (minimal flaws) (see Additional file 2 : Appendix B). All disagreements were resolved through discussion. Studies were not excluded from this updated overview on the basis of methodological quality, as we aimed to reflect the full extent of current research into this topic.

The extracted data were synthesised using descriptive and narrative techniques to identify themes and patterns in the data linked to intervention strategies, targeted behaviours, study settings and study outcomes.

Thirty-two studies were included in the systematic review. Table 1 provides a detailed overview of the included systematic reviews comprising reference, strategy type, quality score, number of included studies, locale, target population, effectiveness and scope of impact (see Table  1 at the end of the manuscript). Overall, the quality of the studies was high. Twenty-three studies scored 7, six studies scored 6, one study scored 5, one study scored 4 and one study scored 3. The primary focus of the review was on reviews of effectiveness studies, but a small number of reviews did include data from a wider range of methods, including qualitative studies, which added to the analysis in the papers [ 18 , 19 , 20 , 21 ]. The majority of reviews report strategies achieving small impacts (normally on processes of care). There is much less evidence that these strategies have shifted patient outcomes. In this section, we discuss the different EPOC-defined implementation strategies in turn. Interestingly, we found only two ‘new’ approaches in this review that did not fit into the existing EPOC approaches: a review focused on the use of social media and a review considering toolkits. In addition to single interventions, we also discuss multi-faceted interventions. These were the most common intervention approach overall. A summary is provided in Table  2 .

Educational strategies

The overview identified three systematic reviews focusing on educational strategies. Grudniewicz et al. [ 22 ] explored the effectiveness of printed educational materials on primary care physician knowledge, behaviour and patient outcomes and concluded they were not effective in any of these aspects. Koota, Kääriäinen and Melender [ 23 ] focused on educational interventions promoting evidence-based practice among emergency room/accident and emergency nurses and found that interventions involving face-to-face contact led to significant or highly significant effects on patient benefits and emergency nurses’ knowledge, skills and behaviour. Interventions using written self-directed learning materials also led to significant improvements in nurses’ knowledge of evidence-based practice. Although the quality of the studies was high, the review primarily included small studies with low response rates, and many of them relied on self-assessed outcomes; consequently, the strength of the evidence for these outcomes is modest. Wu et al. [ 20 ] questioned if educational interventions aimed at nurses to support the implementation of evidence-based practice improve patient outcomes. Although based on evaluation projects and qualitative data, their results also suggest that positive changes on patient outcomes can be made following the implementation of specific evidence-based approaches (or projects). The differing positive outcomes for educational strategies aimed at nurses might indicate that the target audience is important.

Local opinion leaders

Flodgren et al. [ 24 ] was the only systematic review focusing solely on opinion leaders. The review found that local opinion leaders, alone or in combination with other interventions, probably improve healthcare professionals’ compliance with evidence-based practice, although this varies both within and between studies, and the effect on patient outcomes is uncertain. However, how opinion leaders had an impact could not be determined because insufficient details were provided, illustrating that reporting specific details in published studies is important if effective methods of increasing evidence-based practice are to be spread across a system. The usefulness of this review is limited because it cannot provide evidence of what makes an effective opinion leader, whether teams of opinion leaders or a single opinion leader are most effective, or the most effective methods used by opinion leaders.

Reminders

Pantoja et al. [ 26 ] was the only systematic review included in the overview focusing solely on manually generated reminders delivered on paper. The review explored how these affected professional practice and patient outcomes. The review concluded that manually generated reminders delivered on paper as a single intervention probably led to small to moderate increases in adherence to clinical recommendations, and they could be used as a single quality improvement intervention. However, the authors indicated that this intervention would make little or no difference to patient outcomes. The authors state that such a low-tech intervention may be useful in low- and middle-income countries where paper records are more likely to be the norm.

ICT-focused approaches

The three ICT-focused reviews [ 14 , 27 , 28 ] showed mixed results. Jamal, McKenzie and Clark [ 14 ] explored the impact of health information technology on the quality of medical and health care, examining the impact of electronic health records, computerised provider order entry and decision support systems. These showed a positive improvement in adherence to evidence-based guidelines but not in patient outcomes. The number of studies included in the review was low, so a conclusive recommendation could not be reached based on this review. Similarly, Brown et al. [ 28 ] found that technology-enabled knowledge translation interventions may improve the knowledge of health professionals, but all eight included studies raised concerns of bias. The De Angelis et al. [ 27 ] review was more promising, reporting that ICT can be a good way of disseminating clinical practice guidelines, but concluding that it is unclear which type of ICT method is the most effective.

Audit and feedback

Sykes, McAnuff and Kolehmainen [ 29 ] examined whether audit and feedback were effective in dementia care and concluded that it remains unclear which ingredients of audit and feedback are successful as the reviewed papers illustrated large variations in the effectiveness of interventions using audit and feedback.

Non-EPOC listed strategies: social media, toolkits

There were two new (non-EPOC listed) intervention types identified in this review compared to the 2011 review — fewer than anticipated. We categorised a third, ‘care bundles’ [ 36 ], as a multi-faceted intervention due to its description in practice, and a fourth, ‘Technology Enhanced Knowledge Transfer’ [ 28 ], as an ICT-focused approach. The first new strategy was identified in Bhatt et al.’s [ 30 ] systematic review of the use of social media for the dissemination of clinical practice guidelines. They reported that the use of social media resulted in a significant improvement in knowledge and compliance with evidence-based guidelines compared with more traditional methods. They noted that a wide selection of different healthcare professionals and patients engaged with this type of social media, and its global reach may be significant for low- and middle-income countries. This review was also noteworthy for developing a simple stepwise method for using social media for the dissemination of clinical practice guidelines. However, it is debatable whether social media can be classified as an intervention or just a different way of delivering an intervention. For example, the review discussed involving opinion leaders and patient advocates through social media. However, this was a small review that included only five studies, so further research in this new area is needed. Yamada et al. [ 31 ] draw on 39 studies to explore the application of toolkits, 18 of which had toolkits embedded within larger KT interventions, and 21 of which evaluated toolkits as standalone interventions. The individual component strategies of the toolkits were highly variable, though the authors suggest that they align most closely with educational strategies.
The authors conclude that toolkits, as either standalone strategies or as part of MFIs, hold some promise for facilitating evidence use in practice, but caution that the quality of many of the included primary studies is considered weak, limiting these findings.

Multi-faceted interventions

The majority of the systematic reviews ( n  = 20) reported on more than one intervention type. Some of these systematic reviews focus exclusively on multi-faceted interventions, whilst others compare different single or combined interventions aimed at achieving similar outcomes in particular settings. While these two approaches are often described in a similar way, they are actually quite distinct from each other as the former report how multiple strategies may be strategically combined in pursuance of an agreed goal, whilst the latter report how different strategies may be incidentally used in sometimes contrasting settings in the pursuance of similar goals. Ariyo et al. [ 35 ] helpfully summarise five key elements often found in effective MFI strategies in LMICs — but which may also be transferrable to HICs. First, effective MFIs encourage a multi-disciplinary approach acknowledging the roles played by different professional groups to collectively incorporate evidence-informed practice. Second, they utilise leadership drawing on a wide set of clinical and non-clinical actors including managers and even government officials. Third, multiple types of educational practices are utilised — including input from patients as stakeholders in some cases. Fourth, protocols, checklists and bundles are used — most effectively when local ownership is encouraged. Finally, most MFIs included an emphasis on monitoring and evaluation [ 35 ]. In contrast, other studies offer little information about the nature of the different MFI components of included studies which makes it difficult to extrapolate much learning from them in relation to why or how MFIs might affect practice (e.g. [ 28 , 38 ]). Ultimately, context matters, which some review authors argue makes it difficult to say with real certainty whether single or MFI strategies are superior (e.g. [ 21 , 27 ]). 
Taking all the systematic reviews together we may conclude that MFIs appear to be more likely to generate positive results than single interventions (e.g. [ 34 , 45 ]) though other reviews should make us cautious (e.g. [ 32 , 43 ]).

While multi-faceted interventions still seem to be more effective than single-strategy interventions, there were important distinctions between how the results of reviews of MFIs are interpreted in this review as compared to the previous reviews [ 8 , 9 ], reflecting greater nuance and debate in the literature. This was particularly noticeable where the effectiveness of MFIs was compared to single strategies, reflecting developments widely discussed in previous studies [ 10 ]. We found that most systematic reviews are bounded by their clinical, professional, spatial, system, or setting criteria and often seek to draw out implications for the implementation of evidence in their areas of specific interest (such as nursing or acute care). Frequently this means combining all relevant studies to explore the respective foci of each systematic review. Therefore, most reviews we categorised as MFIs actually include highly variable numbers and combinations of intervention strategies and highly heterogeneous original study designs. This makes statistical analyses of the type used by Squires et al. [ 10 ] on the three reviews in their paper not possible. Further, it also makes extrapolating findings and commenting on broad themes complex and difficult. This may suggest that future research should shift its focus from merely examining ‘what works’ to ‘what works where and what works for whom’ — perhaps pointing to the value of realist approaches to these complex review topics [ 48 , 49 ] and other more theory-informed approaches [ 50 ].

Some reviews have a relatively small number of studies (i.e. fewer than 10) and the authors are often understandably reluctant to engage with wider debates about the implications of their findings. Other larger studies do engage in deeper discussions about internal comparisons of findings across included studies and also contextualise these in wider debates. Some of the most informative studies (e.g. [ 35 , 40 ]) move beyond EPOC categories and contextualise MFIs within wider systems thinking and implementation theory. This distinction between MFIs and single interventions can actually be very useful as it offers lessons about the contexts in which individual interventions might have bounded effectiveness (i.e. educational interventions for individual change). Taken as a whole, this may also then help in terms of how and when to conjoin single interventions into effective MFIs.

In the two previous reviews, a consistent finding was that MFIs were more effective than single interventions [ 8 , 9 ]. However, like Squires et al. [ 10 ], this overview is more equivocal on this important issue. Four points may help account for the differences in findings in this regard. Firstly, the diversity of the systematic reviews in terms of clinical topic or setting is an important factor. Secondly, there is heterogeneity among the studies within the included systematic reviews themselves. Thirdly, there is a lack of consistency with regard to the definition of MFIs and the strategies included within them. Finally, there are epistemological differences across the papers and the reviews. This means that the results presented depend on the methods used to measure, report, and synthesise them. For instance, some reviews highlight that education strategies can be useful to improve provider understanding — but without wider organisational or system-level change, they may struggle to deliver sustained transformation [ 19 , 44 ].

It is also worth highlighting the importance of the theory of change underlying the different interventions. Where authors of the systematic reviews draw on theory, there is space to discuss and explain findings. We note a distinction between theoretical and atheoretical systematic review discussion sections. Atheoretical reviews tend to present acontextual findings (for instance, one study found very positive results for one intervention, and this gets highlighted in the abstract), whilst theoretically informed reviews attempt to contextualise and explain patterns within the included studies. Theory-informed systematic reviews seem more likely to offer profound and useful insights (see [ 19 , 35 , 40 , 43 , 45 ]). We find that the most insightful systematic reviews of MFIs engage in theoretical generalisation — they attempt to go beyond the data of individual studies and discuss the wider implications of their findings, drawing on implementation theory. At the same time, they highlight the active role of context and the wider relational and system-wide issues linked to implementation. It is these types of investigations that can help providers further develop evidence-based practice.

This overview has identified a small, but insightful set of papers that interrogate and help theorise why, how, for whom, and in which circumstances it might be the case that MFIs are superior (see [ 19 , 35 , 40 ] once more). At the level of this overview — and in most of the systematic reviews included — it appears to be the case that MFIs struggle with the question of attribution. In addition, there are other important elements that are often unmeasured, or unreported (e.g. costs of the intervention — see [ 40 ]). Finally, the stronger systematic reviews [ 19 , 35 , 40 , 43 , 45 ] engage with systems issues, human agency and context [ 18 ] in a way that was not evident in the systematic reviews identified in the previous reviews [ 8 , 9 ]. The earlier reviews lacked any theory of change that might explain why MFIs might be more effective than single ones — whereas now some systematic reviews do this, which enables them to conclude that sometimes single interventions can still be more effective.

As Nilsen et al. ([ 6 ] p. 7) note ‘Study findings concerning the effectiveness of various approaches are continuously synthesized and assembled in systematic reviews’. We may have gone as far as we can in understanding the implementation of evidence through systematic reviews of single and multi-faceted interventions and the next step would be to conduct more research exploring the complex and situated nature of evidence used in clinical practice and by particular professional groups. This would further build on the nuanced discussion and conclusion sections in a subset of the papers we reviewed. This might also support the field to move away from isolating individual implementation strategies [ 6 ] to explore the complex processes involving a range of actors with differing capacities [ 51 ] working in diverse organisational cultures. Taxonomies of implementation strategies do not fully account for the complex process of implementation, which involves a range of different actors with different capacities and skills across multiple system levels. There is plenty of work to build on, particularly in the social sciences, which currently sits at the margins of debates about evidence implementation (see for example, Normalisation Process Theory [ 52 ]).

There are several changes that we have identified in this overview of systematic reviews in comparison to the review we published in 2011 [ 8 ]. A consistent and welcome finding is that the overall quality of the systematic reviews themselves appears to have improved between the two reviews, although this is not reflected upon in the papers. This is exhibited through better, clearer reporting of the mechanics of the reviews, alongside greater attention to, and deeper description of, how potential biases in included papers are discussed. Additionally, there is an increased, but still limited, inclusion of original studies conducted in low- and middle-income countries rather than just high-income countries. Importantly, we found that many of these systematic reviews are attuned to, and comment upon, the contextual distinctions of pursuing evidence-informed interventions in health care settings in different economic contexts. Furthermore, systematic reviews included in this updated article cover a wider set of clinical specialities (both within and beyond hospital settings) and a wider set of healthcare professions than the earlier reviews — discussing similarities, differences and inter-professional challenges faced therein. This wider range of studies highlights that a particular intervention or group of interventions may work well for one professional group but be ineffective for another. This diversity of study settings allows us to consider the important role context (in its many forms) plays in implementing evidence into practice. Examining the complex and varied context of health care will help us address what Nilsen et al. ([ 6 ] p. 1) described as, ‘society’s health problems [that] require research-based knowledge acted on by healthcare practitioners together with implementation of political measures from governmental agencies’.
This will help us shift implementation science to move, ‘beyond a success or failure perspective towards improved analysis of variables that could explain the impact of the implementation process’ ([ 6 ] p. 2).

This review brings together 32 papers considering individual and multi-faceted interventions designed to support the use of evidence in clinical practice. The majority of reviews report strategies achieving small impacts (normally on processes of care). There is much less evidence that these strategies have shifted patient outcomes. Together with the two previous reviews, this brings the total to 86 systematic reviews of strategies to increase the implementation of research into clinical practice. As a whole, this substantial body of knowledge struggles to tell us more about the use of individual and multi-faceted interventions than: ‘it depends’. To really move forwards in addressing the gap between research evidence and practice, we may need to shift the emphasis away from isolating individual and multi-faceted interventions towards better understanding, and building, more situated, relational and organisational capability to support the use of research in clinical practice. This will involve drawing on a wider range of perspectives, especially from the social, economic, political and behavioural sciences, in primary studies, and diversifying the types of synthesis undertaken to include approaches such as realist synthesis, which facilitate exploration of the context in which strategies are employed. Harvey et al. [ 53 ] suggest that when context is likely to be critical to implementation success, a range of primary research approaches (participatory research, realist evaluation, developmental evaluation, ethnography, quality/rapid-cycle improvement) are likely to be appropriate and insightful. While these approaches often form part of implementation studies in the form of process evaluations, they are usually relatively small scale in relation to implementation research as a whole. As a result, their findings often do not make it into subsequent systematic reviews.
This review provides further evidence that we need to bring qualitative approaches in from the periphery to play a central role in many implementation studies and subsequent evidence syntheses. It would be helpful for systematic reviews, at the very least, to include more detail about the interventions and their implementation in terms of how and why they worked.

Availability of data and materials

The datasets used and/or analysed during the current study are available from the corresponding author on reasonable request.

Abbreviations

BA: Before and after study

CCT: Controlled clinical trial

EPOC: Effective Practice and Organisation of Care

HIC: High-income countries

ICT: Information and Communications Technology

ITS: Interrupted time series

KT: Knowledge translation

LMIC: Low- and middle-income countries

RCT: Randomised controlled trial

Grol R, Grimshaw J. From best evidence to best practice: effective implementation of change in patients’ care. Lancet. 2003;362:1225–30. https://doi.org/10.1016/S0140-6736(03)14546-1 .

Green LA, Seifert CM. Translation of research into practice: why we can’t “just do it.” J Am Board Fam Pract. 2005;18:541–5. https://doi.org/10.3122/jabfm.18.6.541 .

Eccles MP, Mittman BS. Welcome to Implementation Science. Implement Sci. 2006;1:1–3. https://doi.org/10.1186/1748-5908-1-1 .

Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:2–14. https://doi.org/10.1186/s13012-015-0209-1 .

Waltz TJ, Powell BJ, Matthieu MM, Damschroder LJ, et al. Use of concept mapping to characterize relationships among implementation strategies and assess their feasibility and importance: results from the Expert Recommendations for Implementing Change (ERIC) study. Implement Sci. 2015;10:1–8. https://doi.org/10.1186/s13012-015-0295-0 .

Nilsen P, Ståhl C, Roback K, et al. Never the twain shall meet? - a comparison of implementation science and policy implementation research. Implementation Sci. 2013;8:2–12. https://doi.org/10.1186/1748-5908-8-63 .

Rycroft-Malone J, Seers K, Eldh AC, et al. A realist process evaluation within the Facilitating Implementation of Research Evidence (FIRE) cluster randomised controlled international trial: an exemplar. Implementation Sci. 2018;13:1–15. https://doi.org/10.1186/s13012-018-0811-0 .

Boaz A, Baeza J, Fraser A, European Implementation Score Collaborative Group (EIS). Effective implementation of research into practice: an overview of systematic reviews of the health literature. BMC Res Notes. 2011;4:212. https://doi.org/10.1186/1756-0500-4-212 .

Grimshaw JM, Shirran L, Thomas R, Mowatt G, Fraser C, Bero L, et al. Changing provider behavior – an overview of systematic reviews of interventions. Med Care. 2001;39(8 Suppl 2):II2–45.

Squires JE, Sullivan K, Eccles MP, et al. Are multifaceted interventions more effective than single-component interventions in changing health-care professionals’ behaviours? An overview of systematic reviews. Implement Sci. 2014;9:1–22. https://doi.org/10.1186/s13012-014-0152-6 .

Salvador-Oliván JA, Marco-Cuenca G, Arquero-Avilés R. Development of an efficient search filter to retrieve systematic reviews from PubMed. J Med Libr Assoc. 2021;109:561–74. https://doi.org/10.5195/jmla.2021.1223 .

Thomas JM. Diffusion of innovation in systematic review methodology: why is study selection not yet assisted by automation? OA Evid Based Med. 2013;1:1–6.

Effective Practice and Organisation of Care (EPOC). The EPOC taxonomy of health systems interventions. EPOC Resources for review authors. Oslo: Norwegian Knowledge Centre for the Health Services; 2016. epoc.cochrane.org/epoc-taxonomy . Accessed 9 Oct 2023.

Jamal A, McKenzie K, Clark M. The impact of health information technology on the quality of medical and health care: a systematic review. Health Inf Manag. 2009;38:26–37. https://doi.org/10.1177/183335830903800305 .

Menon A, Korner-Bitensky N, Kastner M, et al. Strategies for rehabilitation professionals to move evidence-based knowledge into practice: a systematic review. J Rehabil Med. 2009;41:1024–32. https://doi.org/10.2340/16501977-0451 .

Oxman AD, Guyatt GH. Validation of an index of the quality of review articles. J Clin Epidemiol. 1991;44:1271–8. https://doi.org/10.1016/0895-4356(91)90160-b .

Francke AL, Smit MC, de Veer AJ, et al. Factors influencing the implementation of clinical guidelines for health care professionals: a systematic meta-review. BMC Med Inform Decis Mak. 2008;8:1–11. https://doi.org/10.1186/1472-6947-8-38 .

Jones CA, Roop SC, Pohar SL, et al. Translating knowledge in rehabilitation: systematic review. Phys Ther. 2015;95:663–77. https://doi.org/10.2522/ptj.20130512 .

Scott D, Albrecht L, O’Leary K, Ball GDC, et al. Systematic review of knowledge translation strategies in the allied health professions. Implement Sci. 2012;7:1–17. https://doi.org/10.1186/1748-5908-7-70 .

Wu Y, Brettle A, Zhou C, Ou J, et al. Do educational interventions aimed at nurses to support the implementation of evidence-based practice improve patient outcomes? A systematic review. Nurse Educ Today. 2018;70:109–14. https://doi.org/10.1016/j.nedt.2018.08.026 .

Yost J, Ganann R, Thompson D, Aloweni F, et al. The effectiveness of knowledge translation interventions for promoting evidence-informed decision-making among nurses in tertiary care: a systematic review and meta-analysis. Implement Sci. 2015;10:1–15. https://doi.org/10.1186/s13012-015-0286-1 .

Grudniewicz A, Kealy R, Rodseth RN, Hamid J, et al. What is the effectiveness of printed educational materials on primary care physician knowledge, behaviour, and patient outcomes: a systematic review and meta-analyses. Implement Sci. 2015;10:2–12. https://doi.org/10.1186/s13012-015-0347-5 .

Koota E, Kääriäinen M, Melender HL. Educational interventions promoting evidence-based practice among emergency nurses: a systematic review. Int Emerg Nurs. 2018;41:51–8. https://doi.org/10.1016/j.ienj.2018.06.004 .

Flodgren G, O’Brien MA, Parmelli E, et al. Local opinion leaders: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2019. https://doi.org/10.1002/14651858.CD000125.pub5 .

Arditi C, Rège-Walther M, Durieux P, et al. Computer-generated reminders delivered on paper to healthcare professionals: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2017. https://doi.org/10.1002/14651858.CD001175.pub4 .

Pantoja T, Grimshaw JM, Colomer N, et al. Manually-generated reminders delivered on paper: effects on professional practice and patient outcomes. Cochrane Database Syst Rev. 2019. https://doi.org/10.1002/14651858.CD001174.pub4 .

De Angelis G, Davies B, King J, McEwan J, et al. Information and communication technologies for the dissemination of clinical practice guidelines to health professionals: a systematic review. JMIR Med Educ. 2016;2:e16. https://doi.org/10.2196/mededu.6288 .

Brown A, Barnes C, Byaruhanga J, McLaughlin M, et al. Effectiveness of technology-enabled knowledge translation strategies in improving the use of research in public health: systematic review. J Med Internet Res. 2020;22:e17274. https://doi.org/10.2196/17274 .

Sykes MJ, McAnuff J, Kolehmainen N. When is audit and feedback effective in dementia care? A systematic review. Int J Nurs Stud. 2018;79:27–35. https://doi.org/10.1016/j.ijnurstu.2017.10.013 .

Bhatt NR, Czarniecki SW, Borgmann H, et al. A systematic review of the use of social media for dissemination of clinical practice guidelines. Eur Urol Focus. 2021;7:1195–204. https://doi.org/10.1016/j.euf.2020.10.008 .

Yamada J, Shorkey A, Barwick M, Widger K, et al. The effectiveness of toolkits as knowledge translation strategies for integrating evidence into clinical care: a systematic review. BMJ Open. 2015;5:e006808. https://doi.org/10.1136/bmjopen-2014-006808 .

Afari-Asiedu S, Abdulai MA, Tostmann A, et al. Interventions to improve dispensing of antibiotics at the community level in low and middle income countries: a systematic review. J Glob Antimicrob Resist. 2022;29:259–74. https://doi.org/10.1016/j.jgar.2022.03.009 .

Boonacker CW, Hoes AW, Dikhoff MJ, Schilder AG, et al. Interventions in health care professionals to improve treatment in children with upper respiratory tract infections. Int J Pediatr Otorhinolaryngol. 2010;74:1113–21. https://doi.org/10.1016/j.ijporl.2010.07.008 .

Al Zoubi FM, Menon A, Mayo NE, et al. The effectiveness of interventions designed to increase the uptake of clinical practice guidelines and best practices among musculoskeletal professionals: a systematic review. BMC Health Serv Res. 2018;18:2–11. https://doi.org/10.1186/s12913-018-3253-0 .

Ariyo P, Zayed B, Riese V, Anton B, et al. Implementation strategies to reduce surgical site infections: a systematic review. Infect Control Hosp Epidemiol. 2019;3:287–300. https://doi.org/10.1017/ice.2018.355 .

Borgert MJ, Goossens A, Dongelmans DA. What are effective strategies for the implementation of care bundles on ICUs: a systematic review. Implement Sci. 2015;10:1–11. https://doi.org/10.1186/s13012-015-0306-1 .

Cahill LS, Carey LM, Lannin NA, et al. Implementation interventions to promote the uptake of evidence-based practices in stroke rehabilitation. Cochrane Database Syst Rev. 2020. https://doi.org/10.1002/14651858.CD012575.pub2 .

Pedersen ER, Rubenstein L, Kandrack R, Danz M, et al. Elusive search for effective provider interventions: a systematic review of provider interventions to increase adherence to evidence-based treatment for depression. Implement Sci. 2018;13:1–30. https://doi.org/10.1186/s13012-018-0788-8 .

Jenkins HJ, Hancock MJ, French SD, Maher CG, et al. Effectiveness of interventions designed to reduce the use of imaging for low-back pain: a systematic review. CMAJ. 2015;187:401–8. https://doi.org/10.1503/cmaj.141183 .

Bennett S, Laver K, MacAndrew M, Beattie E, et al. Implementation of evidence-based, non-pharmacological interventions addressing behavior and psychological symptoms of dementia: a systematic review focused on implementation strategies. Int Psychogeriatr. 2021;33:947–75. https://doi.org/10.1017/S1041610220001702 .

Noonan VK, Wolfe DL, Thorogood NP, et al. Knowledge translation and implementation in spinal cord injury: a systematic review. Spinal Cord. 2014;52:578–87. https://doi.org/10.1038/sc.2014.62 .

Albrecht L, Archibald M, Snelgrove-Clarke E, et al. Systematic review of knowledge translation strategies to promote research uptake in child health settings. J Pediatr Nurs. 2016;31:235–54. https://doi.org/10.1016/j.pedn.2015.12.002 .

Campbell A, Louie-Poon S, Slater L, et al. Knowledge translation strategies used by healthcare professionals in child health settings: an updated systematic review. J Pediatr Nurs. 2019;47:114–20. https://doi.org/10.1016/j.pedn.2019.04.026 .

Bird ML, Miller T, Connell LA, et al. Moving stroke rehabilitation evidence into practice: a systematic review of randomized controlled trials. Clin Rehabil. 2019;33:1586–95. https://doi.org/10.1177/0269215519847253 .

Goorts K, Dizon J, Milanese S. The effectiveness of implementation strategies for promoting evidence informed interventions in allied healthcare: a systematic review. BMC Health Serv Res. 2021;21:1–11. https://doi.org/10.1186/s12913-021-06190-0 .

Zadro JR, O’Keeffe M, Allison JL, Lembke KA, et al. Effectiveness of implementation strategies to improve adherence of physical therapist treatment choices to clinical practice guidelines for musculoskeletal conditions: systematic review. Phys Ther. 2020;100:1516–41. https://doi.org/10.1093/ptj/pzaa101 .

Van der Veer SN, Jager KJ, Nache AM, et al. Translating knowledge on best practice into improving quality of RRT care: a systematic review of implementation strategies. Kidney Int. 2011;80:1021–34. https://doi.org/10.1038/ki.2011.222 .

Pawson R, Greenhalgh T, Harvey G, et al. Realist review – a new method of systematic review designed for complex policy interventions. J Health Serv Res Policy. 2005;10(Suppl 1):21–34. https://doi.org/10.1258/1355819054308530 .

Rycroft-Malone J, McCormack B, Hutchinson AM, et al. Realist synthesis: illustrating the method for implementation research. Implementation Sci. 2012;7:1–10. https://doi.org/10.1186/1748-5908-7-33 .

Johnson MJ, May CR. Promoting professional behaviour change in healthcare: what interventions work, and why? A theory-led overview of systematic reviews. BMJ Open. 2015;5:e008592. https://doi.org/10.1136/bmjopen-2015-008592 .

Metz A, Jensen T, Farley A, Boaz A, et al. Is implementation research out of step with implementation practice? Pathways to effective implementation support over the last decade. Implement Res Pract. 2022;3:1–11. https://doi.org/10.1177/26334895221105585 .

May CR, Finch TL, Cornford J, Exley C, et al. Integrating telecare for chronic disease management in the community: What needs to be done? BMC Health Serv Res. 2011;11:1–11. https://doi.org/10.1186/1472-6963-11-131 .

Harvey G, Rycroft-Malone J, Seers K, Wilson P, et al. Connecting the science and practice of implementation – applying the lens of context to inform study design in implementation research. Front Health Serv. 2023;3:1–15. https://doi.org/10.3389/frhs.2023.1162762 .

Acknowledgements

The authors would like to thank Professor Kathryn Oliver for her support in planning the review, Professor Steve Hanney for reading and commenting on the final manuscript, and the staff at the LSHTM library for their support in planning and conducting the literature search.

This study was supported by LSHTM’s Research England QR strategic priorities funding allocation and the National Institute for Health and Care Research (NIHR) Applied Research Collaboration South London (NIHR ARC South London) at King’s College Hospital NHS Foundation Trust. Grant number NIHR200152. The views expressed are those of the author(s) and not necessarily those of the NIHR, the Department of Health and Social Care or Research England.

Author information

Authors and Affiliations

Health and Social Care Workforce Research Unit, The Policy Institute, King’s College London, Virginia Woolf Building, 22 Kingsway, London, WC2B 6LE, UK

Annette Boaz

King’s Business School, King’s College London, 30 Aldwych, London, WC2B 4BG, UK

Juan Baeza & Alec Fraser

Federal University of Santa Catarina (UFSC), Campus Universitário Reitor João Davi Ferreira Lima, Florianópolis, SC, 88.040-900, Brazil

Erik Persson

Contributions

AB led the conceptual development and structure of the manuscript. EP conducted the searches and data extraction. All authors contributed to screening and quality appraisal. EP and AF wrote the first draft of the methods section. AB, JB and AF performed result synthesis and contributed to the analyses. AB wrote the first draft of the manuscript and incorporated feedback and revisions from all other authors. All authors revised and approved the final manuscript.

Corresponding author

Correspondence to Annette Boaz .

Ethics declarations

Ethics approval and consent to participate

Not applicable.

Consent for publication

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1: Appendix A.

Additional file 2: Appendix B.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Boaz, A., Baeza, J., Fraser, A. et al. ‘It depends’: what 86 systematic reviews tell us about what strategies to use to support the use of research in clinical practice. Implementation Sci 19, 15 (2024). https://doi.org/10.1186/s13012-024-01337-z

Received : 01 November 2023

Accepted : 05 January 2024

Published : 19 February 2024

DOI : https://doi.org/10.1186/s13012-024-01337-z


Keywords

  • Implementation
  • Interventions
  • Clinical practice
  • Research evidence
  • Multi-faceted

Implementation Science

ISSN: 1748-5908


SlideTeam

Powerpoint Templates and Google slides for Clinical Study

Save your time and attract your audience with our fully editable PPT templates and slides.

Case Study Valuable Clinical Decision Support Transforming Industries With AI ML And NLP Strategy

The purpose of this slide is to showcase the integration of AI technology in the healthcare industry. It includes attributes such as challenges faced, solutions delivered, and impact. Increase audience engagement and knowledge by dispensing information using Case Study Valuable Clinical Decision Support Transforming Industries With AI ML And NLP Strategy. This template helps you present information on three stages. You can also present information on Information, Decision, Knowledge using this PPT design. This layout is completely editable, so personalize it now to meet your audience's expectations.

Budget Benchmark For Oncology Clinical Studies At Global Level

The slide showcases budget benchmarking to gauge and improve financial results by selecting few operating ratios by which company wants to measure its results and manage assets. Introducing our Budget Benchmark For Oncology Clinical Studies At Global Level set of slides. The topics discussed in these slides are Budget Benchmark, Oncology Clinical Studies, Global Level. This is an immediately available PowerPoint presentation that can be conveniently customized. Download it and convince your audience.

Case Study Clinical Medicine Research Company Profile

The slide showcases the challenge faced by a healthcare business in developing digital capabilities. The company leveraged a technology partner to develop and deploy a customized cloud-based patient management platform for easy data access with added security. Present the topic in a bit more detail with this Case Study Clinical Medicine Research Company Profile. Use it as a tool for discussion and navigation on Challenge, Solution, Manufacturing. This template is free to edit as deemed fit for your organization. Therefore download it now.

Clinical monitoring patient recruitment study design operational planning cpb

Presenting this set of slides with name - Clinical Monitoring Patient Recruitment Study Design Operational Planning Cpb. This is an editable four stages graphic that deals with topics like Clinical Monitoring, Patient Recruitment, Study Design Operational Planning to help convey your message better graphically. This product is a premium product available for immediate download, and is 100 percent editable in Powerpoint. Download this now and use it in your presentations to impress your audience.

Clinical studies development ppt powerpoint presentation styles infographics

Presenting this set of slides with name Clinical Studies Development Ppt Powerpoint Presentation Styles Infographics. The topics discussed in these slides are Clinical Studies Development. This is a completely editable PowerPoint presentation and is available for immediate download. Download now and impress your audience.

Clinical studies onychomycosis ppt powerpoint presentation outline visual aids


Weill Cornell Medicine

Less Invasive Early Lung Cancer Study Receives Top 10 Clinical Research Achievement Award


Dr. Nasser Altorki. Credit: Tiffany Walling/Getty Images for WCM. 

A Weill Cornell Medicine-led research team has been awarded a 2024 Top 10 Clinical Research Achievement Award from the Clinical Research Forum in recognition of an influential 2023 New England Journal of Medicine study on early-stage lung cancer resection.

The award is one of 10 given annually by the Clinical Research Forum for highly innovative and clinically translatable research with the potential to provide major benefits to patients. The Washington, D.C.-based organization is an influential advocate for government funding of clinical research and the interests of American clinical research institutions generally. The winners will present their award-winning research April 4 at the Clinical Research Forum’s annual meeting in Las Vegas.

The clinical trial results were published Feb. 9, 2023, by a team led by Dr. Nasser Altorki, chief of the Division of Thoracic Surgery in the Department of Cardiothoracic Surgery at Weill Cornell Medicine and NewYork-Presbyterian/Weill Cornell Medical Center, and co-investigators from Duke University as well as investigators from 83 hospitals across the United States, Canada and Australia. The trial found that a surgery that removes only a portion of one of the five lobes that comprise a lung is as effective as removing an entire lobe for certain early-stage lung cancer patients.

“This award means a lot to me, as it recognizes an important advance in the surgical treatment of patients with early-stage lung cancer,” said Dr. Altorki, who is also the David B. Skinner, M.D. Professor of Thoracic Surgery and a professor of cardiothoracic surgery at Weill Cornell Medicine, and a thoracic surgeon at NewYork-Presbyterian/Weill Cornell Medical Center. “I think the award also recognizes the contribution of Weill Cornell Medicine and NewYork-Presbyterian to cooperative group trials supported by the National Cancer Institute.”

In the trial, investigators compared outcomes for nearly 700 patients with early-stage lung cancer, about half of whom were randomly assigned to “lobectomy” surgery, which removes the whole lobe, while the other half had “sublobar resection” surgery, which removes part of the affected lobe. Over a median follow-up period of seven years after surgery, the two groups did not differ significantly in terms of disease-free or overall survival, and the sublobar group had modestly better lung function.
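The engine of this design is random allocation: because chance alone decides each patient's arm, the two groups should be comparable apart from the surgery they receive. A minimal sketch of that allocation step in Python (patient IDs and helper names here are illustrative, not taken from the trial's protocol):

```python
import random

def randomize(ids, seed=42):
    """Shuffle patient IDs and split them into two roughly equal arms."""
    rng = random.Random(seed)   # fixed seed so the split is reproducible
    shuffled = list(ids)
    rng.shuffle(shuffled)
    mid = len(shuffled) // 2
    return shuffled[:mid], shuffled[mid:]

def event_rate(events):
    """Fraction of patients in an arm with an event (e.g. recurrence)."""
    return sum(events) / len(events)

# Hypothetical cohort of 700 patients, split into two arms.
lobectomy_arm, sublobar_arm = randomize(range(700))
```

With comparable groups, contrasting outcome rates between arms (or, as in the trial, disease-free and overall survival over follow-up) isolates the effect of the surgery itself.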

Lobectomy has been the standard approach for early-stage lung cancer surgery for almost 50 years, but the study’s results indicate that a subset of early-stage lung cancer patients would be better off, or at least no worse, with the more tissue-conserving sublobar surgery.

“We started the trial in 2007 and it took about 10 years to complete,” said Dr. Altorki, who is also a member of the Sandra and Edward Meyer Cancer Center at Weill Cornell Medicine. “We then had to wait until we got the results, which unexpectedly came in May of 2022. They were amazing results, and it was worth the wait, and it changed practice.”


Stanford Medicine study identifies distinct brain organization patterns in women and men

Stanford Medicine researchers have developed a powerful new artificial intelligence model that can distinguish between male and female brains.

February 20, 2024


A new study by Stanford Medicine investigators unveils an artificial intelligence model that was more than 90% successful at determining whether scans of brain activity came from a woman or a man.

The findings, published Feb. 20 in the Proceedings of the National Academy of Sciences, help resolve a long-term controversy about whether reliable sex differences exist in the human brain and suggest that understanding these differences may be critical to addressing neuropsychiatric conditions that affect women and men differently.

“A key motivation for this study is that sex plays a crucial role in human brain development, in aging, and in the manifestation of psychiatric and neurological disorders,” said Vinod Menon, PhD, professor of psychiatry and behavioral sciences and director of the Stanford Cognitive and Systems Neuroscience Laboratory. “Identifying consistent and replicable sex differences in the healthy adult brain is a critical step toward a deeper understanding of sex-specific vulnerabilities in psychiatric and neurological disorders.”

Menon is the study’s senior author. The lead authors are senior research scientist Srikanth Ryali, PhD, and academic staff researcher Yuan Zhang, PhD.

“Hotspots” that most helped the model distinguish male brains from female ones include the default mode network, a brain system that helps us process self-referential information, and the striatum and limbic network, which are involved in learning and how we respond to rewards.

The investigators noted that this work does not weigh in on whether sex-related differences arise early in life or may be driven by hormonal differences or the different societal circumstances that men and women may be more likely to encounter.

Uncovering brain differences

The extent to which a person’s sex affects how their brain is organized and operates has long been a point of dispute among scientists. While we know the sex chromosomes we are born with help determine the cocktail of hormones our brains are exposed to — particularly during early development, puberty and aging — researchers have long struggled to connect sex to concrete differences in the human brain. Brain structures tend to look much the same in men and women, and previous research examining how brain regions work together has also largely failed to turn up consistent brain indicators of sex.


In their current study, Menon and his team took advantage of recent advances in artificial intelligence, as well as access to multiple large datasets, to pursue a more powerful analysis than has previously been employed. First, they created a deep neural network model, which learns to classify brain imaging data: As the researchers showed brain scans to the model and told it that it was looking at a male or female brain, the model started to “notice” what subtle patterns could help it tell the difference.

This model demonstrated superior performance compared with those in previous studies, in part because it used a deep neural network that analyzes dynamic MRI scans. This approach captures the intricate interplay among different brain regions. When the researchers tested the model on around 1,500 brain scans, it could almost always tell if the scan came from a woman or a man.
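The study's model is a deep neural network over dynamic MRI data, which is well beyond a few lines of code; as a stand-in for the general workflow it follows — a supervised classifier trained on labeled, imaging-derived feature vectors — a minimal sketch on synthetic data might look like this (everything here, from the feature counts to the mean shift, is invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for imaging-derived features: 400 "scans" with 20
# features each; class 1 gets a small mean shift, mimicking subtle
# group differences. (Illustrative data only, not real brain scans.)
n, d = 400, 20
X = rng.normal(size=(n, d))
y = rng.integers(0, 2, size=n)
X[y == 1] += 0.7

def train_logistic(X, y, lr=0.1, steps=500):
    """Fit a logistic-regression classifier by plain gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(class 1)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

w, b = train_logistic(X, y)
train_acc = np.mean(((X @ w + b) > 0) == y)
```

The real model differs in architecture and input, but the pattern is the same: show labeled examples, adjust parameters to reduce error, then measure accuracy (here on the training data for brevity; the study evaluated on independent datasets).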

The model’s success suggests that detectable sex differences do exist in the brain but just haven’t been picked up reliably before. The fact that it worked so well in different datasets, including brain scans from multiple sites in the U.S. and Europe, makes the findings especially convincing, as it controls for many confounds that can plague studies of this kind.

“This is a very strong piece of evidence that sex is a robust determinant of human brain organization,” Menon said.

Making predictions

Until recently, a model like the one Menon’s team employed would help researchers sort brains into different groups but wouldn’t provide information about how the sorting happened. Today, however, researchers have access to a tool called “explainable AI,” which can sift through vast amounts of data to explain how a model’s decisions are made.

Using explainable AI, Menon and his team identified the brain networks that were most important to the model’s judgment of whether a brain scan came from a man or a woman. They found the model was most often looking to the default mode network, striatum, and the limbic network to make the call.
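One common explainable-AI technique — not necessarily the exact method the team used — is permutation importance: shuffle a single feature's values and measure how much the model's accuracy drops. A self-contained sketch on synthetic data, where only the first feature carries the class signal:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: only feature 0 determines the label (plus slight noise).
n = 500
X = rng.normal(size=(n, 5))
y = (X[:, 0] + 0.1 * rng.normal(size=n) > 0).astype(int)

def accuracy(X, y):
    """Accuracy of a fixed stand-in 'model' that thresholds feature 0."""
    return np.mean((X[:, 0] > 0).astype(int) == y)

def permutation_importance(X, y, feature):
    """Drop in accuracy after shuffling one feature column."""
    Xp = X.copy()
    Xp[:, feature] = rng.permutation(Xp[:, feature])
    return accuracy(X, y) - accuracy(Xp, y)
```

Shuffling feature 0 destroys the stand-in model's accuracy, yielding a large importance score, while shuffling a feature the model ignores changes nothing; the same logic, scaled up, is what points to brain networks like the default mode network as a model's "hotspots".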

The team then wondered if they could create another model that could predict how well participants would do on certain cognitive tasks based on functional brain features that differ between women and men. They developed sex-specific models of cognitive abilities: One model effectively predicted cognitive performance in men but not women, and another in women but not men. The findings indicate that functional brain characteristics varying between sexes have significant behavioral implications.

“These models worked really well because we successfully separated brain patterns between sexes,” Menon said. “That tells me that overlooking sex differences in brain organization could lead us to miss key factors underlying neuropsychiatric disorders.”

While the team applied their deep neural network model to questions about sex differences, Menon says the model can be applied to answer questions regarding how just about any aspect of brain connectivity might relate to any kind of cognitive ability or behavior. He and his team plan to make their model publicly available for any researcher to use.

“Our AI models have very broad applicability,” Menon said. “A researcher could use our models to look for brain differences linked to learning impairments or social functioning differences, for instance — aspects we are keen to understand better to aid individuals in adapting to and surmounting these challenges.”

The research was sponsored by the National Institutes of Health (grants MH084164, EB022907, MH121069, K25HD074652 and AG072114), the Transdisciplinary Initiative, the Uytengsu-Hamilton 22q11 Programs, the Stanford Maternal and Child Health Research Institute, and the NARSAD Young Investigator Award.

About Stanford Medicine

Stanford Medicine is an integrated academic health system comprising the Stanford School of Medicine and adult and pediatric health care delivery systems. Together, they harness the full potential of biomedicine through collaborative research, education and clinical care for patients. For more information, please visit med.stanford.edu .



An ambitious NIH study has brought new attention to chronic fatigue syndrome

Long COVID has brought new attention to how complex chronic illnesses can develop in the aftermath of a viral infection. Prior research may help pave the way for clinical trials to test possible treatments.

AILSA CHANG, HOST:

The condition often called chronic fatigue syndrome was neglected for decades and it still has no proven treatments. Now the results of an ambitious study from the National Institutes of Health are bringing new attention to the condition, NPR's Will Stone reports.

WILL STONE, BYLINE: Like many patients, Sanna Stella traces her illness back to a cold, in this case bronchitis, that she came down with nearly 10 years ago.

SANNA STELLA: Within a month, I was unable to make it, really, from the sofa to the dining room table.

STONE: Eventually, she received her diagnosis, ME/CFS, short for myalgic encephalomyelitis - or chronic fatigue syndrome. Stella resolved to make herself as useful to science as possible, so when she was selected for an intensive study by the NIH, she was all in.

STELLA: The whole thing was pretty tough to do. I mean, after the first four or five days, I could only get to testing on a stretcher.

STONE: A pool of more than 200 patients was painstakingly narrowed down to only 17. The aim was to take the most detailed snapshot ever of the biological underpinnings of the illness. Now the findings are out.

AVINDRA NATH: It involves the brain, the gut, the immune system, the autonomic nervous system.

STONE: Dr. Avindra Nath is at the National Institute of Neurological Disorders and Stroke.

NATH: And the illness itself cannot be explained by deconditioning or psychological factors because we excluded patients who had those kinds of confounding problems.

STONE: The research stands out because of how deeply it probes the illness. There were biopsies, hours spent in tightly controlled metabolic chambers. Cutting-edge technology turned up irregularities in the immune system. In spinal fluid, the team found low levels of molecules that regulate the nervous system and link that to cognitive and physical symptoms.

NANCY KLIMAS: It was an amazing study.

STONE: Dr. Nancy Klimas studies ME/CFS at Nova Southeastern University in Florida.

KLIMAS: As thorough an evaluation as has ever been delivered (laughter) in any clinical study that I know of in any disease.

STONE: The NIH team made all its data available, which will provide plenty of fodder for future research. Klimas says one key takeaway...

KLIMAS: That this is a disease that comes from the brain.

STONE: The study took years to complete; one reason was the COVID-19 pandemic. Dr. Lucinda Bateman runs the Bateman Horne Center in Utah, which treats patients with chronic fatigue syndrome. She applauds the work but notes the limitations.

LUCINDA BATEMAN: These patients aren't necessarily as sick as many ME/CFS patients.

STONE: In one experiment, the team used brain imaging to show a certain region was not as active when patients with ME/CFS were completing a physical task. Dr. Anthony Komaroff at Harvard Medical School and Brigham and Women's Hospital found this intriguing.

ANTHONY KOMAROFF: It's like they're trying to swim against a current.

STONE: Komaroff says the study also turns up lots of evidence of chronic activation of the immune system.

KOMAROFF: As if the immune system was engaged in a long war against a foreign microbe, a war it couldn't completely win and therefore had to continue fighting.

STONE: This is one prominent hypothesis, both for chronic fatigue syndrome and long COVID, that there's an antigen, something the immune system can't clear. Maureen Hanson at Cornell University studies this line of evidence that was also seen in the NIH study. She says a chronic infection can lead to inflammation and immune dysfunction, including a problem with part of the immune system known as T cells.

MAUREEN HANSON: So you have what's called T cell exhaustion if you're continuously exposed to an antigen.

STONE: The study's authors suggest that drugs called checkpoint inhibitors could be tested for ME/CFS. Hanson says future research needs to focus on treatments.

HANSON: It's really imperative to start doing clinical trials for people who have been sick for decades.

STONE: And she hopes this study brings new urgency.

Will Stone, NPR News.

(SOUNDBITE OF THE BARR BROTHERS' "STATIC ORPHANS")

Copyright © 2024 NPR. All rights reserved. Visit our website terms of use and permissions pages at www.npr.org for further information.

NPR transcripts are created on a rush deadline by an NPR contractor. This text may not be in its final form and may be updated or revised in the future. Accuracy and availability may vary. The authoritative record of NPR’s programming is the audio record.

Part 1. Overview Information

National Institutes of Health ( NIH )

R01 Research Project Grant

  • August 31, 2022 - Implementation Changes for Genomic Data Sharing Plans Included with Applications Due on or after January 25, 2023. See Notice  NOT-OD-22-198 .
  • August 5, 2022 - Implementation Details for the NIH Data Management and Sharing Policy. See Notice  NOT-OD-22-189 .

See Section III. 3. Additional Information on Eligibility .

This notice of funding opportunity (NOFO) will support projects proposing mechanistic studies that will transform our understanding of polysubstance use in addiction. These hypothesis-based, exploratory projects may investigate mechanisms of polysubstance use at the behavioral, cognitive, cellular, circuit, genetic, epigenetic, pharmacological and/or computational levels.

This NOFO requires a plan for enhancing diverse perspectives (PEDP), which will be assessed as part of the scientific and technical peer review evaluation. Applications that fail to include a PEDP will be considered incomplete and will be withdrawn. Applicants are strongly encouraged to read the NOFO instructions carefully and view the available PEDP guidance material .  

June 17, 2024 

All applications are due by 5:00 PM local time of applicant organization. 

Applicants are encouraged to apply early to allow adequate time to make any corrections to errors found in the application during the submission process by the due date.

Not Applicable

It is critical that applicants follow the instructions in the Research (R) Instructions in the How to Apply - Application Guide , except where instructed to do otherwise (in this NOFO or in a Notice from NIH Guide for Grants and Contracts ).

Conformance to all requirements (both in the How to Apply - Application Guide and the NOFO) is required and strictly enforced. Applicants must read and follow all application instructions in the How to Apply - Application Guide as well as any program-specific instructions noted in Section IV. When the program-specific instructions deviate from those in the How to Apply - Application Guide , follow the program-specific instructions.

Applications that do not comply with these instructions may be delayed or not accepted for review.

There are several options available to submit your application through Grants.gov to NIH and Department of Health and Human Services partners. You must use one of these submission options to access the application forms for this opportunity.

  • Use the NIH ASSIST system to prepare, submit and track your application online.
  • Use an institutional system-to-system (S2S) solution to prepare and submit your application to Grants.gov and eRA Commons to track your application. Check with your institutional officials regarding availability.
  • Use Grants.gov Workspace to prepare and submit your application and eRA Commons to track your application.

Part 2. Full Text of Announcement

Section I. Notice of Funding Opportunity Description

Background: Research on substance use disorders (SUDs) has primarily focused on individual substances although polysubstance use is prevalent. Polysubstance use is the use of more than one addictive substance within a defined interval; the use may be sequential (use of multiple substances on separate occasions), or concurrent/simultaneous. Limiting studies to an individual addictive substance overlooks potential interactions between substances and could influence the translational potential of preclinical research findings.

Results from several studies have demonstrated that the use of multiple addictive substances produces pharmacokinetic and behavioral profiles that are distinct from those produced by a single substance. Despite this recognition, little is known about the precise pharmacological mechanisms and interactions that may contribute to such outcomes, or co-morbidities resulting from co-use. There is also a significant gap in our understanding of how the activity of discrete cells, genes, circuits, expression of receptors, ion channels, intrinsic excitability or signaling mechanisms in the reward systems synergizes when exposed to distinct classes of drugs simultaneously or sequentially. Even less is known about these mechanisms in brain regions and circuits that underlie negative reinforcement, or how neurotransmitters, neuromodulators or stress interact within these circuits to contribute to the behavioral and pharmacological profiles observed following polysubstance use. In addition, there is a need for behavioral models of polysubstance use that have translational potential.

Research Objectives: The National Institute on Drug Abuse (NIDA) seeks to stimulate innovative research that will transform our understanding of the basic mechanisms that underlie polysubstance use in addiction. These studies will investigate novel neurobiological, pharmacological and/or behavioral mechanisms underlying the biobehavioral outcomes of polysubstance use.

Research areas and questions of programmatic interest include, but are not limited to:

  • Identification and/or characterization of molecules, genes, cells (including non-neuronal cells), neural pathways, circuits, receptors, ion channels, intrinsic excitability, pharmacological and signaling mechanisms mediating the effects of polysubstance use.
  • Mechanisms underlying the association of early adolescent polysubstance use with SUDs in adulthood.
  • Sex differences in the development and trajectory of polysubstance use. What are the roles of organizational and activational effects of sex steroids on discrete brain regions and neural circuits, and how is this altered with exposure to polysubstance use?
  • What are the developmental determinants? Are there developmental windows during which polysubstance use would be facilitated?
  • What are the pharmacologic, pharmacokinetic and pharmacodynamic interactions that can impact toxicity, or the SUD trajectory?
  • How do environmental factors interact with brain circuits to influence the development and trajectory of SUDs involving polysubstance use?
  • How does stress interact with brain circuits to influence the development and trajectory of SUDs involving polysubstance use?
  • Are there neurobehavioral risk phenotypes for progression to polysubstance use? What are the neurocognitive and neurobehavioral changes that occur through experience with different patterns of polysubstance use?

Applications Not Responsive to this Notice of Funding Opportunity (NOFO)

The following types of studies are not responsive to this NOFO and will not be reviewed:

  • Projects whose major goal is not delineating the basic mechanisms underlying polysubstance use in addiction.
  • Projects limited exclusively to the phenomenology of polysubstance use, consequences of polysubstance use, or those focused exclusively on the development of tools or animal models. 
  • Projects that do not focus on combinations of two or more addictive substances with well-justified translational and public health relevance. 
  • Projects that do not include a psychostimulant, opioid, or cannabinoid in the polysubstance combination. Alcohol may be included in the polysubstance combination. 
  • Research that does not pertain to at least one of the stages of the substance use trajectory, including, but not limited to initiation, escalation, withdrawal and/or relapse. 

Other application considerations:

  • Collaborative research teams to foster the sharing of conceptual and/or technical expertise are strongly encouraged.
  • Applicants using animal models are encouraged to use models reflective of chronic and voluntary drug intake.
  • Preliminary data are not required but may be included if available. In the absence of preliminary data, a strong premise should be provided for testing a novel hypothesis based upon the scientific literature as well as evidence of the team’s ability to carry out the proposed studies through published or technical preliminary data.  

Special considerations

NIDA applicants are strongly encouraged to review the guidelines and adhere to the requirements applicable to their research listed in the  Special Considerations for NIDA Funding Opportunities and Awards . Upon award, these considerations will be included in the Notice of Grant Award.

Plan for Enhancing Diverse Perspectives (PEDP)

  • This NOFO requires a Plan for Enhancing Diverse Perspectives (PEDP) as described in NOT-MH-21-310 , submitted as Other Project Information as an attachment (see Section IV).
  • Applicants are strongly encouraged to read the NOFO instructions carefully and view the available PEDP guidance material . The PEDP will be assessed as part of the scientific and technical peer review evaluation, as well as considered among programmatic matters with respect to funding decisions.

See Section VIII. Other Information for award authorities and regulations.

Investigators proposing NIH-defined clinical trials may refer to the Research Methods Resources website for information about developing statistical methods and study designs.

Section II. Award Information

Grant: A financial assistance mechanism providing money, property, or both to an eligible entity to carry out an approved project or activity.

The OER Glossary and the How to Apply - Application Guide provide details on these application types. Only those application types listed here are allowed for this NOFO.

Optional: Accepting applications that either propose or do not propose clinical trial(s).

Need help determining whether you are doing a clinical trial?

NIDA intends to commit $2M in FY 2025 to fund three to five awards.

Application budgets will be limited to $350,000 in direct costs/year. The proposed budget needs to reflect the actual needs of the proposed project.

The scope of the proposed project should determine the project period. The maximum project period is five years.

NIH grants policies as described in the NIH Grants Policy Statement will apply to the applications submitted and awards made from this NOFO.

Section III. Eligibility Information

1. Eligible Applicants

All organizations administering an eligible parent award may apply for a supplement under this NOFO.

Higher Education Institutions

  • Public/State Controlled Institutions of Higher Education
  • Private Institutions of Higher Education

The following types of Higher Education Institutions are always encouraged to apply for NIH support as Public or Private Institutions of Higher Education:

  • Hispanic-serving Institutions
  • Historically Black Colleges and Universities (HBCUs)
  • Tribally Controlled Colleges and Universities (TCCUs)
  • Alaska Native and Native Hawaiian Serving Institutions
  • Asian American Native American Pacific Islander Serving Institutions (AANAPISIs)

Nonprofits Other Than Institutions of Higher Education

  • Nonprofits with 501(c)(3) IRS Status (Other than Institutions of Higher Education)
  • Nonprofits without 501(c)(3) IRS Status (Other than Institutions of Higher Education)

For-Profit Organizations

  • Small Businesses
  • For-Profit Organizations (Other than Small Businesses)

Local Governments

  • State Governments
  • County Governments
  • City or Township Governments
  • Special District Governments
  • Indian/Native American Tribal Governments (Federally Recognized)
  • Indian/Native American Tribal Governments (Other than Federally Recognized)

Federal Government

  • Eligible Agencies of the Federal Government
  • U.S. Territory or Possession

Other

  • Independent School Districts
  • Public Housing Authorities/Indian Housing Authorities
  • Native American Tribal Organizations (other than Federally recognized tribal governments)
  • Faith-based or Community-based Organizations
  • Regional Organizations
  • Non-domestic (non-U.S.) Entities (Organizations)

Non-domestic (non-U.S.) Entities (Foreign Organizations) are eligible to apply.

Non-domestic (non-U.S.) components of U.S. Organizations are eligible to apply.

Foreign components, as defined in the NIH Grants Policy Statement , are allowed. 

Applicant Organizations

Applicant organizations must complete and maintain the following registrations as described in the How to Apply - Application Guide to be eligible to apply for or receive an award. All registrations must be completed prior to the application being submitted. Registration can take 6 weeks or more, so applicants should begin the registration process as soon as possible. Failure to complete registrations in advance of a due date is not a valid reason for a late submission; see NIH Grants Policy Statement Section 2.3.9.2 Electronically Submitted Applications for additional information.

  • NATO Commercial and Government Entity (NCAGE) Code – Foreign organizations must obtain an NCAGE code (in lieu of a CAGE code) in order to register in SAM.
  • Unique Entity Identifier (UEI) - A UEI is issued as part of the SAM.gov registration process. The same UEI must be used for all registrations, as well as on the grant application.
  • eRA Commons - Once the unique organization identifier is established, organizations can register with eRA Commons in tandem with completing their Grants.gov registration; all registrations must be in place by time of submission. eRA Commons requires organizations to identify at least one Signing Official (SO) and at least one Program Director/Principal Investigator (PD/PI) account in order to submit an application.
  • Grants.gov – Applicants must have an active SAM registration in order to complete the Grants.gov registration.

Program Directors/Principal Investigators (PD(s)/PI(s))

All PD(s)/PI(s) must have an eRA Commons account.  PD(s)/PI(s) should work with their organizational officials to either create a new account or to affiliate their existing account with the applicant organization in eRA Commons. If the PD/PI is also the organizational Signing Official, they must have two distinct eRA Commons accounts, one for each role. Obtaining an eRA Commons account can take up to 2 weeks.

Any individual(s) with the skills, knowledge, and resources necessary to carry out the proposed research as the Program Director(s)/Principal Investigator(s) (PD(s)/PI(s)) is invited to work with their organization to develop an application for support. Individuals from diverse backgrounds, including underrepresented racial and ethnic groups, individuals with disabilities, and women are always encouraged to apply for NIH support. See, Reminder: Notice of NIH's Encouragement of Applications Supporting Individuals from Underrepresented Ethnic and Racial Groups as well as Individuals with Disabilities, NOT-OD-22-019 .

For institutions/organizations proposing multiple PDs/PIs, visit the Multiple Program Director/Principal Investigator Policy and submission details in the Senior/Key Person Profile (Expanded) Component of the How to Apply - Application Guide .

2. Cost Sharing

This NOFO does not require cost sharing as defined in the  NIH Grants Policy Statement Section 1.2 Definition of Terms .

3. Additional Information on Eligibility

Number of Applications

Applicant organizations may submit more than one application, provided that each application is scientifically distinct.

The NIH will not accept duplicate or highly overlapping applications under review at the same time, per NIH Grants Policy Statement Section 2.3.7.4 Submission of Resubmission Application . This means that the NIH will not accept:

  • A new (A0) application that is submitted before issuance of the summary statement from the review of an overlapping new (A0) or resubmission (A1) application.
  • A resubmission (A1) application that is submitted before issuance of the summary statement from the review of the previous new (A0) application.
  • An application that has substantial overlap with another application pending appeal of initial peer review (see NIH Grants Policy Statement 2.3.9.4 Similar, Essentially Identical, or Identical Applications ).

Section IV. Application and Submission Information

1. Requesting an Application Package

The application forms package specific to this opportunity must be accessed through ASSIST, Grants.gov Workspace or an institutional system-to-system solution. Links to apply using ASSIST or Grants.gov Workspace are available in Part 1 of this NOFO. See your administrative office for instructions if you plan to use an institutional system-to-system solution.

2. Content and Form of Application Submission

It is critical that applicants follow the instructions in the Research (R) Instructions in the  How to Apply - Application Guide  except where instructed in this notice of funding opportunity to do otherwise. Conformance to the requirements in the How to Apply - Application Guide is required and strictly enforced. Applications that are out of compliance with these instructions may be delayed or not accepted for review.

Letter of Intent

Although a letter of intent is not required, is not binding, and does not enter into the review of a subsequent application, the information that it contains allows IC staff to estimate the potential review workload and plan the review.

By the date listed in Part 1. Overview Information , prospective applicants are asked to submit a letter of intent that includes the following information:

  • Descriptive title of proposed activity
  • Name(s), address(es), and telephone number(s) of the PD(s)/PI(s)
  • Names of other key personnel
  • Participating institution(s)
  • Number and title of this funding opportunity

The letter of intent should be sent to: [email protected]

All page limitations described in the How to Apply – Application Guide and the Table of Page Limits must be followed.

The following section supplements the instructions found in the  How to Apply – Application Guide and should be used for preparing an application to this NOFO.

All instructions in the How to Apply - Application Guide must be followed.

Other Attachments:  Plan for Enhancing Diverse Perspectives  

  • In an "Other Attachment" entitled "Plan for Enhancing Diverse Perspectives," all applicants must include a summary of strategies to advance the scientific and technical merit of the proposed project through expanded inclusivity. 
  • The PEDP should provide a holistic and integrated view of how enhancing diverse perspectives is viewed and supported throughout the application and can incorporate elements with relevance to any review criteria (significance, investigator(s), innovation, approach, and environment) as appropriate. 
  • Where possible, applicant(s) should align their description with these required elements within the research strategy section. 
  • The PEDP will vary depending on the scientific aims, expertise required, the environment and performance site(s), as well as how the project aims are structured.
  • The PEDP may be no more than 1 page in length and should include a timeline and milestones for relevant components that will be considered as part of the review.

Examples of items that advance inclusivity in research and may be part of the PEDP can include, but are not limited to:

  • Discussion of engagement with different types of institutions and organizations (e.g., research-intensive, undergraduate-focused, minority-serving, community-based). 
  • Description of any planned partnerships that may enhance geographic and regional diversity. 
  • Plan to enhance recruiting of women and individuals from groups historically under-represented in the biomedical, behavioral, and clinical research workforce. 
  • Proposed monitoring activities to identify and measure PEDP progress benchmarks. 
  • Plan to utilize the project infrastructure (i.e., research and structure) to support career-enhancing research opportunities for diverse junior, early-, and mid-career researchers.
  • Description of any training and/or mentoring opportunities available to encourage participation of students, postdoctoral researchers and co-investigators from diverse backgrounds. 
  • Plan to develop transdisciplinary collaboration(s) that require unique expertise and/or solicit diverse perspectives to address research question(s). 
  • Publication plan that enumerates planned manuscripts and proposed lead authorship.  
  • Outreach and planned engagement activities to enhance recruitment of individuals from diverse groups as research participants including those from under-represented backgrounds. 

For further information on the Plan for Enhancing Diverse Perspectives (PEDP), please see https://braininitiative.nih.gov/about/plan-enhancing-diverse-perspectives-pedp

R&R or Modular Budget

PEDP implementation costs 

  • Applicants may include allowable costs associated with PEDP implementation (as outlined in the Grants Policy Statement section 7: https://grants.nih.gov/grants/policy/nihgps/html5/section_7/7.1_general.htm ).

R&R Subaward Budget

All instructions in the  How to Apply - Application Guide must be followed.

PHS 398 Research Plan

All instructions in the How to Apply - Application Guide must be followed, with the following additional instructions:

Research Strategy : The following must be described in the research strategy:

  • Significance : a) A compelling justification based upon high translational and public health relevance for examining the proposed polysubstance combination (a psychostimulant, opiate or cannabinoid must be included in the polysubstance combination), b) How the proposed studies will dramatically enhance our mechanistic understanding of polysubstance use in addiction.
  • Innovation : a) Description of how the proposed project disrupts existing paradigms and explores unanticipated biological phenomena or an unexpected result, b) Description of the risky and/or impactful nature of the proposed research in uncovering mechanisms of polysubstance use in addiction.
  • Approach: a) Well-designed experiments with adequate control conditions to test the proposed hypothesis, b) Description of statistical analyses, c) Justify that the proposed experiments are feasible, d) A timeline for the studies over the proposed funding period.

Resource Sharing Plan:

Individuals are required to comply with the instructions for the Resource Sharing Plans as provided in the How to Apply - Application Guide . 

Other Plan(s):

Note: Effective for due dates on or after January 25, 2023, the Data Management and Sharing Plan will be attached in the Other Plan(s) attachment in FORMS-H application forms packages.

All instructions in the  How to Apply - Application Guide must be followed, with the following additional instructions:

  • All applicants planning research (funded or conducted in whole or in part by NIH) that results in the generation of scientific data are required to comply with the instructions for the Data Management and Sharing Plan. All applications, regardless of the amount of direct costs requested for any one year, must address a Data Management and Sharing Plan.  

Appendix

Only limited Appendix materials are allowed. Follow all instructions for the Appendix as described in the How to Apply - Application Guide .

When involving human subjects research, clinical research, and/or NIH-defined clinical trials (and when applicable, clinical trials research experience) follow all instructions for the PHS Human Subjects and Clinical Trials Information form in the How to Apply - Application Guide , with the following additional instructions:

If you answered “Yes” to the question “Are Human Subjects Involved?” on the R&R Other Project Information form, you must include at least one human subjects study record using the Study Record: PHS Human Subjects and Clinical Trials Information form or Delayed Onset Study record.

Study Record: PHS Human Subjects and Clinical Trials Information

Delayed Onset Study

Note: Delayed onset does NOT apply to a study that can be described but will not start immediately (i.e., delayed start). All instructions in the How to Apply - Application Guide must be followed.

Foreign Organizations

Foreign (non-U.S.) organizations must follow policies described in the NIH Grants Policy Statement , and procedures for foreign organizations described throughout the SF424 (R&R) Application Guide.

3. Unique Entity Identifier and System for Award Management (SAM)

See Part 2. Section III.1 for information regarding the requirement for obtaining a unique entity identifier and for completing and maintaining active registrations in System for Award Management (SAM), NATO Commercial and Government Entity (NCAGE) Code (if applicable), eRA Commons, and Grants.gov.

4. Submission Dates and Times

Part 1 contains information about Key Dates and times. Applicants are encouraged to submit applications before the due date to ensure they have time to make any application corrections that might be necessary for successful submission. When a submission date falls on a weekend or Federal holiday, the application deadline is automatically extended to the next business day.

Organizations must submit applications to Grants.gov (the online portal to find and apply for grants across all Federal agencies). Applicants must then complete the submission process by tracking the status of the application in the eRA Commons, NIH’s electronic system for grants administration. NIH and Grants.gov systems check the application against many of the application instructions upon submission. Errors must be corrected and a changed/corrected application must be submitted to Grants.gov on or before the application due date and time. If a Changed/Corrected application is submitted after the deadline, the application will be considered late. Applications that miss the due date and time are subject to the NIH Grants Policy Statement Section 2.3.9.2 Electronically Submitted Applications.

Applicants are responsible for viewing their application before the due date in the eRA Commons to ensure accurate and successful submission.

Information on the submission process and a definition of on-time submission are provided in the  How to Apply – Application Guide .

5. Intergovernmental Review (E.O. 12372)

This initiative is not subject to intergovernmental review.

6. Funding Restrictions

All NIH awards are subject to the terms and conditions, cost principles, and other considerations described in the NIH Grants Policy Statement .

Pre-award costs are allowable only as described in the  NIH Grants Policy Statement Section 7.9.1 Selected Items of Cost .

7. Other Submission Requirements and Information

Applications must be submitted electronically following the instructions described in the How to Apply - Application Guide . Paper applications will not be accepted.

Applicants must complete all required registrations before the application due date. Section III. Eligibility Information contains information about registration.

For assistance with your electronic application or for more information on the electronic submission process, visit How to Apply – Application Guide . If you encounter a system issue beyond your control that threatens your ability to complete the submission process on-time, you must follow the Dealing with System Issues guidance. For assistance with application submission, contact the Application Submission Contacts in Section VII .

Important reminders:

All PD(s)/PI(s) must include their eRA Commons ID in the Credential field of the Senior/Key Person Profile form . Failure to register in the Commons and to include a valid PD/PI Commons ID in the credential field will prevent the successful submission of an electronic application to NIH. See Section III of this NOFO for information on registration requirements.

The applicant organization must ensure that the unique entity identifier provided on the application is the same identifier used in the organization’s profile in the eRA Commons and for the System for Award Management. Additional information may be found in the How to Apply - Application Guide .

See more tips for avoiding common errors.

Upon receipt, applications will be evaluated for completeness and compliance with application instructions by the Center for Scientific Review and responsiveness by NIDA, NIH. Applications that are incomplete, non-compliant and/or nonresponsive will not be reviewed.

Applications must include a PEDP, submitted as an Other Project Information attachment, that includes annual milestones. Applications that fail to include a PEDP with annual milestones will be considered incomplete and will be withdrawn before review.

Applicants are required to follow the instructions for post-submission materials, as described in the policy.

Section V. Application Review Information

1. Criteria

Only the review criteria described below will be considered in the review process.  Applications submitted to the NIH in support of the NIH mission are evaluated for scientific and technical merit through the NIH peer review system.

A proposed Clinical Trial application may include study design, methods, and intervention that are not by themselves innovative but address important questions or unmet needs. Additionally, the results of the clinical trial may indicate that further clinical development of the intervention is unwarranted or lead to new avenues of scientific investigation.

Overall Impact

Reviewers will provide an overall impact score to reflect their assessment of the likelihood for the project to exert a sustained, powerful influence on the research field(s) involved, in consideration of the following review criteria and additional review criteria (as applicable for the project proposed).

Scored Review Criteria

Reviewers will consider each of the review criteria below in the determination of scientific merit and give a separate score for each. An application does not need to be strong in all categories to be judged likely to have major scientific impact. For example, a project that by its nature is not innovative may be essential to advance a field.

Significance

Does the project address an important problem or a critical barrier to progress in the field? Is the prior research that serves as the key support for the proposed project rigorous? If the aims of the project are achieved, how will scientific knowledge, technical capability, and/or clinical practice be improved? How will successful completion of the aims change the concepts, methods, technologies, treatments, services, or preventative interventions that drive this field?

In addition, for applications involving clinical trials

Are the scientific rationale and need for a clinical trial to test the proposed hypothesis or intervention well supported by preliminary data, clinical and/or preclinical studies, or information in the literature or knowledge of biological mechanisms? For trials focusing on clinical or public health endpoints, is this clinical trial necessary for testing the safety, efficacy or effectiveness of an intervention that could lead to a change in clinical practice, community behaviors or health care policy? For trials focusing on mechanistic, behavioral, physiological, biochemical, or other biomedical endpoints, is this trial needed to advance scientific understanding?

Specific to this NOFO: 

  • To what extent does the application provide a compelling justification based upon high translational and public health relevance for examining the proposed polysubstance combination? 
  • To what extent does the application describe how the proposed work will dramatically improve our mechanistic understanding of polysubstance use in addiction?
  • To what extent do the efforts described in the Plan for Enhancing Diverse Perspectives further the significance of the project?

Investigator(s)

Are the PD(s)/PI(s), collaborators, and other researchers well suited to the project? If Early Stage Investigators or those in the early stages of independent careers, do they have appropriate experience and training? If established, have they demonstrated an ongoing record of accomplishments that have advanced their field(s)? If the project is collaborative or multi-PD/PI, do the investigators have complementary and integrated expertise; are their leadership approach, governance, and organizational structure appropriate for the project?

With regard to the proposed leadership for the project, do the PD/PI(s) and key personnel have the expertise, experience, and ability to organize, manage and implement the proposed clinical trial and meet milestones and timelines? Do they have appropriate expertise in study coordination, data management and statistics? For a multicenter trial, is the organizational structure appropriate and does the application identify a core of potential center investigators and staffing for a coordinating center?

Specific to this NOFO:

  • To what extent will the efforts described in the Plan for Enhancing Diverse Perspectives strengthen and enhance the expertise required for the project?

Innovation

Does the application challenge and seek to shift current research or clinical practice paradigms by utilizing novel theoretical concepts, approaches or methodologies, instrumentation, or interventions? Are the concepts, approaches or methodologies, instrumentation, or interventions novel to one field of research or novel in a broad sense? Is a refinement, improvement, or new application of theoretical concepts, approaches or methodologies, instrumentation, or interventions proposed?

Does the design/research plan include innovative elements, as appropriate, that enhance its sensitivity, potential for information or potential to advance scientific knowledge or clinical practice?

Specific to this NOFO:  

  • To what extent does the project disrupt existing paradigms, explore an unanticipated biological phenomenon or unexpected previous result? 
  • To what extent does the project take risks rather than simply proceeding to the next logical step?
  • To what extent will the efforts described in the Plan for Enhancing Diverse Perspectives meaningfully contribute to innovation?

Approach

Are the overall strategy, methodology, and analyses well-reasoned and appropriate to accomplish the specific aims of the project? Have the investigators included plans to address weaknesses in the rigor of prior research that serves as the key support for the proposed project? Have the investigators presented strategies to ensure a robust and unbiased approach, as appropriate for the work proposed? Are potential problems, alternative strategies, and benchmarks for success presented? If the project is in the early stages of development, will the strategy establish feasibility and will particularly risky aspects be managed? Have the investigators presented adequate plans to address relevant biological variables, such as sex, for studies in vertebrate animals or human subjects?

If the project involves human subjects and/or NIH-defined clinical research, are the plans to address 1) the protection of human subjects from research risks, and 2) inclusion (or exclusion) of individuals on the basis of sex/gender, race, and ethnicity, as well as the inclusion or exclusion of individuals of all ages (including children and older adults), justified in terms of the scientific goals and research strategy proposed?

Does the application adequately address the following, if applicable?

Study Design

Is the study design justified and appropriate to address primary and secondary outcome variable(s)/endpoints that will be clear, informative and relevant to the hypothesis being tested? Is the scientific rationale/premise of the study based on previously well-designed preclinical and/or clinical research? Given the methods used to assign participants and deliver interventions, is the study design adequately powered to answer the research question(s), test the proposed hypothesis/hypotheses, and provide interpretable results? Is the trial appropriately designed to conduct the research efficiently? Are the study populations (size, gender, age, demographic group), proposed intervention arms/dose, and duration of the trial, appropriate and well justified?

Are potential ethical issues adequately addressed? Is the process for obtaining informed consent or assent appropriate? Is the eligible population available? Are the plans for recruitment outreach, enrollment, retention, handling dropouts, missed visits, and losses to follow-up appropriate to ensure robust data collection? Are the planned recruitment timelines feasible and is the plan to monitor accrual adequate? Has the need for randomization (or not), masking (if appropriate), controls, and inclusion/exclusion criteria been addressed? Are differences addressed, if applicable, in the intervention effect due to sex/gender and race/ethnicity?

Are the plans to standardize, assure quality of, and monitor adherence to, the trial protocol and data collection or distribution guidelines appropriate? Is there a plan to obtain required study agent(s)? Does the application propose to use existing available resources, as applicable?

Data Management and Statistical Analysis

Are planned analyses and statistical approach appropriate for the proposed study design and methods used to assign participants and deliver interventions? Are the procedures for data management and quality control of data adequate at clinical site(s) or at center laboratories, as applicable? Have the methods for standardization of procedures for data management to assess the effect of the intervention and quality control been addressed? Is there a plan to complete data analysis within the proposed period of the award?

  • How adequate are the experimental designs and statistical analyses? 
  • Are the timelines and milestones associated with the Plan for Enhancing Diverse Perspectives well-developed and feasible? 

Environment

Will the scientific environment in which the work will be done contribute to the probability of success? Are the institutional support, equipment, and other physical resources available to the investigators adequate for the project proposed? Will the project benefit from unique features of the scientific environment, subject populations, or collaborative arrangements?

If proposed, are the administrative, data coordinating, enrollment and laboratory/testing centers, appropriate for the trial proposed?

Does the application adequately address the capability and ability to conduct the trial at the proposed site(s) or centers? Are the plans to add or drop enrollment centers, as needed, appropriate?

If international site(s) is/are proposed, does the application adequately address the complexity of executing the clinical trial?

If multi-sites/centers, is there evidence of the ability of the individual site or center to: (1) enroll the proposed numbers; (2) adhere to the protocol; (3) collect and transmit data in an accurate and timely fashion; and, (4) operate within the proposed organizational structure?

  •  To what extent will features of the environment described in the Plan for Enhancing Diverse Perspectives (e.g., collaborative arrangements, geographic diversity, institutional support) contribute to the success of the project?

Additional Review Criteria

As applicable for the project proposed, reviewers will evaluate the following additional items while determining scientific and technical merit, and in providing an overall impact score, but will not give separate scores for these items.

Study Timeline

Specific to applications involving clinical trials

Is the study timeline described in detail, taking into account start-up activities, the anticipated rate of enrollment, and planned follow-up assessment? Is the projected timeline feasible and well justified? Does the project incorporate efficiencies and utilize existing resources (e.g., CTSAs, practice-based research networks, electronic medical records, administrative database, or patient registries) to increase the efficiency of participant enrollment and data collection, as appropriate?

Are potential challenges and corresponding solutions discussed (e.g., strategies that can be implemented in the event of enrollment shortfalls)?

Protections for Human Subjects

For research that involves human subjects but does not involve one of the categories of research that are exempt under 45 CFR Part 46, the committee will evaluate the justification for involvement of human subjects and the proposed protections from research risk relating to their participation according to the following five review criteria: 1) risk to subjects, 2) adequacy of protection against risks, 3) potential benefits to the subjects and others, 4) importance of the knowledge to be gained, and 5) data and safety monitoring for clinical trials.

For research that involves human subjects and meets the criteria for one or more of the categories of research that are exempt under 45 CFR Part 46, the committee will evaluate: 1) the justification for the exemption, 2) human subjects involvement and characteristics, and 3) sources of materials. For additional information on review of the Human Subjects section, please refer to the Guidelines for the Review of Human Subjects .

Inclusion of Women, Minorities, and Individuals Across the Lifespan

When the proposed project involves human subjects and/or NIH-defined clinical research, the committee will evaluate the proposed plans for the inclusion (or exclusion) of individuals on the basis of sex/gender, race, and ethnicity, as well as the inclusion (or exclusion) of individuals of all ages (including children and older adults) to determine if it is justified in terms of the scientific goals and research strategy proposed. For additional information on review of the Inclusion section, please refer to the Guidelines for the Review of Inclusion in Clinical Research .

Vertebrate Animals

The committee will evaluate the involvement of live vertebrate animals as part of the scientific assessment according to the following three points: (1) a complete description of all proposed procedures including the species, strains, ages, sex, and total numbers of animals to be used; (2) justifications that the species is appropriate for the proposed research and why the research goals cannot be accomplished using an alternative non-animal model; and (3) interventions including analgesia, anesthesia, sedation, palliative care, and humane endpoints that will be used to limit any unavoidable discomfort, distress, pain and injury in the conduct of scientifically valuable research. Methods of euthanasia and justification for selected methods, if NOT consistent with the AVMA Guidelines for the Euthanasia of Animals, are also required but are found in a separate section of the application. For additional information on review of the Vertebrate Animals Section, please refer to the Worksheet for Review of the Vertebrate Animals Section.

Biohazards

Reviewers will assess whether materials or procedures proposed are potentially hazardous to research personnel and/or the environment, and if needed, determine whether adequate protection is proposed.

Resubmissions

For Resubmissions, the committee will evaluate the application as now presented, taking into consideration the responses to comments from the previous scientific review group and changes made to the project.

Revisions

For Revisions, the committee will consider the appropriateness of the proposed expansion of the scope of the project. If the Revision application relates to a specific line of investigation presented in the original application that was not recommended for approval by the committee, then the committee will consider whether the responses to comments from the previous scientific review group are adequate and whether substantial changes are clearly evident.

Additional Review Considerations

As applicable for the project proposed, reviewers will consider each of the following items, but will not give scores for these items, and should not consider them in providing an overall impact score.

Applications from Foreign Organizations

Reviewers will assess whether the project presents special opportunities for furthering research programs through the use of unusual talent, resources, populations, or environmental conditions that exist in other countries and either are not readily available in the United States or augment existing U.S. resources.

Select Agent Research

Reviewers will assess the information provided in this section of the application, including 1) the Select Agent(s) to be used in the proposed research, 2) the registration status of all entities where Select Agent(s) will be used, 3) the procedures that will be used to monitor possession, use, and transfer of Select Agent(s), and 4) plans for appropriate biosafety, biocontainment, and security of the Select Agent(s).

Resource Sharing Plans

Reviewers will comment on whether the Resource Sharing Plan(s) (i.e., Sharing Model Organisms ) or the rationale for not sharing the resources, is reasonable.

Authentication of Key Biological and/or Chemical Resources:

For projects involving key biological and/or chemical resources, reviewers will comment on the brief plans proposed for identifying and ensuring the validity of those resources.

Budget and Period of Support

Reviewers will consider whether the budget and the requested period of support are fully justified and reasonable in relation to the proposed research.

2. Review and Selection Process

Applications will be evaluated for scientific and technical merit by (an) appropriate Scientific Review Group(s) convened by NIDA, in accordance with NIH peer review policy and procedures , using the stated review criteria. Assignment to a Scientific Review Group will be shown in the eRA Commons.

As part of the scientific peer review, all applications will receive a written critique.

Applications may undergo a selection process in which only those applications deemed to have the highest scientific and technical merit (generally the top half of applications under review) will be discussed and assigned an overall impact score.

Appeals of initial peer review will not be accepted for applications submitted in response to this NOFO.

Applications will be assigned to the appropriate NIH Institute or Center. Applications will compete for available funds with all other recommended applications submitted in response to this NOFO. Following initial peer review, recommended applications will receive a second level of review by the National Advisory Council on Drug Abuse. The following will be considered in making funding decisions:

  • Scientific and technical merit of the proposed project as determined by scientific peer review.
  • Availability of funds.
  • Relevance of the proposed project to program priorities.

3. Anticipated Announcement and Award Dates

After the peer review of the application is completed, the PD/PI will be able to access his or her Summary Statement (written critique) via the  eRA Commons . Refer to Part 1 for dates for peer review, advisory council review, and earliest start date.

Information regarding the disposition of applications is available in the  NIH Grants Policy Statement Section 2.4.4 Disposition of Applications .

Section VI. Award Administration Information

1. Award Notices

If the application is under consideration for funding, NIH will request "just-in-time" information from the applicant as described in the  NIH Grants Policy Statement . This request is not a Notice of Award nor should it be construed to be an indicator of possible funding.

A formal notification in the form of a Notice of Award (NoA) will be provided to the applicant organization for successful applications. The NoA signed by the grants management officer is the authorizing document and will be sent via email to the recipient's business official.

Recipients must comply with any funding restrictions described in Section IV.6. Funding Restrictions. Selection of an application for award is not an authorization to begin performance. Any costs incurred before receipt of the NoA are at the recipient's risk. These costs may be reimbursed only to the extent considered allowable pre-award costs.

Any application awarded in response to this NOFO will be subject to terms and conditions found on the Award Conditions and Information for NIH Grants website.  This includes any recent legislation and policy applicable to awards that is highlighted on this website.

Individual awards are based on the application submitted to, and as approved by, the NIH and are subject to the IC-specific terms and conditions identified in the NoA.

ClinicalTrials.gov: If an award provides for one or more clinical trials, then by law (Title VIII, Section 801 of Public Law 110-85), the "responsible party" must register and submit results information for certain "applicable clinical trials" on the ClinicalTrials.gov Protocol Registration and Results System Information Website ( https://register.clinicaltrials.gov ). NIH expects registration and results reporting of all trials, whether required under the law or not. For more information, see https://grants.nih.gov/policy/clinical-trials/reporting/index.htm

Institutional Review Board or Independent Ethics Committee Approval: Recipient institutions must ensure that all protocols are reviewed by their IRB or IEC. To help ensure the safety of participants enrolled in NIH-funded studies, the recipient must provide NIH copies of documents related to all major changes in the status of ongoing protocols.

Data and Safety Monitoring Requirements: The NIH policy for data and safety monitoring requires oversight and monitoring of all NIH-conducted or -supported human biomedical and behavioral intervention studies (clinical trials) to ensure the safety of participants and the validity and integrity of the data. Further information concerning these requirements is found at http://grants.nih.gov/grants/policy/hs/data_safety.htm and in the application instructions (SF424 (R&R) and PHS 398).

Investigational New Drug or Investigational Device Exemption Requirements: Consistent with federal regulations, clinical research projects involving the use of investigational therapeutics, vaccines, or other medical interventions (including licensed products and devices for a purpose other than that for which they were licensed) in humans under a research protocol must be performed under a Food and Drug Administration (FDA) investigational new drug (IND) or investigational device exemption (IDE).

2. Administrative and National Policy Requirements

All NIH grant and cooperative agreement awards include the  NIH Grants Policy Statement as part of the NoA. For these terms of award, see the NIH Grants Policy Statement Part II: Terms and Conditions of NIH Grant Awards, Subpart A: General  and Part II: Terms and Conditions of NIH Grant Awards, Subpart B: Terms and Conditions for Specific Types of Grants, Recipients, and Activities , including of note, but not limited to:

  • Federal-wide Standard Terms and Conditions for Research Grants
  • Prohibition on Certain Telecommunications and Video Surveillance Services or Equipment
  • Acknowledgment of Federal Funding

If a recipient is successful and receives a Notice of Award, in accepting the award, the recipient agrees that any activities under the award are subject to all provisions currently in effect or implemented during the period of the award, other Department regulations and policies in effect at the time of the award, and applicable statutory provisions.

If a recipient receives an award, the recipient must follow all applicable nondiscrimination laws. The recipient agrees to this when registering in SAM.gov. The recipient must also submit an Assurance of Compliance (HHS-690). To learn more, see the Laws and Regulations Enforced by the HHS Office for Civil Rights website.

HHS recognizes that NIH research projects are often limited in scope for many reasons that are nondiscriminatory, such as the principal investigator’s scientific interest, funding limitations, recruitment requirements, and other considerations. Thus, criteria in research protocols that target or exclude certain populations are warranted where nondiscriminatory justifications establish that such criteria are appropriate with respect to the health or safety of the subjects, the scientific study design, or the purpose of the research. For additional guidance regarding how the provisions apply to NIH grant programs, please contact the Scientific/Research Contact that is identified in Section VII under Agency Contacts of this NOFO.

In accordance with the statutory provisions contained in Section 872 of the Duncan Hunter National Defense Authorization Act of Fiscal Year 2009 (Public Law 110-417), NIH awards will be subject to System for Award Management (SAM.gov) requirements. SAM.gov requires Federal agencies to review and consider information about an applicant in the designated integrity and performance system (currently SAM.gov) prior to making an award. An applicant can review and comment on any information in the responsibility/qualification records available in SAM.gov. NIH will consider any comments by the applicant, in addition to the information available in the responsibility/qualification records in SAM.gov, in making a judgement about the applicant’s integrity, business ethics, and record of performance under Federal awards when completing the review of risk posed by applicants as described in 2 CFR Part 200.206 “Federal awarding agency review of risk posed by applicants.” This provision will apply to all NIH grants and cooperative agreements except fellowships.

3. Data Management and Sharing

Consistent with the 2023 NIH Policy for Data Management and Sharing, when data management and sharing is applicable to the award, recipients will be required to adhere to the Data Management and Sharing requirements as outlined in the NIH Grants Policy Statement. Once a Data Management and Sharing Plan is approved, recipients are required to implement the plan as described.

4. Reporting

When multiple years are involved, recipients will be required to submit the  Research Performance Progress Report (RPPR)  annually and financial statements as required in the NIH Grants Policy Statement .

Awardees will provide updates at least annually on implementation of the PEDP.

A final RPPR, invention statement, and the expenditure data portion of the Federal Financial Report are required for closeout of an award, as described in the NIH Grants Policy Statement . NIH NOFOs outline intended research goals and objectives. Post award, NIH will review and measure performance based on the details and outcomes that are shared within the RPPR, as described at 2 CFR Part 200.301.

The Federal Funding Accountability and Transparency Act of 2006 as amended (FFATA), includes a requirement for recipients of Federal grants to report information about first-tier subawards and executive compensation under Federal assistance awards issued in FY2011 or later.  All recipients of applicable NIH grants and cooperative agreements are required to report to the Federal Subaward Reporting System (FSRS) available at www.fsrs.gov on all subawards over $25,000.  See the NIH Grants Policy Statement for additional information on this reporting requirement.

In accordance with the regulatory requirements provided at 2 CFR Part 200.113 and Appendix XII to 2 CFR Part 200, recipients that have currently active Federal grants, cooperative agreements, and procurement contracts from all Federal awarding agencies with a cumulative total value greater than $10,000,000 for any period of time during the period of performance of a Federal award, must report and maintain the currency of information reported in the System for Award Management (SAM) about civil, criminal, and administrative proceedings in connection with the award or performance of a Federal award that reached final disposition within the most recent five-year period.  The recipient must also make semiannual disclosures regarding such proceedings. Proceedings information will be made publicly available in the designated integrity and performance system (Responsibility/Qualification in SAM.gov, formerly FAPIIS).  This is a statutory requirement under section 872 of Public Law 110-417, as amended (41 U.S.C. 2313).  As required by section 3010 of Public Law 111-212, all information posted in the designated integrity and performance system on or after April 15, 2011, except past performance reviews required for Federal procurement contracts, will be publicly available.  Full reporting requirements and procedures are found in Appendix XII to 2 CFR Part 200 – Award Term and Conditions for Recipient Integrity and Performance Matters.

Section VII. Agency Contacts

We encourage inquiries concerning this funding opportunity and welcome the opportunity to answer questions from potential applicants.

eRA Service Desk (Questions regarding ASSIST, eRA Commons, application errors and warnings, documenting system problems that threaten submission by the due date, and post-submission issues)

Finding Help Online:  https://www.era.nih.gov/need-help  (preferred method of contact) Telephone: 301-402-7469 or 866-504-9552 (Toll Free)

General Grants Information (Questions regarding application instructions, application processes, and NIH grant resources) Email:  [email protected]  (preferred method of contact) Telephone: 301-637-3015

Grants.gov Customer Support (Questions regarding Grants.gov registration and Workspace) Contact Center Telephone: 800-518-4726 Email:  [email protected]

Sunila Nair, PhD National Institute on Drug Abuse (NIDA) Phone: 301-827-6832 Email: [email protected]

Dharmendar Rathore, PhD National Institute on Drug Abuse (NIDA) Phone: 301-402-6965 Email:  [email protected]

Krista Lyles National Institute on Drug Abuse (NIDA) Phone: 301-480-2203 Email:  [email protected]

Section VIII. Other Information

Recently issued trans-NIH policy notices may affect your application submission. A full list of policy notices published by NIH is provided in the NIH Guide for Grants and Contracts . All awards are subject to the terms and conditions, cost principles, and other considerations described in the NIH Grants Policy Statement .

Awards are made under the authorization of Sections 301 and 405 of the Public Health Service Act as amended (42 USC 241 and 284) and under Federal Regulations 42 CFR Part 52 and 2 CFR Part 200.


  • Open access
  • Published: 23 February 2024

Evaluation of the effectiveness of using flipped classroom in puncture skills teaching

  • Weihao Zhang 1 ,
  • Miao Jiang 2 ,
  • Wei Zhao 1 ,
  • Shuai Li 1 ,
  • Feifei Feng 4 ,
  • Yongjing Wang 5 ,
  • Yan Li 2 &
  • Lan Liu 1  

BMC Medical Education, volume 24, Article number: 176 (2024)


The effectiveness of the flipped classroom (FC) for teaching puncture skills in medical education is still uncertain. This study aimed to assess the role of the FC model in puncture skills teaching and to investigate the acceptance and approval of the FC among medical students and instructors.

A mixed-methods approach combining a quasi-experimental design with descriptive qualitative research was conducted over one month, beginning in September 2022, using an FC teaching method that combined instructional videos and group learning. The study participants were 71 fifth-year medical students from two classes at a Chinese medical school and four instructors. The medical students were randomly divided into two groups: the traditional classroom (TC) group (Group A) and the FC group (Group B). Group B was taught with the FC and Group A with the PowerPoint-based TC. The effectiveness of the two teaching models was assessed with an Objective Structured Clinical Examination (OSCE), and questionnaires were distributed to the medical students and instructors after the assessment. Independent two-sample t-tests were used to analyse the differences in demographic data and OSCE scores between the two groups of medical students.

Group B scored higher in puncture skills than Group A, particularly in abdominal puncture (p = 0.03), thoracentesis (p < 0.001), bone marrow puncture (p < 0.001) and average performance across puncture skills (p < 0.001). For lumbar puncture, no difference in skill scores was observed between groups A and B (p = 0.409). The medical students thought that the FC improved their self-learning ability and helped them acquire knowledge. Regarding the OSCE of their skills, most medical students thought that it was more innovative and objective than traditional examinations and that it was better for assessing their overall abilities. Both the FC and the OSCE were supported by the medical students. The instructors were also satisfied with the students' performance in the FC and supported the teaching model, agreeing to continue using it.

Conclusions

This study shows that FC teaching that combines instructional videos and group learning is a reliable and well-received teaching method for puncture skills, which supplements and expands existing teaching methods in the medical field.


Introduction

The COVID-19 pandemic has affected many sectors of medical education around the world, with many universities suspending on-campus teaching activities [ 1 ]. In early 2020, Chinese universities were actively teaching online, as required by the Chinese Ministry of Education. In this context, previous theoretical and practical teaching was considered no longer applicable, especially for practical skills such as internal medicine puncture skills. Therefore, it was necessary to adjust the medical teaching strategy as early as possible to ensure the smooth completion of the course. The flipped classroom (FC) is a blended learning model that combines lecture materials read or viewed before class with interactive face-to-face classes that actively engage students (here, 'student' refers to a medical student) [ 2 ]. This method addresses the limited teaching time of the traditional lecture-based model by allocating classroom time to the active application of the material that students learn before class [ 3 ]. However, most of the time, Chinese universities use the didactic model, which allows a minimum number of instructors to convey information to a large number of students at the same time [ 4 ]. Medical students must master clinical procedures to be competent in a variety of clinical settings [ 5 ].

The Objective Structured Clinical Examination (OSCE) is a well-researched and proven method for assessing medical skills [ 6 ]; it comprehensively assesses a medical student’s ability to apply their medical knowledge and skills in clinical practice [ 7 ]. The OSCE uses multi-station assessment, standardised patients and virtual patients to assess candidates’ clinical skills in a fair and objective manner [ 8 ], and it is considered to be the most reliable clinical examination system in medical training [ 9 ]. In recent years, the OSCE has been widely used in China for the final assessment of residency training [ 10 ].

Although previous studies have shown that the FC has a positive impact on several medical fields, it is still unknown whether the FC can improve student performance in clinical skills [ 11 ], and few studies have evaluated its impact on learning medical puncture skills. This study aimed (1) to assess the effectiveness of the FC in improving puncture skills performance by using a multi-component assessment and (2) to evaluate secondary endpoints such as students' and instructors' satisfaction with and acceptance of the FC and the OSCE.

A mixed-methods approach combining a quasi-experimental design with descriptive qualitative research was conducted over one month, beginning in September 2022, as a pilot study at our institution. It was conducted in the Second Hospital of Shandong University and approved by the Ethics Committee of our hospital (LCLL-2022-011).

Participants

All 71 medical students from two classes at a medical school and four instructors from medical school affiliates participated in this study. The inclusion criteria for students were (1) voluntary participation in this study and signing an informed consent form, (2) being a full-time undergraduate medical student and (3) no obvious physical or psychological abnormality. The exclusion criteria were prior exposure to flipped classroom teaching or prior training in puncture skills. The inclusion criteria for instructors were (1) having participated in the FC and TC teaching training organised by our hospital and passed its assessment; and (2) voluntarily participating in this study, complying with the relevant regulations and signing an informed consent form. The exclusion criteria were not having participated in training related to this study or failing our hospital's assessment.

Study design

This study adopted the FC teaching method, combining instructional videos with group learning. Before the study started, participants were randomly divided into groups A and B: Group B was taught using the FC teaching model and Group A using the traditional classroom (TC) teaching model. Randomisation and random assignment were accomplished with a random number table (URL: https://randomnumbergenerator.org/random-number-table ). We numbered all students who volunteered for this study consecutively starting with 1, then rearranged the serial numbers using the random number table; students coded 'odd' were assigned to the TC group and students coded 'even' to the FC group. The flow chart in Fig. 1 shows the study design.
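The odd/even assignment described above can be sketched as follows. This is an illustrative reconstruction, not the authors' actual procedure: the function name and seed are hypothetical, and Python's `random.shuffle` stands in for the random number table.

```python
import random

def assign_groups(n_students, seed=None):
    """Number students 1..n, shuffle the serial numbers (standing in for
    the random number table), then send odd positions to the TC group and
    even positions to the FC group."""
    rng = random.Random(seed)
    ids = list(range(1, n_students + 1))
    rng.shuffle(ids)
    tc = [sid for pos, sid in enumerate(ids, start=1) if pos % 2 == 1]
    fc = [sid for pos, sid in enumerate(ids, start=1) if pos % 2 == 0]
    return tc, fc

tc, fc = assign_groups(71, seed=42)
print(len(tc), len(fc))  # 36 students in TC (Group A), 35 in FC (Group B)
```

With 71 volunteers this scheme always yields groups of 36 and 35, consistent with the cohort sizes implied by the dropout counts reported in the Results.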

Figure 1. Study flowchart

Group A did not have any learning task before class, and the instructors gave them a face-to-face lecture using PowerPoint slides. Group B was randomly divided into four display groups, B1–B4, each of which was responsible for presenting one puncture in the form of a PowerPoint presentation. Instructional videos on puncture skills were distributed to the members of Group B, who verbally agreed not to share the videos with Group A. A professional medical training institution produced the videos, which totalled approximately 141 min, of which 35, 40, 35 and 31 min were devoted to abdominal, thoracic, lumbar and bone marrow punctures, respectively. The videos outlined the purpose, indications, contraindications, operating items, demonstration of the operation steps and explanation of operational problems for each puncture. All puncture demonstrations were performed on body models. Before the classroom lecture, we asked Group B to watch the instructional videos, and all Group B members completed this task. Group A was not given any self-study tasks, such as instructional videos, prior to classroom instruction.

Conceptual framework

In educational theory, Mayer's cognitive theory of multimedia learning suggests that learning is most effective in an e-learning environment when both images and text are available [ 12 ]. Mayer argues that multimedia includes animation and narration, and his research involves short multimedia tutorials [ 13 ], which considerably affect learning. Therefore, Mayer's cognitive theory of multimedia learning was used as the theoretical basis for this study.

The production of the lecture PowerPoint slides and the selection of instructional videos were based on the requirements of the notification document of the Chinese National Health Care Commission on the syllabus of the Physicians Qualifying Examination issued in 2019 (website: https://www.nmec.org.cn/Pages/ArticleInfo-13-11403.html ). The theoretical objectives of the training courses were (1) indications and contraindications for puncture; (2) operational points; and (3) common problems with puncture and measures to solve them. The practical objective was to learn how to perform punctures on mannequins.

Teaching process

The classroom courses for groups A and B were held separately once a week for four weeks. The sequence of lectures for both groups was abdominal puncture, thoracentesis, lumbar puncture and bone marrow puncture, and the duration of each session was controlled at 45 min.

In Group A's classes, the lecturer presented summary PowerPoint slides and the students asked questions; the instructors then summarised the important and more complex points of puncture skills. The classroom schedule for Group A was 30 min for the instructor's PowerPoint presentation, 15 min for student questions and 10 min for the instructor's summary and comments. Group B used team-based learning [ 14 ], a group learning method: we randomly divided Group B into four groups of 7–8 people each using the random number table. The group (B1, B2, B3 or B4) in charge of a particular week's tasks assigned two members to deliver the PowerPoint presentation in the role of instructors. The other three groups commented on the presentation and then discussed its questions in groups. The instructors briefly commented on errors and on areas of the presentation that were difficult to understand. The classroom schedule for Group B was 25 min for the group PowerPoint presentation, 10 min for group comments, 10 min for between-group discussion and 10 min for the instructor's comments and summary.

After the weekly lectures in the classroom, the two groups practiced the puncture skills in the Clinical Skills Training Center at the medical school, which provided the training location and equipment. Training time was limited to two hours, and the length, content and instructors for both groups A and B were the same.

Main outcome measures

Groups A and B took the OSCE in the Clinical Skills Training Center of our hospital the day after they completed their skills training. The assessment comprised four stations, in the order of abdominal puncture, thoracentesis, lumbar puncture and bone marrow puncture. Each station had its own assessment space and equipment, including a puncture simulator and puncture disinfection tools, and a professionally trained OSCE examiner was assigned to take charge of the examination. The preparatory work before the examination had been approved by our hospital's OSCE Working Committee. The process, content and examiners were the same for both groups. Figure 1 shows the OSCE assessment process.

Secondary outcome measures

The questionnaires on the FC and the OSCE administered in this study were designed by two of the study researchers. The first two questionnaires focused on students' acceptance and recognition of the FC and the OSCE; the other questionnaire aimed to evaluate instructors' teaching habits and perceptions of the FC model. To assess the reliability of the questionnaires before the formal survey, 25 students were selected for a pre-assessment; the Cronbach's α coefficient of the FC and OSCE questionnaires for students was 0.815, which met the reliability requirements.
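For readers unfamiliar with the reliability statistic used here, Cronbach's α can be computed from a respondents-by-items matrix of Likert scores. A minimal stdlib-only sketch follows; the function name and sample data are hypothetical, not the study's questionnaire data.

```python
from statistics import variance

def cronbach_alpha(scores):
    """Cronbach's alpha for a list of per-respondent item-score lists.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)),
    where k is the number of items.
    """
    k = len(scores[0])
    items = list(zip(*scores))                      # per-item score columns
    item_var = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_var / total_var)

# Perfectly consistent items give alpha = 1.0
print(cronbach_alpha([[1, 1], [2, 2], [3, 3], [4, 4], [5, 5]]))  # 1.0
```

Values around 0.8, such as the 0.815 reported above, are conventionally taken to indicate acceptable internal consistency.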

The evaluation dimensions of the FC questionnaire for students included pre-course and in-class perceptions of the content and methods of FC, acceptance, participation and suggestions for FC (additional file 1). The evaluation dimensions of the OSCE questionnaires included the assessment difficulties and effectiveness of OSCE, strengths and weaknesses, and recommendations (additional file 2). The FC questionnaire for instructors consists of open-ended questions, and its evaluation dimensions include teaching methods, evaluation of teaching effectiveness, levels of understanding of students, perceptions of FC teaching and development proposals. The survey gathered data from three perspectives: instructors' past teaching experiences, teaching habits and knowledge of students' skill acquisition and perceptions of the FC (additional file 3). In the FC and OSCE questionnaires for students, the Likert-scale answers to the questions ranged from 1 (strongly disagree) to 5 (strongly agree). At the end of the skills assessment, participants in both groups completed the questionnaires: Group A completed the OSCE questionnaire; Group B completed the OSCE and FC questionnaires for students, and the instructors completed the FC questionnaire for instructors.

Statistical analysis

The normality and homogeneity of variance of the OSCE assessment data for groups A and B were analysed using IBM SPSS Statistics 26 (IBM Corp., Armonk, NY, USA). Normally distributed data were described as mean ± standard deviation, and the independent two-sample t-test was applied to analyse differences between the two groups. Count data were described by composition ratios (%); non-normally distributed data, described using rank means, were analysed with a non-parametric test (Mann-Whitney U). A p-value < 0.05 was considered statistically significant.
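As a point of reference, the independent-samples t statistic computed under the equal-variance assumption can be reproduced with the Python standard library. The score lists below are purely illustrative (the paper reports only summary statistics), and the function name is hypothetical.

```python
import math
from statistics import mean, variance

def pooled_t(a, b):
    """Student's two-sample t statistic with pooled variance
    (the equal-variance form of the independent-samples t-test)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))

# Hypothetical OSCE scores for illustration only
group_a = [78, 82, 75, 80, 79]   # TC
group_b = [85, 88, 84, 86, 87]   # FC
print(round(pooled_t(group_b, group_a), 2))  # 5.31
```

The p-value then comes from the t distribution with na + nb - 2 degrees of freedom (a statistics package such as SPSS handles this step); when the normality check fails, the rank-based Mann-Whitney U test is used instead, as described above.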

Table 1 shows the demographic characteristics of groups A and B. All 71 fifth-year medical students from two classes of the medical college and four instructors from a medical school-affiliated hospital participated in this study. The mean age of medical students in groups A and B was 27.60 ± 3.35 and 27.66 ± 2.99 years, respectively. The two groups showed no statistical differences in gender (p = 0.866), age (p = 0.897) or marital status (p = 0.987), and none had previous experience of the FC teaching model.

Results of skills assessment

Table 2 shows the results of the skills assessment for groups A and B. Thirty-five participants in Group A and 29 participants in Group B participated in the skills assessment. Group B had higher scores for abdominal puncture (p = 0.03), thoracentesis (p < 0.001) and bone marrow puncture (p < 0.001), and their average scores for puncture skills (p < 0.001) were higher than those of Group A. However, the two groups showed no statistical difference in lumbar puncture scores (p = 0.409).

Results of questionnaires

A total of 29 valid FC questionnaires and 64 valid OSCE questionnaires were collected. One and six medical students in groups A and B, respectively, did not take the OSCE or complete the questionnaires. We investigated why these students dropped out: the one student in Group A said that he had to take an elective exam on the day of the OSCE; four of the six students in Group B also had to take an elective exam, while the other two were unable to participate for health reasons. None of the six students in Group B dropped out of the OSCE and questionnaires because of the FC.

The four instructors completed the teaching-related FC questionnaires. The instructors involved in FC teaching were two men and two women, with an average age of 37 years. All of them held MD degrees and had an average teaching experience of 8.75 years. They all participated in FC teaching training for medical students, were well-versed in the FC model and passed our hospital’s FC teaching qualification examination.

Questionnaires of students’ views of the OSCE

Regarding the difficulty of the puncture skills assessment, nearly half of the students thought that the lumbar puncture was the most challenging, followed by the bone marrow puncture, thoracentesis and abdominal puncture. One-fifth of the students thought that none of the four punctures was difficult (Fig. 2). Regarding the OSCE, most students said they understood it well (89.07%), that it truly reflected their competency levels (85.94%) and that this model helps medical students improve their overall competencies (90.63%). Compared with the traditional assessment model, most students said that the OSCE is more innovative and objective (92.19%), that they liked this type of skills assessment very much (85.94%) and that this type of assessment should be extended to the residency exam (78.12%; Table 3).

Figure 2. Which operation do you think is more difficult in this assessment? (Groups A and B)

Questionnaires of students’ views of the FC

For the FC questionnaire, more than half of the students thought that the video lessons were the most appropriate way to learn about the procedures before class (Fig. 3A), and they thought the videos' durations were appropriate (Fig. 3B). As the most effective way to interact in class, half of the students supported mutual teaching and learning, followed by scenario-based presentations, student-instructor Q&A, and student panel Q&A (Fig. 3C). Compared with the TC, one-third of the students thought the FC helped with long-term knowledge acquisition; another third said the FC provides more specialised knowledge (Fig. 3D), and more than half of the students thought the FC was more effective than the TC (Fig. 3E). Most of the students thought that the analysis and discussion of problems in the FC led to a more comprehensive and deeper mastery of knowledge (96.55%); they thought that the FC was more effective for improving self-learning skills than the TC (93.10%), and they supported the hospital in continuing to promote the FC teaching model (79.31%; Table 3).

Figure 3. Survey of Group B interns' views on FC. A: Which teaching resources do you think are the most suitable for your pre-course learning stage? B: Is the length of the learning resources provided in the pre-course period appropriate? C: Which do you think is the most effective way to interact and communicate in the classroom? D: What do you think about the learning effects of FC compared to TC? E: Compared with TC, what do you think is the learning efficiency of FC?

Questionnaires of instructors’ views of the FC

The FC questionnaires for instructors showed that all instructors were well aware of the FC model and had appropriate teaching experience before the study. When asked to compare the two teaching methods in terms of teaching effectiveness and satisfaction with student performance, all of them said that they preferred the FC model to the TC, were satisfied with the FC's effectiveness and the medical students' performance, supported the FC model and agreed to continue using it.

During the COVID-19 pandemic, instructors around the world used the FC to teach in fully online environments. However, teaching operational skills such as puncture faces challenges with regard to students (unfamiliarity with the FC, difficulty accepting online courses), instructors (increased workload of lesson preparation) and operational training [ 15 ]. To meet this challenge during pandemic-era teaching and to provide a realistic basis for the development of the FC in medical teaching in China, this study assessed the value of an FC model that combines instructional videos with group learning in teaching puncture skills to medical students.

Our research showed that the FC model led to better performance of abdominal, thoracic and bone marrow punctures, and of puncture skills overall, than the TC among fifth-year medical students. We surveyed the students' perceptions of the FC and the OSCE after the puncture skills examination; they generally showed a high level of recognition and acceptance of both, supporting their promotion in future clinical teaching and assessment. This study is similar to the one by Sana, which found that a blended teaching approach based on video learning and simulation teaching improved students' OSCE scores and performance compared with the TC [ 16 ]. Similarly, a meta-analysis by Hew et al. showed that medical students responded well to video learning and interactive in-class discussions, and it stated that this teaching method helped improve learning motivation and the understanding of learning topics [ 17 ]. Our study demonstrates the positive effect of the FC model, which combines instructional videos with group learning, on medical students learning clinical skills in an environment with limited clinical teaching resources.

Analysis of puncture skills performance

Studies from several medical schools have shown that medical students have inadequate exposure to basic clinical procedures during their studies and lack confidence in performing them [ 18 , 19 ]. This finding highlights the need for a more effective approach to ensuring adequate learning and practice opportunities in clinical skills teaching [ 20 ]. Many healthcare institutions worldwide have significantly increased student numbers to address the shortage of healthcare workers, which greatly reduces opportunities for learning practical skills [ 21 ]. Research has shown that watching instructional videos is one way to implement an FC and can be a viable response to increased student numbers [ 22 ]. Combining videos with face-to-face instruction can improve medical students’ knowledge and their performance of clinical skills [ 20 ]. The FC allows students to learn the material independently before class by watching videos or using other learning media; class time is thus freed up for them to apply their knowledge and actively participate in higher-level thinking [ 23 ].

In our study, Group B scored higher on puncture skills than Group A, which suggests that the FC teaching model improves puncture skills, a finding also confirmed by studies of other categories of clinical skills [ 24 , 25 , 26 ]. However, Groups A and B (TC and FC) showed no difference in scores for the lumbar puncture, and its mean score was the lowest of the four punctures overall. The questionnaire showed that nearly half of the students thought lumbar puncture was the most difficult of the four puncture skills, which is consistent with existing studies [ 19 , 27 ]. We hypothesised that the operational difficulty of the lumbar puncture affects students’ self-confidence in learning and assessment, which in turn affects the final operational outcome [ 23 ]. Some studies have shown that teaching and practising with lumbar puncture simulators can increase students’ self-confidence [ 28 , 29 ]. A recent study on asthma teaching similarly compared the effects of FC and TC on medical students’ test scores [ 30 ]. Although FC did not improve test scores, most participants were satisfied with it and stated that it improved their motivation to learn. All of the above illustrates the positive impact of FC on medical students’ learning.

Analysis of the results of the questionnaires about students’ views on FC

After the assessment, students in Group B completed a questionnaire survey on the FC. Regarding the pre-course learning method, most students thought that the videos were appropriate. Studies have shown that video-based learning is equivalent to bedside teaching and lectures for clinical skills training [ 22 , 31 ]. Although the total duration of the puncture videos was nearly 2.5 h, the questionnaire showed that this length was acceptable to most students. This finding differs from previous studies, which have shown that most students prefer short videos [ 32 , 33 ]. Different preferences for video length may be driven by students’ learning motivation [ 34 ]. The puncture procedures in our study involved many steps, so longer video demonstrations were needed, and understandably, students were receptive to the longer videos.

When asked about the most effective way to communicate and interact in the classroom, half of the students in Group B supported teaching each other. The FC comprises student-centred classroom learning activities that ultimately increase interest in, and focus on, the task or tasks being learned [ 13 ]. We also surveyed students’ acceptance of the FC. The questionnaire results showed that most students supported continuing the FC model, indicating a very high level of acceptance among the students. Medical students face the dual pressures of academics and clinical practice daily, and the FC gives them the flexibility to develop their own learning plans without being restricted to a particular time and place [ 35 ].

Analysis of the questionnaire results about students’ views on the OSCE

The OSCE has gained worldwide popularity for its comprehensiveness and objectivity in evaluating medical students’ clinical competence [ 36 , 37 ]. In this study, we assessed the interns’ puncture skills with the OSCE. The questionnaire showed that most students were well aware of this exam model and had a high level of acceptance and approval of it; however, a small number felt that it increased their stress levels and affected their performance. One study showed that the OSCE is among the most anxiety-producing assessments for learners [ 38 ]. Improving examiner training, shortening the time between the exam and feedback on the results, and allowing students to fully understand the feedback may be strategies for reducing immediate anxiety [ 39 ].

Analysis of the questionnaire results about instructors’ views on FC

Based on the questionnaire responses, we found that the FC was supported by the instructors, a result consistent with previous studies [ 11 , 40 ]. A study of FC among ophthalmic trainees found that teachers were more satisfied with FC than with TC in teaching ocular trauma traineeships, and that it met their teaching expectations. The study speculated that the inclusive, lively and student-centred nature of the FC model may have contributed to its popularity [ 11 ]. However, in our study, some instructors reported that the FC requires a great deal of preparation, including producing pre-class instructional videos and interactive content for class. These instructors also face heavy clinical and research pressure and often do not have enough energy to conduct the FC. One study noted that instructors can reduce teaching stress by utilizing existing teaching resources or planning lessons collaboratively [ 41 ]. It is therefore necessary to continue studying FC methods in clinical teaching, adapt them to the characteristics of different clinical disciplines, optimize the teaching structure and reduce the teaching pressure on instructors.

This study offers several innovations. In terms of research methods, it creatively used a mixed approach, combining a quasi-experimental design with descriptive qualitative research. This improves on previous one-sided methods because it evaluates the skills and teaching experience of both instructors and students using quantitative and qualitative methods. The study highlights that the FC model not only improves students’ performance in the puncture skills assessment but also increases students’ and instructors’ acceptance of and satisfaction with puncture skills courses. In its design, the study integrates, for the first time, an FC teaching method that combines instructional videos with collaborative learning. This method allows the effectiveness and benefits of FC to be assessed through both pre-class preparation and classroom teaching, resulting in a more scientific and comprehensive evaluation. It also indicates that the TC has some issues, such as limited inclusiveness and abstract teaching content.

Our study showed that an FC combining instructional videos with group learning performed comparably to, or even better than, TC in improving medical students’ performance on the puncture skills assessment; students’ and instructors’ recognition and acceptance of the FC teaching model were high, and they supported its further promotion for clinical skills teaching. Our study complements current research on applying FC to the teaching of puncture skills. During the current pandemic, FC deserves further promotion in the medical field.

Limitations

Our study had several limitations. First, it was monocentric: although students were receptive to the content and length of the pre-course learning videos, their views do not necessarily represent those of students in other clinical teaching centres. Second, although students completed the pre-course video learning, we do not have detailed data on individual viewing habits or on how well they comprehended the puncture skills. Finally, the differences in abdominal, thoracic and bone marrow puncture skills scores in this study were not substantial; a larger sample size might make them more pronounced. Even so, the results show that the FC model is at least as effective as TC for puncture skill performance. It is worth noting that four of the six dropouts in Group B did not participate in the OSCE and questionnaires because they had to take elective examinations, and the other two could not participate for health reasons. The absence of these six records is therefore completely random and should not bias the final results.

This study supplements and expands existing teaching methods in the medical field by addressing the learning effectiveness of an FC that combines instructional videos with group learning on the puncture skills of medical students. Future research could expand this teaching method to other clinical disciplines according to their characteristics.

Data availability

The datasets used and analysed in this study are available from the corresponding authors upon reasonable request.

Abbreviations

  • FC: Flipped classroom

  • OSCE: Objective Structured Clinical Examination

  • TC: Traditional classroom

Klasen JM, Vithyapathy A, Zante B, Burm S. The storm has arrived: the impact of SARS-CoV-2 on medical students. Perspect Med Educ. 2020;9:181–5.

Sandrone S, Berthaud JV, Carlson C, et al. Education Research: flipped classroom in neurology: principles, practices, and perspectives. Neurology. 2019;93:E106–11.

Pham J, Tran A, O’Leary KS, Youm J, Tran DK, Chen JW. Neurosurgery lectures Benefit from a flipped Class Approach. World Neurosurg. 2022;164:e481–91.

Chen M, Ni C, Hu Y, et al. Meta-analysis on the effectiveness of team-based learning on medical education in China. BMC Med Educ. 2018. https://doi.org/10.1186/s12909-018-1179-1 .

Burgess A, van Diggele C, Roberts C, Mellis C. Tips for teaching procedural skills. BMC Med Educ. 2020. https://doi.org/10.1186/s12909-020-02284-1 .

Sterz J, Linßen S, Stefanescu MC, Schreckenbach T, Seifert LB, Ruesseler M. Implementation of written structured feedback into a surgical OSCE. BMC Med Educ. 2021. https://doi.org/10.1186/s12909-021-02581-3 .

Patrício MF, Julião M, Fareleira F, Carneiro AV. Is the OSCE a feasible tool to assess competencies in undergraduate medical education? Med Teach. 2013;35:503–14.

Fu Y, Zhang W, Zhang S, Hua D, Xu D, Huang H. Applying a video recording, video-based rating method in OSCEs. Med Educ Online. 2023;28:2187949.

Loda T, Erschens RS, Nevins AB, Zipfel S, Herrmann-Werner A. Perspectives, benefits and challenges of a live OSCE during the COVID-19 pandemic in a cross-sectional study. BMJ Open. 2022;12:e058845.

Gan W, Mok TN, Chen J, She G, Zha Z, Wang H, Li H, Li J, Zheng X. Researching the application of virtual reality in medical education: one-year follow-up of a randomized trial. BMC Med Educ. 2023. https://doi.org/10.1186/s12909-022-03992-6 .

Lin Y, Zhu Y, Chen C, et al. Facing the challenges in ophthalmology clerkship teaching: is flipped classroom the answer? PLoS ONE. 2017. https://doi.org/10.1371/journal.pone.0174829 .

Hansen M, Oosthuizen G, Windsor J, Doherty I, Greig S, McHardy K, McCann L. Enhancement of medical interns’ levels of clinical skills competence and self-confidence levels via Video iPods: Pilot Randomized Controlled Trial. J Med Internet Res. 2011. https://doi.org/10.2196/jmir.1596 .

Mayer RE. Multimedia learning.

Sivarajah RT, Curci NE, Johnson EM, Lam DL, Lee JT, Richardson ML. A review of innovative teaching methods. Acad Radiol. 2019;26:101–13.

Lo CK, Hew KF. Design principles for fully online flipped learning in health professions education: a systematic review of research during the COVID-19 pandemic. BMC Med Educ. 2022. https://doi.org/10.1186/s12909-022-03782-0 .

Saeed S, Khan MH, Siddiqui MMU, Dhanwani A, Hussain A, Ali MM. Hybridizing video-based learning with simulation for flipping the clinical skills learning at a university hospital in Pakistan. BMC Med Educ. 2023. https://doi.org/10.1186/s12909-023-04580-y .

Hew KF, Lo CK. Flipped classroom improves student learning in health professions education: a meta-analysis. BMC Med Educ. 2018. https://doi.org/10.1186/s12909-018-1144-z .

Barr J, Graffeo CS. Procedural experience and confidence among Graduating Medical Students. J Surg Educ. 2016;73:466–73.

Dehmer JJ, Amos KD, Farrell TM, Meyer AA, Newton WP, Meyers MO. Competence and confidence with basic procedural skills: the experience and opinions of fourth-year medical students at a single institution. Acad Med. 2013;88:682–7.

Chan E, Botelho MG, Wong GTC. A flipped classroom, same-level peer-assisted learning approach to clinical skill teaching for medical students. PLoS ONE. 2021. https://doi.org/10.1371/journal.pone.0258926 .

Gisondi MA, Regan L, Branzetti J, Hopson LR. More learners, Finite resources, and the changing Landscape of Procedural Training at the Bedside. Acad Med. 2018;93:699–704.

George A, Blaauw D, Green-Thompson L, et al. Comparison of video demonstrations and bedside tutorials for teaching paediatric clinical skills to large groups of medical students in resource-constrained settings. Int J Educational Technol High Educ. 2019. https://doi.org/10.1186/s41239-019-0164-z .

Mehta NB, Hull AL, Young JB, Stoller JK. Just imagine: new paradigms for medical education. Acad Med. 2013;88:1418–23.

Liu KJ, Tkachenko E, Waldman A, et al. A video-based, flipped classroom, simulation curriculum for dermatologic surgery: a prospective, multi-institution study. J Am Acad Dermatol. 2019;81:1271–6.

Wu JC, Chi SC, Wu CC, Kang YN. Helps from flipped classroom in learning suturing skill: the medical students’ perspective. PLoS ONE. 2018. https://doi.org/10.1371/journal.pone.0204698 .

Wang A, Xiao R, Zhang C, et al. Effectiveness of a combined problem-based learning and flipped classroom teaching method in ophthalmic clinical skill training. BMC Med Educ. 2022. https://doi.org/10.1186/s12909-022-03538-w .

von Cranach M, Backhaus T, Brich J. Medical students’ attitudes toward lumbar puncture—and how to change. Brain Behav. 2019. https://doi.org/10.1002/brb3.1310 .

Sun C, Qi X. Evaluation of Problem- and Simulator-based learning in lumbar puncture in adult neurology Residency Training. World Neurosurg. 2018;109:e807–11.

McMillan HJ, Writer H, Moreau KA, Eady K, Sell E, Lobos AT, Grabowski J, Doja A. Lumbar puncture simulation in pediatric residency training: improving procedural competence and decreasing anxiety. BMC Med Educ. 2016. https://doi.org/10.1186/s12909-016-0722-1 .

Sourg HAA, Satti S, Ahmed N, Ahmed ABM. Impact of flipped classroom model in increasing the achievement for medical students. BMC Med Educ. 2023. https://doi.org/10.1186/s12909-023-04276-3 .

Saun TJ, Odorizzi S, Yeung C, Johnson M, Bandiera G, Dev SP. A peer-reviewed Instructional Video is as effective as a standard recorded didactic lecture in medical trainees performing chest tube insertion: a Randomized Control Trial. J Surg Educ. 2017;74:437–42.

Fleagle TR, Borcherding NC, Harris J, Hoffmann DS. Application of flipped classroom pedagogy to the human gross anatomy laboratory: student preferences and learning outcomes. Anat Sci Educ. 2018;11:385–96.

Zhang F, Feng S. The process of personalized learning based on flipped classroom. Sino-US English Teaching. 2017. https://doi.org/10.17265/1539-8072/2017.04.005 .

Xu Y, Chen C, Feng D, Luo Z. A survey of College students on the preference for online teaching videos of variable durations in Online flipped Classroom. Front Public Health. 2022. https://doi.org/10.3389/fpubh.2022.838106 .

Tan E, Brainard A, Larkin GL. Acceptability of the flipped classroom approach for in-house teaching in emergency medicine. EMA - Emerg Med Australasia. 2015;27:453–9.

Babar S, Afzal A. The new-normal osce examination: executing in the covid-19 era. Pak J Med Sci. 2021;37:2026–8.

Feng S, Meng X, Yan Y, Xu X, Xiao D, Brand-Saberi B, Cheng X, Yang XS. Exploring the situational motivation of medical students through clinical medicine level test: a cross-sectional study. Adv Physiol Educ. 2022;46:416–25.

Guraya SY, Guraya SS, Habib F, AlQuiliti KW, Khoshhal KI. Medical students’ perception of test anxiety triggered by different assessment modalities. Med Teach. 2018;40:49–S55.

Daniels VJ, Ortiz S, Sandhu G, Lai H, Yoon MN, Bulut O, Hillier T. Effect of detailed OSCE score reporting on learning and anxiety in Medical School. J Med Educ Curric Dev. 2021;8:238212052199232.

Kurup V, Sendlewski G. The feasibility of incorporating a flipped classroom model in an anesthesia residency curriculum: pilot study. 2020.

Poulain P, Bertrand M, Dufour H, Taly A. A field guide for implementing a flipped classroom. Biochem Mol Biol Educ. 2023;51:410–7.

Acknowledgements

This work was supported by the Education and Teaching Reform Project of Shandong University under grant 2021Y143, and 2023Y044.

Author information

Authors and affiliations

Department of Gastroenterology, The Second Hospital of Shandong University, Jinan, Shandong, 250033, China

Weihao Zhang, Wei Zhao, Shuai Li & Lan Liu

Clinical Skill Training Center, The Second Hospital of Shandong University, Jinan, Shandong, 250033, China

Miao Jiang & Yan Li

Department of Neurology, The Second Hospital of Shandong University, Jinan, Shandong, 250033, China

Department of Respiration, The Second Hospital of Shandong University, Jinan, Shandong, 250033, China

Feifei Feng

Department of Hematology, The Second Hospital of Shandong University, Jinan, Shandong, 250033, China

Yongjing Wang

Contributions

WHZ: study design, implementation of activities, data collection, data analysis, data interpretation, and preparation of the manuscript; MJ: study design, data collection, providing equipment, funding acquisition and data interpretation; WZ: implementation of activities, data collection, and data interpretation; SL: participation in assessment, data interpretation; FL: participation in assessment, data interpretation; FFF: participation in assessment, data interpretation; YJW: participation in assessment, data interpretation. YL: implementation of activities, data interpretation; and LL: study design, conceptualization, funding acquisition, resource provision, and revision of the manuscript. All authors reviewed the manuscript.

Corresponding author

Correspondence to Lan Liu .

Ethics declarations

Ethics approval and consent to participate

This study was performed in accordance with the Helsinki Declaration and was approved by the Ethics Committee of the Second Hospital of Shandong University (LCLL-2022-011). All participants, including students and instructors, were informed in detail of the research plan, and we obtained their verbal informed consent prior to this study. This informed consent procedure was also approved by the Ethics Committee of the Second Hospital of Shandong University (LCLL-2022-011).

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Below is the link to the electronic supplementary material.

Supplementary Material 1

Supplementary Material 2

Supplementary Material 3

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article.

Zhang, W., Jiang, M., Zhao, W. et al. Evaluation of the effectiveness of using flipped classroom in puncture skills teaching. BMC Med Educ 24 , 176 (2024). https://doi.org/10.1186/s12909-024-05132-8

Received : 17 July 2023

Accepted : 04 February 2024

Published : 23 February 2024

DOI : https://doi.org/10.1186/s12909-024-05132-8

Keywords

  • Medical students
  • Objective structured clinical examinations
  • Questionnaire

BMC Medical Education

ISSN: 1472-6920
