Poll Ranks Biden as 14th-Best President, With Trump Last

President Biden may owe his place in the top third to his predecessor: Mr. Biden’s signature accomplishment, according to the historians, was evicting Donald J. Trump from the Oval Office.

President Biden standing at the top of the steps leading to Air Force One.

By Peter Baker

Peter Baker has covered the past five presidents, ranked seventh, 12th, 14th, 32nd and 45th in the survey.

President Biden has not had a lot of fun perusing polls lately. He has a lower approval rating than every president going back to Dwight D. Eisenhower at this stage of their tenures, and he trails former President Donald J. Trump in a fall rematch. But Mr. Biden can take solace from one survey in which he is way out in front of Mr. Trump.

A new poll of historians coming out on Presidents’ Day weekend ranks Mr. Biden as the 14th-best president in American history, just ahead of Woodrow Wilson, Ronald Reagan and Ulysses S. Grant. While that may not get Mr. Biden a spot on Mount Rushmore, it certainly puts him well ahead of Mr. Trump, who places dead last as the worst president ever.

Indeed, Mr. Biden may owe his place in the top third in part to Mr. Trump. Although he has claims to a historical legacy by managing the end of the Covid pandemic; rebuilding the nation’s roads, bridges and other infrastructure; and leading an international coalition against Russian aggression, Mr. Biden’s signature accomplishment, according to the historians, was evicting Mr. Trump from the Oval Office.

“Biden’s most important achievements may be that he rescued the presidency from Trump, resumed a more traditional style of presidential leadership and is gearing up to keep the office out of his predecessor’s hands this fall,” wrote Justin Vaughn and Brandon Rottinghaus, the college professors who conducted the survey and announced the results in The Los Angeles Times.

Mr. Trump might not care much what a bunch of academics think, but for what it’s worth he fares badly even among the self-identified Republican historians. Finishing 45th overall, Mr. Trump trails even the mid-19th-century failures who blundered the country into a civil war or botched its aftermath like James Buchanan, Franklin Pierce and Andrew Johnson.

Judging modern-day presidents, of course, is a hazardous exercise, one shaped by the politics of the moment and not necessarily reflective of how history will look a century from now. Even long-ago presidents can move up or down such polls depending on the changing cultural mores of the times the surveys are conducted.

For instance, Barack Obama, finishing at No. 7 this year, is up nine places since 2015, as is Grant, now ranked 17th. On the other hand, Andrew Jackson has fallen 12 places to 21st while Wilson (15th) and Reagan (16th) have each fallen five places.

At least some of that may owe to the increasing contemporary focus on racial justice. Mr. Obama, of course, was the nation’s first Black president, and Grant’s war against the Ku Klux Klan has come to balance out the corruption of his administration. But more attention today has focused on Jackson’s brutal campaigns against Native Americans and his “Trail of Tears” forced removal of Indigenous communities, and Wilson’s racist views and resegregation of parts of the federal government.

As usual, Abraham Lincoln, Franklin D. Roosevelt, George Washington, Theodore Roosevelt and Thomas Jefferson top the list, and historians generally share similar views of many presidents regardless of their own personal ideology or partisan affiliation. But some modern presidents generate more splits among the historians along party lines.

Among Republican scholars, for instance, Reagan finishes fifth, George H.W. Bush 11th, Mr. Obama 15th and Mr. Biden 30th, while among Democratic historians, Reagan is 18th, Mr. Bush 19th, Mr. Obama sixth and Mr. Biden 13th. Other than Grant and Mr. Biden, the biggest disparity is over George W. Bush, who is ranked 19th among Republicans and 33rd among Democrats.

Intriguingly, one modern president who generates little partisan difference is Bill Clinton. In fact, Republicans rank him slightly higher, at 10th, than Democrats do, at 12th, perhaps reflecting some #MeToo era rethinking and liberal unease over his centrist politics.

The survey, conducted by Mr. Vaughn, an associate professor of political science at Coastal Carolina University, and Mr. Rottinghaus, a professor of political science at the University of Houston, was based on 154 responses from scholars across the country.

Peter Baker is the chief White House correspondent for The Times. He has covered the last five presidents and sometimes writes analytical pieces that place presidents and their administrations in a larger context and historical framework.

Quantitative Approaches for the Evaluation of Implementation Research Studies

Justin D. Smith

1 Center for Prevention Implementation Methodology (Ce-PIM) for Drug Abuse and HIV, Department of Psychiatry and Behavioral Sciences, Northwestern University Feinberg School of Medicine, 750 N Lake Shore Dr., Chicago, Illinois, USA.

Mohamed Hasan

2 Center for Healthcare Studies, Institute of Public Health and Medicine, Northwestern University Feinberg School of Medicine, 633 N St. Clair St., Chicago, Illinois, USA.

Abstract

Implementation research necessitates a shift from clinical trial methods, both in how a study is conducted and in how it is evaluated, because the focus is on the impact of implementation strategies, that is, the methods or techniques used to support the adoption and delivery of a clinical or preventive intervention, program, or policy. Because strategies target one or more levels within the service delivery system, evaluating their impact needs to follow suit. This article discusses the methods and practices involved in quantitative evaluations of implementation research studies. We focus on evaluation methods that characterize and quantify the overall impacts of an implementation strategy on various outcomes. We then discuss available measurement methods for the common quantitative implementation outcomes involved in such an evaluation (adoption, fidelity, implementation cost, reach, and sustainment) and the sources of data for these metrics, drawing on established taxonomies and frameworks. Last, we present an example of a quantitative evaluation from an ongoing randomized rollout implementation trial of the Collaborative Care Model for depression management in a large primary healthcare system.

1. Background

As part of this special issue on implementation science, this article discusses quantitative methods for evaluating implementation research studies and presents an example of an ongoing implementation trial for illustrative purposes. We focus on what is called “summative evaluation,” which characterizes and quantifies the impacts of an implementation strategy on various outcomes (Gaglio & Glasgow, 2017). This type of evaluation involves aggregation methods conducted at the end of a study to assess the success of an implementation strategy on the adoption, delivery, and sustainment of an evidence-based practice (EBP), and the cost associated with implementation (Bauer, Damschroder, Hagedorn, Smith, & Kilbourne, 2015). These results help decision makers understand the overall worth of an implementation strategy and whether to scale it up, modify it, or discontinue it (Bauer et al., 2015). This topic complements others in this issue on formative evaluation (Elwy et al.) and qualitative methods (Hamilton et al.), which are also used in implementation research evaluation.

Implementation research, as defined by the United States National Institutes of Health (NIH), is “the scientific study of the use of strategies [italics added] to adopt and integrate evidence-based health interventions into clinical and community settings in order to improve patient outcomes and benefit population health. Implementation research seeks to understand the behavior of healthcare professionals and support staff, healthcare organizations, healthcare consumers and family members, and policymakers in context as key influences on the adoption, implementation and sustainability of evidence-based interventions and guidelines” ( Department of Health and Human Services, 2019 ). Implementation strategies are methods or techniques used to enhance the adoption, implementation, and sustainability of a clinical program or practice ( Powell et al., 2015 ).

To grasp the evaluation methods used in implementation research, one must appreciate the nature of this research and how the study designs, aims, and measures differ in fundamental ways from those methods with which readers will be most familiar—that is, evaluations of clinical efficacy or effectiveness trials. First, whereas clinical intervention research focuses on how a given clinical intervention—meaning a pill, program, practice, principle, product, policy, or procedure ( Brown et al., 2017 )—affects a health outcome at the patient level, implementation research focuses on how systems can take that intervention to scale in order to improve health outcomes of the broader community ( Colditz & Emmons, 2017 ). Thus, when implementation strategies are the focus, the outcomes evaluated are at the system level. Figure 1 illustrates the emphasis (foreground box) of effectiveness versus implementation research and the corresponding outcomes that would be included in the evaluation. This difference can be illustrated by “hybrid trials” in which effectiveness and implementation are evaluated simultaneously but with different outcomes for each aim ( Curran, Bauer, Mittman, Pyne, & Stetler, 2012 ; also see Landes et al., this issue).

Figure 1. Emphasis and Outcomes Evaluated in Clinical Effectiveness versus Implementation Research. Note. Adapted from a slide developed by C. Hendricks Brown.

2. Design Considerations for Evaluating Implementation Research Studies

The stark contrast between the emphasis in implementation versus effectiveness trials occurs largely because implementation strategies most often, but not always, target one or more levels within the system that supports the adoption and implementation of the intervention, such as the provider, clinic, school, health department, or even state or national levels ( Powell et al., 2015 ). Implementation strategies are discussed in this issue by Kirchner and colleagues. With the focus on levels within which patients who receive the clinical or preventive intervention are embedded, research designs in implementation research follow suit. The choice of a study design to evaluate an implementation strategy influences the confidence in the association drawn between a strategy and an observed effect ( Grimshaw, Campbell, Eccles, & Steen, 2000 ). Strong designs and methodologically-robust studies support the validity of the evaluations and provide evidence likely to be used by policy makers. Study designs are generally classified into observational (descriptive) and experimental/quasi-experimental.

Brown et al. (2017) described three broad types of designs for implementation research. (1) Within-site designs involve evaluation of the effects of implementation strategies within a single service system unit (e.g., clinic, hospital). Common within-site designs include post, pre-post, and interrupted time series. While these designs are simple and can be useful for understanding the impact in a local context (Cheung & Duan, 2014), they contribute limited generalizable knowledge due to the biases inherent in small-sample studies with no direct comparison condition. Brown et al. describe two broad design types that can be used to create generalizable knowledge, as they inherently involve multiple units for aggregation and comparison using the evaluation methods described in this article. (2) Between-site designs involve comparison of outcomes between two or more service system units or clusters/groups of units. While they commonly involve testing a novel implementation strategy against routine practice (i.e., implementation as usual), they can also be head-to-head tests of two or more novel implementation strategies for the same intervention, which we refer to as a comparative implementation trial (e.g., Smith et al., 2019). (3) Within- and between-site designs add a time-based crossover for each unit, in which units begin in one condition (usually routine practice) and then move to a second condition involving the introduction of the implementation strategy. We refer to this category as rollout trials, which includes the stepped-wedge and dynamic wait-list designs (Brown et al., 2017; Landsverk et al., 2017; Wyman, Henry, Knoblauch, & Brown, 2015). Designs for implementation research are discussed in this issue by Miller and colleagues.

3. Quantitative Methods for Evaluating Implementation Outcomes

While summative evaluation is distinguishable from formative evaluation (see Elwy et al., this issue), proper understanding of an implementation strategy requires using both methods, perhaps at different stages of implementation research (The Health Foundation, 2015). Formative evaluation is a rigorous assessment process designed to identify potential and actual influences on the effectiveness of implementation efforts (Stetler et al., 2006). Earlier stages of implementation research might rely solely on formative evaluation and the use of qualitative and mixed methods approaches. In contrast, later stage implementation research involves powered tests of the effect of one or more implementation strategies and is thus likely to use a between-site or a within- and between-site research design with at least one quantitative outcome. Quantitative methods are especially important for examining the extent and variation of change (within and across units) induced by the implementation strategies.

Proctor and colleagues (2011) provide a taxonomy of available implementation outcomes, which include acceptability, adoption, appropriateness, feasibility, fidelity, implementation cost, penetration/reach, and sustainability/sustainment. Table 1 in this article presents a modified version of Table 1 from Proctor et al. (2011) , focusing only on the quantitative measurement characteristics of these outcomes. Table 1 also includes the additional metrics of speed and quantity, which will be discussed in more detail in the case example. As noted in Table 1 , and by Proctor et al. (2011) , certain outcomes are more applicable at earlier versus later stages of implementation research. A recent review of implementation research in the field of HIV indicated that earlier stage implementation research was more likely to focus on acceptability and feasibility, whereas later stage testing of implementation strategies focused less on these and more on adoption, cost, penetration/reach, fidelity, and sustainability ( Smith et al., 2019 ). These sources of quantitative information are at multiple levels in the service delivery system, such as the intervention delivery agent, leadership, and key stakeholders in and outside of a particular delivery system ( Brown et al., 2013 ).

Table 1. Quantitative Measurement Characteristics of Common Implementation Outcomes. Note. This table is modeled after Table 1 in the Proctor et al. (2011) article.

Methods for quantitative data collection include structured surveys; use of administrative records, including payor and health expenditure records; extraction from the electronic health record (EHR); and direct observation. Structured surveys are commonly used to assess attitudes and perceptions of providers and patients concerning such factors as the ability to sustain the intervention and a host of potential facilitators and barriers to implementation (e.g., Bertrand, Holtgrave, & Gregowski, 2009; Luke, Calhoun, Robichaux, Elliott, & Moreland-Russell, 2014). Administrative databases and the EHR are used to assess aspects of intervention delivery that result from the implementation strategies (Bauer et al., 2015). Although the EHR supports automatic and cumulative data acquisition, its utility for measuring implementation outcomes depends on the type of implementation strategy and the intervention. For example, it is well suited for gathering data on EHR-based implementation strategies, such as clinical decision supports and symptom screening, but less useful for behaviors that would not otherwise be documented in the EHR (e.g., effects of a learning collaborative on adoption of a cognitive behavioral therapy protocol). Last, observational assessment of implementation is fairly common but resource intensive, which limits its use outside of funded research. This is particularly germane to assessing fidelity of implementation, which is commonly measured observationally in funded research but rarely when the intervention is adopted under real-world circumstances (Schoenwald et al., 2011). The costs associated with observational fidelity measurement have led to promising efforts to automate this process with machine learning methods (e.g., Imel et al., 2019).

Quantitative evaluation of implementation research studies most commonly involves assessment of multiple outcome metrics to garner a comprehensive appraisal of the effects of the implementation strategy. This is due in large part to the interrelatedness and interdependence of these metrics. A shortcoming of the Proctor et al. (2011) taxonomy is that it does not specify relations between outcomes; rather, they are simply listed. The RE-AIM evaluation framework (Gaglio, Shoup, & Glasgow, 2013; Glasgow, Vogt, & Boles, 1999) is commonly used and includes consideration of the interrelatedness of both the implementation outcomes and the clinical effectiveness of the intervention being implemented. Thus, it is particularly well suited for effectiveness-implementation hybrid trials (Curran et al., 2012; also see Landes et al., this issue). RE-AIM stands for Reach, Effectiveness (of the clinical or preventive intervention), Adoption, Implementation, and Maintenance. Each metric is important for determining the overall public health impact of the implementation, but they are somewhat interdependent. As such, RE-AIM dimensions can be presented in some combination, such as the “public health impact” metric (reach rate multiplied by the effect size of the intervention) (Glasgow, Klesges, Dzewaltowski, Estabrooks, & Vogt, 2006). RE-AIM is one in a class of evaluation frameworks; for a review, see Tabak, Khoong, Chambers, and Brownson (2012).
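
To make this composite concrete, the short sketch below computes a public health impact value from a reach rate and an effect size, in the spirit of Glasgow et al. (2006). The numbers are hypothetical placeholders, not results from any study cited in this article.

```python
def reach_rate(n_reached: int, n_eligible: int) -> float:
    """Proportion of the eligible population that actually received the intervention."""
    return n_reached / n_eligible


def public_health_impact(reach: float, effect_size: float) -> float:
    """RE-AIM-style composite: reach rate multiplied by the intervention effect size."""
    return reach * effect_size


# Hypothetical example: 400 of 1,000 eligible patients reached,
# standardized effect size d = 0.50 for the clinical intervention.
r = reach_rate(400, 1_000)            # 0.40
print(public_health_impact(r, 0.50))  # 0.20
```

Two strategies that produce the same clinical effect size can therefore differ sharply in public health impact if one reaches far fewer eligible patients.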

4. Resources for Quantitative Evaluation in Implementation Research

There are a number of useful resources for the quantitative measures used to evaluate implementation research studies. First is the Instrument Review Project affiliated with the Society for Implementation Research Collaboration ( Lewis, Stanick, et al., 2015 ). The results of this systematic review of measures indicated significant variability in the coverage of measures across implementation outcomes and salient determinants of implementation (commonly referred to as barriers and facilitators). The authors reviewed each identified measure for the psychometric properties of internal consistency, structural validity, predictive validity, having norms, responsiveness, and usability (pragmatism). Few measures were deemed high-quality and psychometrically sound due in large part to not using gold-standard measure development methods. This review is ongoing and a website ( https://societyforimplementationresearchcollaboration.org/sirc-instrument-project/ ) is continuously updated to reflect completed work, as well as emerging measures in the field, and is available to members of the society. A number of articles and book chapters provide critical discussions of the state of measurement in implementation research, noting the need for validation of instruments, use across studies, and pragmatism ( Emmons, Weiner, Fernandez, & Tu, 2012 ; Lewis, Fischer, et al., 2015 ; Lewis, Proctor, & Brownson, 2017 ; Martinez, Lewis, & Weiner, 2014 ; Rabin et al., 2016 ).

The RE-AIM website also includes various means of operationalizing the components of this evaluation framework ( http://www.re-aim.org/resources-and-tools/measures-and-checklists/ ) and recent reviews of the use of RE-AIM are also helpful when planning a quantitative evaluation ( Gaglio et al., 2013 ; Glasgow et al., 2019 ). Additionally, the Grid-Enabled Measures Database (GEM), hosted by the National Cancer Institute, has an ever-growing list of implementation-related measures (130 as of July, 2019) with a general rating by users ( https://www.gem-measures.org/public/wsmeasures.aspx?cat=8&aid=1&wid=11 ). Last, Rabin et al. (2016) provide an environmental scan of resources for measures in implementation and dissemination science.

5. Pragmatism: Reducing Measurement Burden

An emphasis in the field has been on finding ways to reduce the measurement burden on implementers, and to a lesser extent on implementation researchers, in order to reduce costs and increase the pace of dissemination (Glasgow et al., 2019; Glasgow & Riley, 2013). Powell et al. (2017) established criteria for pragmatic measures that resulted in four distinct categories: (1) acceptable, (2) compatible, (3) easy, and (4) useful; next steps are to develop consensus regarding the most important criteria and to develop quantifiable rating criteria for assessing implementation measures on their pragmatism. Advances have also been made in using technology for the evaluation of implementation (Brown et al., 2015). For example, automated and unobtrusive implementation measures can greatly reduce stakeholder burden and increase response rates. As an example, our group (Wang et al., 2016) conducted a proof of concept demonstrating the use of text analysis to automatically classify the completion of implementation activities using communication logs between the implementer and the implementing agency. As mentioned earlier in this article, researchers have begun to automate the assessment of implementation fidelity to such evidence-based interventions as motivational interviewing (e.g., Imel et al., 2019; Xiao, Imel, Georgiou, Atkins, & Narayanan, 2015), and this work is expanding to other intervention protocols to aid in implementation quality (Smith et al., 2018).
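
As an illustration of the general idea only (this is not the pipeline used by Wang et al., 2016, and the log entries, labels, and model choice below are all invented), a minimal text classifier for sorting communication-log entries into implementation activities might look like this:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Invented training data: log entries annotated with the implementation activity they document.
logs = [
    "scheduled kickoff call with clinic manager",
    "sent draft care workflow to practice for review",
    "confirmed provider training dates for March",
    "posted job ad for behavioral care manager",
]
labels = ["engagement", "workflow", "training", "hiring"]

# Bag-of-words classifier: a proof-of-concept sketch, not a validated model.
classifier = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                           LogisticRegression(max_iter=1000))
classifier.fit(logs, labels)

print(classifier.predict(["set up provider training session next month"]))  # likely 'training'
```

In practice such a model would be trained on many annotated logs and validated against human coding before being trusted to track implementation activities.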

6. Example of a Quantitative Evaluation of an Implementation Research Study

We now present the quantitative evaluation plan for an ongoing hybrid type II effectiveness-implementation trial (see Landes et al., this issue) examining the effectiveness and implementation of the Collaborative Care Model (CCM; Unützer et al., 2002) for the management of depression in adult primary care clinics of Northwestern Medicine (Principal Investigator: Smith). CCM is a structure for population-based management of depression involving the primary care provider, a behavioral care manager, and a consulting psychiatrist. A meta-analysis of 79 randomized trials (n = 24,308) concluded that CCM is more effective than standard care for short- and long-term treatment of depression (Archer et al., 2012). CCM has also been shown to provide good economic value (Jacob et al., 2012).

Our study involves 11 primary care practices in a rollout implementation design (see Figure 2). Randomization in rollout designs occurs by start time of the implementation strategy and strengthens confidence in the results of the evaluation because known and unknown biases are expected to be distributed equally across conditions (Grimshaw et al., 2000). Rollout trials are both powerful and practical: many organizations feel it is unethical to withhold effective interventions, and rollout designs reduce the logistic and resource demands of delivering the strategy to all units simultaneously. The co-primary aims of the study concern the effectiveness of CCM and its implementation, respectively: 1) test the effectiveness of CCM to improve depression symptomatology and access to psychiatric services within the primary care environment; and 2) evaluate the impact of our strategy package on the progressive improvement in speed and quantity of CCM implementation over successive clinics. We will use training and educational implementation strategies, provided to primary care providers, support staff (e.g., nurses, medical assistants), and practice and system leadership, as well as monitoring and feedback to the practices. Figure 3 summarizes the quantitative evaluation being conducted in this trial using the RE-AIM framework.
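
For readers unfamiliar with staggered-start randomization, the sketch below generates a matched-pair rollout schedule of the general kind shown in Figure 2. The clinic names, pairings, spacing, and seed are hypothetical; the trial's actual matching scheme is not reproduced here.

```python
import random
from datetime import date, timedelta

# Hypothetical clinics already matched into pairs on size and patient mix.
matched_pairs = [("Clinic A", "Clinic B"), ("Clinic C", "Clinic D"), ("Clinic E", "Clinic F")]

random.seed(2020)                 # reproducible assignment
first_start = date(2020, 1, 1)    # first crossover from usual care to CCM
spacing = timedelta(days=120)     # a new clinic starts roughly every 4 months

schedule = []
for wave, pair in enumerate(matched_pairs):
    earlier, later = random.sample(pair, 2)   # randomize start order within each matched pair
    schedule.append((earlier, first_start + 2 * wave * spacing))
    schedule.append((later, first_start + (2 * wave + 1) * spacing))

for clinic, crossover in schedule:
    print(f"{clinic}: crosses over to CCM on {crossover}")
```

Because every clinic eventually receives the strategy, a schedule of this kind addresses the ethical concern about withholding an effective intervention while still providing concurrent comparison periods.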

Figure 2. Design and Timeline of Randomized Rollout Implementation Trial of CCM. Note. CCM = Collaborative Care Model. Clinics will have a staggered start every 3–4 months randomized using a matching scheme. The pre-implementation assessment period is 4 months. Evaluation of CCM implementation will be a minimum of 24 months at each clinic.

Figure 3. Summative Evaluation Metrics of CCM Implementation Using the RE-AIM Framework. Note. CCM = Collaborative Care Model. EHR = electronic health record.

6.1. EHR and other administrative data sources

As this is a type II effectiveness-implementation hybrid trial, Aim 1 encompasses both the reach of depression management by CCM within primary care (an implementation outcome) and the effectiveness of CCM at improving patient and service outcomes. Within RE-AIM, the public health impact metric is effectiveness (effect size) multiplied by the reach rate. EHR and administrative data are being used to evaluate the primary implementation outcome of reach (i.e., the proportion of patients in the practice who are eligible for CCM and who are referred). The reach rates achieved after implementation of CCM can be compared to rates of mental health contact for patients with depression prior to implementation, as well as to the rates achieved by other CCM implementation evaluations in the literature.
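
A minimal sketch of how reach (and, analogously, the provider-level adoption metric described below) could be computed from a de-identified EHR extract follows. The field names (patient_id, provider_id, ccm_eligible, ccm_referral_date) and the toy data are assumptions for illustration, not the trial's actual data model.

```python
import pandas as pd

# Hypothetical EHR extract: one row per patient seen in a practice during the evaluation window.
ehr = pd.DataFrame({
    "patient_id":        [1, 2, 3, 4, 5, 6],
    "provider_id":       ["p1", "p1", "p2", "p2", "p3", "p3"],
    "ccm_eligible":      [True, True, True, False, True, True],
    "ccm_referral_date": ["2021-02-03", None, "2021-03-10", None, None, "2021-04-22"],
})

eligible = ehr[ehr["ccm_eligible"]].copy()
eligible["referred"] = eligible["ccm_referral_date"].notna()

reach = eligible["referred"].mean()                                   # eligible patients referred
adoption = eligible.groupby("provider_id")["referred"].any().mean()   # providers making any referral

print(f"Reach: {reach:.0%}, adoption: {adoption:.0%}")  # 60% and 100% in this toy extract
```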

The primary effectiveness outcome of CCM is the reduction of patients’ depression symptom severity. De-identified longitudinal patient outcome data from the EHR, principally depression diagnosis and scores on the PHQ-9 (Kroenke, Spitzer, & Williams, 2001), will be analyzed to evaluate the impact of CCM. Other indicators of the effectiveness of CCM will be evaluated as well but are not discussed here, as they are likely to be familiar to most readers with knowledge of clinical trials. Service outcomes are drawn from the Institute of Medicine’s Standards of Care (Institute of Medicine Committee on Crossing the Quality Chasm, 2006) and center on providing care that is effective (providing services based on scientific knowledge to all who could benefit and refraining from providing services to those not likely to benefit), timely (reducing waits and sometimes harmful delays for both those who receive and those who give care), and equitable (providing care that does not vary in quality because of personal characteristics such as gender, ethnicity, geographic location, and socioeconomic status). We also sought to provide care that is safe, patient-centered, and efficient.
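
As a hedged illustration of the kind of longitudinal model that could relate CCM exposure to PHQ-9 severity in a staggered rollout, the sketch below simulates clustered data and fits a linear mixed model with a clinic-level random intercept. The column names, simulated effects, and model specification are assumptions for illustration, not the trial's pre-specified analysis (which would, for example, also model repeated measures within patients).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated long-format data: PHQ-9 scores for patients nested in clinics that
# cross over from usual care to CCM at staggered times (all values are invented).
rng = np.random.default_rng(1)
rows = []
for clinic in range(8):
    crossover_month = 3 * (clinic + 1)         # staggered crossover per clinic
    clinic_effect = rng.normal(0, 1)
    for patient in range(20):
        patient_level = rng.normal(14, 4) + clinic_effect
        for month in (0, 6, 12, 18, 24):
            ccm = int(month >= crossover_month)
            phq9 = patient_level - 0.05 * month - 2.5 * ccm + rng.normal(0, 3)
            rows.append({"clinic": clinic, "month": month, "ccm": ccm, "phq9": phq9})
df = pd.DataFrame(rows)

# Linear mixed model: PHQ-9 severity as a function of time and CCM exposure,
# with a random intercept for clinic to respect the clustered rollout design.
result = smf.mixedlm("phq9 ~ month + ccm", data=df, groups="clinic").fit()
print(result.summary())
```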

EHR data will also be used to determine adoption of CCM (i.e., the number of providers with eligible patients who refer to CCM). This can be accomplished by tracking patient screening results and intakes completed by the CCM behavioral care manager within the primary care clinician’s encounter record.

6.2. Speed and quantity of implementation

Achievement of Aim 2 requires an evaluation approach and an appropriate trial design to obtain results that can contribute to generalizable knowledge. A rigorous rollout implementation trial design was devised, with matched-pair randomization of when each practice would change from usual care to CCM. Figure 2 provides a schematic of the design with the timing of the crossover from standard practice to CCM implementation. The first thing one will notice about the design is the sequential nature of the rollout, in which implementation at earlier sites precedes the onset of implementation at later sites. This creates the potential to learn from successes and challenges and to improve implementation efficiency (speed) over time. We will use the Universal SIC® (Saldana, Schaper, Campbell, & Chapman, 2015), a date-based, observational measure, to capture the speed of implementation of the various activities needed to successfully implement CCM, such as “establishing a workflow,” “preparing for training,” and “behavioral care manager hired.” This measure is completed by practice staff and members of the implementation team based on their direct knowledge of precisely when each activity was completed. Using the completion date of each activity, we will analyze the time elapsed in each practice to complete each stage (Duration Score). Then, we will calculate the percentage of stages completed (Proportion Score). These scores can then be used in statistical analyses to understand the factors that contributed to timely stage completion; to identify the stages that are important for successful program implementation by relating the SIC to other implementation outcomes, such as reach rate; and to determine whether implementation efficiency and scale improved as the rollout took place, that is, whether later sites completed more stages more quickly than earlier ones in the rollout schedule. This analysis comprises the implementation domain of RE-AIM. It will be used in combination with other metrics from the EHR to determine the fidelity of implementation, which is consistent with RE-AIM.
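
The sketch below shows how Duration and Proportion Scores of this general kind could be computed from activity completion dates for a single practice. The stage names and dates are invented, and the scoring is a simplification of the Universal SIC® procedure rather than a reproduction of it.

```python
from datetime import date

# Invented completion dates for one practice (None = activity not yet completed).
stage_dates = {
    "establish workflow":            date(2021, 1, 15),
    "prepare for training":          date(2021, 3, 1),
    "behavioral care manager hired": date(2021, 5, 20),
    "first patient enrolled":        None,
}

completed = [d for d in stage_dates.values() if d is not None]

# Duration Score (simplified): days elapsed from the first to the last completed activity.
duration_days = (max(completed) - min(completed)).days

# Proportion Score: share of activities the practice completed.
proportion_completed = len(completed) / len(stage_dates)

print(f"Duration: {duration_days} days, proportion completed: {proportion_completed:.0%}")
```

Comparing scores like these across practices by their position in the rollout schedule is what allows the trial to ask whether later sites implemented faster and more completely than earlier ones.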

6.3. Surveys

To understand the process and the determinants of implementation (the factors that impede or promote adoption and delivery with fidelity), a battery of surveys was administered at multiple time points to key staff members in each practice. One challenge with large-scale implementation research is the need for measures to be both psychometrically sound and pragmatic. With this in mind, we adapted a set of questions for the current trial that were developed and validated in prior studies. This low-burden assessment comprises items from four validated implementation surveys concerning factors in the inner setting of the organization: the Implementation Leadership Scale (Aarons, Ehrhart, & Farahnak, 2014), the Evidence-Based Practice Attitude Scale (Aarons, 2004), the Clinical Effectiveness and Evidence-Based Practice Questionnaire (Upton & Upton, 2006), and the Organizational Change Recipients’ Beliefs Scale (Armenakis, Bernerth, Pitts, & Walker, 2007). In a prior study, we used confirmatory factor analysis to evaluate the four scales after shortening them for pragmatism and tailoring the wording of the items (when appropriate) to the context under investigation (Smith et al., under review). Further, different versions of the survey were created for administration to the various professional roles in the organization. Results showed that the scales were largely replicated after shortening and tailoring, internal consistencies were acceptable, and the factor structures were statistically invariant across professional role groups. The same process was undertaken for this study, with versions of the battery developed for providers, practice leadership, support staff, and the behavioral care managers. The survey was administered immediately after initial training in the model and then again at 4, 12, and 24 months. Items were added after the baseline survey regarding the process of implementation thus far and the most prominent barriers and facilitators to implementation of CCM in the practice. Survey-based evaluation of maintenance in RE-AIM, also called sustainability, will occur via the Clinical Sustainability Assessment Tool (Luke, Malone, Prewitt, Hackett, & Lin, 2018), administered to key decision makers at multiple levels in the healthcare system.
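
As a simple illustration of the internal-consistency checks mentioned above, the function below computes Cronbach's alpha for a small simulated item set. The simulated responses and scale size are invented; this is not the validation analysis from the cited work.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x n_items) matrix of item scores."""
    k = items.shape[1]
    sum_of_item_variances = items.var(axis=0, ddof=1).sum()
    variance_of_total = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_of_item_variances / variance_of_total)

# Simulated responses: 50 staff members answering a 4-item subscale on a 1-5 Likert scale.
rng = np.random.default_rng(0)
latent = rng.normal(3, 1, size=(50, 1))                      # shared latent attitude
items = np.clip(np.rint(latent + rng.normal(0, 0.5, size=(50, 4))), 1, 5)

print(f"alpha = {cronbach_alpha(items):.2f}")
```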

6.4. Cost of implementation

The costs incurred in adopting and delivering a new clinical intervention are among the most commonly cited reasons that behavioral interventions are not adopted (Glasgow & Emmons, 2007). While cost-effectiveness and cost-benefit analyses demonstrate the long-term economic benefits associated with the effects of these interventions, they rarely consider the costs to the implementer as a distinct component (Ritzwoller, Sukhanova, Gaglio, & Glasgow, 2009). As such, decision makers value other kinds of economic evaluations, such as budget impact analysis, which assesses the expected short-term changes in expenditures for a health care organization or system adopting a new intervention (Jordan, Graham, Berkel, & Smith, 2019), and cost-effectiveness analysis from the perspective of the implementing system rather than simply the individual recipient of the evidence-based intervention being implemented (Raghavan, 2017). Eisman and colleagues (this issue) discuss economic evaluations in implementation research.

In our study, the economic approach focuses on the cost to Northwestern Medicine of delivering CCM and will incorporate reimbursement from payors so that the costs to the system are recouped in a way that allows the program to be sustained over time under current models of compensated care. The cost-effectiveness of CCM has been established (Jacob et al., 2012), but we will also quantify the cost of achieving salient health outcomes for the patients involved, such as the cost to achieve remission, as well as projected costs that would increase remission rates.
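
A minimal sketch of the cost-per-outcome arithmetic from the implementing system's perspective follows. Every figure is a made-up placeholder, not an estimate from this trial or from the cited economic literature.

```python
# Hypothetical annual figures for one health system implementing CCM.
implementation_cost   = 250_000.0   # training, behavioral care manager salaries, EHR build, etc.
payor_reimbursement   = 180_000.0   # collaborative care billing revenue recouped from payors
additional_remissions = 140         # remissions attributable to CCM beyond usual care

net_cost = implementation_cost - payor_reimbursement
cost_per_remission = net_cost / additional_remissions

print(f"Net cost to the system: ${net_cost:,.0f}")
print(f"Cost per additional remission: ${cost_per_remission:,.0f}")
```

A budget impact analysis would extend this arithmetic over the system's planning horizon rather than a single hypothetical year.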

7. Conclusions

The field of implementation research has developed methods for conducting quantitative evaluation to summarize the overall, aggregate impact of implementation strategies on salient outcomes. Methods are still emerging to aid researchers in the specification and planning of evaluations for implementation studies (e.g., Smith, 2018). However, as noted in the case example, evaluations focused only on the aggregate results of a study should not be conducted in the absence of ongoing formative evaluation, such as in-protocol audit and feedback and other methods (see Elwy et al., this issue), and mixed and/or qualitative methods (see Hamilton et al., this issue), both of which are critical for interpreting the results of evaluations that aggregate across a large trial and for gauging the generalizability of the findings. In large part, the intent of quantitative evaluations of large trials in implementation research aligns with that of their clinical-level counterparts, but with the emphasis on the factors in the service delivery system associated with adoption and delivery of the clinical intervention rather than on the direct recipients of that intervention (see Figure 1). The case example shows how both can be accomplished in an effectiveness-implementation hybrid design (see Landes et al., this issue). This article reflects current thinking on quantitative outcome evaluation in the context of implementation research. Given the quickly evolving nature of the field, interested readers should consult the most up-to-date resources for guidance on quantitative evaluation.

  • Quantitative evaluation can be conducted in the context of implementation research to determine impact of various strategies on salient outcomes.
  • The defining characteristics of implementation research studies are discussed.
  • Quantitative evaluation frameworks and measures for key implementation research outcomes are presented.
  • Application is illustrated using a case example of implementing collaborative care for depression in primary care practices in a large healthcare system.

Acknowledgements

The authors wish to thank Hendricks Brown, who provided input on the development of this article, and the members of the Collaborative Behavioral Health Program research team at Northwestern: Lisa J. Rosenthal, Jeffrey Rado, Grace Garcia, Jacob Atlas, Michael Malcolm, Emily Fu, Inger Burnett-Zeigler, C. Hendricks Brown, and John Csernansky. We also wish to thank the Woman’s Board of Northwestern Memorial Hospital, which generously provided a grant to support and evaluate the implementation and effectiveness of this model of care as it was introduced to the Northwestern Medicine system, and our clinical, operations, and quality partners in Northwestern Medicine’s Central Region.

This study was supported by a grant from the Woman’s Board of Northwestern Memorial Hospital and grant P30DA027828 from the National Institute on Drug Abuse, awarded to C. Hendricks Brown. The opinions expressed herein are the views of the authors and do not necessarily reflect the official policy or position of the Woman’s Board, Northwestern Medicine, the National Institute on Drug Abuse, or any other part of the US Department of Health and Human Services.

Competing interests

None declared.

Declarations

Ethics approval and consent to participate

Not applicable. This study did not involve human subjects.

Availability of data and material

Not applicable.

References

  • Aarons GA (2004). Mental health provider attitudes toward adoption of evidence-based practice: The Evidence-Based Practice Attitude Scale (EBPAS). Mental Health Services Research, 6(2), 61–74.
  • Aarons GA, Ehrhart MG, & Farahnak LR (2014). The Implementation Leadership Scale (ILS): Development of a brief measure of unit level implementation leadership. Implementation Science, 9. doi:10.1186/1748-5908-9-45
  • Archer J, Bower P, Gilbody S, Lovell K, Richards D, Gask L, … Coventry P (2012). Collaborative care for depression and anxiety problems. Cochrane Database of Systematic Reviews, (10). doi:10.1002/14651858.CD006525.pub2
  • Armenakis AA, Bernerth JB, Pitts JP, & Walker HJ (2007). Organizational Change Recipients’ Beliefs Scale: Development of an assessment instrument. The Journal of Applied Behavioral Science, 42, 481–505. doi:10.1177/0021886307303654
  • Bauer MS, Damschroder L, Hagedorn H, Smith J, & Kilbourne AM (2015). An introduction to implementation science for the non-specialist. BMC Psychology, 3(1), 32. doi:10.1186/s40359-015-0089-9
  • Bertrand JT, Holtgrave DR, & Gregowski A (2009). Evaluating HIV/AIDS programs in the US and developing countries. In Mayer KH & Pizer HF (Eds.), HIV Prevention (pp. 571–590). San Diego: Academic Press.
  • Brown CH, Curran G, Palinkas LA, Aarons GA, Wells KB, Jones L, … Cruden G (2017). An overview of research and evaluation designs for dissemination and implementation. Annual Review of Public Health, 38(1). doi:10.1146/annurev-publhealth-031816-044215
  • Brown CH, Mohr DC, Gallo CG, Mader C, Palinkas L, Wingood G, … Poduska J (2013). A computational future for preventing HIV in minority communities: How advanced technology can improve implementation of effective programs. Journal of Acquired Immune Deficiency Syndromes, 63. doi:10.1097/QAI.0b013e31829372bd
  • Brown CH, PoVey C, Hjorth A, Gallo CG, Wilensky U, & Villamar J (2015). Computational and technical approaches to improve the implementation of prevention programs. Implementation Science, 10(Suppl 1), A28. doi:10.1186/1748-5908-10-S1-A28
  • Cheung K, & Duan N (2014). Design of implementation studies for quality improvement programs: An effectiveness-cost-effectiveness framework. American Journal of Public Health, 104(1), e23–e30. doi:10.2105/ajph.2013.301579
  • Colditz GA, & Emmons KM (2017). The promise and challenges of dissemination and implementation research. In Brownson RC, Colditz GA, & Proctor EK (Eds.), Dissemination and implementation research in health: Translating science to practice. New York, NY: Oxford University Press.
  • Curran GM, Bauer M, Mittman B, Pyne JM, & Stetler C (2012). Effectiveness-implementation hybrid designs: Combining elements of clinical effectiveness and implementation research to enhance public health impact. Medical Care, 50(3), 217–226. doi:10.1097/MLR.0b013e3182408812
  • Department of Health and Human Services. (2019). PAR-19-274: Dissemination and Implementation Research in Health (R01 Clinical Trial Optional). Retrieved from https://grants.nih.gov/grants/guide/pa-files/PAR-19-274.html
  • Emmons KM, Weiner B, Fernandez ME, & Tu S (2012). Systems antecedents for dissemination and implementation: A review and analysis of measures. Health Education & Behavior, 39. doi:10.1177/1090198111409748
  • Gaglio B, & Glasgow RE (2017). Evaluation approaches for dissemination and implementation research. In Brownson R, Colditz G, & Proctor E (Eds.), Dissemination and implementation research in health: Translating science into practice (2nd ed., pp. 317–334). New York: Oxford University Press.
  • Gaglio B, Shoup JA, & Glasgow RE (2013). The RE-AIM framework: A systematic review of use over time. American Journal of Public Health, 103(6), e38–e46. doi:10.2105/ajph.2013.301299
  • Glasgow RE, & Emmons KM (2007). How can we increase translation of research into practice? Types of evidence needed. Annual Review of Public Health, 28, 413–433.
  • Glasgow RE, Harden SM, Gaglio B, Rabin B, Smith ML, Porter GC, … Estabrooks PA (2019). RE-AIM planning and evaluation framework: Adapting to new science and practice with a 20-year review. Frontiers in Public Health, 7(64). doi:10.3389/fpubh.2019.00064
  • Glasgow RE, Klesges LM, Dzewaltowski DA, Estabrooks PA, & Vogt TM (2006). Evaluating the impact of health promotion programs: Using the RE-AIM framework to form summary measures for decision making involving complex issues. Health Education Research, 21(5), 688–694. doi:10.1093/her/cyl081
  • Glasgow RE, & Riley WT (2013). Pragmatic measures: What they are and why we need them. American Journal of Preventive Medicine, 45. doi:10.1016/j.amepre.2013.03.010
  • Glasgow RE, Vogt TM, & Boles SM (1999). Evaluating the public health impact of health promotion interventions: The RE-AIM framework. American Journal of Public Health, 89(9), 1322–1327. doi:10.2105/AJPH.89.9.1322
  • Grimshaw J, Campbell M, Eccles M, & Steen N (2000). Experimental and quasi-experimental designs for evaluating guideline implementation strategies. Family Practice, 17(Suppl 1), S11–S16. doi:10.1093/fampra/17.suppl_1.s11
  • Imel ZE, Pace BT, Soma CS, Tanana M, Hirsch T, Gibson J, … Atkins DC (2019). Design feasibility of an automated, machine-learning based feedback system for motivational interviewing. Psychotherapy, 56(2), 318–328. doi:10.1037/pst0000221
  • Institute of Medicine Committee on Crossing the Quality Chasm. (2006). Adaptation to mental health and addictive disorders: Improving the quality of health care for mental and substance-use conditions. Washington, DC.
  • Jacob V, Chattopadhyay SK, Sipe TA, Thota AB, Byard GJ, & Chapman DP (2012). Economics of collaborative care for management of depressive disorders: A community guide systematic review. American Journal of Preventive Medicine, 42(5), 539–549. doi:10.1016/j.amepre.2012.01.011
  • Jordan N, Graham AK, Berkel C, & Smith JD (2019). Budget impact analysis of preparing to implement the Family Check-Up 4 Health in primary care to reduce pediatric obesity. Prevention Science, 20(5), 655–664. doi:10.1007/s11121-018-0970-x
  • Kroenke K, Spitzer R, & Williams JW (2001). The PHQ-9. Journal of General Internal Medicine, 16(9), 606–613. doi:10.1046/j.1525-1497.2001.016009606.x
  • Landsverk J, Brown CH, Smith JD, Chamberlain P, Palinkas LA, Ogihara M, … Horwitz SM (2017). Design and analysis in dissemination and implementation research. In Brownson RC, Colditz GA, & Proctor EK (Eds.), Dissemination and implementation research in health: Translating research to practice (2nd ed., pp. 201–227). New York: Oxford University Press.
  • Lewis CC, Fischer S, Weiner BJ, Stanick C, Kim M, & Martinez RG (2015). Outcomes for implementation science: An enhanced systematic review of instruments using evidence-based rating criteria. Implementation Science, 10(1), 155. doi:10.1186/s13012-015-0342-x
  • Lewis CC, Proctor EK, & Brownson RC (2017). Measurement issues in dissemination and implementation research. In Brownson RC, Colditz GA, & Proctor EK (Eds.), Dissemination and implementation research in health: Translating research to practice (2nd ed., pp. 229–244). New York: Oxford University Press.
  • Lewis CC, Stanick CF, Martinez RG, Weiner BJ, Kim M, Barwick M, & Comtois KA (2015). The Society for Implementation Research Collaboration Instrument Review Project: A methodology to promote rigorous evaluation. Implementation Science, 10(1), 2. doi:10.1186/s13012-014-0193-x
  • Luke DA, Calhoun A, Robichaux CB, Elliott MB, & Moreland-Russell S (2014). The Program Sustainability Assessment Tool: A new instrument for public health programs. Preventing Chronic Disease, 11, E12. doi:10.5888/pcd11.130184
  • Luke DA, Malone S, Prewitt K, Hackett R, & Lin J (2018). The Clinical Sustainability Assessment Tool (CSAT): Assessing sustainability in clinical medicine settings. Paper presented at the Conference on the Science of Dissemination and Implementation in Health, Washington, DC.
  • Martinez RG, Lewis CC, & Weiner BJ (2014). Instrumentation issues in implementation science. Implementation Science, 9. doi:10.1186/s13012-014-0118-8
  • Powell BJ, Stanick CF, Halko HM, Dorsey CN, Weiner BJ, Barwick MA, … Lewis CC (2017). Toward criteria for pragmatic measurement in implementation research and practice: A stakeholder-driven approach using concept mapping. Implementation Science, 12(1), 118. doi:10.1186/s13012-017-0649-x
  • Powell BJ, Waltz TJ, Chinman MJ, Damschroder LJ, Smith JL, Matthieu MM, … Kirchner JE (2015). A refined compilation of implementation strategies: Results from the Expert Recommendations for Implementing Change (ERIC) project. Implementation Science, 10. doi:10.1186/s13012-015-0209-1
  • Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, … Hensley M (2011). Outcomes for implementation research: Conceptual distinctions, measurement challenges, and research agenda. Administration and Policy in Mental Health and Mental Health Services Research, 38. doi:10.1007/s10488-010-0319-7
  • Rabin BA, Lewis CC, Norton WE, Neta G, Chambers D, Tobin JN, … Glasgow RE (2016). Measurement resources for dissemination and implementation research in health. Implementation Science, 11(1), 42. doi:10.1186/s13012-016-0401-y
  • Raghavan R (2017). The role of economic evaluation in dissemination and implementation research. In Brownson RC, Colditz GA, & Proctor EK (Eds.), Dissemination and implementation research in health: Translating science to practice (2nd ed.). New York: Oxford University Press.
  • Ritzwoller DP, Sukhanova A, Gaglio B, & Glasgow RE (2009). Costing behavioral interventions: A practical guide to enhance translation. Annals of Behavioral Medicine, 37(2), 218–227.
  • Saldana L, Schaper H, Campbell M, & Chapman J (2015). Standardized measurement of implementation: The Universal SIC. Implementation Science, 10(1), A73. doi:10.1186/1748-5908-10-s1-a73
  • Schoenwald S, Garland A, Chapman J, Frazier S, Sheidow A, & Southam-Gerow M (2011). Toward the effective and efficient measurement of implementation fidelity. Administration and Policy in Mental Health and Mental Health Services Research, 38. doi:10.1007/s10488-010-0321-0
  • Smith JD (2018). An implementation research logic model: A step toward improving scientific rigor, transparency, reproducibility, and specification. Implementation Science, 14(Suppl 1), S39.
  • Smith JD, Berkel C, Jordan N, Atkins DC, Narayanan SS, Gallo C, … Bruening MM (2018). An individually tailored family-centered intervention for pediatric obesity in primary care: Study protocol of a randomized type II hybrid implementation-effectiveness trial (Raising Healthy Children study). Implementation Science, 13(11), 1–15. doi:10.1186/s13012-017-0697-2
  • Smith JD, Li DH, Hirschhorn LR, Gallo C, McNulty M, Phillips GI, … Benbow ND (2019). Landscape of HIV implementation research funded by the National Institutes of Health: A mapping review of project abstracts (submitted for publication).
  • Smith JD, Rafferty MR, Heinemann AW, Meachum MK, Villamar JA, Lieber RL, & Brown CH (under review). Evaluation of the factor structure of implementation research measures adapted for a novel context and multiple professional roles.
  • Stetler CB, Legro MW, Wallace CM, Bowman C, Guihan M, Hagedorn H, … Smith JL (2006). The role of formative evaluation in implementation research and the QUERI experience. Journal of General Internal Medicine, 21(2), S1. doi:10.1007/s11606-006-0267-9
  • Tabak RG, Khoong EC, Chambers DA, & Brownson RC (2012). Bridging research and practice: Models for dissemination and implementation research. American Journal of Preventive Medicine, 43(3), 337–350.
  • The Health Foundation. (2015). Evaluation: What to consider. Commonly asked questions about how to approach evaluation of quality improvement in health care. London, England. Retrieved from https://www.health.org.uk/sites/default/files/EvaluationWhatToConsider.pdf
  • Unützer J, Katon W, Callahan CM, Williams JW, Hunkeler E, Harpole L, … for the IMPACT Investigators (2002). Collaborative care management of late-life depression in the primary care setting: A randomized controlled trial. JAMA, 288(22), 2836–2845. doi:10.1001/jama.288.22.2836
  • Upton D, & Upton P (2006). Development of an evidence-based practice questionnaire for nurses. Journal of Advanced Nursing, 53(4), 454–458.
  • Wang D, Ogihara M, Gallo C, Villamar JA, Smith JD, Vermeer W, … Brown CH (2016). Automatic classification of communication logs into implementation stages via text analysis. Implementation Science, 11(1), 1–14. doi:10.1186/s13012-016-0483-6
  • Wyman PA, Henry D, Knoblauch S, & Brown CH (2015). Designs for testing group-based interventions with limited numbers of social units: The dynamic wait-listed and regression point displacement designs. Prevention Science, 16(7), 956–966. doi:10.1007/s11121-014-0535-6
  • Xiao B, Imel ZE, Georgiou PG, Atkins DC, & Narayanan SS (2015). “Rate My Therapist”: Automated detection of empathy in drug and alcohol counseling via speech and language processing. PLoS ONE, 10(12), e0143055. doi:10.1371/journal.pone.0143055

IMAGES

  1. Journals in the Field of Evaluation

    evaluation project journal articles

  2. 14+ Project Evaluation Report Templates

    evaluation project journal articles

  3. 14+ Project Evaluation Report Templates

    evaluation project journal articles

  4. Journal Evaluation Assignment

    evaluation project journal articles

  5. FREE 7+ Sample Project Evaluation Templates in PDF

    evaluation project journal articles

  6. Evaluation and Educational Research

    evaluation project journal articles

VIDEO

  1. Front page design for school project

  2. Lec 2 Creating Journals

  3. New Project Journal/folio

  4. front page design for school project & journal

  5. Aesthetic Front Page Ideas for School Project & Journal 🌼❤️ #fairycrafts #aesthetic #trending

  6. Personality Evaluation #shorts #upsc #ias #ips

COMMENTS

  1. Program Evaluation for Health Professionals: What It Is, What It Isn't

    This article provides an overview of program evaluation and considers what it is (and is not). We detail a clear, practical framework for health professionals to use when planning and completing a program evaluation and illustrate this with examples from our work. What is Program Evaluation?

  2. Research Project Evaluation—Learnings from the PATHWAYS Project

    Over the last few decades, a strong discussion on the role of the evaluation process in research has developed, especially in interdisciplinary or multidimensional research [1, 2, 3, 4, 5]. Despite existing concepts and definitions, the importance of the role of evaluation is often underestimated.

  3. Implementing the Evaluation Plan and Analysis: Who, What, When, and How

    Dorene F. Balmer, PhD; Jennifer A. Rama, MD, MEd; and Deborah Simpson, PhD. Published online 13 February 2021 (doi: 10.4300/JGME-D-20-01523.1).

  4. Understanding project evaluation

    Published 18 December 2019 (issue date 29 May 2020). The purpose of this paper is to understand the underlying logics applied by different project evaluation approaches and to propose an alternative research agenda.

  5. Evaluation of Information Systems Project Success

    Evaluating the success of projects should be a key process in project management. However, there are only a few studies that address the evaluation process in practice. In order to help fill this gap, this paper presents the results of an exploratory survey with experienced information systems project managers.

  6. Program Evaluation: Getting Started and Standards

    Today, the purpose of program evaluation typically falls into one of two orientations in using data: (1) determining the overall value or worth of an education program (summative judgements of a program) or (2) planning program improvement (formative improvements to a program, project, or activity).

  7. Monitoring and evaluation practices and project outcome

    The immense contribution of a successful project to the development and growth of many countries across the world cannot be emphasized enough (Kahn, 2019). Laursen et al. (2018) indicated that projects are essential for value creation and economic development; it is through projects that processes and products are developed for the use of people and society.

  8. Project monitoring and evaluation: a method for enhancing the

    The LFA (logical framework approach) was first developed by Practical Concepts Incorporated in 1969 for the United States Agency for International Development (USAID) to assist with project design and appraisal [11], [14], [16]. The origins of the concept can be traced back through "management by objectives", popularised by Peter Drucker in the 1960s [15], [17], to ancient Greece, where the role of the "Strategoi" was ...

  9. Evaluation and Program Planning

    Articles are of two types: 1) reports on specific evaluation or planning efforts, and 2) discussions of issues relevant to the conduct of evaluation and planning. Reports on individual evaluations should include presentation of the evaluation setting, design, analysis and results.

  10. Understanding project evaluation

    Most project evaluation research deals with project success and assessments after project completion (Haass & Guzman, 2020), often using the classical iron triangle: assessing time, cost,...

  11. Evaluation of project success: a structured literature review

    Published 5 September 2017. Barnes' Iron Triangle was one of the first attempts to evaluate project success based on time, cost and performance, which were portrayed as interdependent dimensions (a minimal worked example of these three dimensions appears after this list).

  12. More efficient project execution and evaluation with logical framework and project cycle management: evidence from international development projects

    ... this paper has studied the adoption and evaluation of project ...

  13. Project Monitoring and Evaluation: An Enhancing Method for Health

    In this article, we focus on the results of the HRS evaluation from 2002 to 2010, as well as its successes and challenges. Results: Overall, the main results are the experiences of designing and implementing such a process after pre-project preparation, with all parts carried out under the overall supervision of the aims of the HRS evaluation.

  14. A review of program and project evaluation models

    American Journal of Evaluation, Vol. 27 No. 3, pp. 296-319. ... Project evaluation conditions involve mandate and power (Loo, 1985), as well as political stances, ideologies, assumptions, and ...

  15. Launch of a new special supplement in BMC Infectious Diseases on Point-of-care Testing for Sexually Transmitted Infections

    This webinar will launch a special supplement of seven journal articles in BMC Infectious Diseases entitled Point-of-care Testing for Sexually Transmitted Infections: results of an independent multi-country clinic-based and clinic-utility evaluation of STI diagnostics (PRoSPeRo project). The results of this work have already led to changes in ...

  16. Developing Your Evaluation Plans: A Critical Component of Public Health

    A program's infrastructure is often cited as a critical component of public health success [1, 2]. The Component Model of Infrastructure (CMI) identifies evaluation as a critical component of program infrastructure under the core component of engaged data [3]. A written evaluation plan that is thoughtful, transparent, and collaboratively ...

  17. Measuring Success: Evaluation Article Types for the

    Evaluation is a process used by researchers, practitioners, and educators to assess the value of a given program, project, or policy. The primary purposes of evaluation in public health education and promotion are to: (1) determine the effectiveness of a given intervention and/or (2) assess and improve the quality ...

  18. Quantitative Approaches for the Evaluation of Implementation Research

    Brown et al. (2017) described three broad types of designs for implementation research. (1) Within-site designs involve evaluation of the effects of implementation strategies within a single service system unit (e.g., clinic, hospital). Common within-site designs include post, pre-post, and interrupted time series (a minimal sketch of an interrupted time series analysis appears after this list).
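
The "Evaluation of project success: a structured literature review" entry above describes Barnes' Iron Triangle, which judges project success on time, cost and performance. As a purely illustrative sketch, the Python snippet below computes simple planned-versus-actual variances for those three dimensions; the field names, the example figures and the pass-through performance score are hypothetical assumptions and are not drawn from the cited review.

    # Hypothetical iron-triangle style check (time, cost, performance).
    # All fields and example figures below are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class ProjectOutcome:
        planned_months: float
        actual_months: float
        planned_cost: float
        actual_cost: float
        performance_score: float  # e.g., share of specified requirements delivered (0-1)

    def iron_triangle_report(p: ProjectOutcome) -> dict:
        """Return simple variance ratios for each iron-triangle dimension."""
        return {
            "schedule_overrun": p.actual_months / p.planned_months - 1.0,
            "cost_overrun": p.actual_cost / p.planned_cost - 1.0,
            "performance": p.performance_score,
        }

    if __name__ == "__main__":
        example = ProjectOutcome(planned_months=12, actual_months=15,
                                 planned_cost=200_000, actual_cost=230_000,
                                 performance_score=0.9)
        # Prints a 25% schedule overrun, a 15% cost overrun, and the performance score.
        print(iron_triangle_report(example))

In practice, evaluators would weigh these ratios against stakeholder-specific criteria rather than a single pass/fail rule; the sketch only illustrates how the three dimensions can be made explicit and comparable.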
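The "Quantitative Approaches for the Evaluation of Implementation Research" entry above lists interrupted time series among common within-site designs. The sketch below, offered only as an illustration, fits a segmented regression to simulated monthly data using numpy, pandas and statsmodels; the simulated outcome, the change point at month 24 and all variable names are hypothetical assumptions rather than anything taken from Brown et al. (2017).

    # Hypothetical interrupted time series analysis via segmented regression.
    # The data are simulated; the change point and effect sizes are assumptions.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n_months = 36
    start_month = 24  # month the implementation strategy is introduced (hypothetical)

    df = pd.DataFrame({"month": np.arange(n_months)})
    df["post"] = (df["month"] >= start_month).astype(int)                 # immediate level change
    df["months_since_start"] = np.maximum(df["month"] - start_month, 0)   # change in slope
    # Simulated outcome: baseline trend, a jump after implementation, plus noise.
    df["outcome"] = (50 + 0.3 * df["month"] + 8 * df["post"]
                     + 0.5 * df["months_since_start"] + rng.normal(0, 2, n_months))

    # Segmented regression: pre-existing trend, level change at the interruption,
    # and change in trend after the interruption.
    model = smf.ols("outcome ~ month + post + months_since_start", data=df).fit()
    print(model.params)

The `post` coefficient estimates the immediate shift in the outcome at the interruption and `months_since_start` estimates the change in trend afterwards, which is the usual way an interrupted time series design separates the effect of an implementation strategy from the pre-existing trajectory.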