- How to Critically Appraise a Research Paper
Research papers are a powerful means through which millions of researchers around the globe pass on knowledge about our world.
However, the quality of research can be highly variable. To avoid being misled, it is vital to perform critical appraisals of research studies to assess the validity, results and relevance of the published research. Critical appraisal skills are essential to be able to identify whether published research provides results that can be used as evidence to help improve your practice.
What is a critical appraisal?
Most of us know not to believe everything we read in the newspaper or on other media channels. Research literature and journals deserve the same scrutiny: before we trust a research paper, we want to know that it has been carefully and professionally checked to confirm what it is saying. This is where critical appraisal comes in.
Critical appraisal is the process of carefully and systematically examining research to judge its trustworthiness, value and relevance in a particular context. We have put together a more detailed page to explain what critical appraisal is to give you more information.
Why is a critical appraisal of research required?
Critical appraisal skills are important as they enable you to systematically and objectively assess the trustworthiness, relevance and results of published papers. When a research article is published, the identity of its authors should be no guarantee of its trustworthiness or relevance.
What are the benefits of performing critical appraisals for research papers?
Performing a critical appraisal helps to:
- Reduce information overload by eliminating irrelevant or weak studies
- Identify the most relevant papers
- Distinguish evidence from opinion, assumptions, misreporting, and belief
- Assess the validity of the study
- Check the usefulness and clinical applicability of the study
How to critically appraise a research paper
There are some key questions to consider when critically appraising a paper. These include:
- Is the study relevant to my field of practice?
- What research question is being asked?
- Was the study design appropriate for the research question?
CASP has several checklists to help with performing a critical appraisal which we believe are crucial because:
- They help the user to undertake a complex task involving many steps
- They support the user in being systematic by ensuring that all important factors or considerations are taken into account
- They increase consistency in decision-making by providing a framework
In addition to our free checklists, CASP has developed a number of valuable online e-learning modules designed to increase your knowledge and confidence in conducting a critical appraisal.
Introduction To Critical Appraisal & CASP
This Module covers the following:
- Challenges using evidence to change practice
- 5 steps of evidence-based practice
- Developing critical appraisal skills
- Integrating and acting on the evidence
- The Critical Appraisal Skills Programme (CASP)
- Dissecting the literature: the importance of critical appraisal
08 Dec 2017
This post was updated in 2023.
Critical appraisal is the process of carefully and systematically examining research to judge its trustworthiness, and its value and relevance in a particular context.
Amanda Burls, What is Critical Appraisal?
Why is critical appraisal needed?
Literature searches in databases such as Medline or EMBASE often return an overwhelming volume of results of highly variable quality. Anyone who browses the medical literature for CPD, or in response to a clinical query, faces the same problem. Critical appraisal reduces this burden by helping you focus on articles that are relevant to your question and that can reliably support or refute its claims with high-quality evidence, and by helping you identify high-level research relevant to your practice.
Critical appraisal allows us to:
- reduce information overload by eliminating irrelevant or weak studies
- identify the most relevant papers
- distinguish evidence from opinion, assumptions, misreporting, and belief
- assess the validity of the study
- assess the usefulness and clinical applicability of the study
- recognise any potential for bias.
Critical appraisal helps to separate what is significant from what is not. One way we use critical appraisal in the Library is to prioritise the most clinically relevant content for our Current Awareness Updates.
How to critically appraise a paper
There are some general rules to help you, including a range of checklists highlighted at the end of this blog. Some key questions to consider when critically appraising a paper:
- Is the study question relevant to my field?
- Does the study add anything new to the evidence in my field?
- What type of research question is being asked? A well-developed research question usually identifies three components: the group or population of patients, the studied parameter (e.g. a therapy or clinical intervention) and outcomes of interest.
- Was the study design appropriate for the research question? You can learn more about different study types and the hierarchy of evidence here.
- Did the methodology address important potential sources of bias? Bias can be attributed to chance (e.g. random error) or to the study methods (systematic bias).
- Was the study performed according to the original protocol? Deviations from the planned protocol can affect the validity or relevance of a study, e.g. a decrease in the studied population over the course of a randomised controlled trial.
- Does the study test a stated hypothesis? Is there a clear, testable statement of what the investigators expect the study to find, which can be confirmed or refuted?
- Were the statistical analyses performed correctly? The approach to dealing with missing data, and the statistical techniques that have been applied should be specified. Original data should be presented clearly so that readers can check the statistical accuracy of the paper.
- Do the data justify the conclusions? Watch out for definite conclusions based on statistically insignificant results, generalised findings from a small sample size, and statistically significant associations being misinterpreted to imply a cause and effect.
- Are there any conflicts of interest? Who has funded the study and can we trust their objectivity? Do the authors have any potential conflicts of interest, and have these been declared?
And an important consideration for surgeons:
- Will the results help me manage my patients?
At the end of the appraisal process you should have a better appreciation of how strong the evidence is, and ultimately whether or not you should apply it to your patients.
- How to Read a Paper by Trisha Greenhalgh
- The Doctor’s Guide to Critical Appraisal by Narinder Kaur Gosall
- CASP checklists
- CEBM Critical Appraisal Tools
- Critical Appraisal: a checklist
- Critical Appraisal of a Journal Article (PDF)
- Introduction to...Critical appraisal of literature
- Reporting guidelines for the main study types
Kirsty Morrison, Information Specialist
How to critically appraise an article
- 1 Surgical Outcomes Research Centre, Royal Prince Alfred Hospital, Missenden Road, Sydney, NSW 2050, Australia. [email protected]
- PMID: 19153565
- DOI: 10.1038/ncpgasthep1331
Critical appraisal is a systematic process used to identify the strengths and weaknesses of a research article in order to assess the usefulness and validity of research findings. The most important components of a critical appraisal are an evaluation of the appropriateness of the study design for the research question and a careful assessment of the key methodological features of this design. Other factors that also should be considered include the suitability of the statistical methods used and their subsequent interpretation, potential conflicts of interest and the relevance of the research to one's own practice. This Review presents a 10-step guide to critical appraisal that aims to assist clinicians to identify the most relevant high-quality studies available to guide their clinical practice.
J Clin Diagn Res, v.11(5); 2017 May
Critical Appraisal of Clinical Research
1 Professor, Department of Orthodontics, King Saud bin Abdul Aziz University for Health Sciences-College of Dentistry, Riyadh, Kingdom of Saudi Arabia.
2 Associate Professor, Department of Oral and Maxillofacial Surgery, Al Farabi Dental College, Riyadh, KSA.
Evidence-based practice is the integration of individual clinical expertise with the best available external clinical evidence from systematic research, and with patients’ values and expectations, in the decision-making process for patient care. Being able to identify and appraise the best available evidence, and to integrate it with your own clinical experience and your patients’ values, is a fundamental skill. The aim of this article is to provide a robust and simple process for assessing the credibility of articles and their value to your clinical practice.
Decisions about patient care and value must be made carefully, through an essential process of integrating the best existing evidence, clinical experience and patient preference. Critical appraisal is the process of carefully and systematically examining research to assess its reliability, value and relevance, in order to guide professionals in their vital clinical decision making [ 1 ].
Critical appraisal is essential to:
- Combat information overload;
- Identify papers that are clinically relevant;
- Support Continuing Professional Development (CPD).
Carrying out Critical Appraisal:
Assessing the research methods used in the study is a prime step in its critical appraisal. This is done using checklists which are specific to the study design.
Standard Common Questions:
- What is the research question?
- What is the study type (design)?
- Selection issues.
- What are the outcome factors and how are they measured?
- What are the study factors and how are they measured?
- What important potential confounders are considered?
- What is the statistical method used in the study?
- Statistical results.
- What conclusions did the authors reach about the research question?
- Are ethical issues considered?
The Critical Appraisal starts by double checking the following main sections:
I. Overview of the paper:
- The publishing journal and the year
- The article title: Does it state key trial objectives?
- The author(s) and their institution(s)
The presence of a peer review process in journal acceptance protocols also adds robustness to the assessment criteria for research papers and hence would indicate a reduced likelihood of publication of poor quality research. Other areas to consider may include authors’ declarations of interest and potential market bias. Attention should be paid to any declared funding or the issue of a research grant, in order to check for a conflict of interest [ 2 ].
II. ABSTRACT: Reading the abstract is a quick way of getting to know the article and its purpose, major procedures and methods, main findings, and conclusions.
- Aim of the study: It should be well and clearly written.
- Materials and Methods: The study design, the type of groups, the randomization process, the sample size, gender, age, the procedure applied to each group and the measuring tool(s) should be clearly stated.
- Results: The measured variables with their statistical analysis and significance.
- Conclusion: It must clearly answer the question of interest.
III. Introduction/Background section:
An excellent introduction will thoroughly reference earlier work related to the area under discussion and express the importance and limitations of what is already known [ 2 ].
- Why was this study considered necessary, and what is its purpose? Was the purpose identified before the study began, or is it a chance finding uncovered during ‘data searching’?
- What has already been achieved, and how does this study differ?
- Does the scientific approach outline the advantages, as well as the possible drawbacks, of the intervention or observations?
IV. Methods and Materials section: Full details of how the study was actually carried out should be given, with precise information on the study design, the population, the sample size and the interventions presented. All measurement approaches should be clearly stated [ 3 ].
V. Results section: This section should clearly show what actually happened to the subjects. The results may contain raw data and should explain the statistical analysis, presented in related tables, diagrams and graphs.
VI. Discussion section: This section should compare what is already known on the topic of interest with the clinical relevance of what has been newly established. Any related limitations and the need for further studies should also be discussed.
Does it summarize the main findings of the study and relate them to any deficiencies in the study design or problems in the conduct of the study?
- Does it address any source of potential bias?
- Are interpretations consistent with the results?
- How are null findings interpreted?
- Does it mention how the findings of this study relate to previous work in the area?
- Can they be generalized (external validity)?
- Does it mention their clinical implications/applicability?
- What are the results/outcomes/findings applicable to and will they affect a clinical practice?
- Does the conclusion answer the study question?
- Is the conclusion convincing?
- Does the paper indicate ethics approval?
- Can you identify potential ethical issues?
- Do the results apply to the population in which you are interested?
- Will you use the results of the study?
Once you have answered the preliminary and key questions and identified the research method used, you can incorporate specific questions related to each method into your appraisal process or checklist.
1-What is the research question?
For a study to be valuable, it should address a significant problem within healthcare and provide new or meaningful results. A useful structure for assessing the problem addressed in an article is the Patient/Problem, Intervention, Comparison, Outcome (PICO) method [ 3 ].
P = Patient/Problem/Population: Does the research have a focused question, and what is the chief complaint? e.g., disease status, previous ailments, current medications.
I = Intervention: An appropriately and clearly stated management strategy, e.g., a new diagnostic test, treatment or adjunctive therapy.
C = Comparison: A suitable control or alternative, e.g., specific and limited to one alternative choice.
O = Outcomes: The desired results or patient-related consequences, e.g., eliminating symptoms, improving function or esthetics.
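As an illustration, the four PICO components can be captured in a small data structure and assembled into a focused question. This is only a sketch; the clinical details below are invented, not taken from any study:

```python
from dataclasses import dataclass

@dataclass
class PICO:
    """Holds the four components of a focused clinical question."""
    patient: str
    intervention: str
    comparison: str
    outcome: str

    def question(self) -> str:
        # Assemble the components into a single answerable question
        return (f"In {self.patient}, does {self.intervention}, "
                f"compared with {self.comparison}, affect {self.outcome}?")

# Hypothetical example question
q = PICO(
    patient="adults with chronic periodontitis",
    intervention="adjunctive systemic antibiotics",
    comparison="scaling and root planing alone",
    outcome="probing pocket depth at 6 months",
)
print(q.question())
```

Writing the question down in this form makes it easier to check whether a paper's population, intervention and outcomes actually match your query.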
The clinical question determines which study designs are appropriate. There are five broad categories of clinical questions, as shown in [ Table/Fig-1 ].
Categories of clinical questions and the related study designs.
2- What is the study type (design)?
The study design of the research is fundamental to the usefulness of the study.
In a clinical paper the methodology employed to generate the results is fully explained. In general, all questions about the related clinical query, the study design, the subjects and the correlated measures to reduce bias and confounding should be adequately and thoroughly explored and answered.
Researchers identify the target population they are interested in. A sample population is therefore taken and results from this sample are then generalized to the target population.
The sample should be representative of the target population from which it came. Knowing the baseline characteristics of the sample population is important because this allows researchers to see how closely the subjects match their own patients [ 4 ].
Sample size calculation (Power calculation): A trial should be large enough to have a high chance of detecting a worthwhile effect if it exists. Statisticians can work out before the trial begins how large the sample size should be in order to have a good chance of detecting a true difference between the intervention and control groups [ 5 ].
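The power calculation described above can be sketched with the standard normal-approximation formula for comparing two means. This is a simplified illustration only, using Python's standard library; the function name is ours, and a real trial should rely on a statistician's calculation:

```python
from math import ceil
from statistics import NormalDist

def sample_size_per_group(delta, sigma, alpha=0.05, power=0.80):
    """Approximate sample size per group for detecting a difference
    `delta` between two means with common standard deviation `sigma`
    (two-sided test, equal group sizes, normal approximation)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = 0.05
    z_beta = z.inv_cdf(power)            # ~0.84 for 80% power
    n = 2 * ((z_alpha + z_beta) * sigma / delta) ** 2
    return ceil(n)

# To detect a 5-unit difference when the standard deviation is 10:
print(sample_size_per_group(delta=5, sigma=10))   # 63 per group
# Halving the standard deviation (or doubling the effect) shrinks n sharply:
print(sample_size_per_group(delta=10, sigma=10))  # 16 per group
```

Note how the required sample size grows with the square of `sigma/delta`: small expected effects demand much larger trials.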
- Is the sample defined? Human or animal (which type)? What population does it represent?
- Does it mention eligibility criteria, with reasons?
- Does it mention where and how the sample was recruited, selected and assessed?
- Does it mention where the study was carried out?
- Is the sample size justified and correctly calculated? Is it adequate to detect statistically and clinically significant results?
- Does it mention a suitable study design/type?
- Is the study type appropriate to the research question?
- Is the study adequately controlled? Does it mention type of randomization process? Does it mention the presence of control group or explain lack of it?
- Are the samples similar at baseline? Is sample attrition mentioned?
- All studies should report the number of participants/specimens at the start of the study, together with how many completed it and the reasons for any incomplete follow-up.
- Does it mention who was blinded? Are the assessors and participants blind to the interventions received?
- Does it mention how the data were analysed?
- Are any measurements taken likely to be valid?
Researchers should use measuring techniques and instruments that have been shown to be valid and reliable.
Validity refers to the extent to which a test measures what it is supposed to measure (i.e., the extent to which the value obtained represents the object of interest).
- Soundness and effectiveness of the measuring instrument;
- What does the test measure?
- Does it measure what it is supposed to measure?
- How well, and how accurately, does it measure?
Reliability: In research, reliability means “repeatability” or “consistency”.
Reliability refers to how consistent a test is on repeated measurements. This is especially important if assessments are made on different occasions and/or by different examiners. Studies should state the method used to assess the reliability of any measurements taken and what the intra-examiner reliability was [ 6 ].
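For categorical ratings, inter- or intra-examiner reliability is often summarised with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch (the example scores are invented):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two raters on the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Proportion of items on which the raters agree outright
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two examiners scoring the same ten radiographs (1 = lesion, 0 = none):
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 0, 0, 1, 0, 1, 1, 0, 1, 1]
print(round(cohens_kappa(a, b), 3))  # 0.783
```

Here the raters agree on 9 of 10 items (raw agreement 0.9), but because both raters score "1" frequently, chance alone would produce substantial agreement, so kappa is lower than the raw figure.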
The following questions should be raised:
- How were subjects chosen or recruited? If not at random, are they representative of the population?
- What type of blinding (masking) was used: single, double or triple?
- Is there a control group? How was it chosen?
- How are patients followed up? Who are the dropouts? Why and how many are there?
- Are the independent (predictor) and dependent (outcome) variables in the study clearly identified, defined, and measured?
- Is there a statement about sample size issues or statistical power (especially important in negative studies)?
- If a multicenter study, what quality assurance measures were employed to obtain consistency across sites?
- Are there selection biases?
- In a case-control study, if exercise habits are to be compared:
  - Are the controls appropriate?
  - Were records of cases and controls reviewed blindly?
  - How were possible selection biases controlled (prevalence bias, admission rate bias, volunteer bias, recall bias, lead time bias, detection bias, etc.)?
- Cross-sectional studies:
  - Was the sample selected in an appropriate manner (random, convenience, etc.)?
  - Were efforts made to ensure a good response rate or to minimize the occurrence of missing data?
  - Were reliability (reproducibility) and validity reported?
- In an intervention study, how were subjects recruited and assigned to groups?
- In a cohort study, how many reached final follow-up?
  - Are the subjects representative of the population to which the findings are applied?
  - Is there evidence of volunteer bias? Was there adequate follow-up time?
  - What was the drop-out rate?
Any shortcoming in the methodology can lead to results that do not reflect the truth. If clinical practice is changed on the basis of such results, patients could be harmed.
Researchers employ a variety of techniques to make the methodology more robust, such as matching, restriction, randomization, and blinding [ 7 ].
Bias is the term used to describe an error, at any stage of the study, that was not due to chance. Bias leads to results that deviate systematically from the truth. Because bias cannot be measured directly, researchers must rely on good research design to minimize it [ 8 ]. To minimize bias within a study, the sample should be representative of the target population. It is also imperative to consider the sample size and to identify whether the study is adequately powered, i.e., large enough to detect a true effect at the stated significance level (typically p<0.05) [ 9 ].
4-What are the outcome factors and how are they measured?
- Are all relevant outcomes assessed?
- Is measurement error an important source of bias?
5-What are the study factors and how are they measured?
- Are all the relevant study factors included in the study?
- Have the factors been measured using appropriate tools?
Data Analysis and Results:
- Were the tests appropriate for the data?
- Are confidence intervals or p-values given?
- How strong is the association between intervention and outcome?
- How precise is the estimate of the risk?
- Does it clearly mention the main finding(s) and does the data support them?
- Does it mention the clinical significance of the result?
- Are adverse events, or their absence, mentioned?
- Are all relevant outcomes assessed?
- Was the sample size adequate to detect a clinically/socially significant result?
- Are the results presented in a way to help in health policy decisions?
- Is there measurement error?
- Is measurement error an important source of bias?
A confounder has a triangular relationship with both the exposure and the outcome. However, it is not on the causal pathway. It makes it appear as if there is a direct relationship between the exposure and the outcome or it might even mask an association that would otherwise have been present [ 9 ].
6- What important potential confounders are considered?
- Are potential confounders examined and controlled for?
- Is confounding an important source of bias?
7- What is the statistical method in the study?
- Are the statistical methods described appropriate for comparing participants on primary and secondary outcomes?
- Are the statistical methods specified in sufficient detail (if you had access to the raw data, could you reproduce the analysis)?
- Were the tests appropriate for the data?
- Are confidence intervals or p-values given?
- Are results presented as absolute risk reduction as well as relative risk reduction?
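The distinction between absolute and relative risk reduction is simple arithmetic, and worth checking yourself when reading a trial. A minimal sketch with invented event rates:

```python
def risk_reductions(control_event_rate, experimental_event_rate):
    """Absolute risk reduction (ARR), relative risk reduction (RRR)
    and number needed to treat (NNT) from two event rates."""
    arr = control_event_rate - experimental_event_rate
    rrr = arr / control_event_rate
    nnt = 1 / arr
    return arr, rrr, nnt

# Hypothetical trial: events occur in 20% of controls vs 15% of treated patients
arr, rrr, nnt = risk_reductions(0.20, 0.15)
print(arr, rrr, nnt)  # ARR ~0.05, RRR ~0.25 (25%), NNT ~20
```

A "25% relative risk reduction" sounds impressive, but here it corresponds to an absolute reduction of only 5 percentage points: about 20 patients must be treated to prevent one event. Papers that quote only the relative figure can overstate a treatment's impact.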
Interpretation of p-value:
The p-value is the probability of obtaining a result at least as extreme as the one observed, assuming the null hypothesis is true. By convention, a p-value of less than 1 in 20 (p<0.05) is considered statistically significant.
- When the p-value is below the significance level, usually 0.05, we reject the null hypothesis and the result is considered statistically significant. Conversely, when the p-value is greater than 0.05, the result is not statistically significant and we fail to reject the null hypothesis.
Repeating the same trial many times would not yield exactly the same result each time, but on average the results would fall within a certain range. A 95% confidence interval means that, if the study were repeated many times, 95% of the intervals so constructed would contain the true effect size.
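That range can be computed directly. Here is a minimal sketch of a 95% confidence interval for a sample mean using a normal approximation (the pain-score data are invented for illustration; small samples really call for a t-distribution):

```python
from math import sqrt
from statistics import NormalDist, mean, stdev

def ci_95(sample):
    """Approximate 95% confidence interval for a sample mean
    (normal approximation)."""
    m = mean(sample)
    se = stdev(sample) / sqrt(len(sample))  # standard error of the mean
    z = NormalDist().inv_cdf(0.975)         # ~1.96 for a two-sided 95% CI
    return m - z * se, m + z * se

# Hypothetical post-operative pain scores (0-10 scale) for ten patients:
scores = [4, 5, 3, 6, 4, 5, 4, 3, 5, 4]
low, high = ci_95(scores)
print(f"mean {mean(scores):.1f}, 95% CI {low:.2f} to {high:.2f}")
```

The width of the interval reflects the precision of the estimate: larger samples shrink the standard error and narrow the interval.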
8- Statistical results:
- Do the statistical tests answer the research question?
- Were the statistical tests and comparisons planned in advance, or is there evidence of post hoc ‘data searching’?
Correct statistical analysis of the results is crucial to the reliability of the conclusions drawn from a research paper. Depending on the study design and the sample selection method employed, descriptive or inferential statistical analysis may be carried out on the results of the study.
It is important to identify if this is appropriate for the study [ 9 ].
- Was the sample size adequate to detect a clinically/socially significant result?
- Are the results presented in a way to help in health policy decisions?
Statistical significance, as shown by the p-value, is not the same as clinical significance. Statistical significance judges whether treatment effects are explicable as chance findings, whereas clinical significance assesses whether treatment effects are worthwhile in real life. Small improvements that are statistically significant might not result in any meaningful clinical improvement. The following questions should always be kept in mind:
- If the results are statistically significant, do they also have clinical significance?
- If the results are not statistically significant, was the sample size sufficiently large to detect a meaningful difference or effect?
9- What conclusions did the authors reach about the study question?
Conclusions should ensure that any recommendations are supported by the results attained within the scope of the study. The authors should also address the limitations of the study, their effects on the outcomes, and suggestions for future studies [ 10 ].
- Are the questions posed in the study adequately addressed?
- Are the conclusions justified by the data?
- Do the authors extrapolate beyond the data?
- Are shortcomings of the study addressed and constructive suggestions given for future research?
Do the citations follow one of the Council of Biological Editors’ (CBE) standard formats?
10- Are ethical issues considered?
If a study involves human subjects, human tissues, or animals, was approval from appropriate institutional or governmental entities obtained? [ 10 , 11 ].
Critical appraisal of RCTs: Factors to look for:
- Allocation (randomization, stratification, confounders).
- Follow up of participants (intention to treat).
- Data collection (bias).
- Sample size (power calculation).
- Presentation of results (clear, precise).
- Applicability to local population.
[ Table/Fig-2 ] summarizes the guidelines for Consolidated Standards of Reporting Trials CONSORT [ 12 ].
Summary of the CONSORT guidelines.
Critical appraisal of systematic reviews: a systematic review provides an overview of all primary studies on a topic and tries to obtain an overall picture of the results.
In a systematic review, all the primary studies identified are critically appraised and only the best ones are selected. A meta-analysis (i.e., a statistical analysis) of the results from selected studies may be included. Factors to look for:
- Literature search (did it include published and unpublished materials as well as non-English language studies? Was personal contact with experts sought?).
- Quality-control of studies included (type of study; scoring system used to rate studies; analysis performed by at least two experts).
- Homogeneity of studies.
[ Table/Fig-3 ] summarizes the guidelines for Preferred Reporting Items for Systematic reviews and Meta-Analyses PRISMA [ 13 ].
Summary of PRISMA guidelines.
Critical appraisal is a fundamental skill in modern practice for assessing the value of clinical research and indicating its relevance to the profession. It is a skill set developed throughout a professional career which, through integration with clinical experience and patient preference, permits the practice of evidence-based medicine and dentistry. By following a systematic approach, such evidence can be considered and applied to clinical practice.
7.3 Critically Appraising the Literature
Now that you know the parts of a paper, we will discuss how to critically appraise one. Critical appraisal refers to the process of carefully and methodically reviewing research to determine its credibility, usefulness, and applicability in a certain context. 6 It is an essential element of evidence-based practice. As stated earlier, you want to ensure that what you read in the literature is trustworthy before considering applying the findings in practice. The key things to consider are the study’s results, whether the results match the conclusion (validity), and whether the findings will help you in practice (applicability). A stepwise approach to reading and analysing the paper is a good way to highlight its important points. While there are numerous checklists for critical appraisal, we have provided a simple guide for critical appraisal of quantitative and qualitative studies. The guides were adapted from Epidemiology by Petra Buttner (2015) and from Trisha Greenhalgh’s How to Read a Paper: The Basics of Evidence-Based Medicine and Healthcare (2019) and “Papers that go beyond numbers: qualitative research” (1997) to aid your review of the papers. 5,7,8
A guide to reading scientific articles – Quantitative studies
What is the title of the study?
- Does the title clearly describe the study focus?
- Does it contain details about the population and the study design?
What was the purpose of the study (why was it performed)?
- Identify the research question
- Identify the exposure and outcome
What was the study design?
- Was the design appropriate for the study?
Describe the study population (sample).
- What was the sample size?
- How were participants recruited?
- Where did the research take place?
- Who was included, and who was excluded?
- Are there any potential sources of bias related to the choice of the sample?
What data collection methods were used?
- How were the exposure and outcome variables measured?
- How were data collected (instruments or equipment)? Were the tools appropriate?
- Is there evidence of random selection as opposed to systematic or self-selection?
- How was bias minimised or avoided?
For experimental studies
- How were subjects assigned to treatment or intervention: randomly or by some other method?
- What control groups were included (placebo, untreated controls, both or neither)?
- How were the treatments compared?
- Were there dropouts or loss to follow-up?
- Were the outcomes or effects measured objectively?
For observational studies
- Was the data collection process adequate (including questionnaire design and pre-testing)?
- What techniques were used to handle non-response and/or incomplete data?
- If a cohort study, was the follow-up rate sufficiently high?
- If a case-control study, are the controls appropriate and adequately matched?
How were the data analysed?
- Is the statistical analysis appropriate, and is it presented in sufficient detail?
What are the findings?
- What are the main findings of the study? Pay specific attention to the text and tables presenting the study’s main findings.
- Are the numbers consistent? Is the entire sample accounted for?
- Do the authors find a difference between the treatment and control groups?
- Are the results statistically significant? If there is a statistically significant difference, is it enough of a difference to be clinically significant?
- Did the authors find a difference between exposed and control groups or cases and controls?
- Is there a statistically significant difference between groups?
- Could the results be of public health significance, even though the difference is not statistically significant? (This may highlight the need for a larger study).
- Are the results likely to be affected by confounding? Why or why not?
- What (if any) variables are identified as potential confounders in the study?
- How is confounding dealt with in this study?
- Are there any potential confounders that the authors have not taken into account? What might the likely impact be on the results?
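To make the statistical-significance questions above concrete, here is a minimal sketch of the kind of calculation that sits behind the results section of a case-control study: an odds ratio with a 95% confidence interval. The 2×2 counts are entirely hypothetical, invented for illustration; they are not taken from any study discussed in this chapter.

```python
import math

# Hypothetical 2x2 case-control table (made-up numbers for illustration only):
#                 Cases   Controls
# Exposed           30        70
# Unexposed         50        50
a, b, c, d = 30, 70, 50, 50

# Odds ratio: odds of exposure among cases versus controls
odds_ratio = (a * d) / (b * c)

# 95% confidence interval via the standard error of the log odds ratio
se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
log_or = math.log(odds_ratio)
ci_low = math.exp(log_or - 1.96 * se_log_or)
ci_high = math.exp(log_or + 1.96 * se_log_or)

print(f"OR = {odds_ratio:.2f}, 95% CI ({ci_low:.2f}, {ci_high:.2f})")
# → OR = 0.43, 95% CI (0.24, 0.77)
```

Because the confidence interval excludes 1.0, the association is statistically significant at the 5% level; whether a reduction of this size matters for patients (clinical significance) and whether confounding could explain it are separate judgements the checklist asks you to make.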
Summing it up
Read the following article:
Chen X, Jiang X, Huang X, He H, Zheng J: Association between probiotic yogurt intake and gestational diabetes mellitus: a case-control study. Iran J Public Health. 2019, 48:1248-1256.
Let’s conduct a critical appraisal of this article.
A guide to reading scientific articles – Qualitative studies
What is the research question?
Was a qualitative approach appropriate?
- Identify the study design and if it was appropriate for the research question.
How were the setting and the subjects selected?
- What sampling strategy was used?
- Where was the study conducted?
Was the sampling strategy appropriate for the approach?
- Consider the qualitative approach used and decide if the sampling strategy or technique is appropriate
What was the researcher’s position, and has this been taken into account?
- Consider the researcher’s background, gender, knowledge, personal experience and relationship with participants
What were the data collection methods?
- How was data collected? What technique was used?
How were data analysed, and how were these checked?
- How did the authors analyse the data? Was this stated?
- Did two or more researchers conduct the analysis independently, and were the outcomes compared (double coding)?
- Did the researchers come to a consensus, and how were disagreements handled?
Are the results credible?
- Does the result answer the research question?
- Are themes presented with quotes and do they relate to the research question or aim?
Are the conclusions justified by the results?
- Have the findings been discussed in relation to existing theory and previous research?
- How well does the interpretation of the findings fit with what is already known?
Are the findings transferable to other settings?
- Can the findings be applied to other settings? Consider the sample.
Wallisch A, Little L, Pope E, Dunn W. Parent Perspectives of an Occupational Therapy Telehealth Intervention. Int J Telerehabil. 2019 Jun 12;11(1):15-22. doi: 10.5195/ijt.2019.6274. PMID: 31341543; PMCID: PMC6597151.
Let’s conduct a critical appraisal of this article.
Now that you know how to critically appraise both quantitative and qualitative papers, it is also important to note that numerous critical appraisal tools exist. Some have different sub-tools for different study designs, while others are designed to be used across multiple study designs. These tools aid the critical appraisal process by providing questions that prompt the reader while assessing a study’s quality.9 Examples of tools commonly used in the health professions are listed below in Table 7.2. Please note that this list is not exhaustive, as numerous appraisal tools exist. You can use any of these tools to appraise the quality of an article before choosing to use its findings to inform your own research or to change practice.
Table 7.2 Critical appraisal tools
An Introduction to Research Methods for Undergraduate Health Profession Students Copyright © 2023 by Faith Alele and Bunmi Malau-Aduli is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License , except where otherwise noted.
Medicine: A Brief Guide to Critical Appraisal
Have you ever seen a news piece about a scientific breakthrough and wondered how accurate the reporting is? Or wondered about the research behind the headlines? This is the beginning of critical appraisal: thinking critically about what you see and hear, and asking questions to determine how much of a 'breakthrough' something really is.
The article “Is this study legit? 5 questions to ask when reading news stories of medical research” is a succinct introduction to the sorts of questions you should ask in these situations, but there is more to critical appraisal than that. Read on to learn more about this practical and crucial aspect of evidence-based practice.
What is Critical Appraisal?
Critical appraisal forms part of the process of evidence-based practice. “Evidence-based practice across the health professions” outlines the five steps of this process. Critical appraisal is step three:
- Ask a question
- Access the information
- Appraise the articles found
- Apply the information
- Assess the outcome
Critical appraisal is the examination of evidence to determine its applicability to clinical practice. It considers (1):
- Are the results of the study believable?
- Was the study methodologically sound?
- What is the clinical importance of the study’s results?
- Are the findings sufficiently important? That is, are they practice-changing?
- Are the results of the study applicable to your patient?
- Is your patient comparable to the population in the study?
Why Critically Appraise?
If practitioners hope to ‘stand on the shoulders of giants’, practicing in a manner that is responsive to the discoveries of the research community, then it makes sense for the responsible, critically thinking practitioner to consider the reliability, influence, and relevance of the evidence presented to them.
While critical thinking is valuable, it is also important not to stray into cynicism; in the words of Hoffmann et al. (1):
… keep in mind that no research is perfect and that it is important not to be overly critical of research articles. An article just needs to be good enough to assist you to make a clinical decision.
How do I Critically Appraise?
Evidence-based practice is intended to be practical. To enable this, critical appraisal checklists have been developed to guide practitioners through the process in an efficient yet comprehensive manner.
Critical appraisal checklists guide the reader through the appraisal process by prompting them to ask certain questions of the paper they are appraising. There are many different critical appraisal checklists, but the best tailor their questions to the type of study the paper describes, allowing for a more nuanced and appropriate appraisal. Wherever possible, choose the appraisal tool that best fits the study you are appraising.
Like many things in life, repetition builds confidence: the more you apply critical appraisal tools (like checklists) to the literature, the more second nature the process will become and the more effective you will be.
How do I Identify Study Types?
Identifying the study type described in a paper is sometimes harder than it should be. Helpful papers spell out the study type in the title or abstract, but not all papers are helpful in this way. As such, the critical appraiser may need to do a little work to identify what type of study they are about to critique. Again, experience builds confidence, but understanding the typical features of common study types certainly helps.
To assist with this, the Library has produced a guide to study designs in health research .
The following selected references will also help with understanding study types, but there are other resources in the Library’s collection and freely available online:
- The “How to read a paper” article series from The BMJ is a well-known source for establishing an understanding of the features of different study types; this series was subsequently adapted into a book (“How to read a paper: the basics of evidence-based medicine”), which offers more depth and currency than the articles. (2)
- Chapter two of “Evidence-based practice across the health professions” briefly outlines some study types and their application; subsequent chapters go into more detail about different study types depending on what type of question they are exploring (intervention, diagnosis, prognosis, qualitative), along with systematic reviews.
- “Clinical evidence made easy” contains several chapters on different study designs and also includes critical appraisal tools. (3)
- “Translational research and clinical practice: basic tools for medical decision making and self-learning” unpacks the components of a paper, explaining their purpose along with key features of different study designs. (4)
- The BMJ website contains the contents of the fourth edition of the book “Epidemiology for the uninitiated”. This eBook contains chapters exploring ecological studies, longitudinal studies, case-control and cross-sectional studies, and experimental studies.
In order to encourage consistency and quality, authors of reports on research should follow reporting guidelines when writing their papers. The EQUATOR Network is a good source of reporting guidelines for the main study types.
While these guidelines aren't critical appraisal tools as such, they can assist by prompting you to consider whether the reporting of the research is missing important elements.
Once you've identified the study type at hand, visit EQUATOR to find the associated reporting guidelines and ask yourself: does this paper meet the guideline for its study type?
Which Checklist Should I Use?
Determining which checklist to use ultimately comes down to finding an appraisal tool that:
- Fits best with the study you are appraising
- Is reliable, well-known or otherwise validated
- You understand and are comfortable using
Below are some sources of critical appraisal tools. These have been selected as they are known to be widely accepted, easily applicable, and relevant to appraisal of a typical journal article. You may find another tool that you prefer, which is acceptable as long as it is defensible:
- CASP (Critical Appraisal Skills Programme)
- JBI (Joanna Briggs Institute)
- CEBM (Centre for Evidence-Based Medicine)
- SIGN (Scottish Intercollegiate Guidelines Network)
- STROBE (Strengthening the Reporting of Observational Studies in Epidemiology)
- BMJ Best Practice
The information on this page has been compiled by the Medical Librarian. Please contact the Library's Health Team ( [email protected] ) for further assistance.
1. Hoffmann T, Bennett S, Del Mar C. Evidence-based practice across the health professions. 2nd ed. Chatswood, N.S.W., Australia: Elsevier Churchill Livingstone; 2013.
2. Greenhalgh T. How to read a paper : the basics of evidence-based medicine. 5th ed. Chichester, West Sussex: Wiley; 2014.
3. Harris M, Jackson D, Taylor G. Clinical evidence made easy. Oxfordshire, England: Scion Publishing; 2014.
4. Aronoff SC. Translational research and clinical practice: basic tools for medical decision making and self-learning. New York: Oxford University Press; 2011.
- Last Updated: Nov 24, 2023 9:45 AM
- URL: https://deakin.libguides.com/medicine