Perspect Clin Res. 2021 Apr–Jun; 12(2)

Critical appraisal of published research papers – A reinforcing tool for research methodology: Questionnaire-based study

Snehalata Gajbhiye

Department of Pharmacology and Therapeutics, Seth GS Medical College and KEM Hospital, Mumbai, Maharashtra, India

Raakhi Tripathi

Urwashi Parmar, Nishtha Khatri, Anirudha Potey

1 Department of Clinical Trials, Serum Institute of India, Pune, Maharashtra, India

Background and Objectives:

Critical appraisal of published research papers is routinely conducted as a journal club (JC) activity in the pharmacology departments of various medical colleges across Maharashtra, and it forms an important part of their postgraduate curriculum. The objective of this study was to evaluate the perception of pharmacology postgraduate students and teachers toward the use of critical appraisal as a reinforcing tool for research methodology. Evaluation of the performance of in-house pharmacology postgraduate students in the critical appraisal activity constituted the secondary objective of the study.

Materials and Methods:

The study was conducted in two parts. In Part I, a cross-sectional questionnaire-based evaluation of perception toward the critical appraisal activity was carried out among pharmacology postgraduate students and teachers. In Part II of the study, JC score sheets of 2nd- and 3rd-year pharmacology students over the past 4 years were evaluated.

Results:

One hundred and twenty-seven postgraduate students and 32 teachers participated in Part I of the study. Of these, 118 (92.9%) students and 28 (87.5%) faculties considered the critical appraisal activity to be beneficial for the students. JC score sheet assessments suggested that there was a statistically significant improvement in the overall scores obtained by postgraduate students ( n = 25) in their last JC as compared to their first JC.

Conclusion:

Journal article criticism is a crucial tool to develop a research attitude among postgraduate students. Participation in the JC activity led to the improvement in the skill of critical appraisal of published research articles, but this improvement was not educationally relevant.

INTRODUCTION

Critical appraisal of a research paper is defined as “The process of carefully and systematically examining research to judge its trustworthiness, value and relevance in a particular context.”[ 1 ] Since scientific literature is rapidly expanding with more than 12,000 articles being added to the MEDLINE database per week,[ 2 ] critical appraisal is very important to distinguish scientifically useful and well-written articles from imprecise articles.

Educational authorities such as the Medical Council of India (MCI) and the Maharashtra University of Health Sciences (MUHS) have stated in the pharmacology postgraduate curriculum that students must critically appraise research papers. To impart training in these skills, MCI and MUHS have emphasized the introduction of a journal club (JC) activity for postgraduate (PG) students, wherein students review a published original research paper and state the merits and demerits of the paper. Abiding by this, pharmacology departments across various medical colleges in Maharashtra organize JCs at frequent intervals[ 3 , 4 ] and students discuss varied aspects of the article with the teaching faculty of the department.[ 5 ] Moreover, this activity carries a significant weightage of marks in the pharmacology university examination. As postgraduate students attend this activity throughout their 3-year tenure, the authors perceived that this activity of critical appraisal of research papers could emerge as a tool for reinforcing knowledge of research methodology. Hence, a questionnaire-based study was designed to elicit the perceptions of PG students and teachers.

There have been studies that have laid emphasis on the procedure of conducting critical appraisal of research papers and its application into clinical practice.[ 6 , 7 ] However, there are no studies that have evaluated how well students are able to critically appraise a research paper. The Department of Pharmacology and Therapeutics at Seth GS Medical College has developed an evaluation method to score the PG students on this skill and this tool has been implemented for the last 5 years. Since there are no research data available on the performance of PG Pharmacology students in JC, capturing the critical appraisal activity evaluation scores of in-house PG students was chosen as another objective of the study.

MATERIALS AND METHODS

Description of the journal club activity

JC is conducted in the Department of Pharmacology and Therapeutics at Seth GS Medical College once every 2 weeks. During the JC activity, postgraduate students critically appraise published original research articles on their completeness and aptness in terms of the following: study title, rationale, objectives, study design, methodology (study population, inclusion/exclusion criteria, duration, intervention, and safety/efficacy variables), randomization, blinding, statistical analysis, results, discussion, conclusion, references, and abstract. All postgraduate students attend this activity, while one of them critically appraises the article (having received the research paper from one of the faculty members 5 days before the day of the JC). Other faculties also attend these sessions and facilitate the discussions. As the student comments on the various sections of the paper, the same predecided faculty member who gave the article (single assessor) evaluates the student on a total score of 100, split per section as follows: Introduction – 20 marks, Methodology – 20 marks, Discussion – 20 marks, Results and Conclusion – 20 marks, References – 10 marks, and Title, Abstract, and Keywords – 10 marks. However, there are no standard operating procedures to assess the performance of students at the JC.
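The section-wise mark split described above can be sketched as a simple rubric. The dictionary and helper below are illustrative only (they are not part of the department's tooling); the weights themselves are from the text.

```python
# JC evaluation rubric with the section weights stated above
RUBRIC = {
    "Introduction": 20,
    "Methodology": 20,
    "Discussion": 20,
    "Results and Conclusion": 20,
    "References": 10,
    "Title, Abstract, and Keywords": 10,
}

def overall_score(section_marks):
    """Sum section marks, checking each stays within its section maximum."""
    for section, marks in section_marks.items():
        if not 0 <= marks <= RUBRIC[section]:
            raise ValueError(f"{section}: {marks} exceeds maximum {RUBRIC[section]}")
    return sum(section_marks.values())

# The six section maxima total the 100 marks described above
assert sum(RUBRIC.values()) == 100
```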

Methodology

After seeking permission from the Institutional Ethics Committee, the study was conducted in two parts. Part I consisted of a cross-sectional questionnaire-based survey conducted from October 2016 to September 2017. A questionnaire to evaluate perception toward the activity of critical appraisal of published papers as a research methodology reinforcing tool was developed by the study investigators. The questionnaire consisted of 20 questions: 14 questions [refer Figure 1 ] graded on a 3-point Likert scale (agree, neutral, and disagree), 1 multiple choice selection question, 2 dichotomous questions, 1 semi-open-ended question, and 2 open-ended questions. Content validation of this questionnaire was carried out with the help of eight pharmacology teachers. The content validity ratio (CVR) was calculated per item, and each item in the questionnaire had a CVR of >0.75.[ 8 ] The perception questionnaire was either E-mailed or sent through WhatsApp to PG pharmacology students and teaching faculty in pharmacology departments at various medical colleges across Maharashtra. Informed consent was obtained by E-mail from all the participants.
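The content validity ratio referred to above is conventionally computed with Lawshe's formula, CVR = (n_e − N/2)/(N/2), where n_e is the number of panelists rating an item "essential" and N is the panel size. A minimal sketch (the function name is illustrative):

```python
def content_validity_ratio(n_essential, n_panelists):
    """Lawshe's CVR = (n_e - N/2) / (N/2); ranges from -1 to +1."""
    half = n_panelists / 2
    return (n_essential - half) / half

# With the eight-member panel described above, an item rated essential
# by 7 of 8 reviewers gives CVR = 0.75; unanimous agreement gives 1.0.
print(content_validity_ratio(7, 8))  # 0.75
print(content_validity_ratio(8, 8))  # 1.0
```

Note that with an eight-member panel, a CVR above 0.75 implies unanimous agreement, since 7 of 8 yields exactly 0.75.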

Figure 1: Graphical representation of the percentage of students/teachers who agreed that critical appraisal of research helped them improve their knowledge on various aspects of research, perceived that faculty participation is important in this activity, and considered the critical appraisal activity beneficial for students. The numbers adjacent to the bar diagrams indicate the raw number of students/faculty who agreed, while brackets indicate %

Part II of the study consisted of evaluating the performance of postgraduate students on the skills of critical appraisal of published papers. For this purpose, marks obtained by 2nd- and 3rd-year residents during JC sessions conducted over a period of 4 years, from October 2013 to September 2017, were recorded and analyzed. No data on personal identifiers of the students were captured.

Statistical analysis

Marks obtained by postgraduate students in their first and last JC were compared using the Wilcoxon signed-rank test, while marks obtained by 2nd- and 3rd-year postgraduate students were compared using the Mann–Whitney test, since the data were nonparametric. These statistical analyses were performed using GraphPad Prism Version 7.0d (GraphPad Software, San Diego, California, USA). Data obtained from the perception questionnaire were entered in a Microsoft Excel sheet and expressed as frequencies (percentages) using descriptive statistics.
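The two comparisons described above can be sketched with SciPy in place of GraphPad Prism; the score lists below are invented for illustration and are not the study's data:

```python
from scipy.stats import wilcoxon, mannwhitneyu

# Paired comparison: each student's first vs. last JC overall score
first_jc = [64, 72, 58, 70, 75, 66, 80, 61]
last_jc = [70, 74, 66, 71, 82, 70, 85, 60]
stat, p_paired = wilcoxon(first_jc, last_jc)
print(f"Wilcoxon signed-rank P = {p_paired:.3f}")

# Unpaired comparison: independent 2nd-year and 3rd-year cohorts
second_year = [68, 71, 74, 65, 79, 72]
third_year = [70, 73, 69, 77, 75, 72, 68]
stat, p_unpaired = mannwhitneyu(second_year, third_year)
print(f"Mann-Whitney U P = {p_unpaired:.3f}")
```

The Wilcoxon test is the right choice for the first-versus-last comparison because the same 25 students contribute both scores (paired data), whereas the 2nd- and 3rd-year groups are independent samples.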

RESULTS

Participants who answered all items of the questionnaire were considered complete responders, and only completed questionnaires were analyzed. The questionnaire was sent through E-mail to 100 students and through WhatsApp to 68 students. Of the 100 students who received the questionnaire through E-mail, 79 responded completely, 8 were incomplete responders, and 13 did not respond. Of the 68 students who received the questionnaire through WhatsApp, 48 responded completely, 6 gave an incomplete response, and 14 did not respond. Hence, of the 168 postgraduate students who received the questionnaire, 127 responded completely (student response rate for analysis = 75.6%). The questionnaire was E-mailed to 33 faculties and sent through WhatsApp to 25 faculties. Of the 33 faculties who received the questionnaire through E-mail, 19 responded completely, 5 responded incompletely, and 9 did not respond at all. Of the 25 faculties who received the questionnaire through WhatsApp, 13 responded completely, 3 were incomplete responders, and 9 did not respond at all. Hence, of a total of 58 faculties who were contacted, 32 responded completely (faculty response rate for analysis = 55%). For Part I of the study, responses on the perception questionnaire from 127 postgraduate students and 32 postgraduate teachers were recorded and analyzed. None of the faculty who participated in the validation of the questionnaire participated in the survey. The number of responses obtained region wise (Mumbai region and rest of Maharashtra region) is depicted in Table 1.

Table 1: Region-wise distribution of responses

Region                          Students (n=127)    Faculty (n=32)
Mumbai colleges                 58 (45.7)           18 (56.3)
Rest of Maharashtra colleges    69 (54.3)           14 (43.7)

Number of responses obtained from students/faculty belonging to Mumbai colleges and rest of Maharashtra colleges. Brackets indicate percentages
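The response rates reported above follow directly from the per-channel counts; a quick arithmetic check (the channel totals are taken from the text):

```python
# Complete responders over total contacted, per the counts above
students_contacted = 100 + 68   # E-mail + WhatsApp
students_complete = 79 + 48
faculty_contacted = 33 + 25
faculty_complete = 19 + 13

print(f"Students: {students_complete}/{students_contacted} "
      f"= {students_complete / students_contacted:.1%}")  # 127/168 = 75.6%
print(f"Faculty:  {faculty_complete}/{faculty_contacted} "
      f"= {faculty_complete / faculty_contacted:.1%}")    # 32/58 = 55.2%
```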

As per the data obtained on the Likert scale questions, 102 (80.3%) students and 29 (90.6%) teachers agreed that critical appraisal trains students in conducting a review of literature before selecting a particular research topic. The majority of participants, i.e., 104 (81.9%) students and 29 (90.6%) teachers, also believed that the activity increases students' knowledge regarding various experimental evaluation techniques. Moreover, 112 (88.2%) students and 27 (84.4%) faculty considered that the critical appraisal activity results in improved skills of writing and understanding the methodology section of research articles in terms of inclusion/exclusion criteria, endpoints, and safety/efficacy variables. A total of 103 (81.1%) students and 24 (75%) teachers perceived that this activity results in refinement of the students' research work, and 118 (92.9%) students and 28 (87.5%) faculty considered the critical appraisal activity to be beneficial for the students. Responses to the 14 individual Likert scale items of the questionnaire are depicted in Figure 1.

With respect to the multiple choice selection question, 66 (52%) students and 16 (50%) teachers opined that faculty should select the paper, 53 (41.7%) students and 9 (28.1%) teachers stated that the papers should be selected by the presenting student himself/herself, while 8 (6.3%) students and 7 (21.9%) teachers expressed that some other student should select the paper to be presented at the JC.

The responses to the dichotomous questions were as follows: a majority of the students, 109 (85.8%), and 23 (71.9%) teachers perceived that a standard checklist for article review should be given to the students before critical appraisal of a journal article. The open-ended questions invited suggestions from the participants regarding ways of getting trained in critical appraisal skills and of improving the JC activity. Suggestions given by faculty included increasing the frequency of the JC activity, discussion of cited articles and new guidelines related to them, selecting all types of articles for criticism rather than only randomized controlled trials, and regular yearly exams on article criticism. Students stated that regular and frequent article criticism activity, practice of writing a letter to the editor after criticism, active participation by peers and faculty, increasing the weightage of marks for critical appraisal of papers in university examinations (at present, 50 marks out of 400), and formal training in research criticism from the 1st year of postgraduation could improve the critical appraisal program.

In Part II of this study, the performance of the students on the skill of critical appraisal of papers was evaluated. Complete data on the first and last JC scores of a total of 25 students of the department were available, and when these scores were compared, there was a statistically significant improvement in the overall scores ( P = 0.04), as well as in the scores obtained in the methodology ( P = 0.03) and results sections ( P = 0.02). This is depicted in Table 2. Although statistically significant, the differences in scores in the methodology section, results section, and overall scores were 1.28/20, 1.28/20, and 4.36/100, respectively, amounting to 6.4%, 6.4%, and 4.36% higher scores in the last JC, which may not be considered educationally relevant (practically significant). The quantum of difference that would be considered practically significant was not decided a priori.

Table 2: Comparison of marks obtained by pharmacology residents in their first and last journal club

Section (maximum marks)              First JC (n=25)               Last JC (n=25)                P value
                                     Mean±SD      Median (IQR)     Mean±SD      Median (IQR)
Introduction (20)                    13.48±2.52   14 (12-16)       14.28±2.32   14 (13-16)       0.22
Methodology (20)                     13.36±3.11   14 (12-16)       14.64±2.40   14 (14-16.5)     0.03*
Results and conclusion (20)          13.60±2.42   14 (12-15.5)     14.88±2.64   15 (13.5-16.5)   0.02*
Discussion (20)                      13.44±3.20   14 (11-16)       14.16±2.78   14 (12.5-16)     0.12
References (10)                      7.12±1.20    7 (6.5-8)        7.06±1.28    7 (6-8)          0.80
Title, abstract, and keywords (10)   7.44±0.92    7 (7-8)          7.78±1.12    8 (7-9)          0.17
Overall score (100)                  68.44±11.39  72 (64-76)       72.80±11.32  71 (68-82.5)     0.04*

Marks are presented as mean±SD and median (IQR); P values are from the Wilcoxon signed-rank test. *Statistically significant ( P <0.05). IQR=interquartile range, SD=standard deviation

Scores of two groups, one consisting of 2nd-year postgraduate students ( n = 44) and the other of 3rd-year postgraduate students ( n = 32), were compared and revealed no statistically significant difference in overall score ( P = 0.84). This is depicted in Table 3. Since the quantum of difference in the overall scores was a meager 0.84/100 (0.84%), it cannot be considered practically significant.

Table 3: Comparison of marks obtained by 2nd- and 3rd-year pharmacology residents in the activity of critical appraisal of research articles

Section (maximum marks)              2nd-year students (n=44)       3rd-year students (n=32)      P value
                                     Mean±SD      Median (IQR)      Mean±SD      Median (IQR)
Introduction (20)                    14.09±2.41   14 (13-16)        14.28±2.14   14 (13-16)       0.7527
Methodology (20)                     14.30±2.90   14.5 (13-16)      14.41±2.24   14 (13-16)       0.8385
Results and conclusion (20)          14.09±2.44   14 (12.5-16)      14.59±2.61   14.5 (13-16)     0.4757
Discussion (20)                      13.86±2.73   14 (12-16)        14.16±2.71   14.5 (12.5-16)   0.5924
References (10)                      7.34±1.16    8 (7-8)           7.05±1.40    7 (6-8)          0.2551
Title, abstract, and keywords (10)   7.82±0.90    8 (7-8.5)         7.83±1.11    8 (7-8.5)        0.9642
Overall score (100)                  71.50±10.71  71.5 (66.5-79.5)  72.34±10.85  73 (66-79.5)     0.8404

Marks are presented as mean±SD and median (IQR); P values are from the Mann-Whitney test. P <0.05 was considered statistically significant. IQR=interquartile range, SD=standard deviation

DISCUSSION

The present study gauged the perception of pharmacology postgraduate students and teachers toward the use of the critical appraisal activity as a reinforcing tool for research methodology. A majority (>50%) of both students and faculties believed that the critical appraisal activity increases students' knowledge of principles of ethics, experimental evaluation techniques, CONSORT guidelines, statistical analysis, the concept of conflict of interest, and current trends and recent advances in pharmacology; trains them in conducting a review of literature; and improves skills in protocol writing and referencing. In the study conducted by Crank-Patton et al., a survey of 278 general surgery program directors was carried out, and more than 50% indicated that the JC was important to their training program.[ 9 ]

The grading template used in Part II of the study was based on the IMRaD structure. Hence, equal weightage was given to the Introduction, Methodology, Results, and Discussion sections, and lesser weightage to the references and the title, abstract, and keywords sections.[ 10 ] When the scores obtained by 25 students in their first and last JC were compared, there was a statistically significant improvement in the overall scores of the students in their last JC. However, this meager improvement in scores cannot be considered educationally relevant, as the authors expected the students to score >90% for the upgrade to be considered educationally impactful. These findings suggest that even though participation in the JC activity led to a steady increase in students' performance (~4%), the increment was not as expected. In addition, the students did not display an excellent performance (>90%), with average scores being around 72% even in the last JC. This can probably be explained by the fact that students perform this activity in a routine setting and not in an examination setting. Unlike the scenario in an examination, students were aware that even if they performed at a mediocre level, there would be no repercussions.

A separate comparison of scores obtained by 44 students in their 2nd year and 32 students in their 3rd year of postgraduation was also performed. The number of student evaluation sheets reviewed for this analysis was greater than the number reviewed to compare first and last JC scores. This is explained by the fact that many students were still in their 2nd year when this analysis was done, so the score data for their last JC, which would take place in the 3rd year, were not available. In addition, a few students were asked to present at the JC multiple times during the 2nd/3rd year of their postgraduation.

While evaluating the critical appraisal scores obtained by 2nd- and 3rd-year postgraduate students, it was found that although the 3rd-year students had a greater mean overall score than the 2nd-year students, this difference was not statistically significant. During the 1st year of the MD Pharmacology course, students at the study center attend the JC once every 2 weeks. Even though the 1st-year students do not themselves present in the JC, they listen to and observe the criticism points stated by the senior peers presenting at the JC, and thereby acquire a substantial amount of the knowledge required to critically appraise papers. By the time they become 2nd-year students, they are already well versed in the program, and this could have led to similar overall mean scores between the 2nd-year students (71.50 ± 10.71) and 3rd-year students (72.34 ± 10.85). This finding suggests that attentive listening is as important as active participation in the JC. Moreover, although students are well acquainted with the process of criticism by their 3rd year, there is certainly scope for improvement in terms of the mean overall scores.

Similar results were obtained in a study conducted by Stern et al., in which 62 students in the internal medicine program at the New England Medical Center were asked to respond to a questionnaire, evaluate a sample article, and complete a self-assessment of competence in evaluation of research. Twenty-eight residents returned the questionnaire, and the composite score for the residents' objective assessment was not significantly correlated with postgraduate year or self-assessed critical appraisal skill.[ 11 ]

The article criticism activity provides the students with practical experience of techniques taught in research methodology workshops. However, it should be supplemented with activities that assess improvement in designing and presenting studies, such as protocol and paper writing. Thus, critical appraisal plays a significant role in reinforcing good research practices among the new generation of physicians. Moreover, critical appraisal is an integral part of PG assessment, and although the current format of conducting JCs did not show an educationally meaningful improvement, the authors believe that it is important to continue this activity with certain modifications suggested by students who participated in this study. Students suggested that an increase in the frequency of the critical appraisal activity, accompanied by active participation by peers and faculty, could help in the betterment of this activity. This should be brought to the attention of the faculty, as students seem interested to learn. Critical appraisal should be a two-way teaching-learning process between the students and faculty, not merely a requirement for satisfying the students' eligibility criteria for postgraduate university examinations. This activity is not only for the trainee doctors but also a part of the overall faculty development program.[ 12 ]

In the present era, JCs have been used as a tool not only to teach critical appraisal skills but also to teach other necessary aspects such as research design, medical statistics, clinical epidemiology, and clinical decision-making.[ 13 , 14 ] A study conducted by Khan in 2013 suggested that the success of a JC program can be ensured if institutes develop defined JC objectives for the development of students' learning capability and cultivate more skilled faculties.[ 15 ] A good JC is believed to facilitate relevant, meaningful scientific discussion and evaluation of research updates that will eventually benefit patient care.[ 12 ]

Although there is a lot of literature emphasizing the importance of the JC, there is a lack of studies that have evaluated the outcome of such activity. One such study, conducted by Ibrahim et al., assessed the importance of critical appraisal as an activity among surgical trainees in Nigeria. They reported that 92.42% of trainees considered the activity to be important or very important, and 48% of trainees stated that the activity helped in improving literature search.[ 16 ]

This study is unique since it is the first of its kind to evaluate how well students are able to critically appraise a research paper. Moreover, the study has taken into consideration the opinions of students as well as faculties, unlike the previous literature, which has laid emphasis only on students' perception. A limitation of this study is that the sample size for faculties was smaller than that for students, as it was not possible to convince distant faculty in other cities to fill in the survey. Besides, there may be variation in the manner of conduct of the critical appraisal activity in pharmacology departments across the various medical colleges in the country. Another limitation of this study was that a single assessor graded a single student during one particular JC. Nevertheless, each student presented at multiple JCs and thereby came across multiple assessors. Since the articles addressed at different JCs were disparate, interobserver variability was not taken into account in this study. Furthermore, the authors did not make an a priori decision on the quantum of increase in scores that would be considered educationally meaningful.

CONCLUSION

Pharmacology students and teachers acknowledge the role of critical appraisal in improving the ability to understand the crucial concepts of research methodology and research conduct. In our institute, participation in the JC activity led to an improvement in the skill of critical appraisal of published research articles among the pharmacology postgraduate students; however, this improvement was not educationally relevant. The scores obtained by final-year postgraduate students in this activity were nearly 72%, indicating that there is still scope for improvement in this skill.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.

Acknowledgments

We would like to acknowledge the support rendered by the entire Department of Pharmacology and Therapeutics at Seth GS Medical College.

Review Article

Published: 20 January 2009

How to critically appraise an article

Jane M Young and Michael J Solomon

Nature Clinical Practice Gastroenterology & Hepatology, volume 6, pages 82–91 (2009)


Critical appraisal is a systematic process used to identify the strengths and weaknesses of a research article in order to assess the usefulness and validity of research findings. The most important components of a critical appraisal are an evaluation of the appropriateness of the study design for the research question and a careful assessment of the key methodological features of this design. Other factors that also should be considered include the suitability of the statistical methods used and their subsequent interpretation, potential conflicts of interest and the relevance of the research to one's own practice. This Review presents a 10-step guide to critical appraisal that aims to assist clinicians to identify the most relevant high-quality studies available to guide their clinical practice.

  • Critical appraisal is a systematic process used to identify the strengths and weaknesses of a research article
  • Critical appraisal provides a basis for decisions on whether to use the results of a study in clinical practice
  • Different study designs are prone to various sources of systematic bias
  • Design-specific critical-appraisal checklists are useful tools to help assess study quality
  • Assessments of other factors, including the importance of the research question, the appropriateness of statistical analysis, the legitimacy of conclusions, and potential conflicts of interest, are an important part of the critical appraisal process



Moher D et al . (2001) The CONSORT statement: revised recommendations for improving the quality of reports of parallel group randomized trials. BMC Medical Research Methodology 1 : 2 [ http://www.biomedcentral.com/ 1471-2288/1/2 ] (accessed 25 November 2008)

Rochon PA et al . (2005) Reader's guide to critical appraisal of cohort studies: 1. Role and design. BMJ 330 : 895–897

Mamdani M et al . (2005) Reader's guide to critical appraisal of cohort studies: 2. Assessing potential for confounding. BMJ 330 : 960–962

Normand S et al . (2005) Reader's guide to critical appraisal of cohort studies: 3. Analytical strategies to reduce confounding. BMJ 330 : 1021–1023

von Elm E et al . (2007) Strengthening the reporting of observational studies in epidemiology (STROBE) statement: guidelines for reporting observational studies. BMJ 335 : 806–808

Sutton-Tyrrell K (1991) Assessing bias in case-control studies: proper selection of cases and controls. Stroke 22 : 938–942

Knottnerus J (2003) Assessment of the accuracy of diagnostic tests: the cross-sectional study. J Clin Epidemiol 56 : 1118–1128

Furukawa TA and Guyatt GH (2006) Sources of bias in diagnostic accuracy studies and the diagnostic process. CMAJ 174 : 481–482

Bossyut PM et al . (2003)The STARD statement for reporting studies of diagnostic accuracy: explanation and elaboration. Ann Intern Med 138 : W1–W12

STARD statement (Standards for the Reporting of Diagnostic Accuracy Studies). [ http://www.stard-statement.org/ ] (accessed 10 September 2008)

Raftery J (1998) Economic evaluation: an introduction. BMJ 316 : 1013–1014

Palmer S et al . (1999) Economics notes: types of economic evaluation. BMJ 318 : 1349

Russ S et al . (1999) Barriers to participation in randomized controlled trials: a systematic review. J Clin Epidemiol 52 : 1143–1156

Tinmouth JM et al . (2004) Are claims of equivalency in digestive diseases trials supported by the evidence? Gastroentrology 126 : 1700–1710

Kaul S and Diamond GA (2006) Good enough: a primer on the analysis and interpretation of noninferiority trials. Ann Intern Med 145 : 62–69

Piaggio G et al . (2006) Reporting of noninferiority and equivalence randomized trials: an extension of the CONSORT statement. JAMA 295 : 1152–1160

Heritier SR et al . (2007) Inclusion of patients in clinical trial analysis: the intention to treat principle. In Interpreting and Reporting Clinical Trials: a Guide to the CONSORT Statement and the Principles of Randomized Controlled Trials , 92–98 (Eds Keech A. et al .) Strawberry Hills, NSW: Australian Medical Publishing Company

National Health and Medical Research Council (2007) National Statement on Ethical Conduct in Human Research 89–90 Canberra: NHMRC

Lo B et al . (2000) Conflict-of-interest policies for investigators in clinical trials. N Engl J Med 343 : 1616–1620

Kim SYH et al . (2004) Potential research participants' views regarding researcher and institutional financial conflicts of interests. J Med Ethics 30 : 73–79

Komesaroff PA and Kerridge IH (2002) Ethical issues concerning the relationships between medical practitioners and the pharmaceutical industry. Med J Aust 176 : 118–121

Little M (1999) Research, ethics and conflicts of interest. J Med Ethics 25 : 259–262

Lemmens T and Singer PA (1998) Bioethics for clinicians: 17. Conflict of interest in research, education and patient care. CMAJ 159 : 960–965


Author information

Authors and affiliations

JM Young is an Associate Professor of Public Health and the Executive Director of the Surgical Outcomes Research Centre at the University of Sydney and Sydney South-West Area Health Service, Sydney, Australia.

Jane M Young

MJ Solomon is Head of the Surgical Outcomes Research Centre and Director of Colorectal Research at the University of Sydney and Sydney South-West Area Health Service, Sydney, Australia.

Michael J Solomon


Corresponding author

Correspondence to Jane M Young.

Ethics declarations

Competing interests

The authors declare no competing financial interests.


About this article

Cite this article

Young, J., Solomon, M. How to critically appraise an article. Nat Rev Gastroenterol Hepatol 6 , 82–91 (2009). https://doi.org/10.1038/ncpgasthep1331


Received : 10 August 2008

Accepted : 03 November 2008

Published : 20 January 2009

Issue Date : February 2009

DOI : https://doi.org/10.1038/ncpgasthep1331


This article is cited by

Emergency physicians’ perceptions of critical appraisal skills: a qualitative study.

  • Sumintra Wood
  • Jacqueline Paulis
  • Angela Chen

BMC Medical Education (2022)

An integrative review on individual determinants of enrolment in National Health Insurance Scheme among older adults in Ghana

  • Anthony Kwame Morgan
  • Anthony Acquah Mensah

BMC Primary Care (2022)

Autopsy findings of COVID-19 in children: a systematic review and meta-analysis

  • Anju Khairwa
  • Kana Ram Jat

Forensic Science, Medicine and Pathology (2022)

The use of a modified Delphi technique to develop a critical appraisal tool for clinical pharmacokinetic studies

  • Alaa Bahaa Eldeen Soliman
  • Shane Ashley Pawluk
  • Ousama Rachid

International Journal of Clinical Pharmacy (2022)

Critical Appraisal: Analysis of a Prospective Comparative Study Published in IJS

  • Ramakrishna Ramakrishna HK
  • Swarnalatha MC

Indian Journal of Surgery (2021)


  • Teesside University Student & Library Services
  • Learning Hub Group

Critical Appraisal for Health Students


Appraisal of a Quantitative paper: Top tips


Critical appraisal of a quantitative paper (RCT)

This guide, aimed at health students, provides basic level support for appraising quantitative research papers. It's designed for students who have already attended lectures on critical appraisal. One framework for appraising quantitative research (based on reliability, internal and external validity) is provided and there is an opportunity to practise the technique on a sample article.

Please note this framework is for appraising one particular type of quantitative research, a Randomised Controlled Trial (RCT), which is defined as:

a trial in which participants are randomly assigned to one of two or more groups: the experimental group or groups receive the intervention or interventions being tested; the comparison group (control group) receive usual care or no treatment or a placebo.  The groups are then followed up to see if there are any differences between the results.  This helps in assessing the effectiveness of the intervention.(CASP, 2020)

Support materials

  • Framework for reading quantitative papers (RCTs)
  • Critical appraisal of a quantitative paper PowerPoint

To practise following this framework for critically appraising a quantitative article, please look at the following article:

Marrero, D.G.  et al.  (2016) 'Comparison of commercial and self-initiated weight loss programs in people with prediabetes: a randomized control trial',  AJPH Research , 106(5), pp. 949-956.

Critical Appraisal of a quantitative paper (RCT): practical example

  • Internal Validity
  • External Validity
  • Reliability Measurement Tool

How to use this practical example 

Using the framework, you can have a go at appraising a quantitative paper - we are going to look at the following article:

Marrero, D.G. et al. (2016) 'Comparison of commercial and self-initiated weight loss programs in people with prediabetes: a randomized control trial', AJPH Research, 106(5), pp. 949-956.

Step 1. Take a quick look at the article.

Step 2. Click on the Internal Validity tab above - there are questions to help you appraise the article. Read the questions and look for the answers in the article.

Step 3. Click on each question and our answers will appear.

Step 4. Repeat with the other aspects of external validity and reliability.

Questioning the internal validity:

  • Randomisation: How were participants allocated to each group? Did a randomisation process take place?
  • Comparability of groups: How similar were the groups (e.g. age, sex, ethnicity)? Is this made clear?
  • Blinding (none, single, double or triple): Who was not aware of which group a patient was in (e.g. nobody; only the patient; patient and clinician; or patient, clinician and researcher)? Was it feasible for more blinding to have taken place?
  • Equal treatment of groups: Were both groups treated in the same way?
  • Attrition: What percentage of participants dropped out? Did this adversely affect one group? Has this been evaluated?
  • Overall internal validity: Does the research measure what it is supposed to be measuring?

Questioning the external validity:

  • Attrition: Was everyone accounted for at the end of the study? Was any attempt made to contact drop-outs?
  • Sampling approach: How was the sample selected? Was it based on probability or non-probability? What was the approach (e.g. simple random, convenience)? Was this an appropriate approach?
  • Sample size (power calculation): How many participants? Was a sample size calculation performed? Did the study pass?
  • Exclusion/inclusion criteria: Were the criteria set out clearly? Were they based on recognised diagnostic criteria?
  • Overall external validity: Can the results be applied to the wider population?

Questioning the reliability (measurement tool):

  • Internal consistency reliability (Cronbach's alpha): Has a Cronbach's alpha score of 0.7 or above been included?
  • Test-retest reliability correlation: Was the test repeated more than once? Were the same results received? Has a correlation coefficient been reported? Is it above 0.7?
  • Validity of measurement tool: Is it an established tool? If not, what has been done to check its reliability (pilot study, expert panel, literature review)? Criterion validity (test against other tools): Has a criterion validity comparison been carried out? Was the score above 0.7?
  • Overall reliability: How consistent are the measurements?

Overall validity and reliability: Overall, how valid and reliable is the paper?
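The internal consistency question above asks whether a Cronbach's alpha of 0.7 or above has been reported. As a minimal sketch, with hypothetical questionnaire data, the statistic can be computed directly from per-item scores:

```python
# Minimal sketch of Cronbach's alpha on hypothetical questionnaire data;
# scores of 0.7 or above are conventionally taken as acceptable.
import statistics

def cronbach_alpha(items):
    """items: one list of scores per question, each with one entry per respondent."""
    k = len(items)
    item_vars = sum(statistics.variance(item) for item in items)
    totals = [sum(scores) for scores in zip(*items)]   # each respondent's total
    total_var = statistics.variance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Five respondents answering three related questions on a 1-5 scale
q1 = [4, 5, 3, 4, 2]
q2 = [4, 4, 3, 5, 2]
q3 = [5, 5, 2, 4, 1]
print(round(cronbach_alpha([q1, q2, q3]), 2))  # 0.92: above the 0.7 threshold
```

Alpha rises when respondents who score high on one item also score high on the others, which is exactly what "internal consistency" means.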

  • Last Updated: Jul 23, 2024 3:37 PM
  • URL: https://libguides.tees.ac.uk/critical_appraisal


Critical appraisal of a clinical research paper

What one needs to know.

Manjali, Jifmi Jose; Gupta, Tejpal

Department of Radiation Oncology, Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, Maharashtra, India

Address for correspondence: Dr. Tejpal Gupta, ACTREC, Tata Memorial Centre, Homi Bhabha National Institute, Kharghar, Navi Mumbai - 410 210, Maharashtra, India. E-mail: [email protected]

Received May 25, 2020

Received in revised form June 11, 2020

Accepted June 19, 2020

This is an open access journal, and articles are distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 License, which allows others to remix, tweak, and build upon the work non-commercially, as long as appropriate credit is given and the new creations are licensed under the identical terms.

In the present era of evidence-based medicine (EBM), integrating best research evidence into the clinical practice necessitates developing skills to critically evaluate and analyze the scientific literature. Critical appraisal is the process of systematically examining research evidence to assess its validity, results, and relevance to inform clinical decision-making. All components of a clinical research article need to be appraised as per the study design and conduct. As research bias can be introduced at every step in the flow of a study leading to erroneous conclusions, it is essential that suitable measures are adopted to mitigate bias. Several tools have been developed for the critical appraisal of scientific literature, including grading of evidence to help clinicians in the pursuit of EBM in a systematic manner. In this review, we discuss the broad framework for the critical appraisal of a clinical research paper, along with some of the relevant guidelines and recommendations.

INTRODUCTION

Medical research information is ever growing and branching day by day. Despite the vastness of medical literature, it is necessary that as clinicians we offer the best treatment to our patients as per the current knowledge. Integrating best research evidence with clinical expertise and patient values has led to the concept of evidence-based medicine (EBM).[ 1 ] Although this philosophy originated in the middle of the 19th century,[ 2 ] it first appeared in its current form in the modern medical literature in 1991.[ 3 ] EBM is defined as the conscientious, explicit, and judicious use of the current best evidence in making decisions about the care of an individual patient.[ 1 ] The essentials of EBM include generating a clinical question, tracking the best available evidence, critically evaluating the evidence for validity and clinical usefulness, applying the results to clinical practice, and evaluating its performance. Appropriate application of EBM can result in cost-effectiveness and improve health-care efficiency.[ 4 ] Without continual accumulation of new knowledge, existing dogmas and paradigms quickly become outdated and may prove detrimental to patients. The current growth of medical literature, with 1.8 million scientific articles published in 2012 alone,[ 5 ] often makes it difficult for clinicians to keep pace with the vast amount of scientific data, thus making foraging (alerts to new information) and hunting (finding answers to clinical questions) essential skills to help navigate the so-called "jungle" of information.[ 6 ] Therefore, it is essential that health-care professionals read medical literature selectively to effectively utilize their limited time and assiduously imbibe new knowledge to improve decision-making for their patients.

To practice EBM in its true sense, a clinician not only needs to devote time to develop the skill of effectively searching the literature, but also needs to learn to evaluate the significance, methodology, outcomes, and transparency of a study.[ 4 ] Along with the evaluation and interpretation of a study, a thorough understanding of its methodology is necessary. It is common knowledge that studies with positive results are relatively easy to publish.[ 7 , 8 ] However, it is the critical appraisal of any research study (even those with negative results) that helps us to understand the science better and ask relevant questions in the future using an appropriate study design and endpoints. Therefore, this review is focused on the framework for the critical appraisal of a clinical research paper. In addition, we have also discussed some of the relevant guidelines and recommendations for the critical appraisal of clinical research papers.

CRITICAL APPRAISAL

Critical appraisal is the process of systematically examining the research evidence to assess its validity, results, and relevance before using it to inform a decision.[ 9 ] It entails the following:

  • Balanced assessment of the benefits/strengths and flaws/weaknesses of a study
  • Assessment of the research process and results
  • Consideration of quantitative and qualitative aspects.

Critical appraisal is performed to assess the following aspects of a study:

  • Validity – Is the methodology robust?
  • Reliability – Are the results credible?
  • Applicability – Do the results have the potential to change the current practice?

Contrary to common belief, critical appraisal is neither the negative dismissal of any piece of research nor an assessment of the results alone; it is neither based solely on statistical analysis nor a process undertaken only by experts. When performing a critical appraisal of a scientific article, it is essential that we know its basic composition and assess every section meticulously.

Initial assessment

This involves taking a generalized look at the details of the article. The journal it was published in holds special value – a peer-reviewed, indexed journal with a good impact factor adds robustness to the paper. The setting, timeline, and year of publication of the study also need to be noted, as they provide a better understanding of the evolution of thoughts in that particular subject. Declaration of the conflicts of interest by the authors, the role of the funding source if any, and any potential commercial bias should also be noted.[ 10 ]

COMPONENTS OF A CLINICAL RESEARCH PAPER

The components of any scientific article or clinical research paper remain largely the same. An article begins with a title, abstract, and keywords, which are followed by the main text, which includes the IMRAD – introduction, methods, results and discussion, and ends with the conclusion and references.

Abstract

It is a brief summary of the research article which helps the readers understand the purpose, methods, and results of the study. Although an abstract may provide a brief overview of the study, the full text of the article needs to be read and evaluated for a thorough understanding. There are two types of abstracts, namely structured and unstructured. A structured abstract comprises different sections typically labelled as background/purpose, methods, results, and conclusion, whereas an unstructured abstract is not divided into these sections.

Introduction

The introduction of a research paper familiarizes the reader with the topic. It refers to the current evidence in the particular subject and the possible lacunae which necessitate the present study. In other words, the introduction puts the study in perspective. The findings of other related studies have to be quoted and referenced, especially their central statements. The introduction also needs to justify the appropriateness of the chosen study.[ 11 ]

Methods

This section highlights the procedure followed while conducting the study. It provides all the data necessary for the study's appraisal and lays out the study design, which is paramount. For clinical research articles, this section should describe the participant or patient/population/problem (P), intervention (I), comparison (C), outcome (O), and study design (S), generally referred to as the PICO(S) framework [ Table 1 ].
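During appraisal, it can help to record the PICO(S) elements of a study explicitly so that none is overlooked. The sketch below is illustrative only, with hypothetical example values:

```python
# Illustrative sketch: capturing the PICO(S) elements of a study as a plain
# data structure so each component can be checked explicitly during appraisal.
# The example values are hypothetical.
from dataclasses import dataclass

@dataclass
class PICOS:
    population: str    # participant or patient/population/problem (P)
    intervention: str  # intervention (I)
    comparison: str    # comparison (C)
    outcome: str       # outcome (O)
    study_design: str  # study design (S)

question = PICOS(
    population="adults with prediabetes",
    intervention="commercial weight loss program",
    comparison="self-initiated weight loss",
    outcome="weight change at 12 months",
    study_design="randomized controlled trial",
)
print(question.study_design)
```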


Study designs and levels of evidence

Study designs are broadly divided into descriptive and interventional studies,[ 12 ] which can be further subdivided as shown in Figure 1. Each study design has its own characteristics and should be used in the appropriate setting. The various study designs form the building blocks of evidence. This in turn justifies the need for a hierarchical classification of evidence, referred to as "Levels of Evidence," as it forms the cornerstone of EBM [ Table 2 ]. Most medical journals now mandate that the submitted manuscript comply with the clinical research reporting statements and guidelines applicable to the study design [ Table 3 ] to maintain clarity, transparency, and reproducibility and ensure comparability across different studies asking the same research question. As per the study design, the appropriate descriptive and inferential statistical analyses should be specified in the statistical plan. For prospective studies, a clear mention of sample size calculation (depending on the type of study, power, alpha error, meaningful difference, and variance) is mandatory, so as to identify whether the study was adequately powered.[ 13 ] The endpoints (primary, secondary, and exploratory, if any) should be mentioned clearly along with the exact methods used for the measurement of the variables.
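A sample size calculation of the kind described above can be sketched with the standard two-arm normal-approximation formula for comparing means. The inputs below (difference, standard deviation) are hypothetical, and the z-values are the conventional constants for two-sided alpha = 0.05 and 80% power:

```python
# Minimal sketch of a two-arm sample-size calculation for comparing means
# (hypothetical inputs). Uses the normal-approximation formula
# n = 2 * ((z_alpha + z_beta) * sd / delta)^2 participants per arm.
import math

def n_per_arm(delta, sd, z_alpha=1.959964, z_beta=0.841621):
    """delta = meaningful difference, sd = expected variability.
    Defaults correspond to two-sided alpha = 0.05 and power = 0.80."""
    n = 2 * ((z_alpha + z_beta) * sd / delta) ** 2
    return math.ceil(n)  # round up to a whole participant

# e.g. to detect a 5-point difference assuming an SD of 12:
print(n_per_arm(delta=5.0, sd=12.0))  # 91 per arm
```

Note how halving the detectable difference roughly quadruples the required sample, which is why underpowered studies are common when the expected effect is small.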


Statistical testing

The statistical framework of any research study is commonly based on testing the null hypothesis, wherein the results are deemed significant by comparing P values obtained from an experimental dataset to a predefined significance level (0.05 being the most popular choice). By definition, the P value is the probability, under the specified statistical model, of obtaining a statistical summary equal to or more extreme than the one computed from the data, and can range from 0 to 1. P < 0.05 indicates that the results are unlikely to be due to chance alone. Unfortunately, the P value does not indicate the magnitude of the observed difference, which may also be desirable. An alternative and complementary approach is the use of confidence intervals (CI), where a CI is a range of values calculated from the observed data that is likely to contain the true value at a specified probability. The probability is chosen by the investigator and is customarily set at 95% (1 – alpha error of 0.05). CIs provide information that may be used to test hypotheses; additionally, they convey information related to precision, power, sample size, and effect size.
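The complementary roles of the P value and the CI can be illustrated with a small numerical sketch. The data are hypothetical and a simple normal (z) approximation is used for clarity rather than a full t-test:

```python
# Hypothetical sketch: a two-sided P value versus a 95% CI for the difference
# between two group means, using a simple normal (z) approximation.
import math
import statistics

control = [5.1, 4.8, 5.6, 5.0, 4.9, 5.3, 5.2, 4.7, 5.4, 5.0]
treated = [4.6, 4.4, 4.9, 4.5, 4.3, 4.8, 4.7, 4.2, 4.6, 4.5]

diff = statistics.mean(treated) - statistics.mean(control)
se = math.sqrt(statistics.variance(treated) / len(treated)
               + statistics.variance(control) / len(control))

z = diff / se                                               # test statistic
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))   # two-sided P value
ci = (diff - 1.96 * se, diff + 1.96 * se)                   # 95% CI for the difference

# The P value says the difference is unlikely to be chance alone; the CI
# additionally shows its magnitude and precision (both limits below zero here).
print(f"difference = {diff:.2f}, P = {p:.2g}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```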

Results

This section contains the findings of the study, presented clearly and objectively. The results obtained using the descriptive and inferential statistical analyses (as mentioned in the methods section) should be described. The use of tables and figures, including graphical representation [ Table 4 ], is encouraged to improve clarity;[ 14 ] however, the duplication of these data in the text should be avoided.


Discussion

The discussion section presents the authors' interpretations of the obtained results. This section includes:

  • A comparison of the study results with what is currently known, drawing similarities and differences
  • Novel findings of the study that have added to the existing body of knowledge
  • Caveats and limitations.

References

It is imperative that the key relevant references are cited in any research paper in the appropriate format which allows the readers to access the original source of the specified statement or evidence. A brief look at the reference list gives an overview of how well the indexed medical literature was searched for the purpose of writing the manuscript.

Overall assessment

After a careful assessment of the various sections of a research article, it is necessary to assess the relevance of the study findings to the present scenario and weigh the potential benefits and drawbacks of its application to the population. In this context, it is necessary that the integrity of the intervention be noted. This can be verified by assessing the factors such as adherence to the specified program, the exposure needed, quality of delivery, participant responsiveness, and potential contamination. This relates to the feasibility of applying the intervention to the community.

BIAS IN CLINICAL RESEARCH

Research articles are the media through which science is communicated, and it is necessary that we adhere to the basic principles of transparency and accuracy when communicating our findings. Any systematic trend or deviation from the truth in data collection, analysis, interpretation, or publication is called bias.[ 15 ] This may lead to erroneous conclusions, and hence, all scientists and clinicians must be aware of bias and employ all possible measures to mitigate it.

The extent to which a study is free from bias defines its internal validity. Internal validity is different from external validity and precision. The external validity of a study refers to its generalizability or applicability (which depends on the purpose of the study), while precision is the extent to which a study is free from random errors (which depends on the number of participants). A study is irrelevant without internal validity even if it is applicable and precise.[ 16 ] A bias can be introduced at every step in the flow of a study [ Figure 2 ].


The various types of biases in clinical research include:

  • Selection bias: This happens while recruiting patients. It may lead to differences in the way patients are accepted or rejected for a trial and in the way interventions are assigned to individuals. We need to assess whether the study population is a true representative of the target population. Furthermore, when sequence generation is absent or inadequate, treatment effects can be overestimated compared to properly randomized trials.[ 14 ] This can be mitigated by using a process called randomization. Randomization is the process of assigning clinical trial participants to treatment groups, such that each participant has an equal chance of being assigned to a particular group. This process should be completely random (e.g., tossing a coin, using a computer program, or throwing dice). When the process is not exactly random (e.g., randomization by date of birth, odd-even numbers, alternation, registration date, etc.), there is a significant potential for selection bias
  • Allocation bias: This is a bias that sets in when the person responsible for the study also allocates the treatment. It is known that inadequate or unclear concealment of allocation can lead to an overestimation of the treatment effects.[ 17 ] Adequate allocation concealment helps in mitigating this bias. This can be done by sequentially numbering identical drug containers or through central allocation by a person not involved in study enrollment
  • Confounding bias: Confounding factors, which influence both the dependent and independent variables and can thereby create a spurious association between them, can introduce significant bias. Hence, the baseline characteristics need to be similar in the groups being compared. Known confounders can be managed during the selection process by stratified randomization (in randomized trials) and matching (in observational studies) or during analysis by meta-regression.[ 18 ] However, unknown confounders can be minimized only through randomization
  • Performance bias: This is a bias that is introduced because of the knowledge about the intervention allocation in the patient, investigator, or outcome assessor. This results in ascertainment or recall bias (patient), reporting bias (investigator), and detection bias (outcome assessor), all of which can lead to an overestimation of the treatment effects.[ 17 ] This can be mitigated by blinding – a process in which the treatment allocation is hidden from the patient, investigator, and/or outcome assessor. However, it has to be noted that blinding may not be practical or possible in all kinds of clinical trials
  • Method bias: In clinical trials, it is necessary that the outcomes be assessed and recorded using valid and reliable tools, the lack of which can introduce a method bias[ 19 ]
  • Attrition bias: This is a bias that is introduced because of the systematic differences between the groups in the loss of participants from the study. It is necessary to describe the completeness of the outcomes including the exclusions (along with the reasons), loss to follow-up, and drop-outs from the analysis
  • Other bias: This includes any important concerns about biases not covered in the other domains.
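Randomization, described above as the main defence against selection and confounding bias, can be sketched in code. The following is a hypothetical permuted-block scheme (allocation is random within each balanced block), not a method prescribed by any particular guideline:

```python
# Hypothetical sketch of permuted-block randomization: the order within each
# balanced block is truly random, unlike alternation or date-of-birth schemes,
# which are predictable and prone to selection bias.
import random

def block_randomize(n_participants, block_size=4, seed=None):
    """Return a treatment/control allocation sequence built from balanced blocks."""
    rng = random.Random(seed)
    sequence = []
    while len(sequence) < n_participants:
        block = ["treatment"] * (block_size // 2) + ["control"] * (block_size // 2)
        rng.shuffle(block)            # random order within the block
        sequence.extend(block)
    return sequence[:n_participants]

allocation = block_randomize(12, seed=42)
print(allocation.count("treatment"), allocation.count("control"))  # 6 6
```

In practice such a sequence would be generated by someone not involved in enrollment and kept concealed (e.g., central allocation), which addresses the allocation bias described above.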

Trial registration

In recent times, it has become an ethical as well as a regulatory requirement in most countries to register clinical trials prospectively before the enrollment of the first subject. Registration of a clinical trial is defined as the publication of an internationally agreed upon set of information about the design, conduct, and administration of any clinical trial on a publicly accessible website managed by a registry conforming to international standards. Apart from improving the awareness and visibility of the study, registration ensures transparency in the conduct and reduces publication bias and selective reporting. Some of the common registries are ClinicalTrials.gov, run by the National Library of Medicine of the National Institutes of Health; the Clinical Trials Registry-India, run by the Indian Council of Medical Research; and the International Clinical Trials Registry Platform, run by the World Health Organization.

Tools for critical appraisal

Several tools have been developed to assess the transparency of the scientific research papers and the degree of congruence of the research question with the study in the context of the various sections listed above [ Table 5 ].


Ethical considerations

Bad ethics cannot produce good science. Therefore, all scientific research must follow the ethical principles laid out in the Declaration of Helsinki. For clinical research, it is mandatory that team members be trained in good clinical practice, familiarize themselves with clinical research methodology, and follow standard operating procedures as prescribed. Although the regulatory framework and landscape may vary to a certain extent depending upon the country where the research work is conducted, it is the responsibility of the Institutional Review Boards/Institutional Ethics Committees to provide study oversight such that the safety, well-being, and rights of the participants are adequately protected.

CONCLUSIONS

Critical appraisal is the systematic examination of the research evidence reported in the scientific articles to assess their validity, reliability, and applicability before using their findings to inform decision-making. It should be considered as the first step to grade the quality of evidence.

Financial support and sponsorship

Conflicts of interest

There are no conflicts of interest.


Keywords: Appraisal; bias; clinical study; evidence-based medicine; guidelines; tools



Evaluating Research Articles



Imagine for a moment that you are trying to answer a clinical (PICO) question regarding one of your patients/clients. Do you know how to determine if a research study is of high quality? Can you tell if it is applicable to your question? In evidence-based practice, there are many things to look for in an article that will reveal its quality and relevance. This guide is a collection of resources and activities that will help you learn how to evaluate articles efficiently and accurately.

Is health research new to you? Or perhaps you're a little out of practice with reading it? The following questions will help illuminate an article's strengths or shortcomings. Ask them of yourself as you are reading an article:

  • Is the article peer reviewed?
  • Are there any conflicts of interest based on the author's affiliation or the funding source of the research?
  • Are the research questions or objectives clearly defined?
  • Is the study a systematic review or meta analysis?
  • Is the study design appropriate for the research question?
  • Is the sample size justified? Do the authors explain how it is representative of the wider population?
  • Do the researchers describe the setting of data collection?
  • Does the paper clearly describe the measurements used?
  • Did the researchers use appropriate statistical measures?
  • Are the research questions or objectives answered?
  • Did the researchers account for confounding factors?
  • Have the researchers only drawn conclusions about the groups represented in the research?
  • Have the authors declared any conflicts of interest?

If the answers to these questions about an article you are reading are mostly YES , then it's likely that the article is of decent quality. If the answers are mostly NO , then it may be a good idea to move on to another article. If the YESes and NOs are roughly even, you'll have to decide for yourself whether the article is of good enough quality for you. Some factors, like a poor literature review, are not as important as others, such as the researchers neglecting to describe the measurements they used. As you read more research, you'll find it easier to distinguish research that is well done from research that is not.
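The "mostly YES / mostly NO" rule of thumb above can be sketched as a short script. This is purely illustrative: the questions and the simple tallying rule are not a validated appraisal instrument.

```python
def quick_screen(answers):
    """Rough first-pass screen: answers maps each question to True (yes) or False (no)."""
    yeses = sum(answers.values())
    nos = len(answers) - yeses
    if yeses > nos:
        return "mostly YES: likely decent quality"
    if nos > yeses:
        return "mostly NO: consider another article"
    return "mixed: weigh the individual answers yourself"

# Hypothetical answers for one article, using a subset of the questions above
answers = {
    "peer reviewed": True,
    "research questions clearly defined": True,
    "design appropriate for the question": True,
    "sample size justified": False,
    "measurements clearly described": True,
    "conflicts of interest declared": False,
}
print(quick_screen(answers))  # mostly YES: likely decent quality
```

As the text notes, not all questions carry equal weight, so a flat tally like this is only a starting point, not a verdict.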


Determining if a research study has used appropriate statistical measures is one of the most critical and difficult steps in evaluating an article. The following links are great, quick resources for helping to better understand how to use statistics in health research.


  • How to read a paper: Statistics for the non-statistician. II: “Significant” relations and their pitfalls. This article continues the checklist of questions that will help you to appraise the statistical validity of a paper. Greenhalgh T. How to read a paper: Statistics for the non-statistician. II: “Significant” relations and their pitfalls. BMJ 1997;315:422. *On the PMC PDF, you need to scroll past the first article to get to this one.*
  • A consumer's guide to subgroup analysis The extent to which a clinician should believe and act on the results of subgroup analyses of data from randomized trials or meta-analyses is controversial. Guidelines are provided in this paper for making these decisions.

Statistical Versus Clinical Significance

When appraising studies, it's important to consider both the clinical and statistical significance of the research. This video offers a quick explanation of why.

If you have a little more time, this video explores statistical and clinical significance in more detail, including examples of how to calculate an effect size.
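For readers who prefer code to video, one common effect-size measure, Cohen's d (the standardized mean difference), can be computed as below. The pain-score data are invented for illustration.

```python
import statistics

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b = len(group_a), len(group_b)
    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    return (mean_a - mean_b) / pooled_sd

# Hypothetical post-treatment pain scores (0-10 scale)
treatment = [3, 4, 2, 5, 3, 4]
control = [5, 6, 5, 7, 6, 5]
print(round(cohens_d(treatment, control), 2))  # -2.31
```

By the usual convention, |d| around 0.2 is read as a small effect, 0.5 medium and 0.8 large; the sign only indicates direction (here, lower pain scores in the treatment group).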

  • Statistical vs. Clinical Significance Transcript Transcript document for the Statistical vs. Clinical Significance video.
  • Effect Size Transcript Transcript document for the Effect Size video.
  • P Values, Statistical Significance & Clinical Significance This handout also explains clinical and statistical significance.
  • Absolute versus relative risk – making sense of media stories Understanding the difference between relative and absolute risk is essential to understanding statistical tests commonly found in research articles.
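The absolute-versus-relative distinction is easiest to see with numbers. The sketch below uses invented event rates (4% in controls, 2% on treatment): a headline "50% relative risk reduction" corresponds to an absolute reduction of only 2 percentage points.

```python
# Hypothetical event rates in two trial arms
control_rate = 0.04    # 4% of control patients have the event
treatment_rate = 0.02  # 2% of treated patients have the event

relative_risk = treatment_rate / control_rate            # 0.5
relative_risk_reduction = 1 - relative_risk              # 0.5, i.e. "risk halved"
absolute_risk_reduction = control_rate - treatment_rate  # 0.02, i.e. 2 percentage points
number_needed_to_treat = 1 / absolute_risk_reduction     # ~50 patients treated to prevent 1 event

print(f"RRR {relative_risk_reduction:.0%}, ARR {absolute_risk_reduction:.0%}, NNT {number_needed_to_treat:.0f}")
# RRR 50%, ARR 2%, NNT 50
```

This is why media stories quoting only relative risk can make a modest absolute benefit sound dramatic.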

Critical appraisal is the process of systematically evaluating research using established and transparent methods. In critical appraisal, health professionals use validated checklists/worksheets as tools to guide their assessment of the research. It is a more advanced way of evaluating research than the more basic method explained above. To learn more about critical appraisal or to access critical appraisal tools, visit the websites below.


  • Last Updated: Jun 11, 2024 10:26 AM
  • URL: https://libguides.massgeneral.org/evaluatingarticles



  • Volume 22, Issue 1
  • How to appraise qualitative research

  • Calvin Moorley 1 , Xabi Cathala 2
  • 1 Nursing Research and Diversity in Care, School of Health and Social Care, London South Bank University, London, UK
  • 2 Institute of Vocational Learning, School of Health and Social Care, London South Bank University, London, UK
  • Correspondence to Dr Calvin Moorley, Nursing Research and Diversity in Care, School of Health and Social Care, London South Bank University, London SE1 0AA, UK; Moorleyc@lsbu.ac.uk

https://doi.org/10.1136/ebnurs-2018-103044


Introduction

In order to make a decision about implementing evidence into practice, nurses need to be able to critically appraise research. Nurses also have a professional responsibility to maintain up-to-date practice. 1 This paper provides a guide on how to critically appraise a qualitative research paper.

What is qualitative research?


Useful terms

Some of the qualitative approaches used in nursing research include grounded theory, phenomenology, ethnography, case study (which can lend itself to mixed methods) and narrative analysis. The data collection methods used in qualitative research include in-depth interviews, focus groups, observations and stories in the form of diaries or other documents. 3

Authenticity

Title, keywords, authors and abstract

In a previous paper, we discussed how the title, keywords, authors’ positions and affiliations, and abstract can influence the authenticity and readability of quantitative research papers; 4 the same applies to qualitative research. However, other areas, such as the purpose of the study and the research question, theoretical and conceptual frameworks, sampling and methodology, also need consideration when appraising a qualitative paper.

Purpose and question

The topic under investigation in the study should be guided by a clear research question or a statement of the problem or purpose. An example of a statement can be seen in table 2 . Unlike most quantitative studies, qualitative research does not seek to test a hypothesis. The research statement should be specific to the problem and should be reflected in the design. This will inform the reader of what will be studied and justify the purpose of the study. 5

Example of research question and problem statement

An appropriate literature review should have been conducted and summarised in the paper. It should be linked to the subject, using peer-reviewed primary research that is up to date. We suggest papers with an age limit of 5–8 years, excluding original work. The literature review should give the reader a balanced view of what has been written on the subject. It is worth noting that for some qualitative approaches the literature review is conducted after the data collection to minimise bias, for example, in grounded theory studies. In phenomenological studies, the review sometimes occurs after the data analysis. If this is the case, the author(s) should make this clear.

Theoretical and conceptual frameworks

Most authors use the terms theoretical and conceptual frameworks interchangeably. Usually, a theoretical framework is used when research is underpinned by one theory that aims to help predict, explain and understand the topic investigated. A theoretical framework is the blueprint that can hold or scaffold a study’s theory. Conceptual frameworks are based on concepts from various theories and findings which help to guide the research. 6 It is the researcher’s understanding of how different variables are connected in the study, for example, the literature review and research question. Theoretical and conceptual frameworks connect the researcher to existing knowledge and these are used in a study to help to explain and understand what is being investigated. A framework is the design or map for a study. When you are appraising a qualitative paper, you should be able to see how the framework helped with (1) providing a rationale and (2) the development of research questions or statements. 7 You should be able to identify how the framework, research question, purpose and literature review all complement each other.

Sampling

There remains an ongoing debate about what an appropriate sample size for a qualitative study should be. We hold the view that qualitative research does not seek statistical power, and a sample size can be as small as one (eg, a single case study) or any number above one (eg, a grounded theory study), provided that it is appropriate and answers the research problem. Shorten and Moorley 8 explain that three main types of sampling exist in qualitative research: (1) convenience, (2) judgement or (3) theoretical. In the paper, the sample size should be stated and a rationale for how it was decided should be clear.

Methodology

Qualitative research encompasses a variety of methods and designs. Based on the chosen method or design, the findings may be reported in a variety of different formats. Table 3 provides the main qualitative approaches used in nursing with a short description.

Different qualitative approaches

The authors should make it clear why they are using a qualitative methodology and the chosen theoretical approach or framework. The paper should provide details of participant inclusion and exclusion criteria as well as recruitment sites where the sample was drawn from, for example, urban, rural, hospital inpatient or community. Methods of data collection should be identified and be appropriate for the research statement/question.

Data collection

Overall, there should be a clear trail of data collection. The paper should explain when and how the study was advertised and how participants were recruited and consented. It should also state when and where the data collection took place. Data collection methods include interviews, which can be structured or unstructured and conducted in depth, one to one or in a group. 9 Group interviews are often referred to as focus group interviews; these are often voice recorded and transcribed verbatim. It should be clear whether they were conducted face to face, by telephone or via any other type of media. Table 3 includes some data collection methods. Examples of other collection methods not included in table 3 are observation, diaries, video recording, photographs, documents or objects (artefacts). The schedule of questions for interviews, or the protocol for non-interview data collection, should be provided, available or discussed in the paper. Some authors may use the phrase ‘recruitment ended once data saturation was reached’. This simply means that the researchers were not gaining any new information at subsequent interviews, so they stopped data collection.

The data collection section should include details of the ethical approval gained to carry out the study, for example, the strategies used to gain participants’ consent to take part. The authors should make clear if any ethical issues arose and how these were resolved or managed.

The approach to data analysis (see ref 10) needs to be clearly articulated: for example, was more than one person responsible for analysing the data, and how were any discrepancies in findings resolved? An audit trail of how the data were analysed, including their management, should be documented. If member checking was used, this should also be reported. This level of transparency contributes to the trustworthiness and credibility of qualitative research. Some researchers provide a diagram of how they approached data analysis to demonstrate the rigour applied ( figure 1 ).


Figure 1. Example of data analysis diagram.

Validity and rigour

The study’s validity relies on the statement of the question/problem, the theoretical/conceptual framework, design, method, sample and data analysis. When critiquing qualitative research, these elements will help you to determine the study’s reliability. Noble and Smith 11 explain that validity is the integrity of the data and methods applied and that findings should accurately reflect the data. Rigour should acknowledge the researcher’s role and involvement as well as any biases. Essentially, it should focus on truth value, consistency, neutrality and applicability. 11 The authors should discuss whether they used triangulation (see table 2 ) to develop the best possible understanding of the phenomena.

Themes and interpretations and implications for practice

In qualitative research no hypothesis is tested, therefore, there is no specific result. Instead, qualitative findings are often reported in themes based on the data analysed. The findings should be clearly linked to, and reflect, the data. This contributes to the soundness of the research. 11 The researchers should make it clear how they arrived at the interpretations of the findings. The theoretical or conceptual framework used should be discussed aiding the rigour of the study. The implications of the findings need to be made clear and where appropriate their applicability or transferability should be identified. 12

Discussions, recommendations and conclusions

The discussion should relate to the research findings as the authors seek to make connections with the literature reviewed earlier in the paper to contextualise their work. A strong discussion will connect the research aims and objectives to the findings and will be supported with literature if possible. A paper that seeks to influence nursing practice will have a recommendations section for clinical practice and research. A good conclusion will focus on the findings and discussion of the phenomena investigated.

Qualitative research has much to offer nursing and healthcare in terms of understanding patients’ experience of illness, treatment and recovery; it can also help us better understand areas of healthcare practice. However, it must be done with rigour, and this paper provides some guidance for appraising such research. To help you critique a qualitative research paper, some guidance is provided in table 4 .

Some guidance for critiquing qualitative research

  • ↵ Nursing and Midwifery Council. The code: Standard of conduct, performance and ethics for nurses and midwives. 2015. https://www.nmc.org.uk/globalassets/sitedocuments/nmc-publications/nmc-code.pdf (accessed 21 Aug 18).

Patient consent for publication Not required.

Competing interests None declared.

Provenance and peer review Commissioned; internally peer reviewed.



How to Critically Appraise a Research Paper

Research papers are a powerful means through which millions of researchers around the globe pass on knowledge about our world.

However, the quality of research can be highly variable. To avoid being misled, it is vital to perform critical appraisals of research studies to assess the validity, results and relevance of the published research. Critical appraisal skills are essential to be able to identify whether published research provides results that can be used as evidence to help improve your practice.

What is a critical appraisal?

Most of us know not to believe everything we read in the newspaper or on various media channels. The same caution applies to research literature and journals: before we trust a research paper, we want to be safe in the knowledge that it has been rigorously and systematically checked and that it actually shows what it claims to show. This is where a critical appraisal comes in.

Critical appraisal is the process of carefully and systematically examining research to judge its trustworthiness, value and relevance in a particular context. We have put together a more detailed page to explain what critical appraisal is to give you more information.

Why is a critical appraisal of research required?

Critical appraisal skills are important because they enable you to systematically and objectively assess the trustworthiness, relevance and results of published papers. When a research article is published, the identity of its authors should not, by itself, be taken as an indication of its trustworthiness and relevance.

What are the benefits of performing critical appraisals for research papers?

Performing a critical appraisal helps to:

  • Reduce information overload by eliminating irrelevant or weak studies
  • Identify the most relevant papers
  • Distinguish evidence from opinion, assumptions, misreporting, and belief
  • Assess the validity of the study
  • Check the usefulness and clinical applicability of the study

How to critically appraise a research paper

There are some key questions to consider when critically appraising a paper. These include:

  • Is the study relevant to my field of practice?
  • What research question is being asked?
  • Was the study design appropriate for the research question?

CASP has several checklists to help with performing a critical appraisal which we believe are crucial because:

  • They help the user to undertake a complex task involving many steps
  • They support the user in being systematic by ensuring that all important factors or considerations are taken into account
  • They increase consistency in decision-making by providing a framework

In addition to our free checklists, CASP has developed a number of valuable online e-learning modules designed to increase your knowledge and confidence in conducting a critical appraisal.

Introduction To Critical Appraisal & CASP

This Module covers the following:

  • Challenges using evidence to change practice
  • 5 steps of evidence-based practice
  • Developing critical appraisal skills
  • Integrating and acting on the evidence
  • The Critical Appraisal Skills Programme (CASP)



RCE 672: Research and Program Evaluation: APA Sample Paper


APA Sample Paper from the Purdue OWL

  • The Purdue OWL has an APA Sample Paper available on its website.
  • Last Updated: Aug 29, 2024 1:48 PM
  • URL: https://libguides.thomasu.edu/RCE672
