• Study Protocol
  • Open access
  • Published: 26 August 2024

Learning effect of online versus onsite education in health and medical scholarship – protocol for a cluster randomized trial

  • Rie Raffing 1 ,
  • Lars Konge 2 &
  • Hanne Tønnesen 1  

BMC Medical Education volume 24, Article number: 927 (2024)


The disruption of health and medical education by the COVID-19 pandemic made educators question the effect of the online setting on students' learning, motivation, self-efficacy and preference. In light of the health care staff shortage, scalable online education seemed relevant. Reviews on the effect of online medical education have called for high-quality RCTs, which are increasingly relevant given rapid technological development and the widespread adoption of online learning in universities. The objective of this trial is to compare standardized and feasible outcomes of an online and an onsite setting of a research course regarding the efficacy for PhD students within the health and medical sciences: primarily on learning of research methodology, and secondly on preference, motivation and self-efficacy in the short term and academic achievements in the long term. Based on the authors' experience of conducting courses during the pandemic, the hypothesis is that outcomes in the student-preferred onsite setting differ from those in the online setting.

Cluster randomized trial with two parallel groups. Two PhD research training courses at the University of Copenhagen are randomized to an online (Zoom) or onsite (the Parker Institute, Denmark) setting. Enrolled students are invited to participate in the study. The primary outcome is short-term learning. Secondary outcomes are short-term preference, motivation and self-efficacy, and long-term academic achievements. Standardized, reproducible and feasible outcomes will be measured by tailor-made multiple-choice questionnaires, an evaluation survey, the frequently used Intrinsic Motivation Inventory, the Single Item Self-Efficacy Question, and Google Scholar publication data. The sample size is calculated to 20 clusters, and courses are randomized by a computer random number generator. Statistical analyses will be performed blinded by an external statistical expert.

The primary outcome and significant secondary outcomes will be compared and contrasted with the relevant literature. Limitations include the geographical setting; biases include the lack of blinding; and strengths include robust assessment methods within a well-established conceptual framework. Generalizability to PhD education in other disciplines is high. The results of this study will have implications both for students and educators involved in research training courses in health and medical education and for the patients who ultimately benefit from this training.

Trial registration

Retrospectively registered at ClinicalTrials.gov: NCT05736627. SPIRIT guidelines are followed.


Medical education was utterly disrupted for two years by the COVID-19 pandemic. In the midst of rearranging courses and adapting to online platforms we, together with lecturers and course managers around the globe, wondered what the conversion to an online setting did to students' learning, motivation and self-efficacy [1, 2, 3]. What the long-term consequences would be [4] and whether scalable online medical education should play a greater role in the future [5] seemed relevant and appealing questions at a time when health care professionals are in demand. Our experience of delivering research training during the pandemic was that although PhD students were grateful for courses being available, they found it difficult to concentrate due to the long screen hours. We sensed that most students preferred the onsite setting and perceived online courses as a temporary and inferior necessity. The question is whether this impacted their learning.

Since the internet came into common use in medical education, systematic reviews have sought to answer whether there is a difference in learning effect when teaching online compared to onsite. Although authors conclude that online learning may be equivalent in effect to onsite learning, they agree that studies are heterogeneous and small [6, 7], with low quality of evidence [8, 9]. They therefore call for more robust and adequately powered high-quality RCTs to confirm their findings and suggest that students' preferences in online learning should be investigated [7, 8, 9].

This uncovers two knowledge gaps: I) High-quality RCTs on online versus onsite learning in health and medical education and II) Studies on students’ preferences in online learning.

Recently, solid RCTs have been performed on web-based theoretical learning of research methods among health professionals [10, 11]. However, these studies concern asynchronous courses among medical or master's students with short-term outcomes.

This uncovers three additional knowledge gaps: III) studies on synchronous online learning, IV) among PhD students in health and medical education, V) with long-term measurement of outcomes.

The rapid technological development, including artificial intelligence (AI), and the widespread adoption and application of online learning forced by the pandemic have made online learning well established. It offers high-resolution, live synchronous settings, available on a variety of platforms, with integrated AI, options for interaction with and among students, chat and breakout rooms, and external digital tools for teachers [12, 13, 14]. Thus, investigating online learning today may be quite different from doing so before the pandemic. On the one hand, it could seem plausible that this technological development would make a difference in favour of online learning that could not be found in previous reviews of the evidence. On the other hand, the personal face-to-face interaction during onsite learning may still be more beneficial for the learning process; combined with our experience of students finding it difficult to concentrate online during the pandemic, we hypothesize that outcomes of the onsite setting differ from those of the online setting.

To support a robust study, we design it as a cluster randomized trial. Moreover, we use the well-established and widely used Kirkpatrick’s conceptual framework for evaluating learning as a lens to assess our outcomes [ 15 ]. Thus, to fill the above-mentioned knowledge gaps, the objective of this trial is to compare a synchronous online and an in-person onsite setting of a research course regarding the efficacy for PhD students within the health and medical sciences:

Primarily on theoretical learning of research methodology, and

Secondly on

◦ Preference, motivation and self-efficacy in the short term

◦ Academic achievements in the long term

Trial design

This study protocol covers a synchronous online and an in-person onsite setting of research courses, testing the efficacy for PhD students. It is a cluster randomized trial with two parallel arms (Fig. 1).

Fig. 1 Consort flow diagram

The study measures at baseline and post intervention. Baseline variables and knowledge scores are obtained on the first day of the course; post-intervention measurements are obtained on the last day of the course (short term) and monthly for 24 months (long term).

Randomization is stratified, giving a 1:1 allocation ratio of the courses. As the number of participants within each course might differ, the allocation ratio of participants in the study will not be fully equal and 1:1 balanced.

Study setting

The study site is The Parker Institute at Bispebjerg and Frederiksberg Hospital, University of Copenhagen, Denmark. From here, the courses are organized and run online and onsite. The course programs and time schedules, the learning objectives, the course management, the lecturers, and the delivery are identical in the two settings. The teachers use the same introductory presentations, followed by training in breakout groups, feedback and discussions. For the online group, the setting is organized as meetings in the online collaboration tool Zoom® [16], using the basic available functions such as screen sharing, the chat function for comments, breakout rooms, and other basic digital tools if preferred. The online version of the course is synchronous, with live education and interaction. For the onsite group, the setting is a physical classroom at the learning facilities of the Parker Institute. Coffee and tea as well as simple sandwiches and bottles of water, which facilitate sociality, are available in the onsite setting. Participants in the online setting must get their food and drink themselves, but online sociality is made possible by not closing the online room during breaks. The research methodology courses included in the study are "Practical Course in Systematic Review Technique in Clinical Research" (see course programme in Appendix 1) and "Getting started: Writing your first manuscript for publication" [17] (see course programme in Appendix 2). Both courses have 12 seats and last three or three and a half days, resulting in 2.2 and 2.6 ECTS credits, respectively. They are offered by the PhD School of the Faculty of Health and Medical Sciences, University of Copenhagen. Both courses are available to and covered by the annual tuition fee for all PhD students enrolled at a Danish university.

Eligibility criteria

Inclusion criteria for participants: all PhD students enrolled in the PhD courses "Practical Course in Systematic Review Technique in Clinical Research" and "Getting started: Writing your first manuscript for publication" at the PhD School of the Faculty of Health and Medical Sciences, University of Copenhagen, Denmark, who provide informed consent.

Exclusion criteria for participants: declining to participate or withdrawal of informed consent.

Informed consent

The PhD students at the PhD School of the Faculty of Health Sciences, University of Copenhagen, participate after informed consent, obtained by the daily project leader, allowing evaluation data from the course to be used in the project after pseudo-anonymization. They are informed in a welcome letter approximately three weeks prior to the course and again in the introduction on the first course day. They register their consent on the first course day (Appendix 3). Declining to participate in the project does not influence their participation in the course.

Interventions

The online course setting will be compared to the onsite course setting. We test whether the onsite setting differs from the online setting. Online learning is increasing, but onsite learning is still the preferred educational setting in a medical context; in this case, onsite learning represents "usual care". The online course setting consists of meetings in Zoom using the available functions such as chat and breakout rooms. The onsite setting is the learning facilities at the Parker Institute, Bispebjerg and Frederiksberg Hospital, The Capital Region, University of Copenhagen, Denmark.

The course settings are not expected to harm the participants, but should a request be made to discontinue the course or change setting, this will be met and the participant taken out of the study. Course participants are allowed to take part in relevant concomitant courses or other interventions during the trial.

Strategies to improve adherence to interventions

Course participants are motivated to complete the course irrespective of the setting because it bears ECTS points for their PhD education and adds to the mandatory number of ECTS points. Thus, we expect adherence to be the same in both groups. However, we monitor their presence in the course and allocate time during class for testing the short-term outcomes (motivation, self-efficacy, preference and learning). We encourage and, if necessary, repeatedly remind them to register with Google Scholar for our testing of the long-term outcome (academic achievement).

Outcomes are related to the Kirkpatrick model for evaluating learning (Fig. 2), which divides outcomes into four levels: Reaction, which includes for example motivation, self-efficacy and preferences; Learning, which includes knowledge acquisition; Behaviour, for practical application of skills when back at the job (not included in our outcomes); and Results, for impact on end-users, which includes for example academic achievements in the form of scientific articles [18, 19, 20].

Fig. 2 The Kirkpatrick model

Primary outcome

The primary outcome is short term learning (Kirkpatrick level 2).

Learning is assessed by a multiple-choice questionnaire (MCQ) developed prior to the RCT specifically for this setting (Appendix 4). First, the lecturers of the two courses were contacted and asked to provide five multiple-choice questions, each presented as a stem with three answer options: one correct answer and two distractors. The questions should relate to core elements of their teaching under the heading of research training. The questions were set up to test the cognition of the students at the levels of "Knows" or "Knows how" according to Miller's Pyramid of Competence, not their behaviour [21]. Six of the course lecturers responded, and from this material all questions covering the curriculum of both courses were selected. The questionnaire was tested on 10 PhD students and within the lecturer group, revised after an item analysis, and revised for English language. The final MCQ contains 25 questions. The MCQ is filled in at baseline and repeated at the end of the course. The primary outcome based on the MCQ is the learning score, calculated as the number of correct answers out of 25 after the course. A decrease in MCQ points in an intervention group denotes a deterioration of learning. The minimum MCQ score is 0 and the maximum is 25, where 19 indicates passing the course.

Furthermore, as a secondary outcome, this measurement will be categorized as a binary outcome (passed/failed), with passing defined as at least 75% (19/25) correct answers.
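As a minimal illustration, the scoring and pass/fail categorization described above can be sketched as follows. The function name and answer format are hypothetical (the questionnaire itself is administered via an online form); only the 25-item length and the 19-point pass mark come from the protocol.

```python
def score_mcq(answers, key, pass_mark=19):
    """Score a 25-item MCQ against an answer key.

    answers, key: lists of chosen options (hypothetical format, e.g. "a"/"b"/"c").
    Passing is defined in the protocol as at least 75% (19/25) correct answers.
    """
    score = sum(a == k for a, k in zip(answers, key))
    return {"score": score, "passed": score >= pass_mark}
```

For example, a student answering 20 of 25 items correctly receives a learning score of 20 and is categorized as having passed.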

The learning score will be computed at group and individual level. For the continuous outcome, the learning scores of the online and onsite groups will be compared by the Mann–Whitney test. For the binary outcome of learning (passed/failed), data will be analysed by Fisher's exact test on an intention-to-treat basis between the online and onsite groups. The results will be presented as median and range and as mean and standard deviation, for possible future use in meta-analyses.
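The two planned tests can be sketched in pure Python on hypothetical scores; the actual analyses will be run by the blinded external statistician with standard statistical software. This sketch uses the normal approximation for the Mann–Whitney p-value (exact methods differ slightly for small samples), and all example data are invented.

```python
import math
from math import comb

def mann_whitney(x, y):
    # U statistic via pairwise comparison; ties count 0.5
    u = sum(1.0 if xi > yj else 0.5 if xi == yj else 0.0
            for xi in x for yj in y)
    n1, n2 = len(x), len(y)
    mu = n1 * n2 / 2.0
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12.0)
    z = (u - mu) / sigma if sigma else 0.0
    # two-sided p-value from the normal approximation
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2))))
    return u, min(p, 1.0)

def fisher_exact(table):
    # two-sided Fisher's exact test on a 2x2 table [[a, b], [c, d]]
    (a, b), (c, d) = table
    n, r1, c1 = a + b + c + d, a + b, a + c
    def prob(x):  # hypergeometric probability that cell a equals x
        return comb(c1, x) * comb(n - c1, r1 - x) / comb(n, r1)
    p_obs = prob(a)
    lo, hi = max(0, r1 + c1 - n), min(r1, c1)
    return sum(prob(x) for x in range(lo, hi + 1) if prob(x) <= p_obs + 1e-12)

# hypothetical MCQ learning scores (0-25) per arm
online = [18, 20, 22, 17, 19, 21]
onsite = [21, 23, 20, 24, 22, 19]
u, p_learning = mann_whitney(online, onsite)
# hypothetical [passed, failed] counts per arm
p_passed = fisher_exact([[5, 1], [6, 0]])
```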

Secondary outcomes

Motivation assessment post course: the motivation level is measured by the Intrinsic Motivation Inventory (IMI) scale [22] (Appendix 5). The order of the IMI items was randomized by random.org on 4 August 2022. It contains 12 items to be assessed by the students on a 7-point Likert scale, where 1 is "Not at all true", 4 is "Somewhat true" and 7 is "Very true". The motivation score will be computed at group and individual level and tested by the Mann–Whitney test comparing the online and onsite groups.

Self-efficacy assessment post course: the self-efficacy level is measured by a single-item measure developed and validated by Williams and Smith [23] (Appendix 6). It is assessed by the students on a scale from 1 to 10, where 1 is "Strongly disagree" and 10 is "Strongly agree". The self-efficacy score will be computed at group and individual level and tested by the Mann–Whitney test comparing the online and onsite groups.

Preference assessment post course: Preference is measured as part of the general course satisfaction evaluation with the question “If you had the option to choose, which form would you prefer this course to have?” with the options “onsite form” and “online form”.

Academic achievement assessment is based on 24 monthly measurements post course of the number of publications, number of citations, h-index and i10-index. These data are collected through the Google Scholar profiles [24] of the students, as this database covers most scientific journals. Associations between the onsite/online setting and long-term academic achievement will be examined with Kaplan–Meier analysis and the log-rank test with a significance level of 0.05.
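To illustrate the planned time-to-event approach, here is a minimal Kaplan–Meier estimator in pure Python, with a hypothetical event defined as the month of a student's first post-course publication (censored at 24 months). The real analysis, including the log-rank comparison of the two arms, will be done with standard statistical software; this sketch only shows the shape of the estimator.

```python
def kaplan_meier(times, events):
    """times: follow-up in months; events: 1 = event observed (e.g. first
    publication), 0 = censored at that time.
    Returns (time, survival probability) pairs at each event time."""
    data = sorted(zip(times, events))
    s, curve, i = 1.0, [], 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)    # events at time t
        n_t = sum(1 for tt, _ in data if tt >= t)  # still at risk at t
        if d:
            s *= 1.0 - d / n_t
            curve.append((t, s))
        i += sum(1 for tt, _ in data if tt == t)   # skip tied times
    return curve

# hypothetical follow-up: months to first publication (0 = unpublished at cutoff)
months = [3, 7, 7, 12, 24, 24]
events = [1, 1, 0, 1, 0, 0]
curve = kaplan_meier(months, events)
```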

Participant timeline

Enrolment for the course at the Faculty of Health Sciences, University of Copenhagen, Denmark, becomes available when it is published in the course catalogue. In the course description, the course location is "To be announced". Approximately 3–4 weeks before the course begins, the participant list is finalized, and students receive a welcome letter containing course details, including their allocation to either the online or onsite setting. On the first day of the course, oral information is provided, and participants provide informed consent, baseline variables, and baseline knowledge scores.

On the last day of scheduled activities, the following scores are collected: knowledge, motivation, self-efficacy, setting preference, and academic achievement. To track students' long-term academic achievements, follow-ups are conducted monthly for a period of 24 months, with assessments occurring within one week of the last course day (Table 1).

Sample size

The power calculation is based on the main outcome, short-term theoretical learning. For the sample size determination, we considered the 12 available seats for participants in each course. To achieve statistical power, we aimed for 8 clusters in each of the online and onsite arms (16 clusters in total) to detect an increase in learning outcome of 20% (an increase of 5 points). We assumed an intraclass correlation coefficient of 0.02, a standard deviation of 10, a power of 80%, and a two-sided alpha level of 5%. The allocation ratio was set at 1, implying an equal number of subjects in the online and onsite groups.

Allowing for a dropout of up to 2 students per course, equivalent to 17%, we determined that a total of 112 participants would be needed. This calculation factored in 10 clusters of 12 participants per study arm, which we deemed sufficient to assess changes in the learning outcome.

The sample size was estimated using the function n4means from the R package CRTSize [ 25 ].
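The protocol's calculation uses n4means from the R package CRTSize; as a rough cross-check (not a reproduction of that function), the standard two-sample formula inflated by the cluster design effect can be sketched in Python. Small differences from the published cluster count are to be expected, since n4means and rounding conventions differ slightly.

```python
import math
from statistics import NormalDist

def clusters_per_arm(delta, sd, icc, cluster_size, alpha=0.05, power=0.80):
    # sample size per arm for an individually randomized two-sample
    # comparison of means (normal approximation)
    z = NormalDist().inv_cdf
    n_ind = 2 * (z(1 - alpha / 2) + z(power)) ** 2 * sd ** 2 / delta ** 2
    # inflate by the design effect for cluster randomization
    deff = 1 + (cluster_size - 1) * icc
    n = math.ceil(n_ind * deff)
    return n, math.ceil(n / cluster_size)

# protocol parameters: 5-point difference, SD 10, ICC 0.02, 12 seats per course
n_per_arm, k_per_arm = clusters_per_arm(delta=5, sd=10, icc=0.02, cluster_size=12)
```

With these inputs the simplified formula gives roughly 77 participants, i.e. 7 full courses, per arm before dropout; the protocol's figure of 8 clusters per arm (plus a dropout allowance) is more conservative.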

Recruitment

Participants are PhD students enrolled in 10 courses of “Practical Course in Systematic Review Technique in Clinical Research” and 10 courses of “Getting started: Writing your first manuscript for publication” at the PhD School of the Faculty of Health Sciences, University of Copenhagen, Denmark.

Assignment of interventions: allocation

Randomization will be performed at course level. The courses are randomized by a computer random number generator [26]. To obtain a balanced randomization per year, 2 sets of 2 unique random integers, each taken from the range 1–4, are requested.
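On one reading of this scheme, the four course instances in a given year are split by drawing two at random for the online setting, with the remainder running onsite. A sketch with Python's random module follows (the trial itself uses random.org, and the course identifiers here are hypothetical):

```python
import random

def allocate_year(course_ids, seed=None):
    """course_ids: the 4 course instances scheduled in a given year (assumed).
    Draw 2 unique ids for the online setting; the rest run onsite,
    giving the balanced 1:1 course allocation per year."""
    rng = random.Random(seed)
    online = set(rng.sample(sorted(course_ids), k=2))
    onsite = set(course_ids) - online
    return online, onsite

online_courses, onsite_courses = allocate_year([1, 2, 3, 4], seed=2022)
```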

The setting is not included in the course catalogue of the PhD School and thus allocation to online or onsite is concealed until 3–4 weeks before course commencement when a welcome letter with course information including allocation to online or onsite setting is distributed to the students. The lecturers are also informed of the course setting at this time point. If students withdraw from the course after being informed of the setting, a letter is sent to them enquiring of the reason for withdrawal and reason is recorded (Appendix 7).

The allocation sequence is generated by a computer random number generator (random.org). The participants and the lecturers sign up for the course without knowing the course setting (online or onsite) until 3–4 weeks before the course.

Assignment of interventions: blinding

Due to the nature of the study, it is not possible to blind trial participants or lecturers. The outcomes are reported by the participants directly in an online form; thus the outcome assessor is blinded, but the individual participant is not. The data collection for the long-term follow-up regarding academic achievements is conducted without blinding. However, the external researcher analysing the data will be blinded.

Data collection and management

Data will be collected by the project leader (Table  1 ). Baseline variables and post course knowledge, motivation, and self-efficacy are self-reported through questionnaires in SurveyXact® [ 27 ]. Academic achievements are collected through Google Scholar profiles of the participants.

Given that we are using participant assessments and evaluations for research purposes, all data collection – except for the monthly follow-up of academic achievements after the course – takes place either at the immediate beginning or end of the course; therefore, we expect participant retention to be high.

Data will be downloaded from SurveyXact and stored in a locked and logged drive on a computer belonging to the Capital Region of Denmark. Only the project leader has access to the data.

This project is conducted in accordance with the Danish Data Protection Agency's guidelines under the European GDPR throughout the trial. Following the end of the trial, data will be stored at the Danish National Data Archive, which fulfils Danish and European guidelines for data protection and management.

Statistical methods

Data are anonymized and blinded before the analyses. Analyses are performed by a researcher not otherwise involved in the inclusion, randomization, data collection or handling. All statistical tests will test the null hypothesis that the two arms of the trial are equal, based on the corresponding estimates. Analysis of the primary outcome, short-term learning, will begin once all data have been collected for all individuals in the last included course. Analyses of long-term academic achievement will begin at the end of follow-up.

Baseline characteristics, including both course- and individual-level information, will be presented. Table 2 presents the available baseline data.

We will use multivariate analysis to identify the most important predictors (motivation, self-efficacy, sex, educational background, and knowledge) of the best effect in the short and long term. The results will be presented as risk ratios (RR) with 95% confidence intervals (CI). A result will be considered significant if the CI does not include the value one.

All data processing and analyses will be conducted using R statistical software version 4.1.0, 2021-05-18 (R Foundation for Statistical Computing, Vienna, Austria).

If possible, all analysis will be performed for “Practical Course in Systematic Review Technique in Clinical Research” and for “Getting started: Writing your first manuscript for publication” separately.

Primary analyses will follow the intention-to-treat approach. The analyses will include all individuals with valid data regardless of whether they attended the complete course. Missing data will be handled with multiple imputation [28].

Upon reasonable request, public access will be granted to the protocol, the datasets analysed during the current study, and the statistical code (Table 3).

Oversight, monitoring, and adverse events

This project is coordinated in collaboration between the WHO CC (DEN-62) at the Parker Institute, CAMES, and the PhD School at the Faculty of Health and Medical Sciences, University of Copenhagen. The project leader runs the day-to-day support of the trial. The steering committee of the trial includes principal investigators from WHO CC (DEN-62) and CAMES and the project leader and meets approximately three times a year.

Data monitoring is done on a daily basis by the project leader and controlled by an external independent researcher.

An adverse event is "a harmful and negative outcome that happens when a patient has been provided with medical care" [29]. Since this trial does not involve patients in medical care, we do not expect adverse events. If participants decline to take part in the course after receiving information about the course setting, the reason for declining is sought. If the reason is the setting, this can be considered an unintended effect. Information on unintended effects of the online setting (the intervention) will be recorded. Participants are encouraged to contact the project leader with any response to the course in general, both during and after the course.

The trial description has been sent to the Scientific Ethical Committee of the Capital Region of Denmark (VEK) (21041907), which assessed that notification was not necessary and that the trial could proceed without permission from VEK according to Danish law and regulation of scientific research. The trial is registered with the Danish Data Protection Agency (Privacy) (P-2022-158). Important protocol modifications will be communicated to relevant parties as well as to VEK, the Joint Regional Information Security and ClinicalTrials.gov within as short a timeframe as possible.

Dissemination plans

The results (positive, negative, or inconclusive) will be disseminated in educational, scientific, and clinical fora, in international scientific peer-reviewed journals, and clinicaltrials.gov will be updated upon completion of the trial. After scientific publication, the results will be disseminated to the public by the press, social media including the website of the hospital and other organizations – as well as internationally via WHO CC (DEN-62) at the Parker Institute and WHO Europe.

All authors will fulfil the ICMJE recommendations for authorship, and RR will be first author of the articles as a part of her PhD dissertation. Contributors who do not fulfil these recommendations will be offered acknowledgement in the article.

This cluster randomized trial investigates whether an onsite setting of a research course for PhD students within the health and medical sciences differs from an online setting. The outcomes measured are learning of research methodology (primary), preference, motivation and self-efficacy (secondary) in the short term, and academic achievements (secondary) in the long term.

The results of this study will be discussed as follows:

Discussion of primary outcome

The primary outcome will be compared and contrasted with similar studies, including recent RCTs and mixed-methods studies on online and onsite research methodology courses within health and medical education [10, 11, 30] and, for inspiration, outside the field [31, 32]: Tokalić finds similar outcomes for online and onsite settings, Martinic finds that a web-based educational intervention improves knowledge, Cheung concludes that the evidence is insufficient to say whether the two modes have different learning outcomes, Kofoed finds the online setting to have a negative impact on learning, and Rahimi-Ardabili presents positive self-reported student knowledge. These conflicting results will be discussed in the context of the learning outcome of this study. The literature may change if more relevant studies are published.

Discussion of secondary outcomes

Significant secondary outcomes will be compared and contrasted with similar studies.

Limitations, generalizability, bias and strengths

It is a limitation of this study that an onsite curriculum for a full day is delivered identically online, as this may favour the onsite course due to screen fatigue [33]. At the same time, it is a strength that the time schedules are similar in both settings. The offer of coffee, tea, water, and a plain sandwich in the onsite course may better facilitate socializing. Another limitation is that the study is performed in Denmark within a specific educational culture, with institutional policies and resources that might affect the outcome and limit generalization to other geographical settings. However, international students are welcome in the class.

In educational interventions it is generally difficult to blind participants and this inherent limitation also applies to this trial [ 11 ]. Thus, the participants are not blinded to their assigned intervention, and neither are the lecturers in the courses. However, the external statistical expert will be blinded when doing the analyses.

We chose to compare the in-person onsite setting with a synchronous online setting. Therefore, the results cannot be expected to generalize to asynchronous online settings. Asynchronous delivery has in some cases shown positive results, possibly because students could go back and forth through the modules in the interface without a time limit [11].

We will report on all the outcomes defined prior to conducting the study to avoid selective reporting bias.

It is a strength of the study that it seeks to report outcomes at levels 1, 2 and 4 of the Kirkpatrick conceptual framework, and not solely at level 1. It is also a strength that the study is cluster randomized, which will reduce "contamination" between the two settings, that it has an adequately powered sample size, and that it looks for a relevant educational difference of 20% between the online and onsite settings.

Perspectives with implications for practice

The results of this study may have implications for students when choosing their educational setting. The learning and preference results have implications for lecturers, course managers and curriculum developers when deciding which setting to plan for in health and medical education. They may also inspire teaching and training in other disciplines. From a societal perspective, the results also have implications because we will know the effect and preferences of online learning in case of a future lockdown.

Future research could investigate academic achievements from online and onsite research training in the long run (Kirkpatrick level 4); the effect of blended learning versus online or onsite learning (Kirkpatrick level 2); lecturers' preferences for online and onsite settings within health and medical education (Kirkpatrick level 1); and resource use in synchronous and asynchronous online learning.

Trial status

This trial collected pilot data from August to September 2021 and opened for inclusion in January 2022. Completion of recruitment is expected in April 2024 and long-term follow-up in April 2026. Protocol version number 1 03.06.2022 with amendments 30.11.2023.

Availability of data and materials

The project leader will have access to the final trial dataset which will be available upon reasonable request. Exception to this is the qualitative raw data that might contain information leading to personal identification.

Abbreviations

AI: Artificial intelligence

CAMES: Copenhagen Academy for Medical Education and Simulation

CI: Confidence interval

COVID: Coronavirus disease

ECTS: European Credit Transfer and Accumulation System

ICMJE: International Committee of Medical Journal Editors

IMI: Intrinsic Motivation Inventory

MCQ: Multiple-choice questionnaire

MD: Doctor of Medicine

MSc: Master of Science

RCT: Randomized controlled trial

VEK: Scientific Ethical Committee of the Capital Region of Denmark

WHO CC: WHO Collaborating Centre for Evidence-Based Clinical Health Promotion

Samara M, Algdah A, Nassar Y, Zahra SA, Halim M, Barsom RMM. How did online learning impact the academic. J Technol Sci Educ. 2023;13(3):869–85.


Nejadghaderi SA, Khoshgoftar Z, Fazlollahi A, Nasiri MJ. Medical education during the coronavirus disease 2019 pandemic: an umbrella review. Front Med (Lausanne). 2024;11:1358084. https://doi.org/10.3389/fmed.2024.1358084 .

Madi M, Hamzeh H, Abujaber S, Nawasreh ZH. Have we failed them? Online learning self-efficacy of physiotherapy students during COVID-19 pandemic. Physiother Res Int. 2023;5:e1992. https://doi.org/10.1002/pri.1992 .

Torda A. How COVID-19 has pushed us into a medical education revolution. Intern Med J. 2020;50(9):1150–3.

Alhat S. Virtual Classroom: A Future of Education Post-COVID-19. Shanlax Int J Educ. 2020;8(4):101–4.

Cook DA, Levinson AJ, Garside S, Dupras DM, Erwin PJ, Montori VM. Internet-based learning in the health professions: A meta-analysis. JAMA. 2008;300(10):1181–96. https://doi.org/10.1001/jama.300.10.1181 .

Pei L, Wu H. Does online learning work better than offline learning in undergraduate medical education? A systematic review and meta-analysis. Med Educ Online. 2019;24(1):1666538. https://doi.org/10.1080/10872981.2019.1666538 .

Richmond H, Copsey B, Hall AM, Davies D, Lamb SE. A systematic review and meta-analysis of online versus alternative methods for training licensed health care professionals to deliver clinical interventions. BMC Med Educ. 2017;17(1):227. https://doi.org/10.1186/s12909-017-1047-4 .

George PP, Zhabenko O, Kyaw BM, Antoniou P, Posadzki P, Saxena N, Semwal M, Tudor Car L, Zary N, Lockwood C, Car J. Online Digital Education for Postregistration Training of Medical Doctors: Systematic Review by the Digital Health Education Collaboration. J Med Internet Res. 2019;21(2):e13269. https://doi.org/10.2196/13269 .

Tokalić R, Poklepović Peričić T, Marušić A. Similar Outcomes of Web-Based and Face-to-Face Training of the GRADE Approach for the Certainty of Evidence: Randomized Controlled Trial. J Med Internet Res. 2023;25:e43928. https://doi.org/10.2196/43928 .

Krnic Martinic M, Čivljak M, Marušić A, Sapunar D, Poklepović Peričić T, Buljan I, et al. Web-Based Educational Intervention to Improve Knowledge of Systematic Reviews Among Health Science Professionals: Randomized Controlled Trial. J Med Internet Res. 2022;24(8): e37000.

https://www.mentimeter.com/ . Accessed 4 Dec 2023.

https://www.sendsteps.com/en/ . Accessed 4 Dec 2023.

https://da.padlet.com/ . Accessed 4 Dec 2023.

Zackoff MW, Real FJ, Abramson EL, Li STT, Klein MD, Gusic ME. Enhancing Educational Scholarship Through Conceptual Frameworks: A Challenge and Roadmap for Medical Educators. Acad Pediatr. 2019;19(2):135–41. https://doi.org/10.1016/j.acap.2018.08.003 .

https://zoom.us/ . Accessed 20 Aug 2024.

Raffing R, Larsen S, Konge L, Tønnesen H. From Targeted Needs Assessment to Course Ready for Implementation-A Model for Curriculum Development and the Course Results. Int J Environ Res Public Health. 2023;20(3):2529. https://doi.org/10.3390/ijerph20032529 .

https://www.kirkpatrickpartners.com/the-kirkpatrick-model/ . Accessed 12 Dec 2023.

Smidt A, Balandin S, Sigafoos J, Reed VA. The Kirkpatrick model: A useful tool for evaluating training outcomes. J Intellect Dev Disabil. 2009;34(3):266–74.

Campbell K, Taylor V, Douglas S. Effectiveness of online cancer education for nurses and allied health professionals; a systematic review using kirkpatrick evaluation framework. J Cancer Educ. 2019;34(2):339–56.

Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9 Suppl):S63–7.

Ryan RM, Deci EL. Self-Determination Theory and the Facilitation of Intrinsic Motivation, Social Development, and Well-Being. Am Psychol. 2000;55(1):68–78. https://doi.org/10.1037//0003-066X.55.1.68 .

Williams GM, Smith AP. Using single-item measures to examine the relationships between work, personality, and well-being in the workplace. Psychology. 2016;07(06):753–67.

https://scholar.google.com/intl/en/scholar/citations.html . Accessed 4 Dec 2023.

Rotondi MA. CRTSize: sample size estimation functions for cluster randomized trials. R package version 1.0. 2015. Available from: https://cran.r-project.org/package=CRTSize .

Random.org. Available from: https://www.random.org/

https://rambollxact.dk/surveyxact . Accessed 4 Dec 2023.

Sterne JAC, White IR, Carlin JB, Spratt M, Royston P, Kenward MG, et al. Multiple imputation for missing data in epidemiological and clinical research: Potential and pitfalls. BMJ (Online). 2009;339:157–60.

Google Scholar  

Skelly C, Cassagnol M, Munakomi S. Adverse Events. StatPearls Treasure Island: StatPearls Publishing. 2023. Available from: https://www.ncbi.nlm.nih.gov/books/NBK558963/ .

Rahimi-Ardabili H, Spooner C, Harris MF, Magin P, Tam CWM, Liaw ST, et al. Online training in evidence-based medicine and research methods for GP registrars: a mixed-methods evaluation of engagement and impact. BMC Med Educ. 2021;21(1):1–14. Available from:  https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8439372/pdf/12909_2021_Article_2916.pdf .

Cheung YYH, Lam KF, Zhang H, Kwan CW, Wat KP, Zhang Z, et al. A randomized controlled experiment for comparing face-to-face and online teaching during COVID-19 pandemic. Front Educ. 2023;8. https://doi.org/10.3389/feduc.2023.1160430 .

Kofoed M, Gebhart L, Gilmore D, Moschitto R. Zooming to Class?: Experimental Evidence on College Students' Online Learning During Covid-19. SSRN Electron J. 2021;IZA Discussion Paper No. 14356.

Mutlu Aİ, Yüksel M. Listening effort, fatigue, and streamed voice quality during online university courses. Logop Phoniatr Vocol :1–8. Available from: https://doi.org/10.1080/14015439.2024.2317789

Download references

Acknowledgements

We thank the students who make their evaluations available for this trial and MSc (Public Health) Mie Sylow Liljendahl for statistical support.

Funding

Open access funding provided by Copenhagen University. The Parker Institute, which hosts the WHO CC (DEN-62), receives a core grant from the Oak Foundation (OCAY-18-774-OFIL). The Oak Foundation had no role in the design of the study, in the collection, analysis, and interpretation of the data, or in writing the manuscript.

Author information

Authors and Affiliations

WHO Collaborating Centre (DEN-62), Clinical Health Promotion Centre, The Parker Institute, Bispebjerg & Frederiksberg Hospital, University of Copenhagen, Copenhagen, 2400, Denmark

Rie Raffing & Hanne Tønnesen

Copenhagen Academy for Medical Education and Simulation (CAMES), Centre for HR and Education, The Capital Region of Denmark, Copenhagen, 2100, Denmark

Lars Konge

Contributions

RR, LK, and HT made substantial contributions to the conception and design of the work; RR to the acquisition of data; and RR, LK, and HT to the interpretation of data. RR drafted the work, and RR, LK, and HT substantively revised it, approved the submitted version, and agreed to be personally accountable for their own contributions and for ensuring that any questions relating to the accuracy or integrity of the work are appropriately investigated, resolved, and documented.

Corresponding author

Correspondence to Rie Raffing .

Ethics declarations

Ethics approval and consent to participate

The Danish National Committee on Health Research Ethics assessed the study (journal no. 21041907, 21-09-2021) without objections or comments. The study has been approved by the Danish Data Protection Agency (journal no. P-2022-158, 04-05-2022).

All PhD students participate after informed consent. They can withdraw from the study at any time without explanation and without consequences for their education. They will be offered information about the results at study completion. There are no risks for the course participants, as the measurements follow the routine course procedure and participants are not affected by the follow-up in Google Scholar. However, the 15 minutes spent filling in the forms may be considered an inconvenience.

The project will follow the GDPR and the Joint Regional Information Security Policy. Names and ID numbers are stored on a secure, logged server at the Capital Region of Denmark to avoid the risk of data leaks. All outcomes are part of the routine evaluation of the courses, except the follow-up on academic achievement via publications and related indexes; these publications, however, are publicly available per se.

Competing interests

The authors declare no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Supplementary Material 2.

Supplementary Material 3.

Supplementary Material 4.

Supplementary Material 5.

Supplementary Material 6.

Supplementary Material 7.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article

Raffing, R., Konge, L. & Tønnesen, H. Learning effect of online versus onsite education in health and medical scholarship – protocol for a cluster randomized trial. BMC Med Educ 24 , 927 (2024). https://doi.org/10.1186/s12909-024-05915-z

Download citation

Received : 25 March 2024

Accepted : 14 August 2024

Published : 26 August 2024

DOI : https://doi.org/10.1186/s12909-024-05915-z


Keywords

  • Self-efficacy
  • Achievements
  • Health and Medical education

BMC Medical Education

ISSN: 1472-6920

Defining and measuring the quality of education


What is the quality of education? What are the most important aspects of quality and how can they be measured?

These questions have long been raised and are still widely debated. The current understanding of education quality has benefited considerably from the conceptual work undertaken through national and international initiatives to assess learning achievement. These provide valuable feedback to policy-makers on the competencies mastered by pupils and youths, and on the factors that explain them. But there is also growing awareness of the importance of values and behaviours, although these are more difficult to measure.

To address these concerns, IIEP organized (on 15 December 2011) a Strategic Debate on “Defining and measuring the quality of education: Is there an emerging consensus?” The topic was approached from the point of view of two cross-national surveys: the OECD Programme for International Student Assessment (PISA) and the Southern and Eastern Africa Consortium for Monitoring Educational Quality (SACMEQ)*.

Assessing the creativity of students

“Students’ capacity to extrapolate from what they know and apply this creatively in novel situations is more important than what the students know”, said Andreas Schleicher, Head of the Indicators and Analysis Division at the Directorate for Education, OECD, and in charge of PISA. This concept is reflected in current developments taking place in workplaces in many countries, which increasingly require non-routine interactive skills. When comparing the results obtained in different countries, PISA’s experience has shown that “education systems can creatively combine the equity and quality agenda in education”, Schleicher said. Contrary to conventional wisdom, countries can be both high-average performers in PISA while demonstrating low individual and institutional variance in students’ achievement. Finally, Schleicher emphasized that investment in education is not the only determining factor for quality, since good and consistent implementation of educational policy is also very important.

The importance of cross-national cooperation

When reviewing the experience of SACMEQ, Mioko Saito, Head a.i. of the IIEP Equity, Access and Quality Unit (which technically supports SACMEQ implementation in collaboration with the SACMEQ Coordinating Centre), explained how the notion of educational quality has evolved significantly in the southern and eastern African region and has become a priority over the past decades. Since 1995, SACMEQ has regularly initiated cross-national assessments of the quality of education, and each member country has benefited considerably from this cooperation. It has helped them embrace new assessment areas (such as HIV and AIDS knowledge) and units of analysis (teachers as well as pupils) to produce evidence on what pupils and teachers know and master, said Saito. She concluded by stressing that SACMEQ also has a major capacity-development mission and is concerned with having research results bear on policy decisions.

The debate following the presentations focused on the crucial role of the media in stimulating public debate on the results of cross-national tests such as PISA and SACMEQ. It was also emphasized that more collaboration among the different cross-national mechanisms for the assessment of learner achievement would be beneficial. If more items were shared among the networks, more light could be shed on the international comparability of educational outcomes.

* PISA assesses 15-year-olds’ acquisition of key competencies for adult life in mathematics, reading, and science in OECD countries. SACMEQ focuses on the achievements of Grade 6 pupils. Created in 1995, SACMEQ is a network of 15 southern and eastern African ministries of education: Botswana, Kenya, Lesotho, Malawi, Mauritius, Mozambique, Namibia, Seychelles, South Africa, Swaziland, Tanzania (Mainland), Tanzania (Zanzibar), Uganda, Zambia, and Zimbabwe.
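The point above about sharing items across assessment networks can be sketched concretely: when two assessment programmes administer a common set of "anchor" items, results reported on one scale can be re-expressed on the other, for example with the classical mean-sigma linking method. The sketch below is purely illustrative, with hypothetical anchor-item values rather than actual PISA or SACMEQ data:

```python
import statistics

def mean_sigma_link(anchor_a, anchor_b):
    """Return a function mapping scale B onto scale A, using the values
    the same anchor items received under each survey's scaling
    (mean-sigma linking: match the means and standard deviations)."""
    mu_a, mu_b = statistics.mean(anchor_a), statistics.mean(anchor_b)
    sd_a, sd_b = statistics.stdev(anchor_a), statistics.stdev(anchor_b)
    slope = sd_a / sd_b
    intercept = mu_a - slope * mu_b
    return lambda x: slope * x + intercept

# Hypothetical anchor-item parameters under two different scalings
anchor_a = [480.0, 510.0, 530.0, 560.0]   # survey A's reporting scale
anchor_b = [-0.6, 0.1, 0.4, 0.9]          # survey B's scale (logits)

to_scale_a = mean_sigma_link(anchor_a, anchor_b)
print(round(to_scale_a(0.0), 1))  # a survey-B result on survey A's scale
```

By construction, a score at scale B's anchor mean maps to scale A's anchor mean; the more anchor items shared, the more stable such a link becomes, which is the practical force of the suggestion made in the debate.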


REACH at Harvard Graduate School of Education

Quality Education

Young Peruvian boy practices reading while sitting at a desk. Photo: Elizabeth Adelman


What are levers for inclusive and quality education for refugees?

Over half of the children globally who are out of school live in conflict settings. Yet quality education is an essential component of securing a future for refugee children. REACH identifies ways to strengthen and build refugees’ “unknowable futures.” This work is relevant not only for refugees but also for other young people globally who face similar, even if less extreme, uncertainties in the face of rapid globalization and technological change.


Are Refugee Children Learning? Early Grade Literacy in a Refugee Camp in Kenya

by Benjamin Piper, Sarah Dryden-Peterson, Vidur Chopra, Celia Reddick, and Arbogast Oyanga

Research article

Are Refugee Children Learning? Early Grade Literacy in a Refugee Camp in Kenya by Benjamin Piper, Sarah Dryden-Peterson, Vidur Chopra, Celia Reddick, and Arbogast Oyanga (2020) in The Journal on Education in Emergencies .

In the first literacy census conducted in a refugee camp, the authors assessed all the schools providing lower primary education to refugee children in Kakuma refugee camp in Kenya. Outcomes for these students were concerningly low, even lower than those of disadvantaged children in the host community, Turkana County. Literacy outcomes differed among the refugee children depending on their country of origin, the language of instruction used at the school in Kenya, the languages spoken at home, and the children’s self-professed expectation of returning to their country of origin.


Refugee Education: Backward Design to Enable Futures

by Sarah Dryden-Peterson

policy engagement

Refugee Education: Backward Design to Enable Futures by Sarah Dryden-Peterson (2019) in Education and Conflict Review.

This short paper explores the use of backward design as a way to conceptualize refugee education policy and practice. Drawing on examples of classroom and research experiences, it proposes a planning template aimed at enabling refugee education policy and practice to facilitate the futures that refugee young people imagine and aim to create. 


Tío Emilio’s Story: A Tale from Nicaragua

written by Hania Mariën

Tío Emilio’s Story written by Hania Mariën (2019). A curriculum for students ages 10-14.

Tío Emilio’s Story is a narrative of one individual’s pursuit of education during a time of conflict in Nicaragua. It offers students insight into life in Nicaragua from 1967 to 1990, as experienced by one child on Isla Ometepe, and touches upon issues of poverty, armed violence, and leaving home, as well as resilience and persistence. At its core, however, it is about learning to be a child. Educators and students are encouraged to learn with and from Tío Emilio, to critically engage with Nicaraguan history and the United States’ involvement in the country, and to think about how conflict and military regimes can impact education.

Additional Resources

Sarah Dryden-Peterson on TVO’s The Agenda with Steve Paikin

Video | What would it take to ensure that all refugee young people have access to learning that enables them to feel a sense of belonging? Refugee REACH founder and director Sarah Dryden-Peterson joined Steve Paikin on TVO’s The Agenda to discuss her book “Right Where We Belong: How Refugee Teachers and Students Are Changing the Future of Education,” and to explore this question.

Refugee Education: Power, Purposes, and Pedagogies Across Contexts

Video | Refugee REACH director Sarah Dryden-Peterson delivers a lecture titled Refugee Education: Power, Purposes, and Pedagogies Across Contexts, hosted by NYU’s Global TIES for Children.

FreshEd Podcast on Refugee Education and Language of Instruction

Podcast | Celia Reddick and Sarah Dryden-Peterson discuss language of instruction in refugee education on the FreshEd podcast, hosted by Will Brehm.

Doing Research Amid Pandemic

Video | Refugee REACH director Sarah Dryden-Peterson and students Esther Elonga, Martha Franco, Orelia Jonathan, and Kristia Wantchekon discuss how experiences of uncertainty affect the research design process amid multiple pandemics of Covid-19 and racism.

Creating Change in Real Time

Insight | Student leaders and educators in Refugee REACH director Sarah Dryden-Peterson's new module at HGSE, Education in Uncertainty, share how they connected their studies to practice, responded to emerging needs in their local communities, and built supports during Covid-19.

In Focus: Mary Winters

Interview | Mary Winters, an HGSE alumna and now Programme Specialist with the LEGO Foundation, shares what it’s been like to put her classroom learning into practice, how she uses research in her work, and what keeps her going.

Social Support Networks, Instant Messaging, and Gender Equity in Refugee Education

Research | This article finds that peer-to-peer group chats expand transnational learning opportunities and possibilities for instructional innovations, community engagement, and conversations about gender equity in refugee education.

Quality Education for Refugees in Kenya: Instruction in Urban Nairobi and Kakuma Refugee Camp Settings

Research | This article examines the quality of education available to refugees in both urban and refugee camp settings in Kenya, with a particular focus on teacher pedagogy.

The Educational Experiences of Refugee Children in Countries of First Asylum

Report | This policy report explores the educational histories of young refugee children in first-asylum countries, and identifies elements of these that are relevant to post-resettlement education in the United States.

Abdul

Children’s Book | This resource details the personal account of Abdul, an Afghan child whose schooling was interrupted by armed conflict but who never gave up in his pursuit of education.

The Quality and Qualities of Educational Research

Experts concur that American educational research is deficient.

In the early 1960s, the role of the federal government in education began its steady, if unspectacular, rise in size and importance. In the early 1980s, the indifferent quality of American schools came to the fore in the report A Nation at Risk. Now, at the start of the 21st century, a new theme—the quality of educational research—pervades discussions of education in America. The National Research Council has issued a widely reported essay on “Scientific Principles for Educational Research”; the influential educator Ellen Condliffe Lagemann has published her long-awaited critique An Elusive Science: The Troubling History of Education Research; and Congress has enacted legislation, the “No Child Left Behind” Act of 2001, calling explicitly for scientifically based education research. Though the details of their analyses differ, these experts concur that American educational research is deficient—indeed, some imply that it bears the same tenuous relation to “real research” as “military justice” does to “real justice.” And at least on the political front, a solution seems clear: Educational research ought to take its model from medical research—specifically, the vaunted National Institutes of Health model. On this analysis (not one recommended by the aforementioned educational authorities), the more rapidly we can institute randomized trials—the so-called “gold standard” of research involving human subjects—the sooner we will be able to make genuine progress in our understanding of schooling and education.

Perhaps—but perhaps not. Minds are not the same as bodies; schools are not the same as home or workplace; children cannot legitimately be assigned to or shuttled from one “condition” to another the way that agricultural seeds are planted or transplanted in different soils. It is appropriate to step back, to determine whether educational research is needed at all, whether it should be distinguished in any way from other scholarly research, what questions it might address, what are the principal ways in which it has worked thus far, and how it might proceed more effectively in the future.

If I had average means but flexibility in where I lived, I would send my infant to day care in France; my preschooler to the toddler centers in Reggio Emilia, Italy; my elementary school child to class in Japan; my high schooler to gymnasium in Germany or Hungary; and my 18-year-old to college or university in the United States. Living (as I have for decades) in Cambridge, Mass., and being fortunate enough to be able to afford quality local education, I would send my young child to one of the better public schools or Shady Hill School, my adolescent to Buckingham Browne and Nichols Secondary School, and—depending on his or her inclination—my college-age offspring to Harvard or MIT.

What is striking is that none of these good schools is based in any rigorous sense on educational research of the sort being called for by pundits. Rather, they are based on practices that have evolved over long periods of time. Often, these practices are finely honed by groups of teachers who have worked together for many years—trying out mini-experiments, reflecting on the results, critiquing one another, co-teaching, visiting other schools to observe, and the like. In the past—indeed, in the present—much of the best school practice has been based on such seat-of-the-pants observations, reflections, and informal experimentation. Perhaps we need to be doing more of this, rather than less; perhaps, in fact, research dollars might be better spent on setting up teacher study groups or mini-sabbaticals, rather than on NIH-style field-initiated or targeted-grant competitions.

Still, there is a place for more formal kinds of research, carried out by individuals who have been so trained. Certain questions are best answered by systematic study, rather than by anecdotes or impressions. Controversial issues like the optimal class size, the effects of tracking, the best way to introduce reading, the best method for improving comprehension of written materials, the immediate and long-term effects of charter or voucher schools, the consequences of bilingual vs. immersion programs—these and other issues need a more formal research design.
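One mechanical ingredient of such formal designs can be made concrete. Questions like optimal class size are typically studied by assigning whole clusters (classes or schools) rather than individual pupils to conditions, and the allocation step can be done reproducibly in a few lines. A minimal sketch, with hypothetical school names and an illustrative fixed seed (the arm labels are placeholders, not any study's actual protocol):

```python
import random

def randomize_clusters(clusters, seed=2024):
    """Randomly split whole clusters (e.g. schools) into two equal arms.
    A fixed seed makes the allocation auditable and reproducible."""
    rng = random.Random(seed)
    shuffled = clusters[:]          # copy so the input list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"intervention": sorted(shuffled[:half]),
            "control": sorted(shuffled[half:])}

# Hypothetical list of 10 participating schools
schools = [f"School-{i:02d}" for i in range(1, 11)]
arms = randomize_clusters(schools)
print(arms["intervention"])
```

Of course, allocation is the easy part; the design questions raised below (cost, consensus, uptake) are where the real difficulty lies.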

Note, however, three characteristics of such research. First of all, it is expensive and time-consuming to carry out. Second, it is very difficult to reach consensus. As any reader of the educational literature knows all too well, one can find experts on both sides of any of the aforementioned issues, each armed with his or her supporting data.

Third, and most painfully, even when consensus obtains on an issue, there is no guarantee that policymakers will take heed. As a cognitive psychologist, I know that children must construct knowledge for themselves; they cannot simply be “given” understanding of any important issue. This insight—shared by thousands of cognitive researchers all over the world—does not prevent legislators from calling time and again for “direct instruction” or drill-and-kill regimens. We may properly conclude that the results of educational research make their way only fitfully into classrooms: They are but one of numerous competing inputs.

As a longtime observer of the scene, I have identified two distinctive cohorts, which, as it happens, relate to the two principal organizations involved in educational research. For the purpose of contrast, let me caricature them slightly.

Founded in 1965, the National Academy of Education, or NAE, is a loosely knit set of approximately 120 scholars elected because of the judged quality of their research. Traditionally, the prototype for the NAE has been the scholar in a standard academic discipline whose work has had influence in educational circles. In many cases, the scholars themselves have not been in schools of education and have not thought of themselves primarily as educators. For example, psychologists Eleanor Gibson and Bärbel Inhelder, economist Gary Becker, sociologist James S. Coleman, and historian Bernard Bailyn have all been members of the academy. This association operates on the assumption that the best work is discipline-based; it is not particularly relevant whether the scholars are deeply knowledgeable about conditions “in the trenches.”

The much larger and more democratic American Educational Research Association, or AERA, consists of thousands of researchers, most of whom were trained and teach in schools of education. These individuals differ widely from one another in whether they have a disciplinary base, whether they value the disciplines, and whether, indeed, they see the disciplines as obstacles. What links these individuals is a deep concern with the condition of children and schools—particularly (among American members) the conditions of disadvantaged youngsters in American public schools. Research is often evaluated in part in terms of whether it contributes to improving these conditions. If I may continue this caricature for a few more clauses, I would say that the prototypical NAE member is a discipline-based scholar from arts and sciences who happens to have wandered into an educational issue but may well wander out again. The prototypical AERA member is a researcher born and bred in education schools; his or her allegiance is more to problems and persons than to a discipline.

Naturally, one’s own analysis of and solution to the “education research” issue depends mightily on which cohort one values more highly. Were I appointed the czar of education research, I would call for three tried-and-true steps and one new one:


1. (Following teachers from Japan, Reggio Emilia, and other sites of exemplary practice) Recognize that much of the most valuable work in improving education has taken place in schools and systems that engage in reflective practice. Take serious steps to encourage such work and, when possible, support it by timely regulations and infusion of funds.

2. (Following the National Academy of Education model) Require that every researcher who wants to work in education have at least one disciplinary base. Such a disciplinary base requires familiarity with the chief approaches in the discipline; knowledge of major contributions; capacity to critique such literature; potential for contributing to scholarship. The discipline does not need to be a scientific one: Important contributions to education are made by humanists, philosophers, historians, and various breeds of social scientist.

3. (Following the American Educational Research Association model) Require that every researcher who wants to work in education become knowledgeable about two issues: First of all, the researcher must have direct knowledge of the educational system; such know-how is best acquired by spending time teaching or observing in schools or other precollegiate educational institutions. Second of all, the researcher must have direct knowledge of the various audiences for educational research. Unless there is familiarity with the audiences that can make use of educational research (teachers, administrators, policymakers, the general public), the chances that even good research will exert any effect are effectively nil.

Identification of relevant educational research by the proper constituents is necessary but, alas, it might not be sufficient. Hence, a new step is needed. In this respect, I have been much impressed by an organization called CIMIT, the Center for Integration of Medicine and Innovative Technologies. Affiliated with universities, hospitals, and research centers in the Boston area, the explicit goal of CIMIT is to create technological breakthroughs for which physicians are eager and to expedite the speed with which these innovations are placed in the hands of physicians who can use them.

Decades ago, the founder of CIMIT learned that even medical innovations that were universally hailed often took years—if not decades—before they could actually be used with patients on a widespread basis. And so, in 1994, a group of medical-science leaders decided to work with top-flight scientists and engineers and to devote their principal energies towards shortening the lead time from invention to use.

The crucial step in educational research will not occur simply because we have quality research along the lines I have specified. It will not occur simply because educational practitioners and consumers recognize the relevance of such research to their workaday concerns. Rather, it will occur only when the fruits of such research are readily available to any teacher or administrator who wants to put them to use.

Howard Gardner is the John H. and Elisabeth A. Hobbs professor of cognition and education at Harvard University’s graduate school of education, in Cambridge, Mass. He is a co-author of Good Work: When Excellence and Ethics Meet (Basic Books), which has just been issued in paperback.


Eur J Investig Health Psychol Educ (PMC10606047)

Factors Contributing to School Effectiveness: A Systematic Literature Review

Associated Data

No new data were created. Results are based on existing articles on the topic.

This paper aims to provide a systematic review of the literature on school effectiveness, with a focus on identifying the main factors that contribute to successful educational outcomes. The research question that this paper aimed to address is “what are the main factors of school effectiveness?”. We were interested in several descriptors such as school, effectiveness/efficiency theories, effectiveness/efficiency research and factors. Studies (published within the 2016–2022 period) were retrieved through two databases: JSTOR and ERIC. This paper defines several categories identified by school effectiveness research. Within these categories, various factors that affect the students’ outcomes and the defined effectiveness in school are listed. As the results show, the issue of school effectiveness is multifaceted, as the effectiveness of schools is a complex concept that can be measured through various indicators such as academic achievement, student engagement and teacher satisfaction. The review of school effectiveness revealed that several factors contribute to effective schools, such as strong leadership, effective teaching practices, a positive school culture and parental involvement. Additionally, school resources, such as funding and facilities, can impact school effectiveness, particularly in under-resourced communities.

1. Introduction

The answer to the question “what makes school effective?” is the Holy Grail of educational research [ 1 ]. School effectiveness has been a research topic for several decades, with scholars and policymakers seeking to identify the key factors that contribute to successful educational outcomes. The concept of school effectiveness refers to the extent to which a school is able to achieve its goals and objectives in terms of student learning, development and well-being [ 2 ]. This article is not focused on the historical view of school effectiveness research (SER) or on phases in its development but rather on identifying factors that contribute to school effectiveness. School effectiveness research is a branch of educational research that explores differences within and between schools and the malleable factors that improve school performance [ 3 ] and/or achievements and/or outcomes. Educational (school) effectiveness can be defined as the degree to which an educational system and its components and stakeholders achieve specific desired goals and effects [ 4 ]. Given the varied terminology used in school effectiveness research, and since we do not dwell on those terminological differences when describing our results, we first consider the outcomes to which effective schooling can contribute: the desired goals and effects of schooling are numerous, and their different aspects are often inter-linked.

Schools have important “effects” on children and their development; so, “schools do make a difference”, as stated by Reynolds and Creemers in [ 5 ] (p. 10). SER studies seek to include factors such as “gender, socio-economic status, mobility and fluency in the majority language used at school” in assessing the impact of schools [ 5 ] (p. 11). In the past, educational assessment mainly relied on basic metrics like the number of students advancing to higher education, the grade repetition rates and special education enrollment. However, it became clear that these metrics were influenced by external factors beyond school and teacher characteristics and were thus abandoned. Instead, more comprehensive measures focusing on academic achievement in subjects like math and language were introduced. Progress in assessing effectiveness continued with the inclusion of control measures, such as students’ prior knowledge and family socio-economic status. Presently, standardized objective tests are the primary tool for measuring educational effectiveness in specific curricula [ 4 ].

In recent years, there has been a growing interest in understanding the factors that contribute to school effectiveness, particularly in light of concerns about the quality of education and the need to improve educational outcomes. Research suggests that school effectiveness is a multifaceted concept that is influenced by a range of factors, including school leadership, teacher quality, curriculum and instruction, school culture and climate, parental involvement and student characteristics [ 2 , 6 , 7 ]. However, the relative importance of these factors may vary depending on the context in which they are examined. Therefore, it is important to conduct a comprehensive review of the literature to identify the key factors of school effectiveness across different contexts.

This paper aims to provide a systematic review of the literature on school effectiveness, with a focus on identifying the main influencing factors. The review drew upon a range of empirical studies, meta-analyses and reviews to provide a comprehensive overview of the current state of knowledge on this topic. The literature review was conducted according to the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) guidelines [ 8 ]. Systematic reviews of the literature have an important role and can identify different problems that can be addressed in future studies; “they can generate or evaluate theories about how or why phenomena occur” and, by drawing on several studies, can address questions that cannot be tackled by individual studies [ 8 ] (p. 1). We were interested in several descriptors such as school, effectiveness/efficiency theories, effectiveness/efficiency research, and factors. Studies were reviewed using two databases: JSTOR and ERIC. This paper defines several categories that are important in school effectiveness research and, within these categories, lists various factors that affect students’ outcomes and the defined school effectiveness. The research question that this paper aimed to address is “what are the main factors of school effectiveness?”. Answering it can help inform educational policy and practice by identifying the key areas that schools should focus on in order to improve student outcomes. Several studies have attempted to answer this question, but there is still much debate and discussion surrounding the factors that contribute to school effectiveness.

2. Background of School Effectiveness Research

The concept of school effectiveness emerged in the 1960s and 1970s in response to growing concerns about the quality of education and the need to improve educational outcomes for students [ 4 , 7 , 9 ]. Early definitions of school effectiveness focused on the achievement of educational outcomes, such as academic performance and the ability of schools to meet the needs of students from diverse backgrounds [ 4 , 10 ].

Coleman et al. [ 11 ] argued that students’ socioeconomic status is a crucial factor affecting their academic achievement in schools and has a greater impact than school characteristics. This is consistent with the conclusion reached by Jencks [ 12 ], who found that schools do not have a statistically significant impact on student achievement. These findings paved the way for school effectiveness research, which emerged in the early 1970s as a radical movement aimed at exploring the factors within schools that contribute to better student educational performance, regardless of social background [ 13 ]. In the field of education, effective-schools research emerged as a response to previous studies such as Coleman’s and Jencks’, which indicated that schools had little impact on students’ achievement. As titles such as “Schools can make a difference” and “School matters” suggest, the goal of effective-schools research was to challenge this notion and explore factors that contribute to successful schools. What sets effective-schools research apart is its focus on investigating the internal workings of schools, including their organization, structure and content, in order to identify characteristics associated with effectiveness [ 14 , 15 ]. According to Muijs [ 14 ], school effectiveness research sought to move beyond the prevailing pessimism about the impact of schools and education on students’ educational performance. The movement aimed to study the factors within schools that could lead to better student academic performance, irrespective of social background [ 14 ] (p. 141). Scheerens et al. [ 16 ] (p. 43) summed up the five influencing factors identified in early research on school effectiveness: “strong educational leadership, emphasis on the acquiring of basic skills, an orderly and secure environment, high expectations of pupil attainment, frequent assessment of pupil progress”.

According to various scholars [ 17 , 18 , 19 , 20 , 21 , 22 ], defining educational quality is a challenging task due to the diverse settings, stakeholders and goals involved in education. Generally, educational quality can be defined as achieving the desired standards and goals. Creemers and Scheerens [ 23 ] further added that quality refers to the characteristics and factors of the school that contribute to differences in outcomes between students in different grades, schools and educational systems. However, these definitions fail to provide a clear explanation of the specific characteristics that result in quality education and schools [ 4 ] (p. 2). School effectiveness is a subset of educational effectiveness or educational quality. According to Scheerens [ 21 ], educational effectiveness refers to the extent to which an educational program or institution achieves its intended outcomes, while school effectiveness is concerned with the extent to which a school achieves its goals and objectives. Burusic et al. [ 4 ] also noted that school effectiveness research is a branch of educational effectiveness research that specifically focuses on the functioning of schools and their impact on student outcomes.

Theories of school effectiveness have evolved over time, with a greater emphasis on the role of leadership and school culture in shaping educational outcomes. One of the most influential models of school effectiveness is the “Effective Schools Model” developed by Edmonds [ 24 ]. This model identified five key characteristics of effective schools: high expectations, strong instructional leadership, a safe and orderly environment, a focus on basic skills, and frequent monitoring of student progress.

Subsequent research confirmed the importance of these factors in promoting school effectiveness [ 2 , 25 ]. For example, a study by Leithwood et al. [ 2 ] found that effective school leadership was associated with improved student outcomes, including academic achievement and graduation rates. Similarly, research by Ismail et al. [ 26 ] highlighted the importance of a positive school culture, including supportive relationships among staff and students, in promoting school effectiveness.

Reynolds et al. [ 27 ] (p. 3) proposed that there are three primary areas of focus in School Effectiveness Research (SER):

  • School Effects Research: investigating the scientific characteristics of school effects, which has evolved from input–output studies to current studies that use multilevel models.
  • Effective Schools Research: researching the procedures and mechanisms of effective schooling, which has developed from case studies of exceptional schools to contemporary studies that integrate qualitative and quantitative methods to study classrooms and schools concurrently.
  • School Improvement Research: examining the methods through which schools can be transformed, utilizing increasingly advanced models that surpass the simple implementation of school effectiveness knowledge to employ sophisticated “multiple-lever” models.

Sammons and Bakkum [ 5 ] (p. 10) highlighted the importance of different factors that are associated with student attainment: “individual characteristics (age, birth weight, gender), family socio-economic characteristics (particularly family structure, parental background: qualification levels, health, socio-economic status, in or out of work, and income level), community and societal characteristics (neighborhood context, cultural expectations, social structural divisions especially in relation to social class)”.

More recent theories of school effectiveness also emphasized the need to address systemic inequities and promote social justice in education [ 28 , 29 ]. These theories recognize the role of societal factors, such as poverty and discrimination, in shaping educational outcomes and the need for schools to adopt a more inclusive and equitable approach to education. For example, Ainscow [ 28 ] developed a model of “inclusive school leadership”, which emphasizes the importance of creating a culture of inclusion and diversity in schools.

Overall, theories of school effectiveness have evolved over time, reflecting changing perspectives on the role of schools in promoting educational outcomes. Key factors identified in the literature include effective school leadership, a positive school culture and a focus on meeting the needs of diverse students. However, more recent theories also recognize the need to address systemic inequities and promote social justice in education.

Heyneman and Loxley [ 30 ] used multiple linear regression to re-analyze IEA data on student achievement in industrialized countries. They found that student background variables such as parental education, father’s occupation, number of books at home, use of a dictionary at home, and the sex and age of the student explained approximately 20% of the total variance in science achievement, which accounted for roughly 50% of the explainable variance. Furthermore, the OECD [ 30 ] reported that PISA 2000 also revealed that various student background factors, such as parental occupational status, cultural possessions at home, parental involvement, home educational resources, participation in cultural activities and family wealth, explained a significant share of the variance in the academic achievement of 15-year-old students.
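As a purely illustrative aside, the idea of “variance explained” (R²) behind figures like these can be sketched with ordinary least squares on synthetic data. The variables and coefficients below are invented stand-ins, not the IEA or PISA data or the authors’ actual analysis:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Invented stand-ins for student background variables
# (e.g. parental education, books at home) -- illustration only.
X = rng.normal(size=(n, 4))
true_beta = np.array([0.5, 0.3, 0.2, 0.1])
y = X @ true_beta + 1.25 * rng.normal(size=n)  # synthetic "achievement"

# Ordinary least squares with an intercept column
X1 = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
y_hat = X1 @ beta

# R^2: share of the total variance in y explained by the predictors
ss_res = np.sum((y - y_hat) ** 2)
ss_tot = np.sum((y - y.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

By construction, the noise level here is chosen so that r2 lands in the vicinity of the roughly 20% figure quoted above, though the exact value depends on the random draw.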

3. Materials and Methods

For this article, a systematic literature review was carried out using the PRISMA protocol [ 8 ]. “To ensure a systematic review is valuable to users, authors should prepare a transparent, complete, and accurate account of why the review was done, what they did (such as how studies were identified and selected), and what they found (such as characteristics of contributing studies and results of meta-analyses)” [ 8 ] (p. 1). We were interested in several descriptors such as school, effectiveness/efficiency theories, effectiveness/efficiency research and factors. For searching, the following formula was used: ((school AND effectiveness) OR (school AND efficiency)) AND (theories OR research OR factors).

Two databases were used: JSTOR and ERIC. The search and review of the studies were carried out from August to October 2022. In JSTOR, the publication period was limited to 2016–2022. ERIC did not offer that exact option, so we used its built-in filter for research from the last five years, 2018–2022. This period was chosen because the literature review will be used in further research for the doctoral dissertation of the main author of this article, in which a secondary analysis will be performed considering the ICCS 2016/2022 (International Civic and Citizenship Education Study); we therefore focused on the literature from that period.

The literature review included all studies in English, qualitative and quantitative. There were no specific restrictions on the studies involved, so book sections and articles published in professional and academic journals were considered.

Before we determined the final search formula, we tried several search terms and combinations. A search using the term “school effectiveness” alone was too broad, while, for example, “school effectiveness theories” or “school effectiveness factors” was too narrow. We were also interested in the term efficiency besides effectiveness, so the following final search term was chosen: ((school AND effectiveness) OR (school AND efficiency)) AND (theories OR research OR factors).
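As an illustration only, the final boolean formula can be mimicked with a toy keyword filter. Real databases such as JSTOR and ERIC apply their own field-specific matching, so this sketch (and the example titles in it) is an assumption, not their actual behavior:

```python
def matches(text: str) -> bool:
    """Toy re-implementation of the review's final search formula:
    ((school AND effectiveness) OR (school AND efficiency))
    AND (theories OR research OR factors)."""
    t = text.lower()

    def has(word: str) -> bool:
        return word in t

    return (
        (has("school") and has("effectiveness"))
        or (has("school") and has("efficiency"))
    ) and (has("theories") or has("research") or has("factors"))

# Invented example titles, purely for demonstration
titles = [
    "School effectiveness research in primary education",  # kept
    "Factors of school efficiency in rural districts",     # kept
    "Teacher motivation and commitment",                   # filtered out
]
hits = [title for title in titles if matches(title)]
```

The parenthesization matters: without the outer pair around the two AND clauses, the final AND would bind only to the efficiency clause.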

In the first phase of searching, we included descriptors and searched the literature using the final formula mentioned above.

The two databases offer different search options, so the search was adapted to each of them. With the search formula above, we obtained 130,371 results. Resources to which we did not have full access were excluded, reducing the number of relevant items to 13,446.

In the second phase of the literature review, we reviewed the titles of all retrieved items and collected 130 possibly relevant studies for our research area. We excluded 4 duplicates. After reading all selected articles, we excluded the irrelevant ones, and the final number of studies included in the systematic literature review was 84. These articles are described in the Results section. For a visual overview of the search process, please refer to the PRISMA diagram in Figure 1 .
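The screening funnel can be tallied in a few lines. The input counts are taken from the text; the two derived quantities (records after deduplication, items excluded after full-text reading) are simple differences, not figures reported explicitly by the authors:

```python
# Counts reported in the text / Figure 1
records_identified = 130_371   # initial database hits (JSTOR + ERIC)
full_access = 13_446           # after excluding items without full access
title_screened_in = 130        # possibly relevant after title screening
duplicates = 4
included = 84                  # final review set

# Derived figures (plain arithmetic, not stated in the text)
after_dedup = title_screened_in - duplicates        # read in full
excluded_after_reading = after_dedup - included     # judged irrelevant
```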

Figure 1. PRISMA diagram for the search protocol and the inclusion and exclusion of the reviewed articles.

4. Results

With the literature review in the area of school effectiveness, we identified key themes and provided theoretical guidance for the achievement of effective schools. The aim of this study was to discover and define the key factors that influence the effectiveness of a school and students’ achievements/outcomes. A few categories were identified within school effectiveness research: teacher effectiveness, effectiveness in digital/online education, and school efficiency. Across the reviewed items, several factors appeared to have a statistically significant impact on school effectiveness and student achievement in multiple studies: school culture; a supportive and positive classroom climate; the use of digital sources; strong and effective leadership; the flipped classroom (FC); the schools’ economic, social and cultural status; the attitude of principals, teachers and school counselors; the organizational climate; cooperation and inclusion in decision making; the presence of teachers with many years of experience; collegial support and collegial leadership; teacher collaboration; teachers’ level of participation in decisions and their willingness to participate; the habit of treating students with respect and caring about their problems; high teacher ratings on leadership; and supervisors’ support of teachers.

The majority of the reviewed studies discussed school-level factors. Thrupp in [ 3 ] argued that the background characteristics of the students are often overlooked. “School performance is usually expressed in terms of average student achievement by the school” [ 3 ] (p. 255). Furthermore, research suggests that “student achievement mostly depends on the performance of the student in early education” [ 31 ] (p. 12). School climate was detected as one of the most important factors for school effectiveness [ 32 , 33 , 34 , 35 , 36 , 37 , 38 ], and studies indicated the significance of school climate for teacher commitment [ 33 , 39 , 40 , 41 ].

4.1. Positive School Climate and School Culture

A positive school climate is essential for school effectiveness. Khan [ 33 ] proposed that it would be worthwhile to develop a positive organizational climate strategy to improve teacher commitment, and promoting a positive school climate is important for the improvement of school effectiveness in general [ 34 ]. Authors like Ismail et al. [ 34 ] and Ismara et al. [ 42 ] claimed that improving school effectiveness requires support from stakeholders like government, policymakers, principals, deputy principals, teachers, parents and other school stakeholders.

School culture also predicts school effectiveness, and it relates to school effectiveness more strongly than teacher empowerment does [ 43 ]. “A school should have a culture that values the professional development of its teachers, collegiality, collaborative leadership, and teamwork to be effective” [ 43 ] (p. 340). Karadağ et al. [ 44 ] argued that high-performing schools have strong school culture and spiritual leadership characteristics compared to low-performing schools; the results of their study showed the impact of school culture and spiritual leadership on academic success. Ismail et al. [ 26 ] also claimed that school culture has a significant influence on school effectiveness. “If school leaders want to shape a new culture, they should start with an assessment of the climate. If the culture is ineffective, there are probably climate issues that were missed before they became rooted in the culture” [ 45 ] (p. 58).

4.2. Teacher Effectiveness

Teacher effectiveness is also known as one of the most important factors for predicting school and student effectiveness [ 31 , 46 , 47 , 48 , 49 , 50 ]. Aslantas [ 31 ] found that student gender and students’ language identity were not significant in explaining differences in teacher effectiveness estimates.

Effective teachers provide a positive school climate, collaborate with colleagues and analyze student data. Student achievement is positively associated with years of teaching experience [ 49 ]. The quality of the interactions between teachers and students is also very important. LoCasale-Crouch et al. [ 51 ] argued that teacher–student interactions are important to students’ school outcomes (they affect their engagement, academic performance and motivation). Independent of the overall interaction quality, students with less consistency in their interactions with teachers had more conflicts with them.

School effectiveness is positively correlated with the teachers’ level of participation in decisions and their willingness to participate. Teachers reported that they did not feel sufficiently included in the decisions of the administration and were aware that the administration had an important role; it is therefore very important to increase the level of teacher participation in decisions [ 52 ]. Yıldırım [ 53 ] claimed that organizational cynicism (OC) indirectly affects perceived school effectiveness (PSE) through involvement in decision making (IDM) and may reduce perceived school effectiveness by reducing teachers’ participation in school decision making. OC had a statistically significant negative effect on PSE, as well as on IDM, while IDM showed a statistically significant positive effect on PSE [ 53 ]. Gülbahar [ 54 ] (p. 15) reported that “the perceived supervisor support among teachers is positive on school effectiveness perception, engagement to work and job satisfaction and negative on organizational cynic attitude”.

Javorcíková et al. [ 55 ] analyzed the motivation level of teachers in primary schools. The supervisors’ approach, as well as the atmosphere in the workplace, teamwork, a fair system and salary, are important for teachers’ positive motivation. Khan [ 33 ], on the other hand, tested the impact of organizational climate on teacher commitment; since school climate is directly connected with school effectiveness, he examined how it is associated with teacher commitment. His regression analysis showed that school climate has a significant influence on teacher commitment. Collegial leadership and institutional vulnerability also appeared as predictors of teacher commitment, while teacher professionalism and academic achievement were not significant predictors. The study proposed that it would be worthwhile to develop a positive organizational climate strategy to improve teacher commitment [ 33 ].

4.3. Strong Leadership

Many authors agree that strong instructional, school, academic, collaborative and collegial leadership has a significant influence on the effectiveness of schools [ 32 , 33 , 34 , 36 , 39 , 41 , 42 , 43 , 44 , 48 , 56 , 57 , 58 , 59 , 60 , 61 , 62 , 63 , 64 , 65 , 66 , 67 , 68 ].

Reynolds and Teddlie in [ 32 ] (p. 2) “summarized that effective schools were characterized by nine process factors: effective leadership, effective teaching, a pervasive focus on learning, a positive school culture, high expectations for all, student responsibilities and rights, progress monitoring, developing school staff skills, and involving parents”.

Professional development is a dimension of school culture. Gülşen and Çelik [ 43 ] tested its correlation with school effectiveness, and professional development was the most predictive variable. The other significant predictors were collegial support, collegial leadership, unity of purpose, self-efficacy, decision making and teacher collaboration. The learning partnership dimension of school culture and the status, impact, autonomy and professional growth dimensions of school participant empowerment were not statistically significant in explaining school effectiveness.

4.4. Technological Resources and Digital Literacy

Many of the articles included in this systematic literature review discussed technological resources and digital literacy as factors that can have positive effects on education. Teachers need more support and training in using digital resources in education. Teachers partly use digital sources, and most of them do not consider this a workload. Teachers see the use of digital sources as motivating student engagement and as having a positive effect on student success and education [ 69 , 70 ].

The studies examined in this systematic literature review also focused on online education and distance teaching and learning, since the COVID-19 pandemic fell within the review period; we therefore identified some issues related to education during the pandemic [ 71 , 72 , 73 , 74 ]. The experience of online distance learning (ODL) was new for students and teachers, who faced issues related to a lack of tools and sources and poor Internet connectivity for accessing virtual classes. Despite these difficulties, teachers and students reported that ODL has many positive aspects that help working teachers and professionals to continue higher education and professional development [ 71 ].

Zou et al. [ 72 ] found that it is important for teachers who have the opportunity to continue their training to acquire more skills and become more confident, so their online teaching could be more effective. In general, in this study, the majority of students and teachers were satisfied with online teaching during the pandemic and reported that in general it was effective. Basar et al. [ 73 ] argued that tools and sources were not a problem for the students, as they had computers and an internet connection, and their ability and comfort to use computers were high. The main problem for the students was the lack of motivation for online learning. The majority of the participants in the study agreed that face-to-face teaching is very important. The authors also emphasized “the importance of well-equipped facilities and a stable internet connection for effective learning” and that the “support within school communities, and among parents and school administrators, is vital to ensure the success of online learning”. “While online learning has been proven to support the health of students during the pandemic, it is not as effective as conventional learning” [ 73 ] (pp. 76, 119, 128).

However, online education is now more embedded in school systems, so its effectiveness must be improved. Teachers who had been trained and had used the technology before the pandemic were more self-assured and had fewer problems with the transition [ 72 , 74 , 75 , 76 ].

4.5. Flipped Classroom

In our systematic literature review, we found that a few authors researched the learning method of flipped classroom and tested its association with school effectiveness [ 77 , 78 , 79 , 80 , 81 ].

Flipped classroom is an active learning strategy that puts the student at the center of teaching and that gained popularity in the recent decade. Authors like Mok and Gilboy et al. in [ 79 ] argued that, compared to traditional pedagogical teaching, students accept the flipped classroom strategy with more enthusiasm and motivation for learning. On the other hand, Atteberry [ 79 ] reported in a preliminary study that, according to four professors, the flipped classroom did not improve students’ learning, although the difference was not significant. The flipped classroom (FC) method is a digital teaching method in which courses are taught online through learning applications and are supported by digital media, for example, learning videos and simulations [ 77 ].

Weiß and Friege [ 77 ] (p. 315) listed several definitions of the flipped classroom concept from different authors:

  • “An inverted (or flipped) classroom is a specific type of blended learning design that uses technology to move lectures outside the classroom and uses learning activities to move practice with concepts inside the classroom” Strayer (2012, p. 171).
  • “We define the flipped classroom as an educational technique that consists of two parts: interactive group learning activities inside the classroom and direct computer-based individual instruction outside the classroom” Bishop and Verleger (2013, p. 9).
  • “Flipped Learning is a pedagogical approach in which direct instruction moves from the group learning space to the individual learning space, and the resulting group space is transformed into a dynamic, interactive learning environment where the educator guides students as they apply concepts and engage creatively in the subject matter” Flipped Learning Network (2014, p. 1), Bergmann and Sams (2014, p. 14).

There are not yet clear conclusions on whether the flipped classroom method contributes to school effectiveness. This learning strategy brings many benefits but also confronts students and teachers with new challenges, as students have to be well organized for self-directed learning at home. Although research on this theme has grown in the last few years, there is still a lack of relevant publications that meet scientific standards [ 77 ].

The flipped classroom is an effective method that increases students’ engagement, and most students prefer this way of learning. On the other hand, it also has some disadvantages. In the study reported in [ 78 ], the strategy did not further improve the scores of top-scoring students, and the students did not prefer it to the traditional method, but it “helped improve the grades of students who were at the lower end of academic performance” [ 78 ] (p. 2). Knežević et al. [ 81 ] found a positive relationship between the method and school effectiveness, reporting that the flipped classroom strategy brings higher results than the conventional teaching and learning approach.

4.6. The Efficiency of Schools

The articles we examined for our systematic literature review reported on the efficiency of schools [ 71 , 82 , 83 , 84 , 85 , 86 ]. This concept includes the financial status of the school (budget), the number of employed staff at the school (teaching staff) and the school’s physical infrastructure [ 87 ]. “Efficient educational institutions are those that can use their inputs optimally to achieve maximum possible outputs. If the output is fixed, efficiency refers to minimizing the use of inputs to achieve the output” [ 87 ] (pp. 1, 2).

The study by Thompson et al. [ 82 ] showed that total student enrolment is a significant, positive factor in the efficiency rating of school districts, while the percentages of nonwhite students and of economically disadvantaged students have a significant negative influence on district efficiency scores. The most widely used method for measuring technical efficiency compares inputs and outputs across many educational units; this method, called Data Envelopment Analysis (DEA), was developed by Charnes et al., as cited in [ 85 ] (p. 2). Halkiotis et al. [ 85 ] measured the degree of technical efficiency of high schools and concluded that it is necessary to improve working conditions for teachers and reduce stress; their results showed that a significant number of teachers did not complete their compulsory weekly teaching schedule. They also argued that it is essential to foster healthy competition between students by increasing the average number of students per class.
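To illustrate the DEA idea described above: the classic input-oriented CCR model scores each decision-making unit (here, a school) by the smallest proportional contraction of its inputs that a convex combination of peer schools could achieve while still matching its outputs. A minimal sketch in Python using `scipy.optimize.linprog` follows; the function name and data are hypothetical, and actual DEA studies such as those cited typically use dedicated software and richer model variants:

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_efficiency(X, Y):
    """Input-oriented CCR efficiency scores for n units (e.g., schools).

    X: (n, m) array of inputs (e.g., budget, staff); Y: (n, s) array of
    outputs (e.g., graduation or test results). Returns n scores in (0, 1],
    where 1.0 means the unit lies on the efficient frontier.
    """
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n].
        # Minimize theta, the proportional contraction of unit o's inputs.
        c = np.zeros(n + 1)
        c[0] = 1.0
        # The composite peer's inputs must fit within theta * inputs of o:
        #   sum_j lambda_j * x_ij - theta * x_io <= 0   for each input i
        A_inputs = np.hstack([-X[o].reshape(m, 1), X.T])
        # The composite peer's outputs must match or exceed o's outputs:
        #   -sum_j lambda_j * y_rj <= -y_ro             for each output r
        A_outputs = np.hstack([np.zeros((s, 1)), -Y.T])
        A_ub = np.vstack([A_inputs, A_outputs])
        b_ub = np.concatenate([np.zeros(m), -Y[o]])
        res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                      bounds=[(0, None)] * (n + 1), method="highs")
        scores.append(res.x[0])
    return np.array(scores)
```

For example, if two schools produce identical outputs but the second consumes twice the inputs, the second school receives an efficiency score of 0.5 relative to the first.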

4.7. Sociodemographic Characteristics

Sociodemographic characteristics have been identified as important factors that contribute to school effectiveness. Ramberg and Modin [ 56 ] found that schools with a high proportion of students with immigrant backgrounds tended to have lower levels of academic achievement, possibly due to language barriers and cultural differences. Şirin and Şahin [ 88 ] also noted that students from low-income families may face more challenges in their academic performance and school engagement, which can negatively impact the overall effectiveness of schools.

However, Hirschl and Smith [ 89 ] argued that the relationship between socioeconomic status and school effectiveness may not be straightforward, as schools in high-poverty areas may have higher levels of student motivation and community involvement, which can offset the negative effects of poverty. Murwaningsih and Fauziah [ 90 ] further highlighted the importance of considering gender and ethnicity to understand school effectiveness, as these factors can influence student achievement and experiences in different ways.

A study by Rumberger et al. [ 91 ] analyzed data from the National Longitudinal Survey of Youth and found that socioeconomic status was a strong predictor of high school graduation rates. The study also found that students from low-income families were more likely to attend schools with fewer resources, which might contribute to lower academic achievement.

Overall, these studies suggest that sociodemographic characteristics play a significant role in school effectiveness, but their influence may vary depending on specific contexts and populations.

5. Discussion

Academic performance, academic achievement and school performance, sometimes together with school readiness, are often used as synonyms, and several authors agree that each denotes the result of learning, prompted by the teaching activity of the teacher and produced by the student [ 92 ]. At the same time, however, there seems to be a lack of consensus among researchers regarding the similarities and differences among the constructs of academic performance, achievement, and learning outcomes [ 93 ]. For those who view them as the same concept, they can be used interchangeably. For others—who mostly come from different disciplines and thus differ in how they perceive and use each of these constructs in relation to certain variables—they can have different meanings [ 93 ], although only to a limited extent. The academic performance of a student can be regarded as the observable and measurable behavior of a student in a particular situation and can be evaluated, at any point in time, through the scores obtained in teacher-made tests, first term examinations, mid-semester tests, etc. [ 93 ]. Achievement is a measurable behavior in a standardized series of tests, as indicated by Simpson and Weiner in [ 93 ] (p. 6), or is measured by a standardized achievement test developed for school subjects, as indicated by Bruce and Neville in [ 93 ] (p. 6). This means that academic achievement is measured in relation to what is attained at the end of a course, since it is the accomplishment of a medium- or long-term objective of education and cannot be attained within a short period or in one instance. It is important that the test be standardized to meet the national norm [ 93 ] (pp. 6–9). Academic achievement is a representation of performance outcomes that indicate the level to which the student has attained the specific goals that were the focus of activities in instructional environments [ 94 ].
School systems mostly define cognitive goals that either apply across multiple subject areas (e.g., critical thinking) or concern the acquisition of knowledge and understanding in a specific intellectual domain (e.g., numeracy, literacy, science, history). Academic achievement should therefore be considered a multifaceted construct that comprises different domains of learning [ 94 ]. The definition of academic achievement depends on the indicators used to measure it. Among the many criteria that indicate academic achievement, there are very general indicators (e.g., procedural and declarative knowledge acquired in an educational system), more curriculum-based criteria (e.g., grades or performance on an educational achievement test), and cumulative indicators of academic achievement (e.g., educational degrees and certificates) [ 94 ]. Academic achievements are usually expressed through school grades, as reported by Martinez-Otero in [ 92 ]. The term learning outcomes may be used when performance or achievement is examined as an attitude of the students towards a particular subject [ 93 ] (p. 14). Aremu and Sokan [ 95 ] indicated that learning outcomes (academic achievement and academic performance) are determined by family, school, societal and motivational factors.

In summary, it appears that academic outcomes (performances and/or achievements), or educational goals and effects, are influenced by several school and out-of-school (e.g., family) factors, as well as by individual student factors, with the inter-relations among these factors also being important. All of this contributes to school efficiency.

As shown in the previous section (Results), many factors contribute to school effectiveness. Some of them can be seen as related to each individual student (e.g., sociodemographic characteristics); however, even those “individual factors” are often directly or indirectly associated with within-school factors attributed to teachers and school efficiency (e.g., effective teaching, effective school or classroom leadership, etc.). This is not surprising, as Creemers and Kyriakides [ 96 ] already proposed that a new, dynamic model of effectiveness must (a) be multilevel in nature, (b) assume that the relation of some effectiveness factors with achievement may be curvilinear, (c) illustrate the dimensions upon which the measurement of each effectiveness factor should be based and (d) define the relations among the effectiveness factors. Their testing of this dynamic model at the school (and, further, system) level placed most attention on describing the detailed factors associated with teacher behavior in the classroom [ 96 ].

School effectiveness is a complex and multifaceted topic that has been the subject of extensive research and debate. Many different factors can contribute to the effectiveness of a school, including the quality of teaching, the curriculum, the leadership and management of the school, the socio-economic background of the students, the level of parental involvement, and the school culture and climate. The quality of a school seems to be strongly linked to a safe and stimulating learning environment. There is no clear dividing line between the definitions of a safe and of a stimulating learning environment; the two concepts are complementary and partly overlap with the concepts of school culture and climate, as indicated by Dumont et al. in [ 97 ]. Standards defining the school culture and climate, or a safe learning environment, highlight the following aspects: “inclusion, safety, relationships, information and communication, educational strategies” [ 97 ] (p. 9).

One important finding that emerged from the literature is that the quality of teaching is a very important factor in school effectiveness. Research has consistently shown that effective teaching practices can significantly improve student outcomes, including academic achievement, engagement and motivation [ 98 , 99 ]. This underscores the importance of ensuring that teachers have the necessary skills, knowledge and support to deliver high-quality instruction.

Another key finding is that school leadership and management can have a significant impact on school effectiveness. Effective leadership can create a positive school culture that promotes learning and growth, fosters collaboration among staff and students and ensures that resources are allocated effectively [ 2 , 100 ]. The socio-economic background of the students is also an important factor to consider when evaluating school effectiveness. Research has shown that students from disadvantaged backgrounds are more likely to experience academic difficulties and that schools serving these populations face unique challenges [ 101 ]. To be effective, schools must be able to provide these students with the support and resources they need to succeed. For university students, important factors in the growth of academic success include sociodemographic variables such as gender, the university attended, their fathers’ education and the way they chose their department [ 33 ].

6. Limitations of Our Systematic Review

There are several limitations regarding the review processes used in this literature review and the evidence reported. One limitation is the possibility of publication bias: this review only included studies that were published in peer-reviewed journals indexed in two databases, and we only included items to which we had full access. Another limitation is the potential for methodological differences across the studies. The studies included in this review used a variety of research methods, comprising case studies, surveys and quantitative analyses, which could have resulted in variations in the findings. Furthermore, the studies may have used different definitions of school effectiveness or different measures of school inputs and outputs, which makes it difficult to compare findings across studies. Many of the studies relied on self-reported data, which may have introduced bias and inaccuracies into the findings. Finally, this review was limited to English-language studies published from 2016 onwards, which may have excluded relevant studies published in other languages or before 2016. Additionally, given the rapid pace of change in education policies and practices, this review may not reflect the most up-to-date research in the field.

7. Conclusions and Future Directions

In conclusion, the effectiveness of schools is a complex and multifaceted concept that can be measured through various indicators such as academic achievement, student engagement and teacher satisfaction. This review of school effectiveness revealed that several factors contribute to effective schools, such as strong leadership, effective teaching practices, a positive school culture and parental involvement. Additionally, school resources, such as funding and facilities, can impact school effectiveness, particularly in under-resourced communities.

Leadership is a crucial factor in promoting student success, as noted by multiple researchers [ 2 , 48 , 67 , 102 , 103 ]. Leaders who create a positive school culture and prioritize high-quality teaching practices are more likely to create a learning environment in which students can thrive. Furthermore, parental involvement has been linked to improved student achievement, as noted by Akbar et al. [ 104 ] and Fan and Chen [ 105 ], and can be facilitated through strategies like family engagement programs and clear communication between families and schools.

Teacher quality is another critical factor in student learning and achievement [ 106 , 107 , 108 ]. Teachers who are knowledgeable, experienced and effective in using instructional strategies can significantly impact student outcomes. Effective curriculum and instruction, aligned with standards and assessments and delivered using evidence-based instructional strategies, are critical in promoting student learning and achievement [ 109 , 110 ].

Moreover, the inclusion of all students regardless of their backgrounds and abilities can promote a sense of belonging and engagement, which can positively impact their academic performance and social-emotional development, as noted by Ahn and Davis [ 111 ]. Additionally, a positive school climate is essential for promoting student learning and achievement. A safe, respectful and supportive school environment can positively impact student outcomes [ 33 , 38 , 112 ].

Finally, adequate resources, including funding, facilities and technology, are essential for promoting student learning and achievement, as noted by Bhutoria and Aljabri [ 87 ]. It is important to note that school effectiveness is a complex and multifaceted concept and that different factors may be more or less important depending on the context. Answering the research question, the literature suggests that effective schools are characterized by strong leadership, high-quality teachers, effective curriculum and instruction, parent and community involvement, a positive school climate and adequate resources. These factors work together to create a supportive learning environment that promotes student learning and achievement. By studying and understanding the key factors that contribute to school effectiveness, educators and policymakers can work to create environments that promote student success and support all students in reaching their full potential. Overall, the literature suggests that school effectiveness is a multidimensional concept that requires a comprehensive and holistic approach: by implementing evidence-based practices that prioritize strong leadership, effective teaching, parent and community involvement, a positive school culture and adequate resources, schools can provide high-quality education that meets the needs of all students.

While this literature review provides a valuable summary of the current research on school effectiveness and efficiency and of the key factors that contribute to these outcomes, its findings should be interpreted with caution, given the limitations of the review processes used. Future research could build on these findings by addressing some of those limitations, for example by conducting more comparative studies across different contexts and using consistent measures of effectiveness and efficiency.

Acknowledgments

The authors acknowledge the financial support received from the Slovenian Research Agency (research core funding No. P5-0106 and research project funding No. J5-2553).

Funding Statement

The research within which this publication was prepared was funded by the Slovenian Research Agency (ARIS) within the program group “Educational Research”, grant number P5-0106, and within the research project “New Domains of Inequality: The digital divide in Slovenia”, grant number J5-2553.

Author Contributions

Conceptualization, Š.J. and E.K.M.; formal analysis, Š.J.; methodology, Š.J.; supervision, E.K.M.; writing—original draft, Š.J.; writing—review and editing, E.K.M. All authors have read and agreed to the published version of the manuscript.

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest.


What is quality education? How can it be achieved? The perspectives of school middle leaders in Singapore

  • Published: 12 June 2015
  • Volume 27 , pages 307–322, ( 2015 )


  • Pak Tee Ng 1  


This paper presents the findings of a research project that examines how middle leaders in Singapore schools understand ‘quality education’ and how they think quality education can be achieved. From the perspective of these middle leaders, quality education emphasises holistic development, equips students with the knowledge and skills for the future, inculcates students with the right values and imbues students with a positive learning attitude. Quality education is delivered by good teachers, enabled by good teaching and learning processes and facilitated by a conducive learning environment. The challenge of achieving quality education is to find the balance between lofty ideals and ground realities. One critical implication of the research findings is that policymakers should appeal to the ideals of practitioners to drive change.



Author information

Authors and Affiliations

Policy and Leadership Studies Academic Group, National Institute of Education, Nanyang Technological University, 1, Nanyang Walk, Singapore, 637616, Republic of Singapore


Corresponding author

Correspondence to Pak Tee Ng.


About this article

Ng, P.T. What is quality education? How can it be achieved? The perspectives of school middle leaders in Singapore. Educ Asse Eval Acc 27, 307–322 (2015). https://doi.org/10.1007/s11092-015-9223-8


Received: 03 July 2014

Accepted: 01 June 2015

Published: 12 June 2015

Issue Date: November 2015

DOI: https://doi.org/10.1007/s11092-015-9223-8


Keywords

  • School leadership
  • Middle leaders
  • Quality education
  • Learning environment

