7 Quantitative Data Examples in Education
Quantitative data plays a crucial role in education, providing valuable insights into various aspects of the learning process. By analyzing numerical information, educators can make informed decisions and implement effective strategies to improve educational outcomes. But what exactly is quantitative data in education, and why is it essential? In this article, we’ll delve into seven illustrative quantitative data examples in education and analyze their impact.
- Standardized Test Scores: Measuring Performance at Scale
- Attendance Rates: More than Just Numbers
- Graduation Rates: Tracking Long-Term Success
- Class Average Scores: Gauging Collective Performance
- Student-to-Teacher Ratios: A Reflection of Learning Environments
- Homework Completion Rates: Analyzing Daily Academic Engagement
- Frequency of Library Book Checkouts: Monitoring Reading Habits
The Importance of Quantitative Data Examples in Education
Before delving into specific examples, it’s worth understanding why quantitative data matters in education.
Quantitative data plays a crucial role in education by providing objective evidence of student achievement and progress. Mining educational data allows educators to identify trends and patterns, enabling them to tailor teaching methods and interventions to meet the individual needs of students. For example, if a particular group of students consistently underperforms on standardized tests, quantitative data can help educators identify the specific areas where additional support is needed. This data-driven approach ensures that resources are allocated effectively and that students receive the targeted support they require to succeed.
In addition to informing classroom instruction, quantitative data also plays a significant role in shaping education policies. Policymakers rely on this data to make informed decisions about curriculum development, resource allocation, and educational reforms. By analyzing quantitative data on a larger scale, policymakers can identify systemic issues and implement evidence-based strategies to address them. For instance, if quantitative data reveals a high dropout rate in a specific region, policymakers can develop targeted interventions to improve graduation rates and ensure that students have access to quality education.
1. Standardized Test Scores: Measuring Performance at Scale
Standardized test scores, spanning from globally recognized exams like the SAT and ACT to national or regional board examinations, have become a cornerstone in the world of education. These scores serve multiple purposes, providing a consistent, objective measure of a student’s grasp of specific subjects and skills. This universal consistency allows for comparisons across regions, states, or even countries, simplifying the monumental task for college and university admissions offices when they sift through thousands of applications from varied educational backgrounds. For these institutions, these scores are invaluable in determining a student’s readiness for the rigors of higher education.
However, the significance of these scores isn’t restricted to tertiary institutions. K-12 schools and districts also harness these numbers to assess the efficacy of their teaching methodologies, curricula, and allocated resources. Consistently low scores might hint at areas where instructional techniques need refinement or indicate students who require additional support. But, as pivotal as they are, it’s essential to approach standardized test scores with a balanced perspective. They capture just one dimension of a student’s academic journey, and their true value is unlocked when integrated with other forms of quantitative and qualitative data.
2. Attendance Rates: More than Just Numbers
Attendance rates in schools often serve as more than just basic metrics of student presence. At its core, this data provides a nuanced understanding of how engaged, motivated, and committed students are to their educational pursuits. By calculating the percentage of days students are present over a set period, institutions can glean insights into a myriad of underlying factors. A consistently high attendance rate, for instance, could indicate a thriving school environment where students feel inspired and eager to participate. Conversely, a sudden drop might hint at external challenges, from health outbreaks to socio-economic disturbances.
However, diving deeper, these rates also unveil more subtle issues affecting education. Consistent absences can indicate personal struggles, whether they be familial, psychological, or health-related. For educators and administrators, understanding the intricacies behind these numbers is essential. Addressing the root causes, whether they involve bolstering student engagement through innovative teaching methods or providing additional resources for those facing challenges, ensures a more inclusive and responsive educational environment.
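The attendance-rate calculation described above is simple percentage arithmetic. Here is a minimal sketch; the student records and the 90% flagging threshold are illustrative assumptions, not figures taken from this article:

```python
# Illustrative sketch: compute attendance rates over a set period and
# flag consistent absence. The 90% threshold is an assumed cutoff for
# follow-up, not a value prescribed by the article.

def attendance_rate(days_present: int, days_enrolled: int) -> float:
    """Return attendance as a percentage of enrolled days."""
    if days_enrolled <= 0:
        raise ValueError("days_enrolled must be positive")
    return 100.0 * days_present / days_enrolled

def chronically_absent(days_present: int, days_enrolled: int,
                       threshold: float = 90.0) -> bool:
    """Flag students whose attendance falls below the threshold."""
    return attendance_rate(days_present, days_enrolled) < threshold

# Hypothetical records: (student, days present, days enrolled)
records = [("A", 172, 180), ("B", 158, 180), ("C", 179, 180)]
for student, present, enrolled in records:
    rate = attendance_rate(present, enrolled)
    flag = " FLAG" if chronically_absent(present, enrolled) else ""
    print(f"{student}: {rate:.1f}%{flag}")
```

In practice the interesting work happens after the arithmetic: investigating why a flagged student's rate dropped, as the paragraph above describes.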
3. Graduation Rates: Tracking Long-Term Success
Graduation rates stand as a pivotal metric in assessing the long-term success and effectiveness of educational institutions. This rate, which depicts the percentage of students who complete their academic programs within a standard timeframe, is more than just a reflection of student diligence. It also provides insights into the quality of instruction, the adequacy of resources, and the overall support infrastructure in place. High graduation rates often suggest that an institution is not only providing valuable academic content but also fostering an environment conducive to sustained student success.
On the flip side, lower graduation rates can act as an early warning sign for potential challenges within the educational framework. Whether it’s a curriculum that doesn’t resonate with the student body, inadequate support for those with learning differences, or external factors like socio-economic challenges that affect a student’s ability to prioritize education, these numbers prompt introspection. For educators and institutional leaders, these rates serve as a guidepost, highlighting areas of success and illuminating opportunities for enhancement in the ever-evolving landscape of academia.
4. Class Average Scores: Gauging Collective Performance
Class average scores play a fundamental role in deciphering the collective performance of a student group, offering a holistic view of how a class or cohort is faring academically. By taking the mean of scores across a specific subject or class, educators can identify patterns, strengths, and areas that may require more attention. High averages might suggest that teaching methods, curricula, and learning materials are resonating with students, leading to broad comprehension and mastery of the content.
Conversely, consistently lower average scores can serve as a catalyst for introspection and change. They may indicate potential misalignments between the curriculum and students’ learning styles, a need for more interactive or varied teaching methods, or even external factors impacting students’ ability to grasp content. By closely monitoring and analyzing these averages, educational institutions can adapt dynamically, ensuring that teaching strategies evolve to meet the unique needs of every student cohort.
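The class-average computation described above amounts to taking the mean of scores per class or cohort. A short sketch follows; the section names, scores, and 70-point review threshold are hypothetical assumptions for illustration:

```python
from statistics import mean

# Hypothetical score lists keyed by class section.
scores = {
    "Algebra 1A": [78, 85, 91, 62, 74],
    "Algebra 1B": [55, 61, 68, 59, 64],
}

REVIEW_THRESHOLD = 70.0  # assumed cutoff for flagging a section for review

for section, marks in scores.items():
    avg = mean(marks)
    status = "review curriculum/instruction" if avg < REVIEW_THRESHOLD else "on track"
    print(f"{section}: average {avg:.1f} -> {status}")
```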
5. Student-to-Teacher Ratios: A Reflection of Learning Environments
The student-to-teacher ratio in educational settings offers a clear, quantifiable snapshot of the learning environment’s structure. A direct representation of how many students are assigned to each educator, this metric provides insights into the potential for individualized attention within a class. In instances where the ratio is low, it often implies that teachers have fewer students to manage, allowing for more one-on-one interactions, personalized feedback, and a closer understanding of each student’s needs and challenges.
However, a higher ratio can signify challenges in resource allocation or an influx of students beyond the institution’s standard capacity. In such scenarios, teachers might find it challenging to address individual student concerns, potentially leading to overlooked learning gaps or unmet needs. Recognizing the implications of these ratios allows educational institutions to strategize effectively, whether it’s hiring additional staff, incorporating teaching assistants, or leveraging technology to ensure every student receives the attention and support they deserve.
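The ratio itself is a single division, sketched below; the enrollment and staffing figures are invented for illustration:

```python
def student_teacher_ratio(students: int, teachers: int) -> float:
    """Students per teacher; a lower value suggests more room for
    one-on-one attention, as discussed above."""
    if teachers <= 0:
        raise ValueError("teachers must be positive")
    return students / teachers

# Hypothetical school figures: same enrollment, different staffing.
print(f"{student_teacher_ratio(480, 30):.1f}:1")  # 16.0:1
print(f"{student_teacher_ratio(480, 20):.1f}:1")  # 24.0:1
```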
6. Homework Completion Rates: Analyzing Daily Academic Engagement
Homework, a staple in the K-12 educational journey, can provide more insights than just individual student performance. By tracking homework completion rates, schools gain a clearer perspective on daily academic engagement outside the classroom. Consistently high completion rates typically indicate a student body that’s committed, understands the material, and can effectively manage their time. It can also suggest that the homework given is appropriately challenging and relevant, resonating with students and thus motivating them to complete it.
Conversely, lower homework completion rates might raise flags about potential challenges students face. These can range from the homework being perceived as too difficult or irrelevant, to external factors such as familial obligations or extracurricular activities taking up significant time. Schools can use this quantitative data to reassess the nature and volume of homework assigned or to initiate conversations with students about their challenges, ensuring that homework remains a productive, beneficial aspect of the learning process.
7. Frequency of Library Book Checkouts: Monitoring Reading Habits
In K-12 schools, libraries often serve as hubs of exploration, learning, and growth. Tracking the frequency of library book checkouts can provide a quantitative measure of students’ reading habits and interests. A high frequency indicates an enthusiastic student body actively engaging with literature, research, or both. It can also reflect the effectiveness of library programs, reading challenges, or events aimed at promoting literary exploration.
On the other hand, a decline or consistently low checkout rate might signal a waning interest in reading or challenges in accessing library resources. This could prompt schools to examine the relevance and variety of available books, consider introducing digital reading platforms, or revamp the library’s ambiance to make it more inviting. Ultimately, this quantitative data aids schools in ensuring their libraries remain vibrant centers of literary exploration and learning for all students.
Quantitative data examples in education offer valuable insights into various aspects of the learning process. By analyzing different types of data in education, policymakers can make informed decisions and develop strategies to enhance educational outcomes. Harnessing the power of quantitative data allows educators to foster an environment where every student has the opportunity to thrive and reach their full potential.
As you delve into the diverse landscape of quantitative data in education, it’s paramount to harness tools that streamline analysis and interpretation. The Inno™ Starter Kits have been meticulously crafted to assist educators in navigating the intricate world of data. Whether you’re just beginning your data-driven journey or are an established expert, these kits offer a comprehensive solution to visualizing, understanding, and applying quantitative insights. Explore today and unlock unparalleled potential in educational outcomes!
In This Article: Quantitative Research Designs in Educational Research
- Introduction
- General Overviews
- Survey Research Designs
- Correlational Designs
- Other Nonexperimental Designs
- Randomized Experimental Designs
- Quasi-Experimental Designs
- Single-Case Designs
- Single-Case Analyses
Quantitative Research Designs in Educational Research
by James H. McMillan, Richard S. Mohn, Micol V. Hammack
LAST REVIEWED: 24 July 2013 | LAST MODIFIED: 24 July 2013 | DOI: 10.1093/obo/9780199756810-0113
The field of education has embraced quantitative research designs since early in the 20th century. The foundation for these designs was based primarily in the psychological literature, and psychology and the social sciences more generally continued to have a strong influence on quantitative designs until the assimilation of qualitative designs in the 1970s and 1980s. More recently, a renewed emphasis on quasi-experimental and nonexperimental quantitative designs to infer causal conclusions has resulted in many newer sources specifically targeting these approaches to the field of education. This bibliography begins with a discussion of general introductions to all quantitative designs in the educational literature. The sources in this section tend to be textbooks or well-known sources written many years ago, though still very relevant and helpful. It should be noted that there are many other sources in the social sciences more generally that contain principles of quantitative designs that are applicable to education. This article then classifies quantitative designs primarily as either nonexperimental or experimental but also emphasizes the use of nonexperimental designs for making causal inferences. Among experimental designs the article distinguishes between those that include random assignment of subjects, those that are quasi-experimental (with no random assignment), and those that are single-case (single-subject) designs. Quasi-experimental and nonexperimental designs used for making causal inferences are becoming more popular in education given the practical difficulties and expense in conducting well-controlled experiments, particularly with the use of structural equation modeling (SEM). There have also been recent developments in statistical analyses that allow stronger causal inferences. Historically, quantitative designs have been tied closely to sampling, measurement, and statistics. 
In this bibliography there are important sources for newer statistical procedures that are needed for particular designs, especially single-case designs, but relatively little attention is given to sampling or measurement. The literature on quantitative designs in education is not well focused or comprehensively addressed in many sources, except in general overview textbooks. Those sources that do include the range of designs are introductory in nature; more advanced designs and statistical analyses tend to be found in journal articles and other individual documents, with a couple of exceptions. Another recent trend in educational research designs is the use of mixed-method designs (both quantitative and qualitative), though this article does not emphasize them.
For many years there have been textbooks that present the range of quantitative research designs, both in education and the social sciences more broadly. Indeed, most of the quantitative design research principles are much the same for education, psychology, and other social sciences. These sources provide an introduction to basic designs that are used within the broader context of other educational research methodologies such as qualitative and mixed-method. Examples of these textbooks written specifically for education include Johnson and Christensen 2012; Mertens 2010; Arthur, et al. 2012; and Creswell 2012. An example of a similar text written for the social sciences, including education, that is dedicated only to quantitative research is Gliner, et al. 2009. In these texts separate chapters are devoted to different types of quantitative designs. For example, Creswell 2012 contains three quantitative design chapters—experimental, which includes both randomized and quasi-experimental designs; correlational (nonexperimental); and survey (also nonexperimental). Johnson and Christensen 2012 also includes three quantitative design chapters, with greater emphasis on quasi-experimental and single-subject research. Mertens 2010 includes a chapter on causal-comparative designs (nonexperimental). Often survey research is addressed as a distinct type of quantitative research with an emphasis on sampling and measurement (how to design surveys). Green, et al. 2006 also presents introductory chapters on different types of quantitative designs, but each of the chapters has different authors. In this book chapters extend basic designs by examining in greater detail nonexperimental methodologies structured for causal inferences and scaled-up experiments. Two additional sources are noted because they represent the types of publications for the social sciences more broadly that discuss many of the same principles of quantitative design among other types of designs.
Bickman and Rog 2009 uses different chapter authors to cover topics such as statistical power for designs, sampling, randomized controlled trials, and quasi-experiments, and educational researchers will find this information helpful in designing their studies. Little 2012 provides a comprehensive coverage of topics related to quantitative methods in the social, behavioral, and education fields.
Arthur, James, Michael Waring, Robert Coe, and Larry V. Hedges, eds. 2012. Research methods & methodologies in education. Thousand Oaks, CA: SAGE.
Readers will find this book more of a handbook than a textbook. Different individuals author each of the chapters, representing quantitative, qualitative, and mixed-method designs. The quantitative chapters treat advanced statistical applications, including analysis of variance, regression, and multilevel analysis.
Bickman, Leonard, and Debra J. Rog, eds. 2009. The SAGE handbook of applied social research methods. 2d ed. Thousand Oaks, CA: SAGE.
This handbook includes quantitative design chapters that are written for the social sciences broadly. There are relatively advanced treatments of statistical power, randomized controlled trials, and sampling in quantitative designs, though the coverage of additional topics is not as complete as other sources in this section.
Creswell, John W. 2012. Educational research: Planning, conducting, and evaluating quantitative and qualitative research. 4th ed. Boston: Pearson.
Creswell presents an introduction to all major types of research designs. Three chapters cover quantitative designs—experimental, correlational, and survey research. Both the correlational and survey research chapters focus on nonexperimental designs. Overall the introductions are complete and helpful to those beginning their study of quantitative research designs.
Gliner, Jeffrey A., George A. Morgan, and Nancy L. Leech. 2009. Research methods in applied settings: An integrated approach to design and analysis. 2d ed. New York: Routledge.
This text, unlike others in this section, is devoted solely to quantitative research. As such, all aspects of quantitative designs are covered. There are separate chapters on experimental, nonexperimental, and single-subject designs and on internal validity, sampling, and data-collection techniques for quantitative studies. The content of the book is somewhat more advanced than others listed in this section and is unique in its quantitative focus.
Green, Judith L., Gregory Camilli, and Patricia B. Elmore, eds. 2006. Handbook of complementary methods in education research. Mahwah, NJ: Lawrence Erlbaum.
Green, Camilli, and Elmore edited forty-six chapters that represent many contemporary issues and topics related to quantitative designs. Written by noted researchers, the chapters cover design experiments, quasi-experimentation, randomized experiments, and survey methods. Other chapters include statistical topics that have relevance for quantitative designs.
Johnson, Burke, and Larry B. Christensen. 2012. Educational research: Quantitative, qualitative, and mixed approaches. 4th ed. Thousand Oaks, CA: SAGE.
This comprehensive textbook of educational research methods includes extensive coverage of qualitative and mixed-method designs along with quantitative designs. Three of twenty chapters focus on quantitative designs (experimental, quasi-experimental, and single-case) and nonexperimental, including longitudinal and retrospective, designs. The level of material is relatively high, and there are introductory chapters on sampling and quantitative analyses.
Little, Todd D., ed. 2012. The Oxford handbook of quantitative methods. Vol. 1, Foundations. New York: Oxford Univ. Press.
This handbook is a relatively advanced treatment of quantitative design and statistical analyses. Multiple authors are used to address strengths and weaknesses of many different issues and methods, including advanced statistical tools.
Mertens, Donna M. 2010. Research and evaluation in education and psychology: Integrating diversity with quantitative, qualitative, and mixed methods. 3d ed. Thousand Oaks, CA: SAGE.
This textbook is an introduction to all types of educational designs and includes four chapters devoted to quantitative research—experimental and quasi-experimental, causal comparative and correlational, survey, and single-case research. The author’s treatment of some topics is somewhat more advanced than texts such as Creswell 2012, with extensive attention to threats to internal validity for some of the designs.
- Science Education
- Secondary to Postsecondary Transition Issues
- Self-Regulated Learning
- Self-Study of Teacher Education Practices
- Service-Learning
- Severe Disabilities
- Single Salary Schedule
- Single-sex Education
- Social Context of Education
- Social Justice
- Social Pedagogy
- Social Science and Education Research
- Social Studies Education
- Sociology of Education
- Standards-Based Education
- Student Access, Equity, and Diversity in Higher Education
- Student Assignment Policy
- Student Engagement in Tertiary Education
- Student Learning, Development, Engagement, and Motivation ...
- Student Participation
- Student Voice in Teacher Development
- Sustainability Education in Early Childhood Education
- Sustainability in Early Childhood Education
- Sustainability in Higher Education
- Teacher Beliefs and Epistemologies
- Teacher Collaboration in School Improvement
- Teacher Evaluation and Teacher Effectiveness
- Teacher Preparation
- Teacher Training and Development
- Teacher Unions and Associations
- Teacher-Student Relationships
- Teaching Critical Thinking
- Technologies, Teaching, and Learning in Higher Education
- Technology Education in Early Childhood
- Technology, Educational
- Technology-based Assessment
- The Bologna Process
- The Regulation of Standards in Higher Education
- Theories of Educational Leadership
- Three Conceptions of Literacy: Media, Narrative, and Gamin...
- Tracking and Detracking
- Traditions of Quality Improvement in Education
- Transformative Learning
- Transitions in Early Childhood Education
- Tribally Controlled Colleges and Universities in the Unite...
- Understanding the Psycho-Social Dimensions of Schools and ...
- University Faculty Roles and Responsibilities in the Unite...
- Using Ethnography in Educational Research
- Value of Higher Education for Students and Other Stakehold...
- Virtual Learning Environments
- Vocational and Technical Education
- Wellness and Well-Being in Education
- Women's and Gender Studies
- Young Children and Spirituality
- Young Children's Learning Dispositions
- Young Children's Working Theories
- Privacy Policy
- Cookie Policy
- Legal Notice
- Accessibility
Powered by:
- [91.193.111.216]
- 91.193.111.216
Conducting Quantitative Research in Education. January 2020. ISBN: 978-981-13-9131-6. Edith Cowan University; University of Notre Dame Australia.
Trends and Motivations in Critical Quantitative Educational Research: A Multimethod Examination Across Higher Education Scholarship and Author Perspectives
- Open access
- Published: 04 June 2024
- Christa E. Winkler (ORCID: 0000-0002-1700-5444)
- Annie M. Wofford (ORCID: 0000-0002-2246-1946)
To challenge “objective” conventions in quantitative methodology, higher education scholars have increasingly employed critical lenses (e.g., quantitative criticalism, QuantCrit). Yet, specific approaches remain opaque. We use a multimethod design to examine researchers’ use of critical approaches and explore how authors discussed embedding strategies to disrupt dominant quantitative thinking. We draw data from a systematic scoping review of critical quantitative higher education research between 2007 and 2021 (N = 34) and semi-structured interviews with 18 manuscript authors. Findings illuminate (in)consistencies across scholars’ incorporation of critical approaches, including within study motivations, theoretical framing, and methodological choices. Additionally, interview data reveal complex layers to authors’ decision-making processes, indicating that decisions about embracing critical quantitative approaches must be asset-based and intentional. Lastly, we discuss findings in the context of their guiding frameworks (e.g., quantitative criticalism, QuantCrit) and offer implications for employing and conducting research about critical quantitative research.
Across the field of higher education and within many roles—including policymakers, researchers, and administrators—key leaders and educational partners have historically relied on quantitative methods to inform system-level and student-level changes to policy and practice. This reliance is rooted, in part, in the misconception that quantitative methods depict the objective state of affairs in higher education. This perception is not only inaccurate but also dangerous, as the numbers produced from quantitative methods are “neither objective nor color-blind” (Gillborn et al., 2018, p. 159). In fact, like all research, quantitative data collection and analysis are informed by theories and beliefs that are susceptible to bias. Further, such bias may come in multiple forms, such as researcher bias and bias within the statistical methods themselves (e.g., Bierema et al., 2021; Torgerson & Torgerson, 2003). Thus, if left unexamined from a critical perspective, quantitative research may inform policies and practices that fuel the engine of cultural and social reproduction in higher education (e.g., Bourdieu, 1977).
Largely, critical approaches to higher education research have been dominated by qualitative methods (McCoy & Rodricks, 2015). While qualitative approaches are vital, some have argued that a wider conceptualization of critical inquiry may propel our understanding of processes in higher education (Stage & Wells, 2014) and that critical research need not be explicitly qualitative (refer to Sablan, 2019; Stage, 2007). If scholars hope to embrace multiple ways of challenging persistent inequities and structures of oppression in higher education, such as racism, advancing critical quantitative work can help higher education researchers “expose and challenge hidden assumptions that frequently encode racist perspectives beneath the façade of supposed quantitative objectivity” (Gillborn et al., 2018, p. 158).
Across professional networks in higher education, the perspectives of association leaders (e.g., Association for the Study of Higher Education [ASHE]) have often placed qualitative and quantitative research in opposition to each other, with qualitative research being a primary way to amplify the voices of systemically minoritized students, faculty, and staff (Kimball & Friedensen, 2019). Yet, given the vast growth of critical higher education research (e.g., Byrd, 2019; Espino, 2012; Martínez-Alemán et al., 2015), recent ASHE presidents have recognized how prior leaders planted transformative seeds of critical theory and praxis (Renn, 2020) and advocated for critical higher education scholarship as a disrupter (Stewart, 2022). With this shift in discourse, many members of the higher education research community have also grown more eager to expand upon the legacy of critical research—in both qualitative and quantitative forms.
Critical quantitative approaches hold promise as one avenue for meeting recent calls to embrace equity-mindedness and transform the future of higher education research, yet current structures of training and resources for quantitative methods lack guidance on engaging such approaches. For higher education scholars to advance critical inquiry via quantitative methods, we must first understand the extent to which such approaches have been adopted. Accordingly, this study sheds light on critical quantitative approaches used in higher education literature and provides storied insights from the experiences of scholars who have engaged critical perspectives with quantitative methods. We were guided by the following research questions:
To what extent do higher education scholars incorporate critical perspectives into quantitative research?
How do higher education scholars discuss specific strategies to leverage critical perspectives in quantitative research?
Contextualizing Existing Critical Approaches to Quantitative Research
To foreground our analysis of literature employing critical quantitative lenses to studies about higher education, we first must understand the roots of such framing. Broadly, the foundations of critical quantitative approaches align with many elements of equity-mindedness. Equity-mindedness prompts individuals to question divergent patterns in educational outcomes, recognize that racism is embedded in everyday practices, and invest in un/learning the effects of racial identity and racialized expectations (Bensimon, 2018). Yet, researchers’ commitments to critical quantitative approaches stand out as a unique thread in the larger fabric of opportunities to embrace equity-mindedness in higher education research. Below, we discuss three significant publications that have been widely applied as frameworks to engage critical quantitative approaches in higher education. While these publications are not the only ones associated with critical inquiry in quantitative research, their evolution, commonalities, and distinctions offer a robust background of epistemological development in this area of scholarship.
Quantitative Criticalism (Stage, 2007)
Although some higher education scholars have applied critical perspectives in their research for many years, Stage’s (2007) introduction of quantitative criticalism was a salient contribution to creating greater discourse related to such perspectives. Quantitative criticalism, as a coined paradigmatic approach for engaging critical questions using quantitative data, appeared in one of the first of several crucial publications on this topic in a 2007 edition of New Directions for Institutional Research. Collectively, this special issue advanced perspectives on how higher education scholars may challenge traditional positivist and post-positivist paradigms in quantitative inquiry. Instead, researchers could apply (what Stage referred to as) quantitative criticalism to develop research questions centering on social inequities in educational processes and outcomes as well as challenge widely accepted models, measures, and analytic practices.
Notably, Stage (2007) grounded the motivation for this new paradigmatic approach in the core concepts of critical inquiry (e.g., Kincheloe & McLaren, 1994). Tracing critical inquiry back to the German Frankfurt school, Stage discussed how the principles of critical theory have evolved over time and highlighted Kincheloe and McLaren’s (1994) definition of critical theory as most relevant to the principles of quantitative criticalism. Kincheloe and McLaren’s definition of “critical” describes how researchers applying critical paradigms in their scholarship center concepts such as socially and historically created power structures, subjectivity, privilege and oppression, and the reproduction of oppression in traditional research approaches. Perhaps most importantly, Kincheloe and McLaren urge scholars to be self-conscious in their decision making—a tall ask of quantitative scholars operating from positivist and post-positivist vantage points.
In advancing quantitative criticalism, Stage (2007) first argued that all critical scholars must center their outcomes on equity. To enact this core focus on equity in quantitative criticalism, Stage outlined two tasks for researchers. First, critical quantitative researchers must “use data to represent educational processes and outcomes on a large scale to reveal inequities and to identify social or institutional perpetuation of systematic inequities in such processes and outcomes” (p. 10). Second, Stage advocated for critical quantitative researchers to “question the models, measures, and analytic practices of quantitative research in order to offer competing models, measures, and analytic practices that better describe experiences of those who have not been adequately represented” (p. 10). Stage’s arguments and invitations for criticalism spurred crucial conversations, many of which led to the development of a two-part series on critical quantitative approaches in New Directions for Institutional Research (Stage & Wells, 2014; Wells & Stage, 2015). With nearly a decade of new perspectives to offer, manuscripts within these subsequent special issues expanded the concepts of quantitative criticalism. Specifically, these new contributions advanced the notion that quantitative criticalism should include all parts of the research process—instead of maintaining a focus on paradigm and research questions alone—and made inroads when it came to challenging the (default, dominant) process of quantitative research. While many scholars offered noteworthy perspectives in these special issues (Stage & Wells, 2014; Wells & Stage, 2015), we now turn to one specific article that offered a conceptual model for critical quantitative inquiry.
Critical Quantitative Inquiry (Rios-Aguilar, 2014)
Building from and guided by the work of other criticalists (namely, Estela Bensimon, Sara Goldrick-Rab, Frances Stage, and Erin Leahey), Rios-Aguilar (2014) developed a complementary framework representing the process and application of critical quantitative inquiry in higher education scholarship. At the heart of Rios-Aguilar’s conceptualization lies the acknowledgment that quantitative research is a human activity that requires careful decisions. With this foundation comes the pressing need for quantitative scholars to engage in self-reflection and transparency about the processes and outcomes of their methodological choices—actions that could potentially disrupt traditional notions and deficit assumptions that maintain systems of oppression in higher education.
Rios-Aguilar (2014) offered greater specificity to build upon many principles from other criticalists. For one, methodologically, Rios-Aguilar challenged the notion of using “fancy” statistical methods just for the sake of applying advanced methods. Instead, she argued that critical quantitative scholars should engage “in a self-reflection of the actual research practices and statistical approaches (i.e., choice of centering approach, type of model estimated, number of control variables, etc.) they use and the various influences that affect those practices” (Rios-Aguilar, 2014, p. 98). In this purview, scholars should ensure that all methodological choices advance their ability to reveal inequities; such choices may include those that challenge the use of reference groups in coding, the interpretation of statistics in ways that move beyond p-values for statistical significance, or the application and alignment of theoretical and conceptual frameworks that focus on the assets of systemically minoritized students. Rios-Aguilar also noted, in agreement with the foundations of equity-mindedness and critical theory, that quantitative criticalists have an obligation to translate findings into tangible changes in policy and practice that can redress inequities.
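The call to interpret statistics in ways that move beyond p-values can be made concrete. The sketch below is illustrative only (it is not from the article, and the scores and group labels are hypothetical): it reports a disaggregated raw gap alongside a standardized effect size, so that practical magnitude is visible rather than a significance verdict alone.

```python
# Illustrative sketch: report an effect size and a disaggregated mean gap
# instead of relying on a p-value alone. Data below are hypothetical.
from statistics import mean, stdev

def cohens_d(a, b):
    """Standardized mean difference using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled = (((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
              / (na + nb - 2)) ** 0.5
    return (mean(a) - mean(b)) / pooled

group_a = [72, 75, 78, 80, 83]  # hypothetical outcome scores, group A
group_b = [68, 70, 71, 74, 77]  # hypothetical outcome scores, group B

print(round(mean(group_a) - mean(group_b), 2))  # raw gap, in score points
print(round(cohens_d(group_a, group_b), 2))     # standardized effect size
```

Reporting both numbers keeps the substantive size of a disparity in view even when sample sizes make significance tests uninformative.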
Ultimately, Rios-Aguilar’s (2014) framework focused on “the interplay between research questions, theory, method/research practices, and policy/advocacy” to identify how quantitative criticalists’ scholarship can be “relevant and meaningful” (p. 96). Specifically, Rios-Aguilar called upon quantitative criticalists to ask research questions that center on equity and power, engage in self-reflection about their data sources, analyses, and disaggregation techniques, attend to interpretation with practical/policy-related significance, and expand beyond field-level silos in theory and implications. Without challenging dominant approaches in quantitative higher education research, Rios-Aguilar noted that the field will continue to inaccurately capture the experiences of systemically minoritized students. In college access and success, for example, ignoring this need for evolving approaches and models would continue what Bensimon (2007) referred to as the Tintonian Dynasty, with scholars widely applying and citing Tinto’s work but failing to acknowledge the unique experiences of systemically minoritized students. These and other concrete recommendations have served as a springboard for quantitative criticalists, prompting scholars to incorporate critical approaches in more cohesive and congruent ways.
QuantCrit (Gillborn et al., 2018)
As an epistemologically different but related form of critical quantitative scholarship, QuantCrit—quantitative critical race theory—has emerged as a vital stream of inquiry that applies critical race theory to methodological approaches. Given that statistical methods were developed in support of the eugenics movement (Zuberi, 2001), QuantCrit researchers must consider how the “norms” of quantitative research support white supremacy (Zuberi & Bonilla-Silva, 2008). Fortunately, as Garcia et al. (2018) noted, “[t]he problems concerning the ahistorical and decontextualized ‘default’ mode and misuse of quantitative research methods are not insurmountable” (p. 154). As such, the goal of QuantCrit is to conduct quantitative research in a way that can contextualize and challenge historical, social, political, and economic power structures that uphold racism (e.g., Garcia et al., 2018; Gillborn et al., 2018).
In coining the term QuantCrit, Gillborn et al. (2018) provided five QuantCrit tenets adapted from critical race theory. First, the centrality of racism offers a methodological and political statement about how racism is complex, fluid, and rooted in social dynamics of power. Second, numbers are not neutral demonstrates an imperative for QuantCrit researchers—one that prompts scholars to understand how quantitative data have been collected and analyzed to prioritize interests rooted in white, elite worldviews. As such, QuantCrit researchers must reject numbers as “true” and as presenting a unidimensional truth. Third, categories are neither “natural” nor given prompts researchers to consider how “even the most basic decisions in research design can have fundamental consequences for the re/presentation of race inequity” (Gillborn et al., 2018, p. 171). Notably, even when race is a focus, scholars must operationalize and interpret findings related to race in the context of racism. Fourth, prioritizing voice and insight advances the notion that data cannot “speak for itself” and numerous interpretations are possible. In QuantCrit, this tenet leverages experiential knowledge among People of Color as an interpretive tool. Finally, the fifth tenet explicates how numbers can be used for social justice, but statistical research cannot be placed in a position of greater legitimacy in equity efforts relative to qualitative research. Collectively, although Gillborn et al. (2018) stated that they expect—much like all epistemological foundations—the tenets of QuantCrit to be expanded, we must first understand how these stated principles arise in critical quantitative research.
Bridging Critical Quantitative Concepts as a Guiding Framework
Guided by these framings (i.e., quantitative criticalism, critical quantitative inquiry, QuantCrit) as a specific stream of inquiry within the larger realm of equity-minded educational research, we explore the extent to which the primary elements of these critical quantitative frameworks are applied in higher education. Across the framings discussed, the commitment to equity-mindedness contributes to a shared underlying essence of critical quantitative approaches. Not only do Stage, Rios-Aguilar, and Gillborn et al. aim for researchers to center on inequities and commit to disrupting “neutral” decisions about and interpretations of statistics, but they also advocate for critical quantitative research (by any name) to serve as a tool for advocacy and praxis—creating structural changes to discriminatory policies and practices, rather than ceasing equity-based commitments with publications alone. Thus, the conceptual framework for the present study brings together alignments and distinctions in scholars’ motivations and actualizations of quantitative research through a critical lens.
Specifically, looking first to Stage (2007), quantitative criticalists must center on inequity in their questions and actions to disrupt traditional models, methods, and practices. Second, extending critical inquiry through all aspects of quantitative research (Rios-Aguilar, 2014), researchers must interrogate how critical perspectives can be embedded in every part of research. The embedded nature of critical approaches should consider how study questions, frameworks, analytic practices, and advocacy are developed with intentionality, reflexivity, and the goal of unmasking inequities. Third, centering on the five known tenets of QuantCrit (Gillborn et al., 2018), QuantCrit researchers should adapt critical race theory for quantitative research. Although QuantCrit tenets are likely to be expanded in the future, the foundations of such research should continue to acknowledge the centrality of racism, advance critiques of statistical neutrality and categories that serve white racial interests, prioritize the lived experiences of People of Color, and complicate how statistics can be one—but not the lone—part of social justice endeavors.
Over many years, higher education scholars have advanced more critical research, as illustrated through publication trends of critical quantitative manuscripts in higher education (Wofford & Winkler, 2022). However, the application of critical quantitative approaches remains laced with tensions among paradigms and analytic strategies. Despite recent systematic examinations of critical quantitative scholarship across educational research broadly (Tabron & Thomas, 2023), there has yet to be a comprehensive, systematic review of higher education studies that attempt to apply principles rooted in quantitative criticalism, critical quantitative inquiry, and QuantCrit. Thus, much remains to be learned regarding whether and how higher education researchers have been able to apply the principles previously articulated. In order for researchers to fully (re)imagine possibilities for future critical approaches to quantitative higher education research, we must first understand the landscape of current approaches.
Study Aims and Role of the Researchers
Study Aims and Scope
For this study, we examined the extent to which authors adopted critical quantitative approaches in higher education research and the trends in tools and strategies they employed to do so. In other words, we sought to understand to what extent, and in what ways, authors—in their own perspectives—applied critical perspectives to quantitative research. We relied on the nomenclature used by the authors of each manuscript (e.g., whether they operated from the lens of quantitative criticalism, QuantCrit, or another approach determined by the authors). Importantly, our intent was not to evaluate the quality of authors’ applications of critical approaches to quantitative research in higher education.
Researcher Positionality
As with all research, our positions and motivations shape how we conceptualized and executed the present study. We come to this work as early career higher education faculty, drawn to the study of higher education as one way to rectify educational disparities, and thus are both deeply invested in understanding how critical quantitative approaches may advance such efforts. After engaging in initial discussions during an association-sponsored workshop on critical quantitative research in higher education, we were motivated to explore these perspectives, understand trends in our field, and inform our own empirical engagement. Throughout our collaboration, we were also reflexive about the social privileges we hold in the academy and society as white, cisgender women—particularly given how quantitative criticalism and QuantCrit create inroads for systemically minoritized scholars to combat the erasure of perspectives from their communities due to small sample sizes. As we work to understand prior critical quantitative endeavors, with the goal of creating opportunity for this work to flourish in the future, we continually reflect on how we can use our positions of privilege to be co-conspirators in the advancement of quantitative research for social justice in higher education.
This study employed a qualitatively driven multimethod sequential design (Hesse-Biber et al., 2015) to illuminate how critical quantitative perspectives and methods have been applied in higher education contexts over 15 years. Anguera et al. (2018) noted that the hallmark feature of multimethod studies is the coexistence of different methodologies. Unlike mixed-methods studies, which integrate both quantitative and qualitative methods, multimethod studies can be exclusively qualitative, exclusively quantitative, or a combination of qualitative and quantitative methods. A multimethod research design was also appropriate given the distinct research questions in this study—each answered using a different stream of data. Specifically, we conducted a systematic scoping review of existing literature and facilitated follow-up interviews with a subset of corresponding authors from included publications, as detailed below and in Fig. 1. We employed a systematic scoping review to examine the extent to which higher education scholars incorporated critical perspectives into quantitative research (research question one), and we then conducted follow-up interviews to elucidate how those scholars discussed specific strategies for leveraging critical perspectives in their quantitative research (research question two).
Fig. 1 Sequential multimethod approach to data collection and analysis
Given the scope of our work—which examined the extent to which, and in what ways, authors applied critical perspectives to quantitative higher education research—we employed an exploratory approach with a constructivist lens. Using a constructivist paradigm allowed us to explore the many realities of doing critical quantitative research, with the authors themselves constructing truths from their worldviews (Magoon, 1977). In what follows, we contextualize both our methodological choices and the limitations of those choices in executing this study.
Data Sources
Systematic Scoping Review
First, we employed a systematic scoping review of published higher education literature. Consistent with the purpose of a scoping review, we sought to “examine the extent, range, and nature” of critical quantitative approaches in higher education that integrate quantitative methods and critical inquiry (Arksey & O’Malley, 2005, p. 6). We used a multi-stage scoping framework (Arksey & O’Malley, 2005; Levac et al., 2010) to identify studies that were (a) empirical, (b) conducted within a higher education context, and (c) guided by critical quantitative perspectives. We restricted our review to literature published in 2007 or later (i.e., since Stage’s formal introduction of quantitative criticalism in higher education). All studies considered for review were written in the English language.
The literature search spanned multiple databases, including Academic Search Premier, Scopus, ERIC, PsycINFO, Web of Science, SocINDEX, Psychological and Behavioral Sciences Collection, Sociological Abstracts, and JSTOR. To locate relevant works, we used independent and combined keywords that reflected the inclusion criteria, with the initial search resulting in 285 unique records for eligibility screening. All screening was conducted separately by both authors using the CADIMA online platform (Kohl et al., 2018). In total, 285 title/abstract records were screened for eligibility, with 40 full-text records subsequently screened. After separately screening all records, we discussed inconsistencies in title/abstract and full-text eligibility ratings to reach consensus. This strategy led us to a sample of 34 manuscripts that met all inclusion criteria (Fig. 2).
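The screening logic described above can be expressed as a simple filter over candidate records. The sketch below is purely illustrative: the records and field names are invented (the authors screened records in the CADIMA platform, not in code), but the predicate mirrors the stated criteria (a)–(c) plus the 2007-onward, English-language restriction.

```python
# Hypothetical candidate records; field names are invented for illustration.
RECORDS = [
    {"id": "r1", "empirical": True,  "higher_ed": True,  "critical_quant": True,  "year": 2015, "language": "en"},
    {"id": "r2", "empirical": True,  "higher_ed": False, "critical_quant": True,  "year": 2018, "language": "en"},
    {"id": "r3", "empirical": False, "higher_ed": True,  "critical_quant": True,  "year": 2010, "language": "en"},
    {"id": "r4", "empirical": True,  "higher_ed": True,  "critical_quant": True,  "year": 2006, "language": "en"},
]

def eligible(record):
    """Apply criteria (a)-(c) plus the publication-window and language screens."""
    return (record["empirical"]            # (a) empirical study
            and record["higher_ed"]        # (b) higher education context
            and record["critical_quant"]   # (c) critical quantitative lens
            and record["year"] >= 2007     # published since Stage (2007)
            and record["language"] == "en")

included = [r["id"] for r in RECORDS if eligible(r)]
print(included)  # only records passing every screen remain
```

In the actual review the same conjunction of criteria was applied twice, independently by each author, with disagreements resolved by discussion.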
Fig. 2 Identification of systematic scoping review sample via literature search and screening
Systematic scoping reviews are particularly well-suited for initial examinations of emerging approaches in the literature (Munn et al., 2018), aligning with our goal to establish an initial understanding of the landscape of critical quantitative research applications in higher education. The approach also relies heavily on researcher-led qualitative review of the literature, which we viewed as a vital component of our study, as we sought to identify not just what researchers did (e.g., what topics they explored or in what outlets they published), but also how they articulated their decision-making process in the literature. Alternative methods of examining the literature, such as bibliometric analysis, supervised topic modeling, and network analysis, may reveal additional insights regarding the scope and structure of critical quantitative research in higher education not addressed in the current study. As noted by Munn et al. (2018), systematic scoping reviews can serve as a useful precursor to more advanced approaches to research synthesis.
Semi-structured Interviews
To understand how scholars navigated the opportunities and tensions of critical quantitative inquiry in their research, we then conducted semi-structured interviews with authors whose work was identified in the scoping review. For each article meeting the review criteria ( N = 34), we compiled information about the corresponding author and their contact information as our sample universe (Robinson, 2014 ). Each corresponding author was contacted via email for participation in a semi-structured interview. There were 32 distinct corresponding authors for the 34 manuscripts, as two corresponding authors led two manuscripts each within our corpus of data. In the recruitment email, we provided corresponding authors with a link to a Qualtrics intake survey; this survey confirmed potential participants’ role as corresponding author on the identified manuscript, collected information about their professional roles and social identities, and provided information about informed consent in the study. Twenty-five authors responded to the Qualtrics survey, with 18 corresponding authors ultimately participating in an interview.
Individual semi-structured interviews were conducted via Zoom and lasted approximately 45–60 min. The interview protocol began with questions about corresponding authors’ backgrounds, then moved into questions regarding their motivations for engaging in critical approaches to quantitative methods; their navigation of the epistemological and methodological tensions that may arise when doing quantitative research with a critical lens; their approaches to research design, frameworks, and methods that challenged quantitative norms; and their experiences with the publication process for their manuscript included in the scoping review. In other words, we asked corresponding authors to explicitly relay the thought processes underlying their methodological choices in the article(s) from our scoping review. Importantly, given the semi-structured nature of these interviews, conversations also reflected participants’ broader trajectory to and through critical quantitative thinking as well as their general reflections about how the field of higher education has grappled with critical approaches to quantitative scholarship. To increase consistency in our data collection and the nature of these conversations, the first author conducted all interviews. With participants’ consent, we recorded each interview, had interviews professionally transcribed, and then de-identified data for subsequent analysis. All interview participants were compensated for their time and contributions with a $50 Amazon gift card.
At the conclusion of each interview, participants were given the opportunity to select their own pseudonym. A profile of interview participants, along with their self-selected pseudonyms, is provided in Table 1 . Although we invited all corresponding authors to participate in interviews, our sample may reflect some self-selection bias, as authors had to opt in to be represented in the interview data. Further, interview insights do not represent all perspectives from participants’ co-authors, some of which may diverge based on lived experiences, history with quantitative research, or engagement with critical quantitative approaches.
Data Analysis
After identifying the sample of 34 publications, we began data analysis for the scoping review by uploading manuscripts to Dedoose. Both researchers then independently applied a priori codes (Saldaña, 2015 ) from Stage’s ( 2007 ) conceptualization of quantitative criticalism, Rios-Aguilar’s ( 2014 ) framework for quantitative critical inquiry, and Gillborn et al.’s ( 2018 ) QuantCrit tenets (Table 2 ). While we applied codes in accordance with Stage’s and Rios-Aguilar’s conceptualizations to each article, codes relevant to Gillborn et al.’s tenets of QuantCrit were only applied to manuscripts where authors self-identified as explicitly employing QuantCrit. Given the distinct epistemological origin of QuantCrit from broader forms of critical quantitative scholarship, codes representing the tenets of QuantCrit reflect its origins in critical race theory and may not be appropriate to apply to broader streams of critical quantitative scholarship that do not center on racism (e.g., scholarship related to (dis)ability, gender identity, sexual identity and orientation). After individually completing a priori coding, we met to reconcile discrepancies and engage in peer debriefing (Creswell & Miller, 2000 ). Data synthesis involved tabulating and reporting findings to explore how each manuscript component aligned with critical quantitative frameworks in higher education research to date.
We analyzed interview data through a multiphase process that engaged deductive and inductive coding strategies. After interviews were transcribed and redacted, we uploaded the transcripts to Dedoose for collaborative qualitative coding. The second author read each transcript in full to holistically understand participants’ insights about generating critical quantitative research. During this initial read, the second author noted quotes that were salient to our question regarding the strategies that scholars use to employ critical quantitative approaches.
Then, using the a priori codes drawn from Stage’s ( 2007 ), Rios-Aguilar’s ( 2014 ) and Gillborn et al.’s ( 2018 ) conceptualizations relevant to quantitative criticalism, critical quantitative inquiry, and QuantCrit, we collaboratively established a working codebook for deductive coding by defining the a priori codes in ways that could capture how participants discussed their work. Although these a priori codes had been previously applied to the manuscripts in the scoping review, definitions and applications of the same codes for interview analysis were noticeably broader (to align with the nature of conversations during interviews). For example, we originally applied the code “policy/advocacy”—established from Rios-Aguilar's work—to components from the implications section of scoping review manuscripts. When (re)developed for deductive coding of interview data, however, we expanded the definition of “policy/advocacy” to include participants’ policy- and advocacy-related actions (beyond writing) that advanced critical inquiry and equity for their educational communities.
In the final phase of analysis, each research team member engaged in inductive coding of the interview data. Specifically, we relied on open coding (Saldaña, 2015 ) to analyze excerpts pertaining to participants’ strategies for employing critical quantitative approaches that were not previously captured by deductive codes. Through open coding, we used successive analysis to work in sequence from a single case to multiple cases (Miles et al., 2014 ). Then, as suggested by Saldaña ( 2015 ), we collapsed our initial codes into broader categories that gave us insight into how participants’ strategies in critical quantitative research expanded beyond those previously articulated. Finally, to draw cohesive interpretations from these data, we independently drafted analytic memos for each interview participant’s transcript, later bridging examples from the scoping review that mapped onto qualitative codes as a form of establishing greater confidence and trustworthiness in our multimethod design.
In introducing study findings through a synthesized lens that heeds our multimethod design, we organize the sections below to draw from both scoping review and interview data. Specifically, we organize findings into two primary areas that address authors’ (1) articulated motivations to adopt critical approaches to quantitative higher education research, and (2) methodological choices that they perceive to align with critical approaches to quantitative higher education research. Within these sections, we discuss several coherent areas where authors collectively grappled with tensions in motivation (i.e., broad motivations, using coined names of critical approaches, conveying positionality, leveraging asset-based frameworks) and method (i.e., using data sources and choosing variables, challenging coding norms, interpreting statistical results), all of which signal authors’ efforts to embody criticality in quantitative research about higher education. Given our sequential research questions, which first examined the landscape of critical quantitative higher education research and then asked authors to elucidate their thought processes and strategies underlying their approaches to these manuscripts, our findings primarily focus on areas of convergence across data sources; we do, however, highlight challenges and tensions authors faced in conducting such work.
Articulated Motivations in Critical Approaches to Quantitative Research
To date, critical quantitative researchers in higher education have heeded Stage’s ( 2007 ) call to use data to reveal the large-scale perpetuation of inequities in educational processes and outcomes. This emerged as a defining aspect of higher education scholars’ critical quantitative work, as all manuscripts ( N = 34) in the scoping review articulated underlying motivations to identify and/or address inequities.
Often, these motivations were reflected in the articulated research questions ( n = 31; 91.2%). For example, one manuscript sought to “critically examine […] whether students were differentially impacted” by an educational policy based on intersecting race/ethnicity, gender, and income (Article 29, p. 39). Others sought to challenge notions of homogeneity across groups of systemically minoritized individuals by “explor[ing] within-group heterogeneity” of constructs such as sense of belonging among Asian American students (Article 32, p. iii) and “challenging the assumption that [economically and educationally challenged] students are a monolithic group with the same values and concerns” (Article 31, p. 5). These underlying motivations for conducting critical quantitative research emerged most clearly in the named approaches, positionality statements, and asset-based frameworks articulated in manuscripts.
Adopting the Coined Names of Quantitative Criticalism, QuantCrit, and Related Approaches
Based on the inclusion criteria applied in the scoping review, we anticipated that all manuscripts would employ approaches that were explicitly critical and quantitative in nature. Accordingly, all manuscripts ( N = 34; 100%) adopted approaches that were coined as quantitative criticalism , QuantCrit , critical policy analysis (CPA), critical quantitative intersectionality (CQI) , or some combination of those terms. Twenty-one manuscripts (61.8%) identified their approach as quantitative criticalism, nine manuscripts (26.5%) identified their approach as QuantCrit, two manuscripts (5.9%) identified their approach as CPA, and two manuscripts (5.9%) identified their approach as CQI.
One of the manuscripts that applied quantitative criticalism broadly described it as an approach that “seeks to quantitatively understand the predictors contributing to completion for a specific population of minority students” (Article 34, p. 62), noting that researchers have historically “attempted to explain the experiences of [minority] students using theories, concepts, and approaches that were initially designed for white, middle and upper class students” (Article 34, p. 62). Although this example speaks only to the limited context and outcomes of one study, it highlights a broader theme found across articles; that is, quantitative criticalism was often leveraged to challenge dominant theories, concepts, and approaches that failed to represent systemically minoritized individuals’ experiences. In challenging dominant theories, QuantCrit applications were most explicitly associated with critical race theory and issues of racism. One manuscript noted that “QuantCrit recognizes the limitations of quantitative data as it cannot fully capture individual experiences and the impact of racism” (Article 29, p. 9). However, these authors subsequently noted that “quantitative methodology can support CRT work by measuring and highlighting inequities” (Article 29, p. 9). Several scholars who employed QuantCrit explicitly identified tenets of QuantCrit that they aimed to address, with several authors making clear how they aligned decisions with two tenets establishing that categories are not given and numbers are not neutral.
Although authors broadly applied several of the coined names for critical realms of quantitative research, interview data revealed that several felt a palpable tension in labeling. Some participants, like Nathan, questioned the surface-level engagement that may come with coined names: “I don’t know, I think it’s the thinking and the thought processes and the intentionality that matters. How invested should we be in the label?” Nathan elaborated by noting how he has shied away from labeling some of his work as quantitative criticalist , given that he did not have a clear answer about “what would set it apart from the equity-minded, inequality-focused, structurally and systematically-oriented kind of work.” Similarly, Leo stated how labels could (un)intentionally stop short of the true mission for the research, recalling that he felt “more inclined to say that I’m employing critical quantitative leanings or influences from critical quant” because a true application of critical epistemology should be apparent in each part of the research process. Although most interview participants remained comfortable with labeling, we also note that—within both interview data and the articles themselves—authors sometimes presented varied source attributions for labels and conflated some of the coined names, representing the messiness of this emerging body of research.
Challenging Objectivity by Conveying Researcher Positionality
Positionality statements acknowledge the influence of scholars’ identities and social positions on research decisions. Quantitative research has historically been viewed as an objective, value-neutral endeavor, with some researchers deeming positionality statements as unnecessary and inconsistent with the positivist paradigm from which such work is often conducted. Several interviewed authors noted that positivist or post-positivist roots of quantitative research characterized their doctoral training, which often meant that their “original thinking around statistics and research was very post-positivist” (Carter) or that “there really wasn’t much of a discussion, as far as I can remember as a doc student, about epistemology or ontology” (Randall). Although positionality statements have been generally rare in quantitative research studies, half of the manuscripts in our sample ( n = 17; 50.0%) included statements of researcher positionality. One interview participant, Gabrielle, discussed the importance of positionality statements as one way to challenge norms of quantitative research in saying:
It’s not objective, right? I think having more space to say, “This is why I chose the measures I chose. This is how I’m coming to this work. This is why it matters to me. This is my positioning, right?” I think that’s really important in quantitative work…that raises that level of consciousness to say these are not just passive, like every decision you make in your research is an active decision.
While Gabrielle, as well as Carter and Randall, all came to be advocates of positionality statements in quantitative scholarship through different pathways, it became clear through these and other interviews that positionality statements were one way to bring greater transparency to a traditionally value-neutral space.
As an additional source of contextual data, we reviewed submission guidelines for the peer-reviewed journals in which manuscripts were published. Not one of the 15 peer-reviewed outlets represented in our scoping review sample required that authors include positionality statements. One outlet, Journal of Diversity in Higher Education (where two scoping review articles were published), offered “inclusive reporting standards” recommending that authors include reflexivity and positionality statements in their submitted manuscripts (American Psychological Association, 2024 ). Another outlet, Teachers College Record (where one scoping review article was published), mentioned positionality statements in their author instructions. Yet, Teachers College Record neither required nor recommended the inclusion of author positionality statements; rather, they offered recommendations if authors chose to include them. Specifically, they suggested that if authors chose to include a positionality statement, it should be “more than demographic information or abstract statements” (Sage Journals, 2024 ). The remaining 13 peer-reviewed outlets from the scoping review data made no mention of author reflexivity or positionality in their author guidelines.
When present, positionality statements varied in form and content. Some positionality statements were embedded in manuscript narratives, while others existed as separate tables with each author’s positionality represented as a separate row. In content, it was most common for authors to identify how their identities and experiences motivated their work. For example, one author noted their shared identity with their research participants as a low-income, first-generation Latina college student (Article 2, p. 25). Another author discussed the identity that they and their co-author shared as AAPI faculty, making the research “personally relevant for [them]” (Article 11, p. 344).
In interviews, participants recalled how their identities, lived experiences, and motivations for critical approaches to quantitative research were all intertwined. Leo mentioned, “naming who we are in a study helps us be very forthright with the pieces that we’re more likely to attend to.” Yet, Leo went on to say that “one of the most cosmetic choices that people see in critically oriented quantitative research is our positionality statements,” a concern other participants echoed regarding how information in positionality statements is presented. In several interviews, authors’ reflections on whether these statements should appear as lists of identities or deeper statements about reflexivity presented a clear tension. For some, positionality statements were places to “identify ourselves and our social locations” (David) or “brand yourself” as a critical quantitative scholar to meet “trendy” writing standards in this area (Michelle). Yet, others felt such statements fall short in revealing “how this study was shaped by their background identities and perspectives” (Junco) or appear to “be written in response to the context of the research or people participating” (Ginger). Ultimately, many participants felt that shaping honest positionality statements that better convey “the assumptions, and the biases and experiences we’ve all had” (Randall) was one area where quantitative higher education scholars could significantly improve their writing to reflect a critical lens.
Some manuscripts also clarified how authors’ identities and social positions reshaped the research process and product. For instance, authors of one manuscript reported being “guided by [their] cultural intuition” throughout the research (Article 17, p. 218). Alternatively, another author described the narrative style of their manuscript as intentionally “autobiographical and personally reflexive” in order “to represent the connections [they] made between [their] own experiences and findings that emerged” from their work (Article 28, p. 56). Taken together, among the manuscripts that explicitly included positionality statements, these remarks make clear that authors had widely varying approaches to their reflexivity and writing processes.
Actualizing Asset-Based Frameworks
Notably, conceptual and theoretical frameworks emerged as a common way for critical quantitative scholars to pursue equitable educational processes and outcomes in higher education research. Nearly all ( n = 32; 94.1%) manuscripts explicitly challenged dominant conceptual and theoretical models. Some authors enacted this challenge by countering canonical constructs and theories in the framing of their study. For example, several manuscripts addressed critiques of theoretical concepts such as integration and sense of belonging in building the conceptual framework for their own studies. Other manuscripts were constructed with the underlying goal to problematize and redefine frameworks, such as engagement for Latina/e/o/x students or the “leaky pipeline” discourse related to broadening participation in the sciences.
Across interviews, participants challenged deficit framings or “traditional” theoretical and conceptual approaches in many ways. Some frameworks are taken as a “truism in higher ed” (Leo), such as sense of belonging and Astin’s ( 1984 ) I-E-O model, and these frameworks were sometimes purposefully used to disrupt their normative assumptions. Randall, for one, recalled using a more normative higher education framework but opted to think about this framework “as more culturalized” than had previously been done. Further, Carter noted that “thinking about the findings in an anti-deficit lens” comprised a large portion of critical quantitative approaches. Caroline further exemplified using frameworks for asset-based interpretation, stating, “We found that Black students don’t do as well, but it’s not the fault of Black students.” Instead, Caroline challenged deficit understandings through the selected framework and implications for institutional policy. Collectively, challenging normative theoretical underpinnings in higher education was widely favored among participants, and Jackie hoped that “the field continues to turn a critical lens onto itself, to grow and incorporate new knowledges and even older forms of knowledge that maybe it hasn’t yet.”
Alternatively, some participants discussed rejecting widely used frameworks in higher education research in favor of adapting frameworks from other disciplines. For example, QuantCrit researchers drew from critical race theory (and related frameworks, such as intersectionality) to quantitatively examine higher education topics in ways that value the knowledge of People of Color. In using these frameworks, which have origins in critical legal and Black feminist theorization, interview participants noted how important it was “to put yourself out there with talking about race and racism” (Isabel) and connect the statistics “back to systems related to power, privilege, and oppression [because] it’s about connecting [results] to these systemic factors that shape experience, opportunities, barriers, all of that kind of stuff” (Jackie). Further, several authors related pulling theoretical lenses from sociology, gender studies, feminist studies, and queer studies to explore asset-based theorization in higher education contexts and potentially (re)build culturally relevant concepts for quantitative measurement in higher education.
Embodying Criticality in Methodological Sources, Approaches, and Interpretations
Moving beyond underlying motivations of critical quantitative higher education research, scoping review authors also frequently actualized the task of questioning and reconstructing “models, measures, and analytic practices [to] better describe experiences of those who have not been adequately represented” (Stage, 2007 , p. 10). Common across all manuscripts ( N = 34) was the discussion of specific ways in which authors’ critical quantitative approaches informed their analytic decisions. In fact, “analytic practices” was by far the most prevalent code applied to the manuscripts in our dataset, with 342 total references across the 34 manuscripts. This amounted to 20.8% of the excerpts in the scoping review dataset being coded as reflecting critical quantitative approaches to analytic practices, specifically.
Interestingly, many analytic approaches reflected what some would consider “standard” quantitative methodological tools. For example, manuscripts employed factor analysis to assess measures, t-tests to examine differences between groups, and hierarchical linear regression to examine relationships in specific contexts. Some more advanced, though less commonly applied, methods included measurement invariance testing and latent class analysis. Thus, applying a critical quantitative lens tended not to involve applying a separate set of analytic tools; rather, the critical lens was reflected in authors’ selection of data sources and variables, approaches to data coding and (dis)aggregation, and interpretation of statistical results.
Selecting Data Sources and Variables
Although scholars were explicit in their underlying motivations and approaches to critical quantitative research, this did not often translate into explicitly critical data collection endeavors. Most manuscripts ( n = 29; 85.3%) leveraged existing measures and data sources for quantitative analysis. Existing data sources included national, large-scale datasets such as the Educational Longitudinal Study (NCES), National Survey of Recent College Graduates (NSF), and the Current Population Survey (U.S. Census Bureau). Other large-scale data sources reflecting specific higher education contexts and populations included the HEDS Diversity and Equity Campus Climate Survey, Learning About STEM Student Outcomes (LASSO) platform, and National Longitudinal Survey of Freshmen. Only five manuscripts (14.7%) conducted analysis using originally collected data and/or newly designed measures.
It was apparent, however, that many authors grappled with challenges related to using existing data and measures. Interview participants’ stories crystallized the strengths and limitations of secondary data. Over half of the interview participants in our study spoke about their choices regarding quantitative data sources. Some participants noted that surveys “weren’t really designed to ask critical questions” (Sarah) and discussed the issues with survey data collected around sex and gender (Jessica). Still, Sarah and Jessica drew from existing survey data to complicate the higher education experiences they aimed to understand and tried to leverage critical framing to question “traditional” definitions of social constructs. In another discussion about data sources and the design of such sources, Carter expanded by saying:
I came in without [being] able to think through the sampling or data collection portion, but rather “this is what I have, how do I use it in a way that is applying critical frameworks but also staying true to the data themselves.” That is something that looks different for each study.
In discussing quantitative data source design, more broadly, Tyler added: “In a lot of ways, all quantitative methods are mixed methods. All of our measures should be developed with a qualitative component to them.” In the scoping review articles, one example of this qualitative component is evident within the cognitive interviews that Sablan ( 2019 ) employed to validate survey items. Finally, several participants noted how crucial it is to “just be honest and acknowledge the [limitations of secondary data] in the paper” (Caroline) and “not try to hide [the limitations]” (Alexis), illustrating the value of increased transparency when it comes to the selection and use of existing quantitative data in manuscripts advancing critical perspectives.
Regardless of data source, attention to power, oppression, and systemic inequities was apparent in the selection of variables across manuscripts. Many variables, and thus the associated models, captured institutional contexts and conditions. The multilevel nature of variables, which extended beyond individual experiences, aligned with authors’ articulated motivations to disrupt inequitable educational processes and outcomes, which are often systemic and institutionalized in nature. For one, David explained key motivations behind his analytic process: “We could have controlled for various effects, but we really wanted to see how are [the outcomes] differing by these different life experiences?” David’s focus on moving past “controlling” for different effects shows a deep level of intentionality that was reflected among many participants. Carter expanded on this notion by recalling how variable selection required, “thinking through how I can account for systemic oppression in my model even though it’s not included in the survey…I’ve never seen it measured.” Further, Leo discussed how reflexivity shaped variable selection and shared: “Ultimately, it’s thinking about how do these environments not function in value-neutral ways, right? It’s not just selecting X, Y, and Z variable to include. It’s being able to interrogate [how] these variables represent environments that are not power neutral.” The process of selecting quantitative data sources and variables was perhaps best summed up by Nick, who concisely shared, “it’s been very iterative.” Indeed, most participants recalled how their methodological processes necessitated reflexivity—an iterative process of continually revisiting assumptions one brings to the quantitative research process (Jamieson et al., 2023 )—and a willingness to lean into innovative ways of operationalizing data for critical purposes.
Challenging the Norms of Coding
An especially common way of enacting critical principles in quantitative research was to challenge traditional norms of coding. This emerged in three primary ways: (1) disaggregation of categories to reflect heterogeneity in individuals’ experiences, (2) alternative approaches to identifying reference groups, and (3) efforts to capture individuals’ intersecting identities. Across manuscripts, authors often intentionally disaggregated identity subgroups (e.g., race/ethnicity, gender) and ran distinct analytical models for each subgroup separately. In interviews, Junco expressed that running separate models was one way that analyses could cultivate a different way of thinking about racial equity. Specifically, Junco challenged colleagues’ analytic processes by asking whether their research questions “really need to focus on racial comparison?” Junco then pushed her colleagues by asking, “can we make a different story when we look at just the Black groups? Or when we look at only Asian groups, can we make a different story that people have not really heard?” Isabel added that focusing on measurement for People of Color allowed them (Isabel and her research collaborators) to “apply our knowledge and understanding about minoritized students to understand what the nuances were.” In nearly one third of the manuscripts ( n = 11; 32.4%), focusing on single group analyses emerged as one way that QuantCrit scholars disrupted the perceived neutrality of numbers and how categories have previously been established to serve white, elite interests. Five of those manuscripts (14.7%) explicitly focused on understanding heterogeneity within systemically minoritized subpopulations, including Asian American, Latina/e/o/x, and Black students.
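The disaggregated, single-group strategy described above can be sketched in a few lines of code. This is a minimal, hypothetical illustration (the subgroup labels, data, and the simple bivariate model are all invented for this sketch, not drawn from any reviewed manuscript): rather than pooling all students into one model with subgroup dummies, a separate model is fit within each subgroup so that within-group patterns are described on their own terms.

```python
# Hypothetical sketch: fit a simple bivariate regression separately
# within each subgroup instead of one pooled model with subgroup dummies.
# Subgroup labels, data, and variable names are invented for illustration.

def ols_fit(x, y):
    """Ordinary least squares for y = a + b*x, computed by hand."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum(
        (xi - mx) ** 2 for xi in x
    )
    return my - b * mx, b  # (intercept, slope)

# Hypothetical (predictor, outcome) pairs per subgroup.
data = {
    "subgroup_1": ([1, 2, 3, 4], [2.0, 2.9, 4.1, 5.0]),
    "subgroup_2": ([1, 2, 3, 4], [3.0, 3.2, 3.4, 3.6]),
}

# One model per subgroup surfaces within-group patterns that a pooled
# comparison against a dominant reference group can obscure.
models = {g: ols_fit(x, y) for g, (x, y) in data.items()}
for g, (a, b) in models.items():
    print(g, "intercept:", round(a, 2), "slope:", round(b, 2))
```

In this toy example, the two subgroups show very different slopes, which is precisely the kind of heterogeneity a single pooled coefficient would average away.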
It was not the case, however, that authors avoided group comparisons altogether. For example, one team of authors used separate principal components analysis (PCA) models for Indigenous and non-Indigenous students with the explicit intent of comparing models between groups. The authors noted that “[t]ypically, monolithic comparisons between racial groups perpetuate deficit thinking and marginalization.” However, they sought to “highlight the nuance in belonging for Indigenous community college students as it differs from the White-centric or normative standards” by comparing groups from an asset-driven perspective (Article 5, p. 7). Thus, in cases where critical quantitative scholars included group comparisons, the intentionality underlying those choices as a mechanism to highlight inequities and/or contribute to asset-based narratives was apparent.
Four manuscripts (11.8%) were explicit in their efforts to identify analytic alternatives to normative reference groups. Reference groups are often required when building quantitative models with categorical variables such as racial/ethnic and gender identity. Often, dominant identities (e.g., respondents who are white and/or men) comprise the largest portion of a research sample and are selected as the comparison group, typifying the experiences of individuals with those dominant identities. To counter the traditional practice of reference groups, some manuscript authors reported using effect coding, often referencing the work of Mayhew and Simonoff (2015), and dynamic centering as two alternatives. Effect coding (used in three manuscripts) removes the need for a reference group; instead, all groups are compared to the overall sample mean. Dynamic centering (used in one manuscript), on the other hand, uses a reference group, but one that is intentionally selected based on the construct in question, as opposed to relying on sample size or dominant identities.
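To make the contrast concrete, the sketch below builds dummy-coded and effect-coded design matrices by hand with hypothetical group labels (applied analyses would typically rely on a statistics package). Under dummy coding, the reference group's rows are all zeros, so every coefficient contrasts a group against that reference; under effect coding, the omitted group's rows are coded -1, so coefficients represent deviations from the grand mean and no single group is positioned as the norm.

```python
def dummy_code(groups, reference):
    """One 0/1 indicator column per non-reference group."""
    levels = [g for g in sorted(set(groups)) if g != reference]
    rows = [[1 if g == level else 0 for level in levels] for g in groups]
    return rows, levels

def effect_code(groups, omitted):
    """Like dummy coding, except the omitted group's rows are all -1,
    so regression coefficients become deviations from the grand mean."""
    levels = [g for g in sorted(set(groups)) if g != omitted]
    rows = []
    for g in groups:
        if g == omitted:
            rows.append([-1] * len(levels))
        else:
            rows.append([1 if g == level else 0 for level in levels])
    return rows, levels

groups = ["A", "A", "B", "B", "C", "C"]  # hypothetical labels
X_dummy, _ = dummy_code(groups, reference="A")   # group A rows: [0, 0]
X_effect, _ = effect_code(groups, omitted="A")   # group A rows: [-1, -1]
```

The choice of which column-building rule to apply is exactly the analytic decision the manuscripts above interrogated.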
Interview participants also discussed navigating alternative coding practices, with several authors raising key points about their exposure to and capacity building for effect coding. As Angela described, effect coding necessitates that “you don’t choose a specific group as your benchmark to do the comparison. And you instead compare to the group.” Angela then stated that this approach made more sense than choosing benchmarks, as she felt uncomfortable identifying one group as a comparison group. Junco, however, noted that “effect coding was much more complicated than what I thought,” as she reflected on unlearning positivist strategies in favor of equity-focused approaches that could elucidate greater nuance. Importantly, using alternative coding practices was not universal among manuscripts or interview participants. One manuscript utilized traditional dummy coding for race in regression models, with white students as the reference group to which all other groups were compared. The authors explicated that “using white students as the reference [was] not a result of ‘privileging’ them or maintaining the patterns of power related to racial categorizations” (Article 8, p. 1282). Instead, they argued that the comparison was a deliberate choice to “reveal patterns of racial or ethnic educational inequality compared to the privileged racial group” (Article 8, p. 1282). Another author maintained the use of reference groups purely for ease of interpretation. David shared, “it’s easier for the person to just look at it and compare magnitudes.” However, by prioritizing easy interpretation with traditional reference groups, authors may incur other costs (such as sustaining unnecessary comparisons to white students). Additionally, several manuscripts (n = 13; 38.2%) employed analytic coding practices that aimed to account for intersectionality.
While authors identified these practices by various names (e.g., interaction terms, mediating variables, conditional effects), they all afforded similar opportunities. The most common practice among authors in our sample (n = 8; 23.5%) was computing interaction terms to account for intersecting identities, such as race and gender. Pertaining specifically to intersectionality, Alexis summarized many researchers’ tensions well in sharing, “I know what Kimberlé Crenshaw says. But how do I operationalize that mathematically into something that’s relevant?” In offering one way that intersectionality could be realized with quantitative data, Tyler stated that “being able to keep in these variables that are interacting [via interaction terms] and showing differences” may align with the core ideas of intersectionality. Yet, participants also recognized that statistics would inherently fall short of representing respondents’ lived experiences, as discussed by Nick: “We disaggregate as far as we can, but you could only go so far, and like, how do we deal with tension.” Several other participants reflected on bringing in open-text response data about individuals’ social identities, categorizing racial and ethnic groups according to continent (while also recognizing that this did not necessarily attend to the complexities of diasporas), or making decisions about which groups qualify as “minoritized” based on disciplinary and social movements. Collectively, the disparate approaches that authors used and discussed speak directly to critical higher education scholars’ movement away from normative comparisons that do not meaningfully answer questions related to (in)equity and/or intersectionality in higher education.
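One minimal way to see what an interaction term adds, using entirely invented numbers: with only additive main effects for two identity indicators, a regression cannot represent an outcome that is unique to respondents holding both identities, whereas the product term recovers it.

```python
import numpy as np

# Hypothetical rows: (indicator_1, indicator_2, outcome). The (1, 1) cell's
# mean (16) exceeds what additive effects alone would predict -- invented data.
rows = [
    (0, 0, 10), (0, 0, 10),
    (0, 1, 12), (0, 1, 12),
    (1, 0, 11), (1, 0, 11),
    (1, 1, 16), (1, 1, 16),
]
y = np.array([r[2] for r in rows], dtype=float)

# Main-effects-only design: intercept, indicator_1, indicator_2.
X_main = np.array([[1, b, w] for b, w, _ in rows], dtype=float)
# Design with the interaction column b * w appended.
X_int = np.array([[1, b, w, b * w] for b, w, _ in rows], dtype=float)

beta_main, *_ = np.linalg.lstsq(X_main, y, rcond=None)
beta_int, *_ = np.linalg.lstsq(X_int, y, rcond=None)
# beta_int[3] is the extra effect for the intersecting group (here, 3.0):
# 16 = 10 (intercept) + 1 + 2 + 3, which no additive model can reproduce.
```

As the interview excerpts suggest, this is a mathematical convenience rather than a full operationalization of intersectionality, but it does let the model estimate an effect specific to the intersecting group.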
Interpreting Statistical Results
One notable, albeit less common, way higher education scholars enacted critical quantitative approaches through analytic methods was by challenging traditional ways of reporting and interpreting statistical results. The dominant approach to statistical methods aligns with null hypothesis significance testing (NHST), whereby p-values (used as indicators of statistically significant effects) serve to identify meaningful results. NHST practices were prevalent in nearly all scoping review manuscripts; yet, there were some exceptions. For example, three manuscripts (8.8%) cautioned against reliance on statistical significance due to its dependence on large sample size (i.e., statistical power), which is often at odds with centering research on systemically minoritized populations. One of those manuscripts (2.9%) even chose to interpret nonsignificant results from their quantitative analyses. In a similar vein, two manuscripts (5.9%) also questioned and adapted common statistical practices related to model selection (e.g., using the corrected Akaike information criterion (AICc) instead of p-values) and variable selection (e.g., avoiding use of variance explained so as not to “[exclude] marginalized students from groups with small representations in the data” (Article 23, p. 7)). Meanwhile, others attended to raw numeric data and the uncertainty associated with quantitative results. The resources needed to enact these alternative methodological practices were briefly discussed by Tyler in his interview, in which he shared: “The use of p-values is so poorly done that the American Statistical Association has released a statement on p-values, an entire special collection [and people in my field] don’t know those things exist.” Tyler went on to share that this knowledge barrier was tied to the siloed nature of academia, and that such silos may inhibit the generation of critical quantitative research that draws from different disciplinary origins.
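For readers unfamiliar with the alternative, corrected-AIC model selection ranks candidate models by fit penalized for complexity rather than by p-value thresholds. The sketch below uses the standard least-squares form of AICc with invented fit statistics; it is a generic illustration, not a formula or comparison drawn from the reviewed manuscripts.

```python
import math

def aicc(rss, n, k):
    """Corrected AIC for a Gaussian least-squares model.

    rss: residual sum of squares; n: number of observations;
    k: number of estimated parameters. Lower values indicate the
    preferred trade-off between fit and complexity.
    """
    aic = n * math.log(rss / n) + 2 * k
    return aic + (2 * k * (k + 1)) / (n - k - 1)  # small-sample correction

# Hypothetical comparison on the same data (n = 40): a simpler model with a
# slightly worse fit versus a richer model with three extra parameters.
simple = aicc(rss=12.0, n=40, k=3)
rich = aicc(rss=11.5, n=40, k=6)
preferred = "simple" if simple < rich else "rich"
```

Here the marginal improvement in fit does not justify the added parameters, so the simpler model is preferred; no significance threshold is invoked at any point.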
Among interviewed authors, many also viewed interpretation as a stage of quantitative research that required a high level of responsibility and awareness of worldview. Nick related that using a QuantCrit approach changed how he was interpreting results, in “talking about educational debts instead of gaps, talking about racism instead of race.” As demonstrated by Nick, critical interpretations of statistics necessitate congruence with theoretical or conceptual framing, as well, given the explicit call to interrogate structures of inequity and power in research adopting a critical lens. Leo described this responsibility as a necessary challenge:
It’s very easy to look at results and interpret them—I don’t wanna say ‘as is’ because I don’t think that there is an ‘as is’—but interpret them in ways that they’re traditionally interpreted and to keep them there. But, if we’re truly trying to accomplish these critical quantitative themes, then we need to be able to reference these larger structures to make meaning of the results that are put in front of us.
Nick, Leo, and several other participants all emphasized how crucial interpretation is in critical quantitative research in ways that expanded beyond statistical practices; ultimately, the perspective that “behind every number is a human” served as a primary motivation for many authors in fulfilling the call toward ethical and intentional interpretation of statistics.
Leveraging a multimethod approach with 15 years of published manuscripts (N = 34) and 18 semi-structured interviews with corresponding authors, this study identifies the extent to which principles of quantitative criticalism, critical quantitative inquiry, and QuantCrit have been applied in higher education research. Scholars are continuing to develop strategies to enact a critical quantitative lens in their studies, a path we hope will continue, as continued questioning, creativity, and exploration of new possibilities underscore the foundations of critical theory (Bronner, 2017). At the same time, our findings suggest that higher education researchers may benefit from intentional conversations regarding the specific analytic practices they use to advance critical quantitative research (e.g., confidence intervals versus p-values, finite mixture models versus homogeneous distribution models).
Our interviews with higher education scholars who produced such work also fill a need for guidance on strategies to enact critical perspectives in quantitative research, addressing the absence of such guidance from most quantitative training and resources. By drawing on the work and insights of higher education researchers engaging critical quantitative approaches, we provide a foundation on which future scholars can imagine and implement a fuller range of possibilities for critical inquiry via quantitative methods in higher education. In what follows, we discuss the findings of this study alongside the frameworks from which they drew inspiration. Then, we offer implications for research and practice to catalyze continued exploration and application of critical quantitative approaches in higher education scholarship.
Synthesizing Key Takeaways
First, scoping review data revealed several commonalities across manuscripts regarding authors’ underlying motivations to identify and/or address inequities for systemically minoritized populations, speaking to how critical quantitative approaches can fall within the larger umbrella of equity-mindedness in higher education research. Such motivations were reflected in authors’ research questions and frameworks (consistent with Stage’s (2007) initial guidance). Most manuscripts identified their approach as quantitative criticalism broadly, although there were sometimes blurred boundaries between approaches termed quantitative criticalism, QuantCrit, critical policy analysis, and critical quantitative intersectionality. Notably, authors’ decisions about which framing their work invoked also determined how scholars enacted a specified critical quantitative approach. For example, the tenets of QuantCrit, offered by Gillborn et al. (2018), were specifically heeded by researchers seeking to take up a QuantCrit lens. Scholars who noted inspiration from Rios-Aguilar (2014) often drew specifically from the framework for critical quantitative inquiry. While the key ingredients of these critical quantitative approaches were offered in the foundational framings we introduced, the field has lacked understanding of how scholars take up these considerations. Thus, the present findings create inroads to a conversation about applying and extending the articulated components associated with critical quantitative higher education research.
Second, our multimethod approach illuminated general agreement (in manuscripts and interviews) that quantitative research in higher education, whether explicitly critical or not, is neither neutral nor objective. However, despite positionality being a key part of Rios-Aguilar’s (2014) critical quantitative inquiry framework, only half of the manuscripts included researcher positionality. Thus, while educational researchers may agree that, without challenging objectivity, quantitative methods serve to uphold inequity (e.g., Arellano, 2022; Castillo & Babb, 2024), higher education scholars may not yet have established consensus on how these principles materialize. To be clear, consensus need not be the goal of critical quantitative approaches, given that critical theory demands constant questioning for new ways of thinking and being (Bronner, 2017); yet, greater solidarity among critical quantitative higher education researchers may be beneficial, so that community-based discussions can drive the actualization of equity-minded motivations. Interview data also revealed complications in how scholars choose if, and how, to define and label critical quantitative approaches. Some participants struggled with whether their work was “critical enough” to be labeled as such. Those conversations raise concerns that critical quantitative research in higher education could become (or potentially has already become) an exclusionary space where level of criticality is measured by an arbitrary barometer (refer to Garvey & Huynh, 2024). Meanwhile, other participants worried that attaching such a label to their work was irrelevant (i.e., that it was the motivations and intentionality underlying the work that mattered, not the label). Although the field remains in disagreement regarding if/how labeling should be implemented for critical quantitative approaches, “it is the naming of experience and ideologies of power that initiates the process [of transformation] in its critical form” (Hanley, 2004, p. 55). As such, we argue that naming critical quantitative approaches can serve as a lever for transforming quantitative higher education research and create power in related dialogue.
Implications for Future Studies on Critical Quantitative Higher Education Research
As with any empirical approach, and especially those that are gaining traction (as critical quantitative approaches are in higher education; Wofford & Winkler, 2022), there is utility in conducting research about the research. First, in the context of higher education as a broad field of applied research, there is a need to illustrate what critical quantitative scholars focus on when they conceptualize higher education in the first place. For example, is higher education viewed as a possibility for social mobility? Or are critical quantitative scholars viewing postsecondary institutions as engines of inequity? Second, it was notable that, among the manuscripts including positionality statements, it was common for such statements to read as biographies (i.e., lists of social identities) rather than as reflexive accounts of the roles and commitments of the researcher(s). Future research would benefit from a deeper understanding of the enactment of positionality in critical quantitative higher education research. Third, given the productive tensions associated with naming and understanding the (dis)agreed-upon ingredients of quantitative criticalism, critical quantitative inquiry, and QuantCrit, as well as additional known and unknown conceptualizations, further research regarding how higher education scholars grapple with definitions, distinctions, and adaptations of these related approaches will clarify how scholars can advance their critical commitments with quantitative postsecondary data.
Implications for Employing Critical Quantitative Higher Education Research
Emerging Analytical Tools for Critical Quantitative Research
In terms of employing critical quantitative approaches in higher education research, there is significant room for scholars to explore emerging quantitative methodological tools. We agree with López et al.’s (2018) assessment that critical quantitative work tends to remain demographic and/or descriptive in its methodological nature, and there is great potential for more advanced inferential quantitative methods to serve critical aims. While there are some examples in the literature (for example, Sablan’s (2019) work in the realm of quantitative measurement and Malcom-Piqueux’s (2015) work related to latent class analysis and other person-centered modeling approaches), additional examples of advanced and innovative analytical tools were limited in our findings. Thus, integrating more advanced quantitative methodological tools into critical quantitative higher education research, such as finite mixture modeling (as noted by Malcom-Piqueux, 2015), measurement invariance testing, and multi-group structural equation modeling, may advance the ways in which scholars address questions related to heterogeneity in the experiences and outcomes of college students, faculty, and staff.
Traditional quantitative analytical tools have historically highlighted between-group differences that perpetuate deficit narratives for systemically minoritized students, faculty, and staff on college campuses; for example, comparing the educational outcomes of Black students to white students. Emerging approaches such as finite mixture modeling hold promise in unearthing more nuanced understandings. Of growing interest to many critical quantitative scholars is heterogeneity within minoritized populations; finite mixture modeling approaches such as growth mixture modeling, latent class analysis, and latent profile analysis are particularly well suited to reveal within-group differences that are otherwise obfuscated in most quantitative analyses. Although we found a few examples in our scoping review of authors who leveraged more traditional group comparisons for equity-minded aims, these emerging analytical approaches may be better suited for the questions asked by future critical quantitative scholars.
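To make the finite mixture idea concrete, a bare-bones expectation-maximization routine for a two-component Gaussian mixture is sketched below on simulated scores. This is entirely invented data for illustration; applied work would use validated software (e.g., an established R or Python mixture-modeling package) with far more careful specification and model checking.

```python
import numpy as np

# Simulated scores from two latent subgroups that a single pooled mean would
# hide: one centered near 2, one near 6 (invented data).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(2.0, 0.5, 200), rng.normal(6.0, 0.5, 200)])

# Initial guesses for mixing weights, component means, and standard deviations.
pi = np.array([0.5, 0.5])
mu = np.array([1.0, 7.0])
sd = np.array([1.0, 1.0])

for _ in range(50):
    # E-step: each observation's responsibility under each component.
    dens = pi * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
    resp = dens / dens.sum(axis=1, keepdims=True)
    # M-step: update weights, means, and standard deviations.
    nk = resp.sum(axis=0)
    pi = nk / len(x)
    mu = (resp * x[:, None]).sum(axis=0) / nk
    sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
# mu now approximates the two latent subgroup means (~2 and ~6).
```

The estimated means and mixing weights describe within-sample heterogeneity directly, rather than contrasting the sample against an external reference group.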
One Size Does Not Fit All
Many emerging analytical tools demonstrate promise in advancing conversations about inequity, particularly related to heterogeneity in subpopulations on college and university campuses. As noted previously, however, Rios-Aguilar (2014) emphasized that critical quantitative research need not rely solely on “fancy” or advanced analytical tools; in fact, our findings did not lead us to conclude that higher education scholars have established a set of analytical approaches that are explicitly critical in nature. Rather, our results revealed a common theme: critical quantitative scholarship in higher education necessitates an elevated degree of intentionality in the selection, application, and interpretation of whichever analytical approaches, advanced or not, scholars choose.
As noted, there were several instances in our data where commonly critiqued analytical approaches were still applied in the critical quantitative literature. For example, we found manuscripts that conducted a monolithic comparison of Indigenous and non-Indigenous students and manuscripts that utilized traditional dummy coding with white students as a normative reference group. What made these manuscripts distinct from non-critical quantitative research was the thoughtfulness and intentionality with which those approaches were selected to serve equity-minded goals, an intentionality that was explicitly communicated to readers in the methods sections of manuscripts. Just as the inclusion of positionality statements in half of the manuscripts suggests that researcher objectivity was generally not assumed by higher education scholars conducting critical quantitative scholarship, choices that often otherwise go unquestioned were interrogated and discussed in manuscripts.
Cokley and Awad (2013) share several recommendations for advancing social justice research via quantitative methods. One of their recommendations addresses the utilization of racial group comparisons in quantitative analyses. They do not suggest that researchers avoid comparisons between groups altogether, but rather that they avoid “unnecessary” comparisons between groups (p. 35). They elaborate that “[t]here should be a clear research question that necessitates the use of the comparison” if comparisons are utilized in quantitative research with critical aims (Cokley & Awad, 2013, p. 35). Our findings suggested that, in the current state of critical quantitative scholarship in higher education, it is not so much a specific set of approaches that deems scholarship critical (or not), but rather the practice of asking critical questions (as Stage initially called us to do in 2007) and then selecting methods that align with those goals.
Opportunities for Training and Collaboration
Notably, many of the emerging analytical approaches mentioned require a significant degree of methodological training. The limited use of such tools, which are otherwise well-suited for critical quantitative applications, points to a potential disconnect in training of higher education scholars. Some structured opportunities for partnership between disciplinary and methodological scholars have emerged via training programs such as the Quantitative Research Methods (QRM) for STEM Education Scholars Program (funded by the National Science Foundation Award 1937745) and the Institute on Mixture Modeling for Equity-Oriented Researchers, Scholars and Educators (IMMERSE) fellowship (funded by the Institute for Education Sciences Award R305B220021). These grant-funded training opportunities connect quantitative methodological experts with applied researchers across educational contexts.
We must consider additional ways, both formal and informal, to expand training opportunities for higher education scholars with interest in both advanced quantitative methods and equity-focused research; until then, expertise in quantitative methods and critical frameworks will likely inhabit two distinct communities of scholars. For higher education scholars to fully embrace the potential of critical quantitative research, we will be well served by intentional partnerships across methodological (e.g., quantitative and qualitative) and disciplinary (e.g., higher education scholars and methodologists) boundaries. In addition to expanding applied researchers’ analytical skillsets, training and collaboration opportunities also prepare potential critical quantitative scholars in higher education to select methodological approaches, whether introductory or advanced, that most closely align with their research aims.
Historically, critical inquiry has been viewed primarily as an endeavor for qualitative research. Recently, educational scholars have begun considering the possibilities for quantitative research to be leveraged in support of critical inquiry. However, there remains limited work evaluating whether and to what extent principles from quantitative criticalism, critical quantitative inquiry, and QuantCrit have been applied in higher education research. By drawing on the work and insights of scholars engaging in critical quantitative work, we provide a foundation on which future scholars can imagine and implement a vast range of possibilities for critical inquiry via quantitative methods in higher education. Ultimately, this work will allow scholars to realize the potential for research methodologies to directly support critical aims.
Data Availability
The list of manuscripts generated from the scoping review analysis is available via the Online Supplemental Materials Information link. Given the nature of our sample and topics discussed, interview data will not be shared publicly to protect participant anonymity.
American Psychological Association. (2024). Journal of Diversity in Higher Education. https://www.apa.org/pubs/journals/dhe
Anguera, M. T., Blanco-Villaseñor, A., Losada, J. L., Sánchez-Algarra, P., & Onwuegbuzie, A. J. (2018). Revisiting the difference between mixed methods and multimethods: Is it all in the name? Quality & Quantity, 52 , 2757–2770.
Arellano, L. (2022). Questioning the science: How quantitative methodologies perpetuate inequity in higher education. Education Sciences, 12 (2), 116.
Arksey, H., & O’Malley, L. (2005). Scoping studies: Towards a methodological framework. International Journal of Social Research Methodology, 8(1), 19–32.
Astin, A. W. (1984). Student involvement: A developmental theory for higher education. Journal of College Student Personnel, 25 , 297–308.
Bensimon, E. M. (2007). The underestimated significance of practitioner knowledge in the scholarship on student success. The Review of Higher Education, 30 (4), 441–469.
Bensimon, E. M. (2018). Reclaiming racial justice in equity. Change: The Magazine of Higher Learning, 50 (3–4), 95–98.
Bierema, A., Hoskinson, A. M., Moscarella, R., Lyford, A., Haudek, K., Merrill, J., & Urban-Lurain, M. (2021). Quantifying cognitive bias in educational researchers. International Journal of Research & Method in Education, 44 (4), 395–413.
Bourdieu, P. (1977). Cultural reproduction and social reproduction. In J. Karabel & A. H. Halsey (Eds.), Power and ideology in education (pp. 487–511). Oxford University Press.
Bronner, S. E. (2017). Critical theory: A very short introduction (Vol. 263). Oxford University Press.
Byrd, D. (2019). The diversity distraction: A critical comparative analysis of discourse in higher education scholarship. Review of Higher Education, 42 , 135–172.
Castillo, W., & Babb, N. (2024). Transforming the future of quantitative educational research: A systematic review of enacting QuantCrit. Race Ethnicity and Education, 27 (1), 1–21.
Cokley, K., & Awad, G. H. (2013). In defense of quantitative methods: Using the “master’s tools” to promote social justice. Journal for Social Action in Counseling and Psychology, 5(2), 26–41.
Creswell, J. W., & Miller, D. L. (2000). Determining validity in qualitative inquiry. Theory into Practice, 39 (3), 124–130.
Espino, M. M. (2012). Seeking the “truth” in the stories we tell: The role of critical race epistemology in higher education research. The Review of Higher Education, 36 (1), 31–67.
Garcia, N. M., López, N., & Vélez, V. N. (2018). Race, ethnicity, and education: Vol 21, No 2. QuantCrit: Rectifying quantitative methods through critical race theory . Routledge.
Garvey, J. C., & Huynh, J. (2024). Quantitative criticalism in education research. Critical Education, 15 (1), 74–90.
Gillborn, D., Warmington, P., & Demack, S. (2018). QuantCrit: Education, policy, ‘Big Data’ and principles for a critical race theory of statistics. Race Ethnicity and Education, 21 (2), 158–179.
Hanley, M. S. (2004). The name game: Naming in culture, critical theory, and the arts. Journal of Thought, 39 (4), 53–74.
Hesse-Biber, S., Rodriguez, D., & Frost, N. A. (2015). A qualitatively driven approach to multimethod and mixed methods research. In S. Hesse-Biber & R. B. Johnson (Eds.), The Oxford handbook of multimethod and mixed methods research inquiry. Oxford University Press.
Jamieson, M. K., Govaart, G. H., & Pownall, M. (2023). Reflexivity in quantitative research: A rationale and beginner’s guide. Social and Personality Psychology Compass, 17 (4), 1–15.
Kimball, E., & Friedensen, R. E. (2019). The search for meaning in higher education research: A discourse analysis of ASHE presidential addresses. The Review of Higher Education, 42 (4), 1549–1574.
Kincheloe, J. L., & McLaren, P. L. (1994). Rethinking critical theory and qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 138–157). Sage Publications Inc.
Levac, D., Colquhoun, H., & O’Brien, K. K. (2010). Scoping studies: Advancing the methodology. Implementation Science, 5 , 69.
López, N., Erwin, C., Binder, M., & Javier Chavez, M. (2018). Making the invisible visible: Advancing quantitative methods in higher education using critical race theory and intersectionality. Race Ethnicity and Education, 21 (2), 180–207.
Magoon, A. J. (1977). Constructivist approaches in educational research. Review of Educational Research, 47 (4), 651–693.
Martínez-Alemán, A. M., Pusser, B., & Bensimon, E. M. (Eds.). (2015). Critical approaches to the study of higher education: A practical introduction . Johns Hopkins University Press.
Mayhew, M. J., & Simonoff, J. S. (2015). Non-White, no more: Effect coding as an alternative to dummy coding with implications for higher education researchers. Journal of College Student Development, 56 (2), 170–175.
McCoy, D. L., & Rodricks, D. J. (2015). Critical race theory in higher education: 20 years of theoretical and research innovations. ASHE Higher Education Report, 41 (3), 1.
Miles, M. B., Huberman, A. M., & Saldaña, J. (2014). Qualitative data analysis: A methods sourcebook . Sage.
Munn, Z., Peters, M. D. J., Stern, C., Tufanaru, C., McArthur, A., & Aromataris, E. (2018). Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Medical Research Methodology, 18, 1–7.
Renn, K. A. (2020). Reimagining the study of higher education: Generous thinking, chaos, and order in a low consensus field. The Review of Higher Education, 43 (4), 917–934.
Rios-Aguilar, C. (2014). The changing context of critical quantitative inquiry. New Directions for Institutional Research, 158 , 95–107.
Robinson, O. C. (2014). Sampling in interview-based qualitative research: A theoretical and practical guide. Qualitative Research in Psychology, 11 (1), 25–41.
Sablan, J. R. (2019). Can you really measure that? Combining critical race theory and quantitative methods. American Educational Research Journal, 56 (1), 178–203.
Sage Journals. (2024). Teachers college record: The voice of scholarship in education. https://journals.sagepub.com/author-instructions/TCZ
Saldaña, J. (2015). The coding manual for qualitative researchers . Sage Publications.
Stage, F. K. (Ed.). (2007). New directions for institutional research: No. 133. Using quantitative data to answer critical questions . Jossey-Bass.
Stage, F. K., & Wells, R. S. (Eds.). (2014). New directions for institutional research: No. 158. New scholarship in critical quantitative research—Part 1: Studying institutions and people in context . Jossey-Bass.
Stewart, D. L. (2022). Spanning and unsettling the borders of critical scholarship in higher education. The Review of Higher Education, 45 (4), 549–563.
Tabron, L. A., & Thomas, A. K. (2023). Deeper than wordplay: A systematic review of critical quantitative approaches in education research (2007–2021). Review of Educational Research, 93 , 756. https://doi.org/10.3102/00346543221130017
Torgerson, D. J., & Torgerson, C. J. (2003). Avoiding bias in randomised controlled trials in educational research. British Journal of Educational Studies, 51 (1), 36–45.
Wells, R. S., & Stage, F. K. (Eds.). (2015). New directions for institutional research: No. 163. New scholarship in critical quantitative research—Part 2: New populations, approaches, and challenges . Jossey-Bass.
Wofford, A. M., & Winkler, C. E. (2022). Publication patterns of higher education research using quantitative criticalism and QuantCrit perspectives. Innovative Higher Education, 47 (6), 967–988. https://doi.org/10.1007/s10755-022-09628-3
Zuberi, T. (2001). Thicker than blood: How racial statistics lie . University of Minnesota Press.
Zuberi, T., & Bonilla-Silva, E. (2008). White logic, White methods: Racism and methodology. Rowman & Littlefield.
Acknowledgements
This research was supported by a grant from the American Educational Research Association, Division D. The authors gratefully thank Dr. Jason (Jay) Garvey for his support as an early thought partner with regard to this project, and Dr. Christopher Sewell for his helpful feedback on an earlier version of this manuscript, which was presented at the 2022 Association for the Study of Higher Education meeting.
Author information
Authors and Affiliations
Department of Counseling, Higher Education Leadership, Educational Psychology, & Foundations, Mississippi State University, 175 President’s Circle, 536 Allen Hall, Mississippi State, MS, 39762, USA
Christa E. Winkler
Department of Educational Leadership & Policy Studies, Florida State University, Tallahassee, FL, USA
Annie M. Wofford
Corresponding author
Correspondence to Christa E. Winkler .
Ethics declarations
Competing interests
The authors have no competing interests to declare relevant to the content of this article.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .
About this article
Winkler, C.E., Wofford, A.M. Trends and Motivations in Critical Quantitative Educational Research: A Multimethod Examination Across Higher Education Scholarship and Author Perspectives. Res High Educ (2024). https://doi.org/10.1007/s11162-024-09802-w
Download citation
Received : 25 June 2023
Accepted : 14 May 2024
Published : 04 June 2024
DOI : https://doi.org/10.1007/s11162-024-09802-w
Keywords:
- Critical quantitative
- Quantitative criticalism
- Scoping review
- Multimethod study
Qualitative vs. Quantitative Research | Differences, Examples & Methods
Published on April 12, 2019 by Raimo Streefkerk . Revised on June 22, 2023.
When collecting and analyzing data, quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings. Both are important for gaining different kinds of knowledge.
Common quantitative methods include experiments, observations recorded as numbers, and surveys with closed-ended questions.
Quantitative research is at risk for research biases including information bias, omitted variable bias, sampling bias, or selection bias.

Qualitative research

Qualitative research is expressed in words. It is used to understand concepts, thoughts or experiences. This type of research enables you to gather in-depth insights on topics that are not well understood.
Common qualitative methods include interviews with open-ended questions, observations described in words, and literature reviews that explore concepts and theories.
Table of contents
- The differences between quantitative and qualitative research
- Data collection methods
- When to use qualitative vs. quantitative research
- How to analyze qualitative and quantitative data
- Other interesting articles
- Frequently asked questions about qualitative and quantitative research

The differences between quantitative and qualitative research
Quantitative and qualitative research use different methods to collect and analyze data, and they allow you to answer different kinds of research questions.
Quantitative and qualitative data can be collected using various methods. It is important to use a data collection method that will help answer your research question(s).
Many data collection methods can be either qualitative or quantitative. For example, in surveys, observational studies or case studies , your data can be represented as numbers (e.g., using rating scales or counting frequencies) or as words (e.g., with open-ended questions or descriptions of what you observe).
However, some methods are more commonly used in one type or the other.
Quantitative data collection methods
- Surveys : List of closed or multiple choice questions that is distributed to a sample (online, in person, or over the phone).
- Experiments : Situation in which different types of variables are controlled and manipulated to establish cause-and-effect relationships.
- Observations : Observing subjects in a natural environment where variables can’t be controlled.
Qualitative data collection methods
- Interviews : Asking open-ended questions verbally to respondents.
- Focus groups : Discussion among a group of people about a topic to gather opinions that can be used for further research.
- Ethnography : Participating in a community or organization for an extended period of time to closely observe culture and behavior.
- Literature review : Survey of published works by other authors.
A rule of thumb for deciding whether to use qualitative or quantitative data is:
- Use quantitative research if you want to confirm or test something (a theory or hypothesis )
- Use qualitative research if you want to understand something (concepts, thoughts, experiences)
For most research topics you can choose a qualitative, quantitative or mixed methods approach . Which type you choose depends on, among other things, whether you’re taking an inductive vs. deductive research approach ; your research question(s) ; whether you’re doing experimental , correlational , or descriptive research ; and practical considerations such as time, money, availability of data, and access to respondents.
Quantitative research approach
You survey 300 students at your university and ask them questions such as: “on a scale from 1-5, how satisfied are you with your professors?”
You can perform statistical analysis on the data and draw conclusions such as: “on average students rated their professors 4.4”.
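As a rough illustration (with invented ratings rather than real survey data), the average in the example above can be computed in a few lines of Python:

```python
# Sketch: computing an average satisfaction rating from survey responses.
# The ratings list is invented illustrative data, not real survey results.
from statistics import mean

ratings = [5, 4, 4, 5, 3, 5, 4, 5, 4, 5]  # responses on a 1-5 scale

average = mean(ratings)
print(f"On average, students rated their professors {average:.1f}")
# prints: On average, students rated their professors 4.4
```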
Qualitative research approach
You conduct in-depth interviews with 15 students and ask them open-ended questions such as: “How satisfied are you with your studies?”, “What is the most positive aspect of your study program?” and “What can be done to improve the study program?”
Based on the answers you get you can ask follow-up questions to clarify things. You transcribe all interviews using transcription software and try to find commonalities and patterns.
Mixed methods approach
You conduct interviews to find out how satisfied students are with their studies. Through open-ended questions you learn things you never thought about before and gain new insights. Later, you use a survey to test these insights on a larger scale.
It’s also possible to start with a survey to find out the overall trends, followed by interviews to better understand the reasons behind the trends.
Qualitative or quantitative data by itself can’t prove or demonstrate anything, but has to be analyzed to show its meaning in relation to the research questions. The method of analysis differs for each type of data.
Analyzing quantitative data
Quantitative data is based on numbers. Simple math or more advanced statistical analysis is used to discover commonalities or patterns in the data. The results are often reported in graphs and tables.
Applications such as Excel, SPSS, or R can be used to calculate things like:
- Average scores ( means )
- The number of times a particular answer was given
- The correlation or causation between two or more variables
- The reliability and validity of the results
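To make the first three items on the list concrete, here is a small Python sketch; the scores, study hours, and answers are invented sample data, and in practice the same calculations could be done in Excel, SPSS, or R as noted above:

```python
# Invented example data: test scores, weekly study hours, and a survey answer
# for six hypothetical students.
from statistics import mean
from collections import Counter

scores = [72, 85, 90, 65, 88, 95]
hours = [5, 8, 9, 3, 8, 10]
answers = ["yes", "no", "yes", "yes", "no", "yes"]

# Average score (mean)
print("Mean score:", mean(scores))

# The number of times a particular answer was given
print("Answer counts:", Counter(answers))

# Pearson correlation between study hours and scores
def pearson(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print("Correlation:", round(pearson(hours, scores), 3))
```

Note that a correlation coefficient only quantifies association; establishing causation requires an experimental design.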
Analyzing qualitative data
Qualitative data is more difficult to analyze than quantitative data. It consists of text, images or videos instead of numbers.
Some common approaches to analyzing qualitative data include:
- Qualitative content analysis : Tracking the occurrence, position and meaning of words or phrases
- Thematic analysis : Closely examining the data to identify the main themes and patterns
- Discourse analysis : Studying how communication works in social contexts
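As a minimal illustration of the first approach, one content-analysis step, tracking how often chosen words occur, can be sketched in Python (the transcript text below is invented):

```python
# Sketch of a simple qualitative content-analysis step: counting how often
# analyst-chosen terms occur in interview transcripts (invented text).
from collections import Counter
import re

transcript = """
I feel supported by my professors, but the workload is heavy.
The workload makes it hard to stay motivated, even with supportive staff.
"""

# Normalize to lowercase word tokens
words = re.findall(r"[a-z']+", transcript.lower())
counts = Counter(words)

# Occurrence of terms of interest
for term in ("workload", "supported", "motivated"):
    print(term, counts[term])
```

Real content analysis also considers the position and meaning of terms, not just raw frequency, but a count like this is often the starting point.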
If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.
- Chi square goodness of fit test
- Degrees of freedom
- Null hypothesis
- Discourse analysis
- Control groups
- Mixed methods research
- Non-probability sampling
- Quantitative research
- Inclusion and exclusion criteria
Research bias
- Rosenthal effect
- Implicit bias
- Cognitive bias
- Selection bias
- Negativity bias
- Status quo bias
Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.
Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.
In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .
The research methods you use depend on the type of data you need to answer your research question .
- If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
- If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
- If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.
Data collection is the systematic process by which observations or measurements are gathered in research. It is used in many different contexts by academics, governments, businesses, and other organizations.
There are various approaches to qualitative data analysis , but they all share five steps in common:
- Prepare and organize your data.
- Review and explore your data.
- Develop a data coding system.
- Assign codes to the data.
- Identify recurring themes.
The specifics of each step depend on the focus of the analysis. Some common approaches include textual analysis , thematic analysis , and discourse analysis .
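Steps 3-5 above can be sketched in Python; the codes and interview excerpts below are hypothetical examples, not real study data:

```python
# Minimal sketch of a coding system: excerpts are assigned codes by hand,
# then recurring themes are counted. Codes and excerpts are hypothetical.
from collections import Counter

coded_excerpts = [
    ("I often miss the bus to campus", "transport"),
    ("Tuition stress keeps me up at night", "finances"),
    ("No reliable ride on rainy days", "transport"),
    ("I can't afford the textbooks", "finances"),
    ("The bus schedule changed again", "transport"),
]

theme_counts = Counter(code for _, code in coded_excerpts)
for theme, n in theme_counts.most_common():
    print(theme, n)
# "transport" recurs most often, suggesting it as a main theme
```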
A research project is an academic, scientific, or professional undertaking to answer a research question . Research projects can take many forms, such as qualitative or quantitative , descriptive , longitudinal , experimental , or correlational . What kind of research approach you choose will depend on your topic.
Streefkerk, R. (2023, June 22). Qualitative vs. Quantitative Research | Differences, Examples & Methods. Scribbr. Retrieved October 9, 2024, from https://www.scribbr.com/methodology/qualitative-quantitative-research/
Qualitative vs. Quantitative Research: Comparing the Methods and Strategies for Education Research
No matter the field of study, all research can be divided into two distinct methodologies: qualitative and quantitative research. Both methodologies offer education researchers important insights.
Education research assesses problems in policy, practices, and curriculum design, and it helps administrators identify solutions. Researchers can conduct small-scale studies to learn more about topics related to instruction or larger-scale ones to gain insight into school systems and investigate how to improve student outcomes.
Education research often relies on the quantitative methodology. Quantitative research in education provides numerical data that can prove or disprove a theory, and administrators can easily share the number-based results with other schools and districts. And while the research may speak to a relatively small sample size, educators and researchers can scale the results from quantifiable data to predict outcomes in larger student populations and groups.
Qualitative vs. Quantitative Research in Education: Definitions
Although there are many overlaps in the objectives of qualitative and quantitative research in education, researchers must understand the fundamental functions of each methodology in order to design and carry out an impactful research study. In addition, they must understand the differences that set qualitative and quantitative research apart in order to determine which methodology is better suited to specific education research topics.
Generate Hypotheses with Qualitative Research
Qualitative research focuses on thoughts, concepts, or experiences. The data collected often comes in narrative form and concentrates on unearthing insights that can lead to testable hypotheses. Educators use qualitative research in a study’s exploratory stages to uncover patterns or new angles.
Form Strong Conclusions with Quantitative Research
Quantitative research in education and other fields of inquiry is expressed in numbers and measurements. This type of research aims to find data to confirm or test a hypothesis.
Differences in Data Collection Methods
Keeping in mind the main distinction in qualitative vs. quantitative research—gathering descriptive information as opposed to numerical data—it stands to reason that there are different ways to acquire data for each research methodology. While certain approaches do overlap, the way researchers apply these collection techniques depends on their goal.
Interviews, for example, are common in both modes of research. An interview with students that features open-ended questions intended to reveal ideas and beliefs around attendance will provide qualitative data. This data may reveal a problem among students, such as a lack of access to transportation, that schools can help address.
An interview can also include questions posed to receive numerical answers. A case in point: how many days a week do students have trouble getting to school, and of those days, how often is a transportation-related issue the cause? In this example, qualitative and quantitative methodologies can lead to similar conclusions, but the research will differ in intent, design, and form.
Behavioral observation is another method common to both approaches. In qualitative research, observers may record a variety of factors, such as facial expressions, verbal responses, and body language.
On the other hand, a quantitative approach will create a coding scheme for certain predetermined behaviors and observe these in a quantifiable manner.
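A hedged sketch of such a coding scheme in Python (the behavior codes and the observation log are hypothetical, not from any real protocol):

```python
# Sketch of a quantitative observation protocol: each observed behavior is
# recorded against a predetermined coding scheme and tallied per session.
from collections import Counter

CODING_SCHEME = {"H": "raises hand", "O": "off-task", "Q": "asks question"}

# One classroom session, recorded as a sequence of behavior codes
session_log = ["H", "O", "H", "Q", "H", "O", "Q", "H"]

tally = Counter(session_log)
for code, n in tally.most_common():
    print(f"{CODING_SCHEME[code]}: {n}")
```

Because every behavior maps to a fixed code, the resulting counts can be compared across sessions, observers, and classrooms.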
Qualitative Research Methods
- Case Studies : Researchers conduct in-depth investigations into an individual, group, event, or community, typically gathering data through observation and interviews.
- Focus Groups : A moderator (or researcher) guides conversation around a specific topic among a group of participants.
- Ethnography : Researchers interact with and observe a specific societal or ethnic group in their real-life environment.
- Interviews : Researchers ask participants questions to learn about their perspectives on a particular subject.
Quantitative Research Methods
- Questionnaires and Surveys : Participants receive a list of questions, either closed-ended or multiple choice, which are directed around a particular topic.
- Experiments : Researchers control and test variables to demonstrate cause-and-effect relationships.
- Observations : Researchers look at quantifiable patterns and behavior.
- Structured Interviews : Using a predetermined structure, researchers ask participants a fixed set of questions to acquire numerical data.
Choosing a Research Strategy
When choosing which research strategy to employ for a project or study, a number of considerations apply. One key piece of information to help determine whether to use a qualitative vs. quantitative research method is which phase of development the study is in.
For example, if a project is in its early stages and requires more research to find a testable hypothesis, qualitative research methods might prove most helpful. On the other hand, if the research team has already established a hypothesis or theory, quantitative research methods will provide data that can validate the theory or refine it for further testing.
It’s also important to understand a project’s research goals. For instance, do researchers aim to produce findings that reveal how to best encourage student engagement in math? Or is the goal to determine how many students are passing geometry? These two scenarios require distinct sets of data, which will determine the best methodology to employ.
In some situations, studies will benefit from a mixed-methods approach. Using the goals in the above example, one set of data could find the percentage of students passing geometry, which would be quantitative. The research team could also lead a focus group with the students achieving success to discuss which techniques and teaching practices they find most helpful, which would produce qualitative data.
Learn How to Put Education Research into Action
Those with an interest in learning how to harness research to develop innovative ideas to improve education systems may want to consider pursuing a doctoral degree. American University’s School of Education online offers a Doctor of Education (EdD) in Education Policy and Leadership that prepares future educators, school administrators, and other education professionals to become leaders who effect positive changes in schools. Courses such as Applied Research Methods I: Enacting Critical Research provide students with the techniques and research skills needed to begin conducting research exploring new ways to enhance education. Learn more about American University’s EdD in Education Policy and Leadership.
J Korean Med Sci. 2022 Apr 25; 37(16)
A Practical Guide to Writing Quantitative and Qualitative Research Questions and Hypotheses in Scholarly Articles
Edward Barroga
1 Department of General Education, Graduate School of Nursing Science, St. Luke’s International University, Tokyo, Japan.
Glafera Janet Matanguihan
2 Department of Biological Sciences, Messiah University, Mechanicsburg, PA, USA.
The development of research questions and the subsequent hypotheses are prerequisites to defining the main research purpose and specific objectives of a study. Consequently, these objectives determine the study design and research outcome. The development of research questions is a process based on knowledge of current trends, cutting-edge studies, and technological advances in the research field. Excellent research questions are focused and require a comprehensive literature search and in-depth understanding of the problem being investigated. Initially, research questions may be written as descriptive questions which could be developed into inferential questions. These questions must be specific and concise to provide a clear foundation for developing hypotheses. Hypotheses are more formal predictions about the research outcomes. These specify the possible results that may or may not be expected regarding the relationship between groups. Thus, research questions and hypotheses clarify the main purpose and specific objectives of the study, which in turn dictate the design of the study, its direction, and outcome. Studies developed from good research questions and hypotheses will have trustworthy outcomes with wide-ranging social and health implications.
INTRODUCTION
Scientific research is usually initiated by posing evidence-based research questions which are then explicitly restated as hypotheses. 1, 2 The hypotheses provide directions to guide the study, solutions, explanations, and expected results. 3, 4 Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the inception of novel studies and the ethical testing of ideas. 5, 6
It is crucial to have knowledge of both quantitative and qualitative research 2 as both types of research involve writing research questions and hypotheses. 7 However, these crucial elements of research are sometimes overlooked; if not overlooked, then framed without the forethought and meticulous attention they need. Planning and careful consideration are needed when developing quantitative or qualitative research, particularly when conceptualizing research questions and hypotheses. 4
There is a continuing need to support researchers in the creation of innovative research questions and hypotheses, as well as for journal articles that carefully review these elements. 1 When research questions and hypotheses are not carefully thought of, unethical studies and poor outcomes usually ensue. Carefully formulated research questions and hypotheses define well-founded objectives, which in turn determine the appropriate design, course, and outcome of the study. This article then aims to discuss in detail the various aspects of crafting research questions and hypotheses, with the goal of guiding researchers as they develop their own. Examples from the authors and peer-reviewed scientific articles in the healthcare field are provided to illustrate key points.
DEFINITIONS AND RELATIONSHIP OF RESEARCH QUESTIONS AND HYPOTHESES
A research question is what a study aims to answer after data analysis and interpretation. The answer is written in length in the discussion section of the paper. Thus, the research question gives a preview of the different parts and variables of the study meant to address the problem posed in the research question. 1 An excellent research question clarifies the research writing while facilitating understanding of the research topic, objective, scope, and limitations of the study. 5
On the other hand, a research hypothesis is an educated statement of an expected outcome. This statement is based on background research and current knowledge. 8 , 9 The research hypothesis makes a specific prediction about a new phenomenon 10 or a formal statement on the expected relationship between an independent variable and a dependent variable. 3 , 11 It provides a tentative answer to the research question to be tested or explored. 4
Hypotheses employ reasoning to predict a theory-based outcome. 10 These can also be developed from theories by focusing on components of theories that have not yet been observed. 10 The validity of hypotheses is often based on the testability of the prediction made in a reproducible experiment. 8
Conversely, hypotheses can also be rephrased as research questions. Several hypotheses based on existing theories and knowledge may be needed to answer a research question. Developing ethical research questions and hypotheses creates a research design that has logical relationships among variables. These relationships serve as a solid foundation for the conduct of the study. 4 , 11 Haphazardly constructed research questions can result in poorly formulated hypotheses and improper study designs, leading to unreliable results. Thus, the formulations of relevant research questions and verifiable hypotheses are crucial when beginning research. 12
CHARACTERISTICS OF GOOD RESEARCH QUESTIONS AND HYPOTHESES
Excellent research questions are specific and focused. These integrate collective data and observations to confirm or refute the subsequent hypotheses. Well-constructed hypotheses are based on previous reports and verify the research context. These are realistic, in-depth, sufficiently complex, and reproducible. More importantly, these hypotheses can be addressed and tested. 13
There are several characteristics of well-developed hypotheses. Good hypotheses are 1) empirically testable 7, 10, 11, 13 ; 2) backed by preliminary evidence 9 ; 3) testable by ethical research 7, 9 ; 4) based on original ideas 9 ; 5) have evidence-based logical reasoning 10 ; and 6) can be predicted. 11 Good hypotheses can infer ethical and positive implications, indicating the presence of a relationship or effect relevant to the research theme. 7, 11 These are initially developed from a general theory and branch into specific hypotheses by deductive reasoning. In the absence of a theory to base the hypotheses on, inductive reasoning based on specific observations or findings forms more general hypotheses. 10
TYPES OF RESEARCH QUESTIONS AND HYPOTHESES
Research questions and hypotheses are developed according to the type of research, which can be broadly classified into quantitative and qualitative research. We provide a summary of the types of research questions and hypotheses under quantitative and qualitative research categories in Table 1 .
| Quantitative research questions | Quantitative research hypotheses |
|---|---|
| Descriptive research questions | Simple hypothesis |
| Comparative research questions | Complex hypothesis |
| Relationship research questions | Directional hypothesis |
|  | Non-directional hypothesis |
|  | Associative hypothesis |
|  | Causal hypothesis |
|  | Null hypothesis |
|  | Alternative hypothesis |
|  | Working hypothesis |
|  | Statistical hypothesis |
|  | Logical hypothesis |
|  | Hypothesis-testing |
| Qualitative research questions | Qualitative research hypotheses |
| Contextual research questions | Hypothesis-generating |
| Descriptive research questions |  |
| Evaluation research questions |  |
| Explanatory research questions |  |
| Exploratory research questions |  |
| Generative research questions |  |
| Ideological research questions |  |
| Ethnographic research questions |  |
| Phenomenological research questions |  |
| Grounded theory questions |  |
| Qualitative case study questions |  |
Research questions in quantitative research
In quantitative research, research questions inquire about the relationships among variables being investigated and are usually framed at the start of the study. These are precise and typically linked to the subject population, dependent and independent variables, and research design. 1 Research questions may also attempt to describe the behavior of a population in relation to one or more variables, or describe the characteristics of variables to be measured ( descriptive research questions ). 1 , 5 , 14 These questions may also aim to discover differences between groups within the context of an outcome variable ( comparative research questions ), 1 , 5 , 14 or elucidate trends and interactions among variables ( relationship research questions ). 1 , 5 We provide examples of descriptive, comparative, and relationship research questions in quantitative research in Table 2 .
Quantitative research questions

Descriptive research question
- Measures responses of subjects to variables; presents variables to measure, analyze, or assess
- Example: What is the proportion of resident doctors in the hospital who have mastered ultrasonography (response of subjects to a variable) as a diagnostic technique in their clinical training?

Comparative research question
- Clarifies the difference between one group with the outcome variable and another group without the outcome variable
- Example: Is there a difference in the reduction of lung metastasis in osteosarcoma patients who received the vitamin D adjunctive therapy (group with outcome variable) compared with osteosarcoma patients who did not receive the vitamin D adjunctive therapy (group without outcome variable)?
- Compares the effects of variables
- Example: How does the vitamin D analogue 22-Oxacalcitriol (variable 1) mimic the antiproliferative activity of 1,25-Dihydroxyvitamin D (variable 2) in osteosarcoma cells?

Relationship research question
- Defines trends, associations, relationships, or interactions between a dependent variable and an independent variable
- Example: Is there a relationship between the number of medical student suicides (dependent variable) and the level of medical student stress (independent variable) in Japan during the first wave of the COVID-19 pandemic?
Hypotheses in quantitative research
In quantitative research, hypotheses predict the expected relationships among variables. 15 Relationships among variables that can be predicted include 1) between a single dependent variable and a single independent variable ( simple hypothesis ) or 2) between two or more independent and dependent variables ( complex hypothesis ). 4, 11 Hypotheses may also specify the expected direction to be followed and imply an intellectual commitment to a particular outcome ( directional hypothesis ). 4 On the other hand, hypotheses may not predict the exact direction and are used in the absence of a theory, or when findings contradict previous studies ( non-directional hypothesis ). 4 In addition, hypotheses can 1) define interdependency between variables ( associative hypothesis ), 4 2) propose an effect on the dependent variable from manipulation of the independent variable ( causal hypothesis ), 4 3) state a negative relationship between two variables ( null hypothesis ), 4, 11, 15 4) replace the working hypothesis if rejected ( alternative hypothesis ), 15 5) explain the relationship of phenomena to possibly generate a theory ( working hypothesis ), 11 6) involve quantifiable variables that can be tested statistically ( statistical hypothesis ), 11 or 7) express a relationship whose interlinks can be verified logically ( logical hypothesis ). 11 We provide examples of simple, complex, directional, non-directional, associative, causal, null, alternative, working, statistical, and logical hypotheses in quantitative research, as well as the definition of quantitative hypothesis-testing research, in Table 3 .
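As an illustrative aside (not from the source article), a null hypothesis of "no difference between two groups" can be checked with a two-sample t statistic; the group data below are invented for demonstration:

```python
# Sketch only: a two-sample t statistic (equal-variance form) for a null
# hypothesis of "no difference between two drugs". All data are invented.
from statistics import mean, variance

drug_a = [120, 125, 130, 118, 127]  # e.g., blood pressure on the current drug
drug_b = [110, 115, 112, 108, 117]  # e.g., blood pressure on the new drug

na, nb = len(drug_a), len(drug_b)
# Pooled sample variance across both groups
sp2 = ((na - 1) * variance(drug_a) + (nb - 1) * variance(drug_b)) / (na + nb - 2)
# t statistic: difference in means scaled by its standard error
t = (mean(drug_a) - mean(drug_b)) / (sp2 * (1 / na + 1 / nb)) ** 0.5

print(f"t = {t:.2f}")  # compared against a t distribution with na + nb - 2 df
```

In practice, statistical software such as SPSS or R would compute the statistic and its p-value; a sufficiently extreme t value leads to rejecting the null hypothesis in favor of the alternative.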
Quantitative research hypotheses

| Hypothesis type | Definition | Example |
| --- | --- | --- |
| Simple hypothesis | Predicts a relationship between a single dependent variable and a single independent variable | If the dose of the new medication (single independent variable) is high, blood pressure (single dependent variable) is lowered. |
| Complex hypothesis | Predicts a relationship between two or more independent and dependent variables | The higher the use of anticancer drugs, radiation therapy, and adjunctive agents (3 independent variables), the higher would be the survival rate (1 dependent variable). |
| Directional hypothesis | Identifies the study direction based on theory towards a particular outcome to clarify the relationship between variables | Privately funded research projects will have a larger international scope (study direction) than publicly funded research projects. |
| Non-directional hypothesis | The nature of the relationship between two variables or the exact study direction is not identified; does not involve a theory | Women and men are different in terms of helpfulness. (Exact study direction is not identified) |
| Associative hypothesis | Describes variable interdependency; a change in one variable is accompanied by a change in another variable | A larger number of people vaccinated against COVID-19 in the region (change in independent variable) will reduce the region’s incidence of COVID-19 infection (change in dependent variable). |
| Causal hypothesis | An effect on the dependent variable is predicted from manipulation of the independent variable | A change to a high-fiber diet (independent variable) will reduce the blood sugar level (dependent variable) of the patient. |
| Null hypothesis | A negative statement indicating no relationship or difference between 2 variables | There is no significant difference in the severity of pulmonary metastases between the new drug (variable 1) and the current drug (variable 2). |
| Alternative hypothesis | Following a null hypothesis, an alternative hypothesis predicts a relationship between 2 study variables | The new drug (variable 1) is better on average in reducing the level of pain from pulmonary metastasis than the current drug (variable 2). |
| Working hypothesis | A hypothesis that is initially accepted for further research to produce a feasible theory | Dairy cows fed with concentrates of different formulations will produce different amounts of milk. |
| Statistical hypothesis | An assumption about the value of a population parameter or the relationship among several population characteristics; validity is tested by a statistical experiment or analysis | The mean recovery rate from COVID-19 infection (value of population parameter) is not significantly different between population 1 and population 2. There is a positive correlation between the level of stress at the workplace and the number of suicides (population characteristics) among working people in Japan. |
| Logical hypothesis | Offers or proposes an explanation with limited or no extensive evidence | If healthcare workers provide more educational programs about contraception methods, the number of adolescent pregnancies will be less. |
| Hypothesis-testing (quantitative hypothesis-testing research) | Quantitative research uses deductive reasoning; this involves the formation of a hypothesis, collection of data in the investigation of the problem, analysis and use of the data from the investigation, and drawing of conclusions to validate or nullify the hypotheses. | |
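The deductive flow described above (form a hypothesis, collect data, analyze, then validate or nullify) can be sketched with a minimal two-sample t-test. This is an illustrative example, not from the article: the blood-pressure values are invented, and the critical value 2.145 is the standard two-tailed cutoff for α = 0.05 with 14 degrees of freedom.

```python
# Hedged sketch of deductive hypothesis testing (all data invented):
# H0 (null hypothesis): mean blood pressure is the same in both groups.
# H1 (alternative hypothesis): the new drug changes mean blood pressure.
from statistics import mean, variance

new_drug = [118, 121, 115, 119, 116, 120, 114, 117]  # mmHg after new drug
control  = [128, 131, 125, 129, 127, 132, 126, 130]  # mmHg after placebo

n1, n2 = len(new_drug), len(control)
# Standard error of the difference in means (Welch-style)
se = (variance(new_drug) / n1 + variance(control) / n2) ** 0.5
t_stat = (mean(new_drug) - mean(control)) / se

# Two-tailed critical value for alpha = 0.05, 14 degrees of freedom
T_CRITICAL = 2.145
reject_null = abs(t_stat) > T_CRITICAL
print(f"t = {t_stat:.2f}; reject H0: {reject_null}")
```

In a real study the p-value would come from a statistics package rather than a tabulated critical value, but the logic of validating or nullifying the null hypothesis is the same.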
Research questions in qualitative research
Unlike research questions in quantitative research, research questions in qualitative research are usually continuously reviewed and reformulated. Researchers state a central question and associated subquestions rather than hypotheses. 15 The central question broadly explores a complex set of factors surrounding the central phenomenon, aiming to present the varied perspectives of participants. 15
There are varied goals for which qualitative research questions are developed. These questions can function in several ways, such as to 1) identify and describe existing conditions (contextual research questions); 2) describe a phenomenon (descriptive research questions); 3) assess the effectiveness of existing methods, protocols, theories, or procedures (evaluation research questions); 4) examine a phenomenon or analyze the reasons or relationships between subjects or phenomena (explanatory research questions); or 5) focus on unknown aspects of a particular topic (exploratory research questions). 5 In addition, some qualitative research questions provide new ideas for the development of theories and actions (generative research questions) or advance specific ideologies of a position (ideological research questions). 1 Other qualitative research questions may build on a body of existing literature and become working guidelines (ethnographic research questions). Research questions may also be broadly stated without specific reference to the existing literature or a typology of questions (phenomenological research questions), may be directed towards generating a theory of some process (grounded theory questions), or may address a description of the case and the emerging themes (qualitative case study questions). 15 We provide examples of contextual, descriptive, evaluation, explanatory, exploratory, generative, ideological, ethnographic, phenomenological, grounded theory, and qualitative case study research questions in qualitative research in Table 4 , and the definition of qualitative hypothesis-generating research in Table 5 .
Qualitative research questions

| Question type | Definition | Example |
| --- | --- | --- |
| Contextual research question | Asks about the nature of what already exists; individuals or groups function to further clarify and understand the natural context of real-world problems | What are the experiences of nurses working night shifts in healthcare during the COVID-19 pandemic? (natural context of real-world problems) |
| Descriptive research question | Aims to describe a phenomenon | What are the different forms of disrespect and abuse (phenomenon) experienced by Tanzanian women when giving birth in healthcare facilities? |
| Evaluation research question | Examines the effectiveness of existing practice or accepted frameworks | How effective are decision aids (effectiveness of existing practice) in helping decide whether to give birth at home or in a healthcare facility? |
| Explanatory research question | Clarifies a previously studied phenomenon and explains why it occurs | Why is there an increase in teenage pregnancy (phenomenon) in Tanzania? |
| Exploratory research question | Explores areas that have not been fully investigated to gain a deeper understanding of the research problem | What factors affect the mental health of medical students (areas that have not yet been fully investigated) during the COVID-19 pandemic? |
| Generative research question | Develops an in-depth understanding of people’s behavior by asking ‘how would’ or ‘what if’ to identify problems and find solutions | How would the extensive research experience of the behavior of new staff impact the success of the novel drug initiative? |
| Ideological research question | Aims to advance specific ideas or ideologies of a position | Are Japanese nurses who volunteer in remote African hospitals able to promote humanized care of patients (specific ideas or ideologies) in the areas of safe patient environment, respect of patient privacy, and provision of accurate information related to health and care? |
| Ethnographic research question | Clarifies peoples’ nature, activities, their interactions, and the outcomes of their actions in specific settings | What are the demographic characteristics, rehabilitative treatments, community interactions, and disease outcomes (nature, activities, their interactions, and the outcomes) of people in China who are suffering from pneumoconiosis? |
| Phenomenological research question | Seeks to understand the phenomena that have impacted an individual | What are the lived experiences of parents who have been living with and caring for children with a diagnosis of autism? (phenomena that have impacted an individual) |
| Grounded theory question | Focuses on social processes, asking what happens and how people interact, or uncovering social relationships and behaviors of groups | What are the problems that pregnant adolescents face in terms of social and cultural norms (social processes), and how can these be addressed? |
| Qualitative case study question | Assesses a phenomenon using different sources of data to answer “why” and “how” questions; considers how the phenomenon is influenced by its contextual situation | How does quitting work and assuming the role of a full-time mother (phenomenon assessed) change the lives of women in Japan? |
Qualitative research hypotheses

| Hypothesis type | Definition |
| --- | --- |
| Hypothesis-generating (qualitative hypothesis-generating research) | Qualitative research uses inductive reasoning; this involves data collection from study participants or the literature regarding a phenomenon of interest, using the collected data to develop a formal hypothesis, and using the formal hypothesis as a framework for testing the hypothesis. Qualitative exploratory studies explore areas deeper, clarifying subjective experience and allowing formulation of a formal hypothesis potentially testable in a future quantitative approach. |
Qualitative studies usually pose at least one central research question and several subquestions starting with How or What . These research questions use exploratory verbs such as explore or describe . They also focus on one central phenomenon of interest and may mention the participants and research site. 15
Hypotheses in qualitative research
Hypotheses in qualitative research are stated in the form of a clear statement concerning the problem to be investigated. Unlike in quantitative research where hypotheses are usually developed to be tested, qualitative research can lead to both hypothesis-testing and hypothesis-generating outcomes. 2 When studies require both quantitative and qualitative research questions, this suggests an integrative process between both research methods wherein a single mixed-methods research question can be developed. 1
FRAMEWORKS FOR DEVELOPING RESEARCH QUESTIONS AND HYPOTHESES
Research questions followed by hypotheses should be developed before the start of the study. 1 , 12 , 14 It is crucial to develop feasible research questions on a topic that is interesting to both the researcher and the scientific community. This can be achieved by a meticulous review of previous and current studies to establish a novel topic. Specific areas are subsequently focused on to generate ethical research questions. The relevance of the research questions is evaluated in terms of clarity of the resulting data, specificity of the methodology, objectivity of the outcome, depth of the research, and impact of the study. 1 , 5 These aspects constitute the FINER criteria (i.e., Feasible, Interesting, Novel, Ethical, and Relevant). 1 Clarity and effectiveness are achieved if research questions meet the FINER criteria. In addition to the FINER criteria, Ratan et al. described focus, complexity, novelty, feasibility, and measurability for evaluating the effectiveness of research questions. 14
The PICOT and PEO frameworks are also used when developing research questions. 1 These frameworks address the following elements. PICOT: P-population/patients/problem, I-intervention or indicator being studied, C-comparison group, O-outcome of interest, and T-timeframe of the study. PEO: P-population being studied, E-exposure to preexisting conditions, and O-outcome of interest. 1 Research questions are also considered good if they meet the “FINERMAPS” framework: Feasible, Interesting, Novel, Ethical, Relevant, Manageable, Appropriate, Potential value/publishable, and Systematic. 14
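As a purely illustrative sketch (not part of the cited framework literature), the PICOT elements can be held in a small data structure and used to assemble a draft research question; all example values below are invented.

```python
# Hypothetical sketch: PICOT elements as a data structure for drafting
# a research question. Every value here is an invented example.
from dataclasses import dataclass

@dataclass
class PICOT:
    population: str    # P: population/patients/problem
    intervention: str  # I: intervention or indicator being studied
    comparison: str    # C: comparison group
    outcome: str       # O: outcome of interest
    timeframe: str     # T: timeframe of the study

    def draft_question(self) -> str:
        # Assemble the elements into one draft question for refinement
        return (f"In {self.population}, does {self.intervention}, "
                f"compared with {self.comparison}, affect {self.outcome} "
                f"over {self.timeframe}?")

q = PICOT(
    population="adults with type 2 diabetes",
    intervention="a high-fiber diet",
    comparison="a standard diet",
    outcome="fasting blood glucose",
    timeframe="12 weeks",
)
print(q.draft_question())
```

Filling in each slot forces the researcher to state the population, comparison, and timeframe explicitly, which is exactly what the unclear statements in the tables below fail to do.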
As we indicated earlier, research questions and hypotheses that are not carefully formulated result in unethical studies or poor outcomes. To illustrate this, we provide examples of ambiguous research questions and hypotheses that result in unclear and weak research objectives in quantitative research (Table 6) 16 and qualitative research (Table 7) 17, and show how to transform these ambiguous statements into clear and good ones.
| Variables | Unclear and weak statement (Statement 1) | Clear and good statement (Statement 2) | Points to avoid |
| --- | --- | --- | --- |
| Research question | Which is more effective between smoke moxibustion and smokeless moxibustion? | “Moreover, regarding smoke moxibustion versus smokeless moxibustion, it remains unclear which is more effective, safe, and acceptable to pregnant women, and whether there is any difference in the amount of heat generated.” | 1) Vague and unfocused questions; 2) closed questions simply answerable by yes or no; 3) questions requiring a simple choice |
| Hypothesis | The smoke moxibustion group will have higher cephalic presentation. | “Hypothesis 1. The smoke moxibustion stick group (SM group) and smokeless moxibustion stick group (SLM group) will have higher rates of cephalic presentation after treatment than the control group. Hypothesis 2. The SM group and SLM group will have higher rates of cephalic presentation at birth than the control group. Hypothesis 3. There will be no significant differences in the well-being of the mother and child among the three groups in terms of the following outcomes: premature birth, premature rupture of membranes (PROM) at < 37 weeks, Apgar score < 7 at 5 min, umbilical cord blood pH < 7.1, admission to neonatal intensive care unit (NICU), and intrauterine fetal death.” | 1) Unverifiable hypotheses; 2) incompletely stated groups of comparison; 3) insufficiently described variables or outcomes |
| Research objective | To determine which is more effective between smoke moxibustion and smokeless moxibustion. | “The specific aims of this pilot study were (a) to compare the effects of smoke moxibustion and smokeless moxibustion treatments with the control group as a possible supplement to ECV for converting breech presentation to cephalic presentation and increasing adherence to the newly obtained cephalic position, and (b) to assess the effects of these treatments on the well-being of the mother and child.” | 1) Poor understanding of the research question and hypotheses; 2) insufficient description of population, variables, or study outcomes |
a These statements were composed for comparison and illustrative purposes only.
b These statements are direct quotes from Higashihara and Horiuchi. 16
| Variables | Unclear and weak statement (Statement 1) | Clear and good statement (Statement 2) | Points to avoid |
| --- | --- | --- | --- |
| Research question | Does disrespect and abuse (D&A) occur in childbirth in Tanzania? | How does disrespect and abuse (D&A) occur and what are the types of physical and psychological abuses observed in midwives’ actual care during facility-based childbirth in urban Tanzania? | 1) Ambiguous or oversimplistic questions; 2) questions unverifiable by data collection and analysis |
| Hypothesis | Disrespect and abuse (D&A) occur in childbirth in Tanzania. | Hypothesis 1: Several types of physical and psychological abuse by midwives in actual care occur during facility-based childbirth in urban Tanzania. Hypothesis 2: Weak nursing and midwifery management contribute to the D&A of women during facility-based childbirth in urban Tanzania. | 1) Statements simply expressing facts; 2) insufficiently described concepts or variables |
| Research objective | To describe disrespect and abuse (D&A) in childbirth in Tanzania. | “This study aimed to describe from actual observations the respectful and disrespectful care received by women from midwives during their labor period in two hospitals in urban Tanzania.” | 1) Statements unrelated to the research question and hypotheses; 2) unattainable or unexplorable objectives |
a This statement is a direct quote from Shimoda et al. 17
The other statements were composed for comparison and illustrative purposes only.
CONSTRUCTING RESEARCH QUESTIONS AND HYPOTHESES
To construct effective research questions and hypotheses, it is very important to 1) clarify the background and 2) identify the research problem at the outset of the research, within a specific timeframe. 9 Then, 3) review or conduct preliminary research to collect all available knowledge about the possible research questions by studying theories and previous studies. 18 Afterwards, 4) construct research questions to investigate the research problem. Identify the variables to be assessed from the research questions 4 and make operational definitions of constructs from the research problem and questions. Thereafter, 5) construct specific deductive or inductive predictions in the form of hypotheses. 4 Finally, 6) state the study aims. This general flow for constructing effective research questions and hypotheses prior to conducting research is shown in Fig. 1 .
Research questions are used more frequently in qualitative research than objectives or hypotheses. 3 These questions seek to discover, understand, explore, or describe experiences by asking “What” or “How.” The questions are open-ended to elicit a description rather than to relate variables or compare groups, and they are continually reviewed, reformulated, and changed during the qualitative study. 3 In quantitative research, by contrast, research questions appear more frequently in survey projects, whereas hypotheses are more common in experiments that compare variables and their relationships.
Hypotheses are constructed based on the variables identified, as an if-then statement following the template, ‘If a specific action is taken, then a certain outcome is expected.’ At this stage, some ideas regarding expectations from the research to be conducted must be drawn. 18 Then, the variables to be manipulated (independent) and influenced (dependent) are defined. 4 Thereafter, the hypothesis is stated and refined, and reproducible data tailored to the hypothesis are identified, collected, and analyzed. 4 Hypotheses must be testable and specific, 18 and should describe the variables and their relationships, the specific group being studied, and the predicted research outcome. 18 Hypothesis construction involves a testable proposition deduced from theory, with independent and dependent variables separated and measured separately. 3 Therefore, good hypotheses must be based on good research questions constructed at the start of a study or trial. 12
In summary, research questions are constructed after establishing the background of the study. Hypotheses are then developed based on the research questions. Thus, it is crucial to have excellent research questions to generate superior hypotheses. In turn, these would determine the research objectives and the design of the study, and ultimately, the outcome of the research. 12 Algorithms for building research questions and hypotheses are shown in Fig. 2 for quantitative research and in Fig. 3 for qualitative research.
EXAMPLES OF RESEARCH QUESTIONS FROM PUBLISHED ARTICLES
- EXAMPLE 1. Descriptive research question (quantitative research)
  - Presents research variables to be assessed (distinct phenotypes and subphenotypes)
  - “BACKGROUND: Since COVID-19 was identified, its clinical and biological heterogeneity has been recognized. Identifying COVID-19 phenotypes might help guide basic, clinical, and translational research efforts. RESEARCH QUESTION: Does the clinical spectrum of patients with COVID-19 contain distinct phenotypes and subphenotypes?” 19
- EXAMPLE 2. Relationship research question (quantitative research)
  - Shows interactions between the dependent variable (static postural control) and the independent variable (peripheral visual field loss)
  - “Background: Integration of visual, vestibular, and proprioceptive sensations contributes to postural control. People with peripheral visual field loss have serious postural instability. However, the directional specificity of postural stability and sensory reweighting caused by gradual peripheral visual field loss remain unclear. Research question: What are the effects of peripheral visual field loss on static postural control?” 20
- EXAMPLE 3. Comparative research question (quantitative research)
  - Clarifies the difference between a group with an outcome variable (patients enrolled in COMPERA with moderate PH or severe PH in COPD) and another group without the outcome variable (patients with idiopathic pulmonary arterial hypertension (IPAH))
  - “BACKGROUND: Pulmonary hypertension (PH) in COPD is a poorly investigated clinical condition. RESEARCH QUESTION: Which factors determine the outcome of PH in COPD? STUDY DESIGN AND METHODS: We analyzed the characteristics and outcome of patients enrolled in the Comparative, Prospective Registry of Newly Initiated Therapies for Pulmonary Hypertension (COMPERA) with moderate or severe PH in COPD as defined during the 6th PH World Symposium who received medical therapy for PH and compared them with patients with idiopathic pulmonary arterial hypertension (IPAH).” 21
- EXAMPLE 4. Exploratory research question (qualitative research)
  - Explores areas that have not been fully investigated (perspectives of families and children who receive care in clinic-based child obesity treatment) to gain a deeper understanding of the research problem
  - “Problem: Interventions for children with obesity lead to only modest improvements in BMI and long-term outcomes, and data are limited on the perspectives of families of children with obesity in clinic-based treatment. This scoping review seeks to answer the question: What is known about the perspectives of families and children who receive care in clinic-based child obesity treatment? This review aims to explore the scope of perspectives reported by families of children with obesity who have received individualized outpatient clinic-based obesity treatment.” 22
- EXAMPLE 5. Relationship research question (quantitative research)
  - Defines interactions between the dependent variable (use of ankle strategies) and the independent variable (changes in muscle tone)
  - “Background: To maintain an upright standing posture against external disturbances, the human body mainly employs two types of postural control strategies: “ankle strategy” and “hip strategy.” While it has been reported that the magnitude of the disturbance alters the use of postural control strategies, it has not been elucidated how the level of muscle tone, one of the crucial parameters of bodily function, determines the use of each strategy. We have previously confirmed using forward dynamics simulations of human musculoskeletal models that an increased muscle tone promotes the use of ankle strategies. The objective of the present study was to experimentally evaluate a hypothesis: an increased muscle tone promotes the use of ankle strategies. Research question: Do changes in the muscle tone affect the use of ankle strategies?” 23
EXAMPLES OF HYPOTHESES IN PUBLISHED ARTICLES
- EXAMPLE 1. Working hypothesis (quantitative research)
  - A hypothesis that is initially accepted for further research to produce a feasible theory
  - “As fever may have benefit in shortening the duration of viral illness, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response when taken during the early stages of COVID-19 illness.” 24
  - “In conclusion, it is plausible to hypothesize that the antipyretic efficacy of ibuprofen may be hindering the benefits of a fever response. The difference in perceived safety of these agents in COVID-19 illness could be related to the more potent efficacy to reduce fever with ibuprofen compared to acetaminophen. Compelling data on the benefit of fever warrant further research and review to determine when to treat or withhold ibuprofen for early stage fever for COVID-19 and other related viral illnesses.” 24
- EXAMPLE 2. Exploratory hypothesis (qualitative research)
  - Explores particular areas deeper to clarify subjective experience and develop a formal hypothesis potentially testable in a future quantitative approach
  - “We hypothesized that when thinking about a past experience of help-seeking, a self-distancing prompt would cause increased help-seeking intentions and more favorable help-seeking outcome expectations.” 25
  - “Conclusion: Although a priori hypotheses were not supported, further research is warranted as results indicate the potential for using self-distancing approaches to increasing help-seeking among some people with depressive symptomatology.” 25
- EXAMPLE 3. Hypothesis-generating research to establish a framework for hypothesis testing (qualitative research)
  - “We hypothesize that compassionate care is beneficial for patients (better outcomes), healthcare systems and payers (lower costs), and healthcare providers (lower burnout).” 26
  - “Compassionomics is the branch of knowledge and scientific study of the effects of compassionate healthcare. Our main hypotheses are that compassionate healthcare is beneficial for (1) patients, by improving clinical outcomes, (2) healthcare systems and payers, by supporting financial sustainability, and (3) HCPs, by lowering burnout and promoting resilience and well-being. The purpose of this paper is to establish a scientific framework for testing the hypotheses above. If these hypotheses are confirmed through rigorous research, compassionomics will belong in the science of evidence-based medicine, with major implications for all healthcare domains.” 26
- EXAMPLE 4. Statistical hypothesis (quantitative research)
  - An assumption is made about the relationship among several population characteristics (gender differences in sociodemographic and clinical characteristics of adults with ADHD). Validity is tested by a statistical experiment or analysis (chi-squared test, Student’s t-test, and logistic regression analysis)
  - “Our research investigated gender differences in sociodemographic and clinical characteristics of adults with ADHD in a Japanese clinical sample. Due to unique Japanese cultural ideals and expectations of women's behavior that are in opposition to ADHD symptoms, we hypothesized that women with ADHD experience more difficulties and present more dysfunctions than men. We tested the following hypotheses: first, women with ADHD have more comorbidities than men with ADHD; second, women with ADHD experience more social hardships than men, such as having less full-time employment and being more likely to be divorced.” 27
  - “Statistical Analysis: (text omitted) Between-gender comparisons were made using the chi-squared test for categorical variables and Student’s t-test for continuous variables (text omitted). A logistic regression analysis was performed for employment status, marital status, and comorbidity to evaluate the independent effects of gender on these dependent variables.” 27
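As a hedged illustration of the kind of between-gender comparison of categorical variables described in Example 4, here is a minimal chi-squared test of independence on an invented 2×2 contingency table. The counts are not from the cited study, and the critical value 3.841 is the standard α = 0.05 cutoff for 1 degree of freedom.

```python
# Hypothetical sketch: chi-squared test of independence between gender
# and employment status. All counts below are invented for illustration.

# 2x2 contingency table: rows = gender, columns = employment status
#              full-time  not full-time
observed = [[30, 70],   # women with ADHD (hypothetical counts)
            [55, 45]]   # men with ADHD (hypothetical counts)

row_totals = [sum(row) for row in observed]
col_totals = [sum(col) for col in zip(*observed)]
grand_total = sum(row_totals)

# Sum (observed - expected)^2 / expected over all cells, where the
# expected counts follow from the null hypothesis of independence
chi_sq = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        expected = row_totals[i] * col_totals[j] / grand_total
        chi_sq += (obs - expected) ** 2 / expected

# Critical value for alpha = 0.05 with (2-1)*(2-1) = 1 degree of freedom
CHI_CRITICAL = 3.841
print(f"chi-squared = {chi_sq:.2f}; reject H0: {chi_sq > CHI_CRITICAL}")
```

Rejecting the null here would support the statistical hypothesis that employment status differs by gender; the logistic regression step mentioned in the quote would then estimate the size of that effect while adjusting for other variables.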
EXAMPLES OF HYPOTHESIS AS WRITTEN IN PUBLISHED ARTICLES IN RELATION TO OTHER PARTS
- EXAMPLE 1. Background, hypotheses, and aims are provided
  - “Pregnant women need skilled care during pregnancy and childbirth, but that skilled care is often delayed in some countries … (text omitted). The focused antenatal care (FANC) model of WHO recommends that nurses provide information or counseling to all pregnant women … (text omitted). Job aids are visual support materials that provide the right kind of information using graphics and words in a simple and yet effective manner. When nurses are not highly trained or have many work details to attend to, these job aids can serve as a content reminder for the nurses and can be used for educating their patients (Jennings, Yebadokpo, Affo, & Agbogbe, 2010) (text omitted). Importantly, additional evidence is needed to confirm how job aids can further improve the quality of ANC counseling by health workers in maternal care … (text omitted)” 28
  - “This has led us to hypothesize that the quality of ANC counseling would be better if supported by job aids. Consequently, a better quality of ANC counseling is expected to produce higher levels of awareness concerning the danger signs of pregnancy and a more favorable impression of the caring behavior of nurses.” 28
  - “This study aimed to examine the differences in the responses of pregnant women to a job aid-supported intervention during ANC visit in terms of 1) their understanding of the danger signs of pregnancy and 2) their impression of the caring behaviors of nurses to pregnant women in rural Tanzania.” 28
- EXAMPLE 2. Background, hypotheses, and aims are provided
  - “We conducted a two-arm randomized controlled trial (RCT) to evaluate and compare changes in salivary cortisol and oxytocin levels of first-time pregnant women between experimental and control groups. The women in the experimental group touched and held an infant for 30 min (experimental intervention protocol), whereas those in the control group watched a DVD movie of an infant (control intervention protocol). The primary outcome was salivary cortisol level and the secondary outcome was salivary oxytocin level.” 29
  - “We hypothesize that at 30 min after touching and holding an infant, the salivary cortisol level will significantly decrease and the salivary oxytocin level will increase in the experimental group compared with the control group.” 29
- EXAMPLE 3. Background, aim, and hypothesis are provided
  - “In countries where the maternal mortality ratio remains high, antenatal education to increase Birth Preparedness and Complication Readiness (BPCR) is considered one of the top priorities [1]. BPCR includes birth plans during the antenatal period, such as the birthplace, birth attendant, transportation, health facility for complications, expenses, and birth materials, as well as family coordination to achieve such birth plans. In Tanzania, although increasing, only about half of all pregnant women attend an antenatal clinic more than four times [4]. Moreover, the information provided during antenatal care (ANC) is insufficient. In the resource-poor settings, antenatal group education is a potential approach because of the limited time for individual counseling at antenatal clinics.” 30
  - “This study aimed to evaluate an antenatal group education program among pregnant women and their families with respect to birth-preparedness and maternal and infant outcomes in rural villages of Tanzania.” 30
  - “The study hypothesis was if Tanzanian pregnant women and their families received a family-oriented antenatal group education, they would (1) have a higher level of BPCR, (2) attend antenatal clinic four or more times, (3) give birth in a health facility, (4) have less complications of women at birth, and (5) have less complications and deaths of infants than those who did not receive the education.” 30
Research questions and hypotheses are crucial components of any type of research, whether quantitative or qualitative, and should be developed at the very beginning of the study. Excellent research questions lead to superior hypotheses, which, like a compass, set the direction of research and can often determine the successful conduct of the study. Many research studies have floundered because the development of research questions and subsequent hypotheses was not given the thought and meticulous attention needed. The development of research questions and hypotheses is an iterative process based on extensive knowledge of the literature and an insightful grasp of the knowledge gap. Focused, concise, and specific research questions provide a strong foundation for constructing hypotheses, which serve as formal predictions about the research outcomes. Research questions and hypotheses should therefore be carefully thought out when planning research; this avoids unethical studies and poor outcomes by defining well-founded objectives that determine the design, course, and outcome of the study.
Disclosure: The authors have no potential conflicts of interest to disclose.
Author Contributions:
- Conceptualization: Barroga E, Matanguihan GJ.
- Methodology: Barroga E, Matanguihan GJ.
- Writing - original draft: Barroga E, Matanguihan GJ.
- Writing - review & editing: Barroga E, Matanguihan GJ.
What is Quantitative Research? Definition, Methods, Types, and Examples
If you’re wondering what quantitative research is and whether this methodology works for your research study, you’re not alone. For a simple quantitative research definition, it is enough to say that this is a method of collecting numerical data and analyzing it statistically to test a theory or hypothesis. However, to select the most appropriate method for their study type, researchers should know all the methods available.
Selecting the right research method depends on a few important criteria, such as the research question, study type, time, costs, data availability, and availability of respondents. There are two main types of research methods— quantitative research and qualitative research. The purpose of quantitative research is to validate or test a theory or hypothesis and that of qualitative research is to understand a subject or event or identify reasons for observed patterns.
Quantitative research methods are used to observe events that affect a particular group of individuals, which is the sample population. In this type of research, diverse numerical data are collected through various methods and then statistically analyzed to aggregate the data, compare them, or show relationships among the data. Quantitative research methods broadly include questionnaires, structured observations, and experiments.
Here are two quantitative research examples:
- Satisfaction surveys sent out by a company regarding their revamped customer service initiatives. Customers are asked to rate their experience on a rating scale of 1 (poor) to 5 (excellent).
- A school has introduced a new after-school program for children, and a few months after commencement, the school sends out feedback questionnaires to the parents of the enrolled children. Such questionnaires usually include close-ended questions that require either definite answers or a Yes/No option. This helps in a quick, overall assessment of the program’s outreach and success.
Table of Contents
What is quantitative research? 1,2
The quantitative research process can be grouped into the following broad steps:
- Theory : Define the problem area or area of interest and create a research question.
- Hypothesis : Develop a hypothesis based on the research question. This hypothesis will be tested in the remaining steps.
- Research design : In this step, the most appropriate quantitative research design will be selected, including deciding on the sample size, selecting respondents, identifying research sites, if any, etc.
- Data collection : This process could be extensive based on your research objective and sample size.
- Data analysis : Statistical analysis is used to analyze the data collected. The results from the analysis help in either supporting or rejecting your hypothesis.
- Present results : Based on the data analysis, conclusions are drawn, and results are presented as accurately as possible.
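The analysis step above can be illustrated with a minimal sketch. It runs a simple two-sample hypothesis test on made-up satisfaction ratings (all numbers are hypothetical, chosen only for illustration); Welch's t statistic is computed with just the Python standard library, and a large absolute t value suggests the group means differ.

```python
import math
import statistics

# Hypothetical data: satisfaction ratings (1-5) from two groups,
# e.g. before and after a customer-service revamp (illustrative numbers).
before = [3, 2, 4, 3, 3, 2, 4, 3, 2, 3]
after = [4, 4, 5, 3, 4, 5, 4, 4, 3, 4]

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    mean_a, mean_b = statistics.mean(a), statistics.mean(b)
    var_a, var_b = statistics.variance(a), statistics.variance(b)
    return (mean_b - mean_a) / math.sqrt(var_a / len(a) + var_b / len(b))

t = welch_t(before, after)
print(round(t, 2))  # a large positive t suggests ratings improved
```

In practice the t statistic would be compared against a critical value (or converted to a p-value) to decide whether to support or reject the hypothesis; that step is omitted here to keep the sketch dependency-free.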
Quantitative research characteristics 4
- Large sample size : This ensures reliability because this sample represents the target population or market. Due to the large sample size, the outcomes can be generalized to the entire population as well, making this one of the important characteristics of quantitative research .
- Structured data and measurable variables: The data are numeric and can be analyzed easily. Quantitative research involves the use of measurable variables such as age, salary range, highest education, etc.
- Easy-to-use data collection methods : The methods include experiments, controlled observations, and questionnaires and surveys with a rating scale or close-ended questions, which require simple and to-the-point answers; are not bound by geographical regions; and are easy to administer.
- Data analysis : Structured and accurate statistical analysis methods using software applications such as Excel, SPSS, R. The analysis is fast, accurate, and less effort intensive.
- Reliable : Respondents answer close-ended questions, so their responses are direct and unambiguous and yield numeric outcomes, which are therefore highly reliable.
- Reusable outcomes : This is a key characteristic: the outcomes of one study can be reused and replicated in other research rather than being exclusive to a single study.
Quantitative research methods 5
Quantitative research methods are classified into two types—primary and secondary.
Primary quantitative research method:
In this type of quantitative research , data are directly collected by the researchers using the following methods.
– Survey research : Surveys are the easiest and most commonly used quantitative research method . They are of two types— cross-sectional and longitudinal.
->Cross-sectional surveys are specifically conducted on a target population for a specified period, that is, these surveys have a specific starting and ending time and researchers study the events during this period to arrive at conclusions. The main purpose of these surveys is to describe and assess the characteristics of a population. There is one independent variable in this study, which is a common factor applicable to all participants in the population, for example, living in a specific city, diagnosed with a specific disease, of a certain age group, etc. An example of a cross-sectional survey is a study to understand why individuals residing in houses built before 1979 in the US are more susceptible to lead contamination.
->Longitudinal surveys are conducted at different time durations. These surveys involve observing the interactions among different variables in the target population, exposing them to various causal factors, and understanding their effects across a longer period. These studies are helpful to analyze a problem in the long term. An example of a longitudinal study is the study of the relationship between smoking and lung cancer over a long period.
– Descriptive research : Explains the current status of an identified and measurable variable. Unlike other types of quantitative research , a hypothesis is not needed at the beginning of the study and can be developed even after data collection. This type of quantitative research describes the characteristics of a problem and answers the what, when, where of a problem. However, it doesn’t answer the why of the problem and doesn’t explore cause-and-effect relationships between variables. Data from this research could be used as preliminary data for another study. Example: A researcher undertakes a study to examine the growth strategy of a company. This sample data can be used by other companies to determine their own growth strategy.
– Correlational research : This quantitative research method is used to establish a relationship between two variables using statistical analysis and analyze how one affects the other. The research is non-experimental because the researcher doesn’t control or manipulate any of the variables. At least two separate sample groups are needed for this research. Example: Researchers studying a correlation between regular exercise and diabetes.
– Causal-comparative research : This type of quantitative research examines the cause-effect relationships in retrospect between a dependent and independent variable and determines the causes of the already existing differences between groups of people. This is not a true experiment because it doesn’t assign participants to groups randomly. Example: To study the wage differences between men and women in the same role. For this, already existing wage information is analyzed to understand the relationship.
– Experimental research : This quantitative research method uses true experiments or scientific methods for determining a cause-effect relation between variables. It involves testing a hypothesis through experiments, in which one or more independent variables are manipulated and their effect on dependent variables is studied. Example: A researcher studies a drug’s effectiveness in treating a disease by administering it to some patients and not to others.
The following data collection methods are commonly used in primary quantitative research :
- Sampling : The most common type is probability sampling, in which a sample is chosen from a larger population using some form of random selection, that is, every member of the population has an equal chance of being selected. The different types of probability sampling are—simple random, systematic, stratified, and cluster sampling.
- Interviews : These are commonly telephonic or face-to-face.
- Observations : Structured observations are most commonly used in quantitative research . In this method, researchers make observations about specific behaviors of individuals in a structured setting.
- Document review : Reviewing existing research or documents to collect evidence for supporting the quantitative research .
- Surveys and questionnaires : Surveys can be administered both online and offline depending on the requirement and sample size.
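Two of the probability sampling schemes listed above, simple random and systematic sampling, can be sketched as follows. The population of student IDs is hypothetical, and the seed is fixed only to make the sketch reproducible.

```python
import random

random.seed(42)  # fixed seed so this sketch is reproducible
population = list(range(1, 101))  # hypothetical roster of 100 student IDs

# Simple random sampling: every member has an equal chance of selection.
simple = random.sample(population, 10)

# Systematic sampling: choose a random starting point, then take
# every k-th member, where k = population size / sample size.
k = len(population) // 10
start = random.randrange(k)
systematic = population[start::k]

print(sorted(simple))
print(systematic)
```

Stratified and cluster sampling follow the same idea but first partition the population (into strata or clusters) before drawing the random sample.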
The data collected can be analyzed in several ways in quantitative research , as listed below:
- Cross-tabulation —Uses a tabular format to draw inferences among collected data
- MaxDiff analysis —Gauges the preferences of the respondents
- TURF analysis —Total Unduplicated Reach and Frequency Analysis; helps in determining the market strategy for a business
- Gap analysis —Identify gaps in attaining the desired results
- SWOT analysis —Helps identify strengths, weaknesses, opportunities, and threats of a product, service, or organization
- Text analysis —Used for interpreting unstructured data
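As a minimal illustration of the first method above, cross-tabulation, the sketch below counts hypothetical survey responses in a row-by-column table using only the standard library (the respondent data are invented for illustration).

```python
from collections import Counter

# Hypothetical survey records: (gender, preferred_program) pairs.
responses = [
    ("F", "STEM"), ("M", "Arts"), ("F", "STEM"), ("M", "STEM"),
    ("F", "Arts"), ("M", "STEM"), ("F", "STEM"), ("M", "Arts"),
]

# Cross-tabulation: count respondents falling into each (row, column) cell.
table = Counter(responses)
rows = sorted({g for g, _ in responses})
cols = sorted({p for _, p in responses})

print("", *cols)
for g in rows:
    print(g, *[table[(g, c)] for c in cols])
```

From such a table one can read off, for example, whether program preference differs by gender; dedicated tools (e.g., `pandas.crosstab` or SPSS) produce the same structure with margins and percentages.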
Secondary quantitative research methods :
This method involves conducting research using already existing, or secondary, data. It is less effort intensive and requires less time. However, researchers should verify the authenticity and recency of the sources used and ensure their accuracy.
The main sources of secondary data are:
- The Internet
- Government and non-government sources
- Public libraries
- Educational institutions
- Commercial information sources such as newspapers, journals, radio, TV
When to use quantitative research 6
Here are some simple ways to decide when to use quantitative research . Use quantitative research to:
- recommend a final course of action
- find whether a consensus exists regarding a particular subject
- generalize results to a larger population
- determine a cause-and-effect relationship between variables
- describe characteristics of specific groups of people
- test hypotheses and examine specific relationships
- identify and establish size of market segments
A research case study to understand when to use quantitative research 7
Context: A study was undertaken to evaluate a major innovation in a hospital’s design, in terms of workforce implications and impact on patient and staff experiences of all single-room hospital accommodations. The researchers undertook a mixed methods approach to answer their research questions. Here, we focus on the quantitative research aspect.
Research questions : What are the advantages and disadvantages for the staff as a result of the hospital’s move to the new design with all single-room accommodations? Did the move affect staff experience and well-being and improve their ability to deliver high-quality care?
Method: The researchers obtained quantitative data from three sources:
- Staff activity (task time distribution): Each staff member was shadowed by a researcher who observed each task undertaken by the staff, and logged the time spent on each activity.
- Staff travel distances : The staff were requested to wear pedometers, which recorded the distances covered.
- Staff experience surveys : Staff were surveyed before and after the move to the new hospital design.
Results of quantitative research : The following observations were made based on quantitative data analysis:
- The move to the new design did not result in a significant change in the proportion of time spent on different activities.
- Staff activity events observed per session were higher after the move, and direct care and professional communication events per hour decreased significantly, suggesting fewer interruptions and less fragmented care.
- A significant increase in medication tasks among the recorded events suggests that medication administration was integrated into patient care activities.
- Travel distances increased for all staff, with highest increases for staff in the older people’s ward and surgical wards.
- Ratings for staff toilet facilities, locker facilities, and space at staff bases were higher but those for social interaction and natural light were lower.
Advantages of quantitative research 1,2
When choosing the right research methodology, also consider the advantages of quantitative research and how it can impact your study.
- Quantitative research methods are more scientific and rational. They use quantifiable data leading to objectivity in the results and avoid any chances of ambiguity.
- This type of research uses numeric data, so analysis is relatively easy.
- In most cases, a hypothesis is already developed and quantitative research helps in testing and validating these constructed theories, based on which researchers can make an informed decision about accepting or rejecting their theory.
- The use of statistical analysis software ensures quick analysis of large volumes of data and is less effort intensive.
- Higher levels of control can be applied to the research so the chances of bias can be reduced.
- Quantitative research is based on measured values, facts, and verifiable information, so it can be easily checked or replicated by other researchers, leading to continuity in scientific research.
Disadvantages of quantitative research 1,2
Quantitative research may also be limiting; take a look at the disadvantages of quantitative research.
- Experiments are conducted in controlled settings instead of natural settings and it is possible for researchers to either intentionally or unintentionally manipulate the experiment settings to suit the results they desire.
- Participants must necessarily give objective answers (either one- or two-word, or yes or no answers) and the reasons for their selection or the context are not considered.
- Inadequate knowledge of statistical analysis methods may affect the results and their interpretation.
- Although statistical analysis indicates the trends or patterns among variables, the reasons for these observed patterns cannot be interpreted and the research may not give a complete picture.
- Large sample sizes are needed for more accurate and generalizable analysis.
- Quantitative research cannot be used to address complex issues.
Frequently asked questions on quantitative research
Q: What is the difference between quantitative research and qualitative research? 1
A: The following table lists the key differences between quantitative research and qualitative research, some of which may have been mentioned earlier in the article.
| Aspect | Quantitative research | Qualitative research |
| --- | --- | --- |
| Purpose and design | To validate or test a theory or hypothesis; structured, predetermined design | To understand a subject or event or identify reasons for observed patterns; flexible, evolving design |
| Research question | What and how many (measurable) | Why and how (exploratory) |
| Sample size | Large | Small |
| Data | Numerical | Textual (words, images, observations) |
| Data collection method | Experiments, controlled observations, questionnaires and surveys with a rating scale or close-ended questions. The methods can be experimental, quasi-experimental, descriptive, or correlational. | Semi-structured interviews/surveys with open-ended questions, document study/literature reviews, focus groups, case study research, ethnography |
| Data analysis | Statistical | Interpretive (e.g., thematic analysis) |
Q: What is the difference between reliability and validity? 8,9
A: The term reliability refers to the consistency of a research study. For instance, if a kitchen weighing scale gives different readings every time the same quantity of food is measured, then that scale is not reliable. If the findings of a research study are consistent every time a measurement is made, the study is considered reliable. However, it is usually unlikely to obtain exactly the same results every time because some contributing variables may change. In such cases, a correlation coefficient is used to assess the degree of reliability: a strong positive correlation between the results indicates reliability.
Validity can be defined as the degree to which a tool actually measures what it claims to measure. It helps confirm the credibility of your research and suggests that the results may be generalizable. In other words, it measures the accuracy of the research.
The following table gives the key differences between reliability and validity.
| Aspect | Reliability | Validity |
| --- | --- | --- |
| Meaning | Refers to the consistency of a measure | Refers to the accuracy of a measure |
| Ease of achieving | Easier, yields results faster | Involves more analysis, more difficult to achieve |
| Assessment method | By examining the consistency of outcomes over time, between various observers, and within the test | By comparing the results with accepted theories and other measurements of the same idea |
| Relationship | Unreliable measurements typically cannot be valid | Valid measurements are also reliable |
| Types | Test-retest reliability, internal consistency, inter-rater reliability | Content validity, criterion validity, face validity, construct validity |
Q: What is mixed methods research? 10
A: A mixed methods approach combines the characteristics of both quantitative research and qualitative research in the same study. This method allows researchers to validate their findings, verify whether the results observed using both methods are complementary, and explain any unexpected results obtained from one method by using the other. A mixed methods research design is useful for research questions that cannot be answered by either quantitative research or qualitative research alone. However, this method can be more effort- and cost-intensive because it requires more resources. Several basic mixed methods research designs (e.g., convergent and sequential designs) could be used depending on the study.
Thus, quantitative research is the appropriate method for testing your hypotheses and can be used either alone or in combination with qualitative research per your study requirements. We hope this article has provided an insight into the various facets of quantitative research , including its different characteristics, advantages, and disadvantages, and a few tips to quickly understand when to use this research method.
References
- Qualitative vs quantitative research: Differences, examples, & methods. Simply Psychology. Accessed Feb 28, 2023. https://simplypsychology.org/qualitative-quantitative.html#Quantitative-Research
- Your ultimate guide to quantitative research. Qualtrics. Accessed February 28, 2023. https://www.qualtrics.com/uk/experience-management/research/quantitative-research/
- The steps of quantitative research. Revise Sociology. Accessed March 1, 2023. https://revisesociology.com/2017/11/26/the-steps-of-quantitative-research/
- What are the characteristics of quantitative research? Marketing91. Accessed March 1, 2023. https://www.marketing91.com/characteristics-of-quantitative-research/
- Quantitative research: Types, characteristics, methods, & examples. ProProfs Survey Maker. Accessed February 28, 2023. https://www.proprofssurvey.com/blog/quantitative-research/#Characteristics_of_Quantitative_Research
- Qualitative research isn’t as scientific as quantitative methods. Kmusial blog. Accessed March 5, 2023. https://kmusial.wordpress.com/2011/11/25/qualitative-research-isnt-as-scientific-as-quantitative-methods/
- Maben J, Griffiths P, Penfold C, et al. Evaluating a major innovation in hospital design: workforce implications and impact on patient and staff experiences of all single room hospital accommodation. Southampton (UK): NIHR Journals Library; 2015 Feb. (Health Services and Delivery Research, No. 3.3.) Chapter 5, Case study quantitative data findings. Accessed March 6, 2023. https://www.ncbi.nlm.nih.gov/books/NBK274429/
- McLeod, S. A. (2007). What is reliability? Simply Psychology. www.simplypsychology.org/reliability.html
- Reliability vs validity: Differences & examples. Accessed March 5, 2023. https://statisticsbyjim.com/basics/reliability-vs-validity/
- Mixed methods research. Community Engagement Program. Harvard Catalyst. Accessed February 28, 2023. https://catalyst.harvard.edu/community-engagement/mmr
Quantitative Research: Examples of Research Questions and Solutions
Are you ready to embark on a journey into the world of quantitative research? Whether you’re a seasoned researcher or just beginning your academic journey, understanding how to formulate effective research questions is essential for conducting meaningful studies. In this blog post, we’ll explore examples of quantitative research questions across various disciplines and discuss how StatsCamp.org courses can provide the tools and support you need to overcome any challenges you may encounter along the way.
Understanding Quantitative Research Questions
Quantitative research involves collecting and analyzing numerical data to answer research questions and test hypotheses. These questions typically seek to understand the relationships between variables, predict outcomes, or compare groups. Let’s explore some examples of quantitative research questions across different fields:
- What is the relationship between class size and student academic performance?
- Does the use of technology in the classroom improve learning outcomes?
- How does parental involvement affect student achievement?
- What is the effect of a new drug treatment on reducing blood pressure?
- Is there a correlation between physical activity levels and the risk of cardiovascular disease?
- How does socioeconomic status influence access to healthcare services?
- What factors influence consumer purchasing behavior?
- Is there a relationship between advertising expenditure and sales revenue?
- How do demographic variables affect brand loyalty?
Stats Camp: Your Solution to Mastering Quantitative Research Methodologies
At StatsCamp.org, we understand that navigating the complexities of quantitative research can be daunting. That’s why we offer a range of courses designed to equip you with the knowledge and skills you need to excel in your research endeavors. Whether you’re interested in learning about regression analysis, experimental design, or structural equation modeling, our experienced instructors are here to guide you every step of the way.
Bringing Your Own Data
One of the unique features of StatsCamp.org is the opportunity to bring your own data to the learning process. Our instructors provide personalized guidance and support to help you analyze your data effectively and overcome any roadblocks you may encounter. Whether you’re struggling with data cleaning, model specification, or interpretation of results, our team is here to help you succeed.
Courses Offered at StatsCamp.org
- Latent Profile Analysis Course : Learn how to identify subgroups, or profiles, within a heterogeneous population based on patterns of responses to multiple observed variables.
- Bayesian Statistics Course : A comprehensive introduction to Bayesian data analysis, a powerful statistical approach for inference and decision-making. Through a series of engaging lectures and hands-on exercises, participants will learn how to apply Bayesian methods to a wide range of research questions and data types.
- Structural Equation Modeling (SEM) Course : Dive into advanced statistical techniques for modeling complex relationships among variables.
- Multilevel Modeling Course : An in-depth exploration of this advanced statistical technique, designed to analyze data with nested structures or hierarchies. Whether you’re studying individuals within groups, schools within districts, or any other nested data structure, multilevel modeling provides the tools to account for the dependencies inherent in such data.
As you embark on your journey into quantitative research, remember that StatsCamp.org is here to support you every step of the way. Whether you’re formulating research questions, analyzing data, or interpreting results, our courses provide the knowledge and expertise you need to succeed. Join us today and unlock the power of quantitative research!
Studies in Engineering Education
New Epistemological Perspectives on Quantitative Methods: An Example Using Topological Data Analysis
- Allison Godwin
- Brianna Benedict
- Jacqueline Rohde
- Aaron Thielmeyer
- Heather Perkins
- Justin Major
- Herman Clements
- Zhihui Chen
Background: Education researchers use quantitative methodologies to examine generalizable correlational trends or causal mechanisms in phenomena or behaviors. These methodologies stem from (post)positivist epistemologies and often rely on statistical methods that use the means of groups or categories to determine significant results. The results can often essentialize findings to all members of a group as truth knowable within some quantifiable error. Additionally, the attitudes and beliefs of the majority (i.e., in engineering, White cis men) often dominate the conclusions drawn and underemphasize responses from minoritized individuals. In recent years, engineering education research has pursued more epistemologically and methodologically diverse perspectives. However, quantitative methodologies remain relatively fixed in their fundamental epistemological framings, goals, and practices.
Purpose: In this theory paper, we discuss the epistemic groundings of traditional quantitative methods and describe an opportunity for new quantitative methods that expand the possible ways of framing and conducting quantitative research—person-centered analyses. This article invites readers to re-examine quantitative research methods.
Scope: This article discusses the challenges and opportunities of novel quantitative methods in engineering education, particularly in the limited epistemic framings associated with traditional statistical methods. The affordances of person-centered analyses within different epistemological paradigms and research methods are considered. Then, we provide an example of a person-centered method, topological data analysis (TDA), to illustrate the unique insights that can be gained from person-centered analyses. TDA is a statistical method that maps the underlying structure of highly dimensional data.
Discussion/Conclusions: This article advances the discussion of quantitative methodologies and methods in engineering education research to offer new epistemological approaches. Considering the foundational epistemic framings of quantitative research can expand the kinds of questions that can be asked and answered. These new approaches offer ways to conduct more interpretive and inclusive quantitative research.
- Epistemology
- Quantitative Methods
- Topological Data Analysis
- Person-Centered Analysis
- Latent Diversity
Introduction
Broadly, the purpose of research is to develop new knowledge or insight regarding a specific topic. As such, researchers and research communities must reflect on how they theorize and frame knowledge (i.e., their epistemologies and methodologies) and their processes to build that knowledge (i.e., their methods). This reflection not only facilitates alignment between research questions, theory, methodology, and methods but also can identify new opportunities for expanding the kinds of questions that can be asked and approaches to conducting research. In this theory paper, we explore emerging epistemic possibilities for quantitative research in the context of engineering education, particularly regarding person-centered analyses. These possibilities may offer ways to conduct more interpretive and inclusive quantitative research.
Engineering education research is practiced within a community that is shaped by the very engineering education systems being studied (Kant & Kerr, 2019). Two major discourses in engineering education research methodologies have emerged from this history: rigor and methodological diversity (Beddoes, 2014). Rigor discourse historically focused on legitimating engineering education as an emerging research field. This discourse has resulted in a history of engineering education research that has emphasized objective and generalizable research methods (Jesiek et al., 2009; Riley, 2017). However, this discourse has also been critiqued as enforcing limited epistemic framings of what counts as high-quality engineering knowledge and perpetuating inequity (Beddoes, 2014; Riley, 2017). More recently, methodological diversity discourse has generated calls for, and demonstrated the value of, varied research approaches, particularly in qualitative research methodologies (Douglas et al., 2010). Researchers have faced challenges incorporating qualitative methods into engineering education research due to boundary spanning between engineering and social science (Douglas et al., 2010). However, in recent years, engineering education has seen a surge in published qualitative papers with methodological diversity (Walther et al., 2017). There have been dedicated conversations clarifying methodological rigor (Streveler et al., 2006), epistemic foundations (Baillie & Douglas, 2010; Douglas et al., 2010), and a holistic framework for qualitative inquiry in engineering education (Walther et al., 2013, 2015, 2017). However, there has been little reflection on the epistemic norms of quantitative research.
Targeting this reflection towards quantitative studies can situate current scholarship in engineering education as well as identify new possibilities that move beyond research methods aligned with a postpositivist epistemology (i.e., truth is knowable within some margin of error) that may be currently overlooked due to norms in the field ( Baillie & Douglas, 2014 ; Koro-Ljungberg & Douglas, 2008 ).
The purpose of this paper is to outline a discussion that invites readers to re-examine quantitative research methods and provides reflections on how an emerging set of quantitative methods—person-centered analyses (PCA)—can expand how we frame research in engineering education. Approaches that employ PCA treat the individual as a unique, holistic entity and work to maintain their whole response in the analysis, as opposed to traditional variable-centered approaches. We also provide an example of a person-centered analysis in engineering education to illustrate the possibilities of this approach. This paper does not provide an exhaustive review of all possible ways that quantitative research can be reconsidered beyond the epistemic norms of (post)positivism. 1 We use a research example to support the arguments made rather than present this example as a set of research findings or specific implications. Instead, we outline a gap in current methodological approaches to quantitative research and invite dialogue around embedded assumptions and norms within quantitative research.
Epistemologies in Social Science and Educational Research
Epistemology refers to beliefs about knowledge and how knowledge is constructed. It is one part of the philosophical assumptions that influences which methodologies and methods researchers consider appropriate ( Crotty, 1998 ; Lather, 2006 ). All aspects of the research process are informed by one’s epistemology, from embedded assumptions about what is known to the development of theories, research questions, and study designs ( Pallas, 2001 ; Collins, 1990 ). Upon the dissemination of findings, epistemologies also influence how research is interpreted and understood within a research community ( Pallas, 2001 ). In social science research, common terms have been developed to describe general categories of epistemologies. We describe three of these categories in this paper: (post)positivism, constructivism, and critical theory. We do not present these categories to continue the “Paradigm Wars” between quantitative and qualitative research as incompatible research approaches (see Bryman, 2008 ). Instead, we present the categories to provide context to the proposed discussion of quantitative methods and non-positivist approaches.
Postpositivism refers to a set of beliefs characterized by the assumption that reality can be known within some degree of certainty. Historically, postpositivism emerged as a response to positivism, an epistemology that was popular in early social science work ( Reed, 2010 ). Positivism takes a narrow view on knowledge production, focusing only on what can be measured and observed, with a strict focus on causality and the separation between knowledge and observer. Postpositivism allows for the role of human perspective and error, but still maintains a commitment to objective measurement and observation. Researchers leveraging a postpositivist perspective are often concerned with determining averages and trends in the dataset, attempting to minimize or control variation from these trends, and generalizing results to a larger population. Quality or validity is traditionally focused on measurement, generalization, and controlling variables to reduce bias ( Hammersley, 2008 ). While quantitative research is not a monolith, few studies have taken epistemological framings different from positivism or postpositivism ( Bryman, 2008 ).
In contrast, constructivism is often concerned with how an individual develops a socially negotiated and personal understanding of reality ( Baillie & Douglas, 2014 ). This understanding is varied for each individual, leading the researcher to study complexity and shared reality. Research leveraging constructivism recognizes individuals’ perspectives and the constellation of factors that may shape their lived experiences. It also acknowledges that research is a co-production between the researcher and participant(s). Thus, constructivism focuses on the subjective experience and its value for knowledge production.
Similarly, critical approaches emphasize the subjective reality of lived experiences to reveal power and oppression within social contexts with aims for social transformation (i.e., move away from (re)producing knowledge laden with inequity). Critical paradigms include feminist scholarship, Critical Race Theory, and disability studies or Crip Theory, among many others ( Lather, 2006 ). Critical epistemologies acknowledge that conceptions of knowledge are not value-neutral and that marginalized forms of knowledge must be valued and studied. This epistemological approach opposes how postpositivism imposes structural laws and theories that do not fit marginalized individuals or groups and posits that constructivism does not adequately address needed action against oppressive social structures.
Even though epistemologies are not tied to specific research methods, the affordances and foci of these common epistemological paradigms have resulted in historically bifurcated research approaches, where quantitative methods are typically associated with (post)positivism and qualitative methods are typically associated with constructivist, critical, or other non-positivist epistemologies ( Tuli, 2010 ). For instance, education researchers often use quantitative methodologies to study generalizable correlational trends or causal mechanisms. They typically rely on traditional statistics that use the means of groups (e.g., engineers versus non-engineers or women versus men) to determine statistically significant differences between groups or average effects of a variable on an outcome (i.e., variable-centered approaches). Research findings typically report means, line or bar graphs, p -levels, or Bayes factors. These methodologies often result in essentializing results of analyses to all members of a group as truth (a [post]positivist approach) and perpetuate a problematic dichotomy of identity.
As an alternative to such essentializing approaches, this theory paper focuses on the links between novel quantitative research methods in person-centered analyses and non-positivist epistemologies. However, we acknowledge that epistemology informs other components of the research process besides methodology, such as theory and dissemination. Douglas, Koro-Ljungberg, and Borrego ( 2010 ) argued against approaching theory, method, and epistemology separately or decontextualizing the framing of research (p. 255). Thus, despite a focus on methods of analysis, this work also demonstrates the potential need for alternatives to traditional conceptions of quantitative research that are reformulated from the epistemic foundations.
Epistemic Standpoint of Research Team
We are a team of researchers engaged in mixed-methods research focused on identity and diversity in engineering education. Some of us specialize more deeply in quantitative or qualitative paradigms, but together we recognize the value of each paradigm for answering particular kinds of questions, and an added richness in combining research approaches. As such, we approach our research and this discussion from a pragmatic epistemology. Pragmatism emerged in the late 19th century ( Maxcy, 2003 ) and is a set of philosophical tools rather than solely a philosophical standpoint ( Biesta, 2010 ), focused on research choices that result in anticipated or desired outcomes ( Tashakkori & Teddlie, 2008 ). Pragmatism holds that knowledge is individual and socially constructed; nevertheless, it also posits that much of this knowledge is socially shared and that research can begin to examine these shared realities ( Morgan, 2014 ). Pragmatism has been used recently in social science as the epistemology guiding mixed and multiple methods ( Creswell & Clark, 2011 ; Johnson & Onwuegbuzie, 2004 ) as it “rejects traditional philosophical dualism of objectivity and subjectivity” ( Kaushik & Walsh, 2019, p. 4 ). With its focus on meaningful research that has utility for making a purposeful difference in practice, pragmatism is also consistent with action for social justice ( Morgan, 2014 ).
One of the challenges in mixed methods research is synthesizing research findings from qualitative or quantitative paradigms. In this process, we have begun to engage in newer quantitative methods that provide additional nuance and the ability to preserve individuals’ responses within the data. We have found these practices both demanding and rewarding. From this standpoint, we open discussion of considering research questions and approaches in the quantitative paradigm from non-positivist epistemologies.
Traditional Methodological Approaches in Quantitative Research
Stemming out of (post)positivism, most quantitative methodologies emphasize objectivity, replicability, and causality. Most quantitative studies in social science research were designed to address research questions using variable-centric methods. Variable-centered approaches (i.e., correlations, regressions, factor analysis, and structural equation models) are appropriate for addressing inquiries concerned with “how variables, observed or latent, relate to each other” ( Wang et al., 2013, p. 350 ) and generate outcomes based on an averaged set of parameters. In engineering education, the study population is often cisgender, White, male, upper-middle-class, able-bodied, continuing generation, and heterosexual ( Pawley, 2017 ). Historically, this population has been accepted as the default in engineering education research, resulting in findings and implications for practice that are often decontextualized from the social reality of individuals’ backgrounds and experiences. By conducting research with demographic homogeneity, the understanding of phenomena for individuals who are not the default is limited and warrants a need for researchers to justify their rationale for generating theory based on individuals with a dominant presence in engineering ( Slaton & Pawley, 2018 ; Pawley, 2017 ). For our research, particularly in focusing on diversity in engineering education, traditional quantitative methods have provided useful answers to important questions; however, they also present challenges in adequately representing all students.
To illustrate these challenges and highlight how variable-centric statistical methods can reinforce dominant norms, we provide an example related to research on gender segregation in science, technology, engineering, and math (STEM) professions. This example, drawing on common and well-known phenomena, illustrates how variable-centered approaches can ask nuanced questions while still essentializing group-level findings to each individual. Thus, even as this approach provides valuable and important research findings, it also shows how even carefully constructed quantitative studies that meet standards of quality still align with (post)positivism.
The phenomenon in question emerges from studies comparing the future goals and outcome expectations of men and women that find women are more interested in person-oriented or altruistic roles. Engineering, as a male-dominated and thing-oriented field, is not consistent with this characterization (e.g., Ngambeki et al., 2011; Su & Rounds, 2015 ). Therefore, studies conclude that misaligned orientations are a key reason for women’s lack of representation in engineering ( Bairaktarova & Pilotte, 2020 ; Cejka & Eagly, 1999 ; Miller, Eagly, & Linn, 2015 ; National Academy of Engineering, 2008 ; Su & Rounds, 2015 ). These studies give some important general characterization of how engineering culture is gendered, and their findings are consistent across repeated studies and cultural contexts.
However, the limits of this variable-centered approach emerge when we explore the question from an alternate direction. For example, a study of women in engineering disciplines with above-average (e.g., biomedical, industrial) and below-average female enrollment (e.g., mechanical, electrical) indicates different patterns, with women in the below-average female enrollment group having less interest in stereotypically feminine outcome expectations ( Verdín et al., 2018 ). This study points to the reality that not all women follow general findings about interests and goals. Thus, even with careful explanation by researchers that quantitative results are true for most women, the nuance of individual differences is not captured by these approaches. Indeed, most social science studies focus on variation between groups and draw conclusions based on statistically significantly different average effects ( Fanelli, 2010 ). However, differences between groups, even with so-called large effect sizes, can occur even when two groups are much more similar than different ( Hanel et al., 2019 ). Additionally, the attitudes and beliefs of the majority (i.e., in engineering, White men) dominate the conclusions drawn and underemphasize responses from minoritized individuals.
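To make the overlap point concrete, the following sketch computes Cohen's d and the distributional overlap for two normal groups with equal standard deviations. The numbers are invented for illustration and are not drawn from any of the cited studies.

```python
import math

def normal_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def overlap_and_effect(m1, m2, sd):
    """Cohen's d and overlapping coefficient for two equal-SD normal groups.

    For equal-variance normals, the overlapping coefficient is 2 * Phi(-d / 2).
    """
    d = abs(m1 - m2) / sd
    return 2 * normal_cdf(-d / 2), d

# Invented group means (100 vs. 106) and a shared SD of 15
ovl, d = overlap_and_effect(100, 106, 15)
print(f"Cohen's d = {d:.2f}; distributional overlap = {ovl:.0%}")
```

With these invented values, d = 0.40 yet the two distributions overlap by roughly 84%, echoing the point from Hanel et al. ( 2019 ) that groups can be far more similar than different despite a reliable mean difference.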
Slaton and Pawley ( 2018 ) argued that it is not sufficient for scholars to justify the exclusion of individuals based on traditional quantitative norms of sampling and large-n studies. Instead, engineering education must create and learn new methods that empower researchers to learn from small numbers. The number of participants or lack thereof in a study is not an excuse to generate theory based on homogenous populations and perpetuate limited standards of representation ( Pawley, 2018 ; Slaton & Pawley, 2018 ). There is a need for epistemic shifts to advance our understanding and challenge what counts as adequately representative in engineering education research ( Slaton & Pawley, 2018 ). Otherwise, engineering education researchers reinforce systemic inequities through our logic and methods, unconsciously or otherwise.
Pawley and colleagues have offered small-n qualitative studies as a valuable response to these important criticisms of large quantitative studies. The purpose of these studies is to capture and highlight the experiences of individuals often minoritized in engineering and sometimes (but not always) identify patterns across participants ( Merriam & Tisdell, 2016 ). These studies can also leverage the complexity and power of intersectionality studies to reveal inequities in engineering education. Through thick description of individuals’ experiences, these qualitative studies lead to a richer and more nuanced understanding of phenomena otherwise ignored or masked in large-n studies. However, this level of detail often precludes the breadth of participants seen in quantitative studies. While this focus is a feature of qualitative research rather than a problem, it does constrain the kinds of questions that qualitative research can answer. There is still a need to conduct quantitative studies that are generalizable, are inclusive, and do not essentialize results to a single average or group.
As a result, in addition to qualitative studies that provide valuable insight into individual lived experiences, new quantitative methodological approaches have emerged in the social sciences that also begin to address the critiques raised about (post)positivist quantitative paradigms. These new approaches can introduce epistemologically novel ways to approach quantitative research questions that fill a gap not addressed by qualitative, mixed methods, or traditional quantitative research alone. New quantitative approaches do not need to replace traditional methods, but instead offer additional ways of understanding and querying a phenomenon. We describe some of these approaches below before focusing on person-centered analyses.
New Methodological Approaches in Quantitative Research
Multi-modal approaches.
Emerging scholarship in engineering education has begun to re-examine quantitative methods, particularly in using multi-modal approaches to understand cognition and emotion in authentic contexts. We provide a few, but not exhaustive, examples of these approaches. Villanueva, Di Stefano, Gelles, Vicioso Osoria, and Benson ( 2019 ) conducted a study with multi-modal approaches to data collection, including interviews and electrodermal activity sensor data, from 12 womxn students to study psychophysiological responses to academic mentoring. This approach treated inequity issues as core to participants’ experiences rather than as moderating variables in the quantitative analysis. The quantitative data were analyzed using MANOVA and representative response profiles before synthesizing the findings with qualitative data. This approach allowed both conscious responses (interviews) and unconscious responses (electrodermal activity sensor data) to be examined simultaneously. This multi-modal approach has also been applied to an experimental study of students’ emotional experiences during testing, combining electrodermal activity sensor data and saliva testing during a practice exam ( Villanueva et al., 2019 ).
Other researchers have used similar multi-modal protocols to study design thinking. Gero and Milanovic ( 2020 ) proposed a framework for design thinking that involves design cognition, design physiology, and design neurocognition. Gero and Milanovic ( 2020 ) provided a detailed description of prior studies and various measurement methods for these dimensions (i.e., brain imaging, electrodermal activity, eye movements, protocol analysis, surveys, interviews, etc.). These measurements are combined to inform a larger understanding of these processes in contexts that are often studied separately (i.e., affect and emotion or cognition). These data are examined using traditional statistical techniques but also using novel approaches like linkography to examine relationships between design moves ( Goldschmidt, 2014 ), Markov modeling to examine probable transitions in design reasoning or processes ( Gero & Peng 2009 ; Kan & Gero 2010 ), and correspondence analysis to describe the degree and extent of relationships between categories ( Greenacre & Hastie, 1987 ).
These multi-modal approaches offer new ways to examine complex phenomena and provide ways to integrate the strengths of quantitative and qualitative data. Two of the biggest challenges of multi-modal approaches are the effort (i.e., time, cost, etc.) associated with data collection and synthesis of heterogeneous data. As such, these studies are often conducted with small sample sizes and most studies rely on traditional statistical methods such as the correlation of quantitative results (where qualitative data streams are coded into quantitative frequencies or patterns; Gero & Milanovic, 2020 ). These approaches have strength in examining the underlying mechanisms in rich and nuanced ways.
The novelty of these methods is predominantly in data collection tools and integration of results of these tools to generate new insights and questions in educational research. Fewer studies have deeply examined the epistemic and statistical methods of solely quantitative research for the same goal. We believe that person-centered statistical analyses offer ways to reimagine quantitative educational research using more common numeric data collection approaches such as surveys and observations. This approach re-imagines how student responses are characterized and understood in context through statistical methods.
Person-Centered Approaches
Person-centered approaches sit in contrast to traditional variable-centric approaches and assume that the population under study is heterogeneous. The results of such studies focus on preserving the variation in individuals’ responses, resulting in authentic groupings of individuals, as opposed to imposing superficial characterizations of groups ( Laursen & Hoff, 2006 ; Morin et al., 2018 ). In a variable-centered approach, individual differences are treated as outliers from a mean value, or even erased due to low sample size, a decision that disproportionately impacts minoritized individuals. While these approaches are not a panacea for all challenges with quantitative methods, especially concerning measurement and fairness ( Douglas & Purzer, 2015 ), they do open new avenues for quantitative inquiry beyond (post)positivist epistemologies and, in doing so, offer potentially more equitable approaches to quantitative methodologies.
Person-centered analyses are a relatively young methodological approach arising alongside the increased availability of computing resources ( Laursen & Hoff, 2006 ). As with all innovations, they occupy an ill-defined space with concepts that both overlap and differ in key ways. Consequently, a call for increased use of person-centered analyses requires some discussion for readers to navigate this confusing morass of shared terminology. A central area of overlap and potential confusion that new researchers will likely encounter is between the terms person-centered analysis and data-driven approach . For instance, discussions of specific techniques (e.g., cluster analysis or mixture modeling) occur in both spheres, and both approaches rely on modern computational power and sprawling datasets (also called Big Data; Lazer et al., 2009 ; Gillborn, Warmington, & Demack, 2018 ).
A data-driven approach rejects traditional formulations of the scientific method that begin and end with theory development. Instead, it lets the data “tell their own story,” independent of researchers’ assumptions and preconceptions, and reconciles findings with theories once the analysis is complete ( Qiu et al., 2018 ). Data-driven approaches thus utilize bottom-up frameworks centered on relationships instead of top-down frameworks driven by explanations and causality ( Qiu et al., 2018 ). It is not surprising that data-driven approaches have increased in popularity as more and more data are created as part of our daily lives ( Gero & Milanovic, 2020 ; Villanueva, Di Stefano, et al., 2019 ), which also lessens the need for experiments that control for confounds and the influence of covariates. Instead, data-driven approaches compensate for the lack of control in data generation and collection through sheer numbers and advanced computational power ( Lazer et al., 2009 ).
Person-centered analyses, in contrast, challenge assumptions about group homogeneity, variable effects, and the generalizability of conventional inferential analyses (e.g., linear regression; von Eye & Wiedermann, 2015 ). The mean of a dataset is not always the best way to describe or represent a population—not only can it be distorted by a small number of outliers (e.g., the average net worth in the United States, where wealth is concentrated among a relatively small group of individuals), but it may also represent an impossible or otherwise inaccurate value (e.g., the average of 2.5 children per American household; von Eye & Wiedermann, 2015 ). Similarly, variable-centered analyses estimate the effects of individual variables by controlling for, or removing the effects of, other variables in the model, although this separation cannot occur in real life (e.g., attempting to attribute an outcome to racism or socioeconomic inequality when these experiences exist in a state of mutual or spiraling causality; McCall, 2002 ). Thus, person-centered analyses utilize the identification of underlying groups (i.e., latent profile/class analysis; Jack et al., 2018 ), hidden clusters or structures (i.e., cluster analysis, Topological Data Analysis, Principal Component Analysis, Self-Organizing Maps, and Multidimensional Scaling; Chazal & Michel, 2017 ; Everitt et al., 2011 ), or mixture components (i.e., mixture modeling; Jack et al., 2018 ) when examining the relationships of individual response patterns within the data. This approach preserves heterogeneity instead of masking or minimizing it. In other words, person-centered analyses adopt a data-driven approach to identify subpopulations not readily visible to the naked eye and use these subpopulations to improve the clarity and accuracy of predictions and explanations.
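The contrast can be made concrete with a minimal sketch. The attitude scores below are invented (a hypothetical 1-7 scale, not study data): the variable-centered mean describes no actual respondent, while a simple one-dimensional k-means grouping recovers the two subpopulations.

```python
import statistics

# Invented attitude scores on a hypothetical 1-7 scale: two latent subgroups
# that a single variable-centered mean obscures.
scores = [1.5, 2.0, 1.8, 2.2, 1.9, 6.0, 6.3, 5.8, 6.1, 6.4]
print(f"overall mean = {statistics.mean(scores):.2f}")  # describes no actual respondent

# A minimal one-dimensional k-means sketch (k = 2): person-centered in spirit,
# recovering subgroup centers instead of a single average.
def kmeans_1d(data, centers, iters=20):
    for _ in range(iters):
        # assign each observation to its nearest center
        groups = {c: [] for c in centers}
        for x in data:
            nearest = min(centers, key=lambda c: abs(x - c))
            groups[nearest].append(x)
        # re-estimate centers as the mean of each group
        centers = [statistics.mean(g) for g in groups.values() if g]
    return sorted(centers)

centers = kmeans_1d(scores, [min(scores), max(scores)])
print(f"subgroup centers = {centers}")
```

Reporting the two subgroup centers (about 1.9 and 6.1 here) preserves the heterogeneity that the single mean of 4.0 erases.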
Although person-centered analyses incorporate data-driven approaches, not all data-driven approaches are person-centered; many other exploratory and Big Data techniques, including Classification and Regression Trees (CART; Breiman et al., 1984 ), still foster variable-centered approaches that aim to reconcile variables with predefined (and thus potentially biased or inaccurate) categories. We provide a description, but not an exhaustive list, of these different analyses in Table 1 .
Examples of person-centered and data-driven analyses.
Analysis | Description
---|---
Topological Data Analysis | Used to identify geometric patterns in multivariate data. Continuous structures are built on top of the data, and geometric information is extracted from the created structures and used to identify groups. For more information, see the example from engineering education provided below.
Cluster Analysis | Used to create groups according to similarity between observations in a dataset, often through the k-means clustering algorithm. Groups are created according to their distance from the center of a cluster, and group assignment is not probabilistic.
Gaussian Mixture Modeling | Used to create groups according to similarity between observations in a dataset. Unlike cluster analysis, this technique accounts for variance in the data, and thus allows for more variability in group shape and size while providing probabilistic assignment to groups.
Latent Profile/Class Analysis | Used to recover hidden groups from multivariate data. Falls within the larger umbrella of mixture modeling. Can be used with continuous or categorical data, and results in probability-based assignment to groups.
Growth Mixture Modeling | Similar to latent profile/class analysis but used with longitudinal data. Can be used to identify groups and then track individual movement across group lines, or to identify groups that emerge over time.
Artificial Neural Networks | A classical machine-learning algorithm that performs tasks using methods derived from studies of the human brain. Can be used to recognize patterns or classify data. Self-Organizing Maps (Saxxo, Motta, You, Bertolazzo, Carini, & Ma, 2017) are a form of person-centered neural network that can be used to convert complex multivariate data into two-dimensional maps that emphasize the relationships between observations.
Principal Component Analysis | Used to collapse correlated multivariate data into smaller composite components that maximize the total variance captured (i.e., dimension reduction). Often used to reduce a large number of variables to a more manageable number. For non-continuous data, categorical principal component analysis can be used. Data-driven but not person-centered.
Multidimensional Scaling | Another form of dimension reduction, but with a focus on graphics and the visual analysis of data. Multivariate data are collapsed into two dimensions by computing the distance between variables and plotting the resulting output. Data-driven but not person-centered.
Exploratory Factor Analysis | Used to identify latent factors or variables in correlated multivariate data. Often used in scale development or when analyzing constructs that cannot be measured directly. Data-driven but not person-centered.
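The probabilistic group assignment that distinguishes mixture modeling from hard clustering in the table above can be sketched with a minimal one-dimensional Gaussian mixture fit via expectation-maximization. This is a simplified illustration, not a production tool: the data are invented, and the two components share a fixed standard deviation, whereas real mixture software also estimates variances and the number of components.

```python
import math

# Invented one-dimensional observations with two latent subgroups
data = [1.6, 2.1, 1.9, 2.4, 2.0, 5.9, 6.2, 5.7, 6.0, 6.3]

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def em_gmm_1d(xs, mu1, mu2, sigma=1.0, weight=0.5, iters=50):
    """Two-component EM with a shared, fixed standard deviation (a simplification)."""
    for _ in range(iters):
        # E-step: responsibility of component 1 for each observation
        r = [
            weight * normal_pdf(x, mu1, sigma)
            / (weight * normal_pdf(x, mu1, sigma) + (1 - weight) * normal_pdf(x, mu2, sigma))
            for x in xs
        ]
        # M-step: re-estimate the component means and the mixing weight
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / sum(r)
        mu2 = sum((1 - ri) * x for ri, x in zip(r, xs)) / (len(xs) - sum(r))
        weight = sum(r) / len(xs)
    return mu1, mu2, weight

mu1, mu2, w = em_gmm_1d(data, min(data), max(data))
print(f"component means: {mu1:.2f} and {mu2:.2f}; mixing weight: {w:.2f}")
```

Unlike k-means, each observation receives a probability of membership in each component (the responsibilities in the E-step), so individuals near a group boundary are not forced into an all-or-nothing label.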
Person-centered analyses are not necessarily associated with a particular epistemological paradigm. The techniques associated with person-centered analysis may be used to make (post)positivist claims, such as clustering engineering students based on learning orientations and study strategies and then evaluating the study success of each cluster (e.g., GPA; Tynjälä et al., 2005 ). However, a benefit of person-centered analyses is that they disrupt some of the assumptions typically associated with (post)positivist, variable-centered approaches. Below, we provide an example of one kind of person-centered analysis that takes a non-positivist viewpoint.
An Example of Person-Centered Analysis from Engineering Education
We use a research project that employed Topological Data Analysis (TDA) to demonstrate the kinds of knowledge afforded by a specific type of person-centered analysis. This empirical example was part of a study titled CAREER: Actualizing Latent Diversity: Building Innovation through Engineering Students’ Identity Development (NSF Grant No. 1554057), focused on understanding first-year engineering students’ latent diversity through a national survey and longitudinal narrative interviews. Latent diversity refers to students’ underlying attitudes, mindsets, and beliefs that are not readily visible in engineering classrooms yet have the potential to contribute to innovation in engineering solutions ( Godwin, 2017 ). This latent diversity is often undervalued or unacknowledged in engineering education, which emphasizes particular ways of being, thinking, and knowing aligned with rigid norms and expectations rooted in engineering’s historic lack of diversity ( Benedict et al., 2018 ; Danielak et al., 2014 ; Foor et al., 2007 ). We hypothesized that these cultural norms force students to conform to these expectations, thus reducing capacity for innovation and creating identity conflict that results in a lack of belonging and, ultimately, attrition. The goal of this project was to characterize latent diversity in incoming students, to understand different subpopulations in engineering and how their experiences within the dominant culture of engineering affected their development as engineers, and to provide more inclusive ways of educating engineering students. The Purdue University Institutional Review Board approved this study under protocol number 1508016383.
This study was executed in three consecutive phases: 1) instrument development; 2) characterization of latent diversity from a nationally representative sample; and 3) longitudinal narrative interviews. For more details about the survey development, see Godwin et al. ( 2018 ). We used TDA to identify six data progressions among engineering students’ attitudinal profiles. These groups were later used to identify and recruit students to participate in bi-annual longitudinal narrative interviews designed to capture student identity trajectories. Our example focuses on the second phase, characterizing latent diversity, and demonstrates the type of person-centered characterizations that can be conducted in engineering education research.
Data Sources
We recruited U.S. institutions to participate based on a stratified sample of small (7,750 or fewer), medium (7,751 to 23,050), and large (23,051 or more) institutions in the United States ( Godwin et al., 2018 ). We chose this sampling approach to ensure equal representation among the institution types (i.e., small, medium, and large) rather than an overrepresentation of large, public engineering institutions. The survey instruments were administered in common first-year engineering courses via paper-and-pencil format at 32 ABET-accredited institutions during the Fall 2017 semester. This timing captured students’ incoming latent diversity before it was influenced by the process and culture of engineering education and captured students interested in a wide range of engineering disciplines. The data were digitized and cleaned by removing indiscriminate responses, resulting in 3,711 valid responses.
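The stratified recruitment logic can be sketched as follows. The institution names and enrollment figures are invented; only the size cutoffs (7,750 and 23,050) come from the study description above.

```python
import random

random.seed(0)  # reproducible draw for this sketch

# Hypothetical institutions with invented enrollment figures
institutions = {"A": 5200, "B": 31000, "C": 12400, "D": 6900,
                "E": 45000, "F": 18750, "G": 7300, "H": 23051}

def stratum(enrollment):
    """Bin an institution by the size cutoffs described in the text."""
    if enrollment <= 7750:
        return "small"
    if enrollment <= 23050:
        return "medium"
    return "large"

# Group institutions into strata, then draw equally from each stratum
strata = {}
for name, enrollment in institutions.items():
    strata.setdefault(stratum(enrollment), []).append(name)

sample = {s: sorted(random.sample(names, 2)) for s, names in strata.items()}
print(sample)
```

Drawing the same number of institutions from each stratum is what gives the equal representation across institution sizes described above, rather than letting large institutions dominate the sample.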
Study Participants
Students indicated their self-reported demographics at the end of the survey instrument. These measures were designed to include a wide range of identities and included a multi-select question ( Fernandez et al., 2016 ). The majority of participants identified as men ( n = 2150), with other students identifying as women ( n = 720), transgender ( n = 70), agender ( n = 17), or genderqueer ( n = 14). Some students used the self-identify write-in option to indicate a gender not listed ( n = 75), and some did not respond ( n = 782). The majority of the students identified as White ( n = 2089). The remaining students identified as Asian ( n = 380), Latino/a or Hispanic ( n = 347), African American/Black ( n = 209), Middle Eastern or North African ( n = 65), Pacific Islander or Native Hawaiian ( n = 34), or Native American or Alaska Native ( n = 49); used the self-identify write-in option to indicate another race/ethnicity not listed ( n = 72); or did not respond ( n = 793). We note that a large portion of students did not report demographics; students often leave surveys incomplete due to fatigue, lack of time, or loss of interest. The survey was extensive, and some students stopped responding near the end, where the gender identity and race/ethnicity questions were asked. Students were allowed to select all gender and race/ethnicity options with which they identified. For example, of the 2,089 (56%) students who identified as White, 291 (14%) also identified with another race/ethnicity. Additionally, students were asked to report their home ZIP code. These ZIP codes were plotted on a U.S. map to provide a geographic distribution of the overall first-year engineering student sample in the dataset (Figure 1).
The map represents students’ self-reported home ZIP codes from a national survey. Each dot may represent more than one student. This image was generated in R ( R Core Team, 2018 ) using the ggplot2 package ( Wickham, 2009 ).
An Overview of Topological Data Analysis
Generally, topology is an area of mathematics that relies on the study of shapes and structures to make sense of the world. More recently, topological data analysis (TDA), which draws on topological tools such as persistent homology, has emerged as a person-centered analysis that allows quantitative researchers to take an exploratory approach to drawing insights from complex, high-dimensional datasets (see Wasserman, 2018 for a detailed review). These shapes or structures allow the researcher to identify subgroups that may not have been considered when using traditional pairwise comparative methods that rely on researchers’ predetermination of groups ( Lum et al., 2013 ). TDA differs from other person-centered approaches (e.g., principal component analysis, multidimensional scaling, and clustering methods) in its ability to capture geometric patterns that may be missed by other statistical methods ( Lum et al., 2013 ). TDA provides a mapping of the data into a two-dimensional representation while maintaining the complex structure of the data. The resulting map is constructed from the shape and proximity of the data to itself, rather than from a reference or seed point. As such, the mapping is not influenced by the measurement scale or by the random generation of multiple possible models. Topological methods are capable of handling large datasets by compressing the data points into a finite, manageable network of nodes ( Lum et al., 2013 ).
TDA has proven useful for wide-ranging applications in fields such as natural science, social science, and other computational fields. Studies have identified subgroups within breast cancer patients for targeted therapy ( Lum et al., 2013 ), enabled real-time airborne detection of bacterial agents ( McGuirl et al., 2020 ), stratified basketball positions beyond the traditional five characterizations of players ( Lum et al., 2013 ), and examined player and team performance in football data ( Perdomo Meza, 2015 ). Despite such broad and useful applications, TDA has been underutilized in engineering education and social science research; we are aware of only two studies. The first focused on distinguishing between normative and non-normative attitudinal profiles among incoming engineering students at four institutions ( n = 2,916; Benson et al., 2017 ). In that study, TDA was useful for identifying groupings of students based on latent constructs rather than demographic variables. This study also provided evidence that some students’ attitudes differ from the normative group, especially in terms of feeling recognized as an engineer ( Benson et al., 2017 ). The second study is the example used below. The specific results from this study have been published previously (see Godwin et al., 2019 for more detailed discussions of the specific study and TDA analysis); here, we focus on highlighting the ways in which the study illustrates the contributions afforded by person-centered approaches.
Analysis Steps in Topological Data Analysis
The process for conducting TDA in the example provided, including the sensitivity of its parameters, is discussed in detail in our previous work ( Godwin et al., 2019 ), but we highlight key details here for context. Before conducting TDA, several considerations must be made to minimize error and bias. First, methods to estimate missing data must be used to address potential errors when computing distances between points within the metric space ( Lum et al., 2013 ; Godwin et al., 2019 ). This consideration is especially important in social science research, where missing data are common. Next, if using latent variable measures, a typical practice in engineering education survey methods, a valid factor space must be created. This step involves verifying the study measurements through confirmatory factor analysis and generating factor scores based on the results of this factor analysis. Finally, the TDA algorithm parameters must be tuned to detect the underlying structure of the data. These parameters include the filtering method, the clustering method, the number of filter slices (n), the amount of overlap between slices, and the cut height.
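To make these parameters concrete, the following is a minimal, illustrative sketch of a Mapper-style pipeline in Python. It is not the study’s actual implementation (the analysis used a custom Mapper algorithm; Doyle, 2017), and the function, parameter names, and defaults here are our own assumptions for illustration: a k-nearest-neighbor distance filter, overlapping filter slices, and single-linkage clustering cut at a fixed height.

```python
import numpy as np
from scipy.spatial.distance import cdist
from scipy.cluster.hierarchy import linkage, fcluster

def mapper(X, k=5, n_slices=10, overlap=0.5, cut_height=1.0):
    """Sketch of Mapper: map data X to nodes (clusters) and edges (shared members)."""
    # 1. Filter: distance to the k-th nearest neighbor projects each point to 1-D.
    D = cdist(X, X)
    filt = np.sort(D, axis=1)[:, k]  # column 0 is the zero self-distance
    # 2. Cover the filter range with n_slices overlapping intervals.
    lo, hi = filt.min(), filt.max()
    width = (hi - lo) / n_slices
    step = width * (1 - overlap)
    members = []
    start = lo
    while start < hi:
        idx = np.where((filt >= start) & (filt <= start + width))[0]
        if len(idx) > 1:
            # 3. Cluster each slice with single-linkage, cut at cut_height.
            Z = linkage(X[idx], method="single")
            labels = fcluster(Z, t=cut_height, criterion="distance")
            for lab in np.unique(labels):
                members.append(set(idx[labels == lab]))
        elif len(idx) == 1:
            members.append(set(idx))
        start += step
    # 4. Edges connect nodes that share at least one data point.
    edges = [(i, j) for i in range(len(members))
             for j in range(i + 1, len(members)) if members[i] & members[j]]
    return members, edges

# Two synthetic clusters in a 3-D "factor space" (invented data).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (40, 3)), rng.normal(5, 1, (40, 3))])
nodes, edges = mapper(X, k=3, n_slices=8, overlap=0.5, cut_height=2.0)
print(len(nodes), "nodes,", len(edges), "edges")
```

Because every point falls in at least one overlapping slice, each individual response is retained in at least one node, which is the property that makes this a person-centered representation rather than a hard partition.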
Interpreting TDA Maps
TDA generates a rich graphical representation of the data structure that consists of nodes and edges. The nodes represent multiple students, and the edges represent the overlap of student membership with other nodes. The size of the node indicates the number of students present in that area of the map. The color indicates the density of student responses within the node. Density indicates how similar student response patterns are across all dimensions. The resulting map is descriptive rather than inferential in group determination and differences between groups. It is particularly important to emphasize that TDA results are not defined groups but a representation of the structure of interconnectedness and difference within the data ( Laubenbacher, 2019 ). This approach contrasts with other statistical methods that rely on specifying a probability at which a group is considered different or that force data into deterministic groups (as in clustering and latent profile analysis). This approach allows more nuanced relationships and patterns to be identified between groups and individuals while also preserving each individual’s response within the study. The resulting map shows data progressions, that is, groupings of students and their relation to one another; these groupings were determined visually by the researchers from this descriptive method rather than produced by the method itself.
We created a 17-dimensional factor space based on the items used to measure students’ attitudes, mindsets, and beliefs concerning their STEM role identities (physics, mathematics, and engineering), motivation beliefs (control and autonomous regulation), epistemic beliefs, sense of belonging (engineering and engineering classroom), and two personality dimensions (neuroticism and conscientiousness). The results of TDA indicate six data progressions (i.e., A–F) for the characterization of latent diversity (Figure 2 ).
TDA map generated from the analyses, including groupings based on the distribution of the network of nodes. The colors shown in the map above represent the density of the map. The blue nodes denote a population of approximately 200 students, while the red nodes denote a smaller population of approximately three to five students. Our final parameters included a k-nearest neighbors filtering method, a single-linkage hierarchical agglomerative clustering method, 35 filter slices (n), a 50% overlap in data, and a 4.0 cut height (ɛ).
The resulting data progressions show descriptive differences across various factors, as shown in Figure 3 . We provide these descriptive differences to illustrate the utility of this approach in producing data progressions that indicate unique student groupings and relationships within the dataset. We avoid conducting traditional variable-centered comparisons that reduce these data progressions to finite groups or clusters to avoid the knowledge claims we have critiqued in this paper. The discussion that follows provides the description of these data progressions as evidence for pragmatic validation or the utility of this method to reveal structure in complex, noisy data while still maintaining individual student responses ( Walther et al., 2013 ).
Spider plot of average student responses on factors within TDA. Measures include disciplinary role identity constructs: Math_Int = mathematics interest; Math_PC = mathematics performance/competence beliefs; Math_Rec = mathematics recognition; Phys_Int = physics interest; Phys_PC = physics performance/competence beliefs; Phys_Rec = physics recognition; Eng_Int = engineering interest; Eng_PC = engineering performance/competence beliefs; Eng_Rec = engineering recognition. Two factors from the Big Five Personality measure were used: Ocean_NC = conscientiousness and Ocean_Neu = neuroticism. Belonging was measured in two contexts: Bel_Fac1 = in the engineering classroom and Bel_Fac2 = in engineering as a field. Students’ motivation was captured by Motiv_CR1 = controlled regulation for engaging in courses; Motiv_CR2 = controlled regulation for completing course requirements; and Motiv_AR2 = autonomous regulation for completing course requirements. Students’ epistemic beliefs (Epis_Fac4) captured the certainty of engineering knowledge (i.e., absolute to emergent).
First-year engineering students’ incoming attitudes and beliefs vary across the dimensions, but students also share similarities between the groups. Group A has the largest number of students ( n = 952), with moderately strong STEM role identities, motivation beliefs, epistemic beliefs, and sense of belonging. In contrast, students in Group E ( n = 144.5, an average reflecting partial membership because edges in Figure 2 represent shared membership) shared moderately low beliefs about their STEM role identities and indicated low emotional stability. These qualities of Group E were similar to those of students identified in Groups A, B ( n = 517), C ( n = 21), and D ( n = 27). Interestingly, students in Group F ( n = 51.5) had high emotional stability, STEM role identities, and sense of belonging but indicated low motivation beliefs (i.e., controlled regulation).
While additional similarities and differences can be drawn about each progression, such discussion is outside the scope of this paper. Rather, this paper focuses on the utility of person-centered approaches and how the results support the assumptions of person-centered analysis. Thus, through our example, we wish to highlight how multiple subpopulations can exist within a sample and to explicitly draw attention to the power of taking an exploratory approach to data analysis, as opposed to methods that require defined hypotheses. By relying on the shape of the data, we were able to draw meaningful insights about the landscape of students’ attitudes, beliefs, and mindsets rather than binning students into groups based on demographic variables. Some data progressions show strong common patterns with small sample sizes (for example, Groups C and D). Many statistical techniques would ignore these groups in inferential testing because of this limitation. TDA allows these patterns to be detected and placed within the structure of the large dataset.
Implications of TDA Example
The TDA map (Figure 2 ) illustrates a wide variation among students’ attitudes, beliefs, and mindsets in engineering education. Students’ incoming latent diversity in U.S. engineering programs is not homogeneous. Additionally, results from this work often reveal small groups of student attitudes that would not emerge using variable-centered methods. This approach also allows new ways of framing research questions to understand general positions of students’ multidimensional attitudes, beliefs, and mindsets in relation to one another rather than forcing students into rigidly defined groupings based on probability. Importantly, this approach highlights how a one-size-fits-all approach to engineering education cannot adequately support the variation of students entering engineering programs with differing ways of seeing themselves in STEM. This variation includes students’ motivation to engage in courses and assignments, personalities, and beliefs about knowledge. Teaching all students in the same way or portraying a stereotype of the kind of person that becomes an engineer can communicate dominant norms that push students out of engineering ( Benedict et al., 2018 ; Cech, 2015 ). This finding indicates how non-positivist epistemologies help frame research questions aimed at understanding how students build their understanding and knowledge of the world. In answering these questions, engineering educators can create experiences and reflection opportunities that support the diversity of students in the classroom.
Comparison to Traditional Methods
To further illustrate the contributions of TDA specifically and person-centered analyses generally, we compared the TDA results to more traditional statistical methods. For example, we examined the demographic representation of students within each data progression by gender identity and race/ethnicity individually and, where possible based on sample sizes, at the intersection of race and gender (i.e., White women, Black women, Asian women, Latinas, White men, Black men, Asian men, and Latinos). Using chi-square tests with a Holm-Bonferroni correction at an alpha value of 0.1, we did not find any differences in representation across data progressions for gender, race/ethnicity, or intersectional groups of gender and race/ethnicity. In this comparison, we emphasize that these tests rely on traditional statistical assumptions and exclude groups with small numbers of responses, particularly non-binary students across racial/ethnic categories and Native Hawaiian, Alaska Native, Native American, or other Pacific Islander students within the dataset.
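The representation comparison above can be sketched as follows. The contingency tables here are invented for illustration (the study’s actual counts are not reproduced), and the Holm-Bonferroni step is implemented by hand: the i-th smallest p-value among m tests is compared against alpha / (m - i), stopping at the first failure.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows are two demographic groups, columns are data
# progressions A-F. These numbers are invented, not the study's data.
tables = {
    "gender":    np.array([[300, 180, 8, 10, 50, 20],
                           [310, 170, 7,  9, 45, 15]]),
    "race":      np.array([[295, 175, 7,  9, 48, 18],
                           [305, 175, 8, 10, 47, 17]]),
    "intersect": np.array([[150,  90, 4,  5, 25, 10],
                           [155,  85, 3,  4, 20,  8]]),
}
# chi2_contingency returns (statistic, p-value, dof, expected counts).
pvals = {name: chi2_contingency(t)[1] for name, t in tables.items()}

def holm(pvals, alpha=0.1):
    """Holm-Bonferroni step-down correction over a dict of p-values."""
    m = len(pvals)
    order = sorted(pvals, key=pvals.get)  # test names, smallest p first
    reject = {}
    for i, name in enumerate(order):
        if pvals[name] <= alpha / (m - i):
            reject[name] = True
        else:
            # Once one comparison fails, all remaining tests are retained.
            for rest in order[i:]:
                reject[rest] = False
            break
    return reject

print(holm(pvals))
```

With near-identical rows like these, no test survives the correction, mirroring the null result reported above for representation across data progressions.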
However, when examining the data by traditional demographic groups using a Kruskal-Wallis test with a follow-up Dunn’s test, we did find statistically significant differences across the majority of the 17 factors. For example, we found that students’ controlled regulation motivation for engaging in engineering courses (Motiv_CR1) showed significant differences by intersectional gender and race/ethnicity (H(7) = 93.787, p < .001) with a small effect size (η² = 0.023; Cohen, 1988 ), as shown in Figure 4 . A post hoc Dunn’s test indicated that Black men and Latinos reported statistically significantly lower controlled regulation motivation ( p < 0.01) than all other groups and that Black women and Latinas reported statistically significantly higher scores than all-male groups ( p < 0.001).
Differences in controlled regulation for classroom engagement by intersectional gender and race/ethnicity groups. Groups with large enough samples for comparisons include: WW = White women, AW = Asian women, BW = Black women, LW = Latinas, WM = White men, AM = Asian men, BM = Black men, and LM = Latinos.
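A comparison of this kind can be sketched as below. The scores and group sizes are simulated (the study’s data are not reproduced), and the eta-squared value uses one common estimator for Kruskal-Wallis, (H - k + 1) / (n - k), rather than the study’s exact computation.

```python
import numpy as np
from scipy.stats import kruskal

# Simulated Motiv_CR1-style scores for six illustrative intersectional groups
# (labels follow Figure 4; all means, spreads, and sizes are invented).
rng = np.random.default_rng(42)
groups = {
    "WW": rng.normal(3.6, 0.8, 120), "WM": rng.normal(3.4, 0.8, 900),
    "BW": rng.normal(3.9, 0.8, 40),  "BM": rng.normal(3.0, 0.8, 60),
    "LW": rng.normal(3.8, 0.8, 50),  "LM": rng.normal(3.1, 0.8, 80),
}
H, p = kruskal(*groups.values())  # omnibus rank-based test across groups

# One common eta-squared estimator for Kruskal-Wallis: (H - k + 1) / (n - k).
k = len(groups)
n = sum(len(g) for g in groups.values())
eta_sq = (H - k + 1) / (n - k)
print(f"H({k - 1}) = {H:.2f}, p = {p:.4f}, eta^2 = {eta_sq:.3f}")
```

A post hoc Dunn’s test for the pairwise follow-up is available in separate packages (e.g., `scikit_posthocs.posthoc_dunn`, if that dependency is acceptable); it is omitted here to keep the sketch self-contained.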
From these results, one might conclude that Black and Latinx groups show average differences (i.e., lower motivation from external sources) by gender and race/ethnicity. However, a focus on demographics as explanations for student outcomes treats minoritized groups as homogeneous and often implicitly suggests race or gender as a causal variable for differences rather than other structural issues ( Holland, 2008 ). Other analyses focused on investigating differences in latent constructs by demographic characteristics often bin together groups of minoritized students to satisfy sample size requirements (i.e., all underrepresented racial and ethnic groups in engineering). This practice assumes that the experiences of minoritized students are a monolith and ignores the context as to why certain norms and inequities exist in engineering education.
Our TDA results, in contrast, indicate that these conclusions, based on a traditional approach to understanding gender and racial/ethnic diversity within our sample, oversimplify students’ responses within the data. Black and Latinx men and women have a wide range of attitudes and are equally represented in the data progressions within our results. This person-centered analysis allows individual student differences to exist in complex large datasets. Additionally, person-centered analysis allows students who do not meet the sample size requirements for traditional statistical comparisons to be included within data analysis. Even with a large social science sample of more than 3,000 responses, many intersectional groups with small numbers were excluded from the demographic analyses presented. A person-centered analysis allows for inclusive representation, where data analysis and conclusions include all responses rather than only those from groups with dominant status. Finally, this approach allows the structure and connections within the data to be uncovered.
Our example illustrates how engineering education researchers might reframe research questions and approaches from non-positivist epistemologies. Engineering culture and structures have been constructed as raced, classed, and gendered, and negatively affect all students. Engineering culture emphasizes and perpetuates demographic normativity of Whiteness, masculinity, competition, and emphasis on technical solutions ( Akpanudo et al., 2017 ; Secules et al., 2018 ; Slaton, 2015 ; Uhlar & Secules, 2018 ).
Challenges and Opportunities for Person-Centered Analysis
Person-centered analysis can provide ways to ask research questions outside of the “to what extent” research questions or hypotheses often tested with quantitative research in (post)positivist paradigms. In our example, we examined the data structure with no a priori hypotheses about how gender, race/ethnicity, or other demographic factors might influence students’ incoming underlying attitudes, beliefs, and mindsets in engineering. TDA allowed us to find the emergent structure of relationships among student responses within the dataset and make generalized and descriptive conclusions about our results. This statistical approach provided ways to re-think the types of questions we asked of our data and the assumptions we brought to our analysis.
Additionally, these methods do not replace the need for qualitative, mixed methods, and multi-modal studies that have different purposes for generating knowledge. However, research methods focused on retaining the integrity of the individual within the dataset do provide opportunities to ask more complex and potentially novel research questions than the ones traditional quantitative methods can address. Person-centered analyses can help reveal relationships and patterns within large amounts of information by allowing discovery to be emergent. This approach aligns more closely with constructivist or even critical epistemologies. As discussed previously, many of our approaches to knowledge are implicitly biased, influenced by an epistemological racism and discrimination woven into the fabric of our social history ( Scheurich & Young, 1997 ). While it is necessary to address these biases and acknowledge the realities of research, traditional variable-centric methods are often framed as “objective,” and researchers often do not interrogate the assumptions of statistical tests, which prevents them from making these types of considerations. Person-centered analysis alleviates some of the systemic discrimination within our research paradigms by challenging or eliminating the a priori knowledge necessary for traditional quantitative research methods. More importantly, these new approaches provide new insight and knowledge to bolster our current understanding.
Critical Alternatives to Person-Centered Approaches
While person-centered analyses can address many systemic issues embedded within traditional quantitative research methods, there remain related problems that person-centered analyses still cannot solve. As an option for other research approaches, we discuss critical methodologies, which are approaches that do not distinguish between the methodologies/methods and epistemologies used. Instead, these approaches frame methods and epistemologies in critical studies as inextricably linked. These approaches often use person-centered analysis in conjunction with qualitative data and have specific tenets and framings that distinguish them from general person-centered methods.
Critical quantitative methodological approaches are quantitative methodological approaches consistent with critical epistemologies. There are numerous books and excellent studies that give a complete discussion of these approaches (see McCall, 2002 ; Oakley, 1998 ; Sprague & Zimmerman, 1989 ; Sprague, 2005 ; and a special issue by Gillborn, 2018 ). Nevertheless, we include basic descriptions of these methodologies to illustrate other methodological framings of quantitative inquiry that directly challenge, refute, or build upon (post)positivist approaches to research. There are many bodies of critical quantitative research; here, we focus on just two that are consistent with Feminist and Critical Race Theory: FemQuant and QuantCrit. These two bodies formed separately, with FemQuant developing much earlier than QuantCrit. Both have similar underlying tenets that provide ways to frame and conduct quantitative research critically.
Feminist-specific or not, critical quantitative approaches build upon general ideas of the feminist paradigm or feminist ethics, assuming that systemic power relations beyond gender rule all aspects of social life through the organization of institutions, structures, and practices ( Jagger, 2014 ). This organization of resources results in an unequal system of advantages and disadvantages ( Acker, 1990 ; Ray, 2019 ). The feminist paradigm requires that research and praxis be positioned to promote a more just and equitable society ( Collins & Bilge, 2016 ). In this approach, all methodologies—created and used by researchers who are also social participants—influence and can be influenced by the hierarchical social system in which research is situated ( Oakley, 1998 ). This framing contrasts with (post)positivist epistemology, which treats context (including the positionality and influence of the researcher, if this context is even acknowledged) as a weakness to the supposed objectivity of quantitative research ( Hundleby, 2012 ; Sprague & Zimmerman, 1989 ). Harding ( 2016 ) wrote that reflexive incorporation of this context actually makes quantitative research more objective, or “strong.” She and others emphasized that the doing of research is messy, impure, and laden with power relations, and that acknowledgment of these dynamics is essential ( Harding, 2016 ; Hesse-Biber & Piatelli, 2012 ). Quantitative researchers need to explore, and make explicit, how their methodological use is complicit in that larger system of hierarchical power relations.
FemQuant and QuantCrit are based in these same basic epistemological framings but also advance their individual ethical positions to focus on race and racism (QuantCrit) and gender and sexism (FemQuant). Both approaches acknowledge the intersectional nature of multiple identities and different power relations associated with them. Still, each has developed from different historical and theoretical roots. QuantCrit maintains primary adherence to the first tenet of Critical Race Theory, that racism is a normal and ordinary component of daily life ( Delgado & Stefancic, 2012 ), and that other power relations such as gender and class are used to support a larger racist project ( Gillborn et al., 2018 ). FemQuant centers Feminist Theory with the incorporation of post-modern and post-feminist Intersectionality Theory ( Codiroli Mcmaster & Cook, 2019 ), a partnership that highlights the many ways in which gender inequality exists and is enacted through the unique interactions of inequality due to gender, race, class, sexuality, disability, and more ( Bowleg, 2008 ). While FemQuant and QuantCrit’s moral commitments and directions are different, their underlying reflexive methods and feminist philosophy are the same.
We present a very brief summary of these complex ideas here. In addition, we provide multiple brief engineering education-specific examples to situate our summary. Generally, the methodological and epistemological commitments of these approaches can be summarized in six tenets ( Major, Godwin, & Kirn, 2021 ) adapted from prior work ( Bowleg, 2008 ; Gillborn et al., 2018 ; Hesse-Biber & Piatelli, 2012 ; Oakley, 1998 ; Sigle-Rushton, 2014 ; Sprague & Zimmerman, 1989 ):
- Naturality – Domination is a central component of society that is not natural but rather is socially constructed and supported through multiple dimensions of difference, or categories, from which quantitative research cannot be absent. For example, accepted government categories of race and ethnicity that are typically recognized and used in quantitative research, such as in engineering education, have changed over time according to changing U.S. and broader global political motivations, not for natural reasons ( Omi & Winant, 2014 ). Such motivations directly impact the ways in which racially diverse populations in engineering education are represented numerically.
- Neutrality – Numbers cannot be neutral, but are rather numerically constructed representations of domination based on locally or globally rectified meanings relating to differences in human bodies. As such, neutrality often parallels naturality in that what is deemed natural is often connected to political ideology ( Oakley, 1998 ). In an example similar to that of naturality, the gender identity of students, such as those in engineering education, is often assumed according to physical traits such as the existence of sexual organs, or according to social performances of gender relating to name, hair length and color, and even symbolic expressions of femininity or masculinity ( Connell, 2009 ; Akpanudo et al., 2017 ). These considerations conflate sex and gender. Thus, like race/ethnicity, numerical representations of gender, and their relation to one’s ability to be an engineer or participate in engineering education, are tied to non-neutral local or global beliefs about gender identity and gender performance.
- Intersectionality – Inequality exists beyond one’s social position. In addition, inequality is multiplicative for persons experiencing multiple inequalities, and that multiplicative effect is not representable by simple variable positions or identities. Rather, intersectionality must be acknowledged and quantified as the unique experience it is, including its implications in engineering education specifically. As one identity-specific example, one may want to consider the unique gendered-raced experiences of Black women as a combined numerical category rather than the additive or interactional effects that one who is Black or a woman might experience. In another, more inequality-specific example, one may instead want to consider measures of the causes and implications of socioeconomic inequality itself rather than income alone ( Major & Godwin, 2019 ).
- Humanity – Data cannot speak for itself or act anthropomorphically in any other way. Rather, data is interpreted by researchers through their scientific understandings and global enculturation. There are thus implications to one’s interpretations. For example, if researchers have results in which a control for race/ethnicity or gender is significant, they must consider the social processes associated with the tenets of naturality and neutrality. The data may suggest that race/ethnicity or gender creates statistical difference, but these are not causal variables. Instead, the researcher should identify and discuss the systems of hierarchy and oppression that benefit White and male-identified individuals ( Holland, 2008 ; Gillborn, Warmington, & Demack, 2018 ).
- Counter-Majority – Quantification unduly supports the assumption that there is an average, or dominant, group from which marginalized and minoritized individuals simply differ; quantification must therefore also seek out counter-stories (quantitative or qualitative) that challenge those assumptions. Results of person-oriented methodologies, such as those we discuss in this work, may identify narratives that are counter to what may be extracted from traditional variable-oriented engineering education work. Similarly, small-n qualitative accounts of student experience may also identify quantitative components that have gone unaccounted or wrongly accounted for (such as identity rather than inequality) in traditional accounts ( Sigle-Rushton, 2014 ).
- Reflexivity – Research is inherently political, biased, and essentialized, as shown through the prior tenets. As such, disseminated research containing and striving for the equitable participation of diverse people, such as in engineering education, must be vocal about its association with a socially just political direction. It must also articulate how its data, methods, or results might otherwise support an oppositional direction. For example, researchers may want to openly disseminate details regarding their political directionality and positionality, both broadly and more specifically as they relate to their methods of quantifying experience.
These tenets provide additional epistemic guidance for how quantitative research should be conducted from a critical epistemology. In this paper, we have focused on person-centered analyses as a novel quantitative method that could be used across non-positivist paradigms. In conducting work aligned with critical epistemology and theory, person-centered methods may be used but must be grounded in these tenets and supplemented with other research methods.
Conclusions
In writing this paper, our goal is not to replace research traditions in qualitative methodologies with quantitative ones nor to indicate that all quantitative analyses must be person-centered. While methodologies and methods such as TDA, FemQuant, QuantCrit, and others provide more robust and nuanced understandings of relationships, groupings, experiences, and qualities within a dataset, ultimately, there are still individuals who can be misrepresented or unnoticed. As person-centered analyses are used to search for generalizable patterns among large, sprawling information, there remains space for over-generalizations or lack of representation in research findings. Even though the results from person-centered analyses are not restricted to a small number of dimensions or rigid relationships, an individual still may only partially fit within a pattern. Thus, results can give insight into a portion of their experience but may not fully capture the lived experiences of individuals.
We offer this discussion as a way to ask the engineering education research community to evaluate what we can ask and conclude from research aligned with non-positivist epistemologies. We hope that this discussion can expand the conceptualizations and operationalizations of new quantitative methods aligned with non-positivist epistemologies within engineering education research and open new frontiers within the field to serve students better and more inclusively.
In this article we use (post)positivism to refer to the family of epistemologies related to positivism. For concision, we use the term non-positivist to refer to epistemologies outside of this family.
Acknowledgements
We would like to thank the editors and anonymous reviewers for the input on this work that strengthened the focus and argumentation. We would also like to thank the anonymous participants for their time in engaging with this research. This work was supported in part by the National Science Foundation under Grant No. 1554057, and through two Graduate Research Fellowships (DGE-1333468). Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation. We would also like to thank the STRIDE (Shaping Transformative Research on Identity and Diversity in Engineering) research group for their assistance in data collection and review of findings for this project. Specifically, the authors would like to thank Dr. Jacqueline Doyle for her work in developing the Mapper algorithm ( Doyle, 2017 ) used to conduct the TDA analysis and her consultation in data analysis. We would also like to thank Dr. Adam Kirn for his conversations about person-centered analyses and Dr. Elliot Douglas for his discussion of epistemic framings in research with the first author.
Competing Interests
The authors have no competing interests to declare.
Author Contributions
Regarding this manuscript, AG conceptualized the idea for research, supervised all aspects of the research, conducted post-TDA analyses, wrote portions of each of the sections, and edited the document for flow and consistency. AG also wrote the sections describing the TDA analyses and results. JR wrote the introduction and epistemology section, as well as contributed throughout to link person-centered analysis to particular epistemological framings. In the example project described in this article, AT led and AG and JR assisted with data analysis and interpretation. BB contributed to the sections focused on new methodological approaches in quantitative research and the example of TDA used in engineering education. BB also contributed to the data collection and interpretation of the national survey data, as well as the data collection and analysis of the longitudinal narrative interviews. HP wrote sections on person-centered analyses. JM wrote sections on critical quantitative methodologies. RC contributed to the challenges and opportunities associated with person-centered analysis. RC also contributed to the data collection and analysis of the longitudinal narrative interviews. SC edited the document, found references for claims made in the paper, and properly cited all references used.
Abiodun, O. I., Jantan, A., Omolara, A. E., Dada, K. V., Mohamed, N. A., & Arshad, H. (2018). State-of-the-art in artificial neural network applications: A survey. Heliyon , 4(11), e00938. DOI: https://doi.org/10.1016/j.heliyon.2018.e00938
Acker, J. (1990). Hierarchies, jobs, bodies: A theory of gendered organizations. Gender & Society , 4(2), 139–158. DOI: https://doi.org/10.1177/089124390004002002
Akpanudo, U. M., Huff, J. L., Williams, J. K., & Godwin, A. (2017, October). Hidden in plain sight: Masculine social norms in engineering education. In IEEE Frontiers in Education Conference. DOI: https://doi.org/10.1109/FIE.2017.8190515
Baillie, C., & Douglas, E. P. (2014). Confusions and conventions: Qualitative research in engineering education. Journal of Engineering Education , 103(1), 1–7. DOI: https://doi.org/10.1002/jee.20031
Bairaktarova, D., & Pilotte, M. (2020). Person or thing oriented: A comparative study of individual differences of first-year engineering students and practitioners. Journal of Engineering Education , 109(2), 230–242. DOI: https://doi.org/10.1002/jee.20309
Benedict, B., Baker, R. A., Godwin, A., & Milton, T. (2018). Uncovering latent diversity: Steps towards understanding ‘what counts’ and ‘who belongs’ in engineering culture. In ASEE Annual Conference & Exposition, Salt Lake City, UT. DOI: https://doi.org/10.18260/1-2-31164
Benson, L., Potvin, G., Kirn, A., Godwin, A., Doyle, J., Rohde, J. A., Verdín, D., & Boone, H. (2017). Characterizing student identities in engineering: Attitudinal profiles of engineering majors. In ASEE Annual Conference & Exposition, Columbus, OH. DOI: https://doi.org/10.18260/1-2--27950
Biesta, G. (2010). Pragmatism and the philosophical foundations of mixed methods research. In A. Tashakkori & C. Teddlie (Eds.), Handbook of Mixed Methods in Social and Behavioral Research (pp. 95–118), SAGE. DOI: https://doi.org/10.4135/9781506335193.n4
Breiman, L., Friedman, J. H., Olshen, R. A., & Stone, C. J. (1984). Classification and Regression Trees . New York, NY: Routledge. DOI: https://doi.org/10.1201/9781315139470
Bowleg, L. (2008). When Black + lesbian + woman ≠ Black lesbian woman: The methodological challenges of qualitative and quantitative intersectionality research. Sex Roles , 59(5–6), 312–325. DOI: https://doi.org/10.1007/s11199-008-9400-z
Bryman, A. (2008). The end of the paradigm wars? In Alasuutari, P., Bickman, L. and Brannen, J. (Eds.), The SAGE Handbook of Social Research Methods (pp. 13–25), London, UK: SAGE. DOI: https://doi.org/10.4135/9781446212165
Cech, E. (2015). Engineers and engineeresses? Self-conceptions and the development of gendered professional identities. Sociological Perspectives , 58(1), 56–77. DOI: https://doi.org/10.1177/0731121414556543
Cejka, M. A., & Eagly, A. H. (1999). Gender-stereotypic images of occupations correspond to the sex segregation of employment. Personality and Social Psychology Bulletin , 25(4), 413–423. DOI: https://doi.org/10.1177/0146167299025004002
Chazal, F., & Michel, B. (2017). An introduction to Topological Data Analysis: Fundamental and practical aspects for data scientists. Retrieved from http://arxiv.org/abs/1710.04019
Codiroli Mcmaster, N., & Cook, R. (2019). The contribution of intersectionality to quantitative research into educational inequalities. Review of Education , 7(2), 271–292. DOI: https://doi.org/10.1002/rev3.3116
Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Lawrence Earlbaum Associates.
Collins, P. H. (1990). Black feminist thought: Knowledge, consciousness, and the politics of empowerment . Unwin Hyman.
Collins, P. H., & Bilge, S. (2016). Intersectionality . Cambridge, UK: Polity Press.
Connell, R. W. (2009). Gender: Short introductions (2nd ed.). Cambridge, UK: Polity Press.
Creswell, J. W., & Plano Clark, V. L. (2011). Designing and conducting mixed methods research (2nd Ed.). SAGE.
Crotty, M. (1998). The foundations of social research: Meaning and perspective in the research process . SAGE.
Danielak, B. A., Gupta, A., & Elby, A. (2014). Marginalized identities of sense-makers: Reframing engineering student retention. Journal of Engineering Education , 103(1), 8–44. DOI: https://doi.org/10.1002/jee.20035
Delgado, R., & Stefancic, J. (2012). Critical race theory: An introduction (2nd ed.). New York, NY: New York University Press. https://ssrn.com/abstract=1640643
Douglas, E. P., Koro-Ljungberg, M., & Borrego, M. (2010). Challenges and promises of overcoming epistemological and methodological partiality: Advancing engineering education through acceptance of diverse ways of knowing. European Journal of Engineering Education , 35(3), 247–257. DOI: https://doi.org/10.1080/03043791003703177
Douglas, K. A., & Purzer, Ş. (2015). Validity: Meaning and relevancy in assessment for engineering education research. Journal of Engineering Education , 104(2), 108–118. DOI: https://doi.org/10.1002/jee.20070
Doyle, J. (2017). Describing and mapping the interactions between student affective factors related to persistence in science, physics, and engineering (Publication No. 10747700). [Doctoral dissertation, Florida International University]. ProQuest Dissertations & Theses Global.
Everitt, B. S., Landau, S., Leese, M., & Stahl, D. (2011). Cluster analysis (5th ed.). John Wiley & Sons, Inc. DOI: https://doi.org/10.1002/9780470977811
Eye, A., & Wiedermann, W. (2015). Person-Centered Analysis. In Emerging Trends in the Social and Behavioral Sciences (pp. 1–18). John Wiley & Sons, Inc. DOI: https://doi.org/10.1002/9781118900772.etrds0251
Fanelli, D. (2010). “Positive” results increase down the hierarchy of the sciences. PloS one , 5(4), e10068. DOI: https://doi.org/10.1371/journal.pone.0010068
Fernandez, T., Godwin, A., Doyle, J., Verdín, D., Boone, H., Kirn, A., Benson, L., & Potvin, G. (2016). More comprehensive and inclusive approaches to demographic data collection. In ASEE Annual Conference & Exposition, New Orleans, LA. DOI: https://doi.org/10.18260/p.25751
Foor, C. E., Walden, S. E., & Trytten, D. A. (2007). “I wish that I belonged more in this whole engineering group”: Achieving individual diversity. Journal of Engineering Education , 96(2), 103–115. DOI: https://doi.org/10.1002/j.2168-9830.2007.tb00921.x
Garcia-Dias, R., Vieira, S., Pinaya, W. H. L., & Mechelli, A. (2020). Clustering analysis. In Machine Learning (pp. 227–247). Academic Press. DOI: https://doi.org/10.1016/B978-0-12-815739-8.00013-4
Gero, J., & Milovanovic, J. (2020). A framework for studying design thinking through measuring designers’ minds, bodies and brains. Design Science , 6, E19. DOI: https://doi.org/10.1017/dsj.2020.15
Gero, J. S., & Peng, W. (2009). Understanding behaviors of a constructive memory agent: A Markov chain analysis. Knowledge-Based Systems , 22(8), 610–621. DOI: https://doi.org/10.1016/j.knosys.2009.05.006
Gillborn, D. (2018). QuantCrit: Rectifying quantitative methods through Critical Race Theory [Special Issue]. Race Ethnicity and Education , 21(2), 149–273. DOI: https://doi.org/10.1080/13613324.2017.1377675
Gillborn, D., Warmington, P., & Demack, S. (2018). QuantCrit: education, policy, ‘Big Data’ and principles for a critical race theory of statistics. Race Ethnicity and Education , 21(2), 158–179. DOI: https://doi.org/10.1080/13613324.2017.1377417
Godwin, A. (2017). Unpacking latent diversity. In ASEE Annual Conference & Exposition, Columbus, OH. DOI: https://doi.org/10.18260/1-2--29062
Godwin, A., Benedict, B. S., Verdín, D., Thielmeyer, A. R. H., Baker, R. A., & Rohde, J. A. (2018). Board 12: CAREER: Characterizing latent diversity among a national sample of first-year engineering students. In ASEE Annual Conference & Exposition, Tampa, FL. https://peer.asee.org/32207
Godwin, A., Thielmeyer, A. R. H., Rohde, J. A., Verdín, D., Benedict, B. S., Baker, R. A., & Doyle, J. (2019). Using topological data analysis in social science research: Unpacking decisions and opportunities for a new method. In ASEE Annual Conference and Exposition, Tampa, FL. https://peer.asee.org/33522
Goldschmidt, G. (2014). Linkography: unfolding the design process . MIT Press. DOI: https://doi.org/10.7551/mitpress/9455.001.0001
Greenacre, M., & Hastie, T. (1987). The geometric interpretation of correspondence analysis. Journal of the American Statistical Association , 82(398), 437–447. DOI: https://doi.org/10.1080/01621459.1987.10478446
Hammersley, M. (2008). Assessing validity in social research. In P. Alasuutari, L. Bickman, & J. Brannen (Eds.), The SAGE Handbook of Social Research Methods (pp. 42–53), SAGE. DOI: https://doi.org/10.4135/9781446212165.n4
Hanel, P. H., Maio, G. R., & Manstead, A. S. (2019). A new way to look at the data: Similarities between groups of people are large and important. Journal of Personality and Social Psychology , 116(4), 541–562. DOI: https://doi.org/10.1037/pspi0000154
Harding, S. (2016). Whose science? Whose knowledge? Thinking from women’s lives . Cornell University Press. DOI: https://doi.org/10.7591/9781501712951
Hesse-Biber, S. N., & Piatelli, D. (2012). The feminist practice of holisitic reflexivity. In S. N. Hesse-Biber (Ed.), Handbook of Feminist Research Theory and Praxis (2nd ed., pp. 557–582). SAGE. DOI: https://doi.org/10.4135/9781483384740.n27
Holland, P. W. (2008). Causation and race. In T. Zuberi & E. Bonilla-Silva (Eds.), White logic, white methods: Racism and methodology . Rowman & Littlefield.
Hout, M. C., Papesh, M. H., & Goldinger, S. D. (2013). Multidimensional scaling. Wiley Interdisciplinary Reviews: Cognitive Science , 4(1), 93–103. DOI: https://doi.org/10.1002/wcs.1203
Hundleby, C. E. (2012). Feminist empiricism. In S. N. Hesse-Biber (Ed.), Handbook of Feminist Research: Theory and Praxis (2nd ed., pp. 28–45). SAGE. DOI: https://doi.org/10.4135/9781483384740.n2
Jack, R. E., Crivelli, C., & Wheatley, T. (2018). Data-Driven Methods to Diversify Knowledge of Human Psychology. Trends in Cognitive Sciences, 22(1), 1–5. DOI: https://doi.org/10.1016/j.tics.2017.10.002
Jagger, A. M. (2014). Introduction: The project of feminist methodology. In A. M. Jagger (Ed.), Just Methods: An Interdisciplinary Feminist Reader (2nd ed., pp. vii–xiii). Paradigm Publishers. DOI: https://doi.org/10.4324/9781315636344
Jesiek, B. K., Newswander, L. K., & Borrego, M. (2009). Engineering education research: Discipline, community, or field? Journal of Engineering Education , 98(1), 39–52. DOI: https://doi.org/10.1002/j.2168-9830.2009.tb01004.x
Johnson, R. B., & Onwuegbuzie, A. J. (2004). Mixed methods research: A research paradigm whose time has come. Educational Researcher , 33(7), 14–26. DOI: https://doi.org/10.3102/0013189X033007014
Kan, J. W., & Gero, J. S. (2010). Exploring quantitative methods to study design behavior in collaborative virtual workspaces. In New Frontiers, Proceedings of the 15th International Conference on CAADRIA (pp. 273–282).
Kant, V., & Kerr, E. (2019). Taking stock of engineering epistemology: Multidisciplinary perspectives. Philosophy & Technology , 32(4), 685–726. DOI: https://doi.org/10.1007/s13347-018-0331-5
Kaushik, V., & Walsh, C. A. (2019). Pragmatism as a research paradigm and its implications for social work research. Social Sciences , 8(255), 1–17. DOI: https://doi.org/10.3390/socsci8090255
Kherif, F., & Latypova, A. (2020). Principal component analysis. In Machine Learning (pp. 209–225). Academic Press. DOI: https://doi.org/10.1016/B978-0-12-815739-8.00012-2
Koro-Ljungberg, M., & Douglas, E. P. (2008). State of qualitative research in engineering education: Meta-analysis of JEE articles, 2005–2006. Journal of Engineering Education , 97(2), 163–175. DOI: https://doi.org/10.1002/j.2168-9830.2008.tb00965.x
Lather, P. (2006). Paradigm proliferation as a good thing to think with: Teaching research in education as a wild profusion. International Journal of Qualitative Studies in Education , 19(1), 35–57. DOI: https://doi.org/10.1080/09518390500450144
Laubenbacher, R., & Hastings, A. (2019). Topological data analysis. Bulletin of Mathematical Biology , 81(7), 2051. DOI: https://doi.org/10.1007/s11538-019-00610-3
Laursen, B., & Hoff, E. (2006). Person-centered and variable-centered approaches to longitudinal data. Merrill-Palmer Quarterly , 52(3), 377–389. DOI: https://doi.org/10.1353/mpq.2006.0029
Lazer, D., Pentland, A., Adamic, L., Aral, S., Barabasi, A. L., Brewer, D., Christakis, N., Contractor, N., Fowler, J., Gutmann, M., Jebara, T., King, G., Macy, M., Roy, D., & Van Alstyne, M. (2009). Computational social science. Science , 323(5915), 721–723. DOI: https://doi.org/10.1126/science.1167742
Lum, P. Y., Singh, G., Lehman, A., Ishkanov, T., Vejdemo-Johansson, M., Alagappan, M., Carlsson, J. & Carlsson, G. (2013). Extracting insights from the shape of complex data using topology. Scientific Reports , 3, 1236. DOI: https://doi.org/10.1038/srep01236
Major, J., Godwin, A., & Kirn, A. (2021). Working to achieve equitable access to engineering by redefining disciplinary standards for the use and dissemination of quantitative study demographics. In Collaborative Network for Engineering and Computing Diversity Conference, Washington, DC. https://peer.asee.org/36147
Major, J. C., & Godwin, A. (2019). An intersectional conceptual framework for understanding how to measure socioeconomic inequality in engineering education. In ASEE Annual Conference & Exposition, Tampa, FL. DOI: https://doi.org/10.18260/1-2--33594
Maxcy, S. J. (2003). Pragmatic threads in mixed methods research in the social sciences: The search for multiple modes of inquiry and the end of the philosophy of formalism. In A. Tashakkori & C. Teddlie (Eds.), Handbook of Mixed Methods in Social and Behavioral Research (pp. 51–89), SAGE.
McCall, L. (2002). Complex inequality: Gender, class, and race in the new economy . Routledge. DOI: https://doi.org/10.4324/9780203902455
McGuirl, M. R., Volkening, A., & Sandstede, B. (2020). Topological data analysis of zebrafish patterns. Proceedings of the National Academy of Sciences , 117(10), 5113–5124. DOI: https://doi.org/10.1073/pnas.1917763117
McNicholas, P. D. (2010). Model-based classification using latent Gaussian mixture models. Journal of Statistical Planning and Inference , 140(5), 1175–1181. DOI: https://doi.org/10.1016/j.jspi.2009.11.006
Merriam, S. B., & Tisdell, E. J. (2016). Qualitative research: A guide to design and implementation (4th ed.). John Wiley & Sons.
Miller, D. I., Eagly, A. H., & Linn, M. C. (2015). Women’s representation in science predicts national gender-science stereotypes: Evidence from 66 nations. Journal of Educational Psychology , 107(3), 631–644. DOI: https://doi.org/10.1037/edu0000005
Morgan, D. L. (2014). Pragmatism as a paradigm for social research. Qualitative Inquiry , 20(8), 1045–1053. DOI: https://doi.org/10.1177/1077800413513733
Morin, A. J., Bujacz, A., & Gagné, M. (2018). Person-centered methodologies in the organizational sciences: Introduction to the feature topic. Organizational Research Method , 21(4), 803–813. DOI: https://doi.org/10.1177/1094428118773856
National Academy of Engineering. (2008). Changing the conversation: Messages for improving public understanding of engineering. Washington DC, National Academies Press. DOI: https://doi.org/10.17226/12187
Oakley, A. (1998). Gender, methodology and people’s ways of knowing: Some problems with feminism and the paradigm debate in social science. Sociology , 32(4), 707–731. DOI: https://doi.org/10.1177/0038038598032004005
Oberski, D. (2016) Mixture Models: Latent Profile and Latent Class Analysis. In J. Robertson, M. Kaptein (Eds.) Modern Statistical Methods for HCI . Human–Computer Interaction Series. Springer. DOI: https://doi.org/10.1007/978-3-319-26633-6_12
Omi, M., & Winant, H. (2014). Racial formation in the United States (3rd ed.). Routledge. DOI: https://doi.org/10.4324/9780203076804-6
Pallas, A. M. (2001) Preparing education doctoral students for epistemological diversity. Educational Researcher , 30(5), 1–6. DOI: https://doi.org/10.3102/0013189X030005006
Pawley, A. L. (2017). Shifting the “default”: The case for making diversity the expected condition for engineering education and making whiteness and maleness visible. Journal of Engineering Education , 106(4), 531–533. DOI: https://doi.org/10.1002/jee.20181
Pawley, A. L. (2018). Learning from small numbers: Studying ruling relations that gender and race the structure of US engineering education. Journal of Engineering Education , 108(1), 13–31. DOI: https://doi.org/10.1002/jee.20247
Perdomo Meza, D. A. (2015). Topological data analysis with metric learning and an application to high-dimensional football data [Master’s thesis, Bogotá-Uniandes]. Retrieved from https://repositorio.uniandes.edu.co/bitstream/handle/1992/12963/u713491.pdf?sequence=1
Qiu, L., Chan, S. H. M., & Chan, D. (2018). Big data in social and psychological science: theoretical and methodological issues. Journal of Computational Social Science , 1(1), 59–66. DOI: https://doi.org/10.1007/s42001-017-0013-6
R Core Team. (2018). R: A language and environment for statistical computing . Vienna, Austria: R Foundation for Statistical Computing. Retrieved from https://www.R-project.org .
Ram, N., & Grimm, K. J. (2009). Methods and measures: Growth mixture modeling: A method for identifying differences in longitudinal change among unobserved groups. International Journal of Behavioral Development , 33(6), 565–576. DOI: https://doi.org/10.1177/0165025409343765
Ray, V. (2019). A theory of racialized organizations. American Sociological Review , 84(1), 26–53. DOI: https://doi.org/10.1177/0003122418822335
Reed, I. A. (2010). Epistemology contextualized: Social-scientific knowledge in a postpositivist era. Sociological Theory , 28(1), 20–39. DOI: https://doi.org/10.1111/j.1467-9558.2009.01365.x
Riley, D. (2017). Rigor/Us: Building boundaries and disciplining diversity with standards of merit. Engineering Studies , 9(3), 249–265. DOI: https://doi.org/10.1080/19378629.2017.1408631
Scheurich, J. J., & Young, M. D. (1997). Coloring epistemologies: Are our research epistemologies racially biased? Educational researcher , 26(4), 4–16. DOI: https://doi.org/10.3102/0013189X026004004
Secules, S., Gupta, A., Elby, A., & Turpen, C. (2018). Zooming out from the struggling individual student: An account of the cultural construction of engineering ability in an undergraduate programming class. Journal of Engineering Education , 107(1), 56–86. DOI: https://doi.org/10.1002/jee.20191
Sellbom, M., & Tellegen, A. (2019). Factor analysis in psychological assessment research: Common pitfalls and recommendations. Psychological Assessment , 31(12), 1428–1441. DOI: https://doi.org/10.1037/pas0000623
Sigle-Rushton, W. (2014). Essentially quantified? Towards a more feminist modeling strategy. In M. Evans, C. Hemmings, M. Henry, H. Johnstone, S. Madhok, A. Plomien, & S. Wearing (Eds.), The SAGE Handbook of Feminist Theory (pp. 431–445). SAGE. DOI: https://doi.org/10.4135/9781473909502.n29
Slaton, A. E. (2015). Meritocracy, technocracy, democracy: Understandings of racial and gender equity in American engineering education. In International perspectives on engineering education (pp. 171–189). Springer. DOI: https://doi.org/10.1007/978-3-319-16169-3_8
Slaton, A. E., & Pawley, A. L. (2018). The power and politics of engineering education research design: Saving the ‘Small N’. Engineering Studies , 10(2–3), 133–157. DOI: https://doi.org/10.1080/19378629.2018.1550785
Sprague, J. (2005). How feminists count: Critical strategies for quantitative methods. In J. Sprague (Ed.), Feminist Methodology for Critical Researchers: Bridging Differences (1st ed., pp. 81–117). Rowman & Littlefield.
Sprague, J., & Zimmerman, M. K. (1989). Quality and quantity: Reconstructing feminist methodology. The American Sociologist , 20(1), 71–86. DOI: https://doi.org/10.1007/BF02697788
Streveler, R., & Smith, K. A. (2006). Rigorous research in engineering education. Journal of Engineering Education , 95(2), 103–105. DOI: https://doi.org/10.1002/j.2168-9830.2006.tb00882.x
Su, R., & Rounds, J. (2015). All STEM fields are not created equal: People and things interests explain gender disparities across STEM fields. Frontiers in Psychology , 6(Article 189), 1–20. DOI: https://doi.org/10.3389/fpsyg.2015.00189
Tashakkori, A., & Teddlie, C. (2008). Quality of inferences in mixed methods research: Calling for an integrative framework. In M. M. Bergman (Ed.), Advances in Mixed Methods Research (pp. 101–119), SAGE. DOI: https://doi.org/10.4135/9780857024329.d10
Tuli, F. (2010). The basis of distinction between qualitative and quantitative research in social science: Reflection on ontological, epistemological and methodological perspectives. Ethiopian Journal of Education and Sciences , 6(1), 97–108. DOI: https://doi.org/10.4314/ejesc.v6i1.65384
Tynjälä, P., Salminen, R. T., Sutela, T., Nuutinen, A., & Pitkänen, S. (2005). Factors related to study success in engineering education. European Journal of Engineering Education , 30(2), 221–231. DOI: https://doi.org/10.1080/03043790500087225
Uhlar, J. R., & Secules, S. (2018). Butting heads: Competition and posturing in a paired programming team. In IEEE Frontiers in Education Conference, San Jose, CA. DOI: https://doi.org/10.1109/FIE.2018.8658654
Verdín, D., Godwin, A., Kirn, A., Benson, L., & Potvin, G. (2018). Engineering women’s attitudes and goals in choosing disciplines with above and below average female representation. Social Sciences , 7(3), 44. DOI: https://doi.org/10.3390/socsci7030044
Villanueva, I., Di Stefano, M., Gelles, L., Osoria, P. V., & Benson, S. (2019). A race re-imaged, intersectional approach to academic mentoring: Exploring the perspectives and responses of womxn in science and engineering research. Contemporary Educational Psychology , 59(2019), 101786. DOI: https://doi.org/10.1016/j.cedpsych.2019.101786
Villanueva, I., Husman, J., Christensen, D., Youmans, K., Khan, M. T., Vicioso, P., Lampkins, S., & Graham, M. C. (2019). A cross-disciplinary and multi-modal experimental design for studying near-real-time authentic examination experiences. JoVE (Journal of Visualized Experiments) , (151), e60037. DOI: https://doi.org/10.3791/60037
Walther, J., Pawley, A. L., & Sochacka, N. W. (2015). Exploring ethical validation as a key consideration in interpretive research quality. In ASEE Annual Conference & Exposition, Seattle, WA. DOI: https://doi.org/10.18260/p.24063
Walther, J., Sochacka, N. W., Benson, L. C., Bumbaco, A. E., Kellam, N., Pawley, A. L., & Phillips, C. M. (2017). Qualitative research quality: A collaborative inquiry across multiple methodological perspectives. Journal of Engineering Education , 106(3), 398–430. DOI: https://doi.org/10.1002/jee.20170
Walther, J., Sochacka, N. W., & Kellam, N. N. (2013). Quality in interpretive engineering education research: Reflections on an example study. Journal of Engineering Education, 102(4), 626–659. DOI: https://doi.org/10.1002/jee.20029
Wang, M., Sinclair, R. R., Zhou, L., & Sears, L. E. (2013). Person-centered analysis: Methods, applications, and implications for occupational health psychology. In R. R. Sinclair, M. Wang, & L. E. Tetrick (Eds.), Research methods in occupational health psychology: Measurement, design, and data analysis (p. 349–373). Routledge/Taylor & Francis Group. DOI: https://doi.org/10.4324/9780203095249
Wasserman, L. (2018). Topological data analysis. Annual Review of Statistics and Its Application , (5), 501–532. DOI: https://doi.org/10.1146/annurev-statistics-031017-100045
Wickham, H. (2009). ggplot2: Elegant graphics for data analysis . Springer. http://had.co.nz/ggplot2/book
Leo Falabella
June 28th, 2024
How can quantitative methods enhance research in HE?
In explaining how the use of quantitative data in pedagogic research can surface important insights and establish causation, Leo Falabella addresses three common concerns – perceived lack of critical depth, ethics, and small numbers of participants.
Pedagogic research is central to the social role of higher education, especially in the face of recent developments. Access to higher education is becoming broader and less exclusive . Debates on university funding populate the news from the UK and US to Brazil and Pakistan , and authoritarian voices assault academic freedom . Quality evidence-based teaching can go a long way towards responding to such pressures.
In this post, I argue that increasing our use of quantitative data is a promising avenue for research in higher education. Quantitative research is not a panacea, but it holds special promise for uncovering the causal effects of interventions. Moreover, concerns with quantitative studies can be overcome with appropriate choices of research design. This post discusses three potential sources of concern with quantitative studies in higher education. In so doing, I focus on experimental methods, as they are especially powerful in identifying causal relationships. Importantly, I do not argue for replacing qualitative with quantitative data. Rather, I claim that quantitative studies support qualitative inquiry to enhance our knowledge of teaching and learning.
Leveraging complementarities
On that note, this study published in Proceedings of the National Academy of Sciences (PNAS) is a good example of how quantitative evidence is critical to advancing pedagogical research. Through an experiment comparing active learning practices to traditional teaching methods, Louis Deslauriers and his co-authors found that students in the active classroom learnt more, even though they believed they learnt less. This shows that while students’ perspectives cannot be ignored, self-reported accounts should be taken with a pinch of salt, at least when the goal is to understand the impact of pedagogical interventions on student learning.
Moreover, the study identifies a source of resistance to practices that have long been defended in the field of critical pedagogy , a predominantly qualitative tradition. Paulo Freire’s Pedagogy of the Oppressed – the foundation of critical pedagogy – emphasises the need to stimulate active participation, prompting students to both educate and be educated by peers and instructors. The choice of active learning strategies over traditional methods is validated by empirical research , but traditional didactic teaching methods are still widespread. The experiment conducted by Deslauriers and colleagues helps us understand why. It suggests that active learning practices may be discontinued due to inaccurate perceptions of inefficacy. Without quantitative research, these perceptions could have remained unchallenged.
Methodological pluralism can facilitate dialogue with other areas of knowledge
Despite its value and promise, quantitative research in higher education still faces resistance in the scholarly community. While I cannot determine the precise reasons for this, three hypotheses come to mind. First, researchers in pedagogy may believe that quantitative studies sacrifice critical reflection by glossing over the depth of qualitative data. Second, researchers may fear that conducting experiments in the classroom would violate research ethics. And third, researchers may feel constrained by classes with too few students, amounting to sample sizes too small for reliable statistical testing. All such concerns are valid. In what follows, I propose ways to address them so we can unlock the full potential of quantitative data for pedagogical research.
Overcoming challenges
Let us start with the belief that quantitative analysis lacks the depth of qualitative approaches. One could argue that quantifying student outcomes is a reductionist approach, done at the expense of critical reflection. In addition, researchers may question the reliability of quantitative measures. Anyone with experience writing and marking exams has seen cases where a numerical mark did not seem to reflect a student’s learning. These factors could lead some researchers to believe that qualitative data is more appropriate for pedagogical research.
We can address these concerns by reiterating that quantitative analysis is a supplement to, not a substitute for, qualitative research. The in-depth critical reflection made possible by qualitative data need not and must not be lost. Instead, insights from critical reflection can be bolstered through quantitative studies, as demonstrated by the above study on the effectiveness of active learning strategies. Moreover, whereas quantitative data sometimes involves measurement error (for example, a student’s grade often does not accurately reflect their learning or understanding), this is also true of qualitative data. Finally, methodological pluralism can facilitate dialogue with other areas of knowledge, enhancing the impact of research on higher education.
Quantitative analysis is a supplement, not a substitute, for qualitative research
Besides concerns with loss of analytical depth, researchers may also have reservations regarding the ethics of experiments in the classroom. For example, one may expect experiments to disadvantage certain students. This is a valid concern since testing the causal effects of a pedagogical intervention requires a benchmark – a control group where the intervention is absent. Therefore, experiments can create asymmetries in access to teaching methods and resources.
Despite the validity of these concerns, an adequate research design can ensure equitable teaching in experimental studies. For example, within-subject designs expose all participants to control and treatment conditions. Imagine a calculus teacher who wants to test a new method in a class with content on integrals and derivatives. This teacher-researcher could randomly assign half of their students to learn derivatives through the new method but learn integrals through a traditional method. Conversely, the remaining students would learn integrals through the new method and derivatives through a traditional method. A formative or ungraded assessment (exam) with questions on integrals and derivatives would allow the researcher to detect the average effect of the new teaching method, all the while safeguarding equitable access to teaching methods. Additionally, the researcher could allow students to choose their preferred method when preparing for subsequent exams. This would bring the added benefit of the delayed intervention design, where participants can choose to receive an intervention (in this case, the new teaching method) at later stages of the study.
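As an illustration, the counterbalanced assignment described above could be sketched as follows. This is a minimal sketch, not code from any published study: the topic names, condition labels, and function name are all illustrative assumptions.

```python
import random

def assign_conditions(student_ids, topics=("integrals", "derivatives"), seed=0):
    """Counterbalanced within-subject assignment: every student experiences
    both conditions, receiving the new method for exactly one topic and the
    traditional method for the other."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    ids = list(student_ids)
    rng.shuffle(ids)  # randomise which half gets which topic-method pairing
    half = len(ids) // 2
    assignment = {}
    for i, sid in enumerate(ids):
        if i < half:
            assignment[sid] = {topics[0]: "new", topics[1]: "traditional"}
        else:
            assignment[sid] = {topics[0]: "traditional", topics[1]: "new"}
    return assignment
```

Because each student serves as their own control, differences in overall ability cancel out when the two methods are compared within the same exam.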
Methodological diversity can help us reach wider audiences
Even if the design ensures equitable teaching, one may find it inherently unethical to have students as participants in experiments. In particular, students could feel coerced into consenting to participate in a study. This concern can also be addressed with appropriate design procedures. When informing students of the experiment, researchers can circulate opt-out forms among students and only view the responses after marks are submitted. Informing students that their choice to opt out will remain confidential until after the course should prevent them from feeling coerced.
Lastly, researchers may avoid quantitative studies because many classrooms have too few students, resulting in sample sizes too small to allow for reliable statistical testing. Once again, the appropriate design choice can mitigate these constraints. Let us go back to the within-subject experiment in the calculus course. Each student would take a formative exam with questions on both integrals and derivatives. The resulting statistical test would have a number of observations equal to the number of students multiplied by the number of questions. A classroom of 20 students taking an exam with 10 questions would yield 200 observations, thus addressing the problem of small sample sizes.
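The observation count above can be illustrated with a short simulation (a sketch under assumed, randomly generated scores, not real classroom data): each student answers every question, so item-level observations multiply, and a simple treatment-versus-control comparison becomes possible even in a small class.

```python
import random
import statistics

rng = random.Random(42)
n_students, n_questions = 20, 10

# Simulated item-level data: for each student, half the questions cover the
# topic taught with the new method (treated), the other half the topic taught
# traditionally. Scores are illustrative random draws with an assumed effect.
observations = []
for student in range(n_students):
    for question in range(n_questions):
        treated = question < n_questions // 2
        score = rng.gauss(0.75 if treated else 0.70, 0.10)
        observations.append((student, question, treated, score))

treated_scores = [s for _, _, t, s in observations if t]
control_scores = [s for _, _, t, s in observations if not t]
effect = statistics.mean(treated_scores) - statistics.mean(control_scores)
```

Note that these 200 observations are not fully independent (answers cluster within students), so in practice the analysis should account for that, for example with student-clustered standard errors or a mixed-effects model.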
Diverse and mutually supportive methodologies can enhance the social impact of research in higher education. As this post has sought to demonstrate, experimental studies can supplement qualitative data to test for causality without a loss of analytical depth and critical reflection. Further, methodological diversity can help us reach wider audiences, thereby enhancing collaboration with other fields and improving our ability to influence higher education policy.
This post is opinion-based and does not reflect the views of the London School of Economics and Political Science or any of its constituent departments and divisions.
Main image: Nick Hillier on Unsplash
About the author
Leo Falabella is a Fellow at the Department of Government, London School of Economics, UK