S371 Social Work Research - Jill Chonody: What is Quantitative Research?


Quantitative Research in the Social Sciences

This page is courtesy of University of Southern California: http://libguides.usc.edu/content.php?pid=83009&sid=615867

Quantitative methods emphasize objective measurements and the statistical, mathematical, or numerical analysis of data collected through polls, questionnaires, and surveys, or by manipulating pre-existing statistical data using computational techniques. Quantitative research focuses on gathering numerical data and generalizing it across groups of people or to explain a particular phenomenon.

Babbie, Earl R. The Practice of Social Research. 12th ed. Belmont, CA: Wadsworth Cengage, 2010; Muijs, Daniel. Doing Quantitative Research in Education with SPSS. 2nd ed. London: SAGE Publications, 2010.

Characteristics of Quantitative Research

Your goal in conducting a quantitative research study is to determine the relationship between one thing [an independent variable] and another [a dependent or outcome variable] within a population. Quantitative research designs are either descriptive [subjects usually measured once] or experimental [subjects measured before and after a treatment]. A descriptive study establishes only associations between variables; an experimental study establishes causality.

Quantitative research deals in numbers, logic, and an objective stance. Quantitative research focuses on numeric and unchanging data and detailed, convergent reasoning rather than divergent reasoning [i.e., the generation of a variety of ideas about a research problem in a spontaneous, free-flowing manner].

Its main characteristics are:

  • The data is usually gathered using structured research instruments.
  • The results are based on larger sample sizes that are representative of the population.
  • The research study can usually be replicated or repeated, given its high reliability.
  • Researcher has a clearly defined research question to which objective answers are sought.
  • All aspects of the study are carefully designed before data is collected.
  • Data are in the form of numbers and statistics, often arranged in tables, charts, figures, or other non-textual forms.
  • Project can be used to generalize concepts more widely, predict future results, or investigate causal relationships.
  • Researcher uses tools, such as questionnaires or computer software, to collect numerical data.

The overarching aim of a quantitative research study is to classify features, count them, and construct statistical models in an attempt to explain what is observed.

  Things to keep in mind when reporting the results of a study using quantitative methods:

  • Explain the data collected and their statistical treatment as well as all relevant results in relation to the research problem you are investigating. Interpretation of results is not appropriate in this section.
  • Report unanticipated events that occurred during your data collection. Explain how the actual analysis differs from the planned analysis. Explain your handling of missing data and why any missing data does not undermine the validity of your analysis.
  • Explain the techniques you used to "clean" your data set.
  • Choose a minimally sufficient statistical procedure; provide a rationale for its use and a reference for it. Specify any computer programs used.
  • Describe the assumptions for each procedure and the steps you took to ensure that they were not violated.
  • When using inferential statistics, provide the descriptive statistics, confidence intervals, and sample sizes for each variable as well as the value of the test statistic, its direction, the degrees of freedom, and the significance level [report the actual p value].
  • Avoid inferring causality, particularly in nonrandomized designs or without further experimentation.
  • Use tables to provide exact values; use figures to convey global effects. Keep figures small in size; include graphic representations of confidence intervals whenever possible.
  • Always tell the reader what to look for in tables and figures.
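The reporting checklist above can be made concrete with a small sketch. The code below uses invented data and standard-library Python only; it computes the descriptive statistics, a normal-approximation 95% confidence interval, and Welch's t statistic with its degrees of freedom for two independent groups. The exact p value would still be looked up in a t distribution table or a statistics package.

```python
import math
from statistics import mean, stdev

def describe(sample):
    """Descriptive statistics that should accompany any inferential test."""
    n = len(sample)
    m = mean(sample)
    sd = stdev(sample)
    se = sd / math.sqrt(n)
    # 95% CI via the normal approximation; for small samples,
    # substitute the appropriate t critical value for 1.96.
    return {"n": n, "mean": m, "sd": sd, "ci95": (m - 1.96 * se, m + 1.96 * se)}

def welch_t(sample_a, sample_b):
    """Welch's t statistic and degrees of freedom for two independent samples."""
    a, b = describe(sample_a), describe(sample_b)
    va, vb = a["sd"] ** 2 / a["n"], b["sd"] ** 2 / b["n"]
    t = (a["mean"] - b["mean"]) / math.sqrt(va + vb)
    df = (va + vb) ** 2 / (va**2 / (a["n"] - 1) + vb**2 / (b["n"] - 1))
    return t, df

# Invented outcome scores for a treatment group and a comparison group.
treatment = [12, 15, 14, 16, 13, 15, 17, 14]
control = [10, 11, 12, 10, 13, 11, 12, 10]
t, df = welch_t(treatment, control)
print(f"t = {t:.2f}, df = {df:.1f}")  # → t = 4.87, df = 12.6
```

Reporting the descriptive summaries alongside the test statistic and its degrees of freedom, as the checklist recommends, lets readers verify the result and judge effect size rather than relying on a significance label alone.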

NOTE: When using pre-existing statistical data gathered and made available by anyone other than yourself [e.g., a government agency], you must still report on the methods used to gather the data, describe any missing data, and, if there is any, provide a clear explanation of why the missing data does not undermine the validity of your final analysis.

Babbie, Earl R. The Practice of Social Research. 12th ed. Belmont, CA: Wadsworth Cengage, 2010; Brians, Craig Leonard et al. Empirical Political Analysis: Quantitative and Qualitative Research Methods. 8th ed. Boston, MA: Longman, 2011; McNabb, David E. Research Methods in Public Administration and Nonprofit Management: Quantitative and Qualitative Approaches. 2nd ed. Armonk, NY: M.E. Sharpe, 2008; Quantitative Research Methods. Writing@CSU. Colorado State University; Singh, Kultar. Quantitative Social Research Methods. Los Angeles, CA: Sage, 2007.

Basic Research Designs for Quantitative Studies

Before designing a quantitative research study, you must decide whether it will be descriptive or experimental because this will dictate how you gather, analyze, and interpret the results. A descriptive study is governed by the following rules: subjects are generally measured once; the intention is only to establish associations between variables; and the study may include a sample population of hundreds or thousands of subjects to ensure that a valid estimate of a generalized relationship between variables has been obtained. An experimental design includes subjects measured before and after a particular treatment, the sample population may be very small and purposefully chosen, and it is intended to establish causality between variables.

Introduction

The introduction to a quantitative study is usually written in the present tense and from the third person point of view. It covers the following information:

  • Identifies the research problem -- as with any academic study, you must state clearly and concisely the research problem being investigated.
  • Reviews the literature -- review scholarship on the topic, synthesizing key themes and, if necessary, noting studies that have used similar methods of inquiry and analysis. Note where key gaps exist and how your study helps to fill these gaps or clarifies existing knowledge.
  • Describes the theoretical framework -- provide an outline of the theory or hypothesis underpinning your study. If necessary, define unfamiliar or complex terms, concepts, or ideas and provide the appropriate background information to place the research problem in proper context [e.g., historical, cultural, economic, etc.].

Methodology

The methods section of a quantitative study should describe how each objective of your study will be achieved. Be sure to provide enough detail to enable the reader to make an informed assessment of the methods being used to obtain results associated with the research problem. The methods section should be presented in the past tense.

  • Study population and sampling -- where did the data come from; how robust is it; note where gaps exist or what was excluded; and describe the procedures used for selecting the sample.
  • Data collection -- describe the tools and methods used to collect information and identify the variables being measured; describe the methods used to obtain the data; and note whether the data was pre-existing [i.e., government data] or you gathered it yourself. If you gathered it yourself, describe what type of instrument you used and why. Note that no data set is perfect -- describe any limitations in methods of gathering data.
  • Data analysis -- describe the procedures for processing and analyzing the data. If appropriate, describe the specific instruments of analysis used to study each research objective, including mathematical techniques and the type of computer software used to manipulate the data.

Results

The findings of your study should be written objectively and in a succinct and precise format. In quantitative studies, it is common to use graphs, tables, charts, and other non-textual elements to help the reader understand the data. Make sure that non-textual elements do not stand in isolation from the text but are used to supplement the overall description of the results and to help clarify key points being made.

  • Statistical analysis -- how did you analyze the data? What were the key findings from the data? The findings should be presented in a logical, sequential order. Describe but do not interpret these trends or negative results; save that for the discussion section. The results should be presented in the past tense.

Discussion

The discussion should be analytic, logical, and comprehensive. It should meld your findings with those identified in the literature review and place them within the context of the theoretical framework underpinning the study. The discussion should be presented in the present tense.

  • Interpretation of results -- reiterate the research problem being investigated and compare and contrast the findings with the research questions underlying the study. Did they affirm predicted outcomes, or did the data refute them?
  • Description of trends, comparison of groups, or relationships among variables -- describe any trends that emerged from your analysis and explain all unanticipated and statistically insignificant findings.
  • Discussion of implications -- what is the meaning of your results? Highlight key findings based on the overall results and note findings that you believe are important. How have the results helped fill gaps in understanding the research problem?
  • Limitations -- describe any limitations or unavoidable bias in your study and, if necessary, note why these limitations did not inhibit effective interpretation of the results.

Conclusion

End your study by summarizing the topic and providing a final comment and assessment of the study.

  • Summary of findings – synthesize the answers to your research questions. Do not report any statistical data here; just provide a narrative summary of the key findings and describe what was learned that you did not know before conducting the study.
  • Recommendations -- if appropriate to the aim of the assignment, tie key findings with policy recommendations or actions to be taken in practice.
  • Future research -- note the need for future research linked to your study’s limitations or to any remaining gaps in the literature that were not addressed in your study.

Black, Thomas R. Doing Quantitative Research in the Social Sciences: An Integrated Approach to Research Design, Measurement and Statistics. London: Sage, 1999; Gay, L. R. and Peter Airasian. Educational Research: Competencies for Analysis and Applications. 7th ed. Upper Saddle River, NJ: Merrill Prentice Hall, 2003; Hector, Anestine. An Overview of Quantitative Research in Composition and TESOL. Department of English, Indiana University of Pennsylvania; Hopkins, Will G. “Quantitative Research Design.” Sportscience 4, 1 (2000); A Strategy for Writing Up Research Results. The Structure, Format, Content, and Style of a Journal-Style Scientific Paper. Department of Biology. Bates College; Nenty, H. Johnson. “Writing a Quantitative Research Thesis.” International Journal of Educational Science 1 (2009): 19-32; Ouyang, Ronghua (John). Basic Inquiry of Quantitative Research. Kennesaw State University.

  • Last Updated: Jul 11, 2023 1:03 PM
  • URL: https://libguides.iun.edu/S371socialworkresearch

Nature and Extent of Quantitative Research in Social Work Journals: A Systematic Review from 2016 to 2020


Sebastian Kurten, Nausikaä Brimmel, Kathrin Klein, Katharina Hutter, Nature and Extent of Quantitative Research in Social Work Journals: A Systematic Review from 2016 to 2020, The British Journal of Social Work , Volume 52, Issue 4, June 2022, Pages 2008–2023, https://doi.org/10.1093/bjsw/bcab171


This study reviews 1,406 research articles published between 2016 and 2020 in the European Journal of Social Work (EJSW), the British Journal of Social Work (BJSW) and Research on Social Work Practice (RSWP). It assesses the proportion and complexity of quantitative research designs amongst published articles and investigates differences between the journals. Furthermore, the review investigates the complexity of the statistical methods employed and identifies the most frequently addressed topics. Of the 1,406 articles, 504 (35.8 percent) used a qualitative methodology, 389 (27.7 percent) used a quantitative methodology, 85 (6 percent) used mixed methods, 253 (18 percent) were theoretical in nature, 148 (10.5 percent) conducted reviews and 27 (1.9 percent) gave project overviews. The proportion of quantitative research articles was higher in RSWP (55.4 percent) than in the EJSW (14.1 percent) and the BJSW (20.5 percent). The topic analysis identified at least forty different topics addressed by the articles. Although the proportion of quantitative research is rather small in social work research, the review could not find evidence that it is of low sophistication. Finally, this study concludes that future research would benefit from making explicit why a certain methodology was chosen.


Quantitative Research Methods for Social Work: Making Social Work Count

  • School for Policy Studies

Research output: Book/Report › Authored book


Teater, Barbra, John Devaney, Donald Forrester, Jonathan Scourfield, and John Carpenter. Quantitative Research Methods for Social Work: Making Social Work Count. London: Palgrave Macmillan, 2017. ISBN 978-1-137-40026-0.

Social work knowledge and understanding draws heavily on research, and the ability to critically analyse research findings is a core skill for social workers. However, while many social work students are confident in reading qualitative data, a lack of understanding of basic statistical concepts means that this same confidence does not always apply to quantitative data. The book arose from a curriculum development project funded by the Economic and Social Research Council (ESRC), in conjunction with the Higher Education Funding Council for England, the British Academy and the Nuffield Foundation. This was part of a wider initiative to increase the numbers of quantitative social scientists in the UK in order to address an identified skills gap. This gap related to both the conduct of quantitative research and the literacy of social scientists in being able to read and interpret statistical information. The book is a comprehensive resource for students and educators. It is packed with activities and examples from social work covering the basic concepts of quantitative research methods – including reliability, validity, probability, variables and hypothesis testing – and explores key areas of data collection, analysis and evaluation, providing a detailed examination of their application to social work practice.

Social Work Research Methods That Drive the Practice


Social workers advocate for the well-being of individuals, families and communities. But how do social workers know what interventions are needed to help an individual? How do they assess whether a treatment plan is working? What do social workers use to write evidence-based policy?

Social work involves research-informed practice and practice-informed research. At every level, social workers need to know objective facts about the populations they serve, the efficacy of their interventions and the likelihood that their policies will improve lives. A variety of social work research methods make that possible.

Data-Driven Work

Data is a collection of facts used for reference and analysis. In a field as broad as social work, data comes in many forms.

Quantitative vs. Qualitative

As with any research, social work research involves both quantitative and qualitative studies.

Quantitative Research

Quantitative data — facts that can be measured and expressed numerically — are crucial for social work.

  • How many students currently receive reduced-price school lunches in the local school district?
  • How many hours per week does a specific individual consume digital media?
  • How frequently did community members access a specific medical service last year?

Answers to questions like these can help social workers know about the populations they serve — or hope to serve in the future.

Quantitative research has advantages for social scientists. Such research can be more generalizable to large populations, as it uses specific sampling methods and lends itself to large datasets. It can provide important descriptive statistics about a specific population. Furthermore, by operationalizing variables, it can help social workers easily compare similar datasets with one another.

Qualitative Research

Qualitative data — facts that cannot be measured or expressed in terms of mere numbers or counts — offer rich insights into individuals, groups and societies. They can be collected via interviews and observations.

  • What attitudes do students have toward the reduced-price school lunch program?
  • What strategies do individuals use to moderate their weekly digital media consumption?
  • What factors made community members more or less likely to access a specific medical service last year?

Qualitative research can thereby provide a textured view of social contexts and systems that may not have been possible with quantitative methods. Plus, it may even suggest new lines of inquiry for social work research.

Mixed Methods Research

Combining quantitative and qualitative methods into a single study is known as mixed methods research. This form of research has gained popularity in the study of social sciences, according to a 2019 report in the academic journal Theory and Society. Since quantitative and qualitative methods answer different questions, merging them into a single study can balance the limitations of each and potentially produce more in-depth findings.

However, mixed methods research is not without its drawbacks. Combining research methods increases the complexity of a study and generally requires a higher level of expertise to collect, analyze and interpret the data. It also requires a greater level of effort, time and often money.

The Importance of Research Design

Data-driven practice plays an essential role in social work. Unlike philanthropists and altruistic volunteers, social workers are obligated to operate from a scientific knowledge base.

To know whether their programs are effective, social workers must conduct research to determine results, aggregate those results into comprehensible data, analyze and interpret their findings, and use evidence to justify next steps.

Employing the proper design ensures that any evidence obtained during research enables social workers to reliably answer their research questions.

Research Methods in Social Work

The various social work research methods have specific benefits and limitations determined by context. Common research methods include surveys, program evaluations, needs assessments, randomized controlled trials, descriptive studies and single-system designs.

Surveys

Surveys involve a hypothesis and a series of questions designed to test that hypothesis. Social work researchers send out a survey, receive responses, aggregate the results, analyze the data and form conclusions based on trends.

Surveys are one of the most common research methods social workers use — and for good reason. They tend to be relatively simple and are usually affordable. However, surveys generally require large participant groups, and self-reports from survey respondents are not always reliable.

Program Evaluations

Social workers ally with all sorts of programs: after-school programs, government initiatives, nonprofit projects and private programs, for example.

Crucially, social workers must evaluate a program’s effectiveness in order to determine whether the program is meeting its goals and what improvements can be made to better serve the program’s target population.

Evidence-based programming helps everyone save money and time, and comparing programs with one another can help social workers make decisions about how to structure new initiatives. Evaluating programs becomes complicated, however, when programs have multiple goal metrics, some of which may be vague or difficult to assess (e.g., “we aim to promote the well-being of our community”).

Needs Assessments

Social workers use needs assessments to identify services and necessities that a population lacks access to.

Common social work populations that researchers may perform needs assessments on include:

  • People in a specific income group
  • Everyone in a specific geographic region
  • A specific ethnic group
  • People in a specific age group

In the field, a social worker may use a combination of methods (e.g., surveys and descriptive studies) to learn more about a specific population or program. Social workers look for gaps between the actual context and a population’s or individual’s “wants” or desires.

For example, a social worker could conduct a needs assessment with an individual with cancer trying to navigate the complex medical-industrial system. The social worker may ask the client questions about the number of hours they spend scheduling doctor’s appointments, commuting and managing their many medications. After learning more about the specific client needs, the social worker can identify opportunities for improvements in an updated care plan.

In policy and program development, social workers conduct needs assessments to determine where and how to effect change on a much larger scale. Integral to social work at all levels, needs assessments reveal crucial information about a population’s needs to researchers, policymakers and other stakeholders. Needs assessments may fall short, however, in revealing the root causes of those needs (e.g., structural racism).

Randomized Controlled Trials

Randomized controlled trials are studies in which a randomly selected group is subjected to a variable (e.g., a specific stimulus or treatment) and a control group is not. Social workers then measure and compare the results of the randomized group with the control group in order to glean insights about the effectiveness of a particular intervention or treatment.

Randomized controlled trials are easily reproducible and highly measurable, which makes them useful when results are easily quantifiable. However, this method is less helpful when rich data such as narratives and on-the-ground observations are needed.
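As a minimal sketch of the random-assignment step (illustrative only — real trials add stratification, blinding and allocation concealment), the snippet below splits a hypothetical participant list into two arms using a seeded shuffle so the allocation is reproducible and auditable.

```python
import random

def randomize(participants, seed=42):
    """Randomly split participants into a treatment arm and a control arm."""
    rng = random.Random(seed)  # fixed seed makes the allocation reproducible
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

participant_ids = list(range(1, 21))  # twenty hypothetical participants
treatment_arm, control_arm = randomize(participant_ids)
print(len(treatment_arm), len(control_arm))  # 10 10
```

After the intervention, the outcome measures of the two arms would be compared — for example, with a two-sample test — to estimate the effect of the treatment.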

Descriptive Studies

Descriptive studies immerse the researcher in another context or culture to study specific participant practices or ways of living. Descriptive studies, including descriptive ethnographic studies, may overlap with and include other research methods:

  • Informant interviews
  • Census data
  • Observation

By using descriptive studies, researchers may glean a richer, deeper understanding of a nuanced culture or group on-site. The main limitations of this research method are that it tends to be time-consuming and expensive.

Single-System Designs

Unlike most medical studies, which involve testing a drug or treatment on two groups — an experimental group that receives the drug/treatment and a control group that does not — single-system designs allow researchers to study just one group (e.g., an individual or family).

Single-system designs typically entail studying a single group over a long period of time and may involve assessing the group’s response to multiple variables.

For example, consider a study on how media consumption affects a person’s mood. One way to test a hypothesis that consuming media correlates with low mood would be to observe two groups: a control group (no media) and an experimental group (two hours of media per day). When employing a single-system design, however, researchers would observe a single participant as they watch two hours of media per day for one week and then four hours per day of media the next week.

These designs allow researchers to test multiple variables over a longer period of time. However, similar to descriptive studies, single-system designs can be fairly time-consuming and costly.
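The media-consumption example above can be sketched as a simple two-phase (AB) comparison. The daily mood ratings below are invented for illustration; a real single-system analysis would also plot the series over time and check for trends within each phase.

```python
from statistics import mean

# Hypothetical daily mood ratings (1-10) for one participant:
# phase A = two hours of media per day, phase B = four hours per day.
phase_a = [7, 6, 7, 8, 7, 6, 7]
phase_b = [5, 6, 5, 4, 5, 5, 4]

def phase_summary(ratings):
    """Level of the mood series within one phase."""
    return {"mean": mean(ratings), "min": min(ratings), "max": max(ratings)}

shift = phase_summary(phase_a)["mean"] - phase_summary(phase_b)["mean"]
print(f"mean mood dropped by {shift:.2f} points in phase B")  # → 2.00 points
```

Comparing the level of the series across phases is the core of a single-system analysis: the participant serves as their own comparison rather than a separate control group.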

Learn More About Social Work Research Methods

Social workers have the opportunity to improve the social environment by advocating for the vulnerable — including children, older adults and people with disabilities — and facilitating and developing resources and programs.

Learn more about how you can earn your Master of Social Work online at Virginia Commonwealth University. The highest-ranking school of social work in Virginia, VCU has a wide range of courses online. That means students can earn their degrees with the flexibility of learning at home. Learn more about how you can take your career in social work further with VCU.


Gov.uk, Mixed Methods Study

MVS Open Press, Foundations of Social Work Research

Open Social Work Education, Scientific Inquiry in Social Work

Open Social Work, Graduate Research Methods in Social Work: A Project-Based Approach

Routledge, Research for Social Workers: An Introduction to Methods

SAGE Publications, Research Methods for Social Work: A Problem-Based Approach

Theory and Society, Mixed Methods Research: What It Is and What It Could Be



11. Quantitative measurement

Chapter outline.

  • Overview of measurement (11 minute read)
  • Operationalization and levels of measurement (20 minute read)
  • Scales and indices (15 minute read)
  • Reliability and validity (20 minute read)
  • Ethical and social justice considerations for measurement (6 minute read)

Content warning: Discussions of immigration issues, parents and gender identity, anxiety, and substance use.

11.1 Overview of measurement

Learning Objectives

Learners will be able to…

  • Provide an overview of the measurement process in social work research
  • Describe why accurate measurement is important for research

This chapter begins with an interesting question: Is my apple the same as your apple? Let’s pretend you want to study apples. Perhaps you have read that chemicals in apples may impact neurotransmitters and you want to test if apple consumption improves mood among college students. So, in order to conduct this study, you need to make sure that you provide apples to a treatment group, right? In order to increase the rigor of your study, you may also want to have a group of students, ones who do not get to eat apples, to serve as a comparison group. Don’t worry if this seems new to you. We will discuss this type of design in Chapter 13. For now, just concentrate on apples.

In order to test your hypothesis about apples, you need to define exactly what is meant by the term “apple” so you can ensure everyone is consuming the same thing, and you need to know what you consider a “dose” of this thing that we call “apple.” So, let’s start by making sure we understand what the term “apple” means. Say you have an object that you identify as an apple and I have an object that I identify as an apple. Perhaps my “apple” is a chocolate apple, one that looks similar to an apple but is made of chocolate and red dye, and yours is a Honeycrisp. Perhaps yours is papier-mâché and mine is a MacBook Pro. All of these are defined as apples, right?


You can see the multitude of ways we could conceptualize “apple,” and how that could create a problem for our research. If I get a Red Delicious (ick) apple and you get a Granny Smith (yum) apple and we observe a change in neurotransmitters, it’s going to be even harder than usual to say the apple influenced the neurotransmitters because we didn’t define “apple” well enough. Measurement in this case is essential to treatment fidelity, which means ensuring that everyone receives the same treatment, or as close to the same as possible. In other words, you need to make sure everyone is consuming the same kind of apples, and you need a way to ensure that you give the same amount of apples to everyone in your treatment group.

In social science, when we use the term  measurement , we mean the process by which we describe and ascribe meaning to the key facts, concepts, or other phenomena that we are investigating. At its core, measurement is about defining one’s terms in as clear and precise a way as possible. Of course, measurement in social science isn’t quite as simple as using a measuring cup or spoon, but there are some basic tenets on which most social scientists agree when it comes to measurement. We’ll explore those, as well as some of the ways that measurement might vary depending on your unique approach to the study of your topic.

An important point here is that measurement does not require any particular instruments or procedures. What it does  require is  some systematic procedure for assigning scores, meanings, and descriptions to individuals or objects so that those scores represent the characteristic of interest. You can measure phenomena in many different ways, but you must be sure that how you choose to measure gives you information and data that lets you answer your research question. If you’re looking for information about a person’s income, but your main points of measurement have to do with the money they have in the bank, you’re not really going to find the information you’re looking for!

What do social scientists measure?

The question of what social scientists measure can be answered by asking yourself what social scientists study. Think about the topics you’ve learned about in other social work classes you’ve taken or the topics you’ve considered investigating yourself. Let’s consider Melissa Milkie and Catharine Warner’s study (2011) [1] of first graders’ mental health. In order to conduct that study, Milkie and Warner needed to have some idea about how they were going to measure mental health. What does mental health mean, exactly? And how do we know when we’re observing someone whose mental health is good and when we see someone whose mental health is compromised? Understanding how measurement works in research methods helps us answer these sorts of questions.

As you might have guessed, social scientists will measure just about anything that they have an interest in investigating. For example, those who are interested in learning something about the correlation between social class and levels of happiness must develop some way to measure both social class and happiness. Those who wish to understand how well immigrants cope in their new locations must measure immigrant status and coping. Those who wish to understand how a person’s gender shapes their workplace experiences must measure gender and workplace experiences. You get the idea. Social scientists can and do measure just about anything you can imagine observing or wanting to study. Of course, some things are easier to observe or measure than others.

In 1964, philosopher Abraham Kaplan (1964) [2] wrote The Conduct of Inquiry, which has since become a classic work in research methodology (Babbie, 2010). [3] In his text, Kaplan describes different categories of things that behavioral scientists observe. One of those categories, which Kaplan called “observational terms,” is probably the simplest to measure in social science. Observational terms are the sorts of things that we can see with the naked eye simply by looking at them. Kaplan roughly defines them as conditions that are easy to identify and verify through direct observation. If, for example, we wanted to know how the conditions of playgrounds differ across different neighborhoods, we could directly observe the variety, amount, and condition of equipment at various playgrounds.

Indirect observables, on the other hand, are less straightforward to assess. In Kaplan’s framework, they are conditions that are subtle and complex, ones we must use existing knowledge and intuition to define. If we conducted a study for which we wished to know a person’s income, we’d probably have to ask them their income, perhaps in an interview or a survey. Thus, we have observed income, even if it has only been observed indirectly. Birthplace might be another indirect observable. We can ask study participants where they were born, but chances are good we won’t have directly observed any of those people being born in the locations they report.

How do social scientists measure?

Measurement in social science is a process. It occurs at multiple stages of a research project: in the planning stages, in the data collection stage, and sometimes even in the analysis stage. Recall that previously we defined measurement as the process by which we describe and ascribe meaning to the key facts, concepts, or other phenomena that we are investigating. Once we’ve identified a research question, we begin to think about what some of the key ideas are that we hope to learn from our project. In describing those key ideas, we begin the measurement process.

Let’s say that our research question is the following: How do new college students cope with the adjustment to college? In order to answer this question, we’ll need some idea about what coping means. We may come up with an idea about what coping means early in the research process, as we begin to think about what to look for (or observe) in our data-collection phase. Once we’ve collected data on coping, we also have to decide how to report on the topic. Perhaps, for example, there are different types or dimensions of coping, some of which lead to more successful adjustment than others. However we decide to proceed, and whatever we decide to report, the point is that measurement is important at each of these phases.

As the preceding example demonstrates, measurement is a process in part because it occurs at multiple stages of conducting research. We could also think of measurement as a process because it involves multiple stages. From identifying your key terms to defining them to figuring out how to observe them and how to know if your observations are any good, there are multiple steps involved in the measurement process. An additional step in the measurement process involves deciding what elements your measures contain. A measure’s elements might be very straightforward and clear, particularly if they are directly observable. Other measures are more complex and might require the researcher to account for different themes or types. These sorts of complexities require paying careful attention to a concept’s level of measurement and its dimensions. We’ll explore these complexities in greater depth at the end of this chapter, but first let’s look more closely at the early steps involved in the measurement process, starting with conceptualization.

The idea of coming up with your own measurement tool might sound pretty intimidating at this point. The good news is that if you find something in the literature that works for you, you can use it with proper attribution. If there are only pieces of it that you like, you can just use those pieces, again with proper attribution. You don’t always have to start from scratch!

Key Takeaways

  • Measurement (i.e. the measurement process) gives us the language to define/describe what we are studying.
  • In research, when we develop measurement tools, we move beyond concepts that may be subjective and abstract to a definition that is clear and concise.
  • Good social work researchers are intentional with the measurement process.
  • Engaging in the measurement process requires us to think critically about what we want to study. This process may be challenging and potentially time-consuming.
  • How easy or difficult do you believe it will be to study these topics?
  • Think about the chapter on literature reviews. Is there a significant body of literature on the topics you are interested in studying?
  • Are there existing measurement tools that may be appropriate to use for the topics you are interested in studying?

11.2 Operationalization and levels of measurement

Learning Objectives

Learners will be able to…

  • Define constructs and operationalization and describe their relationship
  • Be able to start operationalizing variables in your research project
  • Identify the level of measurement for each type of variable
  • Demonstrate knowledge of how each type of variable can be used

Now we have some ideas about what and how social scientists need to measure, so let’s get into the details. In this section, we are going to talk about how to make your variables measurable (operationalization) and how you ultimately characterize your variables in order to analyze them (levels of measurement).

Operationalizing your variables

“Operationalizing” is not a word I’d ever heard before I became a researcher, and actually, my browser’s spell check doesn’t even recognize it. I promise it’s a real thing, though. In the most basic sense, when we operationalize a variable, we break it down into measurable parts. Operationalization is the process of determining how to measure a construct that cannot be directly observed. A construct is a condition that is not directly observable and that represents a state of being, an experience, or an idea. But why construct? We call them constructs because they are built using different ideas and parameters.

As we know from Section 11.1, sometimes the measures that we are interested in are more complex and more abstract than observational terms or indirect observables. Think about some of the things you’ve learned about in other social work classes—for example, ethnocentrism. What is ethnocentrism? Well, from completing an introduction to social work class you might know that it’s a construct that has something to do with the way a person judges another’s culture. But how would you measure it? Here’s another construct: bureaucracy. We know this term has something to do with organizations and how they operate, but measuring such a construct is trickier than measuring, say, a person’s income. In both cases, ethnocentrism and bureaucracy, these theoretical notions represent ideas whose meaning we have come to agree on. Though we may not be able to observe these abstractions directly, we can observe the things that they are made up of.


Now, let’s operationalize bureaucracy and ethnocentrism. The construct of bureaucracy could be measured by counting the number of supervisors that need to approve routine spending by public administrators. The greater the number of administrators that must sign off on routine matters, the greater the degree of bureaucracy. Similarly, we might be able to ask a person the degree to which they trust people from different cultures around the world and then assess the ethnocentrism inherent in their answers. We can measure constructs like bureaucracy and ethnocentrism by defining them in terms of what we can observe.

How we operationalize our constructs (and ultimately measure our variables) can affect the conclusions we can draw from our research. Let’s say you’re reviewing a state program to make it more efficient in connecting people to public services. What might be different if we measured bureaucracy by the number of forms someone has to fill out to get a public service, instead of by the number of people who have to review the forms, as we did above? Maybe you find that there is an unnecessary amount of paperwork compared to other state programs, so you recommend that some of it be eliminated. This is probably a good thing, but will it actually make the program more efficient in the way that eliminating some of the review steps would? I’m not making a judgment about which way is better to measure bureaucracy, but I encourage you to think about the costs and benefits of each way we operationalized the construct, and to extend this thinking to the way you operationalize concepts in your own research project.

Levels of Measurement

Now, we’re going to move into some more concrete characterizations of variables. You now hopefully understand how to operationalize your concepts so that you can turn them into variables. Imagine a process kind of like what you see in Figure 11.1 below.

[Figure 11.1]

Notice that the arrows from the construct point toward the research question, because ultimately, measuring them will help answer your question!

The level of measurement of a variable tells us how the values of the variable relate to each other and what mathematical operations we can perform with the variable. (That second part will become important once we move into quantitative analysis in Chapter 14 and Chapter 15.) Many students find this definition a bit confusing. What does it mean when we say that the level of measurement tells us about mathematical operations? So before we move on, let’s clarify this a bit.

Let’s say you work for a community nonprofit that wants to develop programs relevant to community members’ ages (i.e., tutoring for kids in school, job search and resume help for adults, and home visiting for elderly community members). However, you do not have a good understanding of the ages of the people who visit your community center. Below is part of a questionnaire that you developed to collect this information.

  • How old are you? – Under 18 years old – 18-30 years old – 31-50 years old – 51-60 years old – Over 60 years old
  • How old are you? _____ years

Look at the two items on this questionnaire. They both ask about age, but the first item asks the participant to identify an age range, while the second asks for the actual age in years. These two questions give us data that represent the same information measured at different levels.

It would help your agency if you knew the average age of clients, right? So, which item on the questionnaire will provide this information? Item one’s choices are grouped into categories. Can you compute an average age from these choices? No. Conversely, participants completing item two are asked to provide an actual number, one that you could use to determine an average age. In summary, the two items both ask the participants to report their age. However, the type of data collected from both items is different and must be analyzed differently. 
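To make this difference concrete, here is a brief sketch in Python (with hypothetical responses, since we have no real data) of what we can and cannot compute from each item:

```python
from collections import Counter

# Hypothetical responses to the two questionnaire items above.
# Item 1 yields category labels; item 2 yields actual ages in years.
item1_responses = ["18-30 years old", "31-50 years old", "18-30 years old"]
item2_responses = [24, 47, 29]

# With item 2, an average age is straightforward to compute:
average_age = sum(item2_responses) / len(item2_responses)
print(round(average_age, 1))  # prints 33.3

# With item 1, there is no meaningful average of category labels.
# The best we can do is count how many responses fall in each category:
print(Counter(item1_responses).most_common())
```

Notice that the categorical item still gives us useful information (frequencies per category), just not the same mathematical options as the numeric item.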

We can think about the four levels of measurement as going from less to more specific, or as it’s more commonly called, lower to higher: nominal, ordinal, interval, and ratio. Each of these levels differs from the last and helps the researcher understand something about their data. Think about levels of measurement as a hierarchy.

In order to determine the level of measurement, please examine your data and then ask these four questions (in order).

  • Do I have mutually exclusive categories? If the answer is yes, continue to question #2.
  • Do my item choices have a hierarchy or order? In other words, can you put your item choices in order? If no, stop–you have nominal level data. If the answer is yes, continue to question #3.
  • Can I add, subtract, divide, and multiply my answer choices? If no, stop–you have ordinal level data. If the answer is yes, continue to question #4.
  • Is it possible that the answer to this item can be zero? If the answer is no—you have interval level data. If the answer is yes, you are at the ratio level of measurement.
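The four questions above form a simple decision procedure, which we can sketch in Python (a hypothetical helper written for this chapter, not part of any statistics package) to see how the answers lead to a level of measurement:

```python
def level_of_measurement(mutually_exclusive, ordered, arithmetic_ok, zero_possible):
    """Apply the chapter's four questions, in order.
    Each argument is the yes/no answer to the corresponding question."""
    if not mutually_exclusive:
        return "not classifiable by these four questions"
    if not ordered:
        return "nominal"      # stop at question 2
    if not arithmetic_ok:
        return "ordinal"      # stop at question 3
    return "ratio" if zero_possible else "interval"  # question 4

# Age in years: exclusive categories, ordered, arithmetic works, but a
# zero answer is not possible for a living respondent -> interval,
# matching the chapter's reasoning about age.
print(level_of_measurement(True, True, True, False))  # prints interval
```

The later answers only matter once the earlier questions have been answered yes, which is exactly why the chapter says to ask them in order.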

Nominal level. The nominal level of measurement is the lowest level of measurement. It contains categories that are mutually exclusive, which means that anyone who falls into one category cannot also fall into another category. The data can be represented with words (like yes/no) or numbers that correspond to words or a category (like 1 equaling yes and 0 equaling no). Even when the categories are represented as numbers in our data, the number itself does not have an actual numerical value. It is merely a number we have assigned so that we can use the variable in mathematical operations (which we will start talking about in Chapter 14.1). We say this level of measurement is lowest or least specific because someone who falls into a category we’ve designated could differ from someone else in the same category. Let’s say on our questionnaire above, we also asked folks whether they own a car. They can answer yes or no, and they fall into mutually exclusive categories. In this case, we would know whether they own a car, but not whether owning a car really affects their life significantly. Maybe they have chosen not to own one and are happy to take the bus, bike, or walk. Maybe they do not own one but would like to own one. We cannot get this information from a nominal variable, which is okay when we have meaningful categories. Nominal variables are especially useful when we just need the frequency of a particular characteristic in our sample.

The nominal level of measurement usually includes many demographic characteristics like race, gender, or marital status.

Ordinal level. The ordinal level of measurement is the next level of measurement and contains slightly more specific information than the nominal level. This level has mutually exclusive categories and a hierarchy or order. Let’s go back to the first item on the questionnaire we talked about above.

Do we have mutually exclusive categories? Yes. Someone who selects item A cannot also select item B. So, we know that we have at least nominal level data. However, the next question that we need to ask is “Do my answer choices have order?” or “Can I put my answer choices in order?” The answer is yes, someone who selects A is younger than someone who selects B or C. So, you have at least ordinal level data.

From a data analysis and statistical perspective, ordinal variables get treated exactly like nominal variables because they are both categorical variables , or variables whose values are organized into mutually exclusive groups but whose numerical values cannot be used in mathematical operations. You’ll see this term used again when we get into bivariate analysis in Chapter 15.

Interval level. The interval level of measurement is a higher level of measurement. This level contains all of the characteristics of the previous levels (mutually exclusive categories and order). What distinguishes it from the ordinal level is that data at the interval level can be used in mathematical computations (like an average, for instance).

Let’s think back to our questionnaire about age again and take a look at the second question where we asked for a person’s exact age in years. Age in years is mutually exclusive – someone can’t be 14 and 15 at the same time – and the order of ages is meaningful, since being 18 means something different than being 32. Now, we can also take the answers to this question and do math with them, like addition, subtraction, multiplication, and division.

Ratio level. Ratio level data is the highest level of measurement. It has mutually exclusive categories, order, and you can perform mathematical operations on it. The main difference between the interval and ratio levels is that the ratio level has an absolute zero, meaning that a value of zero is both possible and meaningful. You might be thinking, “Well, age has an absolute zero,” but someone who is not yet born does not have an age, and the minute they’re born, they are not zero years old anymore.

Data at the ratio level of measurement are usually amounts or numbers of things, and cannot be negative (a value below the absolute zero would not make conceptual sense). For example, you could ask someone to report how many A’s they have on their transcript or how many semesters they have earned a 4.0. They could have zero A’s and that would be a valid answer.

From a data analysis and statistical perspective, interval and ratio variables are treated exactly the same because they are both continuous variables , or variables whose values are mutually exclusive and can be used in mathematical operations. Technically, a continuous variable could have an infinite number of values.

What does the level of measurement tell us?

We have spent time learning how to determine our data’s level of measurement. Now what? How could we use this information to help us as we measure concepts and develop measurement tools? First, the types of statistical tests that we are able to use are dependent on our data’s level of measurement. (We will discuss this soon in Chapter 15.) The higher the level of measurement, the more complex statistical tests we are able to conduct. This knowledge may help us decide what kind of data we need to gather, and how. That said, we have to balance this knowledge with the understanding that sometimes, collecting data at a higher level of measurement could negatively impact our studies. For instance, sometimes providing answers in ranges may make prospective participants feel more comfortable responding to sensitive items. Imagine that you were interested in collecting information on topics such as income, number of sexual partners, number of times used illicit drugs, etc. You would have to think about the sensitivity of these items and determine if it would make more sense to collect some data at a lower level of measurement.

Finally, sometimes when analyzing data, researchers find a need to change a variable’s level of measurement. For example, a few years ago, a student was interested in studying the relationship between mental health and life satisfaction. This student collected a variety of data. One item asked about the number of mental health diagnoses, reported as the actual number. When analyzing the data, my student examined the mental health diagnosis variable and noticed that she had two groups: those with no diagnosis or one diagnosis, and those with many diagnoses. Instead of using the ratio-level data (the actual number of mental health diagnoses), she collapsed her cases into these two categories and used the collapsed variable in her analyses. It is important to note that you can move data from a higher level of measurement to a lower level; however, you cannot move data from a lower level to a higher level.
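The kind of collapsing my student did can be sketched in a few lines of Python; the data and the cutoff of one diagnosis are hypothetical, chosen only to mirror the example:

```python
# Ratio-level data: actual number of mental health diagnoses (made up).
diagnoses = [0, 1, 4, 2, 0, 5, 1]

# Collapse into two categories, as in the student example above.
collapsed = ["none or one" if n <= 1 else "many" for n in diagnoses]
print(collapsed)
```

Note that the move only works in one direction: from `collapsed` alone, there is no way to recover the original counts, which is why lower-level data can never be promoted back to a higher level.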

Key Takeaways

  • Operationalization involves figuring out how to measure a construct you cannot directly observe.
  • Nominal variables have mutually exclusive categories with no natural order. They cannot be used for mathematical operations like addition or subtraction. Race and gender are examples.
  • Ordinal variables have mutually exclusive categories and a natural order. They also cannot be used for mathematical operations like addition or subtraction. Age measured in categories (e.g., 18-25 years old) would be an example.
  • Interval variables have mutually exclusive categories, a natural order, and can be used for mathematical operations. Age as a raw number would be an example.
  • Ratio variables have mutually exclusive categories, a natural order, can be used for mathematical operations, and have an absolute zero value. The number of times someone calls a legislator to advocate for a policy would be an example.
  • Nominal and ordinal variables are categorical variables, meaning they have mutually exclusive categories and cannot be used for mathematical operations, even when assigned a number.
  • Interval and ratio variables are continuous variables, meaning their values are mutually exclusive and can be used in mathematical operations.
  • Researchers should consider the costs and benefits of how they operationalize their variables, including what level of measurement they choose, since the level of measurement can affect how you must gather your data.
  • What are the primary constructs being explored in the research?
  • Could you (or the study authors) have chosen another way to operationalize this construct?
  • What are these variables’ levels of measurement?
  • Are they categorical or continuous?

11.3 Scales and indices

Learning Objectives

Learners will be able to…

  • Identify different types of scales and compare them to each other
  • Understand how to begin the process of constructing scales or indices

Quantitative data analysis often requires the construction of two types of composite measures of variables: indices and scales. These measures are frequently used and are important because social scientists often study variables that, unlike age or gender, have no clear and unambiguous single indicators. First, researchers are often interested in the attitudes and orientations of a group of people, which require several items to adequately indicate the underlying variable. Second, researchers often seek to rank cases in ordinal categories from very low to very high (or vice versa), which a single data item cannot reliably do, while an index or scale can.

Although they exhibit differences (which will be discussed later), the two have several factors in common.

  • Both are ordinal measures of variables.
  • Both can order the units of analysis in terms of specific variables.
  • Both are composite measures of variables (measurements based on more than one data item).

In general, an index is a sum of a series of individual yes/no questions that are then combined into a single numeric score. Indices are usually measures of the quantity of some social phenomenon and are constructed at the ratio level of measurement. More sophisticated indices weight individual items according to their importance in the concept being measured (e.g., a multiple-choice test where different questions are worth different numbers of points). Some interval-level indices are not weighted counts but contain other indices or scales within them (e.g., a college admissions score based on GPA, SAT scores, and essays, with a different number of points drawn from each source).

This section discusses two formats used for measurement in research: scales and indices (sometimes called indexes). These two formats are helpful in research because they use multiple indicators to develop a composite (or total) score. Composite scores provide a much greater understanding of concepts than a single item could. Although we won’t delve too deeply into the process of scale development, we will cover some important topics for you to understand how scales and indices can be used.

Types of scales

As a student, you are very familiar with end of the semester course evaluations. These evaluations usually include statements such as, “My instructor created an environment of respect” and ask students to use a scale to indicate how much they agree or disagree with the statements.  These scales, if developed and administered appropriately, provide a wealth of information to instructors that may be used to refine and update courses. If you examine the end of semester evaluations, you will notice that they are organized, use language that is specific to your course, and have very intentional methods of implementation. In essence, these tools are developed to encourage completion.

As you read about these scales, think about the information that you want to gather from participants. What type or types of scales would be the best for you to use and why? Are there existing scales or do you have to create your own?

The Likert scale

Most people have seen some version of a Likert scale. Designed by Rensis Likert (Likert, 1932) [4], a Likert scale is a very popular rating scale for measuring ordinal data in social work research. This scale includes Likert items, which are simply worded statements to which participants can indicate their extent of agreement or disagreement on a five- or seven-point scale ranging from “strongly disagree” to “strongly agree.” You will also see Likert scales used for importance, quality, frequency, and likelihood, among lots of other concepts. For example, we might assess your attitudes about research as you work your way through this textbook by asking you to rate your agreement with statements such as “I feel confident reading research articles.”

Likert scales are excellent ways to collect information. They are popular; thus, your prospective participants may already be familiar with them. However, they do pose some challenges. You have to be very clear about your question prompts. What does strongly agree mean and how is this differentiated from agree ? In order to clarify this for participants, some researchers will place definitions of these items at the beginning of the tool.
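To illustrate how Likert items might be combined into a single score, here is a small Python sketch. The items, the responses, and the practice of reverse-coding negatively worded items are all illustrative assumptions, not a prescribed scoring method:

```python
# Hypothetical responses to three five-point Likert items
# (1 = strongly disagree, 5 = strongly agree).
responses = {
    "I enjoy research methods": 4,
    "I feel confident reading research articles": 5,
    "Research is irrelevant to social work practice": 2,  # negatively worded
}

# Negatively worded items are reverse-coded (1<->5, 2<->4) before summing,
# so that a higher total always means a more favorable attitude.
reverse_coded = {"Research is irrelevant to social work practice"}

score = sum(6 - value if item in reverse_coded else value
            for item, value in responses.items())
print(score)  # 4 + 5 + (6 - 2) = 13
```

Summing (or averaging) item responses like this is a common convention, but whichever rule you adopt should be decided, and reported, before you analyze your data.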

There are a few other, less commonly used, scales discussed next.

Semantic differential scale

This is a composite (multi-item) scale where respondents are asked to indicate their opinions or feelings toward a single statement using different pairs of adjectives framed as polar opposites. For instance, in the Likert scale above, the participant is asked how much they agree or disagree with a statement. In a semantic differential scale, the participant is asked to indicate how they feel about a specific item. This makes the semantic differential scale an excellent technique for measuring people’s attitudes or feelings toward objects, events, or behaviors. The following is an example of a semantic differential scale that was created to assess participants’ feelings about the content taught in their research class.

Feelings About My Research Class

Directions: Please review the pair of words and then select the one that most accurately reflects your feelings about the content of your research class.

Boring……………………………………….Exciting

Waste of Time…………………………..Worthwhile

Dry…………………………………………….Engaging

Irrelevant…………………………………..Relevant

Guttman scale

This composite scale was designed by Louis Guttman and uses a series of items arranged in increasing order of intensity (least intense to most intense) of the concept. This type of scale allows us to understand the intensity of beliefs or feelings. Each item in a Guttman scale has a weight (not indicated on the tool itself) that varies with the intensity of that item, and the weighted combination of the responses is used as an aggregate measure of an observation. Let’s pretend that you are working with a group of parents whose children have identified as part of the transgender community. You want to know how comfortable they feel with their children. You could develop the following items.

Example Guttman Scale Items

  • I would allow my child to use a name that was not gender-specific (e.g., Ryan, Taylor)    Yes/No
  • I would allow my child to wear clothing of the opposite gender (e.g., dresses for boys)   Yes/No
  • I would allow my child to use the pronoun of the opposite sex                                             Yes/No
  • I would allow my child to live as the opposite gender                                                             Yes/No

Notice how the items move from lower intensity to higher intensity. A researcher reviews the yes answers and creates a score for each participant.
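That scoring step might be sketched as follows in Python. The weights are hypothetical, since the chapter says only that each item's weight grows with its intensity:

```python
# One weight per Guttman item, least intense to most intense (assumed values).
weights = [1, 2, 3, 4]

# Hypothetical yes/no answers from one parent to the four items above.
answers = [True, True, True, False]

# Sum the weights of the items answered "yes" to get the aggregate score.
score = sum(w for w, yes in zip(weights, answers) if yes)
print(score)  # 1 + 2 + 3 = 6
```

In a well-behaved Guttman scale, a "yes" to an intense item implies "yes" to all less intense items, so the pattern of answers above is the kind a researcher would expect to see.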

Indices (Indexes)

An index is a composite score derived from aggregating measures of multiple concepts (called components) using a set of rules and formulas. It is different from a scale. Scales also aggregate measures; however, these measures examine different dimensions or the same dimension of a single construct. A well-known example of an index is the consumer price index (CPI), which is computed every month by the Bureau of Labor Statistics of the U.S. Department of Labor. The CPI is a measure of how much consumers have to pay for goods and services (in general) and is divided into eight major categories (food and beverages, housing, apparel, transportation, healthcare, recreation, education and communication, and “other goods and services”), which are further subdivided into more than 200 smaller items. Each month, government employees call all over the country to get the current prices of more than 80,000 items. Using a complicated weighting scheme that takes into account the location and probability of purchase for each item, analysts then combine these prices into an overall index score using a series of formulas and rules.

Another example of an index is the Duncan Socioeconomic Index (SEI). This index is used to quantify a person’s socioeconomic status (SES) and is a combination of three concepts: income, education, and occupation. Income is measured in dollars, education in years or degrees achieved, and occupation is classified into categories or levels by status. These very different measures are combined to create an overall SES index score. However, SES index measurement has generated a lot of controversy and disagreement among researchers.

The process of creating an index is similar to that of a scale. First, conceptualize (define) the index and its constituent components. Though this appears simple, there may be a lot of disagreement on what components (concepts/constructs) should be included or excluded from an index. For instance, in the SES index, isn’t income correlated with education and occupation? And if so, should we include one component only or all three components? Reviewing the literature, using theories, and/or interviewing experts or key stakeholders may help resolve this issue. Second, operationalize and measure each component. For instance, how will you categorize occupations, particularly since some occupations may have changed with time (e.g., there were no Web developers before the Internet)? Third, create a rule or formula for calculating the index score. Again, this process may involve a lot of subjectivity. Lastly, validate the index score using existing or new data.
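The four steps might look like this for a toy SES-style index. The occupation categories, the rescaling choices, and the equal weighting below are illustrative assumptions, not the actual Duncan SEI formula:

```python
# Step 1 (conceptualize): an SES-style index with three components:
# income, education, and occupation.
# Step 2 (operationalize): assumed, simplified occupation categories.
OCCUPATION_LEVELS = {"service": 1, "clerical": 2, "professional": 3}

def ses_index(income_dollars, education_years, occupation):
    # Step 3 (scoring rule): rescale each component to 0..1 and average
    # with equal weights -- a hypothetical rule chosen for illustration.
    income_part = min(income_dollars / 200_000, 1.0)
    education_part = min(education_years / 20, 1.0)
    occupation_part = OCCUPATION_LEVELS[occupation] / 3
    return round((income_part + education_part + occupation_part) / 3, 3)

# Step 4 (validate) would mean checking scores like this one against
# existing data, which is beyond a code sketch.
print(ses_index(60_000, 16, "professional"))  # 0.7
```

Notice how much subjectivity hides in steps 2 and 3: changing the income cap or the weights changes every score, which is exactly why index construction generates the disagreements described above.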

Differences Between Scales and Indices

Though indices and scales yield a single numerical score or value representing a concept of interest, they are different in many ways. First, indices often comprise components that are very different from each other (e.g., income, education, and occupation in the SES index) and are measured in different ways. Conversely, scales typically involve a set of similar items that use the same rating scale (such as a five-point Likert scale about customer satisfaction).

Second, indices often combine objectively measurable values such as prices or income, while scales are designed to assess subjective or judgmental constructs such as attitude, prejudice, or self-esteem. Some argue that the sophistication of the scaling methodology makes scales different from indices, while others suggest that indexing methodology can be equally sophisticated. Nevertheless, indices and scales are both essential tools in social science research.

A note on scales and indices

Scales and indices seem like clean, convenient ways to measure different phenomena in social science, but just like with a lot of research, we have to be mindful of the assumptions and biases underneath. What if a scale or an index was developed using only White women as research participants? Is it going to be useful for other groups? It very well might be, but when using a scale or index on a group for whom it hasn’t been tested, it will be very important to evaluate the validity and reliability of the instrument, which we address in the next section.

It’s important to note that while scales and indices are often made up of nominal or ordinal items, once we combine those items into composite scores, we treat the scores as interval/ratio variables.

  • Scales and indices are common ways to collect information and involve using multiple indicators in measurement.
  • A key difference between a scale and an index is that a scale contains multiple indicators for one concept, whereas an index examines multiple concepts (components).
  • In order to create scales or indices, researchers must have a clear understanding of the indicators for what they are studying.
  • What is the level of measurement for each item on each tool? Take a second and think about why the tool’s creator decided to include these levels of measurement. Identify any levels of measurement you would change and why.
  • If these tools don’t exist for what you are interested in studying, why do you think that is?

11.4 Reliability and validity in measurement

  • Discuss measurement error, its different types, and how to minimize the probability of each
  • Differentiate between reliability and validity and understand how these are related to each other and relevant to understanding the value of a measurement tool
  • Compare and contrast the types of reliability and demonstrate how to evaluate each type
  • Compare and contrast the types of validity and demonstrate how to evaluate each type

The previous chapter provided insight into measuring concepts in social work research. We discussed the importance of identifying concepts and their corresponding indicators as a way to help us operationalize them. In essence, we now understand that when we think about our measurement process, we must be intentional and thoughtful in the choices that we make. Before we talk about how to evaluate our measurement process, let’s discuss why we want to evaluate our process. We evaluate our process so that we minimize our chances of error. But what is measurement error?

Types of Errors

We need to be concerned with two types of errors in measurement: systematic and random errors. Systematic errors are errors that are generally predictable. These are errors that “are due to the process that biases the results.” [5] For instance, my cat stepping on the scale with me each morning is a systematic error in measuring my weight. I could predict that each measurement would be off by 13 pounds. (He’s a bit of a chonk.)

There are multiple categories of systematic errors.

  • Social desirability bias occurs when you ask participants a question and they answer in the way that they feel is most socially desirable. For instance, let's imagine that you want to understand the level of prejudice that participants feel regarding immigrants and decide to conduct face-to-face interviews with participants. Some participants may feel compelled to answer in a way that indicates that they are less prejudiced than they really are.
  • Acquiescence bias occurs when participants answer items in some type of pattern, usually skewed to more favorable responses. For example, imagine that you took a research class and loved it. The professor was great and you learned so much. When asked to complete the end-of-course questionnaire, you immediately mark "strongly agree" to all items without really reading all of the items. After all, you really loved the class. However, instead of reading and reflecting on each item, you "acquiesced" and used your overall impression of the experience to answer all of the items.
  • Leading questions are those questions that are worded in a way so that the participant is "led" to a specific answer. For instance, think about the question, "Have you ever hurt a sweet, innocent child?" Most people, regardless of their true response, may answer "no" simply because the wording of the question leads the participant to believe that "no" is the correct answer.

In order to minimize these types of errors, you should think about what you are studying and examine potential public perceptions of this issue. Next, think about how your questions are worded and how you will administer your tool (we will discuss these in greater detail in the next chapter). This will help you determine if your methods inadvertently increase the probability of these types of errors. 

These errors differ from random errors, which are "due to chance and are not systematic in any way." [6] Sometimes it is difficult to "tease out" random errors. When you take your statistics class, you will learn more about random errors and what to do about them. They're hard to observe until you start diving deeper into statistical analysis, so put a pin in them for now.

Now that we have a good understanding of the two types of errors, let's discuss what we can do to evaluate our measurement process and minimize the chances of these occurring. Remember, quality projects are clear on what is measured, how it is measured, and why it is measured. In addition, quality projects are attentive to the appropriateness of measurement tools and evaluate whether tools are used correctly and consistently. But how do we do that? Good researchers do not simply assume that their measures work. Instead, they collect data to demonstrate that they work. If their research does not demonstrate that a measure works, they stop using it. There are two key factors to consider in deciding whether your measurements are good: reliability and validity.

Reliability

Reliability refers to the consistency of a measure. Psychologists consider three types of reliability: over time (test-retest reliability), across items (internal consistency), and across different researchers (inter-rater reliability).

Test-retest reliability

When researchers measure a construct that they assume to be consistent across time, then the scores they obtain should also be consistent across time. Test-retest reliability is the extent to which this is actually the case. For example, intelligence is generally thought to be consistent across time. A person who is highly intelligent today will be highly intelligent next week. This means that any good measure of intelligence should produce roughly the same scores for this individual next week as it does today. Clearly, a measure that produces highly inconsistent scores over time cannot be a very good measure of a construct that is supposed to be consistent.

Assessing test-retest reliability requires using the measure on a group of people at one time and then using it again on the same group of people at a later time. At neither point has the research participant received any sort of intervention. Once you have these two measurements, you then look at the correlation between the two sets of scores. This is typically done by graphing the data in a scatterplot and computing the correlation coefficient. Figure 11.2 shows the correlation between two sets of scores of several university students on the Rosenberg Self-Esteem Scale, administered two times, a week apart. The correlation coefficient for these data is +.95. In general, a test-retest correlation of +.80 or greater is considered to indicate good reliability.

Figure 11.2. A scatterplot with scores at time 1 on the x-axis and scores at time 2 on the y-axis, both ranging from 0 to 30; the points indicate a strong, positive correlation.

Again, high test-retest correlations make sense when the construct being measured is assumed to be consistent over time, which is the case for intelligence, self-esteem, and the Big Five personality dimensions. But other constructs are not assumed to be stable over time. The very nature of mood, for example, is that it changes. So a measure of mood that produced a low test-retest correlation over a period of a month would not be a cause for concern.
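A minimal sketch of the test-retest computation described above, using made-up scores for seven participants and a hand-rolled Pearson correlation:

```python
# Computing a test-retest correlation from two administrations of the
# same scale. The scores below are invented for illustration.
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

time1 = [22, 25, 18, 30, 27, 15, 20]  # scores at first administration
time2 = [21, 26, 17, 29, 28, 16, 19]  # scores one week later
r = pearson_r(time1, time2)
print(round(r, 2), "good reliability" if r >= 0.80 else "weak reliability")
```

With these invented data the correlation comes out well above the +.80 rule of thumb, which is what we would expect for a construct assumed to be stable over a week.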

Internal consistency

Another kind of reliability is internal consistency , which is the consistency of people’s responses across the items on a multiple-item measure. In general, all the items on such measures are supposed to reflect the same underlying construct, so people’s scores on those items should be correlated with each other. On the Rosenberg Self-Esteem Scale, people who agree that they are a person of worth should tend to agree that they have a number of good qualities. If people’s responses to the different items are not correlated with each other, then it would no longer make sense to claim that they are all measuring the same underlying construct. This is as true for behavioral and physiological measures as for self-report measures. For example, people might make a series of bets in a simulated game of roulette as a measure of their level of risk seeking. This measure would be internally consistent to the extent that individual participants’ bets were consistently high or low across trials.
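Internal consistency is commonly quantified with Cronbach's alpha, a statistic not named in the passage above but standard for multi-item scales. A sketch with invented ratings of four items by five respondents:

```python
# Cronbach's alpha: compares the variance of item totals to the summed
# per-item variances. Higher alpha = more internally consistent items.
def cronbach_alpha(item_scores):
    """item_scores: list of per-item score lists, all the same length."""
    k = len(item_scores)          # number of items
    n = len(item_scores[0])       # number of respondents

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in item_scores) for i in range(n)]
    sum_item_var = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - sum_item_var / variance(totals))

# Invented 1-5 ratings: 4 items (rows) answered by 5 respondents (columns)
items = [
    [4, 5, 3, 5, 4],
    [4, 4, 3, 5, 4],
    [3, 5, 2, 5, 4],
    [4, 5, 3, 4, 4],
]
print(round(cronbach_alpha(items), 2))  # 0.92
```

A conventional rule of thumb treats alpha of about .70 or higher as acceptable, though the appropriate threshold depends on the purpose of the measure.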

Interrater Reliability

Many behavioral measures involve significant judgment on the part of an observer or a rater. Interrater reliability is the extent to which different observers are consistent in their judgments. For example, if you were interested in measuring university students’ social skills, you could make video recordings of them as they interacted with another student whom they are meeting for the first time. Then you could have two or more observers watch the videos and rate each student’s level of social skills. To the extent that each participant does, in fact, have some level of social skills that can be detected by an attentive observer, different observers’ ratings should be highly correlated with each other.
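For categorical ratings, one standard way to quantify interrater agreement is Cohen's kappa, which corrects raw agreement for agreement expected by chance. The two raters' social-skills codes below are invented for illustration:

```python
# Cohen's kappa for two raters assigning categorical codes.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    # observed proportion of exact agreements
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # agreement expected by chance, from each rater's category frequencies
    count_a, count_b = Counter(rater_a), Counter(rater_b)
    expected = sum(count_a[c] * count_b[c] for c in count_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Invented codes for eight videotaped students
a = ["high", "high", "low", "med", "low", "high", "med", "low"]
b = ["high", "med", "low", "med", "low", "high", "med", "low"]
print(round(cohens_kappa(a, b), 2))  # 0.81
```

For continuous ratings, researchers would instead correlate the two raters' scores (or use an intraclass correlation), but the logic is the same: attentive observers should largely agree.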

Validity

Validity, another key element of assessing measurement quality, is the extent to which the scores from a measure represent the variable they are intended to. But how do researchers make this judgment? We have already considered one factor that they take into account—reliability. When a measure has good test-retest reliability and internal consistency, researchers should be more confident that the scores represent what they are supposed to. There has to be more to it, however, because a measure can be extremely reliable but have no validity whatsoever. As an absurd example, imagine someone who believes that people’s index finger length reflects their self-esteem and therefore tries to measure self-esteem by holding a ruler up to people’s index fingers. Although this measure would have extremely good test-retest reliability, it would have absolutely no validity. The fact that one person’s index finger is a centimeter longer than another’s would indicate nothing about which one had higher self-esteem.

Discussions of validity usually divide it into several distinct “types.” But a good way to interpret these types is that they are other kinds of evidence—in addition to reliability—that should be taken into account when judging the validity of a measure.

Face validity

Face validity is the extent to which a measurement method appears “on its face” to measure the construct of interest. Most people would expect a self-esteem questionnaire to include items about whether they see themselves as a person of worth and whether they think they have good qualities. So a questionnaire that included these kinds of items would have good face validity. The finger-length method of measuring self-esteem, on the other hand, seems to have nothing to do with self-esteem and therefore has poor face validity. Although face validity can be assessed quantitatively—for example, by having a large sample of people rate a measure in terms of whether it appears to measure what it is intended to—it is usually assessed informally.

Face validity is at best a very weak kind of evidence that a measurement method is measuring what it is supposed to. One reason is that it is based on people’s intuitions about human behavior, which are frequently wrong. It is also the case that many established measures in psychology work quite well despite lacking face validity. The Minnesota Multiphasic Personality Inventory-2 (MMPI-2) measures many personality characteristics and disorders by having people decide whether each of 567 different statements applies to them—where many of the statements do not have any obvious relationship to the construct that they measure. For example, the items “I enjoy detective or mystery stories” and “The sight of blood doesn’t frighten me or make me sick” both measure the suppression of aggression. In this case, it is not the participants’ literal answers to these questions that are of interest, but rather whether the pattern of the participants’ responses to a series of questions matches those of individuals who tend to suppress their aggression.

Content validity

Content validity is the extent to which a measure “covers” the construct of interest. For example, if a researcher conceptually defines test anxiety as involving both sympathetic nervous system activation (leading to nervous feelings) and negative thoughts, then his measure of test anxiety should include items about both nervous feelings and negative thoughts. Or consider that attitudes are usually defined as involving thoughts, feelings, and actions toward something. By this conceptual definition, a person has a positive attitude toward exercise to the extent that they think positive thoughts about exercising, feels good about exercising, and actually exercises. So to have good content validity, a measure of people’s attitudes toward exercise would have to reflect all three of these aspects. Like face validity, content validity is not usually assessed quantitatively. Instead, it is assessed by carefully checking the measurement method against the conceptual definition of the construct.

Criterion validity

Criterion validity is the extent to which people’s scores on a measure are correlated with other variables (known as criteria) that one would expect them to be correlated with. For example, people’s scores on a new measure of test anxiety should be negatively correlated with their performance on an important school exam. If it were found that people’s scores were in fact negatively correlated with their exam performance, then this would be a piece of evidence that these scores really represent people’s test anxiety. But if it were found that people scored equally well on the exam regardless of their test anxiety scores, then this would cast doubt on the validity of the measure.

A criterion can be any variable that one has reason to think should be correlated with the construct being measured, and there will usually be many of them. For example, one would expect test anxiety scores to be negatively correlated with exam performance and course grades and positively correlated with general anxiety and with blood pressure during an exam. Or imagine that a researcher develops a new measure of physical risk taking. People’s scores on this measure should be correlated with their participation in “extreme” activities such as snowboarding and rock climbing, the number of speeding tickets they have received, and even the number of broken bones they have had over the years. When the criterion is measured at the same time as the construct, criterion validity is referred to as concurrent validity ; however, when the criterion is measured at some point in the future (after the construct has been measured), it is referred to as predictive validity (because scores on the measure have “predicted” a future outcome).

Discriminant validity

Discriminant validity, on the other hand, is the extent to which scores on a measure are not correlated with measures of variables that are conceptually distinct. For example, self-esteem is a general attitude toward the self that is fairly stable over time. It is not the same as mood, which is how good or bad one happens to be feeling right now. So people’s scores on a new measure of self-esteem should not be very highly correlated with their moods. If the new measure of self-esteem were highly correlated with a measure of mood, it could be argued that the new measure is not really measuring self-esteem; it is measuring mood instead.

Increasing the reliability and validity of measures

We have reviewed the types of errors and how to evaluate our measures based on reliability and validity considerations. However, what can we do while selecting or creating our tool so that we minimize the potential of errors? Many of our options were covered in our discussion about reliability and validity. Nevertheless, the following table provides a quick summary of things that you should do when creating or selecting a measurement tool.

  • In measurement, two types of errors can occur: systematic, which we might be able to predict, and random, which are difficult to predict but can sometimes be addressed during statistical analysis.
  • There are two distinct criteria by which researchers evaluate their measures: reliability and validity. Reliability is consistency across time (test-retest reliability), across items (internal consistency), and across researchers (interrater reliability). Validity is the extent to which the scores actually represent the variable they are intended to.
  • Validity is a judgment based on various types of evidence. The relevant evidence includes the measure’s reliability, whether it covers the construct of interest, and whether the scores it produces are correlated with other variables they are expected to be correlated with and not correlated with variables that are conceptually distinct.
  • Once you have used a measure, you should reevaluate its reliability and validity based on your new data. Remember that the assessment of reliability and validity is an ongoing process.
  • Provide a clear statement regarding the reliability and validity of these tools. What strengths did you notice? What were the limitations?
  • Think about your target population . Are there changes that need to be made in order for one of these tools to be appropriate for your population?
  • If you decide to create your own tool, how will you assess its validity and reliability?

11.5 Ethical and social justice considerations for measurement

  • Identify potential cultural, ethical, and social justice issues in measurement.

Just like with other parts of the research process, how we decide to measure what we are researching is influenced by our backgrounds, including our culture, implicit biases, and individual experiences. For me as a middle-class, cisgender white woman, if I don't think carefully about it, the decisions I make about measurement will probably default to ones that make the most sense to me and others like me, and will thus measure characteristics of people like us most accurately. There are major implications for research here because this could affect the validity of my measurements for other populations.

This doesn't mean that standardized scales or indices, for instance, won't work for diverse groups of people. What it means is that researchers must not ignore difference in deciding how to measure a variable in their research. Doing so may serve to push already marginalized people further into the margins of academic research and, consequently, social work intervention. Social work researchers, with our strong orientation toward celebrating difference and working for social justice, are obligated to keep this in mind for ourselves and encourage others to think about it in their research, too.

This involves reflecting on what we are measuring, how we are measuring, and why we are measuring. Do we have biases that impacted how we operationalized our concepts? Did we include stakeholders and gatekeepers in the development of our concepts? This can be a way to gain access to vulnerable populations. What feedback did we receive on our measurement process, and how was it incorporated into our work? These are all questions we should ask as we are thinking about measurement. Further, engaging in this intentionally reflective process will help us maximize the chances that our measurement will be accurate and as free from bias as possible.

The NASW Code of Ethics discusses social work research and the importance of engaging in practices that do not harm participants. [14] This is especially important considering that many of the topics studied by social workers are those that are disproportionately experienced by marginalized and oppressed populations. Some of these populations have had negative experiences with the research process: historically, their stories have been viewed through lenses that reinforced the dominant culture's standpoint. Thus, when thinking about measurement in research projects, we must remember that the way in which concepts or constructs are measured will impact how marginalized or oppressed persons are viewed.  It is important that social work researchers examine current tools to ensure appropriateness for their population(s). Sometimes this may require researchers to use or adapt existing tools. Other times, this may require researchers to develop completely new measures. In summary, the measurement protocols selected should be tailored and attentive to the experiences of the communities to be studied.

But it's not just about reflecting and identifying problems and biases in our measurement, operationalization, and conceptualization - what are we going to  do about it? Consider this as you move through this book and become a more critical consumer of research. Sometimes there isn't something you can do in the immediate sense - the literature base at this moment just is what it is. But how does that inform what you will do later?

  • Social work researchers must be attentive to personal and institutional biases in the measurement process that affect marginalized groups.
  • What are the potential social justice considerations surrounding your methods?
  • What are some strategies you could employ to ensure that you engage in ethical research?
  • Milkie, M. A., & Warner, C. H. (2011). Classroom learning environments and the mental health of first grade children. Journal of Health and Social Behavior, 52, 4–22. ↵
  • Kaplan, A. (1964). The conduct of inquiry: Methodology for behavioral science. San Francisco, CA: Chandler Publishing Company. ↵
  • Earl Babbie offers a more detailed discussion of Kaplan’s work in his text. You can read it in: Babbie, E. (2010). The practice of social research (12th ed.). Belmont, CA: Wadsworth. ↵
  • Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology, 140, 1–55. ↵
  • Engel, R., & Schutt, R. (2013). The practice of research in social work (3rd ed.). Thousand Oaks, CA: SAGE. ↵
  • Engel, R., & Schutt, R. (2013). The practice of research in social work (3rd ed.). Thousand Oaks, CA: SAGE. ↵
  • Sullivan, G. M. (2011). A primer on the validity of assessment instruments. Journal of Graduate Medical Education, 3(2), 119–120. doi:10.4300/JGME-D-11-00075.1 ↵
  • https://www.socialworkers.org/about/ethics/code-of-ethics/code-of-ethics-english ↵

The process by which we describe and ascribe meaning to the key facts, concepts, or other phenomena that we are investigating.

In measurement, conditions that are easy to identify and verify through direct observation.

In measurement, conditions that are subtle and complex that we must use existing knowledge and intuition to define.

The process of determining how to measure a construct that cannot be directly observed.

Conditions that are not directly observable and represent states of being, experiences, and ideas.

“a logical grouping of attributes that can be observed and measured and is expected to vary from person to person in a population” (Gillespie & Wagner, 2018, p. 9)

The level that describes the types of operations that can be conducted with your data. There are four: nominal, ordinal, interval, and ratio.

Level of measurement that follows nominal level. Has mutually exclusive categories and a hierarchy (order).

A higher level of measurement. Denoted by having mutually exclusive categories, a hierarchy (order), and equal spacing between values. This last item means that values may be added, subtracted, divided, and multiplied.

The highest level of measurement. Denoted by mutually exclusive categories, a hierarchy (order), values can be added, subtracted, multiplied, and divided, and the presence of an absolute zero.

variables whose values are organized into mutually exclusive groups but whose numerical values cannot be used in mathematical operations.

variables whose values are mutually exclusive and can be used in mathematical operations

The difference between the value that we get when we measure something and the true value.

Errors that are generally predictable.

Errors that lack any perceptible pattern.

The ability of a measurement tool to measure a phenomenon the same way, time after time. Note: Reliability does not imply validity.

The extent to which scores obtained on a scale or other measure are consistent across time

The extent to which different observers are consistent in their assessment or rating of a particular characteristic or item.

The extent to which the scores from a measure represent the variable they are intended to.

The extent to which a measurement method appears “on its face” to measure the construct of interest

The extent to which a measure “covers” the construct of interest, i.e., its comprehensiveness in measuring the construct.

The extent to which people’s scores on a measure are correlated with other variables (known as criteria) that one would expect them to be correlated with.

A type of criterion validity that examines how well a tool provides the same scores as an already existing tool.

A type of criterion validity that examines how well your tool predicts a future criterion.

the group of people whose needs your study addresses

individuals or groups who have an interest in the outcome of the study you conduct

the people or organizations who control access to the population you want to study

Graduate research methods in social work Copyright © 2020 by Matthew DeCarlo, Cory Cummings, Kate Agnelli is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


10. Quantitative sampling

Chapter outline.

  • The sampling process (25 minute read)
  • Sampling approaches for quantitative research (15 minute read)
  • Sample quality (24 minute read)

Content warning: examples contain references to addiction to technology, domestic violence and batterer intervention, cancer, illegal drug use, LGBTQ+ discrimination, binge drinking, intimate partner violence among college students, child abuse, neocolonialism and Western hegemony.

10.1 The sampling process

Learning objectives.

Learners will be able to…

  • Decide where to get your data and who you might need to talk to
  • Evaluate whether it is feasible for you to collect first-hand data from your target population
  • Describe the process of sampling
  • Apply population, sampling frame, and other sampling terminology to sampling people your project’s target population

One of the things that surprised me most as a research methods professor is how much my students struggle with understanding sampling. It is surprising because people engage in sampling all the time. How do you learn whether you like a particular food, like BBQ ribs? You sample them from different restaurants! Obviously, social scientists put a bit more effort and thought into the process than that, but the underlying logic is the same. By sampling a small group of BBQ ribs from different restaurants and liking most of them, you can conclude that when you encounter BBQ ribs again, you will probably like them. You don’t need to eat all of the BBQ ribs in the world to come to that conclusion, just a small sample. [1] Part of the difficulty my students face is learning sampling terminology, which is the focus of this section.


Who is your study about and who should you talk to?

At this point in the research process, you know what your research question is. Our goal in this chapter is to help you understand how to find the people (or documents) you need to study in order to find the answer to your research question. It may be helpful at this point to distinguish between two concepts. Your unit of analysis is the entity that you wish to be able to say something about at the end of your study (probably what you’d consider to be the main focus of your study). Your unit of observation is the entity (or entities) that you actually observe, measure, or collect in the course of trying to learn something about your unit of analysis.

It is often the case that your unit of analysis and unit of observation are the same. For example, we may want to say something about social work students (unit of analysis), so we ask social work students at our university to complete a survey for our study (unit of observation). In this case, we are observing individuals, i.e., students, so we can make conclusions about individuals.

On the other hand, our unit of analysis and observation can differ. We could sample social work students to draw conclusions about organizations or universities. Perhaps we are comparing students at historically Black colleges and universities (HBCUs) and primarily white institutions (PWIs). Even though our sample was made up of individual students from various colleges (our unit of observation), our unit of analysis was the university as an organization. Conclusions we made from individual-level data were used to understand larger organizations.

Similarly, we could adjust our sampling approach to target specific student cohorts. Perhaps we wanted to understand the experiences of Black social work students in PWIs. We could choose either an individual unit of observation by selecting students, or a group unit of observation by studying the National Association of Black Social Workers.

Sometimes the units of analysis and observation differ due to pragmatic reasons. If we wanted to study whether being a social work student impacted family relationships, we may choose to study family members of students in social work programs who could give us information about how they behaved in the home. In this case, we would be observing family members to draw conclusions about individual students.

In sum, there are many potential units of analysis that a social worker might examine, but some of the most common include individuals, groups, and organizations. Table 10.1 details examples identifying the units of observation and analysis in a hypothetical study of student addiction to electronic gadgets.

First-hand vs. second-hand knowledge

Your unit of analysis will be determined by your research question. Specifically, it should relate to your target population. Your unit of observation, on the other hand, is determined largely by the method of data collection you use to answer that research question. Let’s consider a common issue in social work research: understanding the effectiveness of different social work interventions. Who has first-hand knowledge and who has second-hand knowledge? Well, practitioners would have first-hand knowledge about implementing the intervention. For example, they might discuss with you the unique language they use to help clients understand the intervention. Clients, on the other hand, have first-hand knowledge about the impact of those interventions on their lives. If you want to know if an intervention is effective, you need to ask people who have received it!

Unfortunately, student projects run into pragmatic limitations with sampling from client groups. Clients are often diagnosed with severe mental health issues or have other ongoing issues that render them a vulnerable population at greater risk of harm. Asking a person who was recently experiencing suicidal ideation about that experience may interfere with ongoing treatment. Client records are also confidential and cannot be shared with researchers unless clients give explicit permission. Asking one’s own clients to participate in the study creates a dual relationship with the client, as both clinician and researcher, and dual relationships have conflicting responsibilities and boundaries.

Obviously, studies are done with social work clients all the time. But for classroom projects, students are often required to gather second-hand information from a less vulnerable population. Students may instead choose to study clinicians and how they perceive the effectiveness of different interventions. While clinicians can provide an informed perspective, they have less knowledge about personally receiving the intervention. In general, researchers prefer to sample the people who have first-hand knowledge about their topic, though feasibility often forces them to analyze second-hand information instead.

Population: Who do you want to study?

In social scientific research, a population is the cluster of people you are most interested in. It is often the “who” that you want to be able to say something about at the end of your study. While populations in research may be rather large, such as “the American people,” they are typically more specific than that. For example, a large study for which the population of interest is the American people will likely specify which American people, such as adults over the age of 18, citizens, or legal permanent residents. Based on your work in Chapter 2, you should have a target population identified in your working question. That might be something like “people with developmental disabilities” or “students in a social work program.”

It is almost impossible for a researcher to gather data from their entire population of interest. This might sound surprising or disappointing until you think about the kinds of research questions that social workers typically ask. For example, let’s say we wish to answer the following question: “How does gender impact attendance in a batterer intervention program?” Would you expect to be able to collect data from all people in batterer intervention programs across all nations from all historical time periods? Unless you plan to make answering this research question your entire life’s work (and then some), I’m guessing your answer is a resounding no. So, what to do? Does not having the time or resources to gather data from every single person of interest mean having to give up your research interest?

Let’s think about who could possibly be in your study.

  • What is your population, the people you want to make conclusions about?
  • Do your unit of analysis and unit of observation differ or are they the same?
  • Can you ethically and practically get first-hand information from the people most knowledgeable about the topic, or will you rely on second-hand information from less vulnerable populations?

Setting: Where will you go to get your data?

While you can’t gather data from everyone, you can find some people from your target population to study. The first rule of sampling is: go where your participants are. You will need to figure out where you will go to get your data. For many student researchers, that means their agency, their peers, their family and friends, or whoever comes across their social media posts or emails asking people to participate in the study.

Each setting (agency, social media) limits your reach to the small segment of your target population that has the opportunity to be a part of your study. This intermediate point between the overall population and the sample of people who actually participate in the researcher’s study is called a sampling frame. A sampling frame is a list of people from which you will draw your sample.

But where do you find a sampling frame? Answering this question is the first step in conducting human subjects research. Social work researchers must think about locations or groups in which your target population gathers or interacts. For example, a study on quality of care in nursing homes may choose a local nursing home because it’s easy to access. The sampling frame could be all of the residents of the nursing home. You would select your participants for your study from the list of residents. Note that this is a real list. That is, an administrator at the nursing home would give you a list with every resident’s name or ID number from which you would select your participants. If you decided to include more nursing homes in your study, then your sampling frame could be all the residents at all the nursing homes who agreed to participate in your study.
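To make the frame concrete, here is a minimal Python sketch of the multi-site scenario just described. The resident ID numbers are invented for illustration: each participating nursing home contributes its roster, the combined list becomes the sampling frame, and participants are then drawn from that list.

```python
import random

# Hypothetical resident rosters from three nursing homes that agreed
# to participate; the ID numbers are invented for illustration.
home_a = ["A01", "A02", "A03"]
home_b = ["B01", "B02"]
home_c = ["C01", "C02", "C03", "C04"]

# The sampling frame is the combined list of every resident at every
# participating home.
sampling_frame = home_a + home_b + home_c
print(len(sampling_frame))  # 9

# Draw 4 residents from the frame without replacement; seeding the
# generator makes the draw reproducible.
rng = random.Random(1)
sample = rng.sample(sampling_frame, k=4)
print(len(sample))  # 4
```

Note that the frame here is a real, enumerable list, which is exactly what makes it possible to say what chance each resident had of being selected.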

Let’s consider some more examples. Unlike nursing home patients, cancer survivors do not live in an enclosed location and may no longer receive treatment at a hospital or clinic. For social work researchers to reach participants, they may consider partnering with a support group that serves this population. Perhaps there is a support group at a local church that survivors may attend. Without a set list of people, your sampling frame would simply be the people who showed up to the support group on the nights you were there. Similarly, if you posted an advertisement in an online peer-support group for people with cancer, your sampling frame is the people in that group.

More challenging still is recruiting people who are homeless, those with very low income, or those who belong to stigmatized groups. For example, a research study by Johnson and Johnson (2014) [2] attempted to learn usage patterns of “bath salts,” or synthetic stimulants that are marketed as “legal highs.” Users of “bath salts” don’t often gather for meetings, and reaching out to individual treatment centers is unlikely to produce enough participants for a study, as the use of bath salts is rare. To reach participants, these researchers ingeniously used online discussion boards in which users of these drugs communicate. Their sampling frame included everyone who participated in the online discussion boards during the time they collected data. Another example might include using a flyer to let people know about your study, in which case your sampling frame would be anyone who walks past your flyer wherever you hang it—usually in a strategic location where you know your population will be.

In conclusion, sampling frames can be a real list of people, like the list of faculty and their ID numbers in a university department, which allows you to clearly identify who is in your study and what chance they have of being selected. However, not all sampling frames allow you to be so specific. It is also important to remember that accessing your sampling frame must be practical and ethical, as we discussed in Chapter 2 and Chapter 6. For studies that present risks to participants, approval from gatekeepers and the university’s institutional review board (IRB) is needed.

Criteria: What characteristics must your participants have/not have?

Your sampling frame is not just everyone in the setting you identified. For example, if you were studying MSW students who are first-generation college students, you might select your university as the setting, but not everyone in your program is a first-generation student. You need to be more specific about which characteristics or attributes individuals either must have or cannot have before they participate in the study. These are known as inclusion and exclusion criteria, respectively.

Inclusion criteria are the characteristics a person must possess in order to be included in your sample. If you were conducting a survey on LGBTQ+ discrimination at your agency, you might want to sample only clients who identify as LGBTQ+. In that case, your inclusion criteria for your sample would be that individuals have to identify as LGBTQ+.

Conversely, exclusion criteria are characteristics that disqualify a person from being included in your sample. In the previous example, you could think of cisgender identity and heterosexuality as your exclusion criteria because no person who identifies as heterosexual or cisgender would be included in your sample. Exclusion criteria are often the mirror image of inclusion criteria. However, there may be other criteria by which we want to exclude people from our sample. For example, we may exclude clients who were recently discharged or those who have just begun to receive services.
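As a sketch of how inclusion and exclusion criteria operate together, the following Python snippet filters a hypothetical client roster. The field names, values, and the three-month threshold are invented for illustration, not drawn from any real agency system.

```python
# Hypothetical client roster; fields and values are illustrative only.
clients = [
    {"id": 1, "identifies_lgbtq": True,  "months_in_service": 8},
    {"id": 2, "identifies_lgbtq": False, "months_in_service": 12},
    {"id": 3, "identifies_lgbtq": True,  "months_in_service": 1},
    {"id": 4, "identifies_lgbtq": True,  "months_in_service": 24},
]

def eligible(client):
    # Inclusion criterion: client identifies as LGBTQ+.
    if not client["identifies_lgbtq"]:
        return False
    # Exclusion criterion: clients who just began receiving services
    # (here, fewer than 3 months) are screened out.
    if client["months_in_service"] < 3:
        return False
    return True

# The filtered roster becomes the sampling frame for the study.
frame = [c["id"] for c in clients if eligible(c)]
print(frame)  # [1, 4]
```

Applying the criteria before sampling, as above, keeps the frame limited to people who can actually speak to the research question.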

Recruitment: How will you ask people to participate in your study?

Once you have a location and list of people from which to select, all that is left is to reach out to your participants. Recruitment refers to the process by which the researcher informs potential participants about the study and asks them to participate in the research project. Recruitment comes in many different forms. If you have ever received a phone call asking for you to participate in a survey, someone has attempted to recruit you for their study. Perhaps you’ve seen print advertisements on buses, in student centers, or in a newspaper. I’ve received many emails that were passed around my school asking for participants, usually for a graduate student project. As we learn more about specific types of sampling, make sure your recruitment strategy makes sense with your sampling approach. For example, if you put up a flyer in the student health office to recruit student athletes for your study, you may not be targeting your recruitment efforts to settings where your target population is likely to see your recruitment materials.

Recruiting human participants

Recruitment is the first time you will contact potential study participants. Before you start this process, it is important to make sure you have approval from your university’s institutional review board as well as any gatekeepers at the locations in which you plan to conduct your study. As we discussed in Section 10.1, the first rule of sampling is to go where your participants are. If you are studying domestic violence, reach out to local shelters, advocates, or service agencies. Gatekeepers will be necessary to gain access to your participants. For example, a gatekeeper can forward your recruitment email across their employee email list. Review our discussion of gatekeepers in Chapter 2 before proceeding with contacting potential participants as part of recruitment.

Recruitment can take many forms. You may show up at a staff meeting to ask for volunteers. You may send a company-wide email. Each step of this process should be vetted by the IRB as well as other stakeholders and gatekeepers. You will also need to set reasonable expectations for how many reminders you will send to the person before moving on. Generally, it is a good idea to give people a little while to respond, though reminders are often accompanied by an increase in participation. Pragmatically, it is a good idea for you to think through each step of the recruitment process and how much time it will take to complete it.

For example, as a graduate student, I conducted a study of state-level disabilities administrators in which I was recruiting a sample of very busy people and had no financial incentives to offer them for participating in my study. It helped for my research team to bring on board a well-known agency as a research partner, allowing them to review and offer suggestions on our survey and interview questions. This collaborative process took time and had to be completed before sampling could start. Once sampling commenced, I pulled contact names from my collaborator’s database and public websites, and set a weekly schedule of email and phone contacts. I would contact the director once via email. Ten days later, I would follow up via email and by leaving a voicemail with their administrative support staff. Ten days after that, I would reach out to state administrators in a different office via email and then again via phone, if needed. The process took months to complete and required a complex Excel tracking document.

Recruitment will also expose your participants to the informed consent information you prepared. For students going through the IRB, there are templates you will have to follow in order to get your study approved. For students whose projects unfold under the supervision of their department, rather than the IRB, you should check with your professor on what the expectations are for getting participant consent. In the aforementioned study, I used our IRB’s template to create a consent form but did not include a signature line. The IRB allowed me to collect my data without a signature, as there was little risk of harm from the study. It was imperative, though, that participants review the consent information before completing the survey and interview. Only when the participant is totally clear on the purpose, risks and benefits, confidentiality protections, and other information detailed in Chapter 6 can you ethically move forward with including them in your sample.

Sampling available documents

As with sampling humans, sampling documents centers on the question: which documents are most relevant to your research question, in that they provide first-hand knowledge? Common documents analyzed in student research projects include client files, popular media like film and music lyrics, and policies from service agencies. In a case record review, the student would create inclusion and exclusion criteria based on their research question. Once a suitable sampling frame of potential documents exists, the researcher can use probability or non-probability sampling to select which client files are ultimately analyzed.

Sampling documents must also come with consent and buy-in from stakeholders and gatekeepers. Assuming you have approval to conduct your study and access to the documents you need, the process of recruitment is much easier than in studies sampling humans. There is no informed consent process with documents, though research with confidential health or education records must be done in accordance with privacy laws such as the Health Insurance Portability and Accountability Act and the Family Educational Rights and Privacy Act. Barring any technical or policy obstacles, the gathering of documents should be easier and less time-consuming than sampling humans.

Sample: Who actually participates in your study?

Once you find a sampling frame from which you can recruit your participants and decide which characteristics you will include and exclude, you will recruit people using a specific sampling approach, which we will cover in Section 10.2. At the end, you’re left with the group of people you successfully recruited from your sampling frame to participate in your study: your sample. If you are a participant in a research project—answering survey questions, participating in interviews, etc.—you are part of the sample in that research project.

Visualizing sampling terms

Sampling terms can be a bit daunting at first. However, with some practice, they will become second nature. Let’s walk through an example from a research project of mine. I collected data for a research project related to how much it costs to become a licensed clinical social worker (LCSW) in each state. Becoming an LCSW is necessary to work in private clinical practice and is used by supervisors in human service organizations to sign off on clinical charts from less credentialed employees, and to provide clinical supervision. If you are interested in providing clinical services as a social worker, you should become familiar with the licensing laws in your state.

Moving from population to setting, you should consider access and consent of stakeholders and the representativeness of the setting. In moving from setting to sampling frame, keep in mind your inclusion and exclusion criteria. In moving finally to sample, keep in mind your sampling approach and recruitment strategy.

Using Figure 10.1 as a guide, my population is clearly clinical social workers, as these are the people about whom I want to draw conclusions. The next step inward would be a sampling frame. Unfortunately, there is no list of every licensed clinical social worker in the United States. I could write to each state’s social work licensing board and ask for a list of names and addresses, perhaps even using a Freedom of Information Act request if they were unwilling to share the information. That option sounds time-consuming and has a low likelihood of success. Instead, I tried to figure out a convenient setting in which social workers are likely to congregate. I considered setting up a booth at a National Association of Social Workers (NASW) conference and asking people to participate in my survey. Ultimately, this would prove too costly, and the people who gather at an NASW conference may not be representative of the general population of clinical social workers. I finally discovered the NASW membership email list, which is available to advertisers, including researchers advertising for research projects. While the NASW list does not contain every clinical social worker, it reaches over one hundred thousand social workers regularly through its monthly e-newsletter, a large proportion of social workers in practice, so the setting was likely to draw a representative sample. To gain access to this setting from gatekeepers, I had to provide paperwork showing my study had undergone IRB review and submit my measures for approval by the mailing list administrator.

Once I gained access from gatekeepers, my setting became the members of the NASW membership list. I decided to recruit 5,000 participants because I knew that people sometimes do not read or respond to email advertisements, and I figured maybe 20% would respond, which would give me around 1,000 responses. Figuring out my sample size was a challenge, because I had to balance the costs associated with using the NASW newsletter. As you can see on their pricing page, it would cost money to learn personal information about my potential participants, which I would need to check later in order to determine if my sample was representative of the overall population of social workers. For example, I could see if my sample was comparable in race, age, gender, or state of residence to the broader population of social workers by comparing my sample with information about all social workers published by NASW. I presented my options to my external funder as:

  • I could send an email advertisement to a lot of people (5,000), but I would know very little about them and they would get only one advertisement.
  • I could send multiple advertisements to fewer people (1,000) reminding them to participate, but I would also know more about them by purchasing access to personal information.
  • I could send multiple advertisements to fewer people (2,500), but not purchase access to personal information to minimize costs.

In your project, there is no expectation that you purchase access to anything; if you plan on using email advertisements, consider places that are free to access, like employee or student listservs. At the same time, you will need to consider what you can and cannot know about the people who will potentially be in your study. In my case, I could collect any personal information needed to check representativeness in the study itself, so we decided to go with option #1. When I sent my email recruiting participants for the study, I specified that I only wanted to hear from social workers who were either currently receiving or had recently received clinical supervision for licensure—my inclusion criteria. This was important because many of the people on the NASW membership list may not be licensed or license-seeking social workers. So, my sampling frame was the email addresses on the NASW mailing list who fit the inclusion criteria for the study, which I figured would be at least a few thousand people. Unfortunately, only 150 licensed or license-seeking clinical social workers responded to my recruitment email and completed the survey. You will learn in Section 10.3 why this did not make for a very good sample.
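The planning arithmetic behind this example can be laid out in a few lines of Python; the figures come directly from the study described above.

```python
# Planned recruitment: 5,000 email advertisements with an expected
# response rate of about 20%.
invited = 5_000
expected_rate = 0.20
expected_responses = int(invited * expected_rate)
print(expected_responses)  # 1000

# What actually happened: only 150 eligible social workers responded.
actual_responses = 150
actual_rate = actual_responses / invited
print(f"{actual_rate:.0%}")  # 3%
```

Budgeting for a response rate well below your hopes, as this comparison shows, is a prudent habit when planning recruitment.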

From this example, you can see that sampling is a process. The process flows sequentially from figuring out your target population, to thinking about where to find people from your target population, to figuring out how much information you know about potential participants, and finally to recruiting people from that list to be a part of your sample. Through the sampling process, you must consider where people in your target population are likely to be and how best to get their attention for your study. Sampling can be an easy process, like calling every 100th name from the phone book, or challenging, like standing every day for a few weeks in an area in which people who are homeless gather for shelter. In either case, your goal is to recruit enough people who will participate in your study so you can learn about your population.

What about sampling non-humans?

Many student projects do not involve recruiting and sampling human subjects. Instead, many research projects will sample objects like client charts, movies, or books. The same terms apply, but the process is a bit easier because there are no humans involved. If a research project involves analyzing client files, it is unlikely you will look at every client file that your agency has. You will need to figure out which client files are important to your research question. Perhaps you want to sample clients who have a diagnosis of reactive attachment disorder. You would have to create a list of all clients at your agency (setting) who have reactive attachment disorder (your inclusion criteria), then use your sampling approach (which we will discuss in the next section) to select which client files you will actually analyze for your study (your sample). Recruitment is a lot easier because, well, there’s no one to convince but your gatekeepers, the managers of your agency. However, researchers who publish chart reviews must obtain IRB permission before doing so.

Key Takeaways

  • The first rule of sampling is to go where your participants are. Think about virtual or in-person settings in which your target population gathers. Remember that you may have to engage gatekeepers and stakeholders in accessing many settings, and that you will need to assess the pragmatic challenges and ethical risks and benefits of your study.
  • Consider whether you can sample documents like agency files to answer your research question. Documents are much easier to “recruit” than people!
  • Researchers must consider which characteristics are necessary for people to have (inclusion criteria) or not have (exclusion criteria), as well as how to recruit participants into the sample.
  • Social workers can sample individuals, groups, or organizations.
  • Sometimes the unit of analysis and the unit of observation in the study differ. In student projects, this is often true as target populations may be too vulnerable to expose to research whose potential harms may outweigh the benefits.
  • One’s recruitment method has to match one’s sampling approach, as will be explained in the next chapter.

Once you have identified who may be a part of your study, the next step is to think about where those people gather. Are there in-person locations in your community, or places on the internet, that are easily accessible? List at least one potential setting for your project. Describe for each potential setting:

  • Based on what you know right now, how representative of your population are potential participants in the setting?
  • How much information can you reasonably know about potential participants before you recruit them?
  • Are there gatekeepers and what kinds of concerns might they have?
  • Are there any stakeholders that may be beneficial to bring on board as part of your research team for the project?
  • What interests might stakeholders and gatekeepers bring to the project and would they align with your vision for the project?
  • What ethical issues might you encounter if you sampled people in this setting?

Even though you may not be 100% sure about your setting yet, let’s think about the next steps.

  • For the settings you’ve identified, how might you recruit participants?
  • Identify your inclusion criteria and exclusion criteria, and assess whether you have enough information on whether people in each setting will meet them.

10.2 Sampling approaches for quantitative research

  • Determine whether you will use probability or non-probability sampling, given the strengths and limitations of each specific sampling approach
  • Distinguish between approaches to probability sampling and detail the reasons to use each approach

Sampling in quantitative research projects is done because it is not feasible to study the whole population; researchers hope to take what they learn about a small group of people (the sample) and apply it to a larger population. There are many ways to approach this process, and they can be grouped into two categories—probability sampling and non-probability sampling. Sampling approaches are inextricably linked with recruitment, and researchers should ensure that their proposal’s recruitment strategy matches the sampling approach.

Probability sampling approaches use a random process, usually a computer program, to select participants from the sampling frame so that everyone has a known, nonzero chance of being included. It’s important to note that random means the researcher used a process that is truly random. In a project sampling college students, standing outside of the building in which your social work department is housed and surveying everyone who walks past is not random. Because of the location, you are likely to recruit a disproportionately large number of social work students and fewer from other disciplines. Depending on the time of day, you may recruit more traditional undergraduate students, who take classes during the day, or more graduate students, who take classes in the evenings.

In this example, you are actually using non-probability sampling. Another way to say this is that you are using the most common sampling approach for student projects, availability sampling. Also called convenience sampling, this approach simply recruits people who are convenient or easily available to the researcher. If you have ever been asked by a friend to participate in their research study for their class or seen an advertisement for a study on a bulletin board or social media, you were being recruited using an availability sampling approach.

There are a number of benefits to the availability sampling approach. First and foremost, it is less costly and time-consuming for the researcher. As long as the person you are attempting to recruit has knowledge of the topic you are studying, the information you get from the sample you recruit will be relevant to your topic (although your sample may not necessarily be representative of a larger population). Availability samples can also be helpful when random sampling isn’t practical. If you are planning to survey students in an LGBTQ+ support group on campus, but attendance varies from meeting to meeting, you may show up at a meeting and ask anyone present to participate in your study. A support group with varied membership makes it impossible to have a real list—or sampling frame—from which to randomly select individuals. Availability sampling would help you reach that population.

Availability sampling is appropriate for student and smaller-scale projects, but it comes with significant limitations. The purpose of sampling in quantitative research is to generalize from a small sample to a larger population. Because availability sampling does not use a random process to select participants, the researcher cannot be sure their sample is representative of the population they hope to generalize to. Instead, the recruitment processes may have been structured by other factors that may bias the sample to be different in some way than the overall population.

So, for instance, if we asked social work students about their level of satisfaction with the services at the student health center, and we sampled in the evenings, we would most likely get a biased perspective of the issue. Students taking only night classes are much more likely to commute to school, spend less time on campus, and use fewer campus services. Our results would not represent what all social work students feel about the topic. We might get the impression that no social work student had ever visited the health center, when that is not actually true at all. Sampling bias will be discussed in detail in Section 10.3.

Approaches to probability sampling

A better strategy might be getting a list of all email addresses of social work students and randomly selecting email addresses of students to whom you can send your survey. This would be an example of simple random sampling. It’s important to note that you need a real list of people in your sampling frame from which to select your email addresses. For projects where the people who could potentially participate are not known to the researcher, probability sampling is not possible. It is likely that administrators at your school’s registrar would be reluctant to share the list of students’ names and email addresses. Always remember to consider the feasibility and ethical implications of the sampling approach you choose.

Usually, simple random sampling is accomplished by assigning each person, or element, in your sampling frame a number and selecting your participants using a random number generator. You would follow an identical process if you were sampling records or documents as your elements, rather than people. True randomness is difficult to achieve, and it takes complex computational calculations to do so. Although you may think you can select things at random, human-generated randomness is actually quite predictable, as it falls into patterns called heuristics. To truly randomly select elements, researchers must rely on computer-generated help. Many free websites have good pseudo-random number generators. A good example is the website Random.org, which contains a random number generator that can also randomize lists of participants. Sometimes, researchers use a table of numbers that have been generated randomly. There are several possible sources for obtaining a random number table; some statistics and research methods textbooks provide such tables in an appendix.
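As a minimal sketch, this selection step can be done with Python's built-in pseudo-random number generator; the email addresses below are hypothetical stand-ins for a real sampling frame:

```python
import random

# Hypothetical sampling frame: 100 student email addresses
sampling_frame = [f"student{i}@university.edu" for i in range(1, 101)]

random.seed(42)  # seeded only so the example is reproducible

# Draw a simple random sample of 10 elements without replacement;
# every element has an equal chance of selection
sample = random.sample(sampling_frame, k=10)

print(len(sample))       # 10
print(len(set(sample)))  # 10, since no element is selected twice
```

In practice, you would load the real list of names or addresses from a file provided by your registrar or agency rather than generating it in code.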

Though simple, this approach to sampling can be tedious since the researcher must assign a number to each person in a sampling frame. Systematic sampling techniques are somewhat less tedious but offer the benefits of a random sample. As with simple random samples, you must possess a list of everyone in your sampling frame. Once you’ve done that, to draw a systematic sample you’d simply select every k th element on your list. But what is k , and where on the list of population elements does one begin the selection process?

Diagram showing four people being selected using systematic sampling, starting at number 2 and every third person after that (5, 8, 11)

k is your selection interval, or the distance between the elements you select for inclusion in your study. To begin the selection process, you'll need to figure out how many elements you wish to include in your sample. Let's say you want to survey 25 social work students and there are 100 social work students on your campus. In this case, your selection interval, or k, is 4. To get your selection interval, simply divide the total number of population elements by your desired sample size. Systematic sampling starts by randomly selecting a number between 1 and k to start from, and then recruiting every kth person. In our example, we may start at number 3 and then select the 7th, 11th, 15th (and so forth) person on our list of email addresses. In Figure 10.2, you can see the researcher starts at number 2 and then selects every third person for inclusion in the sample.
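The interval calculation and selection just described can be sketched in a few lines of Python; the 100-student list is hypothetical:

```python
import random

random.seed(1)  # seeded only for reproducibility

population = list(range(1, 101))  # 100 hypothetical students, numbered 1-100
desired_sample_size = 25

# Selection interval k: population size divided by desired sample size
k = len(population) // desired_sample_size  # 100 // 25 = 4

# Randomly choose a starting point between 1 and k,
# then take every k-th element from the list
start = random.randint(1, k)
sample = population[start - 1::k]

print(k)            # 4
print(len(sample))  # 25
```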

There is one clear instance in which systematic sampling should not be employed. If your sampling frame has any pattern to it, you could inadvertently introduce bias into your sample by using a systematic sampling strategy. (Bias will be discussed in more depth in section 10.3.) This is sometimes referred to as the problem of periodicity. Periodicity refers to the tendency for a pattern to occur at regular intervals.

To stray a bit from our example, imagine we were sampling client charts based on the date they entered a health center and recording the reason for their visit. We may expect more admissions for issues related to alcohol consumption on the weekend than we would during the week. The periodicity of alcohol intoxication may bias our sample towards either overrepresenting or underrepresenting this issue, depending on our sampling interval and whether we collected data on a weekday or weekend.

Advanced probability sampling techniques

Returning again to our idea of sampling student email addresses, one of the challenges in our study will be the different types of students. If we are interested in all social work students, it may be helpful to divide our sampling frame, or list of students, into three lists—one for traditional, full-time undergraduate students, another for part-time undergraduate students, and one more for full-time graduate students—and then randomly select from these lists. This is particularly important if we wanted to make sure our sample had the same proportion of each type of student compared with the general population.

This approach is called stratified random sampling . In stratified random sampling, a researcher will divide the study population into relevant subgroups or strata and then draw a sample from each subgroup, or stratum. Strata is the plural of stratum, so it refers to all of the groups while stratum refers to each group. This can be used to make sure your sample has the same proportion of people from each stratum. If, for example, our sample had many more graduate students than undergraduate students, we may draw incorrect conclusions that do not represent what all social work students experience.

Selecting a proportion of black, grey, and white students from a population into a sample

Generally, the goal of stratified random sampling is to recruit a sample that makes sure all elements of the population are included sufficiently that conclusions can be drawn about them. Usually, the purpose is to create a sample that is identical to the overall population along whatever strata you've identified. In our sample, it would be graduate and undergraduate students. Stratified random sampling is also useful when a subgroup of interest makes up a relatively small proportion of the overall sample. For example, if your social work program contained relatively few Asian students but you wanted to make sure you recruited enough Asian students to conduct statistical analysis, you could use race to divide people into subgroups or strata and then disproportionately sample from the Asian students to make sure enough of them were in your sample to draw meaningful conclusions. Statistical tests may have a minimum number of cases they require in each group before they can produce valid results.
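A proportionate stratified draw like the one described here might look as follows in Python; the strata names and counts are invented for illustration:

```python
import random

random.seed(0)  # seeded only for reproducibility

# Hypothetical sampling frame, already divided into strata
strata = {
    "full-time undergraduate": [f"ft_ug_{i}" for i in range(60)],
    "part-time undergraduate": [f"pt_ug_{i}" for i in range(25)],
    "full-time graduate": [f"ft_gr_{i}" for i in range(15)],
}

total = sum(len(members) for members in strata.values())  # 100 students
sample_size = 20

# Each stratum contributes in proportion to its share of the population
sample = []
for name, members in strata.items():
    n_stratum = round(sample_size * len(members) / total)
    sample.extend(random.sample(members, n_stratum))

print(len(sample))  # 20 (12 full-time UG + 5 part-time UG + 3 graduate)
```

For disproportionate stratified sampling, you would instead set each stratum's count by hand, for instance oversampling a small stratum so it contains enough cases for analysis.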

Up to this point in our discussion of probability samples, we’ve assumed that researchers will be able to access a list of population elements in order to create a sampling frame. This, as you might imagine, is not always the case. Let’s say, for example, that you wish to conduct a study of health center usage across students at each social work program in your state. Just imagine trying to create a list of every single social work student in the state. Even if you could find a way to generate such a list, attempting to do so might not be the most practical use of your time or resources. When this is the case, researchers turn to cluster sampling. Cluster sampling  occurs when a researcher begins by sampling groups (or clusters) of population elements and then selects elements from within those groups.

For a population of six clusters of two students each, two clusters were selected for the sample

Let’s work through how we might use cluster sampling. While creating a list of all social work students in your state would be next to impossible, you could easily create a list of all social work programs in your state. Then, you could draw a random sample of social work programs (your cluster) and then draw another random sample of elements (in this case, social work students) from each of the programs you randomly selected from the list of all programs.

Cluster sampling often works in stages. In this example, we sampled in two stages—(1) social work programs and (2) social work students at each program we selected. However, we could add another stage if it made sense to do so. We could randomly select (1) states in the United States, (2) social work programs in those states, and (3) individual social work students. As you might have guessed, sampling in multiple stages does introduce a greater possibility of error, since each stage is subject to its own sampling problems. But cluster sampling is nevertheless a highly efficient method.

Jessica Holt and Wayne Gillespie (2008) [3] used cluster sampling in their study of students’ experiences with violence in intimate relationships. Specifically, the researchers randomly selected 14 classes on their campus and then drew a random sub-sample of students from those classes. But you probably know from your experience with college classes that not all classes are the same size. So, if Holt and Gillespie had simply randomly selected 14 classes and then selected the same number of students from each class to complete their survey, then students in the smaller of those classes would have had a greater chance of being selected for the study than students in the larger classes. Keep in mind, with random sampling the goal is to make sure that each element has the same chance of being selected. When clusters are of different sizes, as in the example of sampling college classes, researchers often use a method called probability proportionate to size  (PPS). This means that they take into account that their clusters are of different sizes. They do this by giving clusters different chances of being selected based on their size so that each element within those clusters winds up having an equal chance of being selected.
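A two-stage draw using probability proportionate to size can be sketched as follows. The class names and sizes are hypothetical, and note that `random.choices()` selects clusters with replacement, whereas real PPS designs typically use without-replacement schemes:

```python
import random

random.seed(3)  # seeded only for reproducibility

# Hypothetical clusters (college classes) of unequal size
class_sizes = {"class_A": 10, "class_B": 40, "class_C": 25, "class_D": 75}

# Stage 1: select clusters with probability proportionate to size,
# weighting each class by its enrollment
chosen = random.choices(
    population=list(class_sizes),
    weights=list(class_sizes.values()),
    k=2,
)

# Stage 2: draw the same fixed number of students from each selected
# cluster, so every student ends up with a similar overall chance
# of being selected
students_per_class = 5
sample = {c: random.sample(range(class_sizes[c]), students_per_class)
          for c in set(chosen)}
```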

To summarize, probability samples allow a researcher to make conclusions about larger groups. Probability samples require a sampling frame from which elements, usually human beings, can be selected at random from a list. The use of random selection reduces the error and bias present in non-probability samples, which we will discuss in greater detail in section 10.3, though some error will always remain. In relying on a random number table or generator, researchers can more accurately state that their sample represents the population from which it was drawn. This strength is common to all probability sampling approaches summarized in Table 10.2.

In determining which probability sampling approach makes the most sense for your project, it helps to know more about your population. A simple random sample and a systematic sample are relatively similar to carry out. Both require a list of all elements in your sampling frame. Systematic sampling is slightly easier in that it does not require you to use a random number generator; instead, it uses a sampling interval that is easy to calculate by hand.

However, the relative simplicity of both approaches is counterbalanced by their lack of sensitivity to characteristics of your population. Stratified samples can better account for periodicity by creating strata that reduce or eliminate its effects. Stratified sampling also ensures that smaller subgroups are included in your sample, thereby making your sample more representative of the overall population. While these benefits are important, creating strata for this purpose requires having information about your population before beginning the sampling process. In our social work student example, we would need to know which students are full-time or part-time, graduate or undergraduate, in order to make sure our sample contained the same proportions. Would you know if someone was a graduate student or part-time student, just based on their email address? If the true population parameters are unknown, stratified sampling becomes significantly more challenging.

Common to each of the previous probability sampling approaches is the necessity of using a real list of all elements in your sampling frame. Cluster sampling is different. It allows a researcher to perform probability sampling in cases for which a list of elements is not available or feasible to create. Cluster sampling is also useful for making claims about a larger population (in our previous example, all social work students within a state). However, because sampling occurs at multiple stages in the process (in our previous example, at the university and student levels), sampling error increases. For many researchers, the benefits of cluster sampling outweigh this weakness.

Matching recruitment and sampling approach

Recruitment must match the sampling approach you choose in section 10.2. For many students, that will mean using recruitment techniques most relevant to availability sampling. These may include public postings such as flyers, mass emails, or social media posts. However, these methods would not make sense for a study using probability sampling. Probability sampling requires a list of names or other identifying information so you can use a random process to generate a list of people to recruit into your sample. Posting a flyer or social media message means you don’t know who is looking at the flyer, and thus, your sample could not be randomly drawn. Probability sampling often requires knowing how to contact specific participants. For example, you may do as I did, and contact potential participants via phone and email. Even then, it’s important to note that not everyone you contact will enter your study. We will discuss more about evaluating the quality of your sample in section 10.3.

  • Probability sampling approaches are more accurate when the researcher wants to generalize from a smaller sample to a larger population. However, non-probability sampling approaches are often more feasible. You will have to weigh advantages and disadvantages of each when designing your project.
  • There are many kinds of probability sampling approaches, though each requires that you know some information about the people who could potentially participate in your study.
  • Probability sampling also requires that you assign people within the sampling frame a number and select using a truly random process.

Building on the step-by-step sampling plan from the exercises in section 10.1:

  • Identify one of the sampling approaches listed in this chapter that might be appropriate to answering your question and list the strengths and limitations of it.
  • Describe how you will recruit your participants and how your plan makes sense with the sampling approach you identified.

Examine one of the empirical articles from your literature review.

  • Identify what sampling approach they used and how they carried it out from start to finish.

10.3 Sample quality

  • Assess whether your sampling plan is likely to produce a sample that is representative of the population you want to draw conclusions about
  • Identify the considerations that go into producing a representative sample and determining sample size
  • Distinguish between error and bias in a sample and explain the factors that lead to each

Okay, so you’ve chosen where you’re going to get your data (setting), what characteristics you want and don’t want in your sample (inclusion/exclusion criteria), and how you will select and recruit participants (sampling approach and recruitment). That means you are done, right? (I mean, there’s an entire section here, so probably not.) Even if you make good choices and do everything the way you’re supposed to, you can still draw a poor sample. If you are investigating a research question using quantitative methods, the best choice is some kind of probability sampling, but aside from that, how do you know a good sample from a bad sample? As an example, we’ll use a bad sample I collected as part of a research project that didn’t go so well. Hopefully, your sampling will go much better than mine did, but we can always learn from what didn’t work.


Representativeness

A representative sample is, “a sample that looks like the population from which it was selected in all respects that are potentially relevant to the study” (Engel & Schutt, 2011). [4] For my study on how much it costs to get an LCSW in each state, I did not get a sample that looked like the overall population to which I wanted to generalize. My sample had a few states with more than ten responses and most states with no responses. That does not look like the true distribution of social workers across the country. I could compare the number of social workers in each state, based on data from the National Association of Social Workers, or the number of recent clinical MSW graduates from the Council on Social Work Education. More than that, I could see whether my sample matched the overall population of clinical social workers in gender, race, age, or any other important characteristics. Sadly, it wasn’t even close. So, I wasn’t able to use the data to publish a report.

Critique the representativeness of the sample you are planning to gather.

  • Will the sample of people (or documents) look like the population to which you want to generalize?
  • Specifically, what characteristics are important in determining whether a sample is representative of the population? How do these characteristics relate to your research question?

Consider returning to this question once you have completed the sampling process and evaluate whether the sample in your study was similar to what you designed in this section.

Many of my students erroneously assume that using a probability sampling technique will guarantee a representative sample. This is not true. Engel and Schutt (2011) note that probability sampling increases the chance of representativeness; however, it does not guarantee that the sample will be representative. If a representative sample is important to your study, it would be best to use a sampling approach that allows you to control the proportion of specific characteristics in your sample. For instance, stratified random sampling allows you to control the distribution of specific variables of interest within your sample. However, that requires knowing information about your participants before you hand them surveys or expose them to an experiment.

In my study, if I wanted to make sure I had a certain number of people from each state (state being the strata), making the proportion of social workers from each state in my sample similar to the overall population, I would need to know which email addresses were from which states. That was not information I had. So, instead I conducted simple random sampling and randomly selected 5,000 of 100,000 email addresses on the NASW list. There was less of a guarantee of representativeness, but whatever variation existed between my sample and the population would be due to random chance. This would not be true for an availability or convenience sample. While these sampling approaches are common for student projects, they come with significant limitations in that variation between the sample and population is due to factors other than chance. We will discuss these non-random differences later in the chapter when we talk about bias. For now, just remember that the representativeness of a sample is helped by using random sampling, though it is not a guarantee.

  • Before you start sampling, do you know enough about your sampling frame to use stratified random sampling, which increases the potential of getting a representative sample?
  • Do you have enough information about your sampling frame to use another probability sampling approach like simple random sampling or cluster sampling?
  • If little information is available on which to select people, are you using availability sampling? Remember that availability sampling is okay if it is the only approach that is feasible for the researcher, but it comes with significant limitations when drawing conclusions about a larger population.

Assessing representativeness should start prior to data collection. I mentioned that I drew my sample from the NASW email list, which NASW (like most organizations) sells to companies or researchers who need to advertise to social workers. How representative of my population is my sampling frame? Well, the first question to ask is what proportion of my sampling frame would actually meet my exclusion and inclusion criteria. Since my study focused specifically on clinical social workers, my sampling frame likely included social workers who were not clinical social workers, like macro social workers or social work managers. However, I knew, based on the information from NASW marketers, that many people who received my recruitment email would be clinical social workers or those working towards licensure, so I was satisfied with that. Anyone who didn't meet my inclusion criteria and opened the survey would be greeted with clear instructions that the survey did not apply to them.

At the same time, I should have assessed whether the demographics of the NASW email list and the demographics of clinical social workers more broadly were similar. Unfortunately, this was not information I could gather. I had to trust that this was likely going to be the best sample I could draw and the most representative of all social workers.

  • Before you start, what do you know about your setting and potential participants?
  • Are there likely to be enough people in the setting of your study who meet the inclusion criteria?

You want to avoid throwing out half of the surveys you get back because the respondents aren’t a part of your target population. This is a common error I see in student proposals.

Many of you will sample people from your agency, like clients or staff. Let’s say you work for a children’s mental health agency, and you wanted to study children who have experienced abuse. Walking through the steps here might proceed like this:

  • Think about or ask your coworkers how many of the clients at your agency have experienced this issue. If it's common, then clients at your agency would probably make a good sampling frame for your study. If not, then you may want to adjust your research question or consider a different agency to sample. You could also redefine your target population to better match your sample. For example, while your agency's clients may not be representative of all children who have survived abuse, they may be more representative of abuse survivors in your state, region, or county. In this way, you can draw conclusions about a smaller population, rather than everyone in the world who is a victim of child abuse.
  • Think about those characteristics that are important for individuals in your sample to have or not have. Obviously, the variables in your research question are important, but so are the variables related to it. Take a look at the empirical literature on your topic. Are there different demographic characteristics or covariates that are relevant to your topic?
  • All of this assumes that you can actually access information about your sampling frame prior to collecting data. This is a challenge in the real world. Even if you ask around your office about client characteristics, there is no way for you to know for sure until you complete your study whether it was the most representative sampling frame you could find. When in doubt, go with whatever is feasible and address any shortcomings in sampling within the limitations section of your research report. A good project is a done project.
  • While using a probability sampling approach helps with sample representativeness, it does not guarantee it. Due to random variation, samples may differ across important characteristics. If you can feasibly use a probability sampling approach, particularly stratified random sampling, it will help make your sample more representative of the population.
  • Even if you choose a sampling frame that is representative of your population and use a probability sampling approach, there is no guarantee that the sample you are able to collect will be representative. Sometimes, people don’t respond to your recruitment efforts. Other times, random chance will mean people differ on important characteristics from your target population. ¯\_(ツ)_/¯

In agency-based samples, the small size of the pool of potential participants makes it very likely that your sample will not be representative of a broader target population. Sometimes, researchers look for specific outcomes connected with sub-populations for that reason. Not all agency-based research is concerned with representativeness, and it is still worthwhile to pursue research that is relevant to only one location as its purpose is often to improve social work practice.


Sample size

Let’s assume you have found a representative sampling frame, and that you are using one of the probability sampling approaches we reviewed in section 10.2. That should help you recruit a representative sample, but how many people do you need to recruit into your sample? As with many questions about sample quality, students should keep feasibility in mind. The easiest answer I’ve given as a professor is, “as many as you can, without hurting yourself.” While your quantitative research question would likely benefit from hundreds or thousands of respondents, that is not likely to be feasible for a student who is working full-time, interning part-time, and in school full-time. Don’t feel like your study has to be perfect, but make sure you note any limitations in your final report.

To the extent possible, you should gather as many people as you can in your sample who meet your criteria. But why? Let's think about an example you probably know well. Have you ever watched the TV show Family Feud? Each question the host reads off starts with, "we asked 100 people…" Believe it or not, Family Feud uses simple random sampling to conduct its surveys of the American public. Part of the challenge on Family Feud is that people can usually guess the most popular answers, but the answers that only a few people chose are much harder. They seem bizarre and are more difficult to guess. That's because 100 people is not a lot of people to sample. Essentially, Family Feud is trying to measure what the answer is for all 327 million people in the United States by asking 100 of them. As a result, the weird and idiosyncratic responses of a few people are likely to remain on the board as answers, and contestants have to guess answers that fewer and fewer people in the sample provided. In a larger sample, the oddball answers would likely fade away, and only the most popular answers would be represented on the game show's board.

In my ill-fated study of clinical social workers, I received 87 complete responses. That is far below the hundred thousand licensed or license-eligible clinical social workers. Moreover, since I wanted to conduct state-by-state estimates, there was no way I had enough people in each state to do so. For student projects, samples of 50-100 participants are more than enough to write a paper (or start a game show), but for projects in the real world with real-world consequences, it is important to recruit the appropriate number of participants. For example, if your agency conducts a community scan of people in your service area on what services they need, the results will inform the direction of your agency, which grants they apply for, who they hire, and its mission for the next several years. Being overly confident in your sample could result in wasted resources for clients.

So what is the right number? Theoretically, we could gradually increase the sample size so that the sample approaches the total size of the population (Bhattacherjee, 2012). [5] But as we've talked about, it is not feasible to sample everyone. How do we find that middle ground? To answer this, we need to understand the sampling distribution. Imagine that in your agency's survey of the community, you took three different probability samples from your community, and for each sample, you measured whether people experienced domestic violence. If each random sample were truly representative of the population, then the rate of domestic violence from the three random samples would be about the same and equal to the true value in the population.

But this is extremely unlikely, given that each random sample will likely constitute a different subset of the population, and hence, the rate of domestic violence you measure may be slightly different from sample to sample. Think about the sample you collect as existing on a distribution of infinite possible samples. Most samples you collect will be close to the population mean, but some will not be. The degree to which they differ is associated with how much the subject you are sampling about varies in the population. In our example, samples will vary based on how much the incidence of domestic violence varies from person to person. The difference between the domestic violence rate we find in our sample and the rate for the overall population is called the sampling error.
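A small simulation can illustrate the idea; the population size and 20% rate below are invented for the example. Drawing many random samples shows that the measured rates cluster around the true value, and that larger samples spread out less:

```python
import random
import statistics

random.seed(7)  # seeded only for reproducibility

# Hypothetical population of 10,000 people; 1 = experienced domestic
# violence, 0 = did not; the true population rate is 20%
population = [1] * 2000 + [0] * 8000

def sample_rate(n):
    """Rate measured in one random sample of size n."""
    return sum(random.sample(population, n)) / n

# Draw 500 samples at each of two sample sizes
small_samples = [sample_rate(50) for _ in range(500)]
large_samples = [sample_rate(500) for _ in range(500)]

# Both sets of estimates center near the true 20% rate, but the
# larger samples vary less, i.e., they have smaller sampling error
print(statistics.stdev(small_samples) > statistics.stdev(large_samples))  # True
```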

An easy way to minimize sampling error is to increase the number of participants in your sample, but in actuality, minimizing sampling error relies on a number of factors outside of the scope of a basic student project. You can see this online textbook for more examples on sampling distributions or take an advanced methods course at your university, particularly if you are considering becoming a social work researcher. Increasing the number of people in your sample also increases your study's power, or the odds you will detect a significant relationship between variables when one is truly present in your sample. If you intend to publish the findings of your student project, it is worth using a power analysis to determine the appropriate sample size for your project. You can follow this excellent video series from the Center for Open Science on how to conduct power analyses using free statistics software. A faculty member who teaches research or statistics could check your work. You may be surprised to find out that there is a point at which adding more people to your sample will not make your study any better.
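A full power analysis is best left to dedicated software like the free tools mentioned above, but one related back-of-the-envelope calculation, the sample size needed to estimate a population proportion within a chosen margin of error, illustrates how these numbers arise. The sketch below applies the standard formula n = z^2 * p(1 - p) / e^2:

```python
import math

def sample_size_for_proportion(margin_of_error, z=1.96, p=0.5):
    """Sample size needed to estimate a population proportion within a
    given margin of error at 95% confidence (z = 1.96); p = 0.5 is the
    most conservative assumption about the true proportion."""
    return math.ceil(z**2 * p * (1 - p) / margin_of_error**2)

print(sample_size_for_proportion(0.05))  # 385 respondents for +/- 5 points
print(sample_size_for_proportion(0.10))  # 97 respondents for +/- 10 points
```

Notice that shrinking the margin of error from 10 to 5 percentage points roughly quadruples the required sample, which is one reason adding more participants yields diminishing returns.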

Honestly, I did not do a power analysis for my study. Instead, I asked for 5,000 surveys with the hope that 1,000 would come back. Given that only 87 came back, a power analysis conducted after the survey was complete would likely reveal that I did not have enough statistical power to answer my research questions. For your projects, try to get as many respondents as you feasibly can, but don't worry too much about not reaching the optimal number of people to maximize the power of your study unless your goal is to publish something that is generalizable to a large population.

A final consideration is which statistical test you plan to use to analyze your data. We have not covered statistics yet, though we will provide a brief introduction to basic statistics in this textbook. For now, remember that some statistical tests have a minimum number of people that must be present in the sample in order to conduct the analysis. You will complete a data analysis plan before you begin your project and start sampling, so you can always increase the number of participants you plan to recruit based on what you learn in the next few chapters.

  • How many people can you feasibly sample in the time you have to complete your project?


One of the interesting things about surveying professionals is that sometimes, they email you about what they perceive to be a problem with your study. I got an email from a well-meaning participant in my LCSW study saying that my results were going to be biased! She pointed out that respondents who had been in practice a long time, before clinical supervision was required, would not have paid anything for supervision. This would lead me to draw conclusions that supervision was cheap, when in fact, it was expensive. My email back to her explained that she hit on one of my hypotheses, that social workers in practice for a longer period of time faced fewer costs to becoming licensed. Her email reinforced that I needed to account for the impact of length of practice on the costs of licensure I found across the sample. She was right to be on the lookout for bias in the sample.

One of the key questions you can ask is if there is something about your process that makes it more likely you will select a certain type of person for your sample, making it less representative of the overall population. In my project, it's worth thinking more about who is more likely to respond to an email advertisement for a research study. I know that my work email and personal email filter out advertisements, so it's unlikely I would even see the recruitment for my own study (probably something I should have thought about before using grant funds to sample the NASW email list). Perhaps an older demographic that does not screen advertisements as closely, or those whose NASW account was linked to a personal email with fewer junk filters, would be more likely to respond. To the extent I made conclusions about clinical social workers of all ages based on a sample that was biased towards older social workers, my results would be biased. This is called selection bias, or the degree to which people in my sample differ from the overall population.

Another potential source of bias here is nonresponse bias. Because people do not often respond to email advertisements (no matter how well-written they are), my sample is likely to be representative of people with characteristics that make them more likely to respond. They may have more time on their hands to take surveys and respond to their junk mail. To the extent that the sample is composed of social workers with a lot of time on their hands (who are those people?), my sample will be biased and not representative of the overall population.

It’s important to note that both bias and error describe how samples differ from the overall population. Error describes random variation between samples, due to chance. Using a random process to recruit participants into a sample means you will have random variation between the sample and the population. Bias creates variation between the sample and population in a specific direction, such as towards those who have time to check their junk mail. Bias may be introduced by the sampling method used or by conscious or unconscious bias on the part of the researcher (Rubin & Babbie, 2017). [6] A researcher might select people who “look like good research participants,” in the process transferring their unconscious biases to their sample. They might exclude people from the sampling frame who “would not do well with the intervention.” Careful researchers can avoid these pitfalls, but unconscious and structural biases can be challenging to root out.
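
The error-versus-bias distinction can be made concrete with a small simulation. This is an illustrative sketch only: the population, the cost formula, and the recruitment scenario are invented to mirror the LCSW example above, not drawn from real data.

```python
import random

random.seed(42)

# Hypothetical population: 10,000 social workers; those in practice longer
# paid less for supervision (mirroring the LCSW example in the text).
population = [{"years": y, "cost": max(0, 5000 - 150 * y + random.gauss(0, 300))}
              for y in (random.randint(1, 30) for _ in range(10_000))]

true_mean = sum(p["cost"] for p in population) / len(population)

# Error: a simple random sample varies from the true mean only by chance.
random_sample = random.sample(population, 200)
random_mean = sum(p["cost"] for p in random_sample) / len(random_sample)

# Bias: a recruitment process that systematically over-selects veteran
# workers (say, those who read their junk mail) pushes the estimate one way.
biased_frame = [p for p in population if p["years"] >= 15]
biased_sample = random.sample(biased_frame, 200)
biased_mean = sum(p["cost"] for p in biased_sample) / len(biased_sample)

print(f"population mean: {true_mean:.0f}")
print(f"random sample:   {random_mean:.0f}  (off by chance alone)")
print(f"biased sample:   {biased_mean:.0f}  (systematically too low)")
```

Run this repeatedly with different seeds and the random-sample estimate scatters around the population mean, while the biased estimate lands consistently below it: error averages out, bias does not.
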

  • Identify potential sources of bias in your sample and brainstorm ways you can minimize them, if possible.

Critical considerations

Think back to your undergraduate degree. Did you ever participate in a research project as part of an introductory psychology or sociology course? Social science researchers on college campuses have a luxury that researchers elsewhere may not share—they have access to a whole bunch of (presumably) willing and able human guinea pigs. But that luxury comes at a cost—sample representativeness. One study of top academic journals in psychology found that over two-thirds (68%) of participants in studies published by those journals came from samples drawn in the United States (Arnett, 2008). [7] Further, the study found that two-thirds of the work derived from US samples published in the Journal of Personality and Social Psychology was based on samples made up entirely of American undergraduate students taking psychology courses.

These findings certainly raise the question: What do we actually learn from social science studies and about whom do we learn it? That is exactly the concern raised by Joseph Henrich and colleagues (Henrich, Heine, & Norenzayan, 2010), [8] authors of the article “The Weirdest People in the World?” In their piece, Henrich and colleagues point out that behavioral scientists very commonly make sweeping claims about human nature based on samples drawn only from WEIRD (Western, Educated, Industrialized, Rich, and Democratic) societies, and often based on even narrower samples, as is the case with many studies relying on samples drawn from college classrooms. As it turns out, robust findings about the nature of human behavior when it comes to fairness, cooperation, visual perception, trust, and other behaviors are based on studies that excluded participants from outside the United States and sometimes excluded anyone outside the college classroom (Begley, 2010). [9] This certainly raises questions about what we really know about human behavior as opposed to US resident or US undergraduate behavior. Of course, not all research findings are based on samples of WEIRD folks like college students. But even then, it would behoove us to pay attention to the population on which studies are based and the claims being made about those to whom the studies apply.

Another thing to keep in mind is that just because a sample may be representative in all respects that a researcher thinks are relevant, there may be relevant aspects that didn’t occur to the researcher when she was drawing her sample. You might not think that a person’s phone would have much to do with their voting preferences, for example. But had pollsters making predictions about the results of the 2008 presidential election not been careful to include both cell phone-only and landline households in their surveys, it is possible that their predictions would have underestimated Barack Obama’s lead over John McCain because Obama was much more popular among cell phone-only users than McCain (Keeter, Dimock, & Christian, 2008). [10] This is another example of bias.


Putting it all together

So how do we know how good our sample is or how good the samples gathered by other researchers are? While there might not be any magic or always-true rules we can apply, there are a couple of things we can keep in mind as we read the claims researchers make about their findings.

First, remember that sample quality is determined only by the sample actually obtained, not by the sampling method itself. A researcher may set out to administer a survey to a representative sample by correctly employing a random sampling approach with impeccable recruitment materials. But if only a handful of the people sampled actually respond to the survey, the researcher cannot claim that the resulting sample is representative simply because the sampling plan was sound.

Another thing to keep in mind, as demonstrated by the preceding discussion, is that researchers may be drawn to talking about implications of their findings as though they apply to some group other than the population actually sampled. Whether the sampling frame does not match the population or the sample and population differ on important criteria, the resulting sampling error can lead to bad science.

We’ve talked previously about the perils of generalizing social science findings from graduate students in the United States and other Western countries to all cultures in the world, imposing a Western view as the right and correct view of the social world. As consumers of theory and research, it is our responsibility to be attentive to this sort of (likely unintentional) bait and switch. And as researchers, it is our responsibility to make sure that we draw conclusions only from samples that are representative. A larger sample size and probability sampling can improve the representativeness and generalizability of a study’s findings to larger populations, though neither is a guarantee.

Finally, keep in mind that a sample allowing for comparisons of theoretically important concepts or variables is certainly better than one that does not allow for such comparisons. In a study based on a nonrepresentative sample, for example, we can learn about the strength of our social theories by comparing relevant aspects of social processes. We talked about this as theory-testing in Chapter 8 .

At their core, questions about sample quality should address who has been sampled, how they were sampled, and for what purpose they were sampled. Being able to answer those questions will help you better understand, and more responsibly interpret, research results. For your study, keep the following questions in mind.

  • Are your sample size and your sampling approach appropriate for your research question?
  • How much do you know about your sampling frame ahead of time? How will that impact the feasibility of different sampling approaches?
  • What gatekeepers and stakeholders are necessary to engage in order to access your sampling frame?
  • Are there any ethical issues that may make it difficult to sample those who have first-hand knowledge about your topic?
  • Does your sampling frame look like your population along important characteristics? Once you get your data, ask the same question of the sample you successfully recruit.
  • What about your population might make it more difficult or easier to sample?
  • Are there steps in your sampling procedure that may bias your sample to render it not representative of the population?
  • If you want to skip sampling altogether, are there sources of secondary data you can use? Or might you be able to answer your questions by sampling documents or media, rather than people?
  • The sampling plan you implement should have a reasonable likelihood of producing a representative sample. Student projects are given more leeway with nonrepresentative samples, and this limitation should be discussed in the student’s research report.
  • Researchers should conduct a power analysis to determine sample size, though quantitative student projects should endeavor to recruit as many participants as possible. Sample size impacts representativeness of the sample, its power, and which statistical tests can be conducted.
  • The sample you collect is one of an infinite number of potential samples that could have been drawn. To the extent the data in your sample varies from the data in the entire population, it includes some error or bias. Error is the result of random variations. Bias is systematic error that pushes the data in a given direction.
  • Even if you do everything right, there is no guarantee that you will draw a good sample. Flawed samples are okay to use as examples in the classroom, but the results of your research would have limited generalizability beyond your specific participants.
  • Historically, samples were drawn from dominant groups and generalized to all people. This shortcoming is a limitation of some social science literature and should be considered a colonialist scientific practice.
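
The power-analysis point in the takeaways above can be sketched in pure Python using the common normal-approximation formula for comparing two group means. The effect size, alpha, and power values below are conventional defaults for illustration, not recommendations for any particular study.

```python
# Approximate sample size per group for a two-sample comparison of means,
# using the normal approximation: n = 2 * ((z_alpha + z_power) / d) ** 2.
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """Participants needed per group to detect a standardized effect size d."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for a two-sided test
    z_power = z.inv_cdf(power)          # value needed to reach desired power
    return 2 * ((z_alpha + z_power) / effect_size) ** 2

# A "medium" effect (Cohen's d = 0.5) needs roughly 63 participants per group.
print(round(n_per_group(0.5)))
```

Dedicated tools (e.g., G*Power or statsmodels) add small-sample corrections, but the approximation shows the key trade-off: halving the expected effect size quadruples the required sample.
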
  • I clearly need a snack.
  • Johnson, P. S., & Johnson, M. W. (2014). Investigation of “bath salts” use patterns within an online sample of users in the United States. Journal of Psychoactive Drugs, 46(5), 369–378.
  • Holt, J. L., & Gillespie, W. (2008). Intergenerational transmission of violence, threatened egoism, and reciprocity: A test of multiple psychosocial factors affecting intimate partner violence. American Journal of Criminal Justice, 33, 252–266.
  • Engel, R. J., & Schutt, R. K. (2011). The practice of research in social work (2nd ed.). Thousand Oaks, CA: SAGE.
  • Bhattacherjee, A. (2012). Social science research: Principles, methods, and practices. Retrieved from https://scholarcommons.usf.edu/cgi/viewcontent.cgi?article=1002&context=oa_textbooks
  • Rubin, A., & Babbie, E. R. (2017). Research methods for social work (9th ed.). Boston, MA: Cengage.
  • Arnett, J. J. (2008). The neglected 95%: Why American psychology needs to become less American. American Psychologist, 63, 602–614.
  • Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33, 61–135.
  • Newsweek magazine published an interesting story about Henrich and his colleagues’ study: Begley, S. (2010). What’s really human? The trouble with student guinea pigs. Retrieved from http://www.newsweek.com/2010/07/23/what-s-really-human.html
  • Keeter, S., Dimock, M., & Christian, L. (2008). Calling cell phones in ’08 pre-election polls. The Pew Research Center for the People and the Press. Retrieved from http://people-press.org/files/legacy-pdf/cell-phone-commentary.pdf

entity that a researcher wants to say something about at the end of her study (individual, group, or organization)

the entities that a researcher actually observes, measures, or collects in the course of trying to learn something about her unit of analysis (individuals, groups, or organizations)

the larger group of people you want to be able to make conclusions about based on the conclusions you draw from the people in your sample

the list of people from which a researcher will draw her sample

the people or organizations who control access to the population you want to study

an administrative body established to protect the rights and welfare of human research subjects recruited to participate in research activities conducted under the auspices of the institution with which it is affiliated

general requirements a person must meet to be part of your sample

characteristics that disqualify a person from being included in a sample

the process by which the researcher informs potential participants about the study and attempts to get them to participate

the group of people you successfully recruit from your sampling frame to participate in your study

sampling approaches for which a person’s likelihood of being selected from the sampling frame is known

sampling approaches for which a person’s likelihood of being selected for membership in the sample is unknown

researcher gathers data from whatever cases happen to be convenient or available

(as in generalization) to make claims about a large population based on a smaller sample of people or items

selecting elements from a list using randomly generated numbers

the units in your sampling frame, usually people or documents

selecting every kth element from your sampling frame

the distance between the elements you select for inclusion in your study

the tendency for a pattern to occur at regular intervals

dividing the study population into subgroups based on a characteristic (or strata) and then drawing a sample from each subgroup

the characteristic by which the sample is divided in stratified random sampling

a sampling approach that begins by sampling groups (or clusters) of population elements and then selects elements from within those groups

in cluster sampling, giving clusters different chances of being selected based on their size so that each element within those clusters has an equal chance of being selected

a sample that looks like the population from which it was selected in all respects that are potentially relevant to the study

the set of all possible samples you could possibly draw for your study

The difference between what you find in a sample and what actually exists in the population from which the sample was drawn.

the odds you will detect a significant relationship between variables when one is truly present in your sample

the degree to which people in your sample differ from the overall population

the bias that occurs when those who respond to your request to participate in a study are different from those who do not respond to your request to participate

Graduate research methods in social work Copyright © 2021 by Matthew DeCarlo, Cory Cummings, Kate Agnelli is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


4.3 Quantitative research questions

Learning objectives

  • Describe how research questions for exploratory, descriptive, and explanatory quantitative questions differ and how to phrase them
  • Identify the differences between and provide examples of strong and weak explanatory research questions

Quantitative descriptive questions

The type of research you are conducting will impact the research question that you ask. Probably the easiest questions to think of are quantitative descriptive questions. For example, “What is the average student debt load of MSW students?” is a descriptive question—and an important one. We aren’t trying to build a causal relationship here. We’re simply trying to describe how much debt MSW students carry. Quantitative descriptive questions like this one are helpful in social work practice as part of community scans, in which human service agencies survey the various needs of the community they serve. If the scan reveals that the community requires more services related to housing, child care, or day treatment for people with disabilities, a nonprofit office can use the community scan to create new programs that meet a defined community need.

[Image: an illuminated street sign that reads “ask”]

Quantitative descriptive questions will often ask for a percentage, count the number of instances of a phenomenon, or determine an average. Descriptive questions may include only one variable, such as ours about debt load, or they may include multiple variables. Because these are descriptive questions, we cannot investigate causal relationships between variables. To do that, we need to use a quantitative explanatory question.
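
Descriptive questions like these translate directly into simple summaries of survey data. A minimal sketch, using made-up responses (every figure below is hypothetical):

```python
# Quantitative descriptive questions map onto averages, counts, and
# percentages. The debt figures here are invented for illustration.
survey = [
    {"student": "A", "debt": 45_000, "works_full_time": True},
    {"student": "B", "debt": 60_000, "works_full_time": False},
    {"student": "C", "debt": 0,      "works_full_time": True},
    {"student": "D", "debt": 30_000, "works_full_time": True},
]

# "What is the average student debt load of MSW students?"
average_debt = sum(r["debt"] for r in survey) / len(survey)

# "What percentage of MSW students work full time?"
pct_full_time = 100 * sum(r["works_full_time"] for r in survey) / len(survey)

print(f"average debt: ${average_debt:,.0f}")        # $33,750
print(f"working full time: {pct_full_time:.0f}%")   # 75%
```

Note that nothing here relates one variable to another; that is exactly what separates a descriptive question from an explanatory one.
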

Quantitative explanatory questions

Most studies you read in the academic literature will be quantitative and explanatory. Why is that? Explanatory research tries to build something called nomothetic causal explanations. Matthew DeCarlo says “com[ing] up with a broad, sweeping explanation that is universally true for all people” is the hallmark of nomothetic causal relationships (DeCarlo, 2018, chapter 7.2, para. 5). They are generalizable across space and time, so they are applicable to a wide audience. The editorial board of a journal wants to make sure their content will be useful to as many people as possible, so it’s not surprising that quantitative research dominates the academic literature.

Structurally, quantitative explanatory questions must contain an independent variable and dependent variable. Questions should ask about the relation between these variables. A standard format for an explanatory quantitative research question is: “What is the relation between [independent variable] and [dependent variable] for [target population]?” You should play with the wording for your research question, revising it as you see fit. The goal is to make the research question reflect what you really want to know in your study.

Let’s take a look at a few more examples of possible research questions and consider the relative strengths and weaknesses of each. Table 4.1 does just that. While reading the table, keep in mind that it only includes some of the most relevant strengths and weaknesses of each question. Certainly each question may have additional strengths and weaknesses not noted in the table.

Making it more specific

A good research question should also be specific and clear about the concepts it addresses. A group of students investigating gender and household tasks knows what they mean by “household tasks.” You likely also have an impression of what “household tasks” means. But are your definition and the students’ definition the same? A participant in their study may think that managing finances and performing home maintenance are household tasks, but the researcher may be interested in other tasks like childcare or cleaning. The only way to ensure your study stays focused and clear is to be specific about what you mean by a concept. The students in our example could pick a specific household task that was interesting to them or that the literature indicated was important—for example, childcare. Or, the students could take a broader view of household tasks, one that encompasses childcare, food preparation, financial management, home repair, and care for relatives. Either option is probably fine, as long as the researchers are clear on what they mean by “household tasks.”

Table 4.2 contains some “watch words” that indicate you may need to be more specific about the concepts in your research question.

It can be challenging in social work research to be this specific, particularly when you are just starting out your investigation of the topic. If you’ve only read one or two articles on the topic, it can be hard to know what you are interested in studying. Broad questions like “What are the causes of chronic homelessness, and what can be done to prevent it?” are common at the beginning stages of a research project. However, social work research demands that you examine the literature on the topic and refine your question over time to be more specific and clear before you begin your study. Perhaps you want to study the effect of a specific anti-homelessness program that you found in the literature. Maybe there is a particular model to fighting homelessness, like Housing First or transitional housing that you want to investigate further. You may want to focus on a potential cause of homelessness such as LGBTQ discrimination that you find interesting or relevant to your practice. As you can see, the possibilities for making your question more specific are almost infinite.

Quantitative exploratory questions

In exploratory research, the researcher doesn’t quite know the lay of the land yet. If someone is proposing to conduct an exploratory quantitative project, the watch words highlighted in Table 4.2 are not problematic at all. In fact, questions such as “What factors influence the removal of children in child welfare cases?” are good because they will explore a variety of factors or causes. In this question, the independent variable (“factors”) is less clearly specified, but the dependent variable, the removal of children, is quite clear. The inverse can also be true. If we were to ask, “What outcomes are associated with family preservation services in child welfare?”, we would have a clear independent variable, family preservation services, but an unclear dependent variable, outcomes. Because we are only conducting exploratory research on a topic, we may not yet have an idea of which concepts comprise our “outcomes” or “factors.” Only after interacting with our participants will we be able to understand which concepts are important.

Key Takeaways

  • Quantitative descriptive questions are helpful for community scans but cannot investigate causal relationships between variables.
  • Quantitative explanatory questions must include an independent and dependent variable.

Image attributions

“Ask” by terimakasih0, CC0.

Guidebook for Social Work Literature Reviews and Research Questions Copyright © 2020 by Rebecca Mauldin and Matthew DeCarlo is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


Can Quantitative Research Solve Social Problems? Pragmatism and the Ethics of Social Research

  • Open access
  • Published: 13 June 2019
  • Volume 167, pages 41–48 (2020)


  • Thomas C. Powell


Journal of Business Ethics recently published a critique of ethical practices in quantitative research by Zyphur and Pierides (J Bus Ethics 143:1–16, 2017). The authors argued that quantitative research prevents researchers from addressing urgent problems facing humanity today, such as poverty, racial inequality, and climate change. I offer comments and observations on the authors’ critique. I agree with the authors in many areas of philosophy, ethics, and social research, while making suggestions for clarification and development. Interpreting the paper through the pragmatism of William James, I suggest that the authors’ arguments are unlikely to change attitudes in traditional quantitative research, though they may point the way to a new worldview, or Jamesian “sub-world,” in social research.


Introduction

I was invited by the editors of this journal to comment on an article called “Is Quantitative Research Ethical? Tools for Ethically Practicing, Evaluating, and Using Quantitative Research” (Zyphur and Pierides 2017 ). The topic of the article is important and of great intrinsic interest to me, so I am pleased to offer this commentary. As will become clear, I agree with much of what the authors wrote, and with their overall philosophical orientation. The authors presented a compelling critique of traditional approaches to quantitative research (QR) in the social sciences, offering an ethical orientation for organizational research and providing helpful examples of ethical approaches to QR in management studies.

Of course, agreement makes uninteresting commentary and it is pointless to repeat the authors’ arguments and agree with them. What I have done instead is to play the devil’s advocate, selecting a few key points from the paper and evaluating them from the perspective of a traditional QR practitioner. For example, the authors argued that traditional approaches to QR are not objective but value-laden. I agree with this (see Powell 2001a ), but I evaluate whether the authors’ proposals alleviate this problem or make it worse. The authors argued that quantitative researchers should do their part to solve human problems. I agree with this (see Powell 2014a ), but I consider whether the problems they discussed were caused by quantitative methods, and whether it is reasonable to expect any research method to solve them. The authors endorsed a pragmatist epistemology as against foundationalist or “correspondence” epistemologies. I agree with this (see Powell 2001b , 2002 , 2003 , 2014b ), but I review the origins of pragmatist philosophy and consider whether pragmatism can legitimately be used to justify the social agendas of academic researchers.

My overall theme is that we should be careful what we wish for. If all social research is value-laden, then we should be wary of privileging the values of a particular community, including the community of socially minded elites working in universities, or of academic journal editors who aspire to make social impacts in the world. Many of us want to solve human problems but we should also examine ourselves, asking if we are the right people for the job, working in the right places, carrying the right tools. The authors’ pragmatic philosophy can help explain the nature of research practices, but it may not be the best platform for choosing between the competing values of academic researchers.

In the next section, I briefly summarize the argument in Zyphur and Pierides ( 2017 ) and provide overall comments. After that, I explore a few of the authors’ arguments in detail, considering their wider implications for social research. I conclude with a discussion of pragmatism in social research, examining the pragmatism of William James and its consequences for ethics in social research.

Overview of Zyphur and Pierides ( 2017 )

Zyphur and Pierides ( 2017 ) offered an ethical critique of traditional quantitative research (QR) in the social sciences. The authors challenged the philosophical foundations of traditional QR, such as its rationalist-materialist ontology and presumed scientific “objectivity.” They also challenged specific methods and practices in social research, including “best practices” in research design, sampling, measurement, hypothesis testing and data analysis.

The authors’ core message was that traditional QR practices are not objective but value-laden. Implicitly or otherwise, traditional QR takes an ethical position that impedes research that might address the real problems facing humankind today, such as racial inequality, poverty, corruption, and climate change. The authors did not reject QR as an enterprise, but criticized the assumption that standard QR “best practices” are objective and value-free. The authors called for a new “built for purpose” approach, in which researchers make their ethical purposes clear at the start, adapting research designs and analytical methods to those purposes.

The authors criticized traditional assumptions of “representation,” “correspondence,” and “probabilistic inference.” In traditional QR, researchers begin by defining theoretical constructs or variables that purportedly represent tangible or intangible objects in the real world. QR practitioners assume that these labels correspond to things in the external world with a degree of one-to-one accuracy; and they propose correlational or causal relations, which presumably correspond with how these objects relate in the real world. These relations are then tested by drawing samples which presumably correspond to larger populations. To establish the correspondence of sample results to populations, researchers use probabilistic inference, which they represent by statistical artefacts such as regression coefficients, confidence intervals, and p-values.

The authors argued that the assumptions of traditional QR—that labels correspond to things; that hypothesized relations correspond to actual relations; that samples correspond to populations; that probabilistic inference allows valid inductions—are open to philosophical and methodological objections across the board, some of which the authors discussed in the paper. More to the point, however, traditional QR methods smuggle hidden values into the research process, dictating which research questions can be asked, how constructs can be measured, and how data can be gathered and analysed. According to the authors, these values, operating under a cover of scientific respectability, impede social researchers who would use QR to improve the human condition.

As an alternative to traditional QR, the authors proposed a “built for purpose” approach, in which researchers do not mimic traditional QR, but develop QR practices capable of addressing real human problems in the world. Instead of starting with representations and correspondences, researchers should start with a clear ethical purpose—for example, to combat corporate corruption or to eliminate racial discrimination. Instead of focusing on concept validity, construct validity, and other correspondences, researchers should maximize “relational validity”—that is, the “mutual fitness” of research designs, analytical methods, and ethical purposes. According to the authors, “relational validity offers a novel response to the centuries old problem of induction.” (p. 12).

The authors argued that a “built for purpose” approach requires new perspectives and research practices, which they called “orientations” and “ways of doing.” The new “orientations” require researchers to begin by asking, “Who is the research for?” “What is it trying to achieve?” and “How will it improve the human condition?” The new “ways of doing” require researchers to adapt research methods—measurement, sampling, data analysis, and causal inferences—to ethical purposes. In proposing this approach, the authors aimed to dispel traditional QR’s fixation on scientific objectivity by “putting QR to work for other purposes that are of greater concern—inequality, global warming, or corruption.” (p. 2).

Comments on Zyphur and Pierides ( 2017 )

To evaluate the paper on its own terms, I want to say what I think the authors were trying to do, and not trying to do. In particular, I do not think the authors were making an exhaustive technical critique of quantitative methods in social research. The authors criticized certain tendencies in QR practice—such as data-mining for low p-values (“p-hacking”), and focusing on averages in regression analysis—but seemed less concerned with statistical technique than with the broader goal of “disrupting the universality” of scientific method. Their main recommendations—that researchers should focus on human problems, “built for purpose” research designs, and the “mutual fitness” of methods and purposes—could be applied equally to quantitative or qualitative research. If the authors had intended the paper as a technical deconstruction of QR in the social sciences, much more could have been said, and indeed has been said by other authors (e.g. Gelman 2015; Schwab et al. 2011; Simmons et al. 2011; Vul et al. 2009).

But the authors focused on a different point; namely, that traditional QR practices impede research on social problems even when these practices are used as they were intended. Researchers should avoid obvious errors in statistical inference, such as inferring causation from correlation. But if the real problem in QR is unthinking obedience to the orthodoxies of scientific method, the issues are behavioural rather than statistical. For example, replicability and generalizability are sound QR principles, but they incentivize research on repeatable problems while neglecting specific or non-replicable contexts; and representative sampling improves probabilistic inference, but many human problems involve minorities facing unique hardships. Hence the authors focused less on QR technique than on the need for new “orientations” and “ways of doing” in the choice and implementation of QR methods.

This approach places the authors in a tradition reminiscent of C. Wright Mills in The Sociological Imagination (1959). Mills criticized the “abstracted empiricism” of quantitative sociology in mid-20th century North America; that is, the trend of importing assumptions from the natural sciences, defining social constructs as if they were physical objects, and using statistical methods to define problems rather than the other way around. Like Zyphur and Pierides, Mills argued that social researchers should put the scientific method in the service of research problems: “Controversy over different views of ‘methodology’ and ‘theory’ is properly carried on in close and continuous relation with substantive problems” (Mills 1959, p. 83). Mills was concerned less with statistical methods than with the philosophies lurking beneath the supposed “objectivity” of quantitative social research:

As a matter of practice, abstracted empiricists often seem more concerned with the… Scientific Method. Methodology, in short, seems to determine the problems. And this, after all, is only to be expected. The Scientific Method that is projected here did not grow out of, and is not a generalization of, what are generally and correctly taken to be the classic lines of social science work. It has been largely drawn, with expedient modifications, from one philosophy of natural science. (Mills 1959, pp. 39, 40)

The authors share not only Mills’ scepticism of scientific method but also his philosophical pragmatism. It is important to recognize the authors’ pragmatism and not to conflate it with ontologies aligned with nominalism, subjectivism, social constructionism, and postmodern social theory. The authors sympathize with these views, but to classify their position as “subjectivism” or “social constructionism” would be to misunderstand what they are saying. When the authors use a term like “correspondence,” they are not making a vague reference to similarity, but invoking the terminology of the pragmatist philosophy of science. Pragmatism anticipated many of the philosophical moves that would later characterize postmodern social theory, but the two approaches have different origins, and different consequences for social research.

Although the authors explained their pragmatism in an earlier paper (Zyphur et al. 2016), they might have done more in the current paper to guide readers through their philosophical position, linking pragmatism with their critique of traditional QR and recommendations for future QR practice. As it stands, the paper seems to blend non-pragmatist and pragmatist ideas together in a kind of free-floating subjectivist relativism that may strike some readers as confusing or unhelpful. For example, in explaining their approach, the authors used abstract language that is hard to place in any philosophical tradition:

To begin, we put forth two infinitely long and intersecting dimensions of QR practice that we call orientations and ways of doing, which connect purposes to QR practice. Instead of being ‘foundations’ or somehow fundamental in a representation correspondence sense, each category and its contents are akin to idioms or axiomatic lists that tend toward infinity because they can be populated indefinitely, limited only by the creativity of those who adopt them. They may also be orthogonal, indicating that each orientation can, at least in theory, be combined with any way of doing QR in order to achieve a given purpose. In what follows, we describe these dimensions, beginning to populate the lists that may constitute each dimension while illustrating the fruitfulness of combinations that emerge. However, there are two caveats to mentioned upfront which, if ignored, undermine our broader recommendations. (p. 4)

Due in part to the paper’s lack of clarity, both in language and narrative structure, a sceptical reader might argue that the authors have left their recommendations open to ethical misinterpretation, even by those who want to put them into practice. For example, a reader might interpret the authors’ “built for purpose” method roughly as follows: (1) Identify a serious social problem (poverty, racial inequality, climate change, etc.); (2) Choose a desired outcome (elimination of poverty, racial equality, climate stabilization, etc.); (3) Design a quantitative study that demonstrates the severity of the problem; (4) Use the quantitative results to campaign for social change that solves the problem.

This is an oversimplification of the authors’ advice, but a traditional QR practitioner might argue that the method ignores the crucial distinction between ethical outcomes and ethical processes. The received scientific method has many faults, but it recognizes, in principle, that scientists should not choose their desired outcomes or contrive their research processes to achieve those outcomes. Admittedly, scientists have abused scientific method in exactly this way, but the whole point of scientific method is to neutralize researchers’ preferences. Without a relatively objective process, researchers will choose the outcomes they want and manipulate research processes to achieve them. These manipulations may produce social changes, and some of the changes may be socially desirable—but this is not an ethical process unless we believe that “the ends justify the means” (consequentialist ethics) or that “bad people achieve their goals this way, so good people must do it too” (compensatory ethics). Either way, achieving social purposes comes at a high ethical price.

Similarly, a critic might stand behind the Rawlsian “veil of ignorance” (Rawls 1971) and ask: If we wanted accurate and reliable research on a social problem, would we prefer a team of researchers bound to a fixed research process they believe is ethical, or a team of researchers bound to a fixed social outcome they believe is ethical? Either team might have false beliefs, so the research process might actually be unethical, or the social outcome might be unethical. The problem is that a research team bound to a fixed outcome will reach the same conclusions whether the outcome is ethical or not, whereas a team that follows a fixed process has a chance of reaching new conclusions; and, if its process is ethical, of reaching conclusions independent of its own preferences. A reasonable person behind the Rawlsian veil might prefer a process capable of producing new or unbiased results, even if the process was imperfectly implemented.

Traditional quantitative researchers might also challenge the authors’ assumption that researchers who practice their methods are the ones who should define the world’s problems and decide which ideas get published. How do we know their values are trustworthy? If an ethical problem exists with scientific method, should we relocate our trust from the scientific community to a sub-community of socially minded university professors, journal editors and government funding agencies? Does this sub-community have a shared and coherent view of social ethics and human purposes—and if not, by what process will they define and prioritize social outcomes? What would stop an ambitious social researcher with political connections and a social media profile from hijacking the method to perpetrate mass social harm?

The authors rightly point out that traditional QR is not value-free, nor is scientific method in general. Scientists often allow quantitative methods to dictate research problems, and they have perverse incentives to find publishable results in their data. When QR is driven by methods indifferent to human purposes, it crowds out research that might address large-scale human problems. All of this is true. On the other hand, the authors’ claim that traditional QR is “ethics-laden” relies on the charge that it “produces an orientation toward ‘facts’ rather than ‘values’” (p. 3). Therefore, the authors need to show how a commitment to facts undermines the solving of human problems, and how a commitment to values removes the ethical biases of traditional methods. Unfortunately, the paper does not provide the needed clarity:

By separating facts from values, facts appear to be unrelated to ethics; and with a focus on facts, ethics appear irrelevant for QR validity… New understandings of validity are needed to address the ways that QR is an ethical act and ethically consequential. This ethicality may be unrelated to representation or correspondence, such as if QR is meant to produce images of society that change the way people think and act – an enactment of a reality that did not yet exist to be merely ‘represented’… (p. 7)

Many researchers, qualitative and quantitative alike, may disagree with the authors on the fundamental nature and purpose of social research. For example, the authors implicated traditional QR in the global financial crisis (p. 10), but many traditional researchers would reject any suggestion that the financial crisis can be laid at the doorstep of a research method—why not qualitative methods?—or that a research method can solve the world’s problems. Traditional QR practitioners would acknowledge that “Determine the nature and extent of human poverty” is a QR problem; but they would not acknowledge that “Eliminate human poverty” is a QR problem. In their worldview, it is a human problem, a social problem, an economic problem, a political problem, a gender problem, a racial problem, and many other kinds of problem. QR can help us understand what is going on in the domain of human poverty, and QR analysis provides input to policy-makers. But this does not prove that researchers should stop using “best practices,” but merely begs the question of whether traditional QR or “built for purpose” QR is the better method for understanding what is going on.

QR practitioners might also question the authors’ logical consistency in rejecting assumptions like “representation” and “correspondence.” Presumably, when the authors made statements about traditional QR—for example, that QR includes hypothesis testing and regression analysis—they were affirming that their sentences represented something, and that their ideas corresponded to something beyond words on a page. When the authors wrote “QR is often done in terms of representation and correspondence (Zyphur et al. 2016)” (p. 2), they affirmed that this proposition, and the term “Zyphur et al. (2016),” corresponded to things and persons that possessed a reality independent of the words, even if that reality was a social construction rather than an objective material object. In other words, the authors seemed to be using representation and correspondence to critique representation and correspondence.

I think QR researchers will be especially interested in the illustrations of exemplary QR provided by the authors. Without critiquing the papers individually, the examples show that the search for better QR methods is fraught with pitfalls. Regardless of QR methods, social research is a human process concerned with human subjects. It is not obvious that researchers who follow the QR method of Zyphur and Pierides are behaving more ethically than researchers who follow traditional QR, even when they are researching worthy causes. The choice is not between ethical and unethical QR, but among a range of imperfect quantitative methods, each inviting its own forms of human error. Along with the authors, I hope that QR can make contributions to solving human problems; but I also believe that if we did not have something like traditional QR, we would have to invent it. Whatever its flaws, the question is not whether scientific method eliminates human error, but whether, among the imperfect alternatives available to us, it gives us the best explanations for what is going on in the world.

This is why we must be careful what we wish for. Humanity is confronted with many problems, and social researchers need to find a way to do their part. Whatever objections can be raised against the paper, I endorse what the authors are trying to achieve. But removing or reforming traditional QR will not solve the problems of the human condition because these problems are not caused by a research method. The problems of the human condition are caused by people, and shifting responsibility from one group of researchers to another will not improve the human condition. We should use QR to address human problems, while bearing in mind that the authors’ method confers power to solve the world’s problems on fallible people and institutions—academic elites, journal editors, and the governments, corporations and philanthropic agencies that fund social research—which have vested interests of their own, and are, to a significant degree, responsible for the problems we are trying to solve.

William James and Ethics in Management Research

I have written elsewhere about pragmatism and its relevance for organizational research (Powell 2001a, 2002, 2014a). In this section, I want to show how pragmatism, or a version of it, relates to the ethical issues raised by Zyphur and Pierides. In particular, I want to examine the pragmatism of William James and its consequences for ethics in management research.

In doing this I am prompted by statements in Zyphur and Pierides (2017) and its predecessor (Zyphur et al. 2016), such as:

‘Best practices’… may be useful for the purpose of standardizing QR, but this… distracts from the task of putting QR to work for other purposes. (Zyphur and Pierides 2017, p. 14)

Inductive inference means actively working to enact research purposes, making research ‘true’ by helping it to shape the world. (Zyphur and Pierides 2017, p. 13)

Many pragmatists propose that organizing practical action is the point of thinking and speaking… (Zyphur et al. 2016, p. 478)

Instead of having to gather large samples to avoid statistical errors of inference, researchers would be better off trying to guard against actions that have unhelpful consequences. (Zyphur et al. 2016, p. 478)

I agree with the authors’ turn to pragmatism and their rejection of conventional philosophical foundations. As a philosophy of science, pragmatism is concerned with human purposes, and the authors are right to cite pragmatism in supporting ethics in social research. However, I hope that readers will not misconstrue pragmatism as a literal invitation to reject QR “best practices” while “putting QR to work for other purposes.” Philosophical pragmatism argues that most statements about truth and being (epistemology and ontology) can be resolved into statements about things people actually do, or might do. They do not resolve, however, into statements about what people should do. To explain why pragmatists make this distinction, and to show its consequences for the method proposed by Zyphur and Pierides, requires a brief digression on the origins of William James’s pragmatism.

William James developed his pragmatist philosophy over many years, exploring its consequences for psychology, epistemology, ontology and philosophy of science (e.g., James 1890a, 1902, 1907). In the two-volume Principles of Psychology (1890b), James linked psychology with philosophy through the concept of belief. He asked: What happens in the consciousness of a subjectively experiencing human being when confronted with a statement of fact, or proposition? What feelings are evoked? How does a person translate the words of a proposition into the state of consciousness we call “belief”?

James argued that belief occurs when a proposition evokes marked feelings of subjective rightness, like a puzzle piece fitting into place. Instead of discomfort or agitation, a proposition evokes a sense of cognitive and emotional harmony, producing a warm psychological glow of mental assent. In Jamesian psychology, belief is the warm psychological glow of mental assent. People justify their beliefs using underlying reasons or causes, such as sense data, logic, intuition, authority, persuasion, or common sense. But belief is a feeling, not a fact; and the belief that a belief is a fact and not a feeling is also a feeling. In Jamesian psychology, a justification only counts as belief when it evokes the feeling, or psychological “yes” signal, that provides the glow of subjective rightness.

James’s theory of belief formed the psychological foundation for his pragmatism. An object becomes real for people when they believe in its existence; and a proposition becomes true for people when they believe in its truth. Scientific propositions become true for scientists when scientists believe in their truth. This does not mean that scientific propositions are arbitrary, or do not correspond with sense observations shared with other people. In Jamesian ontology, people have no direct access to metaphysical “truths” or “realities,” so their beliefs rely on sense data, logic, intuition, and other forms of justification. Indeed, other people’s beliefs are themselves unobservable, so we infer them from what people say and do. When we affirm that something is “true” for other people, what we mean is that we have observed people saying and doing things that make us believe that they believe that the thing is true. Human behaviour—what people say and do, including ourselves—comprises the whole of the evidence.

Despite his interest in the experiencing person, James did not see pragmatism as justifying a relativist or subjectivist social science, any more than it justified a positivist or objectivist one. James began as a physiologist, building his psychological theories on experimental physiology and the functional anatomy of the human brain; and his pragmatism did not deny the existence of the external world, the laws of propositional logic, or the validity of scientific method. James observed reality through the medium of human consciousness, which he held to speak for itself and not for things outside itself. James did not deny the existence of an external world, but saw the external world as becoming manifest through the meanings conferred on it in the perceptions and interpretations of human beings.

In Jamesian pragmatism, people adopt beliefs in order to solve problems in human consciousness, and these problems are infinitely varied. James did not propose pragmatism as a form of prescriptive advice to scientists, urging them to focus on final purposes instead of processes, or to “be practical” instead of thinking abstractly. The problems of poets and metaphysicians are not “practical” compared to the problems of engineers and airline pilots, but pragmatism does not urge poets to become more practical. Pragmatism is a descriptive theory of human ontology and epistemology which holds that people derive their ideas of truth and reality not from comparisons between propositions and realities, but by solving problems that arise in human consciousness. The theory makes no claims about the degree of “practicality” of interests or problems people may or should have.

Beyond affirming the centrality of human experience for psychology and philosophy of science, James argued that any phenomenon in the domain of human consciousness can become a legitimate object of human inquiry. Whatever has meaning for an experiencing human being has ontological reality, and there are no greater or lesser realities. The beliefs we call “scientific,” derived from theory and empirical observation, do not have higher ontological status, or “more reality,” than other propositions in human consciousness, and hence no superior claim on truth. Everything that enters human consciousness has the same eligibility for inquiry, examination and analysis, whether derived from sense experience, reasoning, dreams, delusions, hallucinations, mythology, or religious faith.

James saw the natural sciences as domains of human inquiry concerned with aspects of human consciousness associated with the natural world. While recognizing the widespread human impacts of science, James regarded the scientific community as one of many “sub-worlds” of human inquiry (see James 1890b, p. 291). A Jamesian sub-world, like a Wittgensteinian “language game,” denotes a community of inquiry with its own conventions, beliefs and vocabularies (on Wittgenstein’s reliance on James’s pragmatism, see Goodman 2002). By the conventions of the scientific sub-world, scientists follow a method of inquiry grounded in theory, quantification, measurement and experiment, while rejecting propositions grounded in private opinion or groundless speculation. James acknowledged the efficacy and significance of science, since its problems very often affect non-scientists, and solving them makes life better (or worse) for many people. At the same time, James affirmed that scientific problem-solving conventions do not give scientists superior access to extra-experiential realities, but function in human consciousness like other conventions.

James’s philosophy allowed for great pluralism among the sub-worlds of human inquiry; for example, there are sub-worlds of myth, literature, art and religion, each with its own conventions, beliefs, and vocabularies. James argued that these sub-worlds produced beliefs that were no less real or true, within their own conventions, than the beliefs of natural scientists—and that it was futile to judge the beliefs of one sub-world by the conventions of another: to the poet, a scientific discovery may seem misguided; or to a scientist, a religious inspiration may seem superstitious. Participants negotiate conventions within their own sub-worlds, but pragmatism does not make value judgments about the comparative legitimacy of sub-worlds. As in James’s The Varieties of Religious Experience (1902), any phenomenon that appears in human consciousness is fruitful subject matter for human inquiry.

And this is where ethics comes in. Pragmatists do not hold that truth is relative or a matter of preference. Each sub-world of inquiry follows its own problem-solving conventions, and these conventions become relatively hardened, even as they continue to evolve. Within the sub-world of science, truth is not groundless but evidence-based, and to pretend otherwise is not merely to get it wrong, but to behave unethically. Scientists can debate the kinds of evidence that bear on a particular question, but they cannot debate whether evidence bears on scientific questions. Theologians can debate whether God is known by faith, revelation, or church tradition, but they cannot debate whether God is relevant to theology. Outside claimants who deny the conventions of a sub-world, unlike insiders disputing how those conventions are applied, are perceived not merely as mistaken, but as immoral, ignorant or both. Disputes within a sub-world of practice can solve problems for participants, but disputes across sub-worlds tend to devolve into name-calling and ethical recrimination.

From a Jamesian pragmatist perspective, the debate initiated by Zyphur and Pierides is a dispute across sub-worlds rather than a conversation that can be progressed within the sub-world of QR practice. Quantitative social research is a legitimate sub-world of practice, and like the proposals by C. Wright Mills a half-century earlier, the authors’ proposals are unlikely to alter or deter that sub-world. From the perspective of the traditional QR sub-world, the debate initiated by the authors—dropping QR best practices and using QR to achieve researchers’ social aspirations—is not so much a challenge to QR practice as a kind of category mistake. The authors’ paper is not written in the language of quantitative researchers, it does not acknowledge their legitimacy, it does not anticipate or address their potential responses, and it proposes to overthrow the standards of their community without discussing the consequences of abandoning those standards. The authors seem to inhabit a different sub-world altogether, and the two sub-worlds do not have very much to say to each other.

I believe this is a good thing. If traditional QR is a legitimate sub-world of practice, so is the sub-world proposed by the authors. I can imagine a sub-world in which people put QR in the service of solving human problems, with its own conventions, beliefs and vocabularies. This sub-world cannot replace traditional QR or operate within it, but requires an independent code of practice and community of participants. To build this community within the conventions of traditional QR would constrain and diminish both traditional QR and what the authors are trying to accomplish. The authors have started the conversation by articulating a set of principles for purpose-driven QR. Perhaps now they can define the social and intellectual agenda for this community, and build the human and institutional infrastructure required for its growth and development. I support what they are trying to do and I hope they succeed.

Gelman, A. (2015). The connection between varying treatment effects and the crisis of unreplicable research: A Bayesian perspective. Journal of Management, 41 (2), 632–643.


Goodman, R. B. (2002). Wittgenstein and William James. Cambridge, UK: Cambridge University Press.


James, W. (1890a). The principles of psychology (Vol. I). New York: Henry Holt and Company.


James, W. (1890b). The principles of psychology (Vol. II). New York: Henry Holt and Company.

James, W. (1902). The varieties of religious experience: A study in human nature. London: Longmans, Green and Co.

James, W. (1907). Pragmatism: A new word for some old ways of thinking. London: Longmans, Green and Co.

Mills, C. Wright. (1959). The sociological imagination. New York: Oxford University Press.

Powell, T. C. (2001a). Competitive advantage: Logical and philosophical considerations. Strategic Management Journal, 22 (9), 875–888.

Powell, T. C. (2001b). Fallibilism and organizational research: The third epistemology. Journal of Management Research, 4, 201–219.

Powell, T. C. (2002). The philosophy of strategy. Strategic Management Journal, 23 (9), 873–880.

Powell, T. C. (2003). Strategy without ontology. Strategic Management Journal, 24 (3), 285–291.

Powell, T. C. (2014a). Strategic management and the person. Strategic Organization, 12 (3), 200–207.

Powell, T. C. (2014b). William James. In Jenny Helin, Tor Hernes, Daniel Hjorth, & Robin Holt (Eds.), The Oxford handbook of process philosophy and organization studies (pp. 166–184). Oxford: Oxford University Press.

Rawls, J. (1971). A theory of justice. Cambridge, MA: Belknap.

Schwab, A., Abrahamson, E., Starbuck, W. H., & Fidler, F. (2011). Researchers should make thoughtful assessments instead of null-hypothesis significance tests. Organization Science, 22, 1105–1120.

Simmons, J., Nelson, L., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22, 1359–1366.

Vul, E., Harris, C., Winkielman, P., & Pashler, H. (2009). Puzzlingly high correlations in fMRI studies of emotion, personality, and social cognition. Perspectives on Psychological Science, 4 (3), 274–290.

Zyphur, M. J., & Pierides, D. C. (2017). Is quantitative research ethical? Tools for ethically practicing, evaluating, and using quantitative research. Journal of Business Ethics, 143, 1–16.

Zyphur, M. J., Pierides, D. C., & Roffe, J. (2016). Measurement and statistics in ‘organization science’: Philosophical, sociological, and historical perspectives. In R. Mir, H. Willmott, & M. Greenwood (Eds.), The Routledge companion to philosophy in organization studies (pp. 474–482). Abingdon: Routledge.


The author received no funding to support this research.

Author information

Authors and affiliations

Said Business School, University of Oxford, Park End Street, Oxford, OX1 1HP, UK

Thomas C. Powell


Corresponding author

Correspondence to Thomas C. Powell.

Ethics declarations

Conflict of interest

The author declares having no conflict of interest in this research.

Research Involving Human and Animal Rights

The author declares that this article does not contain any studies with human participants or animals performed by the author.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Powell, T.C. Can Quantitative Research Solve Social Problems? Pragmatism and the Ethics of Social Research. J Bus Ethics 167, 41–48 (2020). https://doi.org/10.1007/s10551-019-04196-7


Received : 21 December 2017

Accepted : 23 May 2019

Published : 13 June 2019

Issue Date : November 2020



Keywords: Quantitative research


NASW Press

  • Reference Works
  • Children, Youths, & Families
  • Health & Mental Health
  • Practice & Policy
  • NASW Standards
  • Children & Schools
  • Health & Social Work
  • Social Work
  • Social Work Abstracts

Social Work Research

  • Calls for Papers
  • Faculty Center
  • Librarian Center
  • Student Center
  • Booksellers
  • Advertisers
  • Copyrights & Permissions
  • eBook Support
  • Write for Us
  • Graduation Sale

NASW members can access articles online at: Social Work Research

Social Work Research publishes exemplary research to advance the development of knowledge and inform social work practice. Widely regarded as the outstanding journal in the field, it includes analytic reviews of research, theoretical articles pertaining to social work research, evaluation studies, and diverse research studies that contribute to knowledge about social work issues and problems.

IMAGES

  1. A Quick Guide to Quantitative Research in the Social Sciences

    importance of quantitative research in social work

  2. Importance of Quantitative Research Across Different Fields

    importance of quantitative research in social work

  3. Importance of Quantitative Research

    importance of quantitative research in social work

  4. Quantitative Research: What it is, Tips & Examples

    importance of quantitative research in social work

  5. Lesson 2

    importance of quantitative research in social work

  6. Quantitative Social Research

    importance of quantitative research in social work

VIDEO

  1. The Importance of Quantitative Research Across Fields || Practical Research 2 || Quarter 1/3 Week 2

  2. Social Work Research: Steps/Procedure

  3. Quantitative Technique For Business Unit-1|Meaning|Characteristics|Functions|Importance|Limitations

  4. What is quantitative research?

  5. Social Work Research: Cultural Competence in Research (Chapter 6)

  6. Donna Mertens- guest lecture on transformative mixed methods with questions (Dr. CohenMiller)

COMMENTS

  1. The impact of quantitative research in social work

    The importance of quantitative research in the social sciences generally and social work specifically has been highlighted in recent years, in both an international and a British context. Consensus opinion in the UK is that quantitative work is the 'poor relation' in social work research, leading to a number of initiatives.

  2. Shaping Social Work Science: What Should Quantitative Researchers Do

    Based on a review of economists' debates on mathematical economics, this article discusses a key issue for shaping the science of social work—research methodology. The article describes three important tasks quantitative researchers need to fulfill in order to enhance the scientific rigor of social work research.

  3. Work-life balance, social support, and burnout: A quantitative study of

    Social work is acknowledged to be a high-stress profession that involves working with people in distressing circumstances and complex life situations such as those experiencing abuse, domestic violence, substance misuse, and crime (Stanley & Mettilda, 2016). It has been observed that important sources of occupational stress for social workers include excessive workload, working overtime ...

  4. Quantitative Research Methods for Social Work: Making Social Work Count

    This book arose from funding from the Economic and Social Research Council to address the quantitative skills gap in the social sciences. The grants were applied for under the auspices of the Joint University Council Social Work Education Committee to upskill social work academics and develop a curriculum resource with teaching aids.

  5. The Positive Contributions of Quantitative Methodology to Social Work

    In his important critique of Campbell's position on internal validity, he argues that "'External validity'—validity of inferences that go beyond the data—is the crux of social action, not 'internal validity'" (1980, p. 231). ... Quantitative social work research does face peculiarly acute difficulties arising from the intangible ...

  6. The Nature and Extent of Quantitative Research in Social Work: A Ten

    Quantitative work seems to present many people in social work with particular problems. Sharland's (2009) authoritative review notes the difficulty 'with people doing qualitative research not by choice but because it's the only thing they feel safe in' (p. 31).

  7. What is Quantitative Research?

    Quantitative research deals in numbers, logic, and an objective stance. Quantitative research focuses on numeric and unchanging data and detailed, convergent reasoning rather than divergent reasoning [i.e., the generation of a variety of ideas about a research problem in a spontaneous, free-flowing manner]. Its main characteristics are:

  8. Nature and Extent of Quantitative Research in Social Work Journals: A

    Although the proportion of quantitative research is rather small in social work research, the review could not find evidence that it is of low sophistication. Finally, this study concludes that future research would benefit from making explicit why a certain methodology was chosen.

  9. Quantitative Research Methods for Social Work: Making Social Work Count

    The book is a comprehensive resource for students and educators. It is packed with activities and examples from social work covering the basic concepts of quantitative research methods - including reliability, validity, probability, variables and hypothesis testing - and explores key areas of data collection, analysis and evaluation ...

  10. Social Work Research Methods

    Quantitative data — facts that can be measured and expressed numerically — are crucial for social work. Quantitative research has advantages for social scientists. Such research can be more generalizable to large populations, as it uses specific sampling methods and lends itself to large datasets. ... The Importance of Research Design. Data ...

  11. 11. Quantitative measurement

    The NASW Code of Ethics discusses social work research and the importance of engaging in practices that do not harm participants. This is especially important considering that many of the topics studied by social workers are those that are disproportionately experienced by marginalized and oppressed populations.

  12. (PDF) Social Work Research and Its Relevance to Practice: "The Gap

    The history of social work education may have also contributed to making it difficult for those teaching on university social work courses to engage routinely in research (Orme and Powell, 2007).

  13. A Practical Guide to Writing Quantitative and Qualitative Research

    INTRODUCTION. Scientific research is usually initiated by posing evidence-based research questions which are then explicitly restated as hypotheses. The hypotheses provide directions to guide the study, solutions, explanations, and expected results. Both research questions and hypotheses are essentially formulated based on conventional theories and real-world processes, which allow the ...

  14. 11. Quantitative measurement

    This is the opposite of quantitative research, in which definitions must be completely set in stone before the inquiry can begin. ... The NASW Code of Ethics discusses social work research and the importance of engaging in practices that do not harm participants. This is especially important considering that many of the topics studied by social ...

  15. Shaping Social Work Science: What Should Quantitative Researchers Do

    First, the Society for Social Work and Research (SSWR) was founded in 1994 as a free-standing organization dedicated to the advancement of social work research. Since its founding, the 18 SSWR annual conferences and other SSWR-sponsored activities have elevated the scientific level of research in the social work profession.

  16. Causality and Causal Inference in Social Work: Quantitative and

    The Nature of Causality and Causal Inference. The human sciences, including social work, place great emphasis on understanding the causes and effects of human behavior, yet there is a lack of consensus as to how cause and effect can and should be linked (Parascandola & Weed, 2001; Salmon, 1998; Susser, 1973). What little consensus exists seems to be that effects are assumed to be consequences ...

  17. 10. Quantitative sampling

    Each setting (agency, social media) limits your reach to only a small segment of your target population who has the opportunity to be a part of your study. This intermediate point between the overall population and the sample of people who actually participate in the researcher's study is called a sampling frame.

  18. The impact of quantitative research in social work

    This paper is the first to focus on the academic impact of quantitative research in social work developing measurable outcomes. It focuses on three leading British-based generic journals over a 10 ...

  19. Quantitative Research Methods for Social Work

    Quantitative research makes a very important contribution to both understanding and responding effectively to the problems that social work service users face. In this unique and authoritative text, a group of expert authors explore the key areas of data collection, analysis and evaluation and outline in detail how they can be applied to practice.

  20. 4.3 Quantitative research questions

    You should play with the wording for your research question, revising it as you see fit. The goal is to make the research question reflect what you really want to know in your study. Let's take a look at a few more examples of possible research questions and consider the relative strengths and weaknesses of each. Table 4.1 does just that.

  21. Can Quantitative Research Solve Social Problems? Pragmatism ...

    Journal of Business Ethics recently published a critique of ethical practices in quantitative research by Zyphur and Pierides (J Bus Ethics 143:1-16, 2017). The authors argued that quantitative research prevents researchers from addressing urgent problems facing humanity today, such as poverty, racial inequality, and climate change. I offer comments and observations on the authors ...

  22. Social Work Research

    Social Work Research publishes exemplary research to advance the development of knowledge and inform social work practice. Widely regarded as the outstanding journal in the field, it includes analytic reviews of research, theoretical articles pertaining to social work research, evaluation studies, and diverse research studies that contribute to knowledge about social work issues and problems.

  23. Quantitative research methods for social work: Making social work count

    On Jan 1, 2017, Barbra Teater and others published Quantitative research methods for social work: Making social work count (PDF available on ResearchGate).

  24. How Do I Critically Consume Quantitative Research?

    The backbone of quantitative research is data. In order to have any data, participants or cases must be found and measured for the phenomena of interest. These participants are all unique, and it is this uniqueness that needs to be disclosed to the reader.