Examining the world through qualitative inquiry


Tips on considering “subjectivity” in qualitative research

Many newcomers to qualitative studies struggle with how one’s self, and one’s “subject positions” or “subjectivities”, might be represented in qualitative inquiry. For those more attuned to positivist approaches to research, in which the researcher is depicted as “neutral” and “objective,” discussing one’s own interests in and relationships to a topic and the participants of a research study can seem to err dangerously into the territory of “biased” research, which is viewed as problematic, if not lacking in validity.

One scholar who wrote about his subjectivities in relation to his research was Alan “Buddy” Peshkin (1931-2000), an educational ethnographer who worked at Stanford University. Over the course of his career, Peshkin used ethnographic methods to explore how schooling was accomplished in multiple settings in the United States. His ethnographies include studies of a Midwestern school (1978), a fundamentalist Christian school (1986), an ethnically diverse school in California (1991), a Native American school (1997) and an elite school (2001). What all of these ethnographies have in common is an interest in providing multi-faceted and in-depth portrayals of what goes on in school settings.

Peshkin also talked about how his own subject positions intersected with those of research participants in these studies. For example, Peshkin describes how his positionality as a Jewish person conflicted with that of his hosts in his study of Bethany Baptist Academy (1986). In this book, Peshkin details the personal challenges and costs of undertaking a study in which he was consistently made aware of his “potential nonexistence, or disappearance” (p. 287) as a Jewish person. The participants he worked with believed that non-Christians would not be saved and were “fair game for conversion” (p. 289). In this book, Peshkin considers the personal and societal costs inherent in these views, and ponders over the potential problems represented by the positions taken by fundamentalist groups within a pluralist society. More to Peshkin’s liking was the cultural diversity and ethnic maintenance promoted at Riverview High, a school attended by a multicultural student body, including Sicilians, Mexicans, blacks and Filipinos.

Peshkin takes up the question of how one might consider one’s subject positions in a much-cited article (1988) entitled “In search of subjectivity: One’s own.” Peshkin defines “subjectivity” as the “amalgam of the persuasions that stem from the circumstances of one’s class, statuses, and values interacting with the particulars of one’s object of investigation” (Peshkin, 1988, p. 17). He inventories his “subjective I’s”, describes how these I’s surfaced in the conduct of his research studies, and gives each “I” a distinctive label to indicate how it surfaced in his research in schools, namely:

  • The Ethnic-Maintenance I
  • The Community-Maintenance I
  • The E-Pluribus-Unum I
  • The Justice-Seeking I
  • The Pedagogical-Meliorist I
  • The Non-research Human I

Suggesting that qualitative researchers need to notice the emergence of the various “subjective I’s” in any given study, Peshkin (1988, p. 17) observes that

When researchers observe themselves in the focused way that I propose, they learn about the particular subset of personal qualities that contact with their research phenomenon has released. These qualities have the capacity to filter, skew, shape, block, transform, construe, and misconstrue what transpires from the outset of a research project to its culmination in a written statement.

I’ve found it helpful, in my own research as well as in teaching, to read Peshkin’s ethnographies and to think about his reflections on how his subjectivities emerged differently across the studies he conducted.

Although some scholars have critiqued how the notion of reflexivity has been taken up in qualitative inquiry (e.g., the writing of subjectivity statements), for newcomers to qualitative research, Peshkin’s (1988) article is still a useful reminder and starting point. This article suggests that qualitative researchers ask themselves questions, such as:

  • What subjectivities might you bring to your research?
  • How might you label your “subjective-I’s”?
  • What have you left out?
  • What does each of these subjective-I’s allow with respect to your research study?
  • How do these subjective-I’s potentially limit you as a researcher of your topic?

Through his use of ethnographic methods to examine a multitude of school settings, Alan Peshkin has left a wonderful legacy in qualitative inquiry, one that contributes not only to how qualitative research studies are conducted but also to our understanding of how schools work.

There are many more articles and books on the issue of subjectivity and reflexivity in qualitative research. To begin, I recommend the following texts (Finlay, 2002, 2012; Finlay & Gough, 2003; Macbeth, 2001; Pillow, 2003; Roulston & Shelton, 2015).

Kathy Roulston

Finlay, L. (2002). Negotiating the swamp: The opportunity and challenge of reflexivity in research practice. Qualitative Research, 2(2), 209-230.

Finlay, L. (2012). Five lenses for the reflexive interviewer. In J. F. Gubrium, J. A. Holstein, A. Marvasti, & K. McKinney (Eds.), The SAGE handbook of interview research: The complexity of the craft (2nd ed., pp. 317-331). Los Angeles: Sage.

Finlay, L., & Gough, B. (Eds.). (2003). Reflexivity: A practical guide for researchers in health and social sciences. Oxford: Blackwell Science.

Macbeth, D. (2001). On “reflexivity” in qualitative research: Two readings, and a third. Qualitative Inquiry, 7(1), 35-68.

Peshkin, A. (1978). Growing up American: Schooling and the survival of community. Chicago: University of Chicago Press.

Peshkin, A. (1986). God’s choice: The total world of a fundamentalist Christian school. Chicago and London: The University of Chicago Press.

Peshkin, A. (1988). In search of subjectivity: One’s own. Educational Researcher, 17(7), 17-22.

Peshkin, A. (1991). The color of strangers, the color of friends: The play of ethnicity in school and community. Chicago, IL: University of Chicago Press.

Peshkin, A. (1997). Places of memory: Whiteman’s schools and Native American communities. Mahwah, NJ: Lawrence Erlbaum Associates.

Peshkin, A. (2001). Permissible advantage? The moral consequences of elite schooling. Mahwah, NJ: Lawrence Erlbaum Associates.

Pillow, W. S. (2003). Confession, catharsis, or cure? Rethinking the uses of reflexivity as methodological power in qualitative research. International Journal of Qualitative Studies in Education, 16(2), 175-196.

Roulston, K., & Shelton, S. A. (2015). Reconceptualizing bias in teaching qualitative research methods. Qualitative Inquiry, 21(4), 332-342. doi:10.1177/1077800414563803


Research Paper – Structure, Examples and Writing Guide

Research Paper

Definition:

A research paper is a written document that presents the author’s original research, analysis, and interpretation of a specific topic or issue.

It is typically based on empirical evidence and may involve qualitative or quantitative research methods, or a combination of both. The purpose of a research paper is to contribute new knowledge or insights to a particular field of study and to demonstrate the author’s understanding of the existing literature and theories related to the topic.

Structure of Research Paper

The structure of a research paper typically follows a standard format, consisting of several sections that convey specific information about the research study. The following is a detailed explanation of the structure of a research paper:

Title Page

The title page contains the title of the paper, the name(s) of the author(s), and the affiliation(s) of the author(s). It also includes the date of submission and, possibly, the name of the journal or conference where the paper is to be published.

Abstract

The abstract is a brief summary of the research paper, typically ranging from 100 to 250 words. It should include the research question, the methods used, the key findings, and the implications of the results. The abstract should be written in a concise and clear manner to allow readers to quickly grasp the essence of the research.

Introduction

The introduction section of a research paper provides background information about the research problem, the research question, and the research objectives. It also outlines the significance of the research, the research gap that it aims to fill, and the approach taken to address the research question. Finally, the introduction section ends with a clear statement of the research hypothesis or research question.

Literature Review

The literature review section of a research paper provides an overview of the existing literature on the topic of study. It includes a critical analysis and synthesis of the literature, highlighting the key concepts, themes, and debates. The literature review should also demonstrate the research gap and how the current study seeks to address it.

Methods

The methods section of a research paper describes the research design, the sample selection, the data collection and analysis procedures, and the statistical methods used to analyze the data. This section should provide sufficient detail for other researchers to replicate the study.

Results

The results section presents the findings of the research, using tables, graphs, and figures to illustrate the data. The findings should be presented in a clear and concise manner, with reference to the research question and hypothesis.

Discussion

The discussion section of a research paper interprets the findings and discusses their implications for the research question, the literature review, and the field of study. It should also address the limitations of the study and suggest future research directions.

Conclusion

The conclusion section summarizes the main findings of the study, restates the research question and hypothesis, and provides a final reflection on the significance of the research.

References

The references section provides a list of all the sources cited in the paper, following a specific citation style such as APA, MLA or Chicago.

How to Write a Research Paper

You can write a research paper by following this guide:

  • Choose a Topic: The first step is to select a topic that interests you and is relevant to your field of study. Brainstorm ideas and narrow down to a research question that is specific and researchable.
  • Conduct a Literature Review: The literature review helps you identify the gap in the existing research and provides a basis for your research question. It also helps you to develop a theoretical framework and research hypothesis.
  • Develop a Thesis Statement: The thesis statement is the main argument of your research paper. It should be clear, concise, and specific to your research question.
  • Plan your Research: Develop a research plan that outlines the methods, data sources, and data analysis procedures. This will help you to collect and analyze data effectively.
  • Collect and Analyze Data: Collect data using various methods such as surveys, interviews, observations, or experiments. Analyze data using statistical tools or other qualitative methods.
  • Organize your Paper: Organize your paper into sections such as Introduction, Literature Review, Methods, Results, Discussion, and Conclusion. Ensure that each section is coherent and follows a logical flow.
  • Write your Paper: Start by writing the introduction, followed by the literature review, methods, results, discussion, and conclusion. Ensure that your writing is clear, concise, and follows the required formatting and citation styles.
  • Edit and Proofread your Paper: Review your paper for grammar and spelling errors, and ensure that it is well-structured and easy to read. Ask someone else to review your paper to get feedback and suggestions for improvement.
  • Cite your Sources: Ensure that you properly cite all sources used in your research paper. This is essential for giving credit to the original authors and avoiding plagiarism.

Research Paper Example

Note: The example research paper below is for illustrative purposes only and is not an actual research paper. Actual research papers may have different structures, contents, and formats depending on the field of study, research question, data collection and analysis methods, and other factors. Students should always consult with their professors or supervisors for specific guidelines and expectations for their research papers.

Research Paper Example for Students:

Title: The Impact of Social Media on Mental Health among Young Adults

Abstract: This study aims to investigate the impact of social media use on the mental health of young adults. A literature review was conducted to examine the existing research on the topic. A survey was then administered to 200 university students to collect data on their social media use, mental health status, and perceived impact of social media on their mental health. The results showed that social media use is positively associated with depression, anxiety, and stress. The study also found that social comparison, cyberbullying, and FOMO (Fear of Missing Out) are significant predictors of mental health problems among young adults.

Introduction: Social media has become an integral part of modern life, particularly among young adults. While social media has many benefits, including increased communication and social connectivity, it has also been associated with negative outcomes, such as addiction, cyberbullying, and mental health problems. This study aims to investigate the impact of social media use on the mental health of young adults.

Literature Review: The literature review highlights the existing research on the impact of social media use on mental health. The review shows that social media use is associated with depression, anxiety, stress, and other mental health problems. The review also identifies the factors that contribute to the negative impact of social media, including social comparison, cyberbullying, and FOMO.

Methods: A survey was administered to 200 university students. It included questions on social media use, mental health status (measured using the DASS-21), and the perceived impact of social media on their mental health. Data were analyzed using descriptive statistics and regression analysis.
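To make the analysis step concrete, here is a minimal sketch of how such descriptive statistics and a regression could be run in Python. The file and column names (social_media_survey.csv, daily_use_hours, social_comparison, cyberbullying, fomo, and the DASS-21 subscale scores) are hypothetical; the sketch is illustrative only and is not part of the example study.

    # Illustrative sketch only: file and column names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    # One row per respondent.
    df = pd.read_csv("social_media_survey.csv")

    # Descriptive statistics for social media use and DASS-21 subscale scores.
    print(df[["daily_use_hours", "dass_depression", "dass_anxiety", "dass_stress"]].describe())

    # Regression: is daily social media use (together with hypothesised predictors such as
    # social comparison, cyberbullying and FOMO) associated with depression scores?
    model = smf.ols(
        "dass_depression ~ daily_use_hours + social_comparison + cyberbullying + fomo",
        data=df,
    ).fit()
    print(model.summary())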

Results: The results showed that social media use is positively associated with depression, anxiety, and stress. The study also found that social comparison, cyberbullying, and FOMO are significant predictors of mental health problems among young adults.

Discussion: The study’s findings suggest that social media use has a negative impact on the mental health of young adults. The study highlights the need for interventions that address the factors contributing to the negative impact of social media, such as social comparison, cyberbullying, and FOMO.

Conclusion: In conclusion, social media use has a significant impact on the mental health of young adults. The study’s findings underscore the need for interventions that promote healthy social media use and address the negative outcomes associated with social media use. Future research can explore the effectiveness of interventions aimed at reducing the negative impact of social media on mental health. Additionally, longitudinal studies can investigate the long-term effects of social media use on mental health.

Limitations: The study has some limitations, including the use of self-report measures and a cross-sectional design. The use of self-report measures may result in biased responses, and a cross-sectional design limits the ability to establish causality.

Implications: The study’s findings have implications for mental health professionals, educators, and policymakers. Mental health professionals can use the findings to develop interventions that address the negative impact of social media use on mental health. Educators can incorporate social media literacy into their curriculum to promote healthy social media use among young adults. Policymakers can use the findings to develop policies that protect young adults from the negative outcomes associated with social media use.

References:

  • Twenge, J. M., & Campbell, W. K. (2019). Associations between screen time and lower psychological well-being among children and adolescents: Evidence from a population-based study. Preventive medicine reports, 15, 100918.
  • Primack, B. A., Shensa, A., Escobar-Viera, C. G., Barrett, E. L., Sidani, J. E., Colditz, J. B., … & James, A. E. (2017). Use of multiple social media platforms and symptoms of depression and anxiety: A nationally-representative study among US young adults. Computers in Human Behavior, 69, 1-9.
  • Van der Meer, T. G., & Verhoeven, J. W. (2017). Social media and its impact on academic performance of students. Journal of Information Technology Education: Research, 16, 383-398.

Appendix: The survey used in this study is provided below.

Social Media and Mental Health Survey

  • How often do you use social media per day?
      • Less than 30 minutes
      • 30 minutes to 1 hour
      • 1 to 2 hours
      • 2 to 4 hours
      • More than 4 hours
  • Which social media platforms do you use?
      • Others (Please specify)
  • How often do you experience the following on social media?
      • Social comparison (comparing yourself to others)
      • Cyberbullying
      • Fear of Missing Out (FOMO)
  • Have you ever experienced any of the following mental health problems in the past month?
  • Do you think social media use has a positive or negative impact on your mental health?
      • Very positive
      • Somewhat positive
      • Somewhat negative
      • Very negative
  • In your opinion, which factors contribute to the negative impact of social media on mental health?
      • Social comparison
  • In your opinion, what interventions could be effective in reducing the negative impact of social media on mental health?
      • Education on healthy social media use
      • Counseling for mental health problems caused by social media
      • Social media detox programs
      • Regulation of social media use

Thank you for your participation!

Applications of Research Paper

Research papers have several applications in various fields, including:

  • Advancing knowledge: Research papers contribute to the advancement of knowledge by generating new insights, theories, and findings that can inform future research and practice. They help to answer important questions, clarify existing knowledge, and identify areas that require further investigation.
  • Informing policy: Research papers can inform policy decisions by providing evidence-based recommendations for policymakers. They can help to identify gaps in current policies, evaluate the effectiveness of interventions, and inform the development of new policies and regulations.
  • Improving practice: Research papers can improve practice by providing evidence-based guidance for professionals in various fields, including medicine, education, business, and psychology. They can inform the development of best practices, guidelines, and standards of care that can improve outcomes for individuals and organizations.
  • Educating students: Research papers are often used as teaching tools in universities and colleges to educate students about research methods, data analysis, and academic writing. They help students to develop critical thinking skills, research skills, and communication skills that are essential for success in many careers.
  • Fostering collaboration: Research papers can foster collaboration among researchers, practitioners, and policymakers by providing a platform for sharing knowledge and ideas. They can facilitate interdisciplinary collaborations and partnerships that can lead to innovative solutions to complex problems.

When to Write Research Paper

Research papers are typically written when a person has completed a research project or when they have conducted a study and have obtained data or findings that they want to share with the academic or professional community. Research papers are usually written in academic settings, such as universities, but they can also be written in professional settings, such as research organizations, government agencies, or private companies.

Here are some common situations where a person might need to write a research paper:

  • For academic purposes: Students in universities and colleges are often required to write research papers as part of their coursework, particularly in the social sciences, natural sciences, and humanities. Writing research papers helps students to develop research skills, critical thinking skills, and academic writing skills.
  • For publication: Researchers often write research papers to publish their findings in academic journals or to present their work at academic conferences. Publishing research papers is an important way to disseminate research findings to the academic community and to establish oneself as an expert in a particular field.
  • To inform policy or practice: Researchers may write research papers to inform policy decisions or to improve practice in various fields. Research findings can be used to inform the development of policies, guidelines, and best practices that can improve outcomes for individuals and organizations.
  • To share new insights or ideas: Researchers may write research papers to share new insights or ideas with the academic or professional community. They may present new theories, propose new research methods, or challenge existing paradigms in their field.

Purpose of Research Paper

The purpose of a research paper is to present the results of a study or investigation in a clear, concise, and structured manner. Research papers are written to communicate new knowledge, ideas, or findings to a specific audience, such as researchers, scholars, practitioners, or policymakers. The primary purposes of a research paper are:

  • To contribute to the body of knowledge: Research papers aim to add new knowledge or insights to a particular field or discipline. They do this by reporting the results of empirical studies, reviewing and synthesizing existing literature, proposing new theories, or providing new perspectives on a topic.
  • To inform or persuade: Research papers are written to inform or persuade the reader about a particular issue, topic, or phenomenon. They present evidence and arguments to support their claims and seek to persuade the reader of the validity of their findings or recommendations.
  • To advance the field: Research papers seek to advance the field or discipline by identifying gaps in knowledge, proposing new research questions or approaches, or challenging existing assumptions or paradigms. They aim to contribute to ongoing debates and discussions within a field and to stimulate further research and inquiry.
  • To demonstrate research skills: Research papers demonstrate the author’s research skills, including their ability to design and conduct a study, collect and analyze data, and interpret and communicate findings. They also demonstrate the author’s ability to critically evaluate existing literature, synthesize information from multiple sources, and write in a clear and structured manner.

Characteristics of Research Paper

Research papers have several characteristics that distinguish them from other forms of academic or professional writing. Here are some common characteristics of research papers:

  • Evidence-based: Research papers are based on empirical evidence, which is collected through rigorous research methods such as experiments, surveys, observations, or interviews. They rely on objective data and facts to support their claims and conclusions.
  • Structured and organized: Research papers have a clear and logical structure, with sections such as introduction, literature review, methods, results, discussion, and conclusion. They are organized in a way that helps the reader to follow the argument and understand the findings.
  • Formal and objective: Research papers are written in a formal and objective tone, with an emphasis on clarity, precision, and accuracy. They avoid subjective language or personal opinions and instead rely on objective data and analysis to support their arguments.
  • Citations and references: Research papers include citations and references to acknowledge the sources of information and ideas used in the paper. They use a specific citation style, such as APA, MLA, or Chicago, to ensure consistency and accuracy.
  • Peer-reviewed: Research papers are often peer-reviewed, which means they are evaluated by other experts in the field before they are published. Peer-review ensures that the research is of high quality, meets ethical standards, and contributes to the advancement of knowledge in the field.
  • Objective and unbiased: Research papers strive to be objective and unbiased in their presentation of the findings. They avoid personal biases or preconceptions and instead rely on the data and analysis to draw conclusions.

Advantages of Research Paper

Research papers have many advantages, both for the individual researcher and for the broader academic and professional community. Here are some advantages of research papers:

  • Contribution to knowledge: Research papers contribute to the body of knowledge in a particular field or discipline. They add new information, insights, and perspectives to existing literature and help advance the understanding of a particular phenomenon or issue.
  • Opportunity for intellectual growth: Research papers provide an opportunity for intellectual growth for the researcher. They require critical thinking, problem-solving, and creativity, which can help develop the researcher’s skills and knowledge.
  • Career advancement: Research papers can help advance the researcher’s career by demonstrating their expertise and contributions to the field. They can also lead to new research opportunities, collaborations, and funding.
  • Academic recognition: Research papers can lead to academic recognition in the form of awards, grants, or invitations to speak at conferences or events. They can also contribute to the researcher’s reputation and standing in the field.
  • Impact on policy and practice: Research papers can have a significant impact on policy and practice. They can inform policy decisions, guide practice, and lead to changes in laws, regulations, or procedures.
  • Advancement of society: Research papers can contribute to the advancement of society by addressing important issues, identifying solutions to problems, and promoting social justice and equality.

Limitations of Research Paper

Research papers also have some limitations that should be considered when interpreting their findings or implications. Here are some common limitations of research papers:

  • Limited generalizability: Research findings may not be generalizable to other populations, settings, or contexts. Studies often use specific samples or conditions that may not reflect the broader population or real-world situations.
  • Potential for bias: Research papers may be biased due to factors such as sample selection, measurement errors, or researcher biases. It is important to evaluate the quality of the research design and methods used to ensure that the findings are valid and reliable.
  • Ethical concerns: Research papers may raise ethical concerns, such as the use of vulnerable populations or invasive procedures. Researchers must adhere to ethical guidelines and obtain informed consent from participants to ensure that the research is conducted in a responsible and respectful manner.
  • Limitations of methodology: Research papers may be limited by the methodology used to collect and analyze data. For example, certain research methods may not capture the complexity or nuance of a particular phenomenon, or may not be appropriate for certain research questions.
  • Publication bias: Research papers may be subject to publication bias, where positive or significant findings are more likely to be published than negative or non-significant findings. This can skew the overall findings of a particular area of research.
  • Time and resource constraints: Research papers may be limited by time and resource constraints, which can affect the quality and scope of the research. Researchers may not have access to certain data or resources, or may be unable to conduct long-term studies due to practical limitations.

About the author

Muhammad Hassan, Researcher, Academic Writer, Web developer


Understanding different research perspectives


1 Objective and subjective research perspectives

Research in social science requires the collection of data in order to understand a phenomenon. This can be done in a number of ways, and will depend on the state of existing knowledge of the topic area. The researcher can:

  • Explore a little known issue. The researcher has an idea or has observed something and seeks to understand more about it (exploratory research).
  • Connect ideas to understand the relationships between the different aspects of an issue, i.e. explain what is going on (explanatory research).
  • Describe what is happening in more detail and expand the initial understanding (explicatory or descriptive research).

Exploratory research is often done through observation and other methods such as interviews or surveys that allow the researcher to gather preliminary information.

Explanatory research, on the other hand, generally tests hypotheses about cause and effect relationships. Hypotheses are statements developed by the researcher that will be tested during the research. The distinction between exploratory and explanatory research is linked to the distinction between inductive and deductive research. Explanatory research tends to be deductive and exploratory research tends to be inductive. This is not always the case but, for simplicity, we shall not explore the exceptions here.

Descriptive research may support an explanatory or exploratory study. On its own, descriptive research is not sufficient for an academic project. Academic research is aimed at progressing current knowledge.

The perspective taken by the researcher also depends on whether the researcher believes that there is an objective world out there that can be objectively known; for example, profit can be viewed as an objective measure of business performance. Alternatively the researcher may believe that concepts such as ‘culture’, ‘motivation’, ‘leadership’, ‘performance’ result from human categorisation of the world and that their ‘meaning’ can change depending on the circumstances. For example, performance can mean different things to different people. For one it may refer to a hard measure such as levels of sales. For another it may include good relationships with customers. According to this latter view, a researcher can only take a subjective perspective because the nature of these concepts is the result of human processes. Subjective research generally refers to the subjective experiences of research participants and to the fact that the researcher’s perspective is embedded within the research process, rather than seen as fully detached from it.

On the other hand, objective research claims to describe a true and correct reality, which is independent of those involved in the research process. Although this is a simplified view of the way in which research can be approached, it is an important distinction to think about. Whether you think about your research topic in objective or subjective terms will determine the development of the research questions, the type of data collected, the methods of data collection and analysis you adopt and the conclusions that you draw. This is why it is important to consider your own perspective when planning your project.

Subjective research is generally referred to as phenomenological research. This is because it is concerned with the study of experiences from the perspective of an individual, and emphasises the importance of personal perspectives and interpretations. Subjective research is generally based on data derived from observations of events as they take place or from unstructured or semi-structured interviews. In unstructured interviews the questions emerge from the discussion between the interviewer and the interviewee. In semi-structured interviews the interviewer prepares an outline of the interview topics or general questions, adding more as the need emerges during the interview. Structured interviews include the full list of questions. Interviewers do not deviate from this list. Subjective research can also be based on examinations of documents. The researcher will attribute personal interpretations to the experiences and phenomena during the process of both collecting and analysing data. This approach is also referred to as interpretivist research. Interpretivists believe that in order to understand and explain specific management and HR situations, one needs to focus on the viewpoints, experiences, feelings and interpretations of the people involved in the specific situation.

Conversely, objective research tends to be modelled on the methods of the natural sciences such as experiments or large scale surveys. Objective research seeks to establish law-like generalisations which can be applied to the same phenomenon in different contexts. This perspective, which privileges objectivity, is called positivism and is based on data that can be subject to statistical analysis and generalisation. Positivist researchers use quantitative methodologies, which are based on measurement and numbers, to collect and analyse data. Interpretivists are more concerned with language and other forms of qualitative data, which are based on words or images. Having said that, researchers using objectivist and positivist assumptions sometimes use qualitative data while interpretivists sometimes use quantitative data. (Quantitative and qualitative methodologies will be discussed in more detail in the final part of this course.) The key is to understand the perspective you intend to adopt and realise the limitations and opportunities it offers. Table 1 compares and contrasts the perspectives of positivism and interpretivism.

Some textbooks include the realist perspective or discuss constructivism, but, for the purpose of your work-based project, you do not need to engage with these other perspectives. This course keeps the discussion of research perspectives to a basic level.

Search and identify two articles that are based on your research topic. Ideally you may want to identify one article based on quantitative and one based on qualitative methodologies.

Now answer the following questions:

  • In what ways are the two studies different (excluding the research focus)?
  • Which research perspective do the author/s in article 1 take in their study (i.e. subjective or objective or in other words, phenomenological/interpretivist or positivist)?
  • What elements (e.g. specific words, sentences, research questions) in the introduction reveal the approach taken by the authors?
  • Which research perspective do the author/s in article 2 take in their study (i.e. subjective or objective, phenomenological/interpretivist or positivist)?
  • What elements (e.g. specific words, sentences, research questions) in the introduction and research questions sections reveal the approach taken by the authors?

This activity has helped you to distinguish between objective and subjective research by recognising the type of language and the different ways in which objectivists/positivists and subjectivists/interpretivists may formulate their research aims. It should also support the development of your personal preference on objective or subjective research.



Research Article

Subjective data, objective data and the role of bias in predictive modelling: Lessons from a dispositional learning analytics application

Dirk Tempelaar, School of Business and Economics, Maastricht University, Maastricht, The Netherlands (corresponding author; e-mail: [email protected]). Roles: Conceptualization, Data curation, Formal analysis, Investigation, Methodology, Writing – original draft, Writing – review & editing. Contributed equally to this work with: Bart Rienties, Quan Nguyen.

Bart Rienties, Institute of Educational Technology, Open University UK, Milton Keynes, United Kingdom. Roles: Conceptualization, Investigation, Methodology, Writing – original draft, Writing – review & editing.

Quan Nguyen, School of Information, University of Michigan, Ann Arbor, MI, United States of America. Roles: Data curation, Formal analysis, Methodology, Writing – original draft, Writing – review & editing.

  • Published: June 12, 2020
  • https://doi.org/10.1371/journal.pone.0233977

Abstract

For decades, self-report measures based on questionnaires have been widely used in educational research to study implicit and complex constructs such as motivation, emotion, cognitive and metacognitive learning strategies. However, the existence of potential biases in such self-report instruments might cast doubts on the validity of the measured constructs. The emergence of trace data from digital learning environments has sparked a controversial debate on how we measure learning. On the one hand, trace data might be perceived as “objective” measures that are independent of any biases. On the other hand, there is mixed evidence of how trace data are compatible with existing learning constructs, which have traditionally been measured with self-reports. This study investigates the strengths and weaknesses of different types of data when designing predictive models of academic performance based on computer-generated trace data and survey data. We investigate two types of bias in self-report surveys: response styles (i.e., a tendency to use the rating scale in a certain systematic way that is unrelated to the content of the items) and overconfidence (i.e., the differences in predicted performance based on surveys’ responses and a prior knowledge test). We found that the response style bias accounts for a modest to a substantial amount of variation in the outcomes of the several self-report instruments, as well as in the course performance data. It is only the trace data, notably that of process type, that stand out in being independent of these response style patterns. The effect of overconfidence bias is limited. Given that empirical models in education typically aim to explain the outcomes of learning processes or the relationships between antecedents of these learning outcomes, our analyses suggest that the bias present in surveys adds predictive power in the explanation of performance data and other questionnaire data.

Citation: Tempelaar D, Rienties B, Nguyen Q (2020) Subjective data, objective data and the role of bias in predictive modelling: Lessons from a dispositional learning analytics application. PLoS ONE 15(6): e0233977. https://doi.org/10.1371/journal.pone.0233977

Editor: Vitomir Kovanovic, University of South Australia, AUSTRALIA

Received: February 16, 2020; Accepted: May 16, 2020; Published: June 12, 2020

Copyright: © 2020 Tempelaar et al. This is an open access article distributed under the terms of the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: The data, the MPlus and SPSS codes, and the main components of the output are archived in DANS, the Data Archiving and Networked Services of the NWO, the Dutch organization of scientific research. DANS is an open access resource. The final version of this archive, labelled Tempelaar, D, 2020, "Replication Data for PlosOne 2020 manuscript Tempelaar ea", has received the unique handle: https://hdl.handle.net/10411/YAF7CJ, DataverseNL

Funding: The author(s) received no specific funding for this work.

Competing interests: The authors have declared that no competing interests exist.

Introduction

SBBG or the ‘Snapshot, Bookend, Between-Groups’ paradigm is the unflattering description by Winne and Nesbit [ 1 ] of the current state of affairs of building and estimating educational models.

The data reference in that description is provided by the snapshot ‘S’, an S that could equally well stand for a survey or self-report. In the alternative paradigm of a ‘more productive psychology of academic achievement’ [1, p. 671] that the authors offer, one of the suggested paradigmatic changes is the use of trace data collected over time that describe learning episodes, supplemented with some snapshot data. Other researchers go even further in restricting the role of snapshot-type data in educational research. For example, in line with traditions in the area of metacognition that term the different data paradigms off-line and online, Veenman [2, 3] limits the description of the properties of off-line data to a list of ‘fundamental validity problems’, such as the problem of the individual reference point (variability in the perspective chosen by learners), memory problems (failing to correctly retrieve past experiences), and the prompting effect problem (items steering in a direction different from what a spontaneous self-report would bring). If these observations were representative of the direction of empirical educational research, research papers based on questionnaire data would not have a bright future.

Reading through this methodological litany, the asymmetry in descriptions of data sources and data types in educational research stands out. Whereas the fundamental validity problems associated with questionnaire data are typically spelt out in considerable detail, with the issues raised by Veenman [2, 3] being no more than the tip of the iceberg, a critical evaluation of the characteristics of online data, or trace data, is often missing. It is as if online data by definition represent “true”, unbiased data that are both valid and reliable [2–4]. Much to our surprise, an opposite development can be observed in the research area of learning analytics. Several learning analytics researchers explicitly recognize the limitations of models designed on online data only [5, 6]. Research is emerging that seeks to integrate different types of data, as is visible in the new area of dispositional learning analytics, which seeks to supplement trace data, the classical subject of learning analytics research, with questionnaire-measured disposition data, and in educational research using multi-modal data [5, 6].

The aim of this study is to showcase the benefits of critically assessing the characteristics of trace and questionnaire data. This showcase is developed in the context of a dispositional learning analytics application that combines a wide variety of data and data types: trace data from technology-enhanced learning systems, computer log data of a static nature, questionnaire data, and course performance data. In the survey research literature, it is widely acknowledged that although questionnaires and psychometric instruments measuring constructs like anxiety, motivation, or self-regulation have strong internal and external validity, many respondents have a typical response style [7, 8]. For example, some learners are more inclined to an acquiescent style of response (i.e., a tendency toward yea-saying), while others tend toward extreme responses (i.e., using the extremes of the Likert response scale). Similarly, in terms of confidence biases, some learners might underestimate their abilities, skills, and knowledge, while others might overestimate their confidence [9]. One view is to consider these response styles and confidence biases as unwelcome; another is that these “biases” could potentially be used as interesting proxies of underlying features of a respondent. Therefore, as an instrument to characterise this rich set of data, we develop two alternative approaches: one building on the framework of response styles, and an alternative one based on the difference between subjective and objective notions of confidence in one’s learning.

Both response styles and confidence differences serve as a potential source of biases in the data. One would expect this to refer to self-report questionnaire data only, but we will investigate other data types (e.g., trace data, performance data) on the presence of these anomalies. Based on that analysis, we intend to answer two generalised research questions:

  • In a data-rich context consisting of data of questionnaire type, trace data of both process and product type and performance data, how can we decompose each data element into a component that represents the contribution of biases, such as response style bias or overconfidence, and a component independent of biases?
  • If our modelling endeavour aims to design models that help predict course performance or explain the relationships between students’ characteristics that act as antecedents of performance, what lessons can be learned from these decompositions into bias and non-bias components?

First, we will introduce the reader to the three building blocks of our study: dispositional learning analytics, response styles, and subjective and objective confidence measures. Second, we will investigate the presence of response style and confidence difference components in subjective questionnaire data, objective trace data, and learning outcome data types, and discuss their implications.

Three building blocks: Dispositional learning analytics, response styles and confidence measures

Dispositional learning analytics.

Dispositional learning analytics proposes a learning analytics infrastructure [10–12] that combines learning data, generated in learning activities through the traces of technology-enhanced learning systems, with learner data, such as student dispositions, values, and attitudes measured through self-report questionnaires [5]. The unique feature of dispositional learning analytics is the combination of learning data with learner data: digital footprints of learning activities, as in all learning analytics applications, together with self-report questionnaire learner data. In [5, 13], the source of learner data is a dedicated questionnaire instrument specifically developed to identify learning power: a mix of dispositions, experiences, social relations, values and attitudes that influence engagement with learning. In our own dispositional learning analytics research [14–18], we sought to operationalise dispositions with the help of instruments developed in the context of contemporary social-cognitive educational research, so as to make the connection with educational theory as strong as possible. Another motivation for selecting these instruments is that they are closely related to educational interventions. These instruments include:

  • The expectancy-value framework of learning behaviour [ 19 ], encompassing affective, behavioural, and cognitive facets;
  • The motivation and engagement framework of learning cognitions and behaviours [ 20 ] that distinguishes learning cognitions and learning behaviours of adaptive and maladaptive types;
  • Aspects of a student approach to learning (SAL) framework: cognitive processing strategies and metacognitive regulation strategies, from Vermunt’s [ 21 ] learning styles instrument, encompassing cognitions and behaviours (see also [ 22 ]);
  • The control-value theory of achievement emotions, both about learning emotions of activity and epistemic types, at the affective pole of the spectrum [ 23 – 25 ];
  • Goal setting behaviour in the approach and avoidance dimensions [ 26 ];
  • Academic motivations that distinguish intrinsically versus extrinsically motivated learning [ 27 ].

The dispositional learning analytics models we have developed within the above theoretical frameworks fit the current trend in educational research of applying multi-modal data analysis by combining data from a range of different sources. In our research, we invariably find that predictive modelling focusing on learning outcomes or dropout finds formative assessment data to be its dominant predictor. However, these formative assessment data are often less timely than one would wish, for example, for making educational interventions early in the course. The best timely prediction models we were able to design are typically dominated by trace data of product type (e.g. tool mastery scores) combined with questionnaire data, with secondary roles for trace data of process type (e.g. the number of attempts to solve math exercise 21, the number of assignments completed in week 4), owing to their unstable nature [15–18].
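As a rough illustration of what combining trace data of product type with questionnaire data can look like in a timely prediction model, the sketch below fits a simple classifier in Python. The file name, variable names, and the choice of logistic regression are assumptions made for illustration only and do not reproduce the authors' MPlus/SPSS analyses.

    # Hypothetical sketch of a timely prediction model in a dispositional
    # learning analytics setting; names and data are illustrative only.
    import pandas as pd
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score
    from sklearn.model_selection import train_test_split

    students = pd.read_csv("students_week3.csv")  # hypothetical snapshot early in the course

    # Trace data of product type (e-tutorial mastery), trace data of process type
    # (number of attempts), and questionnaire-based dispositions.
    X = students[["sowiso_mastery", "msl_mastery", "n_attempts", "anxiety", "intrinsic_motivation"]]
    y = students["passed_course"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    print("Hold-out AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))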

Response styles

Response styles refer to typical patterns in responses to Likert-scale questionnaire items [7, 8, 28, 29]. Although intensively investigated in marketing, cultural, and health studies, response styles have gone largely unnoticed in empirical educational research. Response styles are induced by the tendency of respondents to respond in a similar way to items, independent of the content of the item, such as yea-saying or seeking extreme responses. In the literature, nine common types of response styles are distinguished (a rough computational sketch follows the list):

  • Acquiescence Response Style, ARS: the tendency to respond positively
  • Dis-Acquiescence Response Style, DARS: the tendency to respond negatively
  • Net-Acquiescence, NARS: ARS-DARS
  • MidPoint Response Style, MRS: the tendency to respond neutrally
  • Non-Contingent Response, NCR: the tendency to respond at random
  • Extreme Response Scale, ERS: the tendency to respond extremely
  • Extreme Response Scale, ERSpos and ERSneg: the tendency to respond extremely positively or extremely negatively
  • Response range, RR: the difference between the maximum and minimum response
  • Mild Response Style, MLRS: the tendency to provide a mild response.
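To make these definitions concrete, the sketch below computes several of these indices for a single respondent's answers to a block of 7-point Likert items. The operationalisations are a straightforward reading of the definitions above and are offered as an assumption-laden illustration, not as the exact computations used in this study.

    # Illustrative response-style indices for one respondent on 7-point Likert items.
    # The cut-offs (above/below the scale midpoint, the two most extreme categories)
    # are assumptions for this sketch; published operationalisations vary.
    import numpy as np

    def response_style_indices(responses, scale_max=7):
        r = np.asarray(responses, dtype=float)
        midpoint = (scale_max + 1) / 2
        ers_pos = np.mean(r == scale_max)            # extreme positive responding
        ers_neg = np.mean(r == 1)                    # extreme negative responding
        indices = {
            "ARS": np.mean(r > midpoint),            # acquiescence: share of positive answers
            "DARS": np.mean(r < midpoint),           # dis-acquiescence: share of negative answers
            "MRS": np.mean(r == midpoint),           # midpoint responding
            "ERS": ers_pos + ers_neg,                # extreme responding
            "ERSpos": ers_pos,
            "ERSneg": ers_neg,
            "RR": r.max() - r.min(),                 # response range
        }
        indices["NARS"] = indices["ARS"] - indices["DARS"]   # net acquiescence
        indices["MLRS"] = 1 - indices["ERS"]                 # mild responding
        return indices

    # Example: one respondent's answers to ten 7-point items.
    print(response_style_indices([7, 6, 7, 4, 7, 2, 7, 7, 5, 7]))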

Longitudinal research into the stability of response styles concludes that response styles function as relatively stable, individual characteristics that can be included as control variables in the analysis of questionnaire data [29]. The largest effects were found in studies of the ERS style [30], but the explained variation never exceeded the level of 10%. Other empirical studies, such as [28, 31], focused on the ERS only. Response styles constitute a highly collinear set of observations, by definition: for example, mild responses are the complement of extreme responses. Therefore, any analysis of response styles has to be based on a selection from the above styles.

In the fifties and sixties, response style research focused on a second antecedent of response styles beyond personality: the domain of the questionnaire. The leading research question in those investigations was whether response style findings can be generalised across different instruments. Findings indicate that this generalisation is partial: response styles contain both a generic component and an instrument-specific component [30, 32]. Empirical research is, however, in most cases limited to the comparison of response styles in two or three instruments; it is only in applications of dispositional learning analytics, as in the current study, that one can investigate commonalities in response styles over a broad range of instruments.

Confidence measures

As a second source of response bias, we sought an indicator of under- or overconfidence, or the difference between a subjective confidence measure and an objective one. Different operationalizations of this can be found, such as judgements of learning, feeling-of-learning judgements, or ease-of-learning judgements [9]. Our operationalization of subjective confidence is best interpreted as a prospective, ease-of-learning indicator. It is based on an expectancy-value framework oriented survey [33], administered at the start of a course, that generates several expectancy scores (such as perceived cognitive competence, or the expectation not to encounter difficulties in learning) and personal value scores. Subjective confidence is then defined as the predicted value of a learning outcome based on survey responses (e.g. the regression of exam performance for mathematics on the scores of the several expectancy and value constructs). A similar procedure can be applied to define objective confidence, whereby we take the predicted value of the regression of the exam score on two objective predictors available at the start of the course: the level of prior education and the score on a diagnostic entry test. The difference between these two regression-based predictions is seen as the difference between subjective and objective confidence, or a measure of subjective overconfidence. The variables used in the calculations of response styles and the confidence difference will be described in the next section.
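Expressed as a sketch (in Python with hypothetical variable and file names, not the authors' MPlus/SPSS code), the construction described above amounts to two regressions and the difference of their fitted values:

    # Hypothetical sketch of the subjective/objective confidence measures described above.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("cohort.csv")  # hypothetical dataset, one row per student

    # Subjective confidence: exam score predicted from start-of-course
    # expectancy-value survey scores (construct names are assumptions).
    subj = smf.ols("math_exam ~ cognitive_competence + expected_ease + task_value", data=df).fit()
    df["subjective_confidence"] = subj.fittedvalues

    # Objective confidence: exam score predicted from prior education track
    # and the diagnostic entry test taken on the first day of the course.
    obj = smf.ols("math_exam ~ C(math_track) + entry_test", data=df).fit()
    df["objective_confidence"] = obj.fittedvalues

    # Overconfidence: how much higher the survey-based prediction is than the objective one.
    df["overconfidence"] = df["subjective_confidence"] - df["objective_confidence"]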

Research methods

Ethics approval was obtained from the Ethical Review Committee Inner City faculties of Maastricht University (ERCIC_044_14_07). All participants in the research provided written consent.

Context of the empirical study

This study took place in a large-scale introductory mathematics and statistics course for first-year undergraduate students in a business and economics program in the Netherlands. The educational system is best described as ‘blended’ or ‘hybrid’ [34]. The main component is face-to-face: Problem-Based Learning (PBL), in small groups of 14 students, coached by a content expert tutor (see [35] for further information on PBL and the course design). Participation in tutorial groups is required. The online component of the blend is optional: the use of the two e-tutorials, SOWISO and MyStatLab (MSL) [18]. This design is based on the philosophy of student-centred education, placing the responsibility for making educational choices primarily on the student. Since most of the learning takes place during self-study outside class, through the e-tutorials or other learning materials, class time is used to discuss the solving of advanced problems. Thus, the instructional format is best characterized as a flipped-classroom design [35].

The student-centred nature of the instructional design requires, first and foremost, adequate actionable feedback to students so that they can appropriately monitor their study progress and topic mastery. The provision of relevant feedback starts on the first day of the course, when students take two diagnostic entry tests for mathematics and statistics, the mathematics test being based on a validated, nation-wide instrument. Feedback from these entry tests provides a first signal of the importance of using the e-tutorials. Next, the e-tutorials take over the monitoring function: at any time, students can see their performance in the practice sessions, their progress in preparing for the next quiz, and detailed feedback on their completed quizzes, both in an absolute sense and relative to their peers. Students receive feedback about their learning dispositions through a dataset containing their personal scores on several instruments, together with aggregate scores. These datasets are the basis of the individual student projects in the second-to-last week of the course, in which students statistically analyse and interpret their personal data and compare it with class means. Profiting from the intensive contact in the PBL tutorial groups, learning feedback is directed at both students and their tutors, with tutors carrying first responsibility for pedagogical interventions.

The subject of this study is the full 2018/2019 cohort of students, i.e. all students who enrolled in the course and completed the learning dispositions instruments: in total, 1080 students (this includes all first-year students, since the student project is a required assignment, but excludes repeat students, who did the project the previous year). The student population was highly diverse: only 21.6% were educated in the Dutch high school system. The largest group, 32.6% of the students, followed secondary education in Germany, followed by 20.8% of students with a Belgian education. In total, 57 nationalities were present. A large share of students was of European nationality, with only 4.8% of students from outside Europe. High school systems in Europe differ strongly, most particularly in the teaching of mathematics and statistics. For example, the Dutch high school system has a strong focus on statistics, whereas statistics is absent from the high school programs of many other European countries. Next, all countries distinguish different tracks of mathematics education at the secondary level; 31.5% of our students were educated at the highest, advanced level preparing for the sciences, and 68.5% at the intermediate level preparing for the social sciences. Therefore, it is crucial that this introductory module is flexible and allows for individual learning paths, which is why we opted for a blended design that provides students with extensive learning feedback generated by the application of dispositional learning analytics [ 18 , 35 ].

Instruments and procedure

In this study, we combine data of different types: course performance measures, Learning Management System (LMS) and e-tutorial trace variables, Student Information System (SIS) based variables, and learning disposition variables measured by self-report questionnaires. Following Winne's taxonomy of data sources [ 4 , 36 , 37 ], our study combines self-report questionnaire data with trace data obtained by logging study behaviours and the specific choices students make in the e-tutorials.

The self-report questionnaires applied in this study are described in S1 Appendix: achievement emotions (A. 1), epistemic emotions (A. 2), achievement goals (A. 3), motivation and engagement (A. 4), attitudes towards learning (A. 5), approaches to learning (A. 6) and academic motivations (A. 7). These questionnaires are all well-established instruments, thoroughly described and validated over decades of empirical research in educational psychology. Most were administered in the first two weeks of the course, on different days, with each administration taking between five and ten minutes. The first exception is the instrument quantifying emotions experienced while participating in learning activities (described in section A. 1), which was administered halfway through the course. This was done to allow students sufficient experience with the learning activities, while avoiding the danger that an approaching exam might strongly impact learning emotions. A second exception is that the motivation and engagement instrument (described in section A. 4) was administered twice: at the start and at the end of the course (T2). Since data from the self-report questionnaires are used by the students in individual statistical projects that analyse personal learning data, the responses cover all students (except for about 15 students who dropped out). To ease the administration of the questionnaires, all instruments used the same response format of a seven-point Likert scale. Students provided consent for their personal data to be used outside the project for learning analytics-based feedback and educational research.

Course performance measures.

The final course performance measure, Grade, is a weighted average of the final exam score (87%) and the quiz scores (13%). Performance in the exam has two components with equal weight: the exam score for mathematics (MathExam) and the exam score for statistics (StatsExam). The same decomposition applies to the aggregated quiz performance for both topics: MathQuiz and StatsQuiz.
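
As a sketch of this weighting (assuming, as the text implies, equal topic weights within the quiz component as well, and that all scores are on the same scale):

def course_grade(math_exam, stats_exam, math_quiz, stats_quiz):
    """Grade = 87% exam + 13% quizzes, with equal topic weights inside each component."""
    exam = 0.5 * math_exam + 0.5 * stats_exam
    quiz = 0.5 * math_quiz + 0.5 * stats_quiz
    return 0.87 * exam + 0.13 * quiz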

Trace data from technology-enhanced learning systems.

Three digital systems were used to organise the learning of students and to facilitate the creation of individual learning paths: the LMS BlackBoard and the two e-tutorials, SOWISO for mathematics and MSL for statistics. From the BlackBoard trace variables, all of the process type, we chose, based on our previous research, BBClicks: the total number of clicks in BlackBoard. From the thousands of trace variables available from the two e-tutorial systems, we selected one product-type variable and a few process-type variables, all at an aggregate level. The product variable represents mastery achieved in the e-tutorials, as the proportion of exercises correctly solved: MathMastery and StatsMastery. The main process-type variables are the number of attempts to solve an exercise, totalled over all exercises (MathAttempts and StatsAttempts), and total time on task (MathTime and StatsTime). Next, the SOWISO system archives the feedback strategies students apply in solving any exercise, resulting in two additional process variables: MathHints, the total number of hints requested, and MathSolutions, the number of worked-out examples requested.

SIS system data and entry tests.

Our university SIS provided several further variables, mainly used for control purposes. Standard demographic variables are Gender (with an indicator variable for female students), International (with an indicator for non-Dutch high school education), and MathMajor (with an indicator for the advanced mathematics track in high school). The MathMajor indicator is constructed by distinguishing prior education preparing for the sciences from prior education preparing for the social sciences. Finally, students were required upon entering the course to complete two diagnostic entry tests, one for mathematics (MathEntry) and one for statistics (StatsEntry).

Data analysis

The data analysis of this study consisted of a sequence of steps. In several of these steps, different options were available as to how to proceed. We will briefly explain the choices we made, without suggesting that other choices cannot work as well. In fact, this study would lend itself to an application of 'multiverse analysis' [ 38 ]: performing the analyses across a set of alternative data sets and alternative statistical methods to find out how robust the empirical outcomes are.

Our dispositional learning analytics-based dataset consisted of several types of data: self-report questionnaire data as the dispositions, trace data from technology-enhanced learning systems, demographic data from the SIS, and course performance data.

All questionnaires were administered with seven-point Likert items (1…7), to simplify responding for students. Since the different instruments applied different labels for the several Likert options, we used the three anchors as labels: the negative pole, the neutral anchor and the positive pole. The 7-point Likert scale is relatively long, whereas most of the response style literature is based on 4-point or 5-point Likert scales. The use of this 7-point scale, as well as the large size of our sample, complies with the outcomes of a recent simulation study [ 39 ] that signals a loss of control of Type 1 error for scales shorter than 7 points and samples smaller than 100.

Researchers investigating very long scales (9-, 10- or 11-point scales) have applied alternative operationalisations of extreme responses, including two extreme response categories at each end of the continuum [ 32 ]. Since the 7-point scale sits between those short and long scales, we opted to analyse both operationalisations: extreme responses defined as the proportion of responses in the single most extreme category, and as the proportion of responses in the two most extreme categories. That is, we defined extreme negative response as the proportion of responses equal to 1 (ERSneg1) or equal to 1 or 2 (ERSneg2), and we defined extreme positive response as the proportion of responses equal to 7 only (ERSpos1) or equal to 6 or 7 (ERSpos2). Analyses were performed for both operationalisations, but reporting was restricted to the case of defining extremity by two categories. An important reason to do so lies in the distributional properties of the data: whereas measures of extreme responses based on the single most extreme category are strongly right-skewed, measures based on categories 1 and 2, or 6 and 7, together are only moderately skewed. Thus, to prevent the need for data transformations that would make the interpretation of the regression outcomes less straightforward, we opted for the current operationalisation. Next, preliminary analysis suggested that the effects of extreme responses depend on their direction, positive or negative. So, whereas most empirical studies of response styles aggregate positive and negative extreme responses into one category [ 29 , 31 ], we chose to differentiate the two directions. As the Results section will indicate, in most of the models we found that positive and negative extreme responses had opposite effects, suggesting that aggregation into a single total extreme response measure is dubious.
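
A minimal sketch of these operationalisations, for a pandas DataFrame of raw item responses on the 1…7 scale (one row per student, one column per item):

import pandas as pd

def extreme_response_scores(items: pd.DataFrame) -> pd.DataFrame:
    """Per-student proportions of extreme responses, using one or two categories per pole."""
    return pd.DataFrame({
        "ERSneg1": (items == 1).mean(axis=1),        # most extreme negative category only
        "ERSneg2": items.isin([1, 2]).mean(axis=1),  # two most extreme negative categories
        "ERSpos1": (items == 7).mean(axis=1),        # most extreme positive category only
        "ERSpos2": items.isin([6, 7]).mean(axis=1),  # two most extreme positive categories
    })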

We used response styles as one approach to operationalising bias. A set of 13 response styles was calculated for all eight questionnaire administrations: ARS, ARSW, DARS, DARSW, MRS, NARS, NARSW, RR, NCR, ERSneg1, ERSpos1, ERSneg2, and ERSpos2, where the last four styles were described above: negative and positive extreme responses, taking one or two response categories into account. By definition, this set of response styles is strongly collinear, making a selection necessary. We followed other empirical studies in this area [ 31 ] by focusing on ERS only as a descriptor of response styles, since this style was found to be relatively stable across repeated measurements and in this way acts as a personality characteristic [ 29 ]. ERS is also the response style with the strongest impact on measures of central tendency for questionnaire scales, and thus the strongest bias.

After computing response style measures for each of the eight questionnaire administrations, we investigated their stability over the different questionnaire instruments and calculated aggregated measures of response styles. In that aggregation, we excluded the second, end-of-course administration of the motivation and engagement instrument, so that the aggregated measures represent averages of response styles from seven different instruments. An advantage of keeping one instrument apart was that it allowed us to investigate both stability and endogeneity (the external validation of our extreme response measures, by investigating their role in the explanation of responses to an instrument not included in the calculation of those measures). Concerning endogeneity: when analysing the relationship between an aggregate response style measure and the outcomes of one survey, does it matter much whether that specific survey was included in or excluded from the calculation of the aggregate measure?

The seven instruments used to generate the aggregated response styles comprised 77 scales in total. A majority of these scales, 46, were of adaptive or positive valence (examples are enjoyment of learning, study management, valuing university, and intrinsic as well as extrinsic motivation). A minority of scales, 13, were of maladaptive (hampering learning activity) or negative (unpleasant) valence (such as a-motivation, boredom, and disengagement). The balance between positive or adaptive and negative or maladaptive items differed from instrument to instrument, thereby impacting response style measures, as described in the literature [ 40 ].

To quantify the role of response styles at the level of individual variables, each variable was regressed on the two extreme response measures. For learning anxiety (LAX), for example, we constructed the ERSpos(LAX) and ERSneg(LAX) scores as the beta weights of the regression of LAX on ERSpos and ERSneg, and decomposed LAX into a predicted part and a residual part, indicated as LAXRS and LAXRScor.

In exactly the same manner, we constructed the ΔConfidence(LAX) score as the beta weight of the regression of LAX on ΔConfidence (see Table B1 in S2 Appendix; since this is a univariate regression, that beta weight equals the correlation), and we decomposed the variable LAX into a predicted and a residual part using the variable ΔConfidence as an instrument. That decomposition is indicated as LAXConf and LAXConfcor. In these decompositions, LAXRS and LAXConf represent the bias components, and LAXRScor and LAXConfcor the de-biased, bias-corrected components.
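
A minimal sketch of this decomposition, with statsmodels as an assumed tool and illustrative column names; the same helper serves both the response style and the confidence difference instruments:

import pandas as pd
import statsmodels.api as sm

def decompose(y: pd.Series, instruments: pd.DataFrame):
    """Split y into a bias part (predicted from the instruments) and a bias-corrected residual."""
    fit = sm.OLS(y, sm.add_constant(instruments)).fit()
    return fit.fittedvalues, fit.resid  # e.g. (LAXRS, LAXRScor) or (LAXConf, LAXConfcor)

# Usage, assuming a DataFrame df with the relevant columns:
# lax_rs, lax_rs_cor = decompose(df["LAX"], df[["ERSpos", "ERSneg"]])
# lax_conf, lax_conf_cor = decompose(df["LAX"], df[["DeltaConfidence"]])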

This procedure was applied to all variables under study, including the 'objectively' measured variables. That is, self-report constructs, trace variables of the process and product types, and course performance variables were all assigned variable-specific scores for ERSpos, ERSneg and ΔConfidence, and were all decomposed into predicted and residual components, both with the response styles and with ΔConfidence as instruments. For the self-report constructs, an alternative operationalisation of extreme responses would have been the extreme response scores of the items belonging to the specific scale. However, this procedure would have limited the analysis to the scale-based self-report variables only and would not have allowed for constructing an overconfidence component in the data; therefore, we opted for the above approach.

The last step in the analysis was to estimate models of educational processes, in three different modes:

  • using only observed, uncorrected versions of the variables, resulting in traditional models based on observed data;
  • using corrected, de-biased versions of the variables only, deriving alternative models that excluded biases resulting from response styles or confidence differences;
  • the combined model, with observed, uncorrected response variables, and as explanatory variables the corrected, de-biased versions of the survey or trace variables together with the response style variables or the confidence difference variable.

In this third model, the bias terms are orthogonal to the bias-corrected variables, allowing us to quantify the differential impact of response styles or confidence difference variables on models of educational processes. All estimated models are of the multiple regression type, for which IBM SPSS version 26 was used. Omega reliability measures were calculated in MPlus version 8.4, using code developed by Bandalos [41, p. 396].
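
For a single equation of the CVTAE model (anxiety regressed on academic control), the three modes could be sketched as follows; column names such as LAXRScor and ASCRScor for the corrected scores are illustrative, not the authors' naming:

import pandas as pd
import statsmodels.api as sm

def ols(y: pd.Series, X: pd.DataFrame):
    return sm.OLS(y, sm.add_constant(X)).fit()

def three_modes(df: pd.DataFrame):
    """Estimate the anxiety equation in the three modes described above."""
    observed = ols(df["LAX"], df[["ASC"]])                            # mode 1: observed data
    corrected = ols(df["LAXRScor"], df[["ASCRScor"]])                 # mode 2: de-biased data
    combined = ols(df["LAX"], df[["ASCRScor", "ERSpos", "ERSneg"]])   # mode 3: combined model
    return observed, corrected, combined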

The several subsections follow the sequential steps in the statistical analysis described above. In the first two subsections, we investigate response styles and confidence differences as sources of bias in questionnaire measurements. All following subsections document comparisons of models estimated with observed scores and models based on corrected scores using the instrumental variables approach. The first three subsections give insight into the outcomes of the decompositions of all variables under study. In the following subsections, we investigate the impact of these decompositions on the design and estimation of educational models. Given that we collected data based on a wide range of theoretical frameworks, a large number of different models can be estimated (and was indeed estimated). Our reporting is based on a somewhat arbitrary selection from all these models. In section four, we estimate the CVTAE model for achievement emotions. Section five investigates epistemic emotions as antecedents of achievement emotions. In the last two sections, we look into models that include other types of data than survey data only: in section six, we predict course performance variables from achievement emotions, and in section seven, from trace data. In all of these modelling endeavours, the main emphasis is on the role of the decomposition of all variables into bias and bias-corrected components.

Response styles of the different instruments demonstrate some variation in descriptive values, as visible in Table 1, which can be explained by the balance between adaptive or positive items on the one side and negative or maladaptive items on the other (in line with findings of other research [ 33 ]). The AEQ instrument has the lowest ARS and ERSpos scores. At the same time, the AEQ has the highest proportion of negatively valenced items (44 out of 54 items, or 81%) and the highest proportion of maladaptive items (33 out of 54 items, or 61%; the eleven learning anxiety items are negatively valenced but of adaptive type). Likewise, the EES has a majority of negatively valenced items. Students tend to disagree with these negatively valenced or maladaptive items, causing lower ARS and ERSpos scores, and higher DARS and ERSneg scores. In contrast, the AGQ contains only positively valenced items, and only adaptive or neutral items (depending on how one classifies items with a performance valence). The AGQ has the highest ARS and ERSpos scores, and the lowest DARS and ERSneg scores.

Table 1: https://doi.org/10.1371/journal.pone.0233977.t001

Scale reliabilities were estimated by two different measures: Cronbach's alpha and the omega measure. Omega has the advantage over alpha that it does not require the strict assumptions that come with alpha and that are violated in many situations [ 42 ]. Omega measures were calculated in MPlus, based on code described in [ 41 ].
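
The omega calculations in this study were done in MPlus with code from [ 41 ]; purely as an illustration of the underlying formula, the sketch below estimates omega-total from a one-factor model as the squared sum of loadings divided by that same quantity plus the sum of unique variances (a Python approximation, not the authors' code):

import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.preprocessing import StandardScaler

def omega_total(items: np.ndarray) -> float:
    """Omega-total for one scale: (sum of loadings)^2 / ((sum of loadings)^2 + sum of uniquenesses)."""
    z = StandardScaler().fit_transform(items)          # standardise the item responses
    fa = FactorAnalysis(n_components=1).fit(z)          # one-factor (congeneric) model
    loadings = np.abs(fa.components_[0])
    uniquenesses = fa.noise_variance_
    return loadings.sum() ** 2 / (loadings.sum() ** 2 + uniquenesses.sum())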

At the same time, the response style measures show a reasonable amount of stability over the instruments, with the exception of the RR and NCR variables: Cronbach's alpha values vary from .64 to .80. Two of the response styles are strongly right-skewed: ERSneg1 and ERSpos1.

Collinearity exists amongst the set of response styles, resulting from the overlap in their definitions: for example, ARS correlates .74 with ERSpos2, and DARS correlates .79 with ERSneg2 (see S3 Appendix). Therefore, we make a selection from the full set of response styles, based on choices made in other research, reliability, and skewness scores. That selection is MRS as the mild response, ERSneg2 as the negative extreme response, and ERSpos2 as the positive extreme response. Since MRS is the complement of ERSpos2 and ERSneg2, we will report the latter two variables in the following sections (referred to in short as ERSpos and ERSneg). ERSpos and ERSneg are positively related, albeit quite weakly, with r = .103 (p = .001).

Confidence scores


ΔConfidence is defined as the difference between subjective and objective confidence. If objective confidence is regarded as the true level of confidence, ΔConfidence represents a measure of overconfidence. ΔConfidence is very weakly related to ERSpos (r = -.066, p = .03), but moderately positively related to ERSneg (r = .273, p < .001), as visible from the scatterplot presented in Fig 1, where each dot represents a student.

Fig 1: https://doi.org/10.1371/journal.pone.0233977.g001

Classification of variables based on response styles or overconfidence

The availability of response style measures allows new ways to categorise our data in educational studies. Rather than using the dichotomy of self-reported versus objectively scored data, we can position each variable of each data type in a two-dimensional plane of response styles: ERSpos and ERSneg. Fig 2 represents such a classification as a scatterplot, where each dot represents a variable of either questionnaire, trace or learning outcome type. Variable numbers (see Table B1, S2 Appendix) are included in the scatter.
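
Our reading of this classification is that each variable's coordinates are its standardised regression weights on ERSpos and ERSneg; a sketch of computing such coordinates (illustrative, not the authors' code):

import pandas as pd
import statsmodels.api as sm

def style_coordinates(df: pd.DataFrame, variables: list) -> pd.DataFrame:
    """Standardised betas of each variable regressed on ERSpos and ERSneg."""
    coords = {}
    for v in variables:
        sub = df[[v, "ERSpos", "ERSneg"]]
        z = (sub - sub.mean()) / sub.std()  # standardise so betas are comparable across variables
        fit = sm.OLS(z[v], sm.add_constant(z[["ERSpos", "ERSneg"]])).fit()
        coords[v] = (fit.params["ERSpos"], fit.params["ERSneg"])
    return pd.DataFrame(coords, index=["ERSpos", "ERSneg"]).T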

Fig 2: https://doi.org/10.1371/journal.pone.0233977.g002

The distance of any point to the origin indicates how strong the role of response styles is in that variable. LAX (2), learning anxiety, has the largest share of response styles in explained variation: 43%. LAX is characterised by a large negative ERSneg score and a modest positive ERSpos score. Other variables with that same characterisation are LHL (4), learning hopelessness, the three epistemic emotions Anxiety (9), Confusion (8) and Frustration (10), and the three maladaptive motivations UC (29), FA (28) and AN (27): uncertain control, failure avoidance, and anxiety. In other words, these are negatively valenced, but mostly activating, emotion, motivation and engagement variables.

The second group of variables, positioned at the middle top of Fig 2, is characterised by high positive values of ERSneg and small values of ERSpos. These variables are ASC (5), academic control, AB (32), academic buoyancy, and the several course performance variables: Grade (56) and the exam and quiz scores on both topics (57–60).

The largest cluster of variables has positive ERSpos scores and about zero ERSneg scores, positioned on the right of Fig 2. These represent the several goal-setting behaviours (13–20), the cognitive and metacognitive scales (39–48), and the academic motivation scales (49–54), all positively valenced scales. A smaller cluster is that of the trace variables (61–68), again with zero ERSneg scores and small positive ERSpos scores, positioned just to the right of the origin of the graph. The correlation between ERSpos and ERSneg, this time with variables as the unit of analysis, is nearly zero (r = .02).

A similar classification can be made for the confidence difference variable. The correlation between ERSneg and the confidence difference, with variables as the unit of analysis, is high, .86 (much higher than the same correlation with students as the unit), which is why Fig 3 is designed as the scatter of ΔConfidence against ERSneg (ΔConfidence is again no more than weakly related to ERSpos, with a correlation of -.18). Observations with low confidence difference are the epistemic emotions Anxiety (9), Confusion (8) and Frustration (10), the three maladaptive motivations UC (29), FA (28) and AN (27): uncertain control, failure avoidance and anxiety, and the two achievement emotions LAX (2) and LHL (4), learning anxiety and hopelessness. These same variables make up the negative pole of ERSneg. At the other pole, of high positive confidence difference values, we find AB (32), academic buoyancy, ASC (5), academic control, and SB (21), self-belief, which are also distinguished by high positive ERSneg values.

Fig 3: https://doi.org/10.1371/journal.pone.0233977.g003

The control-value theory of achievement emotions model

The CVTAE model is the simplest model with which to illustrate the suggested analytic approach within the current dataset. The model contains one predictor variable, academic control (ASC), and four response variables: learning anxiety (LAX), learning boredom (LBO), learning hopelessness (LHL) and learning enjoyment (LJO). The first step is to decompose all five constructs into a response style component and its residual, or a confidence difference component and its residual. Table 2 provides that decomposition.

Table 2: https://doi.org/10.1371/journal.pone.0233977.t002

From the left panel of Table 2, we see that response styles account for a substantial amount of variation in the achievement emotions, up to more than 40% for anxiety and hopelessness. That is not the case in the right panel: explained variation by the confidence difference is at a lower level, at most 10%. The left and right panels coincide concerning the ranking of the variables on explained variation: anxiety and hopelessness demonstrate the largest bias components, enjoyment the lowest. Given that ΔConfidence shares variation with ERSneg, and ERSneg is the dominant predictor of anxiety and hopelessness, this pattern is not surprising.

The four regression equations representing the CVTAE model estimated on observed values are contained in Table 3 .

Table 3: https://doi.org/10.1371/journal.pone.0233977.t003

The same CVTAE model, now based on bias-corrected values, is provided in Table 4. Bias correction is based on response styles (left panel) or the confidence difference (right panel). Bias correction is applied to both the left- and right-hand sides of the four regression equations; that is, for example, anxiety corrected for response styles is regressed on academic control corrected for response styles in the upper left panel.

Table 4: https://doi.org/10.1371/journal.pone.0233977.t004

The effects visible in the two panels are quite different. In the right panel, we see that correcting for overconfidence has only a limited impact: regression betas and explained variation decrease somewhat, but not much. The left panel shows a different picture. Most of the explanatory power is removed by the response style correction of the anxiety and boredom values, together with that of ASC. In the case of boredom, academic control has even lost all of its explanatory power.

The last step in the analysis combines the corrected version of ASC with the bias terms, either ERSpos and ERSneg or ΔConfidence, as predictors of the four observed learning emotion variables. Table 5 details the outcomes of this last step.

Table 5: https://doi.org/10.1371/journal.pone.0233977.t005

Comparing Table 5 with Table 3 signals again a crucial difference between the corrections based on response styles and those based on overconfidence. The right panel demonstrates that explained variation does not increase much after adding overconfidence as a predictor. The left panel stands in contrast: adding the two response style variables has a substantial impact on explained variation.

The epistemic origins of achievement emotions

The epistemic emotions were measured four weeks before the achievement emotions. Where the achievement emotions are embedded within the context of the mathematics and statistics learning tasks discussed in the middle of the course, the epistemic emotions were measured within a more general context: learning for the course in general. Both this difference in timing and the difference in context suggest that epistemic emotions act as antecedents of achievement emotions. To investigate whether this antecedent-consequence relationship is invariant under correction for bias, we follow the same steps as in the previous subsection: first, decompose the predictor variables into bias component(s) and a bias-corrected component, and next investigate the relationships with and without bias correction. Table 6 provides the decomposition of the epistemic emotions measured by the EES instrument.

Table 6: https://doi.org/10.1371/journal.pone.0233977.t006

Response styles explain less variation in epistemic emotions than they do in achievement emotions; the effect of overconfidence is less clear. But similar to the case of the achievement emotions, the variation in epistemic emotions explained by overconfidence is much smaller than that explained by the response styles.

The four regression equations relating the achievement emotions to the epistemic emotions are contained in Table 7 .

Table 7: https://doi.org/10.1371/journal.pone.0233977.t007

Epistemic emotions explain 30% to 50% of the variation in achievement emotions. The pattern visible in the previous section, indicating that anxiety and hopelessness are better explained by academic control as well as by response style or overconfidence, repeats with the current, very different set of predictor variables. That is remarkable, since hopelessness is the only achievement emotion without a corresponding epistemic emotion, which in each of the other three regression equations absorbs most of the predictive power. For hopelessness, it is epistemic anxiety that takes that role, with secondary roles for curiosity, surprise, confusion, frustration and boredom.

The effect of correcting all emotion variables is displayed in two different tables: Table 8 when the correction is based on response styles, Table 9 for the overconfidence case.

Table 8: https://doi.org/10.1371/journal.pone.0233977.t008

Table 9: https://doi.org/10.1371/journal.pone.0233977.t009

Correcting for response styles has a substantial impact on the relationships between epistemic and achievement emotions: all explained variation values diminish in size, primarily because the role of the main predictor variable is diminished. The story of the overconfidence-corrected regression models is different: due to the limited collinearity of overconfidence with both types of emotion measurements, the prediction equations of boredom and enjoyment do not change when correcting the measurements, whereas the prediction equations of anxiety and hopelessness change slightly.

In the last modelling step, we add the bias term (either ERSpos and ERSneg or ΔConfidence) to the set of predictor variables and run the regressions with the observed versions of the achievement emotions. Tables 10 and 11 provide these regression outcomes.

Table 10: https://doi.org/10.1371/journal.pone.0233977.t010

Table 11: https://doi.org/10.1371/journal.pone.0233977.t011

Although the overconfidence variable is a significant predictor of all four achievement emotions (see Table 11), the decomposition of epistemic emotions into an overconfidence part and an orthogonal part does not increase predictive power: explained variation is of the same order of magnitude as in Table 7. The story is again different for the response-style-corrected measurements. Explained variation in Table 10 is substantially higher than that in Table 7. In all four regressions, the predictor with the largest beta is one of the response style variables. The general pattern we can distil from these regressions is that more extreme responses tend to increase the level of positive emotions and decrease the level of negative emotions. Both types of extreme responses have effects in similar directions in the case of boredom and enjoyment, whereas, in the case of anxiety and hopelessness, the effect of the negative type of extreme response dominates the effect of the positive type.

From self-report to course performance

Does bias in self-reports also influence objectively measured constructs, such as course performance? We investigate this again using the AEQ questionnaire data, but would have achieved similar outcomes using other questionnaire data as predictors. As with the other analyses, we start by decomposing the course performance variables into a bias component and a component orthogonal to that bias. One would expect the bias component to be zero, because these are not self-report data; yet, although the bias component tends to be smaller than in the self-report cases, it is nowhere zero, as is clear from Table 12.

Table 12: https://doi.org/10.1371/journal.pone.0233977.t012

The right panel of Table 12 shows that all relationships between the course performance variables and the overconfidence variable are insignificant from a practical point of view: explained variation is always less than 1%. Especially for the first two measures of course performance, Grade and MathExam, this is remarkable, since overconfidence is defined as the difference between subjective and objective confidence, and both confidence constructs are defined by regressing MathExam on a predictor set of self-reports and of objective measures, respectively. Because of this inability of the overconfidence construct to explain variation in course outcome variables, we leave it out of consideration in the remainder of this section and in the next section.

The left panel of Table 12 shows that the story for the response styles is very different. Explained variation is still not impressive for the quiz scores, as intermediate course performance variables, but reaches up to 10% for the final and total scores. In all cases, it is the negative extreme response style that dominates the prediction of course performance scores: students scoring high on the negative extreme response style score, on average, higher on the exam and the quizzes.

Regression equations explaining observed course performance variables from observed CVTAE variables indicate that explained variation is modest: 16% for the final course grade (see Table 13).

Table 13: https://doi.org/10.1371/journal.pone.0233977.t013

The main predictors are academic control and hopelessness. We see marked differences between the two topics of the course, mathematics and statistics. Hopelessness is a strong predictor of mathematics-related performance, but much less so of statistics-related performance. This causes a gap between the explained variation in performance for the two topics: R² measures are highest for the two mathematics-related course performance measures.

Redoing the analysis with response-style-corrected measures yields Table 14.

Table 14: https://doi.org/10.1371/journal.pone.0233977.t014

The explanatory power of all five of these regressions has decreased considerably: explained variation is less than half that of the equations based on observed measures. The last step in the analysis concerns the explanation of the observed performance variables by the response-style-corrected learning emotions plus the two bias terms themselves: see Table 15.

Table 15: https://doi.org/10.1371/journal.pone.0233977.t015

The explained variation visible in Table 15 is back at the level of the equations expressed in observed measures. The role of the main predictor has, however, shifted from the academic control variable to the negative extreme response style: part of the explained variation accounted for by academic control in Table 13 has shifted toward the negative extreme response style in Table 15.

Self-report biases and trace and course performance variables

In this last step of the empirical analysis, we extend the analysis to the trace data and use these trace data to develop regression equations explaining the same five course performance variables as in the previous section. That is, these models are fully based on objective measures, with regard to both response and predictor variables.

As a preliminary analysis, we decompose the trace variables exactly as we did in the previous section, using both response styles and overconfidence as instrumental variables. The outcomes are in Table 16.

Table 16: https://doi.org/10.1371/journal.pone.0233977.t016

We included the corrections for overconfidence (right panel) to demonstrate that, as with the course performance variables, overconfidence has no impact on the trace data collected from the learning processes. The largest R² equals 0.4%, so we disregard this type of correction in the remainder of this section. Explained variation by the response styles is modest too, with a largest R² of 4.0%. Of the three data types incorporated in this study, the learning activity trace data clearly present the weakest relationship with the two bias factors we composed. Another feature of interest is that the relationships of the trace variables with the two response styles seem opposite to those between performance and response styles: we find negative, rather than positive, betas for the negative extreme response style, and positive, rather than absent, betas for the positive extreme response style.

In the explanation of course performance variables by trace variables, we make use of the separate topic scores at hand and the topic-specific trace measures. That is why, in Table 17, the predictors of the two mathematics course performance measures differ from the predictors of the statistics course performance measures, except for BBClicks. To address collinearity within the set of trace variables, we had to remove the MathSolutions variable, which is highly collinear with MathAttempts.
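
One way to screen for such collinearity before dropping a predictor is a variance inflation factor check; the sketch below illustrates such a screening step, without suggesting it is the authors' exact procedure:

import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def vif_table(predictors: pd.DataFrame) -> pd.Series:
    """VIF per predictor; values above roughly 10 flag problematic collinearity."""
    X = sm.add_constant(predictors)
    return pd.Series(
        [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
        index=predictors.columns,
    )

# Usage: vif_table(df[["MathMastery", "MathAttempts", "MathHints", "MathSolutions", "MathTime", "BBClicks"]])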

Table 17: https://doi.org/10.1371/journal.pone.0233977.t017

The main predictors in all four equations are the two product-type trace variables that represent the mastery levels achieved by the students in the two e-tutorials. The other trace variables derived from the two e-tutorials, all of the process type, have negative or zero betas, although they are highly positively correlated with performance in bivariate relationships. In combination with the mastery variable, the negative betas of NoAttempts, NoHints and Time indicate that students who need more attempts, hints or time to reach the same level of mastery score, on average, lower on course performance. Next, we observe that quiz performance is much better explained than exam performance, due to the close connection between the quizzes and the e-tutorials.

Redoing the analysis with response-style-corrected measures yields Table 18.

Table 18: https://doi.org/10.1371/journal.pone.0233977.t018

The regression equations in Table 18 are practically identical to those in Table 17, because the response style corrections have little impact on the trace variables. The last step of the analysis is the regression of the observed performance variables on the response-style-corrected trace variables and the response styles themselves. These outcomes, in Table 19, differ considerably from those in the two previous tables.

Table 19: https://doi.org/10.1371/journal.pone.0233977.t019

Because the course performance variables, in contrast to the learning trace variables, contain a substantial response style component, the explanation of course performance does improve when adding the response styles to the predictor set. In some cases, that improvement is considerable: for the most difficult-to-explain performance measure, MathExam, the increase in explained variation is 40%.

The often-cited drawback of self-report data, such as surveys and psychometric instruments, is their biasedness: self-perceptions are seldom an accurate account of true measures. The question is: is this drawback unique to self-reports? To investigate this question, we constructed two different bias measures: one based on the difference between subjective and objective measures of confidence for learning in university, a type of under- and overconfidence construct, and the other based on extreme response styles. The selected response styles, both positive and negative extreme responses, make up a substantial part of all of the self-reported questionnaire variables, ranging between 7% and 43% of explained variation. That indeed represents a considerable bias. However, the objectively measured performance variables allow the same decomposition and result in response style contributions to explained variation at the lower end of that same range. Learning-system-based trace variables, of both product and process types, are the most resistant to response styles, with a highest contribution to explained variation of 4%. The role of the other type of bias we sought to operationalize, overconfidence, is more modest. It is absent from the course performance variables and the trace variables, and is contained in some of the self-report variables, but nowhere with a variance contribution exceeding 10%.

Negative and positive extreme responses occur in different items: negative extreme responses in items with a negative valence, where the scale mean is below the neutral value, and positive extreme responses in positively valenced items, with scale means above the neutral value. Typically, the two extreme responses do not go together: items with a high ERSneg tend to have about zero ERSpos, such as learning hopelessness, LHL (4) in Fig 2, and items with a high ERSpos tend to have about zero ERSneg. In Fig 2, that is the large cluster of variables on the right: since most items are positively valenced, a large group of academic motivation, goal-setting and learning approach variables ends up in that cluster on the right. There are a few exceptions to this pattern. Several instruments contain an anxiety-related scale, and three of these (LAX, Anxiety, AN) combine negative ERSneg weights with positive ERSpos weights. That is: if we wish to correct anxiety scores for response styles, true anxiety scores are lower than measured ones for students with high ERSneg scores, and true anxiety scores are higher than measured ones for students with high ERSpos scores. If we look at Tables 2 and 6, we see that the correction induced by ERSneg is a consistent and strong one: in all negatively valenced constructs, we find that students with high ERSneg levels exaggerate their negative emotions, so a downward correction is required. Likewise, these students undervalue their positive emotions, so an upward correction is required. The role of ERSpos is more ambiguous and not uniquely determined by the valence of the scale. In several scales, we find that an upward correction is required for students high in ERSpos scores: their anxiety levels are higher than measured, but so are their enjoyment and curiosity levels. The exception is boredom, of both the epistemic and the achievement type: students who tend to provide extreme positive responses exaggerate their boredom levels, calling for a downward correction.

The patterns induced by overconfidence mirror those of the negative extreme response style, but on a smaller scale. Overconfidence increases the level of constructs with a positive valence, such as academic control and enjoyment, and decreases the levels of negative emotions: anxiety of several types, hopelessness, frustration and confusion. Correcting for overconfidence thus implies a downward correction of these positively valenced constructs and an upward correction of the negatively valenced constructs.

Course performance variables follow most of the patterns of the positively valenced self-report scales, in that we find consistent, strong ERSneg contributions. That is, the expected performance levels of students with high ERSneg scores should be corrected in an upward direction. Remarkably, no correction for ERSpos scores is needed, which results in the course performance variables clustering together along the positive part of the vertical axis in Fig 2, together with academic control, ASC (5), cognitive competence (34) and affect (35), the three variables expressing perceived self-efficacy.

In Fig 2, the cluster nearest to the origin is that of the learning activity trace variables. They are the least biased, and thus need no more than a small correction, given that students with high positive extreme response scores tend to have slightly higher average activity levels.

In the literature on 'fundamental validity problems' [ 2 , 3 ], the individual reference problem and the memory problem can both explain the existence of response-style-like differences in answer patterns between students. If the absence of such answer patterns is taken as the definition of the true level of measurement, then it is clear that all of the self-reports, as well as the course performance variables, represent biased constructs, and that the decomposition of these variables into a response style component and a component orthogonal to it is a way of taking the bias out. However, to make validity a meaningful concept, it has to be criterion-related. In educational research, that criterion is that measurement helps in understanding educational theories: theories that relate multiple educational concepts measured with different instruments, or theories that relate such concepts to the outcomes of educational processes. If that is the main criterion, then our definition of validity and bias should change. A valid instrument is then an instrument that contains such typical person-specific response patterns, and bias is then defined as the incapability of the instrument to account for such patterns. In the context of our application: it is the self-report and course performance data that represent the unbiased parts of our data collection, since we aim to investigate the empirical model of the control-value theory of achievement emotions (CVTAE) and its contribution to the explanation of course outcomes, whereas our trace variables represent the biased part of our data collection, due to their inability to account for the typical personal patterns in the data that determine our criterion.

As an illustration, consider the explanation of helplessness by academic control, expressed in three alternative ways following the three modes described in the Data analysis section: a first equation based on observed scores, a second equation based on response-style-corrected scores, and a third equation combining the corrected predictor with the response style terms.

Is it the second equation that we prefer? It has eliminated the impact of response styles, at least those we distinguished, but says nothing about other potential biases. The third equation has the advantage that it allows an impression of the impact of response styles, but it is in an unattractive, non-parsimonious format. One needs all three expressions, because the first two help in understanding the extent to which helplessness and academic control share the same response styles, and the third one provides the decomposition into response style and non-response style parts. However, the problem is that response style is only one source of bias. In this example, the confidence difference brings in a second type of bias, accounting for 7% of the variation. Adding this second correction, or any further correction one can think of, would add explanatory power, but would make for an explanation obviously lacking any parsimony, without the guarantee that all bias sources are covered. That is why we prefer the first formulation of the three equations, knowing that this choice sacrifices at least 10% of the explained variation, a consequence of the circumstance that helplessness carries a larger response style component than academic control can account for.

The outcome of this study that connects with all our previous research [ 14 – 18 ] is that we once more discovered how “dangerous” learning activity trace data of process type can be. In this study, we included NoAttempts, NoSolutions, NoHints, and TimeOnTask as examples of such process variables. All these variables demonstrate strong positive bivariate relationships with all of the learning performance variables, telling the simple message: the more active the student, the higher the expected learning outcomes. Nevertheless, that simple message is deceptive: as soon as we add a covariate of product type, such as Mastery in the learning tool, the role of the process predictors changes radically: relationships become negative or vanish. This is in itself not surprising, and is easily explained by a second simple mechanism. The student who needs to consult more worked-out examples (Solutions), more Hints, more Attempts, or more TimeOnTask than another student to reach the same level of Mastery is learning less efficiently, and is therefore predicted to achieve lower course performance scores on average. The obvious way out of this problem is to find the causes of these efficiency differences and correct for them. That is no easy way to go: although we have access to a huge database of personal characteristics of students, none of these qualified as a proper predictor of learning efficiency. Prior education, diagnostic entry test scores and other variables of this type all explain a small part of these efficiency differences, but no more than that.

Reflecting on our research questions: we do find that self-report survey data and course performance data largely reflect the conceptualisations of the constructs we intended to find in our models. Both types of constructs contain response style components of modest to substantial size. These might be regarded as bias components, in contrast to the trace variables, which lack them. However, is it reasonable to make these traces the standard of our educational theories? When designing models, we hardly ever do so with the prime aim of explaining levels of learning activity traces. The majority of our models seek to understand the outcomes of learning processes, or investigate the relationships between the social-cognitive antecedents of these learning outcomes. Therefore, if these modelling aims define our standard, the bias is on the side of the trace variables, in that they would need to be corrected to include the stable response style patterns that characterise all other variables in our models.

These differences in response style patterns do not necessarily constrain analytical choices. If a sufficiently rich set of self-report data is available, as in our application, we can make a reliable decomposition of all variables in the analysis. Models that build on such a decomposition have the advantage of high predictive power, against the disadvantage of being less parsimonious and more difficult to interpret. If we prefer to stick with parsimonious models that apply measured variables only, without correction, we are indeed restricted in our analytical choices. In our case, that restriction comes down to a limited role for trace variables in the explanation of other types of variables, due to their incapability to capture the response style components.

The inclusion of response styles as separate explanatory factors does change the interpretation of models somewhat. It does so in a manner that is quite intuitive: if we isolate the response style components from the achievement emotions, as in Table 15, the achievement emotions lose part of their predictive power in favour of the response styles. That is exactly what happens in the comparison between Tables 13 and 15: it is still academic control, ASC, and hopelessness, LHL, that predict the several course performance measures, with a positive beta for ASC and a negative beta for LHL, but the absolute sizes of these betas are diminished. That predictive power is now absorbed by ERSneg. This finding can be generalised: when we estimate models of learning processes formulated in terms of variables that share a common component, such as a response style or any other 'bias', we will find inflated estimates, caused by the circumstance that the same bias component is part of both response and predictors. Any predictor that is free of that bias component will also be free of such an inflated estimate.

Limitations and future directions

In our context, we find that it is the trace type of data that stands out, in the sense that these data cannot be easily integrated with self-report and course performance data. That is a robust outcome: the same data-rich context used in this study has been investigated in more than ten years of learning analytics research, always with that same conclusion. Strong heterogeneity in our population may be part of the explanation of why the trace variables are so out of sync with the other measured constructs. High levels of learning activity may signal a student who likes the subject and is very good at it, or a very conscientious student, but may also indicate the extra learning effort required to compensate for low proficiency at the start of the course. Where the heterogeneous population in general benefits most model-building endeavours, it clearly limits the analysis of the relationship between learning activity and learning outcomes. If this analysis were repeated in a more homogeneous sample, we might find more stable roles for the online trace variables. However, that would not solve the other issue: because trace variables are not able to capture the response patterns characterising questionnaire and learning outcome data, their role in empirical models based on multi-modal data remains problematic.

Heterogeneity in our sample is not the only difference from other studies. Quite a lot of studies are based on experimental designs, with limited numbers of participating students, and focus on learning activities of limited intensity. For instance, the Zhou and Winne study [ 4 ] is based on 95 students in a one-hour experimental session, and the Fincham study [ 43 ] on 230 students participating in one of three different MOOCs. These MOOCs lasted five to ten weeks, but per active week, students watched an average of less than one video and submitted between one and two problems. In contrast, our study (and our previous ones) focuses on learning activities of far higher intensity. During our eight-week course, students make, on average, 760 problem-solving attempts in the mathematics e-tutorial and 210 attempts in the statistics e-tutorial, in total more than 120 attempts per week. Given that the number of problems offered per week fluctuates between 40 and 80, a substantial part of these attempts represents repeated attempts, and the need to repeat attempts differs strongly from student to student. Therefore, it is not unlikely that the difference in the role that trace variables of process type play is a consequence of investigating learning in a small-scale experimental design versus in intensive activities in an authentic learning context. More research on the role of the learning context is needed to answer such questions.

A third option to extend this study is to turn it into a multiverse analysis [ 38 ] by investigating alternative data sets and alternative statistical methods to validate our findings in different contexts. The application of robust regression methods is a prime candidate for such an alternative statistical method, as is the use of slider rating scales in the administration of the several self-report instruments.

The last topic for future research concerns the mechanism that might explain the relationships between response styles and learning outcome variables, or trace variables of product type. Potential antecedents of response styles, such as cultural factors or gender, have been researched [ 8 , 29 ]. But these studies are not of much help in explaining why response styles based on questionnaire data show up in other types of data, such as performance data and trace data of product type. More research is needed here.

Conclusions

The large-scale introduction of technology-enhanced learning environments has had a huge impact on education as well as on educational research. Questionnaire data, for a long time the main source of empirical studies of learning and teaching, has lost its prominent position to online data collected as digital traces of learning processes. Online data, to use the term common in metacognitive research, refers to data collected during the learning process itself, by following the student through all learning activity steps. Trace data of process type collected by technology-enhanced learning systems is an excellent example of such online data (whereas trace data of product type is, in fact, part of the offline data, because it refers to reaching a state of mastery, which is not a dynamic process). In that debate, the outcome is invariably that subjective offline data is inferior to objective online data. In our learning analytics research, where the learning outcome is typically the response variable that is explained and predicted, we invariably find the opposite conclusion: it is the online trace data of process type that is inferior to the offline data and the online trace data of product type. In the generation of explanatory models, the role of process-type trace variables is quite unstable, depending strongly on the covariates in the model. Regression betas can become insignificant or switch signs after adding covariates, especially if these are of product type.

In contrast, trace variables of product type tend to play stable roles, with little disturbance from the addition of covariates. The critique of self-report data as being too stable and too trait-oriented [ 37 ] is reversed in this application: it is the trace data of process type, even after aggregation over the full course period, that lacks the stability to act as a reliable predictor. That is the insight one decade of learning analytics research has brought the authors: be very careful with online data of process type, and put more trust in online data of product type, complemented with survey data.

Supporting information

S1 Appendix. Instruments of self-report surveys [44–49].

https://doi.org/10.1371/journal.pone.0233977.s001

S2 Appendix. Descriptive statistics of all variables in the study.

https://doi.org/10.1371/journal.pone.0233977.s002

S3 Appendix. Correlations of mean response styles measures.

https://doi.org/10.1371/journal.pone.0233977.s003

References

  • 2. Veenman MVJ. Assessing Metacognitive Skills in Computerized Learning Environments, In: Azevedo R, Aleven V, editors, International Handbook of Metacognition and Learning Technologies, Springer International Handbooks of Education 26. New York: Springer Science + Business Media; 2013. pp. 157–168. https://doi.org/10.1007/978-1-4419-5546-3_11
  • 3. Veenman MVJ. Learning to Self-Monitor and Self-Regulate. In: Mayer RE, Alexander PA, editors, Handbook of Research on Learning and Instruction. New York: Routledge; 2016, pp. 233–257. https://doi.org/10.4324/9781315736419.ch11
  • 5. Buckingham Shum S, Deakin Crick RD. Learning dispositions and transferable competencies: pedagogy, modelling and learning analytics. In: Buckingham Shum S, Gasevic D, Ferguson R. editors, Proceedings of the 2nd international conference on learning analytics and knowledge. New York, NY: ACM; 2012, pp. 92–101. DOI: 10.1145/2330601.2330629.
  • 9. Schraw G. Measuring metacognitive judgments. In: Hacker DJ, Dunlosky J, Graesser AC editors, The educational psychology series. Handbook of metacognition in education. New York, NY: Routledge; 2009, pp. 415–429. https://doi.org/10.4324/9780203876428.ch21
  • 12. Kovanović V, Gašević D, Hatala M, Siemens G. A novel model of cognitive presence assessment using automated learning analytics methods. SRI Education Analytics4Learning report series, 2017 [cited 2020 February 13]. Available from: https://a4li.sri.com/archive/papers/Kovanovic_2017_Presence.pdf .
  • 15. Tempelaar DT, Cuypers H, Van de Vrie E, Heck A, Van der Kooij H. Formative Assessment and Learning Analytics. In: Suthers D, Verbert K editors, Proceedings of the 3rd International Conference on Learning Analytics and Knowledge. New York, NY, ACM; 2013: pp. 205–209. DOI: 10.1145/2460296.2460337.
  • 18. Tempelaar D, Rienties B, Nguyen Q. Investigating learning strategies in a dispositional learning analytics context: the case of worked examples. In Proceedings of the International Conference on Learning Analytics and Knowledge, Sydney, Australia, March 2018 (LAK’18). New York, NY, ACM; 2018: pp. 201–205. DOI: 10.1145/3170358.3170385.
  • 24. Pekrun R. A social-cognitive, control-value theory of achievement emotions. In: Heckhausen J. editor, Motivational psychology of human development: Developing motivation and motivating development. New York, NY US: Elsevier Science; 2000. pp. 143–163.
  • 25. Pekrun R. Emotions as drivers of learning and cognitive development. In: Calvo RA, D'Mello SK editors, New perspectives on affect and learning technologies. New York: Springer; 2012. pp. 23–39. https://doi.org/10.1007/978-1-4419-9625-1_3
  • 34. Skrypnyk O, Joksimović S, Kovanović V, Dawson S, Gašević D, Siemens G. The history and state of blended learning. In: Siemens G, Gašević D, Dawson S, editors. Preparing for the digital university: a review of the history and current state of distance, blended, and online learning. Edmonton, AB: Athabasca University; 2015. pp. 55–92.
  • 35. Williams A, Sun Z, Xie K, Garcia E, Ashby I, Exter M, et al. Flipping STEM. In: Santos Green L, Banas J, Perkins R, editors, The Flipped College Classroom, Conceptualized and Re-Conceptualized, Part II. Switzerland: Springer International Publishing; 2016. pp. 149–186. https://doi.org/10.1007/978-3-319-41855-1_8
  • 36. Winne PH. Learning strategies, study skills and self-regulated learning in postsecondary education. In: Paulsen MB editor, Higher education: Handbook of theory and research, Volume 28. Netherlands, Dordrecht: Springer; 2013. pp. 377–403. https://doi.org/10.1007/978-94-007-5836-0_8
  • 37. Winne PH, Perry NE. Measuring self-regulated learning. In: Boekaerts M, Pintrich PR, Zeidner M. editors, Handbook of self-regulation. San Diego, CA: Academic Press; 2000. Chapter 16, pp. 531–566.
  • 41. Bandalos DL. Measurement Theory and Applications for the Social Sciences. 1st ed. New York, NY: The Guilford Press; 2018.

What is subjectivity? Scholarly perspectives on the elephant in the room

Open access. Published: 08 November 2022. Volume 57, pages 4509–4529 (2023).

Adrian Lundberg (ORCID: 0000-0001-8555-6398), Nicola Fraschini (ORCID: 0000-0002-9353-6319) & Renata Aliani (ORCID: 0000-0001-9703-9996)

The concept of subjectivity has long been controversially discussed in academic contexts without ever reaching consensus. As the main approach for a science of subjectivity, we applied Q methodology to investigate subjective perspectives about ‘subjectivity’. The purpose of this work was therefore to contribute clarity about what is meant by this central concept and in what way the understanding might differ among Q researchers and beyond. Forty-six participants from different disciplinary backgrounds and geographical locations sorted 39 statements related to subjectivity. Factor analysis yielded five different perspectives. Employing a team approach, the factors were carefully and holistically interpreted in an iterative manner. Preliminary factor interpretations were then discussed with prominent experts in the field of Q methodology. These interviewees were selected because they were clearly represented by a specific factor, and their input further enriched the narratives presented. Despite some underlying consensus that subjectivity has a dynamic and complex structure and denotes an individual's internal point of view, perspectives differ with regard to the measurability of subjectivity and the role context plays in its construction. In light of the wide range of characterisations, we suggest that the presented perspectives be used as a springboard for future Q studies and urge researchers, within and beyond the Q community, to be more specific regarding their application of the concept. Furthermore, we discuss the importance of attempting to deeply understand research participants in order to truly contribute to a science of subjectivity.

1 Introduction

According to an ancient fable, six blind men set out to use their sense of touch to investigate the nature of an elephant, a creature they had never encountered. Each man touched a different part of the animal. One perceived the elephant to be a wall (side), another was sure he was touching a snake (trunk), and a third was convinced that he had just put his hands on a tree (leg). The other three touched the elephant's tusk, ear and tail, and each came to yet another conclusion about the nature of the elephant. Because human beings tend to believe that what they subjectively perceive is the absolute truth, the six men were not able to agree.

If this fable is used as a metaphor for human beings' subjective experience of the world around them, then research needs methodologies that help us understand subjectivity and uncover consensus and points of disagreement across individuals' lived experiences. One of these methodologies was developed by British physicist and psychologist William Stephenson (1902–1989) in the 1930s (Stephenson 1935). Later proclaimed as “the best-developed paradigm for the investigation of human subjectivity” (Dryzek and Holmes 2002, p. 20), Q methodology provides researchers with a framework and a set of methodological steps to study subjectivity (see Footnote 1 at the end of this article). First, researchers select a representative Q sample from the concourse, that is, the corpus of subjective communicability about a selected topic (Brown 2019). Second, participants are engaged in a Q sorting activity with a guiding condition of instruction, where they arrange the Q sample items according to their point of view. The resulting Q sort is then used as data for factor analysis. Finally, emerging factors are iteratively interpreted by the researchers.

Like any other methodology, Q has its limitations and critiques. A constant companion throughout its existence has been the criticism that Q is misguided or improper with regard to statistics (McKeown and Thomas 2013; Ramlo 2016). Many of these critical voices stem from researchers who are uncomfortable with Q's hybridity regarding qualitative and quantitative processes (Stenner and Stainton-Rogers 2004), or with the fact that Stephenson, in his first announcement of Q methodology in 1935, suggested inverting the factor analytical procedure. As opposed to the more traditional and well-known R methodological factor analysis, Q factor analysis groups persons based on the similarity of their sorts. As a consequence, the resulting factors provide substantive generalisations about a phenomenon (Thomas and Baas 1993).
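
A minimal numerical sketch of this inversion is given below, using random stand-in sorts rather than real Q data; the 39-by-46 shape simply mirrors the design reported later in this paper, and the choice of five components is arbitrary.

```python
# Sketch of the Q-methodological inversion: correlate persons (not items) and
# factor the person-by-person correlation matrix. Data are random stand-ins.
import numpy as np

rng = np.random.default_rng(1)
n_items, n_persons = 39, 46
qsorts = rng.integers(-5, 6, size=(n_items, n_persons)).astype(float)

# R methodology would correlate the 39 items; Q correlates the 46 persons.
person_corr = np.corrcoef(qsorts, rowvar=False)          # 46 x 46

# Principal-component extraction of the person correlations (unrotated loadings).
eigvals, eigvecs = np.linalg.eigh(person_corr)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
loadings = eigvecs[:, :5] * np.sqrt(eigvals[:5])         # one row per person

print(loadings.shape)   # (46, 5): persons grouped by how similarly they sorted
```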

Relatively recently, an attempted academic dialogue between prominent Q researchers and two academics from outside the Q community was published in Quality & Quantity (Kampen and Tamás 2014; Brown et al. 2015; Tamás and Kampen 2015). One of the core points of disagreement between the two research teams seemed to be the nature of subjectivity. In fact, a conceptualisation of subjectivity is anything but straightforward. Researchers within and beyond the Q community seem to perceive it in various ways, and only very few try to define subjectivity. In addition, if Q methodological studies are supposed to contribute to a science of subjectivity (Brown 2019; Ramlo 2022), it is troubling to realise that there is no conceptual consensus. This, in turn, might explain the lack of deep and holistic interpretations that connect results to the larger question(s) pertaining to subjectivity, as opposed to the mere reporting of factor descriptions in Q methodological publications (Wolf 2009; Albright et al. 2019).

In that sense, subjectivity represents the elephant, and this paper uses Q as an approach to explore and uncover the range of perspectives about the very concept of subjectivity among researchers across various disciplines and geographical locations, regardless of their expertise in Q methodology. The guiding research questions for this paper are:

RQ1 : What are the different perspectives on the meaning and characteristics of subjectivity?

RQ2 : What are the main aspects of subjectivity about which scholars particularly agree or disagree?

In the next sections, we provide a snapshot of the different existing perspectives about subjectivity, and particularly discuss Stephenson's ideas on this concept. Then, we illustrate the procedure followed in this study, and report and discuss the results. We conclude with some suggestions on how to adopt Q methodology to contribute to a science of subjectivity.

2 Subjectivity in scientific research and Q methodology

An historical overview of the development of the concept of subjectivity, already conducted in detail by others (see Hall 2004 ), is outside the scope of this paper. We limit our literature review to those aspects we believe to be essential to an understanding of Q methodology as a means of investigating the science of subjectivity.

2.1 Subjectivity and objectivity

The Western production of knowledge has been defined, for many centuries, by the Cartesian distinction between mind and body, and between objectivity seen as impartial truth and subjectivity seen as the characteristic of a faulty individual (Hanson 2015; Hall 2004). Subjectivity has been considered a hindrance in the quest for objectivity, a form of impurity to be avoided at all costs (Shapin 2012). Such subjectivity has been associated with personal perspectives, individual goals, deviation from standards, and distorted or biased evaluations (Sabini and Silver 1982), and attempts to exclude it from scientific research were characteristic of many disciplines in the past.

In the twentieth century, several scholars observed that removing the personal perspective from the observation of social phenomena constitutes a form of distortion (Boon 2007). The exclusion of subjectivity from the pursuit of knowledge is not possible, since subjectivity and objectivity are intertwined (Stenner 2008), and explicitly studying subjectivity is crucial to gathering reliable evidence (Lundberg et al. 2020). The shift in psychology from studying people's minds as if they were in a vacuum to studying people's minds while acknowledging an individual's subjectivity, a shift likened to the turn from classical physics to quantum mechanics (Hwang and Choi 2002), highlights the understanding that subjectivity does not negate objective reality but is only a perspective from which to look at it (Sabini and Silver 1982).

Subjectivity and objectivity “can both be seen as aspects of the constructivist idea of human participation in knowledge making” (Hanson 2015, p. 859). In these terms, objectivity can be seen as shared subjectivities, multiple points of view from which to observe aspects of the same reality. Indeed, many forms of knowledge in modern societies are attempts to provide an objective measurement by removing the subjective component from the evaluation process (Phillips 2016) and are thus examples of “objectified subjectivity” (Shapin 2016, p. 437). These attempts, called inter-subjectivity engines by Shapin (2012; 2016), allow people to share their subjective experiences through language, and show how people's subjectivities combine to inform objectivity by finding agreement among shared points of view.

2.2 Subjectivity in Q methodology and the work of William Stephenson

Stephenson’s work too is an attempt to overcome the body/mind and subjectivity/objectivity dualism (Good 2010 ). Instead of considering subjectivity in opposition to objectivity, Stephenson viewed subjectivity as expressed through objectivity (Midgley and Delprato 2017 ), and he stated that “the Q-technique could be applied to subjective as well as objective behaviour, there could be no valid basis for their separation” (Stephenson 1953 , p. 25).

Stephenson ( 1968 , p. 501) defined subjectivity as “what one can converse about, to others, or to oneself”; that is the communication of a personal point of view about matters of social or personal importance (McKeown and Thomas 2013 ). This personal viewpoint expresses the individual’s subjectivity through their reflection upon their lived experiences (Stephenson 1982 ). The link between personal opinion and behaviour was summarised in Parloff’s law of behaviour, stating that “self-referred operants are homologous with lived (objective) experience”, which indicates that “one’s behaviour can be the reflection of one’s opinion” (Stephenson 1974 , p. 14).

Behaviour is central to the understanding of subjectivity in Q methodology. Stephenson's (1953) concept of behaviour includes attitudes, thinking, self-conceptions, personality, as well as social behaviour. Consequently, behaviour is “neither mind nor body nor physiology: it is simply behaviour, whether subjective to a person or objective to others” (Stephenson 1953, p. 23). By rejecting the mind/body dualism and the idea of the mind as the place of subjectivity, behaviour becomes itself the location of subjectivity (Midgley and Delprato 2017). Behaviour, and therefore subjectivity, is not just a phenomenon out in the open (Stephenson 1953) and therefore empirically observable (Stephenson 2014); its internal structure is also measurable through Q methodology (Brown 1980; Brown et al. 2015). Such subjectivity is complex (Stephenson 2006) and highly contextual (Stephenson 1987), but it does not depend only on the environment: the individual is focal, as subjectivity is self-referential (Stephenson 1987) and grounded in personal experiences.

Stephenson’s subjectivity is “rooted in conscire, in the common knowledge, the shareable knowledge known to anyone in a culture” (Stephenson 1980a , p. 15). Consciring is at the foundation of the concourse of a Q study, i.e., “a random collection of self-referable statements about something” (Stephenson 1993 , p. 5). The concourse does not only represent common knowledge, it is also a common language that allows people to express beliefs and attitudes regarding a given topic (Stephenson 1980b ). The use of a common language (the Q sample) enables people to operationalise and express their subjectivity with self-reference. Subjectivity can therefore be reached operantly (Stephenson 1968 ), which means that by allowing the study participants to express it through the sorting of the Q sample, subjectivity is transformed into operant factor structure accessible to the researcher (Stephenson 1980b ; 1982 ). Stephenson, in formulating his interpretation of Newton’s fifth rule, pointed out that the operant factor structure is what allows the formulation of new, different, and subjective hypotheses, which are “inherent to the concourse” (Stephenson 1982 , p. 51). The result is that in Q methodology “what is involved is the discovery of hypothesis and reaching understanding, instead of testing hypothesis by way of predictability and falsifiability” (Stephenson, in Brown 1980 , p. X).

2.3 From the nature of subjectivity to a science of subjectivity

Wolf (2009) observed a lack of specific attention to the nature of subjectivity in the majority of recently published Q studies. Later, Albright et al. (2019) confirmed that most focus is put on the statistical aspect of the Q method. A potential reason might be the misconception that quantification implies objectivity and validity (Ramlo 2022), despite the fact that “the dividing line between R methodology and Q methodology turns on the fundamental distinction between what is objective and what is subjective” (Brown et al. 2015, p. 528). However, Q methodological researchers should understand and accept the subjective side of the scientific process to let the respondent's view of reality emerge from the data through the interpretation process. Therefore, more than the statistical procedure, it is the interpretation step and the researcher's judgement that are at the core of a contribution to a study of subjectivity. This step is often considered arbitrary by critics of Q methodology (Brown 1980), but factor interpretation, “subjective as it may be, must square with the known facts” (Brown 1980, p. 257); i.e., the interpretation must adhere to the factor arrays and other empirical evidence.

If we consider that “science cannot rest with mere narrative: It asks for proof” (Stephenson 1985 , p. 103), simply describing a factor is not enough to contribute to a science of subjectivity. During the interpretation process, researchers must show empathy and get a ‘feeling for the organism’ (Brown 1989 ), which means putting themselves in the participant's shoes to “provide the feelings of the sorters who define the factor” (Albright et al. 2019 , p. 135). Such a deep level of understanding, allowing conversations to occur among resultant factor viewpoints, where each sorter is “examined on its own terms” (Brown 1989 , p. 95), is possible only with the researcher’s knowledge of the concourse, situation, context, and participants (Ramlo 2022 ). This process is supported by the “logic of everyday sense-making” (Wolf 2009 , p. 24), which guides the researcher’s attention to the discovery of new meaning, in line with a more recent view of abduction for explanatory reasoning in justifying hypotheses (Douven 2021 ). In Q methodology, abduction was historically used with regard to the theoretical rotation of factors to generate hypotheses.

To further facilitate the interpretation task, interviews are a common and important tool (Brown 1980; Stephenson 1953), since “in a science of subjectivity, the observer is the Q sorter, who is the only person in direct contact with his or her own point of view and therefore the only person who can directly inform on it” (Brown et al. 2015, p. 534). Follow-up interviews can be organised with pure or highest factor representatives (Albright et al. 2019), and short post-sort interviews can be conducted with all participants (Watts and Stenner 2012). In the case of online collection of the Q sort, participants can be invited to provide written comments. Beyond the interview, Albright et al. (2019) proposed analysing the results through multiple iterations, valuing team interpretation to add new ideas and fresh perspectives at each iteration. Additionally, conveying the feelings of the sorters who define a factor requires a holistic view that considers the rating of the statements both within a factor and across factors. Researchers should examine all statements, thinking about why they have been rated as they have been, or suggest hypotheses if there is no apparent reason (Watts and Stenner 2012).

Q methodology provides researchers with the theoretical ground to overcome the subjectivity/objectivity dualism, and with the methodological tools to investigate subjectivity. Nevertheless, since much of the discussion around Q methodology lacks attention to the nature of subjectivity, essential for a contribution to a science of subjectivity (Wolf 2009 ), we feel it necessary to explore how researchers, with or without knowledge of Q methodology, understand the concept of subjectivity.

3 Method and procedure

To explore scholars’ perspectives on the concept of subjectivity, we followed a series of steps that are typical of Q methodology.

3.1 Generation of a concourse

Following what McKeown and Thomas ( 2013 ) call a ‘naturalistic process’, we collected a total of 102 statements related to subjectivity from Q specific and broader literature as well as from discussions with academic colleagues who may or may not have been conversant with Q. Many works considered in the literature review above have been used, among others, to inform the concourse.

3.2 Reduction of the concourse to a manageable set of statements (Q sample)

We reviewed the concourse and deleted unclear and overlapping statements. The statements taken from Q methodology and non-Q methodology literature were then reworded to make them begin with the words “Subjectivity is…”. This process produced a set of 45 statements. Subsequently, a peer familiar with Q methodology and working in education, and a peer not familiar with Q methodology and working in the Humanities were asked to review the list of statements. After their feedback, a few statements were further refined and the sample was reduced to 40 statements.

3.3 Setting of the online instrument to collect the sorts, followed by further adjustment of the Q sample and sorting instructions

We set up the online app, developed at the University of Western Australia and already used for previous Q studies (Fraschini and Park 2021, 2022). This online instrument replicated the steps that are part of the Q sorting activity (Watts and Stenner 2012) and presented full instructions to complete the task. The online application also allowed the participants to comment on the placement of statements. The instrument and the 40-statement Q sample were tested by two academics, different from those who had previously reviewed the list of statements. One of the two was an academic au fait with Q methodology and working in the Humanities; the second had not used the methodology before and was in the Social Sciences. After receiving their feedback, we produced the final 39-statement Q sample (appendix A) and adjusted the wording of the instructions. The sorting grid was finalised as in Table 1.
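
Since Table 1 is not reproduced here, the sketch below shows what a forced quasi-normal sorting grid for a 39-item Q sample ranked from −5 to +5 typically looks like. The column heights are hypothetical; they are chosen only so that the slots sum to 39 and do not necessarily match the study's actual grid.

```python
# Hypothetical forced quasi-normal grid for 39 statements ranked from -5 to +5.
# Column heights are an assumption, not the study's Table 1.
columns = {-5: 2, -4: 3, -3: 3, -2: 4, -1: 5, 0: 5, 1: 5, 2: 4, 3: 3, 4: 3, 5: 2}

assert sum(columns.values()) == 39, "one slot per statement"

for rank, slots in columns.items():
    print(f"{rank:+d}: " + "[] " * slots)
```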

3.4 Recruitment of the study participants and forwarding of the Q sort activity link

After receiving Human Research Ethics approval from the University of Western Australia, we invited colleagues in the Humanities, Education, and Social Sciences fields based in Europe and Australia, with and without Q methodology expertise, and without regard to their stance towards Q methodology. The call for participants was also posted on an internal research notice newsletter of a large Australian university and, to broaden the geographical representation of the participants, was extended to an online Q methodology group and a Q methodology scholarly association in East Asia. All participants completed the sort in an anonymous form.

Additionally, we personally contacted several people considered influential in Q methodology, upon consideration that, among others, “a new generation of researchers will become true academic scholars if we not only model good research practice but also establish communities that include synergistic mentor–mentee relationships” (Ramlo 2016 , p. 42). Among prominent Q methodology scholars, Steven Brown, Susan Ramlo, Noori Akhtar-Danesh, Peter Schmolck, and Alessio Pruneddu accepted our invitation to participate in a non-anonymous form, and are therefore identifiable in the remainder of this paper. These academics were asked to complete a Q sort so that their perspectives could be compared to that of others, and some were invited to participate in a follow-up interview.

The description of the 46 participants is reported in Table 2. Members of the P set were based across four continents, and their expertise spans a range of disciplines covering humanities, arts, social sciences, science, and health sciences.

A link provided access to the on-line instrument, whose landing page presented the study's Information Form and Consent Form. The application then generated a random code, which kept the sorts anonymous and could be used to withdraw a sort after submission or to retrieve it at a later time. Before starting the sorting task, the participants were asked to provide the demographic information summarised in Table 2.

3.5 Analysis

The 46 Q sorts collected were analysed with KADE v.1.2.1 (Banasick 2019), where different factor analytic solutions were explored. Because Centroid factor analysis yielded a factor that nobody identified with, we extracted the factors with PCA. The four factors within this clearer solution were then subjected to Varimax rotation. Significant sorts were flagged manually at p < 0.01, following the formula reported in Brown (1980). One sort loaded negatively on Factor 3, making it a bipolar factor. We considered the perspective expressed by this negative sort to be theoretically relevant, and therefore decided to split this factor into Factor 3a and Factor 3b. The rotated factors with flagged sorts are available in appendix B, and the statistical description of the factors is reported in Table 3.
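
For readers unfamiliar with the flagging criterion, the sketch below computes the p < 0.01 threshold commonly attributed to Brown (1980), namely 2.58 times the standard error of a zero-order loading, 1/√(number of items). The loadings used here are invented for illustration; the study's actual loadings are in appendix B.

```python
# Sketch of the p < 0.01 flagging rule: |loading| > 2.58 / sqrt(n_items).
# Example loadings are made up; real ones would come from the KADE output.
import math

n_items = 39
threshold = 2.58 / math.sqrt(n_items)          # roughly 0.41 for a 39-statement Q sample

example_loadings = {"sorter_01": 0.62, "sorter_02": 0.35, "sorter_03": -0.48}
flagged = {s: l for s, l in example_loadings.items() if abs(l) > threshold}

print(f"threshold = {threshold:.3f}")
print("flagged sorts:", flagged)   # includes the negative loader (cf. the bipolar factor)
```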

We could not detect any pattern regarding the geographical location of the participants; therefore, this aspect is not dealt with further. Only those sorts representative of a factor (view) are used for the construction of factor arrays.

3.6 Interpretation of factors

The goal of the interpretation is to pursue understanding and synthesis, proceeding in a bottom-up direction, for the whole of the participant’s subjectivity (Stephenson 1982 ). In other words, it is not only about describing the range of viewpoints, but the reasons behind differences and similarities between factors. We took an iterative approach to craft preliminary descriptions of the factors (Albright et al. 2019 ). This means “revisiting the data with a fresh perspective, allowing the information to incubate as we developed themes and generalisations further” (Albright et al. 2019 , p. 142). Firstly, as a team we compared the statements within a factor by considering the most salient ones, i.e., those ranked at the extremes, and the list of distinguishing statements. Secondly, we compared the statements across factors. Thirdly, we examined the list of statements ranked from consensus to disagreement. This process was conducted considering two analysis outputs, one which included Factor 3b and another one which excluded it, since a split factor may affect the consensus statements. As a fourth step, we individually worked on the narrative descriptions, which were then compared and integrated through team discussion. As a final step, we reviewed the original sort of each participant loading on a factor and their comments, and excerpts from the comments were used to further support the narrative. Most steps of the interpretation process were undertaken as a team to “minimise the effects of the researcher bias” (Albright et al. 2019 , p. 143), and following Kitzinger’s ( 1999 ) suggestion about the verification of the adequacy of the factor description and interpretation by the readers, the full factor arrays are available in appendix A . Additionally, we acknowledge that we loaded on Factor 1 and come from a background in education and language studies.

3.7 Interviews with participating experts to gather additional qualitative data

Interviews were scheduled with some of the experts who completed the Q sorts to member-check the narrative description of the factors, and to minimise the risk of “meanings being inadvertently imposed on the research participants” (Kitzinger 1999 , p. 269). The researchers contacted were Susan Ramlo (Factor 1), Alessio Pruneddu (Factor 2), and Steven Brown (Factor 4). The interviews were conducted online, lasted between 40 min and 1 h and 15 min and included three steps. First, participants were shown the factor description without the ranking of the statements, and were asked whether they could recognise their subjective perspective within the description of the factor. In a second step, they were shown the ranking of the statements within the narrative description and asked to comment on what they felt strongly about and where the description did not match their preference. Finally, the participants were shown the factor array and their individual sort, and were invited to comment on the statements that showed the greatest discrepancy. The interviews were recorded, and written notes were taken about portions of the discussions that seemed particularly relevant for our deeper understanding of the factors (Albright et al. 2019 ). The factor interpretations presented in the next section of this paper were adapted and finalised in an additional iteration.

4 Results

This section illustrates the different perspectives that the scholars participating in this study hold about the concept of subjectivity. The narrative of each factor, which represents a shared viewpoint, is introduced by a short description of the participants loading on that factor. The numbers in brackets indicate the relevant statement, followed by the ranking of that statement for the specific factor (see also appendix A). Comments, which foreground the participants' points of view in their own words, are reported together with the participant's code (appendix B).

4.1 Factor 1. Subjectivity is the lens individuals use to understand the world, and it is measurable

Eighteen participants are associated with Factor 1. Eight are researchers in a field related to education, two in health and medical sciences, four in the social sciences, two in linguistics, one in psychology, and one in marketing. Thirteen of these participants have previously used Q methodology for their research, while five are not Q methodologists. Among highly cited Q methodology scholars, Susan Ramlo, Noori Akhtar-Danesh, and Peter Schmolck are associated with this factor.

Participants associated with this factor share the perspective that subjectivity draws on the individual's own experience (28, + 2) and that it is what individuals use to make sense of the world (35, + 5). Subjectivity can be seen "as the process of making sense of the world as one engages in communicable thought with oneself and the world through discourse” [45socQ] and as “the lens through which we see and interpret the world” [10socN]. In that sense, subjectivity is highly contextual (36, + 4), as it represents “how one sees the world around himself/herself” [28heaQ]. Furthermore, for these participants, subjectivity represents their beliefs (30, + 4), something “subjective, true for me, and maybe only for me” [04eduQ] and as such neither right nor wrong (39, + 5), because "the reality we experience is our reality, which may be different to others" [06heaQ]. In other words, subjectivity is personal but not disconnected from the external world and does not indicate a lack of objective reality (2, − 5). Subjectivity is not something metaphysical located only in people's minds (5, − 4); instead, it is empirically observable (37, + 3) and therefore measurable (27, + 3) either “through a person’s Q sort on a topic” [22eduQ], or by other individuals (8, − 3) since it manifests through "behaviour or discourse” [41humN]. For the participants associated with this factor, subjectivity is not a site of struggle (33, − 3), a characteristic more often associated with subjectivity understood as strongly influenced by social discourse and ranked positively in other factors.

To summarise, subjectivity for the participants associated with Factor 1 represents an individual's understanding of the world, which is not in opposition to external objective reality. Furthermore, such subjectivity is observable and measurable, and it is not the result of a struggle with the surrounding discourse.

4.2 Factor 2. Subjectivity is the unique and complex result of the interaction between individual and context, and not measurable

Ten participants are associated with Factor 2. Three are researchers in education, two in psychology, two in fields related to humanities, two in medical and health sciences, and one in social sciences. Among them, four have used Q methodology in their research. Among influential Q methodology scholars, Alessio Pruneddu is associated with this factor.

Participants associated with Factor 2 share the perspective that subjectivity is complex (21, + 5), socioculturally influenced (15, + 4), and contextual (36, + 5), because “one’s subjectivity is highly contingent on the surroundings and the environment in which they were raised, thus is both contextual and socio-cultural" [38socN]. Its complexity results from the fact that it "isn't fixed but changes over time depending on individual and shared experiences" [42eduQ]. Such subjectivity is unique to the individual (22, + 2); in other words, subjectivity for the participants associated with Factor 2 is the unrepeatable product, in time and space, of the individual interaction with the surrounding social environment. This might also explain why Factor 2 ranks item 1 (subjectivity is related to emotions) higher than any other factor (+ 2).

In contrast to the perspective of the participants associated with Factor 1, participants associated with Factor 2 do not consider subjectivity to be empirically observable (37, − 5) and measurable (27, − 5) because “insofar as one’s identity is immeasurable—such concepts cannot be assigned a numerical value” [38socN] and because subjectivity is “not a trait” [24psyQ]. This perspective is unique among all factors, as these two statements have been rated positively by all other perspectives.

4.3 Factor 3a. Subjectivity is a performed social construct, understood as intersubjectivity

Two participants are associated with this factor. One of them is a researcher in the humanities, the other in linguistics. Neither has used Q methodology in their research projects.

The participants associated with this factor consider subjectivity constructed in discourse (32, + 5) and a site of struggle (33, + 3). As such, subjectivity is for them the product of the interaction between the individual and the surrounding social environment, “resulting from the complex intersubjective dynamics of collective experiences” [17humN]. Subjectivity is multifaceted (14, + 5), and the individual can perform multiple subjectivities (31, + 2). Therefore, these two participants understand subjectivity as a social construct shared collectively across individuals (20, + 3). This might explain the significantly lower ranking of item 35 (subjectivity is what individuals use to make sense of the world, 0) in comparison with other factors. Further confirmation of this point of view is that, in contrast with the perspective emerging from the other factors, these two participants do not consider subjectivity to be self-referential (24, − 5), to represent one's individuality (6, − 3) or one’s reality (29, − 1). They assume that this collective subjectivity is necessary for the existence of objectivity (18, + 2) and therefore consider subjectivity and objectivity as two opposite but complementary constructs, one necessary for the comprehension of the other. Regarding this aspect, one of the participants pointed out that “the concept of subjectivity has a long and convoluted history that is connected to the variable uses of the concept of objectivity” [17humN].

To summarise, participants associated with Factor 3a understand subjectivity as socially constructed, performed, collective, and strictly related to objectivity.

4.4 Factor 3b. Subjectivity is equivalent to self and identity, in antithesis to social reality

One participant is associated with this factor. The perspective of this factor mirrors Factor 3a, as it represents the opposite point of view. The participant associated with this factor is a researcher in marketing and advertising, based in East Asia, and has previous experience using Q methodology.

The participant associated with this factor considers subjectivity to constitute one's reality (29, + 5) and the self (34, + 5), and understands subjectivity to be synonymous with identity (26, + 3). This participant understands subjectivity as strictly personal, intimate, and individual, the antithesis of the social dimension (12, − 5; 33, − 5). Therefore, subjectivity as a strictly individual feature is self-referential (24, + 4) and draws on individual experience (28, + 4). Such an individualised subjectivity cannot have plurality (14, − 3) or be collective (20, − 4). A further characteristic is that subjectivity is not communicable (19, − 3), a feature ranked positively in all other factors.

4.5 Factor 4. Subjectivity is communicable, self-referential, and distinct from identity

Five participants are associated with this factor. Three of them are researchers in the field of education, and two are in the social sciences. Three of them have already adopted Q methodology for their research, while two have no previous knowledge. Among the highly influential Q methodology researchers who participated in this study, Steven Brown is associated with this factor.

Participants associated with Factor 4 share the perspective that subjectivity is dynamic (25, + 5) because it is “influenced by context and environment” [08socN], communicable (19, + 3), and self-referential (24, + 2), since “the self is central to all meaning and significance” [35socQ]. These are all characteristics of the concept of subjectivity that have been considered central to Q methodology, in the sense that Q methodology is supposed to measure subjectivity as understood by the participants themselves. The perspective of this factor clearly distinguishes subjectivity from identity (26, − 5) because identity “tends to be defined or created vis-a-vis others, whereas subjectivity does not need the reference of others” [08socN], and does not consider subjectivity to be unique to the individual (22, − 3). This means that, for the participants associated with this factor, subjectivity can be shared, although it is not collective (20, − 2) as it is for those associated with Factor 3a. This is another fundamental aspect of Q methodology, which shows participants sharing the same perspective on the topic under investigation. However, Factor 4 highlights that subjectivity does not represent one's beliefs (30, − 4), because “it's identity which is a part of/contributor to one's subjective beliefs” [31eduQ], and that it is not a matter of behaviour (38, − 3), since behaviour is “linked to social psychological work where the social dimension aren't adequately accounted for” [14eduN].

5 Consensus and disagreement on the concept of subjectivity

The factors above have been discussed with reference to those statements ranked higher (or lower) than others in the same factor, or ranked higher (or lower) in a factor compared to other factors, on the ground of one of the postulates of Q methodology that Stephenson expressed as “all the important information for each array is contained in its variation (no information is lost in throwing away the variate means)” (Stephenson 1953, p. 58). This raises the question of how to deal with the statements ranked at zero. Brown (1980, p. 22) wrote that “the statements towards the middle, relatively speaking, lack significance”. Nevertheless, these statements are not meaningless; in fact, “even zero scores, which are normally associated with an absence of salience, can be quite revealing” (Brown 2005, p. 18). Zero is to be understood as a distensive zero, a point from which “all the information, so to speak, bulges out or distends from it—it is all contained in the dispersion about zero, that is, in the variance” (Stephenson 1953, p. 196). Watts and Stenner (2012) interpret statements ranked at zero as a fulcrum for the expression of a perspective. The fact that important information is contained in the variation, and that zero scores indicate a lack of salience in comparison to statements ranked at the extremes, does not mean that statements ranked at zero have no meaning at all. In defining a factor, these statements simply have less discriminating power than the statements towards the extremes.

The meaning of the distensive zero is even more relevant when factors share consensus statements ranked around the zero score. We interpret these statements, ranked around zero across all factors, as unmarked statements indicating an underlying, shared characteristic that operates in the background of all factors (Fraschini and Caruso 2019). To use a culinary metaphor, a consensus statement around zero can be seen as a pizza base, which is the same for all kinds of pizza, while the marked statements ranked towards the extremes can be seen as the toppings, which give each pizza its distinctive flavour.

Among the statements ranked around zero in all factors, we found statement 13 (subjectivity is behaviour as being experienced by the individual, a statistical consensus), statement 4 (subjectivity is an individual's point of view), and statement 11 (subjectivity has an internal structure). All these statements indicate aspects of subjectivity that are often stressed in Q methodology (Brown et al. 2015; McKeown and Thomas 2013), and we suggest that, although most factors consider other aspects of subjectivity to be more salient and defining, these aspects nevertheless lie in the background of the five perspectives.

Of particular relevance is statement 13 (subjectivity is behaviour as being experienced by the individual). According to our interpretation, the underlying consensus indicated by this statement reinforces the centrality of behaviour in relation to the concept of subjectivity as a shared background perspective among the factors. Additionally, the fact that statement 38 (subjectivity is a matter of behaviour) has not been ranked positively in any factor indicates that the study participants understand subjectivity to be reflected in behaviour only when this is experienced by the individual, in line with Stephenson ( 1974 ). This interpretation is confirmed by the rating of statement 24 (subjectivity is self-referential), which is positive in all factors with the exception of Factor 3a, a factor with participants who are not Q methodologists.

In line with Stephenson ( 1953 ), the participants of this study overall reject the subjectivity/objectivity dichotomy as indicated by statements 18 (subjectivity is needed for objectivity) and 2 (subjectivity is lack of objective reality), which are not positive in any factor. We would also expect statement 3 (subjectivity is located within people’s mind) to be rejected by most factors, but this is not the case. This may indicate that despite the study participants rejecting the dualism subjective/objective, there are still uncertainties regarding the dualism body/mind.

Statement 7 (subjectivity is accidental) emerged as a statistical consensus statement, ranked negatively in all factors. The negative rating of this statement indicates that, for the majority of the factors, subjectivity is not the result of chance. The ranking of this statement can be read together with the ratings of statements 15 (subjectivity is socio-culturally influenced) and 35 (subjectivity is what individuals use to make sense of the world), which have not been rated negatively by any factor, to indicate that although subjectivity is not the result of chance, it is nevertheless the result of the interaction between the individual and the many variables of the external environment. This is confirmed by the rating of statement 32 (subjectivity is constructed in discourse), which is positive in all factors except Factor 3b; however, regarding this statement, Steven Brown remarked in the follow-up interview that subjectivity, despite being connected to discourse, is not a function of discourse, since discourse and subjectivity are not linked by a cause/effect relationship, adding that the term ‘discourse’ may be interpreted in many different ways.

Two other statements have been rated positively in all factors: statement 25 (subjectivity is dynamic) and statement 21 (subjectivity is complex). The positive rating of these statements, although to different degrees, shows an overall consensus about the dynamic nature and complexity of subjectivity. The dynamic aspect is due to the ever-changing surrounding environment, while the complexity is visible in the complex structure of the Q sorts of each individual participant.

Although not commonly discussed in the results of Q methodology studies, it is worth pointing out some of the statements with the highest variance among the factors, that is, with the highest degree of disagreement. Among these statements are statement 37 (subjectivity is empirically observable) and statement 33 (subjectivity is a site of struggle). We feel it necessary to point out the high discrepancy in the rating of these two statements, since statement 37 may be fundamental to Q methodology but not to other research traditions, which may consider subjectivity to be partially observable but not fully measurable, as discussed with the interview participants. As a further confirmation of this, it is not surprising that statement 27 (subjectivity is measurable) is also one of the statements with the highest rating discrepancy. For participants associated with Factor 2, subjectivity is not measurable because it is not a trait; therefore, as Hanson (2015, p. 859) eloquently pointed out, “issues of measurement become questions of consensus on what is being measured and how”.
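
One simple way to make the notion of 'highest variance among the factors' concrete is to compute, for each statement, the spread of its ranking across the factor arrays, as in the sketch below. The rankings shown are invented for illustration; the actual factor arrays are in appendix A.

```python
# Order statements from disagreement to consensus by the variance of their
# rankings across factor arrays. Rankings below are illustrative only.
import statistics

factor_arrays = {          # statement -> ranking in Factors 1, 2, 3a, 3b, 4
    "s13 behaviour as experienced": [0, 1, 0, 0, 1],     # near zero everywhere: consensus
    "s27 measurable":               [3, -5, 2, 1, 1],    # wide spread: disagreement
    "s37 empirically observable":   [3, -5, 1, 2, 1],
    "s25 dynamic":                  [2, 3, 1, 1, 5],
}

by_disagreement = sorted(factor_arrays.items(),
                         key=lambda kv: statistics.pvariance(kv[1]),
                         reverse=True)

for statement, ranks in by_disagreement:
    print(f"{statement:30s} variance={statistics.pvariance(ranks):5.2f} ranks={ranks}")
```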

On the other hand, statement 33 (subjectivity is a site of struggle) may be important to more socio-culturally oriented, post-structuralist theoretical approaches, but less so to Q methodology practitioners who, depending on their discipline, may be less acquainted with concepts such as ‘site of struggle’. We can draw a similar conclusion for another statement showing a high degree of discrepancy, statement 24 (subjectivity is self-referential), which has been rated positively in all factors but Factor 3a. While the self-referentiality of subjectivity has often been indicated as one of the tenets of Q methodology (Stephenson 1987), this is clearly not the case for the two participants associated with Factor 3a, who have a background in the Humanities but no Q methodology knowledge, and who see subjectivity as a performed social construct and a site of struggle.

6 Discussion

The current study indicates that several perspectives are found among researchers when it comes to defining the meaning of subjectivity and its inherent characteristics. Five views concerning subjectivity emerged from the analyses. The empirical evidence from this study demonstrates that academics, regardless of their location or knowledge of Q, think in at least five divergent ways about subjectivity.

Only Factor 3a was characterised exclusively by researchers not acquainted with Q methodology, which may suggest that there are elements of convergence in the conceptualisation of subjectivity among Q and non-Q scholars. However, we recognise that this view is representative of only two participants in this study. Aspects of subjectivity on which the scholars participating in this study more or less agree are that subjectivity constitutes an internal point of view and that it has a dynamic and complex structure. On the other hand, the major points of disagreement seem to be the possibility of observing and measuring subjectivity, and the degree to which subjectivity depends on the environment or on the individual.

The issue of measurability emerged as a point of divergence not only across factors, but potentially also within factors. The perspective of Factor 1, for example, is characterised by the belief that subjectivity is measurable. Nevertheless, Ramlo in her interview made the distinction that even if people believe that subjectivity is measurable, there may still be disagreement on why it is so. For her, subjectivity is measurable because it is part of the Quantum universe, and the Q sort is what makes subjectivity measurable. Explaining her personal view, Ramlo argues that, in Quantum physics, whilst people may have the same experience, their perception of that experience gives different outcomes, unlike Newtonian physics where a cause always gives the same effect. However, she also acknowledges that for other researchers, subjectivity may be measurable because Q methodology simply allows statistical analysis. The difference in this case may be due to the disciplinary background and epistemological stance of the individual researcher (Ramlo 2020 ).

Regarding the same topic of the measurability of subjectivity, Factor 1 and Factor 2 are clearly contrasting. For Factor 2, Pruneddu clarified that subjectivity may be out in the open, but that does not mean it is observable and measurable. Subjectivity is out in the open because people are aware of their own subjectivity, since subjectivity is internal in terms of abstractions, feelings, and perspectives. People have points of view and take actions, but from those actions it is not possible to fully observe and measure their subjectivity. In other words, subjectivity is not observable and measurable because it is not a trait. Given Pruneddu's background as a personality psychologist, a trait is something very specific. Being extroverted, for example, is a trait and a characteristic of a person of which the individual may be aware. However, subjectivity is not a trait because it is too influenced by the environment. This highlights again that, as noted by physics professor Ramlo, Q is used in different fields, and people arrive at Q methodology from different disciplines and theoretical backgrounds; they never let that background go, adapting Q to their belief system.

The views agree that context and environment have a role in shaping subjectivity, although they disagree on the degree of this influence. In his interview, Brown noted that Stephenson was very contextual (see also Stephenson 1987, 2014). This consideration of context includes not only the environment surrounding the individual in their everyday life, but also the context in which the sort is carried out. He further remarked that, in the end, the expressed uniqueness of an individual depends on the statements, on personal history and experiences, and also on the situation in which the sort is conducted. Pruneddu was of the opinion that the influence of the context is what makes subjectivity complex; however, other aspects, such as emotions, are more relevant in defining subjectivity. Ramlo, too, said that, despite the context playing a fundamental role, an individual point of view is not 100% contextual, and that the structure of subjectivity depends on what people create in their minds through their multiple individual experiences. On the other hand, for Factor 3a, the only factor without any representation of Q methodology scholars, subjectivity is constructed in discourse, constitutes a site of struggle, and is multifaceted. This reflects a more post-structuralist understanding of subjectivity, once again a perspective that may have been influenced by the background of the participants associated with this factor. In contrast, scholars associated with other factors, although agreeing that discourse has a role, do not share the same opinion on how this role is played out. Brown, for example, remarked that he always avoids using the term discourse because of the theory behind it, and although recognising that subjectivity is connected to discourse, he does not think that subjectivity is a function of discourse, since discourse and subjectivity are not linked by a cause/effect relationship.

The final perspective identified in the findings is consistent with claims in the Q literature that subjectivity is communicable (Stephenson 2014 ) and self-referential (Stephenson 1987 ). These characteristics of subjectivity are central to Q methodology, and Brown, the Q expert represented by Factor 4, explains that factors reflect shared communicability among people and that Q shows the structure underlying people’s communicability in the form it is expressed and shared. Moreover, Brown comments, subjectivity is self-referential because each statement acquires meaning in relation to the individual, and therefore each statement tells something about the participant. While the broader literature sometimes equates subjectivity to identity (McNamara 2019 ), the findings indicate this is not the view expressed by the fourth perspective and Brown clarifies that subjectivity is distinct from identity because the current focus on identity is only 20–30 years old, and identity is distinct from behaviour.

7 Implications, limitations, and conclusions

This study allows us to draw several conclusions and present a range of implications. As discussed, the concept of subjectivity lacks a common definition, within and beyond the Q community. Therefore, and to potentially extend the relevance of a Q study to subjectivity more generally, Q researchers should clearly define their own understanding of this central concept. The factors presented in this study might serve as a springboard for Q researchers to describe their view of subjectivity. Overall, we suggest that Q researchers put more emphasis on their own positionality, including their disciplinary background and epistemological view of research.

The extensive description of the methodological procedures, and in particular of the analytical work as a research team, based on recommendations in Brown (1989) and Albright et al. (2019), has clearly illustrated the need for additional detailed descriptions of how Q researchers seek to deeply understand their research participants and report their perspectives in the most unbiased way possible. We hope for extensive knowledge and experience exchange within the Q community that supports the current and emerging generation of Q researchers in more fully understanding Q methodology and a science of subjectivity. This might also include synergistic mentor–mentee relationships (Ramlo 2020) and joint publications, as illustrated by Albright et al. (2019).

Despite interviewing established experts and requesting written comments on sorts, capturing the context in which participants sort the items is challenging. This is exacerbated by sorters participating anonymously, as often occurs in online settings. To have empathy for participants and thereby deeply understand their feelings about the items during the sorting, we invite Q researchers to adopt a more participatory approach to their study designs. In addition to including participants in the development of the concourse and culling of the items, which is comparatively common, researchers might choose to be present during the participants’ sorting and invite them to be co-creators of factor interpretations (Lundberg 2022 ).

Other ways forward are a renewed focus on intensive single-case studies (Fraschini 2022), or more intensive pre-sorting surveys that might include questions about the feelings and context of the sorters. Q methodological studies adopting an intensive single-case design are scant in many disciplines, including education (Lundberg et al. 2020), although present in others (see Brown and Rhoads 2017). The intensive single-case design allows the researcher to adopt a very fine-grained lens by applying “the penetrating power of factor analysis to the study of individual lives” (Brown 2019, p. 574).
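
To make the logic of the intensive single-case design more concrete, the following minimal Python sketch (our illustration, not code from any of the cited studies; the conditions of instruction and sort values are invented) intercorrelates several Q sorts produced by a single hypothetical participant and inspects the unrotated principal components of that correlation matrix, which is the basic operation underlying single-case Q factor analysis.

```python
# Minimal sketch of the intensive single-case logic: one hypothetical participant
# sorts the same statements under several conditions of instruction, and the
# resulting Q sorts are intercorrelated and factor-analysed.
import numpy as np

rng = np.random.default_rng(0)

n_statements = 30
conditions = ["self now", "ideal self", "self as seen by others", "self ten years ago"]

# Invented ranks drawn from a forced quasi-normal distribution (-4 .. +4);
# in a real study these would be the participant's completed Q sorts.
sorts = np.column_stack([
    rng.permutation(np.repeat(np.arange(-4, 5), [2, 3, 4, 4, 4, 4, 4, 3, 2]))
    for _ in conditions
])

# Correlate the sorts (columns) with each other: the raw material of Q factor analysis.
# With random data these correlations will sit near zero; real sorts would reveal
# how the participant's different 'selves' hang together.
r = np.corrcoef(sorts, rowvar=False)
print("Correlation matrix between conditions:\n", np.round(r, 2))

# Unrotated principal components of the correlation matrix; the eigenvalues indicate
# how many distinct perspectives structure this participant's subjectivity.
eigenvalues, eigenvectors = np.linalg.eigh(r)
order = np.argsort(eigenvalues)[::-1]
loadings = eigenvectors[:, order[0]] * np.sqrt(eigenvalues[order[0]])
print("Eigenvalues:", np.round(eigenvalues[order], 2))
print("Loadings of each condition on the first component:", np.round(loadings, 2))
```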

Finally, the present study has disclosed some of the challenges Q methodologists face when communicating with non-Q academics. Subjectivity might be understood differently depending on academics' disciplinary backgrounds, and terms such as ‘self-referential’ and ‘behaviour’ are anything but straightforward. Q researchers should carefully choose their terminology and explain the concepts they need to include, to avoid “a worrisome proliferation of terms with substantial overlap and redundancy, all of which are left up to each reader to form their own conception of its meaning and boundaries” (Al-Hoorie et al. 2021, p. 9), and in order to be fully understood beyond the Q community and, potentially, to be published more easily.

Before concluding, we want to mention some considerations related to generalisation, replicability, and procedure. Although we tried to be as broad and inclusive as possible with regard to the participants, we acknowledge that scholars from other disciplines may hold even more multifaceted conceptualisations of subjectivity. Therefore, we do not think that our study is representative of the whole academic community, as additional viewpoints may exist that were not uncovered here. We invite other Q scholars to explore the conceptualisation of subjectivity further. Nevertheless, following a more qualitative logic, the factors presented in this study provide generalisable results based on substantive inference (Thomas and Baas 1993). This leads to the issue of replicability. Considering that the sorting activity is grounded in the participants' life experiences, beliefs, and sorting context, we invite readers to understand replicability not in a positivistic way but, as suggested by Al-Hoorie et al. (2021), as interpretability of the results, thereby placing the emphasis of a replication attempt not on the methodological and mechanical aspects of the procedure but on the interpretation of the research outcomes. Finally, the ethical need to safeguard the anonymity of the non-Q expert participating in this study meant that we were unable to conduct interviews with participants associated with Factors 3a and 3b, which would probably have opened up a more detailed discussion.

In returning to the fable with which we opened, we conclude that Q methodology can indeed serve as an approach for deeply investigating perspectives on concepts and phenomena. However, we should not expect there to be a single, objectively true definition of subjectivity, or description of an elephant. What matters most for Q methodologists and other researchers interested in subjectivity is not only understanding how participants feel and think the way they do but, more importantly, why multiple divergent views might exist.

A distinction needs to be made between the Q technique and the Q method. The Q technique refers to the Q sample and the Q sort, while the analysis of the Q sorts is called Q method. The methodology holds these two together in a theoretical framework (Brown 2019 ).

Albright, E., Christofferson, K., McCabe, A., Montgomery, D.: Lessons learned: some guidelines to factor interpretation. Operant. Subj. 41 , 134–146 (2019)

Al-Hoorie, A.H., Hiver, P., Larsen-Freeman, D., Lowie, W.: From replication to substantiation: a complexity theory perspective. Lang. Teach. (2021). https://doi.org/10.1017/S0261444821000409

Banasick, S.: KADE: a desktop application for Q methodology. J. Open Source Softw. 4(36), 1360 (2019). https://doi.org/10.21105/joss.01360

Boon, V.: Subjectivity. In: Ritzer, G. (ed.) The Blackwell encyclopedia of sociology. Wiley (2007)

Brown, S.: Political subjectivity: applications of Q methodology in political science. Yale University Press, New Haven (1980)

Brown, S.: A feeling for the organism: understanding and interpreting political subjectivity. Operant Subj. 12 (3/4), 81–97 (1989)

Brown, S.: The science of subjectivity: methodology, identity, and deep structures. J. Korean Soc. Sci. Study Subj. 11 , 5–31 (2005)

Brown, S.: Subjectivity in the human sciences. Psychol. Rec. 69 , 565–579 (2019)

Brown, S., Rhoads, J.: Bibliography of intensive single-case studies. Operant Subj. 39 (1/2), 98–100 (2017)

Brown, S., Danielson, S., van Exel, J.: Overly ambitious critics and the medici effect: a reply to Kampen and Tamás. Qual. Quant. 49 , 523–537 (2015)

Douven, I.: Abduction. In: Zalta, E.N. (ed.) The Stanford encyclopedia of philosophy. https://plato.stanford.edu/archives/sum2021/entries/abduction/ (2021)

Dryzek, J.S., Holmes, L.: Post-communist democratization: political discourses across thirteen countries. Cambridge University Press, Cambridge (2002)

Fraschini, N.: Language learners’ emotional dynamics: insights from a Q methodology intensive single-case study. Lang. Cult. Curric. (2022). https://doi.org/10.1080/07908318.2022.2133137

Fraschini, N., Caruso, M.: “I can see myself…” A Q methodology study on self vision of Korean language learners. System 87 , 102147 (2019). https://doi.org/10.1016/j.system.2019.102147

Fraschini, N., Park, H.: Anxiety in language teachers: exploring the variety of perceptions with Q methodology. Foreign Lang. Ann. 54 (2), 341–364 (2021). https://doi.org/10.1111/flan.12527

Fraschini, N., Park, H.: A Q methodology study to explore Korean as a second language undergraduate student-teachers’ anxiety. Int. J. Educ. Res. Open 3 , 100132 (2022). https://doi.org/10.1016/j.ijedro.2022.100132

Good, J.: Introduction to William Stephenson’s quest for a science of subjectivity. Psychoanal. Hist. 12 (2), 211–243 (2010)

Hall, D.: Subjectivity. Routledge, London (2004)

Hanson, B.: Objectivities: constructivist roots of positivism. Qual. Quant. 49 , 857–865 (2015)

Hwang, S., Choi, E.: The implementation of Q methodology in psychological research and the interpretation of its result: the duet of objectivity and subjectivity. J. Korean Soc. Sci. Study Subj. 7 , 4–25 (2002)

Kampen, J.K., Tamás, P.: Overly ambitious: contributions and current status of Q methodology. Qual. Quant. 48 , 3109–3126 (2014)

Kitzinger, C.: Researching subjectivity and diversity: Q methodology in feminist psychology. Psychol. Women Q. 23 , 267–276 (1999)

Lundberg, A.: Academics’ perspectives on good teaching practice in Switzerland’s higher education landscape. Int. J. Educ. Res. Open 3 , 100202 (2022). https://doi.org/10.1016/j.ijedro.2022.100202

Lundberg, A., de Leeuw, R., Aliani, R.: Using Q methodology: sorting out subjectivity in educational research. Educ. Res. Rev. 31 , 100361 (2020). https://doi.org/10.1016/j.edurev.2020.100361

McKeown, B., Thomas, D.: Q methodology (2nd ed.). SAGE Publications (2013). https://doi.org/10.4135/9781483384412

McNamara, T.: Language and subjectivity. Cambridge University Press, Cambridge (2019)

Midgley, B., Delprato, D.: Stephenson’s subjectivity as naturalistic and understood from scientific perspective. Psychol. Rec. 67 , 587–596 (2017)

Phillips, C.: The taste machine: sense, subjectivity, and statistics in the California wine world. Soc. Stud. Sci. 46 (3), 461–481 (2016)

Ramlo, S.: Mixed method lessons learned from 80 years of Q methodology. J. Mixed Methods Res. 10 (1), 28–45 (2016)

Ramlo, S.: Divergent viewpoints about the statistical stage of a mixed method: qualitative versus quantitative orientations. Int. J. Res. Method Educ. 43 (1), 93–111 (2020)

Ramlo, S.: A science of subjectivity. In: Rhoads, J.C., Thomas, D.B., Ramlo, S.E. (eds.) Cultivating Q methodology: essays honouring Steven R. Brown, pp. 182–217. The international association for the scientific study of subjectivity (2022)

Sabini, J.P., Silver, M.: Some senses of subjective. In: Secord, P.F. (ed.) Explaining human behaviour: consciousness, human action, and social structure, pp. 71–92. Sage Publications, Beverly Hills (1982)

Shapin, S.: The science of subjectivity. Soc. Stud. Sci. 42 (2), 170–184 (2012)

Shapin, S.: A taste of science: making the subjective objective in the California wine world. Soc. Stud. Sci. 46 (3), 436–460 (2016)

Stenner, P.: A. N. Whitehead and subjectivity. Subjectivity 22, 90–109 (2008). https://doi.org/10.1057/sub.2008.4

Stenner, P., Stainton-Rogers, R.: Q methodology and qualiquantology: the example of discriminating between emotions. In: Todd, Z., Nerlich, B., McKeown, S., Clark, D.D. (eds.) Mixing methods in psychology, pp. 99–118. Psychology Press, Hove (2004)

Stephenson, W.: Technique of factor analysis. Nature 136, 297 (1935)

Stephenson, W.: The study of behaviour: Q technique and its methodology. University of Chicago Press, Chicago (1953)

Stephenson, W.: Consciousness out – subjectivity in. Psychol. Rec. 18, 499–501 (1968)

Stephenson, W.: Methodology of single case studies. J. Oper. Psychiatry 5 (2), 3–16 (1974)

Stephenson, W.: Consciring: A general theory for subjective communicability. In: Nimmo, D. (ed.) Communication yearbook, vol. 4, pp. 7–36. Transaction Books, New Brunswick (1980a)

Stephenson, W.: Q methodology and the subjectivity of literature. Operant Subj. 3 (4), 111–113 (1980b)

Stephenson, W.: Newton’s fifth rule and Q methodology: application to self psychology. Operant Subj. 5 (2), 37–57 (1982)

Stephenson, W.: Review of ‘Structures of subjectivity: explorations in psychoanalytic phenomenology.’ Operant Subj. 8(4), 100–108 (1985)

Stephenson, W.: How to make a good cup of tea. Operant Subj. 10 (2), 37–57 (1987)

Stephenson, W.: Introduction to Q methodology. Operant Subj. 17 (1/2), 1–13 (1993)

Stephenson, W.: Intentionality: or how to buy a loaf of bread. Operant Subj. 29 (3/4), 122–137 (2006)

Stephenson, W.: General theory of communication. Operant Subj. 37 (3), 38–56 (2014)

Tamás, P., Kampen, J.K.: Heresy and the church of Q: a reply. Qual. Quant. 49 , 539–540 (2015)

Thomas, D.B., Baas, L.R.: The issue of generalization in Q methodology: ”reliable schematics” revisited. Operant Subj. 16 (1/2), 18–36 (1993)

Watts, S., Stenner, P.: Doing Q methodological research: theory, method and interpretation. SAGE Publications (2012). https://doi.org/10.4135/9781446251911

Wolf, A.: Subjectivity, the researcher and the researched. Operant Subj. 32 , 6–28 (2009)

Acknowledgements

We would like to thank the reviewers for their comments and in particular reviewer 1 for the valuable feedback.

Open Access funding enabled and organized by CAUL and its Member Institutions. The authors declare that no funds, grants, or other support were received during the preparation of this manuscript.

Author information

Authors and Affiliations

Malmö University, Malmö, Sweden

Adrian Lundberg

The University of Western Australia, M257, 35 Stirling Hwy, Crawley, WA, 6009, Australia

Nicola Fraschini

The University of Melbourne, Melbourne, Australia

Renata Aliani

Contributions

All authors contributed to the study conception and design. Material preparation, data collection and analysis were performed by all authors. All authors contributed to the drafting and revising of the manuscript. The article is not under consideration elsewhere and all authors have approved the final manuscript.

Corresponding author

Correspondence to Nicola Fraschini.

Ethics declarations

Conflict of interest.

The authors have no relevant financial or non-financial interests to disclose.

Additional information

Publisher's note.

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

[Supplementary tables not reproduced here: factor array; rotated factors with flagged sorts (*).]

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Lundberg, A., Fraschini, N. & Aliani, R. What is subjectivity? Scholarly perspectives on the elephant in the room. Qual. Quant. 57, 4509–4529 (2023). https://doi.org/10.1007/s11135-022-01565-9

Accepted: 19 October 2022

Published: 08 November 2022

Issue Date: October 2023

DOI: https://doi.org/10.1007/s11135-022-01565-9

  • Q methodology
  • Subjectivity
  • Objectivity
  • Factor interpretation
  • Deep understanding

How to write a research paper

With proper planning, knowledge, and framework, completing a research paper can be a fulfilling and exciting experience. 

Though it might initially sound slightly intimidating, this guide will help you embrace the challenge. 

By documenting your findings, you can inspire others and make a difference in your field. Here's how you can make your research paper unique and comprehensive.

  • What is a research paper?

Research papers allow you to demonstrate your knowledge and understanding of a particular topic. These papers are usually lengthier and more detailed than typical essays, requiring deeper insight into the chosen topic.

To write a research paper, you must first choose a topic that interests you and is relevant to the field of study. Once you’ve selected your topic, gathering as many relevant resources as possible, including books, scholarly articles, credible websites, and other academic materials, is essential. You must then read and analyze these sources, summarizing their key points and identifying gaps in the current research.

You can formulate your ideas and opinions once you thoroughly understand the existing research. To get there might involve conducting original research, gathering data, or analyzing existing data sets. It could also involve presenting an original argument or interpretation of the existing research.

Writing a successful research paper involves presenting your findings clearly and engagingly, which might involve using charts, graphs, or other visual aids to present your data and using concise language to explain your findings. You must also ensure your paper adheres to relevant academic formatting guidelines, including proper citations and references.

Overall, writing a research paper requires a significant amount of time, effort, and attention to detail. However, it is also an enriching experience that allows you to delve deeply into a subject that interests you and contribute to the existing body of knowledge in your chosen field.

  • How long should a research paper be?

Research papers are deep dives into a topic. Therefore, they tend to be longer pieces of work than essays or opinion pieces. 

However, a suitable length depends on the complexity of the topic and your level of expertise. For instance, are you a first-year college student or an experienced professional? 

Also, remember that the best research papers provide valuable information for the benefit of others. Therefore, the quality of information matters most, not necessarily the length. Being concise is valuable.

Following these best practice steps will help keep your process simple and productive:

1. Gain a deep understanding of any expectations

Before diving into your intended topic or beginning the research phase, take some time to orient yourself. If a specific topic has been assigned to you, it's essential to understand the question deeply and to organize your planning and approach in response. Pay attention to the key requirements and ensure you align your writing accordingly.

This preparation step entails:

Deeply understanding the task or assignment

Being clear about the expected format and length

Familiarizing yourself with the citation and referencing requirements 

Understanding any defined limits for your research contribution

Where applicable, speaking to your professor or research supervisor for further clarification

2. Choose your research topic

Select a research topic that aligns with both your interests and available resources. Ideally, focus on a field where you possess significant experience and analytical skills. In crafting your research paper, it's crucial to go beyond summarizing existing data and contribute fresh insights to the chosen area.

Consider narrowing your focus to a specific aspect of the topic. For example, if exploring the link between technology and mental health, delve into how social media use during the pandemic impacts the well-being of college students. Conducting interviews and surveys with students could provide firsthand data and unique perspectives, adding substantial value to the existing knowledge.

When finalizing your topic, adhere to legal and ethical norms in the relevant area (this ensures the integrity of your research, protects participants' rights, upholds intellectual property standards, and ensures transparency and accountability). Following these principles not only maintains the credibility of your work but also builds trust within your academic or professional community.

For instance, in writing about medical research, consider legal and ethical norms, including patient confidentiality laws and informed consent requirements. Similarly, if analyzing user data on social media platforms, be mindful of data privacy regulations, ensuring compliance with laws governing personal information collection and use. Aligning with legal and ethical standards not only avoids potential issues but also underscores the responsible conduct of your research.

3. Gather preliminary research

Once you’ve landed on your topic, it’s time to explore it further. You’ll want to discover more about available resources and existing research relevant to your assignment at this stage. 

This exploratory phase is vital as you may discover issues with your original idea or realize you have insufficient resources to explore the topic effectively. This key bit of groundwork allows you to redirect your research topic in a different, more feasible, or more relevant direction if necessary. 

Spending ample time at this stage ensures you gather everything you need, learn as much as you can about the topic, and discover gaps where the topic has yet to be sufficiently covered, offering an opportunity to research it further. 

4. Define your research question

To produce a well-structured and focused paper, it is imperative to formulate a clear and precise research question that will guide your work. Your research question must be informed by the existing literature and tailored to the scope and objectives of your project. By refining your focus, you can produce a thoughtful and engaging paper that effectively communicates your ideas to your readers.

5. Write a thesis statement

A thesis statement is a one-to-two-sentence summary of your research paper's main argument or direction. It serves as a guide that captures the overall intent of the research paper for you and for anyone wanting to know more about the research.

A strong thesis statement is:

Concise and clear: Explain your case in simple sentences (avoid covering multiple ideas). It might help to think of this section as an elevator pitch.

Specific: Ensure that there is no ambiguity in your statement and that your summary covers the points argued in the paper.

Debatable: A thesis statement puts forward a specific argument––it is not merely a statement but a debatable point that can be analyzed and discussed.

Here are three thesis statement examples from different disciplines:

Psychology thesis example: "We're studying adults aged 25-40 to see if taking short breaks for mindfulness can help with stress. Our goal is to find practical ways to manage anxiety better."

Environmental science thesis example: "This research paper looks into how having more city parks might make the air cleaner and keep people healthier. I want to find out if more green space means breathing fewer carcinogens in big cities."

UX research thesis example: "This study focuses on improving mobile banking for older adults using ethnographic research, eye-tracking analysis, and interactive prototyping. We investigate the usefulness of eye-tracking analysis with older individuals, aiming to spark debate and offer fresh perspectives on UX design and digital inclusivity for the aging population."

6. Conduct in-depth research

A research paper doesn't just include research that you've uncovered from other papers and studies but your fresh insights, too. You will seek to become an expert on your topic, understanding the nuances in the current leading theories, and you will analyze existing research and add your own thinking and discoveries.

It's crucial to conduct well-designed research that is rigorous, robust, and based on reliable sources. A research paper that lacks evidence or is biased won't benefit the academic community or the general public, so examining the topic thoroughly and furthering its understanding through high-quality research is essential. That usually means conducting new research. Depending on the area under investigation, you may conduct surveys, interviews, diary studies, or observational research to uncover new insights or bolster current claims.

7. Determine supporting evidence

Not every piece of research you've discovered will be relevant to your research paper. Categorize the most meaningful evidence to include alongside your discoveries, and also include evidence that doesn't support your claims, to avoid exclusion bias and ensure a fair research paper.

8. Write a research paper outline

Before diving in and writing the whole paper, start with an outline. It will help you to see if more research is needed, and it will provide a framework by which to write a more compelling paper. Your supervisor may even request an outline to approve before beginning to write the first draft of the full paper. An outline will include your topic, thesis statement, key headings, short summaries of the research, and your arguments.

9. Write your first draft

Once you feel confident about your outline and sources, it’s time to write your first draft. While penning a long piece of content can be intimidating, if you’ve laid the groundwork, you will have a structure to help you move steadily through each section. To keep up motivation and inspiration, it’s often best to keep the pace quick. Stopping for long periods can interrupt your flow and make jumping back in harder than writing when things are fresh in your mind.

10. Cite your sources correctly

It's always a good practice to give credit where it's due, and the same goes for citing any works that have influenced your paper. Building your arguments on credible references adds value and authenticity to your research. In the formatting guidelines section, you’ll find an overview of different citation styles (MLA, CMOS, or APA), which will help you meet any publishing or academic requirements and strengthen your paper's credibility. It is essential to follow the guidelines provided by your school or the publication you are submitting to ensure the accuracy and relevance of your citations.

11. Ensure your work is original

It is crucial to ensure the originality of your paper, as plagiarism can lead to serious consequences. To avoid plagiarism, you should use proper paraphrasing and quoting techniques. Paraphrasing is rewriting a text in your own words while maintaining the original meaning. Quoting involves directly citing the source. Giving credit to the original author or source is essential whenever you borrow their ideas or words. You can also use plagiarism detection tools such as Scribbr or Grammarly to check the originality of your paper. These tools compare your draft writing to a vast database of online sources. If you find any accidental plagiarism, you should correct it immediately by rephrasing or citing the source.

12. Revise, edit, and proofread

One of the essential qualities of excellent writers is their ability to understand the importance of editing and proofreading. Even though it's tempting to call it a day once you've finished your writing, editing your work can significantly improve its quality. It's natural to overlook the weaker areas when you've just finished writing a paper. Therefore, it's best to take a break of a day or two, or even up to a week, to refresh your mind. This way, you can return to your work with a new perspective. After some breathing room, you can spot any inconsistencies, spelling and grammar errors, typos, or missing citations and correct them. 

  • The best research paper format 

The format of your research paper should align with the requirements set forth by your college, school, or target publication. 

There is no one “best” format, per se. Depending on the stated requirements, you may need to include the following elements:

Title page: The title page of a research paper typically includes the title, author's name, and institutional affiliation and may include additional information such as a course name or instructor's name. 

Table of contents: Include a table of contents to make it easy for readers to find specific sections of your paper.

Abstract: The abstract is a summary of the purpose of the paper.

Methods: In this section, describe the research methods used. This may include collecting data, conducting interviews, or doing field research.

Results: Summarize the conclusions you drew from your research in this section.

Discussion: In this section, discuss the implications of your research. Be sure to mention any significant limitations to your approach and suggest areas for further research.

Tables, charts, and illustrations: Use tables, charts, and illustrations to help convey your research findings and make them easier to understand.

Works cited or reference page: Include a works cited or reference page to give credit to the sources that you used to conduct your research.

Bibliography: Provide a list of all the sources you consulted while conducting your research.

Dedication and acknowledgments: Optionally, you may include a dedication and acknowledgments section to thank individuals who helped you with your research.

  • General style and formatting guidelines

Formatting your research paper means you can submit it to your college, journal, or other publications in compliance with their criteria.

Research papers tend to follow the American Psychological Association (APA), Modern Language Association (MLA), or Chicago Manual of Style (CMOS) guidelines.

Here’s how each style guide is typically used:

Chicago Manual of Style (CMOS):

CMOS is a versatile style guide used for various types of writing. It's known for its flexibility and use in the humanities. CMOS provides guidelines for citations, formatting, and overall writing style. It allows for both footnotes and in-text citations, giving writers options based on their preferences or publication requirements.

American Psychological Association (APA):

APA is common in the social sciences. It’s hailed for its clarity and emphasis on precision. It has specific rules for citing sources, creating references, and formatting papers. APA style uses in-text citations with an accompanying reference list. It's designed to convey information efficiently and is widely used in academic and scientific writing.

Modern Language Association (MLA):

MLA is widely used in the humanities, especially literature and language studies. It emphasizes the author-page format for in-text citations and provides guidelines for creating a "Works Cited" page. MLA is known for its focus on the author's name and the literary works cited. It’s frequently used in disciplines that prioritize literary analysis and critical thinking.

To confirm you're using the latest style guide, check the official website or publisher's site for updates, consult academic resources, and verify the guide's publication date. Online platforms and educational resources may also provide summaries and alerts about any revisions or additions to the style guide.

Citing sources

When working on your research paper, it's important to cite the sources you used properly. Your citation style will guide you through this process. Citing sources generally involves two steps:

First, provide a brief citation in the body of your essay. This is also known as a parenthetical or in-text citation.

Second, include a full citation in the reference list at the end of your paper. Citations can take different forms, including in-text citations, footnotes, and reference lists.

In-text citations include the author's surname and the date of the citation. 

Footnotes appear at the bottom of each page of your research paper. They may also be summarized within a reference list at the end of the paper. 

A reference list includes all of the research used within the paper at the end of the document. It should include the author, date, paper title, and publisher listed in the order that aligns with your citation style.

10 research paper writing tips:

Following some best practices is essential to writing a research paper that contributes to your field of study and creates a positive impact.

These tactics will help you structure your argument effectively and ensure your work benefits others:

Clear and precise language:  Ensure your language is unambiguous. Use academic language appropriately, but keep it simple. Also, provide clear takeaways for your audience.

Effective idea separation:  Organize the vast amount of information and sources in your paper with paragraphs and titles. Create easily digestible sections for your readers to navigate through.

Compelling intro:  Craft an engaging introduction that captures your reader's interest. Hook your audience and motivate them to continue reading.

Thorough revision and editing:  Take the time to review and edit your paper comprehensively. Use tools like Grammarly to detect and correct small, overlooked errors.

Thesis precision:  Develop a clear and concise thesis statement that guides your paper. Ensure that your thesis aligns with your research's overall purpose and contribution.

Logical flow of ideas:  Maintain a logical progression throughout the paper. Use transitions effectively to connect different sections and maintain coherence.

Critical evaluation of sources:  Evaluate and critically assess the relevance and reliability of your sources. Ensure that your research is based on credible and up-to-date information.

Thematic consistency:  Maintain a consistent theme throughout the paper. Ensure that all sections contribute cohesively to the overall argument.

Relevant supporting evidence:  Provide concise and relevant evidence to support your arguments. Avoid unnecessary details that may distract from the main points.

Embrace counterarguments:  Acknowledge and address opposing views to strengthen your position. Show that you have considered alternative arguments in your field.

7 research tips 

If you want your paper to not only be well-written but also contribute to the progress of human knowledge, consider these tips to take your paper to the next level:

Selecting the appropriate topic: The topic you select should align with your area of expertise, comply with the requirements of your project, and have sufficient resources for a comprehensive investigation.

Use academic databases: Academic databases such as PubMed, Google Scholar, and JSTOR offer a wealth of research papers that can help you discover everything you need to know about your chosen topic.

Critically evaluate sources: It is important not to accept research findings at face value. Instead, it is crucial to critically analyze the information to avoid jumping to conclusions or overlooking important details. A well-written research paper requires a critical analysis with thorough reasoning to support claims.

Diversify your sources: Expand your research horizons by exploring a variety of sources beyond the standard databases. Utilize books, conference proceedings, and interviews to gather diverse perspectives and enrich your understanding of the topic.

Take detailed notes: Detailed note-taking is crucial during research and can help you form the outline and body of your paper.

Stay up on trends: Keep abreast of the latest developments in your field by regularly checking for recent publications. Subscribe to newsletters, follow relevant journals, and attend conferences to stay informed about emerging trends and advancements. 

Engage in peer review: Seek feedback from peers or mentors to ensure the rigor and validity of your research . Peer review helps identify potential weaknesses in your methodology and strengthens the overall credibility of your findings.

  • The real-world impact of research papers

Writing a research paper is more than an academic or business exercise. The experience provides an opportunity to explore a subject in-depth, broaden one's understanding, and arrive at meaningful conclusions. With careful planning, dedication, and hard work, writing a research paper can be a fulfilling and enriching experience contributing to advancing knowledge.

How do I publish my research paper? 

Many academics wish to publish their research papers. While challenging, your paper might get traction if it covers new and well-written information. To publish your research paper, find a target publication, thoroughly read their guidelines, format your paper accordingly, and send it to them per their instructions. You may need to include a cover letter, too. After submission, your paper may be peer-reviewed by experts to assess its legitimacy, quality, originality, and methodology. Following review, you will be informed by the publication whether they have accepted or rejected your paper. 

What is a good opening sentence for a research paper? 

Beginning your research paper with a compelling introduction can ensure readers are interested in going further. A relevant quote, a compelling statistic, or a bold argument can open the paper and hook your reader. Remember, though, that the most important aspect of a research paper is the quality of the information, not necessarily your storytelling ability, so ensure anything you write aligns with your goals.

Research paper vs. a research proposal—what’s the difference?

While some may confuse research papers and proposals, they are different documents. 

A research proposal comes before a research paper. It is a detailed document that outlines an intended area of exploration. It includes the research topic, methodology, timeline, sources, and potential conclusions. Research proposals are often required when seeking approval to conduct research. 

A research paper is a summary of research findings. A research paper follows a structured format to present those findings and construct an argument or conclusion.


Exploring constructs of well-being, happiness and quality of life

Oleg N. Medvedev

1 School of Medicine, University of Auckland, Auckland, New Zealand

C. Erik Landhuis

2 School of Social Sciences and Public Policy, Auckland University of Technology, Auckland, New Zealand

Associated Data

The following information was supplied regarding data availability:

The raw data are provided in the Supplemental File .

Existing definitions of happiness, subjective well-being, and quality of life suggest conceptual overlap between these constructs. This study explored the relationship between these well-being constructs by applying widely used measures with satisfactory psychometric properties.

Materials and Methods

University students (n = 180) completed widely used well-being measures including the Oxford Happiness Questionnaire (OHQ), the World Health Organization Quality of Life Questionnaire, the Satisfaction with Life Scale, and the Positive and Negative Affect Scale. We analyzed the data using correlation, regression, and exploratory factor analysis.

All included well-being measures demonstrated high loadings on the global well-being construct that explains about 80% of the variance in the OHQ, the psychological domain of Quality of Life and subjective well-being. The results show high positive correlations between happiness, psychological and health domains of quality of life, life satisfaction, and positive affect. Social and environmental domains of quality of life were poor predictors of happiness and subjective well-being after controlling for psychological quality of life.

Together, these data provide support for a global well-being dimension and for the interchangeable use of the terms happiness, subjective well-being, and psychological quality of life with the current sample and measures. Further investigation with larger heterogeneous samples and other well-being measures is warranted.

Introduction

The existing definitions of happiness, subjective well-being, and health-related quality of life, and the main components assigned to these constructs in the research literature (see Table 1), suggest conceptual overlap between these dimensions (Camfield & Skevington, 2008). Quality of life was defined in the cross-cultural project of the World Health Organization (WHO) as:

An individual’s perception of their position in life, in the context of the culture and value systems in which they live, and in relation to their goals, expectations, standards, and concerns. It is a broad ranging concept, affected in a complex way by the person’s physical health, psychological state, level of independence, social relationships and their relationships to salient features of their environment. (WHOQOL Group, 1995, p. 1404)

The reconceptualization of subjective well-being, which Diener (2006, p. 400) assumed to be synonymous with happiness, as “an umbrella term for different valuations that people make regarding their lives, the events happening to them, their bodies and minds, and the circumstances in which they live” resulted in greater theoretical convergence between these constructs. This raises the issue of the point at which conceptual overlap invites redundancy, and whether one or the other of the terms is now surplus to requirements.

Historically, humans have strived to achieve happiness and considered it the most important goal in life (Compton, 2005). Cross-cultural research provides supporting evidence for the primacy of happiness compared with other individual values such as physical health, wealth, or love (Kim-Prieto et al., 2005; Skevington, MacArthur & Somerset, 1997). Essentially, other human goals are valued because they are believed to give rise to happiness (Csikszentmihaliy, 1992). Initially, psychology dealt mainly with mental health issues affecting the physical and social functioning of the individual (Andrews & McKennell, 1980; Beck, 1991, 1993). Happiness, well-being, and quality of life only attracted increased interest from psychologists towards the end of the 20th century, resulting in growing research in this area (Diener, 1984; WHOQOL Group, 1998a, 1998b). Happiness and well-being research became increasingly important in the context of economics (Kristoffersen, 2010), and well-being data are widely used alongside economic indicators by economists (Kahneman & Krueger, 2006).

Currently, there is no agreement among researchers on how to define happiness and its related constructs (Diener, 2006; Diener et al., 2010; Rojas & Veenhoven, 2013; Kern et al., 2014; Shin & Johnson, 1978). In the literature, happiness is often called subjective well-being (Diener, 2006; Hills & Argyle, 2002), emotional well-being, positive affect (Brandburn, 1969; Fordyce, 1988), and quality of life (Diener, 2000; Ratzlaff et al., 2000; Shin & Johnson, 1978), which suggests that the meaning of happiness may depend on the context (Diener, 2006; Carlquist et al., 2016). Elsewhere, subjective happiness was defined as “a global evaluation of life satisfaction” (Diener, 2006, p. 400). In the same way, subjective well-being was defined as “evaluations of life quality” (Andrews & McKennell, 1980, p. 131). These definitions indicate a close relationship between the constructs of happiness, subjective well-being, quality of life, and life satisfaction. More recently, subjective well-being was proposed as the more appropriate “Big One” encompassing the relevant aspects of global well-being (Diener, 2006; Kashdan, Biswas-Diener & King, 2008).

Happiness can be described by bottom-up and top-down processes (Andrews & McKennell, 1980; Diener, 1984). The bottom-up approach implies that happiness depends on aggregated positive and negative feelings (Diener, 1984). However, evidence suggests that positive affect is not the counterpart of negative affect and that the correlation between them is merely moderate (Argyle, 2001; Tellegen et al., 1988). Alternatively, top-down approaches explain happiness as the result of subjective evaluations of an individual's life experiences, or satisfaction with life (SL) (Andrews & McKennell, 1980; Diener, Lucas & Oishi, 2005). The top-down approach has both a theoretical foundation (Beck, 1993; Diener, 1984) and empirical support (Andrews & McKennell, 1980; Butler et al., 2006). The approaches appear to complement each other because research on happiness assessment consistently indicates that positive and negative affect and a cognitive component, or SL, map onto a unidimensional happiness construct (Argyle, 2001; Hills & Argyle, 1998, 2002; Joseph & Lewis, 1998). The cognitive component of happiness may involve personality traits such as optimism, extraversion, and internal locus of control (Fordyce, 1988; Mayers, 1992).

Shin & Johnson (1978) noted that “happiness has been mistakenly identified with feelings of pleasure” in the research literature and defined as emotional well-being (Fordyce, 1986; Layard, 2005). Shin & Johnson (1978) proposed that this definition refers to hedonic happiness associated with feeling happy, also called euphoria or elation. They argued that feeling happy and being happy are not the same: being happy refers to an enduring condition rather than to momentary pleasures or happy feelings. Accordingly, happiness should be understood as a global evaluation of an individual's life quality according to their own criteria, which includes both cognitions and emotions (Shin & Johnson, 1978). According to this model, feeling happy refers to state happiness, and being happy incorporates both state and trait happiness.

The hedonic concept of happiness does not consider that cognitive appraisal plays an important role in emotional functioning (Frijda, 1998, 2007). According to the dual-route model of emotional processing proposed by LeDoux (2000), triggering information is simultaneously sent to the amygdala, resulting in immediate physiological responses such as “fight or flight” (Cannon, 1929), and to the prefrontal cortex for further cognitive appraisal. Evidence shows that activation of the amygdala can be inhibited by prefrontal brain structures involved in conscious cognition (Thayer et al., 2009; Thayer & Lane, 2000). The impact of cognition on emotional states is also well supported by evidence-based cognitive therapy (Butler et al., 2006; Ellis, 2002). Therefore, the definition of happiness as merely emotional well-being is limited, because it does not account for the cognitive component of happiness supported by both theory and empirical evidence (Diener et al., 1999; Eid & Larsen, 2008; Frijda, 2007).

Rather than construing happiness as merely emotional well-being, Ryff (1989) and Ryff & Keyes (1995) proposed a eudemonic model of happiness, which they also called psychological well-being or positive functioning, comprising six dimensions: purpose in life, personal growth, environmental mastery, autonomy, positive self-regard, and social connections. These dimensions do not include basic components of subjective well-being and happiness, such as emotions and life satisfaction, that are consistently supported by the literature (Helliwell, Huang & Wang, 2014; Diener, Sapyta & Suh, 1998; Rojas & Veenhoven, 2013). The construct validity of the assessment instrument based on these six factors (Ryff, 1989) was challenged by later investigations indicating substantial overlap between the dimensions (Springer & Hauser, 2006; Springer, Hauser & Freese, 2006).

Ryff’s (1989) model of eudemonic happiness was also scrutinized by Diener et al. (2010), resulting in the development of an alternative construct defined as “psychological flourishing,” or an individual's self-perceived success, which is an aspect of life satisfaction. The proposed construct emphasizes positive functioning and covers dimensions such as social relationships, purposeful life, engagement in activities, self-esteem, and optimism, which overlap with components of widely used quality of life and happiness measures (WHOQOL Group, 1998a; Hills & Argyle, 2002). For instance, social relationships is a domain of the quality of life measure (WHOQOL Group, 1998a), and self-esteem and optimism are components of the widely used happiness measure (Hills & Argyle, 2002). The component “purposeful life” implies that one cannot be happy without having a purpose, making happiness an exclusive attribute of those adults who have managed to develop such a purpose. Including this component in a psychometric measure may violate the fundamental measurement principle of invariance across population groups (Thurstone, 1931), because the sense of purpose in life varies substantially across cultural and age groups (Oishi & Diener, 2014). Notwithstanding the importance of eudemonic well-being associated with an individual's fulfilment, it is implicitly included in subjective well-being and reflected by the overall SL (Diener et al., 1999; Eid & Larsen, 2008; Kashdan, Biswas-Diener & King, 2008).

Different measures have been developed to assess well-being-associated constructs; however, the definitions used in these instruments appear inconsistent (Diener et al., 1999; Diener et al., 2010; Fordyce, 1986; Joseph & Lewis, 1998). Also, the one or two items often used to measure well-being or happiness in national and cross-cultural surveys have proved unreliable compared with measures containing more items that cover various well-being components (Andrews & McKennell, 1980; Hills & Argyle, 2002; Joseph & Lewis, 1998).

Hills & Argyle (2002) considered the limitations of earlier happiness measurements when developing their Oxford Happiness Questionnaire (OHQ). The authors used the terms “well-being” and “subjective well-being” as synonyms for “happiness” when describing the OHQ. This instrument is a revised version of the Oxford Happiness Inventory (Argyle, 2001); both scales were widely used at Oxford University for the assessment of personal happiness and have been shown to have satisfactory psychometric properties (Hills & Argyle, 2002). The OHQ is a unidimensional scale that contains items tapping into positive and negative affect, life satisfaction, and happy traits such as sense of control, physical fitness, positive cognition, mental alertness, self-esteem, cheerfulness, optimism, and empathy (Diener, 1984; Hills & Argyle, 2002).

Quality of life has been widely recognized as a health-related issue, associated with the WHO's definition of health as not only the absence of disease but complete mental, social, and physical well-being (WHOQOL Group, 1995, p. 1404). The short-form version of the World Health Organization's Quality of Life measurement tools (WHOQOL-BREF) is a 26-item questionnaire that assesses quality of life on physical, psychological, social, and environmental domains (WHOQOL Group, 1998a).

The WHO definition above supports an emerging consensus that QOL is a multidimensional construct conceptualized as separate domains and sub-domains relating to all areas of life ( Skevington, 2002 ; WHOQOL Group, 1995 ).

In psychology, many variables of interest cannot be measured directly, and as latent constructs the establishment of their properties remains an ongoing challenge. By using accurate operational definitions, a construct's properties can be evaluated, but reliable and valid measurements can be obtained only when the operational definitions themselves have been rigorously developed (Aiken & Groth-Marnat, 2006). Happiness, subjective well-being, and quality of life are concepts that share common components (Table 1) and, arguably, lack standardized operational definitions or criteria. This lack is evident in the interchangeable use of these terms in the research literature (Andrews & McKennell, 1980; Diener et al., 1999; Fordyce, 1986; Shin & Johnson, 1978). The aim of the current study is to clarify the relationships between these constructs empirically by applying widely used and well-validated measures of well-being, including the OHQ, the World Health Organization Quality of Life Questionnaire (WHOQOL-BREF), the SL scale, and the Positive and Negative Affect Scale (PANAS).

Participants

The Auckland University of Technology Ethics Committee granted ethical approval for this study (Ethics Application Number 11/209). New Zealand university students (n = 180) recruited in class completed the study questionnaire; of them, 35 were male (19.9%), 141 were female (80.1%), and four participants did not provide gender information. We conducted a power analysis to estimate the minimum sample size required for a correlational study with α (two-tailed) = 0.05, β = 0.20, and r ≥ 0.25, which is n = 123; our sample size is greater. The sample size also satisfied the criterion of 20 participants per variable for principal component analysis with eight study variables (Hair et al., 1995). The participants' ages ranged from 18 to 55 years, with a mean age of 24.6 years and a standard deviation (SD) of 7.28. Sixty-six participants (36.7%) identified themselves as New Zealand European, 34 (18.9%) as Asian, 24 (13.7%) as Pasifika, 14 (7.8%) as other European, 7 (3.9%) as Maori, and 30 (16.7%) as other ethnicities; five participants did not indicate their ethnicity. These data have been previously used as part of a psychometric investigation that applied Rasch analysis to evaluate the psychometric properties of the OHQ (Medvedev et al., 2016), which is unrelated to the purpose of the current study.
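
As a rough cross-check of the reported sample-size estimate, the sketch below (our own illustration, not the authors' procedure) applies the standard Fisher z approximation for the power of a test of a correlation coefficient; with α = 0.05 (two-tailed), power = 0.80, and r = 0.25 it yields a value close to the n = 123 reported above.

```python
# Approximate minimum sample size for detecting a correlation of a given size,
# using the Fisher z transformation (a common closed-form approximation).
import math
from scipy.stats import norm

def n_for_correlation(r, alpha=0.05, power=0.80):
    z_alpha = norm.ppf(1 - alpha / 2)   # two-tailed critical value
    z_beta = norm.ppf(power)            # quantile corresponding to the desired power
    fisher_z = math.atanh(r)            # Fisher z transform of the target correlation
    return math.ceil(((z_alpha + z_beta) / fisher_z) ** 2 + 3)

print(n_for_correlation(0.25))  # 124 with this approximation, close to the reported n = 123
```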

Instruments

The Oxford Happiness Questionnaire (Hills & Argyle, 2002) includes 29 items with a six-point Likert scale response format. The WHOQOL-BREF quality of life questionnaire (WHOQOL Group, 1998a) includes 26 items with a five-point Likert scale response format representing four different domains. The SL scale contains five items presented in a seven-point Likert scale format (Diener et al., 1985). The PANAS (Watson, Clark & Tellegen, 1988) includes two subscales measuring positive and negative affect independently. Each subscale is composed of 10 adjectives expressing different feelings and emotions, such as “excited,” “interested,” or “distressed,” and participants indicate how well each adjective corresponds to their average feeling on a five-point Likert scale from “not at all or very slightly” = 1 to “extremely” = 5. The composite subjective well-being scale (SWS) was calculated as the mean of z-scores for the SL scale (Diener et al., 1985), the PANAS positive affect subscale, and the reverse-coded PANAS negative affect subscale (Watson, Clark & Tellegen, 1988). Therefore, the SWS combines positive and negative affect and life satisfaction, which are the main components of subjective well-being suggested by the literature (Table 1).
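
The composite SWS described above is straightforward to reproduce. The sketch below is illustrative only; the column names and scores are hypothetical, but the computation mirrors the description: standardise life satisfaction, positive affect, and reverse-coded negative affect, then average the z-scores.

```python
# Illustrative computation of the composite subjective well-being scale (SWS):
# the mean of z-scores for life satisfaction, positive affect, and reversed negative affect.
import pandas as pd

# Hypothetical per-participant scale totals; names and values are our own, not from the study.
df = pd.DataFrame({
    "life_satisfaction": [24, 18, 31, 27, 15],
    "positive_affect":   [38, 29, 41, 35, 22],
    "negative_affect":   [14, 25, 11, 18, 30],
})

# Reverse-code negative affect so that higher values indicate better well-being.
df["negative_affect_rev"] = -df["negative_affect"]

components = ["life_satisfaction", "positive_affect", "negative_affect_rev"]
z = (df[components] - df[components].mean()) / df[components].std(ddof=0)  # column-wise z-scores
df["sws"] = z.mean(axis=1)                                                 # composite SWS
print(df[["sws"]].round(2))
```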

The study questionnaires were completed by the participants in the lecture theaters of the Auckland University of Technology before lectures. The study complied with local ethical guidelines.

Data analyses

The data analysis was performed using the IBM SPSS program, version 24. The data were screened for normality of distribution and for meeting the assumptions of correlation, regression, and principal component analysis. We computed descriptive statistics and examined internal consistency (Cronbach's alpha and item-total correlations) for all included measures with the current dataset. Correlation and regression analyses were conducted to explore the relationships between study variables and the extent to which quality of life domains predict happiness and subjective well-being. Principal component analysis was used to examine communalities and loadings on the first principal component for all study variables.

Psychometric properties of the measures

Psychometric properties of the applied scales were tested with our dataset. The item-total correlations for all scales were in the permissible range of 0.3 to 0.75, with the exception of item 2 of the OHQ, which correlated with the other items at about 0.12. Means, SDs, and reliability coefficients for each scale, including the OHQ, the WHOQOL-BREF domain scales, the SL, and the PANAS positive and negative affect subscales, are summarized in Table 2. The majority of the scales have reliability coefficients over 0.8, with the exception of the social and environmental domain scales of the WHOQOL, which fall below this value.
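
For readers who wish to run the same kind of reliability check on their own data, the following sketch (simulated item responses, not the study data) computes Cronbach's alpha and corrected item-total correlations for a single scale.

```python
# Generic reliability check: Cronbach's alpha and corrected item-total correlations.
import numpy as np

rng = np.random.default_rng(1)
n_respondents, n_items = 200, 8

# Simulated Likert-type responses driven by a common latent trait (illustration only).
trait = rng.normal(size=(n_respondents, 1))
items = np.clip(np.round(3 + trait + rng.normal(scale=0.8, size=(n_respondents, n_items))), 1, 5)

def cronbach_alpha(x):
    """Cronbach's alpha for an items matrix (respondents x items)."""
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1)
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

print("alpha:", round(cronbach_alpha(items), 2))

# Corrected item-total correlation: each item against the sum of the remaining items.
total = items.sum(axis=1)
for j in range(n_items):
    rest = total - items[:, j]
    r = np.corrcoef(items[:, j], rest)[0, 1]
    print(f"item {j + 1}: r = {r:.2f}")
```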

Correlational analysis

Correlations between the outcome variables, gender, and age are represented in Table 3 . The results show that neither gender nor age correlates significantly with any of the scales. The correlations between all well-being related measures are significant and range from moderate to strong.

Oxford happiness is the Oxford Happiness Questionnaire ( Hills & Argyle, 2002 ); QOL is quality of life, QOL general is the general question about quality of life and QOL social, QOL psychological, QOL environment, QOL health are the four domain scales of WHOQOL ( WHOQOL Group, 1998a ); Life satisfaction is the Satisfaction with Life scale ( Diener et al., 1985 ); Positive affect and Negative affect are PANAS subscales measuring positive and negative affect respectively ( Watson, Clark & Tellegen, 1988 ); Subjective well-being is the composite scale of subjective well-being combining the Satisfaction with Life and the PANAS Positive and reversed Negative affect scales.

Multiple regression analysis

The data satisfied the assumptions of multiple regression analysis, with skewness and kurtosis values within ±1, no significant outliers, no signs of multicollinearity, and variance inflation factors below 5. Multiple linear regression analysis was performed to test the regression weights of the WHOQOL domains and their significance in predicting happiness, as measured by the OHQ, and subjective well-being, as measured by the SWS composite. Table 4 shows that the WHOQOL domains together explain 73% of the variance in OHQ happiness scores, that the psychological domain of the WHOQOL is the strongest predictor, and that environmental factors are not significant predictors of happiness. All WHOQOL domains are significant predictors of subjective well-being and together explain about 66% of its variance, again with the psychological domain as the strongest predictor (Table 4).
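
A hedged sketch of this regression step is shown below: it screens skewness and kurtosis, checks variance inflation factors, and regresses the OHQ total on the four WHOQOL-BREF domain scores. The study itself used SPSS 24, and the column names here are hypothetical.

```python
# Hedged sketch of the regression analysis (the study used SPSS 24).
# Predictor and outcome column names are hypothetical.
import pandas as pd
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

df = pd.read_csv("wellbeing_data.csv")
domains = ["qol_physical", "qol_psych", "qol_social", "qol_environment"]

# Screening: skewness and kurtosis should fall within roughly +/-1.
print(df[domains + ["ohq_total"]].skew().round(2))
print(df[domains + ["ohq_total"]].kurt().round(2))

X = sm.add_constant(df[domains])
y = df["ohq_total"]

# Multicollinearity check: variance inflation factors below ~5 (constant excluded).
vif = pd.Series(
    [variance_inflation_factor(X.values, i) for i in range(1, X.shape[1])],
    index=domains,
)
print(vif.round(2))

model = sm.OLS(y, X).fit()
print(model.summary())                   # coefficients and their significance
print("R^2:", round(model.rsquared, 2))  # cf. the ~73% reported for the OHQ
```

Repeating the fit with the SWS composite as the outcome would give the second model reported in Table 4.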

Happiness is measured by the Oxford Happiness Questionnaire.

R, multiple correlation coefficient.

Principal component analysis

Principal component analysis was first conducted on all applied scales to extract the communality of each scale. Extracted communalities and loadings on the single factor for all scales, together with the total variance explained and the total eigenvalue, are presented in Table 5. The extracted communalities of the scales range from 0.38 (social relationships) to 0.83 (the OHQ and the psychological domain of the WHOQOL).

The scree plot (Fig. 1) shows a sharp drop with a clear Cattell's cut-off point (elbow) after the first principal component; the remainder of the plot, representing the other extracted components, is shallow and almost flat.
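
For readers who want to reproduce this kind of analysis, the sketch below runs a principal component analysis on the correlation matrix of the scale scores, reports first-component loadings and communalities, and draws the scree plot. It is illustrative only (the study used SPSS 24), and the column names are hypothetical.

```python
# Hedged sketch of the principal component analysis (the study used SPSS 24).
# Scale-score column names are hypothetical.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("wellbeing_data.csv")
scales = df[["ohq_total", "qol_physical", "qol_psych", "qol_social",
             "qol_environment", "sl_total", "panas_pa", "panas_na"]]

corr = scales.corr().values
eigvals, eigvecs = np.linalg.eigh(corr)          # eigh returns ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Loadings on the first component (the sign of a component is arbitrary
# and may need flipping for interpretation).
loadings = eigvecs[:, 0] * np.sqrt(eigvals[0])
communalities = loadings ** 2                    # single-component communalities
print(pd.DataFrame({"loading": loadings, "communality": communalities},
                   index=scales.columns).round(2))
print("Variance explained by PC1:", round(eigvals[0] / len(eigvals), 2))

# Scree plot: look for the sharp elbow after the first component (cf. Fig. 1).
plt.plot(range(1, len(eigvals) + 1), eigvals, marker="o")
plt.xlabel("Component number")
plt.ylabel("Eigenvalue")
plt.show()
```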

Figure 1: Scree plot of the principal component analysis of all applied scales.

In an alternative analysis, the SL and PANAS scales were replaced by the SWS composite and the same analysis was repeated (Table 6). The extracted communalities range from 0.42 (social relationships) to 0.81 (the OHQ and the psychological domain of the WHOQOL).

Discussion

The aim of this study was to investigate the relationship between happiness, subjective well-being, quality of life, and related components by applying widely used scales with satisfactory psychometric properties. Our data offer a preliminary clarification of the relationships between happiness, subjective well-being, and quality of life. The results show that all applied well-being measures have high loadings on a global well-being dimension that explains about 80% of the variance in the OHQ, the psychological domain of quality of life, and subjective well-being (Tables 5 and 6). These findings support the proposed global dimension of well-being that transcends relative distinctions between the specific components contributing to overall wellness (Kashdan, Biswas-Diener & King, 2008; Hills & Argyle, 1998, 2002; Joseph & Lewis, 1998). The results also support the interchangeable use of happiness and subjective well-being, and suggest that these constructs and the quality-of-life domains may be considered facets of a global well-being construct.

Widely used well-being measures capture a subjective evaluation of an individual's condition, and the top-down approach suggests that people are happier if they evaluate their life, including its eudaimonic aspects, in a positive way (Diener et al., 1999; Diener, 1984). In contrast, negative evaluations diminish well-being, may discount eudaimonic components, and may lead to psychological conditions such as depression or anxiety (Diener et al., 1999; Beck, 1991, 1993). The moderate to strong correlations found between the subjective well-being measures in this study were therefore expected under this approach. The strongest relationships are evident between the OHQ, the psychological domain of the WHOQOL, the SL, the PANAS positive affect subscale, and the SWS (Table 3). It is likely that the correlation values were suppressed by the unrepresentative student sample, which included a substantial proportion of international students for whom English is a second language and which might therefore have produced a response bias. The correlation values may also have suffered from inconsistent item wording across scales. For example, the WHOQOL items ask participants to evaluate their affective experiences over the last two weeks (WHOQOL Group, 1998a), whereas the PANAS asks how participants feel on average (Watson, Clark & Tellegen, 1988). Thus, the correlations could be higher if uniform wording were used for all scales and if they were applied to a larger sample more representative of the general population.

Furthermore, the results show that the psychological, physical health, social, and environmental domains of the WHOQOL together explain 73% of happiness measured by the OHQ and 66% of subjective well-being. In both cases the psychological domain was the strongest predictor, whereas environmental factors explained only 14% of the variance in subjective well-being and were not significant predictors of happiness, suggesting that environment is not a central determinant of individual happiness. The social relationships domain explained the least variance in the global well-being construct, indicating that it may play an important but not the major role in the well-being of the current sample. These results are consistent with earlier studies supporting the top-down approach and emphasizing the role of individual cognition in subjective happiness (Andrews & McKennell, 1980; Andrews & Withey, 1976; Butler et al., 2006).

The main obstacle to addressing the redundancy issue is the proposed multidimensional structure of the WHOQOL, in which the four domains are typically assessed independently without a combined quality-of-life score (WHOQOL Group, 1998b). However, our results suggest that the psychological domain of the WHOQOL can be used as an alternative brief measure of happiness or subjective well-being, which is an advantage because it is a six-item scale with good reliability compared with the 29-item OHQ.

When tested with our data set, the psychometric properties of all scales used in the study were satisfactory and consistent with earlier research (Hills & Argyle, 2002; Diener et al., 1985; Watson, Clark & Tellegen, 1988; WHOQOL Group, 1998a), with the exception of item 2 of the OHQ, which correlated with the other items at 0.12, below the commonly accepted level of 0.30. Discarding this item would slightly increase the reliability of the OHQ and is recommended for future applications of the scale, although it is unlikely that this single item strongly affects the scale's already high overall reliability (α = 0.90).
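
To check the claim about item 2 directly, an "alpha if item deleted" calculation can be run; the self-contained sketch below does this for the 29 OHQ items. As before, this is only an illustration with hypothetical column names, not the analysis code used in the study.

```python
# Hedged, self-contained sketch: Cronbach's alpha with each OHQ item removed,
# to see whether dropping item 2 would raise reliability. Column names are hypothetical.
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    k = items.shape[1]
    return (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

ohq = pd.read_csv("wellbeing_data.csv")[[f"ohq_{i}" for i in range(1, 30)]]

print("alpha (all items):", round(cronbach_alpha(ohq), 2))  # reported as 0.90
alpha_if_deleted = pd.Series(
    {col: cronbach_alpha(ohq.drop(columns=col)) for col in ohq.columns}
)
print(alpha_if_deleted.round(3).sort_values(ascending=False).head())
```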

Limitations

The common limitations of subjective well-being research concern participants' transient mood states and other contextual influences, which might affect participants' responses (Eid & Larsen, 2008). These effects were minimized here because the data were collected in different classes. Another limitation of this study is the modest sample size and the disproportionately larger number of female participants (80.1%) compared with male participants (19.9%), which limits generalization of the findings to the male population. In this study we used all measures in their original form, without enhancement of their psychometric properties, to maintain consistency with earlier studies. Recently proposed modifications of happiness and quality-of-life assessment tools (Medvedev et al., 2016; Krägeloh et al., 2016) may contribute to more accurate estimates of the relationships between these happiness and well-being measures. However, this would require similar psychometric enhancement of all other measures involved in the analyses (e.g., the PANAS), which is not available to date.

Future Directions

Further research should investigate the relationship between happiness, subjective well-being, and quality of life in more diverse populations, including people who differ in socio-economic status and health conditions, as these dimensions have proved crucial in the assessment of well-being under its different formalizations. Research should also focus on developing more accurate instruments for assessing happiness and subjective well-being by considering the WHOQOL domains and other relevant measures.

Conclusions

Taken together, the findings of this study provide support for a global well-being dimension and for the interchangeable use of the terms happiness, subjective well-being, and psychological quality of life with the current sample and measures. The WHOQOL measures happiness or subjective well-being through its psychological domain, but it also includes subscales focused on perceived physical health and on more externally oriented domains such as social relationships and environmental factors. These differences should be considered in measurement definitions to refine reliability and validity. The findings of this study contribute to a better understanding of the relationships between happiness, subjective well-being, and quality of life, which is necessary for more accurate assessment of these constructs. They also have implications for enhancing people's well-being, happiness, and quality of life through the development of contentment and emotional stability. Further investigation with larger, heterogeneous samples and other well-being measures is warranted.

Supplemental Information

Supplemental Information 1

Funding Statement

The authors received no funding for this work.

Additional Information and Declarations

The authors declare that they have no competing interests.

Oleg N. Medvedev conceived and designed the experiments, performed the experiments, analyzed the data, contributed reagents/materials/analysis tools, prepared figures and/or tables, authored or reviewed drafts of the paper, approved the final draft, obtained ethics approval.

C. Erik Landhuis conceived and designed the experiments, contributed reagents/materials/analysis tools, authored or reviewed drafts of the paper, approved the final draft.

The following information was supplied relating to ethical approvals (i.e., approving body and any reference numbers):

The Auckland University of Technology Ethics Committee granted ethical approval for this study (Ethics Application Number 11/209).

113 Great Research Paper Topics

One of the hardest parts of writing a research paper can be just finding a good topic to write about. Fortunately we've done the hard work for you and have compiled a list of 113 interesting research paper topics. They've been organized into ten categories and cover a wide range of subjects so you can easily find the best topic for you.

In addition to the list of good research topics, we've included advice on what makes a good research paper topic and how you can use your topic to start writing a great paper.

What Makes a Good Research Paper Topic?

Not all research paper topics are created equal, and you want to make sure you choose a great topic before you start writing. Below are the three most important factors to consider to make sure you choose the best research paper topics.

#1: It's Something You're Interested In

A paper is always easier to write if you're interested in the topic, and you'll be more motivated to do in-depth research and write a paper that really covers the entire subject. Even if a certain research paper topic is getting a lot of buzz right now or other people seem interested in writing about it, don't feel tempted to make it your topic unless you genuinely have some sort of interest in it as well.

#2: There's Enough Information to Write a Paper

Even if you come up with the absolute best research paper topic and you're so excited to write about it, you won't be able to produce a good paper if there isn't enough research about the topic. This can happen for very specific or specialized topics, as well as topics that are too new to have enough research done on them at the moment. Easy research paper topics will always be topics with enough information to write a full-length paper.

Trying to write a research paper on a topic that doesn't have much research on it is incredibly hard, so before you decide on a topic, do a bit of preliminary searching and make sure you'll have all the information you need to write your paper.

#3: It Fits Your Teacher's Guidelines

Don't get so carried away looking at lists of research paper topics that you forget any requirements or restrictions your teacher may have put on research topic ideas. If you're writing a research paper on a health-related topic, deciding to write about the impact of rap on the music scene probably won't be allowed, but there may be some sort of leeway. For example, if you're really interested in current events but your teacher wants you to write a research paper on a history topic, you may be able to choose a topic that fits both categories, like exploring the relationship between the US and North Korea. No matter what, always get your research paper topic approved by your teacher first before you begin writing.

113 Good Research Paper Topics

Below are 113 good research topics to help get you started on your paper. We've organized them into ten categories to make it easier to find the type of research paper topics you're looking for.

Arts/Culture

  • Discuss the main differences in art from the Italian Renaissance and the Northern Renaissance.
  • Analyze the impact a famous artist had on the world.
  • How is sexism portrayed in different types of media (music, film, video games, etc.)? Has the amount/type of sexism changed over the years?
  • How has the music of slaves brought over from Africa shaped modern American music?
  • How has rap music evolved in the past decade?
  • How has the portrayal of minorities in the media changed?

Current Events

  • What have been the impacts of China's one child policy?
  • How have the goals of feminists changed over the decades?
  • How has the Trump presidency changed international relations?
  • Analyze the history of the relationship between the United States and North Korea.
  • What factors contributed to the current decline in the rate of unemployment?
  • What have been the impacts of states which have increased their minimum wage?
  • How do US immigration laws compare to immigration laws of other countries?
  • How have the US's immigration laws changed in the past few years/decades?
  • How has the Black Lives Matter movement affected discussions and views about racism in the US?
  • What impact has the Affordable Care Act had on healthcare in the US?
  • What factors contributed to the UK deciding to leave the EU (Brexit)?
  • What factors contributed to China becoming an economic power?
  • Discuss the history of Bitcoin or other cryptocurrencies.
  • Do students in schools that eliminate grades do better in college and their careers?
  • Do students from wealthier backgrounds score higher on standardized tests?
  • Do students who receive free meals at school get higher grades compared to when they weren't receiving a free meal?
  • Do students who attend charter schools score higher on standardized tests than students in public schools?
  • Do students learn better in same-sex classrooms?
  • How does giving each student access to an iPad or laptop affect their studies?
  • What are the benefits and drawbacks of the Montessori Method?
  • Do children who attend preschool do better in school later on?
  • What was the impact of the No Child Left Behind act?
  • How does the US education system compare to education systems in other countries?
  • What impact do mandatory physical education classes have on students' health?
  • Which methods are most effective at reducing bullying in schools?
  • Do homeschoolers who attend college do as well as students who attended traditional schools?
  • Does offering tenure increase or decrease quality of teaching?
  • How does college debt affect future life choices of students?
  • Should graduate students be able to form unions?

  • What are different ways to lower gun-related deaths in the US?
  • How and why have divorce rates changed over time?
  • Is affirmative action still necessary in education and/or the workplace?
  • Should physician-assisted suicide be legal?
  • How has stem cell research impacted the medical field?
  • How can human trafficking be reduced in the United States/world?
  • Should people be able to donate organs in exchange for money?
  • Which types of juvenile punishment have proven most effective at preventing future crimes?
  • Has the increase in US airport security made passengers safer?
  • Analyze the immigration policies of certain countries and how they are similar and different from one another.
  • Several states have legalized recreational marijuana. What positive and negative impacts have they experienced as a result?
  • Do tariffs increase the number of domestic jobs?
  • Which prison reforms have proven most effective?
  • Should governments be able to censor certain information on the internet?
  • Which methods/programs have been most effective at reducing teen pregnancy?
  • What are the benefits and drawbacks of the Keto diet?
  • How effective are different exercise regimes for losing weight and maintaining weight loss?
  • How do the healthcare plans of various countries differ from each other?
  • What are the most effective ways to treat depression?
  • What are the pros and cons of genetically modified foods?
  • Which methods are most effective for improving memory?
  • What can be done to lower healthcare costs in the US?
  • What factors contributed to the current opioid crisis?
  • Analyze the history and impact of the HIV/AIDS epidemic.
  • Are low-carbohydrate or low-fat diets more effective for weight loss?
  • How much exercise should the average adult be getting each week?
  • Which methods are most effective to get parents to vaccinate their children?
  • What are the pros and cons of clean needle programs?
  • How does stress affect the body?
  • Discuss the history of the conflict between Israel and the Palestinians.
  • What were the causes and effects of the Salem Witch Trials?
  • Who was responsible for the Iran-Contra situation?
  • How have New Orleans and the government's response to natural disasters changed since Hurricane Katrina?
  • What events led to the fall of the Roman Empire?
  • What were the impacts of British rule in India?
  • Was the atomic bombing of Hiroshima and Nagasaki necessary?
  • What were the successes and failures of the women's suffrage movement in the United States?
  • What were the causes of the Civil War?
  • How did Abraham Lincoln's assassination impact the country and reconstruction after the Civil War?
  • Which factors contributed to the colonies winning the American Revolution?
  • What caused Hitler's rise to power?
  • Discuss how a specific invention impacted history.
  • What led to Cleopatra's fall as ruler of Egypt?
  • How has Japan changed and evolved over the centuries?
  • What were the causes of the Rwandan genocide?

  • Why did Martin Luther decide to split with the Catholic Church?
  • Analyze the history and impact of a well-known cult (Jonestown, Manson family, etc.)
  • How did the sexual abuse scandal impact how people view the Catholic Church?
  • How has the Catholic church's power changed over the past decades/centuries?
  • What are the causes behind the rise in atheism/agnosticism in the United States?
  • What influences in Siddhartha's life resulted in him becoming the Buddha?
  • How has media portrayal of Islam/Muslims changed since September 11th?

Science/Environment

  • How has the earth's climate changed in the past few decades?
  • How has the use and elimination of DDT affected bird populations in the US?
  • Analyze how the number and severity of natural disasters have increased in the past few decades.
  • Analyze deforestation rates in a certain area or globally over a period of time.
  • How have past oil spills changed regulations and cleanup methods?
  • How has the Flint water crisis changed water regulation safety?
  • What are the pros and cons of fracking?
  • What impact has the Paris Climate Agreement had so far?
  • What have NASA's biggest successes and failures been?
  • How can we improve access to clean water around the world?
  • Does ecotourism actually have a positive impact on the environment?
  • Should the US rely on nuclear energy more?
  • What can be done to save amphibian species currently at risk of extinction?
  • What impact has climate change had on coral reefs?
  • How are black holes created?
  • Are teens who spend more time on social media more likely to suffer anxiety and/or depression?
  • How will the loss of net neutrality affect internet users?
  • Analyze the history and progress of self-driving vehicles.
  • How has the use of drones changed surveillance and warfare methods?
  • Has social media made people more or less connected?
  • What progress has currently been made with artificial intelligence?
  • Do smartphones increase or decrease workplace productivity?
  • What are the most effective ways to use technology in the classroom?
  • How is Google search affecting our intelligence?
  • When is the best age for a child to begin owning a smartphone?
  • Has frequent texting reduced teen literacy rates?

How to Write a Great Research Paper

Even great research paper topics won't give you a great research paper if you don't hone your topic before and during the writing process. Follow these three tips to turn good research paper topics into great papers.

#1: Figure Out Your Thesis Early

Before you start writing a single word of your paper, you first need to know what your thesis will be. Your thesis is a statement that explains what you intend to prove/show in your paper. Every sentence in your research paper will relate back to your thesis, so you don't want to start writing without it!

As some examples, if you're writing a research paper on whether students learn better in same-sex classrooms, your thesis might be "Research has shown that elementary-age students in same-sex classrooms score higher on standardized tests and report feeling more comfortable in the classroom."

If you're writing a paper on the causes of the Civil War, your thesis might be "While the dispute between the North and South over slavery is the most well-known cause of the Civil War, other key causes include differences in the economies of the North and South, states' rights, and territorial expansion."

#2: Back Every Statement Up With Research

Remember, this is a research paper you're writing, so you'll need to use lots of research to make your points. Every statement you give must be backed up with research, properly cited the way your teacher requested. You're allowed to include opinions of your own, but they must also be supported by the research you give.

#3: Do Your Research Before You Begin Writing

You don't want to start writing your research paper and then learn that there isn't enough research to back up the points you're making, or, even worse, that the research contradicts the points you're trying to make!

Get most of your research on your good research topics done before you begin writing. Then use the research you've collected to create a rough outline of what your paper will cover and the key points you're going to make. This will help keep your paper clear and organized, and it'll ensure you have enough research to produce a strong paper.
