
METHODS article

Farewell to bright-line: a guide to reporting quantitative results without the s-word.

Kevin M. Cummins* and C. Marks

  • 1 Division of Infectious Disease and Global Public Health, SDSU-UCSD Joint Doctoral Program in Interdisciplinary Research on Substance Use, San Diego, CA, United States
  • 2 Department of Psychology, University of California, San Diego, San Diego, CA, United States
  • 3 Department of Psychiatry, University of California, San Diego, San Diego, CA, United States

Recent calls to end the practice of categorizing findings based on statistical significance have focused on what not to do. Practitioners who subscribe to the conceptual basis behind these calls may be unaccustomed to presenting results in the nuanced and integrative manner that has been recommended as an alternative. This alternative is often presented as a vague proposal. Here, we provide practical guidance and examples for adopting a research evaluation posture and communication style that operates without bright-line significance testing. Characteristics of the structure of results communications that are based on conventional significance testing are presented. Guidelines for writing results without the use of bright-line significance testing are then provided. Examples of conventional styles for communicating results are presented. These examples are then modified to conform to recent recommendations. These examples demonstrate that basic modifications to written scientific communications can increase the information content of scientific reports without a loss of rigor. The adoption of alternative approaches to results presentations can help researchers comply with multiple recommendations and standards for the communication and reporting of statistics in the psychological sciences.

Introduction

The abandonment of significance testing has been proposed by some researchers for several decades (Hunter, 1997; Krantz, 1999; Kline, 2004; Armstrong, 2007). In place of heavy reliance on significance testing, a thorough interrogation of data and replication of findings can be relied upon to build scientific knowledge (Carver, 1978) and has been recommended in particular for exploratory research (Gigerenzer, 2018). The replication crisis has demonstrated that significance testing alone does not ensure that reported findings are adequately reliable (Nosek et al., 2015). Indeed, the practice of focusing on significance testing during analysis is a motivator for “P-hacking” and expeditions into the “garden of forking paths” (Gelman and Loken, 2014; Szucs and Ioannidis, 2017). These problems are compounded by the outright misunderstanding and misuse of p-values (Goodman, 2008; Wasserstein, 2016). The practice of bright-line significance testing has been considered a generator of scientific confusion (Gelman, 2013), though appropriately interpreted p-values can provide guidance in results interpretation.

A recent issue of The American Statistician and a commentary in Nature suggest a seemingly simple conciliatory solution to the problems associated with current statistical practices ( Amrhein et al., 2019 ; Wasserstein et al., 2019 ). The authors of these articles advocate ending the use of bright-line statistical testing in favor of a thoughtful, open, and modest approach to results reporting and evaluation ( Wasserstein et al., 2019 ). This call to action opens the door to the widespread adoption of various alternative practices. A transition will require authors to adopt new customs for analysis and communication. Reviewers and editors will also need to recognize and accept communication styles that are congruent with these recommendations.

Many researchers may subscribe to the ideas behind the criticisms of traditional significance testing based on bright-line decision rules but are unaccustomed to communicating findings without them. This is a surmountable barrier. While recent calls to action espouse principles that researchers should follow, tangible examples, both of traditional approaches to statistical reporting, as well as the newly recommended ones, may serve as a needed resource for many researchers. The aim of this paper is to provide researchers guidance in ensuring their repertoire of approaches and communication styles include approaches consistent with these newly reinforced recommendations. Guidance on crafting results may facilitate some researchers’ transition toward the execution of the recent recommendations.

What Is the Dominant Approach?

Null hypothesis significance testing (NHST) dominates the contemporary application of statistics in the psychological sciences. A common approach to structuring a research report based on NHST (and we note that there are many variations) follows these steps: first, a substantive question is articulated; then, an appropriately matched statistical null hypothesis is constructed and evaluated. The statistical question is often distinct from the substantive question of interest. Next, upon execution of the chosen procedure, the appropriate metrics, including the p-value, are extracted. Finally, if this p-value (or, equivalently, the test statistic) is more extreme than a pre-determined bright-line α, typically 0.05, the result is declared significant.
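
The decision rule described above can be reduced to a few lines of code. The following minimal sketch, written in Python with simulated data (the group sizes, the effect, and α = 0.05 are assumptions for illustration, not values from any study discussed here), shows how the workflow collapses a substantive question into a dichotomous verdict.

```python
# Minimal sketch of the bright-line NHST workflow: extract a p-value and
# dichotomize it at a pre-determined alpha. Data are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=0.0, scale=1.0, size=50)  # hypothetical comparison group
treated = rng.normal(loc=0.4, scale=1.0, size=50)  # hypothetical treated group

t_stat, p_value = stats.ttest_ind(treated, control)
alpha = 0.05
verdict = "significant" if p_value < alpha else "not significant"
print(f"t = {t_stat:.2f}, p = {p_value:.3f} -> declared {verdict}")
```

Everything the reader learns from a report written in this style is the verdict on the final line; the magnitude of the difference never enters the text.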

The dominant style of research communication that has arisen from this approach emphasizes the dissemination of findings that meet the significance threshold, often disregarding the potential for non-significant findings to help address the substantive scientific questions at hand. The concern with dichotomizing findings was distilled by Altman (1990) when he wrote, “It is ridiculous to interpret the results of a study differently according to whether or not the P-value obtained was, say, 0.055 or 0.045. These should lead to very similar conclusions, not diametrically opposed ones.” Further, this orientation has facilitated the de-emphasis of the functional associations between the variables under investigation. In the simplest case, researchers have failed to focus on the magnitude of associations (Kirk, 1996). Whereas Kelley and Preacher (2012) have described the differences between effect size and effect magnitude, we propose a more general focus on the functional associations between our variables of interest, which are often complex, contingent, and curvilinear, and so often cannot be adequately distilled into a single number. Although we will refer to effect sizes using the conventional definition, we want the reader to recognize that, in practice, this usage is not consistently tied to causal inference. Adapting Kelley and Preacher’s (2012) definition, we treat effect size as the quantitative reflection(s) of some feature(s) of a phenomenon under investigation. In other words, it comprises the quantitative features of the functional association between variables in a system under study; it tells us how much our outcome variable is expected to change based on differences in the predictors. If changes in a predictor produce changes in the outcome that are too small to be of practical value, a finding of statistical significance may be irrelevant to the field. The shape, features, and magnitude of functional associations in studied phenomena should be the focus of researchers’ descriptions of findings. To this end, the reader is encouraged to consult several treatments of effect size indices to assist in the identification of appropriate statistics (Ellis, 2010; Grissom and Kim, 2014; Cumming, 2017).
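
The point that a significance test can be divorced from practical relevance is easy to demonstrate numerically. The sketch below uses simulated data; the sample size and the 0.02 SD true difference are arbitrary illustrative choices. A trivially small effect nonetheless yields a very small p-value once the sample is large, which is why the magnitude and its interval, not the verdict, carry the meaning.

```python
# With a large enough sample, a practically negligible effect still "reaches
# significance"; the reported magnitude and interval are what convey its relevance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 200_000
group_a = rng.normal(loc=0.00, scale=1.0, size=n)
group_b = rng.normal(loc=0.02, scale=1.0, size=n)  # true difference of 0.02 SD

diff = group_b.mean() - group_a.mean()
se = np.sqrt(group_a.var(ddof=1) / n + group_b.var(ddof=1) / n)
low, high = diff - 1.96 * se, diff + 1.96 * se
p = stats.ttest_ind(group_b, group_a).pvalue

print(f"difference = {diff:.3f} SD (95% CI: {low:.3f} to {high:.3f}), p = {p:.1e}")
```

Reporting the interval makes plain that the effect, while precisely estimated, is only about 0.02 standard deviations.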

Herein, we present a generalized version of this significance-orientation communication style (SOCS) and steps that can be taken to transition to a post-significance communication style (POCS), which will help researchers focus on the structure of the associations they are studying rather than merely on evidence that an association exists. Examples of how SOCS results write-ups can be updated to meet the standards of this new style follow.

Significance Orientation Communications Structure

The structure for a passage in a results section written in the SOCS frequently includes:

1. A reference to a table or figure,

2. A declaration of significance, and,

3. A declaration of the direction of the association (positive or negative).

The ordering is not consistent but often begins with the reference. The authors write a statement such as, “Table 1 contains the results of the regression models,” where Table 1 holds the statistics from a series of models. There may be no further verbal description of the pattern of findings. The second sentence is commonly a declaration of the result of a significance test, such as, “In adjusted models, depression scores were significantly associated with the frequency of binge drinking episodes (p < 0.05).” If the direction of association was not incorporated into the second sentence, a third sentence might follow; for instance, “After adjustment for covariates, depression scores were positively associated with binge episodes.” Variation in this structure occurs, and in many instances, some information regarding the magnitude of associations (i.e., effect sizes) is presented. However, because this approach focuses on the results of a significance test, the description of the effect is often treated as supplemental or perfunctory. This disposition explains many misunderstandings and misuses of standardized effect sizes (Baguley, 2009). Findings that do not meet the significance threshold are often only available to the reader in the tables and frequently not considered when answering the substantive question at hand. However, interval estimates can and should be leveraged even when the null hypothesis is not rejected (Kelley and Preacher, 2012).

Post-significance Communications Structure

Here, we present an overarching structure for what the text of a results section could look like when using the post-significance communication style (POCS). The emphasis shifts from identifying significant results to applying all findings toward the purpose of answering the substantive question under study. The first sentence can be a direct answer to this question, which the authors posed in the introduction – the findings of the statistical tests should be placed in the context of the scientific hypotheses they address. Next, the quantitative results of the statistical analyses should be described, and, as part of this description, a directional reference to supporting tables and figures can be noted. Emphasis should be placed on making sure the results are presented in a form that allows the reader to confirm whether the authors’ assessment in the first sentence is appropriate. This will often include a parenthetical notation of the p-value associated with the presented parameter estimates. Significance is not an isolated focus, and its presentation is not contingent on the p-value reaching a threshold. Instead, p-values are part of the support and context for the answer statement (Schneider, 2015). This is reflected in their position within the paper. They can be placed in tables, presented parenthetically, or set off from the rest of the text with commas when parentheses would add an additional level of enclosure. P-values should always be presented as continuous statistics and recognized as providing graded levels of evidence (Murtaugh, 2014; Wasserstein et al., 2019).
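
As a concrete illustration of this reporting format, the sketch below fits an ordinary least-squares model to simulated data and assembles the kind of parenthetical statement described above: a point estimate, an interval estimate, and an exact p-value reported together rather than a significance verdict. The data, model, and variable names are assumptions for illustration only.

```python
# Sketch of POCS-style reporting: the estimate, its 95% CI, and the exact p-value
# are assembled into one statement; no bright-line verdict is issued.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = rng.normal(size=200)                # hypothetical predictor
y = 0.3 * x + rng.normal(size=200)      # hypothetical outcome

fit = sm.OLS(y, sm.add_constant(x)).fit()
slope = fit.params[1]
ci_low, ci_high = fit.conf_int()[1]
p = fit.pvalues[1]

print(f"b = {slope:.2f} (95% CI: {ci_low:.2f} to {ci_high:.2f}), p = {p:.3f}")
```

The printed statement can be dropped into a results paragraph regardless of whether the interval happens to exclude zero.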

Even where p-values are large, the authors should focus on describing patterns relevant to the question at hand. Assuming a good study design, the best estimates based on the data being presented are the point estimates, regardless of the p-value. Considering the context of the interval estimates is also critical in all circumstances, because we do not want to conflate random noise with effects. The remaining sentences should be descriptions of the auxiliary patterns in the data that are pertinent to the scientific questions at hand. In many cases, these descriptions function as annotations of the key patterns found in the tables and figures.

To help clarify how we can transition from the SOCS style to the POCS style, we provide two examples from our own research.

Example: Factors Related to Injection Drug Use Initiation Assistance

Using data from a multi-site prospective cohort study, we investigated factors that were associated with providing injection assistance to previously injection-naïve individuals the first time they injected ( Marks et al., 2019 ). Most initiations (i.e., the first time an injection-naïve person injects drugs) are facilitated by other people who inject drugs (PWID). There is evidence that PWID receiving opioid agonist treatment have a reduced likelihood of providing assistance to someone initiating injection drug use ( Mittal et al., 2017 ). We are interested in understanding the extent to which opioid agonist treatment enrollment and other factors are associated with assisting injection drug use initiation. The following describes part of what we recently found, using a conventional SOCS approach ( Marks et al., 2019 ):

Conventional example 1

As shown in Table B¹, the likelihood of recently (past 6 months) assisting injection drug use initiation was significantly related to recent enrollment in opioid agonist treatment (z = −2.52, p = 0.011) and methamphetamine injecting (z = 2.38, p = 0.017) in Vancouver. Enrollment in the opioid agonist treatment arm was associated with a lower likelihood of assisting injection initiation. The relative risk was significantly elevated for those injecting methamphetamine, whereas speedball injecting was not significantly associated with initiation assistance (z = 1.84, p = 0.064).

This example starts with a reference to a table. It then indicates the patterns of significance and the direction of the effects. The parameter estimates that describe the magnitude and functional form can be extracted from the table (see Marks et al., 2019 ); however, significance tests are the focus of what is being communicated. The parameter estimates are absent from the text. No information about the non-significant association is developed. Abandoning significance tests and broadening the focus to include parameter estimates increases both the total information content and information density of the text. Now we will rewrite this paragraph in the POCS style.

The first sentence can be a direct answer to the research question proposed. Based on prior evidence, we had hypothesized opioid agonist treatment enrollment would decrease the likelihood of assisting an initiation; thus, for our new first sentence, we propose:

Results of our multivariable model are consistent with our hypothesis that recent enrollment in opioid agonist treatment was associated with a decreased likelihood of recently assisting injection initiation in Vancouver.

We have begun by directly addressing how our findings answer our research question. Next, we want to present the details of the quantitative patterns. This can also be the first sentence when the functional association is simple. We also want to make sure to present the results in a way that increases the value of information available to the reader – in this case, instead of presenting regression point estimates, we present the relative risk and proportional effects. As such, we propose:

Recent opioid agonist treatment enrollment was associated with a 12 to 63% reduction in likelihood of assisting initiation (RR: 0.58, 95% CI: 0.37–0.88, p = 0.011, Table B).

This lets the reader know not only that we have a high degree of confidence in the direction of the effect (indicated by both the confidence interval and the p-value), but also that the magnitude of the effect is large enough to warrant considering opioid agonist treatment as a tool for addressing injection initiation. If, for example, our confidence interval had been 0.96–0.98, then even though we would feel confident in the direction of the effect, we might deem it inappropriate to suggest changes to treatment implementation, given such a small potential return on investment. Relying solely upon significance testing to determine the value of findings could result in glossing over this critical piece of information (i.e., the effect size). In addition, we note that we have now included a reference to the table where further details and context can be inspected.
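
For readers who want to trace the arithmetic, the conversion between the relative-risk scale and the percent-reduction phrasing used above is sketched below. The numbers are simply the published point estimate and interval bounds; the helper function is a hypothetical convenience, not code from the original analysis.

```python
# Mapping a relative risk and its 95% CI onto percent-change language:
# RR = 0.88 corresponds to a 12% reduction, RR = 0.37 to a 63% reduction.
rr_point, rr_low, rr_high = 0.58, 0.37, 0.88

def percent_change(rr):
    """Percent increase (+) or decrease (-) implied by a relative risk."""
    return (rr - 1.0) * 100.0

print(f"point estimate: {percent_change(rr_point):+.0f}%")                            # about -42%
print(f"95% CI: {percent_change(rr_high):+.0f}% to {percent_change(rr_low):+.0f}%")   # -12% to -63%
```

The same formula maps the methamphetamine and speedball intervals reported below onto the percentage ranges quoted for them.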

Finally, we want to examine additional patterns in the data. In the SOCS style paragraph, we reflected on the significance of both the effect of recent methamphetamine injection and recent speedball (heroin and cocaine) injection. While our primary research question focused on the impact of opioid agonist treatment, we can still also present results for related secondary questions regarding methamphetamine and speedball injection, so we write:

Recent methamphetamine injection was associated with a 12% to 227% increase in likelihood of assisting initiation (RR: 1.91, 95% CI: 1.12–3.27, p = 0.017). Similarly, recent speedball injection was associated with an effect ranging from a 3% decrease to a 193% increase in likelihood of assisting initiation (RR: 1.68, 95% CI: 0.97–2.93, p = 0.064).

Here, we find that methamphetamine and speedball injection had similar confidence interval estimates. Instead of saying that the effect of speedball injection was “not significant” because the p-value exceeds 0.05, we present the confidence intervals of the methamphetamine and speedball injection relative risks. From these, the reader can evaluate our conclusion that the findings preclude anything more than a small protective effect of speedball injection on assisting initiation. We can gain some knowledge from non-significant findings. Relevant stakeholders may determine that a range spanning a 3% reduction to a 193% increase in risk is strong enough evidence to allocate resources to further study and/or intervene on speedball injection. We note that assessing the acceptability of characterizing, in this way, an effect that did not meet traditional standards of significance is a complex task, and that it will depend on the consensus of the authors, reviewers, and editors. This subjectivity of assessment exemplifies the importance of the POCS style, as it requires all stakeholders in the peer-review process to engage critically with the interpretation of “not significant” findings.

Our new POCS paragraph reads:

Results of our multivariable model were consistent with our hypothesis that recent enrollment in opioid agonist treatment was associated with a decreased likelihood of recently assisting injection initiation in Vancouver. Recent opioid agonist treatment enrollment was associated with a 12 to 63% reduction in likelihood of assisting initiation (RR: 0.58, 95% CI: 0.37–0.88, p = 0.011, Table B). Recent methamphetamine injection was associated with a 12% to 227% increase in likelihood of assisting initiation (RR: 1.91, 95% CI: 1.12–3.27, p = 0.017). Similarly, recent speedball injection was associated with an effect ranging from a 3% decrease to a 193% increase in likelihood of assisting initiation (RR: 1.68, 95% CI: 0.97–2.93, p = 0.064).

Example 2: Associations Among Adolescent Alcohol Use, Expectancies, and School Connectedness

Using data from a community survey of high school students, we investigated the relationship among drinking expectancies, school connectedness and heavy episodic drinking ( Cummins et al., 2019 ). Student perceptions of acceptance, respect, and support at their schools are reported to be protective against various risky health behaviors, including drinking. We wanted to know if the association was contingent on alcohol expectancies. Alcohol expectancies are cognitions related to the expected outcomes that a person attributes to drinking ( Brown et al., 1987 ). In this study, higher expectancies indicate the respondent expects the outcomes of consuming alcohol to be more rewarding.

Conventional example 2

Figure 1 and Table 2² depict the associations among recent (past 30 days) binge drinking, school connectedness, and alcohol expectancies. The model for recent (past 30 days) binge drinking with school connectedness, party-related alcohol expectancies, and their interaction as independent variables was statistically significant (likelihood ratio χ²(3) = 171, p < 0.0001). Significant moderation was observed (OR = 9.89, SC × pAE interaction: z = 2.64, p = 0.008). The prevalence of binging was significantly higher for students reporting the highest expectancies as compared to those reporting the lowest expectancies when students also reported the highest school connectedness (z = 9.39, p < 0.001). The predicted prevalence of binge drinking was 17.9 times higher among students with the highest expectancies, as compared to those with the lowest expectancies. This same comparison was non-significant where school connectedness was at its lowest (z = 1.84, p = 0.066).


Figure 1. Modeled proportion of high school students engaging in heavy episodic drinking (binging) as a function of school connectedness (SC) and party-related alcohol expectancies (AE). Error bars represent 95% confidence intervals. Low and high school connectedness are at the minimum and maximum observed school connectedness, respectively. Modified from Cummins et al. (2019) .

While this is not an archetypal version of the SOCS style, its primary focus is on the patterns of significance. Some information is presented on the magnitudes of the associations in the text; however, metrics of estimation uncertainty are absent, as is information on the non-significant patterns. Much of the text is redundant with the tables or is unneeded (Cummins, 2009). For example, the first sentence is a simple reference to a figure with no indication of what the authors extracted from their inspection of the figure. The text itself does not help answer the scientific question. The functional associations and their magnitudes are not described, so there is no value added by the presence of the sentence. The latter sentences primarily function to identify statistically significant patterns. A measure of association strength is presented for only one feature of the model, the one that was statistically significant, and the uncertainty of that estimate and of the other features is not described. Finally, it remains unclear how the results answer the substantive question under study. Now, we will rewrite this paragraph in the POCS style.

The first sentence needs to be a direct answer to the substantive question under study. We expected students who reported higher party-related expectancies (i.e., had more positive views of attending parties) would report higher odds of recent binge drinking. Further, we wanted to assess if school connectedness moderated this relationship. For our new first sentence, we propose:

We found that higher levels of alcohol expectancies were associated with greater odds of binge drinking and that this relationship was attenuated among those reporting the lowest level of school connectedness.

Here, we have directly answered our research question. Results of moderation analyses can be challenging to parse, so clearly articulating how the results reflect on the question under study is particularly important. This first sentence helps the reader navigate the more complex statements about contingent effects. We do this by presenting a data visualization and by highlighting, in the text, the relationship between party-related expectancies and binge drinking at the highest and lowest levels of school connectedness, as follows:

For students reporting the highest level of school connectedness, the modeled odds of binge drinking were 30.5 (95% CI: 15.4–60.2) times higher for students with the highest level of alcohol expectancies as compared to those with the lowest. This pattern was attenuated for students reporting the lowest levels of school connectedness (OR = 3.18), such that the 95% confidence interval for the modeled odds ratio ranged from 1.18 to 10.6 (Figure 1).

Here, we have provided the reader two key pieces of information that quantify our initial qualitative statement: first, for students with both the lowest and the highest levels of school connectedness, we are confident there is a positive relationship between alcohol expectancies and binge drinking; and, second, this relationship is attenuated amongst those with the lowest levels of school connectedness, as indicated by the non-overlapping confidence intervals. We have also provided the reference to Figure 1, reducing the initial example’s 20-word directional sentence to two words.
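
To make the moderation arithmetic transparent, the sketch below shows how, in a logistic model with an interaction term, the expectancy odds ratio changes with the level of school connectedness. The coefficients are hypothetical values back-calculated so that the output roughly echoes the odds ratios quoted above; they are not the fitted estimates from Cummins et al. (2019), and the 0-to-1 scaling of connectedness is an assumption for illustration.

```python
# Moderation on the log-odds scale: the odds ratio for an expectancy contrast
# depends on connectedness through the interaction coefficient.
# b_ae and b_int are hypothetical, chosen only to echo the ORs reported in the text.
import numpy as np

b_ae = 1.16   # log-odds for the expectancy contrast at the lowest connectedness
b_int = 2.26  # additional log-odds per unit of connectedness (interaction)

def expectancy_odds_ratio(connectedness):
    """OR for the highest-vs-lowest expectancy contrast at a given connectedness (0-1)."""
    return np.exp(b_ae + b_int * connectedness)

print(f"OR at lowest connectedness:  {expectancy_odds_ratio(0.0):.2f}")  # ~3.2
print(f"OR at highest connectedness: {expectancy_odds_ratio(1.0):.2f}")  # ~30.6
```

In a full analysis, the interval estimates for these contrasts would be obtained from the covariance matrix of the fitted coefficients rather than assumed.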

We also focus on estimation uncertainty by presenting the confidence intervals in the body of the text. We direct the reader to recognize that the lower bound of the confidence interval for the odds ratio was near 1.18 for students with the lowest school connectedness. This can be returned to in the discussion, where it could be pointed out that it is plausible that alcohol expectancies are not strongly associated with binge drinking for these students. Deploying interventions targeting expectancies among these students could be an inefficient use of resources. Thus, obtaining an improved estimate of the effect size could be valuable to practitioners before committing to a rigid plan for deploying intervention resources. Not only should authors present measures of uncertainty (e.g., confidence intervals, credibility intervals, prediction intervals), they should base their interpretations on those intervals.

Finally, we want to reflect on additional patterns in the data. In the SOCS example, we presented the significance of the model fit as well as the significance of the interaction effect. Here, we provide additional information for the reader to gauge the magnitude of the interaction by contrasting the association at the extremes of school connectedness. We note that, in cases where the models are complex, word limits and a disposition toward being concise will force authors to be selective about which features are verbalized. Patterns of lower importance may not be described in the text but should be accessible to the reader through tables and figures.

Here, we also note that the reader should be able to evaluate the authors’ descriptive choices in the text and ensure that those are faithful to the overall patterns. Conversely, the authors’ selections also initially guide the reader through the answers to the study’s questions that are supported by the content of the tables and figures. For reviewers and editors assessing works in the POCS style, it will be important to assess whether the authors’ descriptive choices are faithful to the overall patterns of the results. This requires that authors provide adequate information in their tables and figures for reviewers to make such an assessment.

As a result, our new POCS paragraph reads as follows:

We found that higher levels of alcohol expectancies were associated with greater odds of binge drinking and that there was evidence that the strength of this relationship was contingent on school connectedness, such that it was attenuated among students reporting the lowest level of school connectedness. For students reporting the highest level of school connectedness, the modeled odds of binge drinking were 30.5 (95% CI: 15.4–60.2, Figure 1 and Table 2) times higher for students with the highest level of alcohol expectancies as compared to those with the lowest. This association was attenuated for students reporting the lowest levels of school connectedness (OR = 3.18), such that the interval estimate of the modeled odds ratio ranged from 1.18 to 10.6 (95% CI).

Discussion

We present a communication style that abandons the use of bright-line significance testing. By introducing the POCS style as a formal structure for presenting results, we seek to reduce barriers faced by researchers in their efforts to follow recommendations to abandon the practice of declaring results statistically significant (Amrhein et al., 2019; Wasserstein et al., 2019). The examples provided demonstrate how the adoption of this general approach could help improve the field by shifting its focus during results generation to the simultaneous and integrated consideration of measures of effect and inferential statistics. Reviewers should also recognize that the use of POCS is not an indicator of statistical naivety, but rather of a differing view on traditional approaches; this paper can be a useful resource for explaining POCS to unfamiliar reviewers. Writing results without the word “significant” runs completely counter to the training and experience of most researchers. We hope that these examples will motivate researchers to attempt to draft their results without using or reporting significance tests. Although some researchers may fear that they will be left with a diminished ability to publish, this need not be the case. If the research findings do not stand up when described in terms of the functional associations, perhaps that research is not ready to be published. Indeed, with greater recognition of the replication crisis in the psychological sciences, we should pay more attention to the design features and basic details of the patterns of effects.

Significance testing should not be used to reify a conclusion. Fisher (1935) warned that an “isolated record” of a significant result does not warrant its consideration as a genuine effect. Although we want our individual works to be presented as providing a strong benefit to the field, our confidence that individual reports will hold is often unwarranted. We may benefit from cautiously reserving our conclusions until a strong and multi-faceted body of confirmatory evidence is available. This evidence can be compiled without bright-line significance testing. Improved reporting that presents a full characterization of the functional relationships under study can help facilitate the synthesis of research-generated knowledge into reviews and meta-analyses. It is also consistent with American Psychological Association reporting standards, which promote the reporting of exact p-values along with point and interval estimates of effect size (Appelbaum et al., 2018).

The strongest support for some of our research conclusions has been obtained from Bayesian probabilities based on informative priors (e.g., Cummins et al., 2019). This point serves to highlight the general limitations of focusing on frequentist NHST in scientific research and the benefit of weighing evidence alongside other substantive features, such as the design, explanatory breadth, predictive power, assumptions, and competing alternative models (de Schoot et al., 2011; Trafimow et al., 2018). The POCS is compatible with a more integrated approach to the valuation of research reports, whereas the continued use of bright-line significance testing is not (Trafimow et al., 2018).
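
For readers unfamiliar with how an informative prior enters such a calculation, the sketch below works through a standard normal-normal (conjugate) update for a hypothetical effect on the log relative-risk scale; all of the numbers are invented for illustration and are unrelated to the estimates reported earlier.

```python
# Conjugate normal-normal update: combine an informative prior with an estimate
# and its standard error, then report a posterior probability rather than a verdict.
# All values below are hypothetical.
import numpy as np
from scipy import stats

prior_mean, prior_sd = -0.20, 0.10   # informative prior on a log relative risk
estimate, se = -0.15, 0.12           # point estimate and standard error from the data

posterior_precision = 1 / prior_sd**2 + 1 / se**2
posterior_mean = (prior_mean / prior_sd**2 + estimate / se**2) / posterior_precision
posterior_sd = np.sqrt(1 / posterior_precision)

p_protective = stats.norm.cdf(0, loc=posterior_mean, scale=posterior_sd)
print(f"posterior log RR: {posterior_mean:.2f} (SD {posterior_sd:.2f}); "
      f"P(effect is protective) = {p_protective:.2f}")
```

With an informative prior, the report can state directly how probable a protective effect is, rather than whether a threshold was crossed.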

We suspect that the quality of many papers will increase through the application of POCS. In part, this will be driven by a change in orientation toward the aims of research reports, in which the emphasis on establishing the presence of an association is replaced with an emphasis on estimating the functional form (magnitude, shape, and contingencies) of the relationships under study. The examples from our own work demonstrate that there should be no barrier to drafting papers with POCS. Research based on an integrated examination of all statistical metrics (effect sizes, p-values, error estimates, etc.) should lead to more meaningful and transparent communication and to more robust development of our knowledge base. Research findings should not be simply dichotomized – the quantitative principle that the categorization of a continuous variable always leads to a loss of information also applies to p-values (Altman and Royston, 2006). In this paper, we provide examples of different ways to apply that principle.
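
The information loss from categorization is easy to see in a small simulation. The sketch below (simulated data, arbitrary effect size) dichotomizes a continuous predictor at its median and shows the resulting drop in the observed association, the same loss that occurs when a continuous p-value is collapsed into a significant/non-significant verdict.

```python
# Dichotomizing a continuous variable discards information: the observed
# correlation shrinks when the predictor is split at its median.
import numpy as np

rng = np.random.default_rng(3)
x = rng.normal(size=5000)
y = 0.5 * x + rng.normal(size=5000)

x_split = (x > np.median(x)).astype(float)  # bright-line split of the predictor

r_continuous = np.corrcoef(x, y)[0, 1]
r_dichotomized = np.corrcoef(x_split, y)[0, 1]
print(f"correlation with the continuous predictor:  {r_continuous:.2f}")
print(f"correlation after median dichotomization:   {r_dichotomized:.2f}")
```

The exact amount lost varies with the cut point and the underlying association, but some loss is guaranteed.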

Author Contributions

Both authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

  • ^ This reference is to Table B in the supplement to Marks et al. (2019) .
  • ^ This reference is to Table 2 in the supplement to Cummins et al. (2019) .

Altman, D. G. (1990). Practical Statistics for Medical Research. Boca Raton: CRC Press.


Altman, D. G., and Royston, P. (2006). The cost of dichotomising continuous variables. BMJ 332:1080. doi: 10.1136/bmj.332.7549.1080


Amrhein, V., Greenland, S., and McShane, B. (2019). Scientists rise up against statistical significance. Nature 567, 305–307.

Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., and Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: the APA publications and communications board task force report. Am. Psychol. 73, 3–25. doi: 10.1037/amp0000191

Armstrong, J. S. (2007). Significance tests harm progress in forecasting. Intern. J. Forecast. 23, 321–327.

Baguley, T. (2009). Standardized or simple effect size: what should be reported. Br. J. Psychol. 100, 603–617. doi: 10.1348/000712608X377117


Brown, S. A., Christiansen, B. A., and Goldman, M. S. (1987). The alcohol expectancy questionnaire: an instrument for the assessment of adolescent and adult alcohol expectancies. J. Stud. Alcohol 48, 483–491.

Carver, R. (1978). The case against statistical significance testing. Harvard Educ. Rev. 48, 378–399.

Cumming, G. (2017). Understanding The New Statistics: Effect Sizes, Confidence Intervals, and Meta Analysis. Abingdon: Routledge.

Cummins, K. (2009). Tips on Writing Results For A Scientific Paper. Alexandria: American Statistical Association.

Cummins, K. M., Diep, S. A., and Brown, S. A. (2019). Alcohol expectancies moderate the association between school connectedness and alcohol consumption. J. Sch. Health 89, 865–873. doi: 10.1111/josh.12829

de Schoot, R. V., Hoijtink, H., and Jan-Willem, R. (2011). Moving beyond traditional null hypothesis testing: evaluating expectations directly. Front. Psychol. 2:24. doi: 10.3389/fpsyg.2011.00024

Ellis, P. D. (2010). The Essential Guide to Effect Sizes. Cambridge: Cambridge University Press.

Fisher, S. R. A. (1935). The Design of Experiments. Edinburgh: Oliver and Boyd.

Gelman, A. (2013). P values and statistical practice. Epidemiology 24, 69–72. doi: 10.1097/EDE.0b013e31827886f7

Gelman, A., and Loken, E. (2014). The statistical crisis in science data-dependent analysis—a “garden of forking paths”—explains why many statistically significant comparisons don’t hold up. Am. Sci. 102, 459–462.

Gigerenzer, G. (2018). Statistical rituals: the replication delusion and how we got there. Adv. Methods Pract. Psychol. Sci. 1, 198–218.

Goodman, S. (2008). A dirty dozen: twelve p-value misconceptions. Semin. Hematol. 45, 135–140. doi: 10.1053/j.seminhematol.2008.04.003

Grissom, R. J., and Kim, J. J. (2014). Effect Sizes for Research. Abingdon: Routledge.

Hunter, J. E. (1997). Needed: a ban on the significance test. Psychol. Sci. 8, 3–7.

Kelley, K., and Preacher, K. J. (2012). On effect size. Psychol. Methods 17, 137–152. doi: 10.1037/a0028086

Kirk, R. E. (1996). Practical significance: a concept whose time has come. Educ. Psychol. Measur. 56, 746–759.

Kline, R. B. (2004). Beyond Significance Testing: Reforming Data Analysis Methods in Behavioral Research. Washington, DC: American Psychological Association.

Krantz, D. H. (1999). The null hypothesis testing controversy in psychology. J. Am. Statist. Assoc. 94, 1372–1381.

Marks, C., Borquez, A., Jain, S., Sun, X., Strathdee, S. A., Garfein, R. S., et al. (2019). Opioid agonist treatment scale-up and the initiation of injection drug use: a dynamic modeling analysis. PLoS Med. 16:e1002973. doi: 10.1371/journal.pmed.1002973

Mittal, M. L., Vashishtha, D., Sun, S., Jain, S., Cuevas-Mota, J., Garfein, R., et al. (2017). History of medication-assisted treatment and its association with initiating others into injection drug use in San Diego, CA. Subst. Abuse Treat. Prev. Policy 12:42. doi: 10.1186/s13011-017-0126-1

Murtaugh, P. A. (2014). In defense of P values. Ecology 95, 611–617.

Nosek, B., Aarts, A., Anderson, C., Anderson, J., Kappes, H., Baranski, E., et al. (2015). Estimating the reproducibility of psychological science. Science 349:aac4716.

Schneider, J. W. (2015). Null hypothesis significance tests. A mix-up of two different theories: the basis for widespread confusion and numerous misinterpretations. Scientometrics 102, 411–432. doi: 10.1007/s11192-014-1251-5

Szucs, D., and Ioannidis, J. P. A. (2017). When null hypothesis significance testing is unsuitable for research: a reassessment. Front. Hum. Neurosci. 11:390. doi: 10.3389/fnhum.2017.00390

Trafimow, D., Amrhein, V., Areshenkoff, C. N., Barrera-Causil, C. J., Beh, E. J., Bilgiç, Y. K., et al. (2018). Manipulating the alpha level cannot cure significance testing. Front. Psychol. 9:699. doi: 10.3389/fpsyg.2018.00699

Wasserstein, R. (2016). American Statistical Association Releases Statement On Statistical Significance And P-Values: Provides Principles To Improve The Conduct And Interpretation Of Quantitative Science. Washington, DC: American Statistical Association.

Wasserstein, R. L., Schirm, A. L., and Lazar, N. A. (2019). Moving to a World Beyond “p < 0.05”. Am. Statist. 73, 1–19. doi: 10.1080/00031305.2019.1583913

Keywords : scientific communication, statistical significance, null hypothesis significance testing, confidence intervals, bright-line testing

Citation: Cummins KM and Marks C (2020) Farewell to Bright-Line: A Guide to Reporting Quantitative Results Without the S-Word. Front. Psychol. 11:815. doi: 10.3389/fpsyg.2020.00815

Received: 07 January 2020; Accepted: 02 April 2020; Published: 13 May 2020.


Copyright © 2020 Cummins and Marks. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY) . The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Kevin M. Cummins, [email protected]

Disclaimer: All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article or claim that may be made by its manufacturer is not guaranteed or endorsed by the publisher.

Branches of Psychology


Psychology is a science in which behavioral and other evidence is used to understand the mind and behavior of humans (Eysenck, 2004).

It encompasses various aspects of human behavior, such as thought, emotions, cognition, personality, social behavior, and brain function.

Throughout the years, people have used multiple research methods to understand this complicated subject and have divided it into various branches to better study human behavior. This article covers 22 of these branches.


To better categorize the branches of psychology, this article divides them into basic and applied psychology (Guilford & Anastasi, 1950).
  • Basic psychology, or theoretical psychology, aims to extend and improve human knowledge: it seeks to discover or establish instances of universal similarity and to trace their origin or development in order to explain their causal connections.
  • Applied psychology, or practical psychology, aims to extend and improve the conditions and phases of human life and conduct. Its goal is to analyze responses and situations and to create interventions that address the real-life concerns and challenges individuals face (Thomas, 2022).

However, it is crucial to remember that these classifications can vary, and many psychologists work in both basic and applied psychology over the course of their careers.

Moreover, the basic and applied fields of psychology complement one another and together help form a deeper understanding of the human mind and behavior.

Basic Psychology

Basic psychology seeks to understand the fundamental principles of behavior and mind, focusing on generating knowledge. It seeks to answer the question “Why does this occur?”.

1. Biological psychology

The origins of biological psychology were greatly influenced by “On the Origin of Species,” written by Charles Darwin, whose views on evolution profoundly impacted the psychological world.

Psychologists started analyzing the role of heredity in influencing human behavior. Biological psychology focuses on studying physiological processes within the human body, often including the study of neurotransmitters (chemical messengers that transmit messages from one neuron to the next) and hormones.

One approach used in biological psychology is studying how different sets of genes influence behavior, personality, and intelligence.

Twin studies – comparing monozygotic twins, who share an identical set of genes, with dizygotic twins, who on average share about half of their genes – are often used for this purpose.

The biological branch of psychology is especially important because the processes it studies are involved in virtually all human behavior (Eysenck, 2004).

2. Abnormal psychology

Abnormal psychology is also known as psychopathology. It focuses on understanding the causes, nature, and treatment of mental disorders and helps produce effective therapies for patients with conditions such as major depressive disorder, autism, and schizophrenia (Hooley et al., 2019).

This is considered basic psychology because understanding the etiology of a mental health condition can help applied psychologists develop effective interventions.

In the past, people who deviated from the norm were subjected to brutal treatments such as trephination and exorcism, intended to “drive away” the evil spirits believed to possess them.

Patients were confined in asylums, and many people with mental illness died tragically. The development of psychoanalysis, CBT, and mindfulness-based therapies has led to improvements in treatment and reduced the pain and suffering of patients (Eysenck, 2004).

3. Cognitive Psychology

The study of cognitive psychology focuses on the process of thinking. It includes areas such as memory, problem-solving, learning, attention, and language.

This approach to psychology is important because cognitive processes are tied to multiple other branches of psychology, such as social, abnormal, and developmental psychology.

Theoretical topics of cognitive psychology would include the study of memory, perception, attention, etc.

Moreover, the insights provided by cognitive psychologists have had a profound impact on practical applications across diverse fields; professionals in other domains have used this knowledge to design things such as computer and gaming systems that are easy for humans to use.

The study of cognitive psychology has also benefited the well-being of many patients who experienced brain damage, helping them regain some of their lost cognitive skills through treatment (Eysenck, 2004).

4. Developmental Psychology

Developmental psychology studies how individuals change over time, especially during the childhood period.

Psychologists have spent decades studying how people’s thinking processes and behavior develop during childhood, and how childhood experiences may impact adult behavior.

It includes topics such as the physical, cognitive, emotional, and social changes that occur from infancy to elderly life.

This branch is essential for helping children develop good social skills and address developmental challenges, as well as for understanding the factors that influence adult behavior (Eysenck, 2004).

5. Behavioral Psychology

Behaviorism , also known as behavioral learning theory, is a theoretical perspective in psychology that emphasizes the role of learning and observable behaviors in understanding human and animal actions.

Behaviorism is a theory of learning that states all behaviors are learned through conditioned interaction with the environment. Thus, behavior is simply a response to environmental stimuli.

Behaviorist theory is concerned only with observable stimulus-response behaviors, as these can be studied in a systematic and objective manner.

Some of the key figures of the behaviorist approach include B.F. Skinner, known for his work on operant conditioning, and John B. Watson, who established the psychological school of behaviorism.

6. Social Psychology

Social psychology covers numerous topics that focus on society as a whole, including the observation and study of social behavior and intergroup relations.

Topics include the study of attitudes, social influence, prejudice, and related phenomena. Humans are social animals who constantly interact with one another.

Every day we encounter different people and must use social knowledge to make sense of the groups we live in and to make decisions.

Social psychology is thus used to reveal the many biases and misconceptions humans have, as well as how people’s behaviors are deeply influenced by one another (Eysenck, 2004).

7. Comparative Psychology

Comparative psychology is the study of similarities and differences between humans and animals. It is said to have originated with George Romanes, a British psychologist who wrote “Animal Intelligence” in 1882.

In short, it compares humans with other animals, including both qualitative and quantitative observations. 

Animal models are especially useful for studying behaviors and psychological phenomena that may otherwise be challenging or unethical to study in humans.

Animal models provide researchers with a way to examine the effects of stress, addiction, learning, and memory on behavior.

Since animals may function in ways similar to humans, psychologists study animals and draw analogies with human beings to arrive at conclusions (Greenberg, 2012). Famous studies in comparative psychology include Harlow’s monkeys and Lorenz’s geese.

8. Experimental Psychology

Experimental psychology encompasses the scientific research methods that other branches of psychology rely on. Psychologists use a scientific approach to understanding behavior, and their findings are based on scientific evidence accumulated through research.

Core topics include the different types of research methods (qualitative versus quantitative), data analysis, correlation versus causation, and hypothesis testing.

Experimental psychology is generally said to have emerged as a branch of psychology in the 19th century, led by Wilhelm Wundt, who introduced mathematical expressions and calculations into the field (Boring, 1950).

This is important because humans’ commonsense beliefs about behavior are unreliable, and the explanations derived from them can be misleading.

Experimental psychology helps ensure that psychological conclusions are accurate and generalizable (Myers & Hansen, 2011).

9. Child Psychology

Child psychology examines the developmental changes within the different domains of child development (Hetherington et al., 1999).

The field is often traced to Jean Piaget, a Swiss psychologist whose research focused on child development, including how children think, acquire knowledge, and interact with those around them (Schwartz, 1972).

It primarily focuses on the understanding of children’s emotional, cognitive, social, and behavioral development during these early stages of life.

Child psychologists also assess and diagnose developmental disorders or psychological conditions that may affect children and provide interventions and treatments to support children’s psychological well-being and development.

10. Cross-Cultural Psychology

Cross-Cultural Psychology is a branch of psychology that studies how cultural factors influence human behavior, thoughts, and emotions.

It examines psychological differences and similarities across various cultures and ethnic groups, aiming to understand how cultural context can shape individual and group psychological processes.

11. Personality Psychology

The personality branch of psychology studies the patterns of thoughts, feelings, and behaviors that make an individual unique. It explores the traits and characteristics that define a person’s temperament, interactions, and consistent behaviors across various situations.

Theories in this field range from Freud’s psychoanalytic perspective to trait theories and humanistic approaches.

The goal is to understand the complexities of individual differences, the factors that shape personality development, and how personality influences life outcomes.

The Big Five and Myers-Briggs are both models used to describe and measure personality traits, but they differ significantly in their origins, components, and scientific validation:

  • The Big Five : Often referred to as the Five-Factor Model, it identifies five broad dimensions of personality: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism (often remembered by the acronym OCEAN). This model is backed by extensive empirical research and is widely accepted in the academic community.
  • Myers-Briggs Type Indicator (MBTI) : This typological approach categorizes individuals into 16 personality types based on four dichotomies: Extraversion/Introversion, Sensing/Intuition, Thinking/Feeling, and Judging/Perceiving.

Applied psychology

Applied psychology takes these foundational understandings and applies them to solve real-world problems, enhancing well-being and performance in various settings. Applied psychology asks “How can we use this knowledge practically?”.

12. Clinical Psychology

The American Psychological Association defines clinical psychology as a field of psychology that “integrates science, theory, and practice to understand, predict, and alleviate maladjustment, disability, and discomfort as well as to promote human adaptation, adjustment, and personal development” (American Psychological Association, Division 12, 2012).

In simplified terms, it is a branch of psychology focusing on understanding mental illness and looking for the best ways of providing care for individuals, families, and groups.

Through strong research, clinical psychologists offer ongoing and comprehensive mental and behavioral healthcare for individuals, couples, families, and all sorts of different groups.

They also provide consultation services to agencies and communities, as well as training, education, supervision, and research-based practice.

The work of a clinical psychologist thus includes assessing patients’ health status, performing psychotherapy, teaching, researching, consulting, and more (Kramer et al., 2019). The branch also encompasses understanding the diagnostic criteria for mental illnesses.

13. Educational Psychology

Educational psychology is a branch of psychology that focuses on understanding how people learn and develop in educational settings.

It uses psychological science to enhance the learning process of students and applies psychological findings to promote educational success in a classroom (Lindgren & Suter, 1967).

Educational psychologists do things such as improve teaching techniques, develop specialized learning materials, and monitor educational outcomes.

Psychologists have argued that education is a tripolar process involving interaction between the educator, the student, and the social environment, with the educator and the student each understanding themselves and acting accordingly in an educational or social setting.

The social environment, in turn, subtly influences both sides. Educational psychology aims to understand and enhance education through the study of these interactions (Aggarwal, 2010).

Under this branch of psychology, there is also school psychology. School psychology is defined as the general practice of psychology involving learners of all ages, and the process of schooling.

For instance, psychological assessment, intervention, and prevention, as well as mental health promotion programs focusing on the developmental processes of youth within the school system, are all considered part of school psychology.

School psychologists assess the school environment to promote positive learning outcomes and to ensure the healthy psychological development of students.

Beyond students, they also support the families, teachers, and other professionals who work to support those students (Merrell et al., 2011).

14. Counseling Psychology

Counseling psychology is defined as a holistic healthcare specialty that employs diverse information (such as culture) and methodologies (such as motivational interviewing and cognitive behavioral therapy) to enhance an individual’s mental well-being and reduce maladaptive behaviors.

The aim is to help individuals cope with various life challenges, emotions, mental health symptoms, etc. People who seek counseling explore their thoughts and behaviors with a counselor and seek to improve their wellbeing and develop coping strategies.

Counseling psychologists work with people of all cultural backgrounds, and conduct activities such as crisis intervention, trauma management, diagnosing mental disorders, treatment evaluation, and consulting (Oetting, 1967).

15. Forensic Psychology

Forensic psychology is the branch of applied psychology that focuses on collecting, examining, and presenting evidence for judicial purposes.

Psychologists in this branch work on court cases, assessing behavioral problems and psychological disorders in criminal profiles, determining the mental status of defendants, and evaluating whether compensation should be awarded for “psychological damage.”

Researchers in this branch also investigate false confessions and psychological vulnerabilities among criminal suspects, together with methods of reducing societal bias and improving police practice (Gudjonsson & Haward, 2016).

16. Health Psychology

Health psychology has been widely defined using Matarazzo’s definition, summarized as any activity of psychology that involves any aspect of health or the healthcare system.

It includes areas ranging from prevention to treatment of illness and the analysis and diagnosis of health-related dysfunctions. It often promotes healthy lifestyles and disease prevention.

Health psychology emphasizes the health system and how patients, as well as psychologists, interact within that system, with the goal of optimizing both communication and treatment.

Health psychologists examine problems such as the cost of healthcare, chronic psychological illness, and political and economic needs (Feuerstein et al., 1986).

This differs from clinical psychology, which concentrates on diagnosing and treating mental, emotional, and behavioral disorders and on providing therapy and strategies for managing mental health issues.

17. Sports Psychology

Sports psychology is generally seen as a field in which principles of psychology are applied to sport. Historically, its definition has been approached from several different perspectives.

For example, when viewed as a branch of psychology, the field focuses on understanding how psychological theories apply to sport.

When viewed instead as a subdiscipline of sport science, it focuses on explaining and enhancing behavior in sporting contexts (Horn, 2008).

Overall, sports psychology is crucial for optimizing athletes’ performance by addressing the emotional and physical aspects of sport, such as arousal regulation, pre-performance routines, and goal-setting techniques.

18. Community Psychology

Community psychology focuses on understanding human behavior within the context of a community.

It requires a shift in perspective: the aim is to prevent problems from occurring and to promote healthy functioning in the community, rather than to treat individual problems after they arise.

Research in this branch examines factors at the neighborhood and community level that either enhance or impede a community’s psychological health, rather than the internal psychological processes of individuals (Kloos et al., 2012).

19. Industrial-Organizational Psychology

Industrial-organizational psychology is the branch of psychology that studies how psychological principles apply to people operating within business and industry.

The definition has broadened over time to include a wider range of work-related topics, and the field now examines interactions between people and institutions.

Psychologists in this field focus on enhancing productivity, staff well-being, and organizational performance.

They use scientific methods and concepts to address workplace issues such as recruitment, training, appraisal, motivation, job satisfaction, and workplace culture (Forces, 2003).

20. Family Psychology

Family psychology is considered a clinical science that seeks to understand families and the individuals within them while improving family well-being.

Its research ranges from biological systems, such as how genetics contributes to mental disorders, to broader questions of how community resources can be used to strengthen family relationships.

By understanding family dynamics and supporting family relationships, this branch of psychology helps families navigate the complexities that arise between members, promoting healthy relationships and overall well-being (Pinsof & Lebow, 2005).

21. Media Psychology

Media psychology refers to the application of psychology to the use and production of media.

It includes areas such as making new technologies more user-friendly, using media to enhance clinical psychology, and studying how media may contribute to sociological and psychological phenomena in society (Luskin & Friedland, 1998).

The field has also evolved to include emerging technologies and applications such as interactive media, the internet, and video games.

22. Environmental Psychology

Environmental psychology looks at the interrelationship between human behavior and environments. It examines how people’s behavior, emotions, and well-being are influenced by natural and built surroundings, such as homes, workplaces, urban spaces, and natural landscapes.

Drawing on psychological models, environmental psychologists work to protect, manage, and design environments that support human behavior, and to diagnose problems that arise in the process.

Some common topics have included the effects of environmental stress on human performance and how humans process information in an unfamiliar environment (De Young, 1999).

Aggarwal, J. C. (2010). Essentials of educational psychology. Vikas Publishing House.

Boring, E. G. (1950). A history of experimental psychology (2nd ed.). Prentice-Hall.

De Young, R. (1999). Environmental psychology.

Eysenck, M. W. (2004).  Psychology: An international perspective . Taylor & Francis.

Feuerstein, M., Labbé, E. E., & Kuczmierczyk, A. R. (1986). Health psychology: A psychobiological perspective. Springer US.

Forces, I. (2003). Industrial-organizational psychology. Handbook of psychology: Volume 1, History of psychology, 367.

Greenberg, G. (2012). Comparative psychology and ethology. In N. M. Seele (Ed.), Encyclopedia of the sciences of learning (pp. 658–661). New York: Springer.

Gudjonsson, G. H., & Haward, L. R. (2016).  Forensic psychology: A guide to practice . Routledge.

Guilford, J. P., & Anastasi, A. (1950).  Fields of psychology, basic and applied  (2d ed.). Van Nostrand.

Hetherington, E. M., Parke, R. D., & Locke, V. O. (1999).  Child psychology: A contemporary viewpoint  (5th ed.). McGraw-Hill.

Hooley, J. M., Nock, M., & Butcher, J. N. (2019). Abnormal psychology (18th ed.). Pearson. 

Horn, T. S. (2008). Advances in sport psychology. Human Kinetics.

Kloos, B., Hill, J., Thomas, E., Wandersman, A., Elias, M. J., & Dalton, J. H. (2012). Community psychology. Belmont, CA: Cengage Learning.

Kramer, G. P., Bernstein, D. A., & Phares, V. (2019).  Introduction to clinical psychology . Cambridge University Press.

Lindgren, H. C., & Suter, W. N. (1967).  Educational psychology in the classroom  (Vol. 956). New York: Wiley.

Luskin, B. J., & Friedland, L. (1998). Task force report: Media psychology and new technologies. Washington, DC: Media Psychology Division 46 of the American Psychological Association.

Malpass, R. S. (1977). Theory and method in cross-cultural psychology.  American Psychologist, 32(12), 1069–1079.  https://doi.org/10.1037/0003-066X.32.12.1069

Merrell, K. W., Ervin, R. A., & Peacock, G. G. (2011).  School psychology for the 21st century: Foundations and practices . Guilford Press.

Myers, A., & Hansen, C. H. (2011).  Experimental psychology . Cengage Learning. 

Oetting, E. R. (1967). Developmental definition of counseling psychology.  Journal of Counseling Psychology, 14 (4), 382–385.  https://doi.org/10.1037/h0024747

Pinsof, W. M., & Lebow, J. L. (Eds.). (2005).  Family psychology: The art of the science . Oxford University Press.

Schwartz, E. (1972). The psychology of the child. Jean Piaget and Bärbel Inhelder. New York: Basic Books, 1969. 159 pp. Psychoanalytic Review, 59(3), 477–479.

Society of Clinical Psychology. (2012). In American Psychological Association (Vol. 12).

Thomas, R. K. (2022). “Pure” versus “applied” psychology: An historical conflict between Edward B. Titchener (pure) and Ludwig R. Geissler (applied). The Psychological Record, 72(1), 131–143. https://doi.org/10.1007/s40732-021-00460-3
