11.2 Writing a Research Report in American Psychological Association (APA) Style

Learning objectives.

  • Identify the major sections of an APA-style research report and the basic contents of each section.
  • Plan and write an effective APA-style research report.

In this section, we look at how to write an APA-style empirical research report , an article that presents the results of one or more new studies. Recall that the standard sections of an empirical research report provide a kind of outline. Here we consider each of these sections in detail, including what information it contains, how that information is formatted and organized, and tips for writing each section. At the end of this section is a sample APA-style research report that illustrates many of these principles.

Sections of a Research Report

Title page and abstract.

An APA-style research report begins with a  title page . The title is centered in the upper half of the page, with each important word capitalized. The title should clearly and concisely (in about 12 words or fewer) communicate the primary variables and research questions. This sometimes requires a main title followed by a subtitle that elaborates on the main title, in which case the main title and subtitle are separated by a colon. Here are some titles from recent issues of professional journals published by the American Psychological Association.

  • Sex Differences in Coping Styles and Implications for Depressed Mood
  • Effects of Aging and Divided Attention on Memory for Items and Their Contexts
  • Computer-Assisted Cognitive Behavioral Therapy for Child Anxiety: Results of a Randomized Clinical Trial
  • Virtual Driving and Risk Taking: Do Racing Games Increase Risk-Taking Cognitions, Affect, and Behavior?

Below the title are the authors’ names and, on the next line, their institutional affiliation—the university or other institution where the authors worked when they conducted the research. As we have already seen, the authors are listed in an order that reflects their contribution to the research. When multiple authors have made equal contributions to the research, they often list their names alphabetically or in a randomly determined order.

It’s  Soooo  Cute!  How Informal Should an Article Title Be?

In some areas of psychology, the titles of many empirical research reports are informal in a way that is perhaps best described as “cute.” They usually take the form of a play on words or a well-known expression that relates to the topic under study. Here are some examples from recent issues of the Journal Psychological Science .

  • “Smells Like Clean Spirit: Nonconscious Effects of Scent on Cognition and Behavior”
  • “Time Crawls: The Temporal Resolution of Infants’ Visual Attention”
  • “Scent of a Woman: Men’s Testosterone Responses to Olfactory Ovulation Cues”
  • “Apocalypse Soon?: Dire Messages Reduce Belief in Global Warming by Contradicting Just-World Beliefs”
  • “Serial vs. Parallel Processing: Sometimes They Look Like Tweedledum and Tweedledee but They Can (and Should) Be Distinguished”
  • “How Do I Love Thee? Let Me Count the Words: The Social Effects of Expressive Writing”

Individual researchers differ quite a bit in their preference for such titles. Some use them regularly, while others never use them. What might be some of the pros and cons of using cute article titles?

For articles that are being submitted for publication, the title page also includes an author note that lists the authors’ full institutional affiliations, any acknowledgments the authors wish to make to agencies that funded the research or to colleagues who commented on it, and contact information for the authors. For student papers that are not being submitted for publication—including theses—author notes are generally not necessary.

The  abstract  is a summary of the study. It is the second page of the manuscript and is headed with the word  Abstract . The first line is not indented. The abstract presents the research question, a summary of the method, the basic results, and the most important conclusions. Because the abstract is usually limited to about 200 words, it can be a challenge to write a good one.

Introduction

The  introduction  begins on the third page of the manuscript. The heading at the top of this page is the full title of the manuscript, with each important word capitalized as on the title page. The introduction includes three distinct subsections, although these are typically not identified by separate headings. The opening introduces the research question and explains why it is interesting, the literature review discusses relevant previous research, and the closing restates the research question and comments on the method used to answer it.

The Opening

The  opening , which is usually a paragraph or two in length, introduces the research question and explains why it is interesting. To capture the reader’s attention, researcher Daryl Bem recommends starting with general observations about the topic under study, expressed in ordinary language (not technical jargon)—observations that are about people and their behavior (not about researchers or their research; Bem, 2003 [1] ). Concrete examples are often very useful here. According to Bem, this would be a poor way to begin a research report:

Festinger’s theory of cognitive dissonance received a great deal of attention during the latter part of the 20th century (p. 191)

The following would be much better:

The individual who holds two beliefs that are inconsistent with one another may feel uncomfortable. For example, the person who knows that he or she enjoys smoking but believes it to be unhealthy may experience discomfort arising from the inconsistency or disharmony between these two thoughts or cognitions. This feeling of discomfort was called cognitive dissonance by social psychologist Leon Festinger (1957), who suggested that individuals will be motivated to remove this dissonance in whatever way they can (p. 191).

After capturing the reader’s attention, the opening should go on to introduce the research question and explain why it is interesting. Will the answer fill a gap in the literature? Will it provide a test of an important theory? Does it have practical implications? Giving readers a clear sense of what the research is about and why they should care about it will motivate them to continue reading the literature review—and will help them make sense of it.

Breaking the Rules

Researcher Larry Jacoby reported several studies showing that a word that people see or hear repeatedly can seem more familiar even when they do not recall the repetitions—and that this tendency is especially pronounced among older adults. He opened his article with the following humorous anecdote:

A friend whose mother is suffering symptoms of Alzheimer’s disease (AD) tells the story of taking her mother to visit a nursing home, preliminary to her mother’s moving there. During an orientation meeting at the nursing home, the rules and regulations were explained, one of which regarded the dining room. The dining room was described as similar to a fine restaurant except that tipping was not required. The absence of tipping was a central theme in the orientation lecture, mentioned frequently to emphasize the quality of care along with the advantages of having paid in advance. At the end of the meeting, the friend’s mother was asked whether she had any questions. She replied that she only had one question: “Should I tip?” (Jacoby, 1999, p. 3)

Although both humor and personal anecdotes are generally discouraged in APA-style writing, this example is a highly effective way to start because it both engages the reader and provides an excellent real-world example of the topic under study.

The Literature Review

Immediately after the opening comes the  literature review , which describes relevant previous research on the topic and can be anywhere from several paragraphs to several pages in length. However, the literature review is not simply a list of past studies. Instead, it constitutes a kind of argument for why the research question is worth addressing. By the end of the literature review, readers should be convinced that the research question makes sense and that the present study is a logical next step in the ongoing research process.

Like any effective argument, the literature review must have some kind of structure. For example, it might begin by describing a phenomenon in a general way along with several studies that demonstrate it, then describing two or more competing theories of the phenomenon, and finally presenting a hypothesis to test one or more of the theories. Or it might describe one phenomenon, then describe another phenomenon that seems inconsistent with the first one, then propose a theory that resolves the inconsistency, and finally present a hypothesis to test that theory. In applied research, it might describe a phenomenon or theory, then describe how that phenomenon or theory applies to some important real-world situation, and finally suggest a way to test whether it does, in fact, apply to that situation.

Looking at the literature review in this way emphasizes a few things. First, it is extremely important to start with an outline of the main points that you want to make, organized in the order that you want to make them. The basic structure of your argument, then, should be apparent from the outline itself. Second, it is important to emphasize the structure of your argument in your writing. One way to do this is to begin the literature review by summarizing your argument even before you begin to make it. “In this article, I will describe two apparently contradictory phenomena, present a new theory that has the potential to resolve the apparent contradiction, and finally present a novel hypothesis to test the theory.” Another way is to open each paragraph with a sentence that summarizes the main point of the paragraph and links it to the preceding points. These opening sentences provide the “transitions” that many beginning researchers have difficulty with. Instead of beginning a paragraph by launching into a description of a previous study, such as “Williams (2004) found that…,” it is better to start by indicating something about why you are describing this particular study. Here are some simple examples:

Another example of this phenomenon comes from the work of Williams (2004).

Williams (2004) offers one explanation of this phenomenon.

An alternative perspective has been provided by Williams (2004).

We used a method based on the one used by Williams (2004).

Finally, remember that your goal is to construct an argument for why your research question is interesting and worth addressing—not necessarily why your favorite answer to it is correct. In other words, your literature review must be balanced. If you want to emphasize the generality of a phenomenon, then of course you should discuss various studies that have demonstrated it. However, if there are other studies that have failed to demonstrate it, you should discuss them too. Or if you are proposing a new theory, then of course you should discuss findings that are consistent with that theory. However, if there are other findings that are inconsistent with it, again, you should discuss them too. It is acceptable to argue that the  balance  of the research supports the existence of a phenomenon or is consistent with a theory (and that is usually the best that researchers in psychology can hope for), but it is not acceptable to  ignore contradictory evidence. Besides, a large part of what makes a research question interesting is uncertainty about its answer.

The Closing

The  closing  of the introduction—typically the final paragraph or two—usually includes two important elements. The first is a clear statement of the main research question and hypothesis. This statement tends to be more formal and precise than in the opening and is often expressed in terms of operational definitions of the key variables. The second is a brief overview of the method and some comment on its appropriateness. Here, for example, is how Darley and Latané (1968) [2] concluded the introduction to their classic article on the bystander effect:

These considerations lead to the hypothesis that the more bystanders to an emergency, the less likely, or the more slowly, any one bystander will intervene to provide aid. To test this proposition it would be necessary to create a situation in which a realistic “emergency” could plausibly occur. Each subject should also be blocked from communicating with others to prevent his getting information about their behavior during the emergency. Finally, the experimental situation should allow for the assessment of the speed and frequency of the subjects’ reaction to the emergency. The experiment reported below attempted to fulfill these conditions. (p. 378)

Thus the introduction leads smoothly into the next major section of the article—the method section.

The  method section  is where you describe how you conducted your study. An important principle for writing a method section is that it should be clear and detailed enough that other researchers could replicate the study by following your “recipe.” This means that it must describe all the important elements of the study—basic demographic characteristics of the participants, how they were recruited, whether they were randomly assigned to conditions, how the variables were manipulated or measured, how counterbalancing was accomplished, and so on. At the same time, it should avoid irrelevant details such as the fact that the study was conducted in Classroom 37B of the Industrial Technology Building or that the questionnaire was double-sided and completed using pencils.

The method section begins immediately after the introduction ends with the heading “Method” (not “Methods”) centered on the page. Immediately after this is the subheading “Participants,” left justified and in italics. The participants subsection indicates how many participants there were, the number of women and men, some indication of their age, other demographics that may be relevant to the study, and how they were recruited, including any incentives given for participation.

Figure 11.1 Three Ways of Organizing an APA-Style Method

Figure 11.1 Three Ways of Organizing an APA-Style Method

After the participants section, the structure can vary a bit. Figure 11.1 shows three common approaches. In the first, the participants section is followed by a design and procedure subsection, which describes the rest of the method. This works well for methods that are relatively simple and can be described adequately in a few paragraphs. In the second approach, the participants section is followed by separate design and procedure subsections. This works well when both the design and the procedure are relatively complicated and each requires multiple paragraphs.

What is the difference between design and procedure? The design of a study is its overall structure. What were the independent and dependent variables? Was the independent variable manipulated, and if so, was it manipulated between or within subjects? How were the variables operationally defined? The procedure is how the study was carried out. It often works well to describe the procedure in terms of what the participants did rather than what the researchers did. For example, the participants gave their informed consent, read a set of instructions, completed a block of four practice trials, completed a block of 20 test trials, completed two questionnaires, and were debriefed and excused.

In the third basic way to organize a method section, the participants subsection is followed by a materials subsection before the design and procedure subsections. This works well when there are complicated materials to describe. This might mean multiple questionnaires, written vignettes that participants read and respond to, perceptual stimuli, and so on. The heading of this subsection can be modified to reflect its content. Instead of “Materials,” it can be “Questionnaires,” “Stimuli,” and so on. The materials subsection is also a good place to refer to the reliability and/or validity of the measures. This is where you would present test-retest correlations, Cronbach’s α, or other statistics to show that the measures are consistent across time and across items and that they accurately measure what they are intended to measure.

The  results section  is where you present the main results of the study, including the results of the statistical analyses. Although it does not include the raw data—individual participants’ responses or scores—researchers should save their raw data and make them available to other researchers who request them. Several journals now encourage the open sharing of raw data online.

Although there are no standard subsections, it is still important for the results section to be logically organized. Typically it begins with certain preliminary issues. One is whether any participants or responses were excluded from the analyses and why. The rationale for excluding data should be described clearly so that other researchers can decide whether it is appropriate. A second preliminary issue is how multiple responses were combined to produce the primary variables in the analyses. For example, if participants rated the attractiveness of 20 stimulus people, you might have to explain that you began by computing the mean attractiveness rating for each participant. Or if they recalled as many items as they could from study list of 20 words, did you count the number correctly recalled, compute the percentage correctly recalled, or perhaps compute the number correct minus the number incorrect? A final preliminary issue is whether the manipulation was successful. This is where you would report the results of any manipulation checks.

The results section should then tackle the primary research questions, one at a time. Again, there should be a clear organization. One approach would be to answer the most general questions and then proceed to answer more specific ones. Another would be to answer the main question first and then to answer secondary ones. Regardless, Bem (2003) [3] suggests the following basic structure for discussing each new result:

  • Remind the reader of the research question.
  • Give the answer to the research question in words.
  • Present the relevant statistics.
  • Qualify the answer if necessary.
  • Summarize the result.

Notice that only Step 3 necessarily involves numbers. The rest of the steps involve presenting the research question and the answer to it in words. In fact, the basic results should be clear even to a reader who skips over the numbers.

The  discussion  is the last major section of the research report. Discussions usually consist of some combination of the following elements:

  • Summary of the research
  • Theoretical implications
  • Practical implications
  • Limitations
  • Suggestions for future research

The discussion typically begins with a summary of the study that provides a clear answer to the research question. In a short report with a single study, this might require no more than a sentence. In a longer report with multiple studies, it might require a paragraph or even two. The summary is often followed by a discussion of the theoretical implications of the research. Do the results provide support for any existing theories? If not, how  can  they be explained? Although you do not have to provide a definitive explanation or detailed theory for your results, you at least need to outline one or more possible explanations. In applied research—and often in basic research—there is also some discussion of the practical implications of the research. How can the results be used, and by whom, to accomplish some real-world goal?

The theoretical and practical implications are often followed by a discussion of the study’s limitations. Perhaps there are problems with its internal or external validity. Perhaps the manipulation was not very effective or the measures not very reliable. Perhaps there is some evidence that participants did not fully understand their task or that they were suspicious of the intent of the researchers. Now is the time to discuss these issues and how they might have affected the results. But do not overdo it. All studies have limitations, and most readers will understand that a different sample or different measures might have produced different results. Unless there is good reason to think they  would have, however, there is no reason to mention these routine issues. Instead, pick two or three limitations that seem like they could have influenced the results, explain how they could have influenced the results, and suggest ways to deal with them.

Most discussions end with some suggestions for future research. If the study did not satisfactorily answer the original research question, what will it take to do so? What  new  research questions has the study raised? This part of the discussion, however, is not just a list of new questions. It is a discussion of two or three of the most important unresolved issues. This means identifying and clarifying each question, suggesting some alternative answers, and even suggesting ways they could be studied.

Finally, some researchers are quite good at ending their articles with a sweeping or thought-provoking conclusion. Darley and Latané (1968) [4] , for example, ended their article on the bystander effect by discussing the idea that whether people help others may depend more on the situation than on their personalities. Their final sentence is, “If people understand the situational forces that can make them hesitate to intervene, they may better overcome them” (p. 383). However, this kind of ending can be difficult to pull off. It can sound overreaching or just banal and end up detracting from the overall impact of the article. It is often better simply to end by returning to the problem or issue introduced in your opening paragraph and clearly stating how your research has addressed that issue or problem.

The references section begins on a new page with the heading “References” centered at the top of the page. All references cited in the text are then listed in the format presented earlier. They are listed alphabetically by the last name of the first author. If two sources have the same first author, they are listed alphabetically by the last name of the second author. If all the authors are the same, then they are listed chronologically by the year of publication. Everything in the reference list is double-spaced both within and between references.

Appendices, Tables, and Figures

Appendices, tables, and figures come after the references. An  appendix  is appropriate for supplemental material that would interrupt the flow of the research report if it were presented within any of the major sections. An appendix could be used to present lists of stimulus words, questionnaire items, detailed descriptions of special equipment or unusual statistical analyses, or references to the studies that are included in a meta-analysis. Each appendix begins on a new page. If there is only one, the heading is “Appendix,” centered at the top of the page. If there is more than one, the headings are “Appendix A,” “Appendix B,” and so on, and they appear in the order they were first mentioned in the text of the report.

After any appendices come tables and then figures. Tables and figures are both used to present results. Figures can also be used to display graphs, illustrate theories (e.g., in the form of a flowchart), display stimuli, outline procedures, and present many other kinds of information. Each table and figure appears on its own page. Tables are numbered in the order that they are first mentioned in the text (“Table 1,” “Table 2,” and so on). Figures are numbered the same way (“Figure 1,” “Figure 2,” and so on). A brief explanatory title, with the important words capitalized, appears above each table. Each figure is given a brief explanatory caption, where (aside from proper nouns or names) only the first word of each sentence is capitalized. More details on preparing APA-style tables and figures are presented later in the book.

Sample APA-Style Research Report

Figures 11.2, 11.3, 11.4, and 11.5 show some sample pages from an APA-style empirical research report originally written by undergraduate student Tomoe Suyama at California State University, Fresno. The main purpose of these figures is to illustrate the basic organization and formatting of an APA-style empirical research report, although many high-level and low-level style conventions can be seen here too.

Figure 11.2 Title Page and Abstract. This student paper does not include the author note on the title page. The abstract appears on its own page.

Figure 11.2 Title Page and Abstract. This student paper does not include the author note on the title page. The abstract appears on its own page.

Figure 11.3 Introduction and Method. Note that the introduction is headed with the full title, and the method section begins immediately after the introduction ends.

Figure 11.3 Introduction and Method. Note that the introduction is headed with the full title, and the method section begins immediately after the introduction ends.

Figure 11.4 Results and Discussion The discussion begins immediately after the results section ends.

Figure 11.4 Results and Discussion The discussion begins immediately after the results section ends.

Figure 11.5 References and Figure. If there were appendices or tables, they would come before the figure.

Figure 11.5 References and Figure. If there were appendices or tables, they would come before the figure.

Key Takeaways

  • An APA-style empirical research report consists of several standard sections. The main ones are the abstract, introduction, method, results, discussion, and references.
  • The introduction consists of an opening that presents the research question, a literature review that describes previous research on the topic, and a closing that restates the research question and comments on the method. The literature review constitutes an argument for why the current study is worth doing.
  • The method section describes the method in enough detail that another researcher could replicate the study. At a minimum, it consists of a participants subsection and a design and procedure subsection.
  • The results section describes the results in an organized fashion. Each primary result is presented in terms of statistical results but also explained in words.
  • The discussion typically summarizes the study, discusses theoretical and practical implications and limitations of the study, and offers suggestions for further research.
  • Practice: Look through an issue of a general interest professional journal (e.g.,  Psychological Science ). Read the opening of the first five articles and rate the effectiveness of each one from 1 ( very ineffective ) to 5 ( very effective ). Write a sentence or two explaining each rating.
  • Practice: Find a recent article in a professional journal and identify where the opening, literature review, and closing of the introduction begin and end.
  • Practice: Find a recent article in a professional journal and highlight in a different color each of the following elements in the discussion: summary, theoretical implications, practical implications, limitations, and suggestions for future research.
  • Bem, D. J. (2003). Writing the empirical journal article. In J. M. Darley, M. P. Zanna, & H. R. Roediger III (Eds.),  The complete academic: A practical guide for the beginning social scientist  (2nd ed.). Washington, DC: American Psychological Association. ↵
  • Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility.  Journal of Personality and Social Psychology, 4 , 377–383. ↵

Creative Commons License

Share This Book

  • Increase Font Size
  • Search This Site All UCSD Sites Faculty/Staff Search Term
  • Contact & Directions
  • Climate Statement
  • Cognitive Behavioral Neuroscience
  • Cognitive Psychology
  • Developmental Psychology
  • Social Psychology
  • Adjunct Faculty
  • Non-Senate Instructors
  • Researchers
  • Psychology Grads
  • Affiliated Grads
  • New and Prospective Students
  • Honors Program
  • Experiential Learning
  • Programs & Events
  • Psi Chi / Psychology Club
  • Prospective PhD Students
  • Current PhD Students
  • Area Brown Bags
  • Colloquium Series
  • Anderson Distinguished Lecture Series
  • Speaker Videos
  • Undergraduate Program
  • Academic and Writing Resources

Writing Research Papers

  • Research Paper Structure

Whether you are writing a B.S. Degree Research Paper or completing a research report for a Psychology course, it is highly likely that you will need to organize your research paper in accordance with American Psychological Association (APA) guidelines.  Here we discuss the structure of research papers according to APA style.

Major Sections of a Research Paper in APA Style

A complete research paper in APA style that is reporting on experimental research will typically contain a Title page, Abstract, Introduction, Methods, Results, Discussion, and References sections. 1  Many will also contain Figures and Tables and some will have an Appendix or Appendices.  These sections are detailed as follows (for a more in-depth guide, please refer to " How to Write a Research Paper in APA Style ”, a comprehensive guide developed by Prof. Emma Geller). 2

What is this paper called and who wrote it? – the first page of the paper; this includes the name of the paper, a “running head”, authors, and institutional affiliation of the authors.  The institutional affiliation is usually listed in an Author Note that is placed towards the bottom of the title page.  In some cases, the Author Note also contains an acknowledgment of any funding support and of any individuals that assisted with the research project.

One-paragraph summary of the entire study – typically no more than 250 words in length (and in many cases it is well shorter than that), the Abstract provides an overview of the study.

Introduction

What is the topic and why is it worth studying? – the first major section of text in the paper, the Introduction commonly describes the topic under investigation, summarizes or discusses relevant prior research (for related details, please see the Writing Literature Reviews section of this website), identifies unresolved issues that the current research will address, and provides an overview of the research that is to be described in greater detail in the sections to follow.

What did you do? – a section which details how the research was performed.  It typically features a description of the participants/subjects that were involved, the study design, the materials that were used, and the study procedure.  If there were multiple experiments, then each experiment may require a separate Methods section.  A rule of thumb is that the Methods section should be sufficiently detailed for another researcher to duplicate your research.

What did you find? – a section which describes the data that was collected and the results of any statistical tests that were performed.  It may also be prefaced by a description of the analysis procedure that was used. If there were multiple experiments, then each experiment may require a separate Results section.

What is the significance of your results? – the final major section of text in the paper.  The Discussion commonly features a summary of the results that were obtained in the study, describes how those results address the topic under investigation and/or the issues that the research was designed to address, and may expand upon the implications of those findings.  Limitations and directions for future research are also commonly addressed.

List of articles and any books cited – an alphabetized list of the sources that are cited in the paper (by last name of the first author of each source).  Each reference should follow specific APA guidelines regarding author names, dates, article titles, journal titles, journal volume numbers, page numbers, book publishers, publisher locations, websites, and so on (for more information, please see the Citing References in APA Style page of this website).

Tables and Figures

Graphs and data (optional in some cases) – depending on the type of research being performed, there may be Tables and/or Figures (however, in some cases, there may be neither).  In APA style, each Table and each Figure is placed on a separate page and all Tables and Figures are included after the References.   Tables are included first, followed by Figures.   However, for some journals and undergraduate research papers (such as the B.S. Research Paper or Honors Thesis), Tables and Figures may be embedded in the text (depending on the instructor’s or editor’s policies; for more details, see "Deviations from APA Style" below).

Supplementary information (optional) – in some cases, additional information that is not critical to understanding the research paper, such as a list of experiment stimuli, details of a secondary analysis, or programming code, is provided.  This is often placed in an Appendix.

Variations of Research Papers in APA Style

Although the major sections described above are common to most research papers written in APA style, there are variations on that pattern.  These variations include: 

  • Literature reviews – when a paper is reviewing prior published research and not presenting new empirical research itself (such as in a review article, and particularly a qualitative review), then the authors may forgo any Methods and Results sections. Instead, there is a different structure such as an Introduction section followed by sections for each of the different aspects of the body of research being reviewed, and then perhaps a Discussion section. 
  • Multi-experiment papers – when there are multiple experiments, it is common to follow the Introduction with an Experiment 1 section, itself containing Methods, Results, and Discussion subsections. Then there is an Experiment 2 section with a similar structure, an Experiment 3 section with a similar structure, and so on until all experiments are covered.  Towards the end of the paper there is a General Discussion section followed by References.  Additionally, in multi-experiment papers, it is common for the Results and Discussion subsections for individual experiments to be combined into single “Results and Discussion” sections.

Departures from APA Style

In some cases, official APA style might not be followed (however, be sure to check with your editor, instructor, or other sources before deviating from standards of the Publication Manual of the American Psychological Association).  Such deviations may include:

  • Placement of Tables and Figures  – in some cases, to make reading through the paper easier, Tables and/or Figures are embedded in the text (for example, having a bar graph placed in the relevant Results section). The embedding of Tables and/or Figures in the text is one of the most common deviations from APA style (and is commonly allowed in B.S. Degree Research Papers and Honors Theses; however you should check with your instructor, supervisor, or editor first). 
  • Incomplete research – sometimes a B.S. Degree Research Paper in this department is written about research that is currently being planned or is in progress. In those circumstances, sometimes only an Introduction and Methods section, followed by References, is included (that is, in cases where the research itself has not formally begun).  In other cases, preliminary results are presented and noted as such in the Results section (such as in cases where the study is underway but not complete), and the Discussion section includes caveats about the in-progress nature of the research.  Again, you should check with your instructor, supervisor, or editor first.
  • Class assignments – in some classes in this department, an assignment must be written in APA style but is not exactly a traditional research paper (for instance, a student asked to write about an article that they read, and to write that report in APA style). In that case, the structure of the paper might approximate the typical sections of a research paper in APA style, but not entirely.  You should check with your instructor for further guidelines.

Workshops and Downloadable Resources

  • For in-person discussion of the process of writing research papers, please consider attending this department’s “Writing Research Papers” workshop (for dates and times, please check the undergraduate workshops calendar).

Downloadable Resources

  • How to Write APA Style Research Papers (a comprehensive guide) [ PDF ]
  • Tips for Writing APA Style Research Papers (a brief summary) [ PDF ]
  • Example APA Style Research Paper (for B.S. Degree – empirical research) [ PDF ]
  • Example APA Style Research Paper (for B.S. Degree – literature review) [ PDF ]

Further Resources

How-To Videos     

  • Writing Research Paper Videos

APA Journal Article Reporting Guidelines

  • Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., & Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board task force report . American Psychologist , 73 (1), 3.
  • Levitt, H. M., Bamberg, M., Creswell, J. W., Frost, D. M., Josselson, R., & Suárez-Orozco, C. (2018). Journal article reporting standards for qualitative primary, qualitative meta-analytic, and mixed methods research in psychology: The APA Publications and Communications Board task force report . American Psychologist , 73 (1), 26.  

External Resources

  • Formatting APA Style Papers in Microsoft Word
  • How to Write an APA Style Research Paper from Hamilton University
  • WikiHow Guide to Writing APA Research Papers
  • Sample APA Formatted Paper with Comments
  • Sample APA Formatted Paper
  • Tips for Writing a Paper in APA Style

1 VandenBos, G. R. (Ed). (2010). Publication manual of the American Psychological Association (6th ed.) (pp. 41-60).  Washington, DC: American Psychological Association.

2 geller, e. (2018).  how to write an apa-style research report . [instructional materials]. , prepared by s. c. pan for ucsd psychology.

Back to top  

  • Formatting Research Papers
  • Using Databases and Finding References
  • What Types of References Are Appropriate?
  • Evaluating References and Taking Notes
  • Citing References
  • Writing a Literature Review
  • Writing Process and Revising
  • Improving Scientific Writing
  • Academic Integrity and Avoiding Plagiarism
  • Writing Research Papers Videos

The Professional Counselor

Guidelines and Recommendations for Writing a Rigorous Quantitative Methods Section in Counseling and Related Fields

Volume 12 - Issue 3

Michael T. Kalkbrenner

Conducting and publishing rigorous empirical research based on original data is essential for advancing and sustaining high-quality counseling practice. The purpose of this article is to provide a one-stop-shop for writing a rigorous quantitative Methods section in counseling and related fields. The importance of judiciously planning, implementing, and writing quantitative research methods cannot be understated, as methodological flaws can completely undermine the integrity of the results. This article includes an overview, considerations, guidelines, best practices, and recommendations for conducting and writing quantitative research designs. The author concludes with an exemplar Methods section to provide a sample of one way to apply the guidelines for writing or evaluating quantitative research methods that are detailed in this manuscript.

Keywords : empirical, quantitative, methods, counseling, writing

     The findings of rigorous empirical research based on original data is crucial for promoting and maintaining high-quality counseling practice (American Counseling Association [ACA], 2014; Giordano et al., 2021; Lutz & Hill, 2009; Wester et al., 2013). Peer-reviewed publication outlets play a crucial role in ensuring the rigor of counseling research and distributing the findings to counseling practitioners. The four major sections of an original empirical study usually include: (a) Introduction/Literature Review, (b) Methods, (c) Results, and (d) Discussion (American Psychological Association [APA], 2020; Heppner et al., 2016). Although every section of a research study must be carefully planned, executed, and reported (Giordano et al., 2021), scholars have engaged in commentary about the importance of a rigorous and clearly written Methods section for decades (Korn & Bram, 1988; Lutz & Hill, 2009). The Methods section is the “conceptual epicenter of a manuscript” (Smagorinsky, 2008, p. 390) and should include clear and specific details about how the study was conducted (Heppner et al., 2016). It is essential that producers and consumers of research are aware of key methodological standards, as the quality of quantitative methods in published research can vary notably, which has serious implications for the merit of research findings (Lutz & Hill, 2009; Wester et al., 2013).

Careful planning prior to launching data collection is especially important for conducting and writing a rigorous quantitative Methods section, as it is rarely appropriate to alter quantitative methods after data collection is complete for both practical and ethical reasons (ACA, 2014; Creswell & Creswell, 2018). A well-written Methods section is also crucial for publishing research in a peer-reviewed journal; any serious methodological flaws tend to automatically trigger a decision of rejection without revisions. Accordingly, the purpose of this article is to provide both producers and consumers of quantitative research with guidelines and recommendations for writing or evaluating the rigor of a Methods section in counseling and related fields. Specifically, this manuscript includes a general overview of major quantitative methodological subsections as well as an exemplar Methods section. The recommended subsections and guidelines for writing a rigorous Methods section in this manuscript (see Appendix) are based on a synthesis of (a) the extant literature (e.g., Creswell & Creswell, 2018; Flinn & Kalkbrenner, 2021; Giordano et al., 2021); (b) the Standards for Educational and Psychological Testing (American Educational Research Association [AERA] et al., 2014), (c) the ACA Code of Ethics (ACA, 2014), and (d) the Journal Article Reporting Standards (JARS) in the APA 7 (2020) manual.

Quantitative Methods: An Overview of the Major Sections

The Methods section is typically the second major section in a research manuscript and can begin with an overview of the theoretical framework and research paradigm that ground the study (Creswell & Creswell, 2018; Leedy & Ormrod, 2019). Research paradigms and theoretical frameworks are more commonly reported in qualitative, conceptual, and dissertation studies than in quantitative studies. However, research paradigms and theoretical frameworks can be very applicable to quantitative research designs (see the exemplar Methods section below). Readers are encouraged to consult Creswell and Creswell (2018) for a clear and concise overview about the utility of a theoretical framework and a research paradigm in quantitative research.

Research Design      The research design should be clearly specified at the beginning of the Methods section. Commonly employed quantitative research designs in counseling include but are not limited to group comparisons (e.g., experimental, quasi-experimental, ex-post-facto), correlational/predictive, meta-analysis, descriptive, and single-subject designs (Creswell & Creswell, 2018; Flinn & Kalkbrenner, 2021; Leedy & Ormrod, 2019). A well-written literature review and strong research question(s) will dictate the most appropriate research design. Readers can refer to Flinn and Kalkbrenner (2021) for free (open access) commentary on and examples of conducting a literature review, formulating research questions, and selecting the most appropriate corresponding research design.

Researcher Bias and Reflexivity      Counseling researchers have an ethical responsibility to minimize their personal biases throughout the research process (ACA, 2014). A researcher’s personal beliefs, values, expectations, and attitudes create a lens or framework for how data will be collected and interpreted. Researcher reflexivity or positionality statements are well-established methodological standards in qualitative research (Hays & Singh, 2012; Heppner et al., 2016; Rovai et al., 2013). Researcher bias is rarely reported in quantitative research; however, researcher bias can be just as inherently present in quantitative as it is in qualitative studies. Being reflexive and transparent about one’s biases strengthens the rigor of the research design (Creswell & Creswell, 2018; Onwuegbuzie & Leech, 2005). Accordingly, quantitative researchers should consider reflecting on their biases in similar ways as qualitative researchers (Onwuegbuzie & Leech, 2005). For example, a researcher’s topical and methodological choices are, at least in part, based on their personal interests and experiences. To this end, quantitative researchers are encouraged to reflect on and consider reporting their beliefs, assumptions, and expectations throughout the research process.

Participants and Procedures      The major aim in the Participants and Procedures subsection of the Methods section is to provide a clear description of the study’s participants and procedures in enough detail for replication (ACA, 2014; APA, 2020; Giordano et al., 2021; Heppner et al., 2016). When working with human subjects, authors should briefly discuss research ethics including but not limited to receiving institutional review board (IRB) approval (Giordano et al., 2021; Korn & Bram, 1988). Additional considerations for the Participants and Procedures section include details about the authors’ sampling procedure, inclusion and/or exclusion criteria for participation, sample size, participant background information, location/site, and protocol for interventions (APA, 2020).

Sampling Procedure and Sample Size      Sampling procedures should be clearly stated in the Methods section. At a minimum, the description of the sampling procedure should include researcher access to prospective participants, recruitment procedures, data collection modality (e.g., online survey), and sample size considerations. Quantitative sampling approaches tend to be clustered into either probability or non-probability techniques (Creswell & Creswell, 2018; Leedy & Ormrod, 2019). The key distinguishing feature of probability sampling is random selection, in which all prospective participants in the population have an equal chance of being randomly selected to participate in the study (Leedy & Ormrod, 2019). Examples of probability sampling techniques include simple random sampling, systematic random sampling, stratified random sampling, or cluster sampling (Leedy & Ormrod, 2019).

Non-probability sampling techniques lack random selection and there is no way of determining if every member of the population had a chance of being selected to participate in the study (Leedy & Ormrod, 2019). Examples of non-probability sampling procedures include volunteer sampling, convenience sampling, purposive sampling, quota sampling, snowball sampling, and matched sampling. In quantitative research, probability sampling procedures are more rigorous in terms of generalizability (i.e., the extent to which research findings based on sample data extend or generalize to the larger population from which the sample was drawn). However, probability sampling is not always possible and non-probability sampling procedures are rigorous in their own right. Readers are encouraged to review Leedy and Ormrod’s (2019) commentary on probability and non-probability sampling procedures. Ultimately, the selection of a sampling technique should be made based on the population parameters, available resources, and the purpose and goals of the study.

     A Priori Statistical Power Analysis . It is essential that quantitative researchers determine the minimum necessary sample size for computing statistical analyses before launching data collection (Balkin & Sheperis, 2011; Sink & Mvududu, 2010). An insufficient sample size substantially increases the probability of committing a Type II error, which occurs when the results of statistical testing reveal non–statistically significant findings when in reality (of which the researcher is unaware), significant findings do exist. Computing an a priori (computed before starting data collection) statistical power analysis reduces the chances of a Type II error by determining the smallest sample size that is necessary for finding statistical significance, if statistical significance exists (Balkin & Sheperis, 2011). Readers can consult Balkin and Sheperis (2011) as well as Sink and Mvududu (2010) for an overview of statistical significance, effect size, and statistical power. A number of statistical power analysis programs are available to researchers. For example, G*Power (Faul et al., 2009) is a free software program for computing a priori statistical power analyses.

Sampling Frame and Location      Counselors should report their sampling frame (total number of potential participants), response rate, raw sample (total number of participants that engaged with the study at any level, including missing and incomplete data), and the size of the final useable sample. It is also important to report the breakdown of the sample by demographic and other important participant background characteristics, for example, “XX.X% ( n = XXX) of participants were first-generation college students, XX.X% ( n = XXX) were second-generation . . .” The selection of demographic variables as well as inclusion and exclusion criteria should be justified in the literature review. Readers are encouraged to consult Creswell and Creswell (2018) for commentary on writing a strong literature review.

The timeframe, setting, and location during which data were collected are important methodological considerations (APA, 2020). Specific names of institutions and agencies should be masked to protect their privacy and confidentiality; however, authors can give descriptions of the setting and location (e.g., “Data were collected between April 2021 to February 2022 from clients seeking treatment for addictive disorders at an outpatient, integrated behavioral health care clinic located in the Northeastern United States.”). Authors should also report details about any interventions, curriculum, qualifications and background information for research assistants, experimental design protocol(s), and any other procedural design issues that would be necessary for replication. In instances in which describing a treatment or conditions becomes exorbitant (e.g., step-by-step manualized therapy, programs, or interventions), researchers can include footnotes, appendices, and/or references to refer the reader to more information about the intervention protocol.

Missing Data      Procedures for handling missing values (incomplete survey responses) are important considerations in quantitative data analysis. Perhaps the most straightforward option for handling missing data is to simply delete missing responses. However, depending on the percentage of data that are missing and how the data are missing (e.g., missing completely at random, missing at random, or not missing at random), data imputation techniques can be employed to recover missing values (Cook, 2021; Myers, 2011). Quantitative researchers should provide a clear rationale behind their decisions around the deletion of missing values or when using a data imputation method. Readers are encouraged to review Cook’s (2021) commentary on procedures for handling missing data in quantitative research.

Measures      Counseling and other social science researchers oftentimes use instruments and screening tools to appraise latent traits, which can be defined as variables that are inferred rather than observed (AERA et al., 2014). The purpose of the Measures (aka Instrumentation) section is to operationalize the construct(s) of measurement (Heppner et al., 2016). Specifically, the Measures subsection of the Methods in a quantitative manuscript tends to include a presentation of (a) the instrument and construct(s) of measurement, (b) reliability and validity evidence of test scores, and (c) cross-cultural fairness and norming. The Measures section might also include a Materials subsection for studies that employed data-gathering techniques or equipment besides or in addition to instruments (Heppner et al., 2016); for instance, if a research study involved the use of a biofeedback device to collect data on changes in participants’ body functions.

Instrument and Construct of Measurement      Begin the Measures section by introducing the questionnaire or screening tool, its construct(s) of measurement, number of test items, example test items, and scale points. If applicable, the Measures section can also include information on scoring procedures and cutoff criterion; for example, total score benchmarks for low, medium, and high levels of the trait. Authors might also include commentary about how test scores will be operationalized to constitute the variables in the upcoming Data Analysis section.

Reliability and Validity Evidence of Test Scores      Reliability evidence involves the degree to which test scores are stable or consistent and validity evidence refers to the extent to which scores on a test succeed in measuring what the test was designed to measure (AERA et al., 2014; Bardhoshi & Erford, 2017). Researchers should report both reliability and validity evidence of scores for each instrument they use (Wester et al., 2013). A number of forms of reliability evidence exist (e.g., internal consistency, test-retest, interrater, and alternate/parallel/equivalent forms) and the AERA standards (2014) outline five forms of validity evidence. For the purposes of this article, I will focus on internal consistency reliability, as it is the most popular and most commonly misused reliability estimate in social sciences research (Kalkbrenner, 2021a; McNeish, 2018), as well as construct validity. The psychometric properties of a test (including reliability and validity evidence) are contingent upon the scores from which they were derived. As such, no test is inherently valid or reliable; test scores are only reliable and valid for a certain purpose, at a particular time, for use with a specific sample. Accordingly, authors should discuss reliability and validity evidence in terms of scores, for example, “Stamm (2010) found reliability and validity evidence of scores on the Professional Quality of Life (ProQOL 5) with a sample of . . . ”

Internal Consistency Reliability Evidence. Internal consistency estimates are derived from associations between the test items based on one administration (Kalkbrenner, 2021a). Cronbach’s coefficient alpha (α) is indisputably the most popular internal consistency reliability estimate in counseling and throughout social sciences research in general (Kalkbrenner, 2021a; McNeish, 2018). The appropriate use of coefficient alpha is reliant on the data meeting the following statistical assumptions: (a) essential tau equivalence, (b) continuous level scale of measurement, (c) normally distributed data, (d) uncorrelated error, (e) unidimensional scale, and (f) unit-weighted scaling (Kalkbrenner, 2021a). For decades, coefficient alpha has been passed down in the instructional practice of counselor training programs. Coefficient alpha has appeared as the dominant reliability index in national counseling and psychology journals without most authors computing and reporting the necessary statistical assumption checking (Kalkbrenner, 2021a; McNeish, 2018). The psychometrically daunting practice of using alpha without assumption checking poses a threat to the veracity of counseling research, as the accuracy of coefficient alpha is threatened if the data violate one or more of the required assumptions.

Internal Consistency Reliability Indices and Their Appropriate Use . Composite reliability (CR) internal consistency estimates are derived in similar ways as coefficient alpha; however, the proper computation of CRs is not reliant on the data meeting many of alpha’s statistical assumptions (Kalkbrenner, 2021a; McNeish, 2018). For example, McDonald’s coefficient omega (ω or ω t ) is a CR estimate that is not dependent on the data meeting most of alpha’s assumptions (Kalkbrenner, 2021a). In addition, omega hierarchical (ω h ) and coefficient H are CR estimates that can be more advantageous than alpha. Despite the utility of CRs, their underuse in research practice is historically, in part, because of the complex nature of computation. However, recent versions of SPSS include a breakthrough point-and-click feature for computing coefficient omega as easily as coefficient alpha. Readers can refer to the SPSS user guide for steps to compute omega.

Guidelines for Reporting Internal Consistency Reliability. In the Measures subsection of the Methods section, researchers should report existing reliability evidence of scores for their instruments. This can be done briefly by reporting the results of multiple studies in the same sentence, as in: “A number of past investigators found internal consistency reliability evidence for scores on the [name of test] with a number of different samples, including college students (α =. XX, ω =. XX; Authors et al., 20XX), clients living with chronic back pain (α =. XX, ω =. XX; Authors et al., 20XX), and adults in the United States (α = . XX, ω =. XX; Authors et al., 20XX) . . .”

Researchers should also compute and report reliability estimates of test scores with their data set in the Measures section. If a researcher is using coefficient alpha, they have a duty to complete and report assumption checking to demonstrate that the properties of their sample data were suitable for alpha (Kalkbrenner, 2021a; McNeish, 2018). Another option is to compute a CR (e.g., ω or H ) instead of alpha. However, Kalkbrenner (2021a) recommended that researchers report both coefficient alpha (because of its popularity) and coefficient omega (because of the robustness of the estimate). The proper interpretation of reliability estimates of test scores is done on a case-by-case basis, as the meaning of reliability coefficients is contingent upon the construct of measurement and the stakes or consequences of the results for test takers (Kalkbrenner, 2021a). The following tentative interpretative guidelines for adults’ scores on attitudinal measures were offered by Kalkbrenner (2021b) for coefficient alpha: α < .70 = poor, α > .70 to .84 = acceptable, α > .85 = strong; and for coefficient omega: ω < .65 = poor, ω > .65 to .80 = acceptable, ω > .80 = strong. It is important to note that these thresholds are for adults’ scores on attitudinal measures; acceptable internal consistency reliability estimates of scores should be much stronger for high-stakes testing.

     Construct Validity Evidence of Test Scores. Construct validity involves the test’s ability to accurately capture a theoretical or latent construct (AERA et al., 2014). Construct validity considerations are particularly important for counseling researchers who tend to investigate latent traits as outcome variables. At a minimum, counseling researchers should report construct validity evidence for both internal structure and relations with theoretically relevant constructs. Internal structure (aka factorial validity) is a source of construct validity that represents the degree to which “the relationships among test items and test components conform to the construct on which the proposed test score interpretations are based” (AERA et al., 2014, p. 16). Readers can refer to Kalkbrenner (2021b) for a free (open access publishing) overview of exploratory factor analysis (EFA) and confirmatory factor analysis (CFA) that is written in layperson’s terms. Relations with theoretically relevant constructs (e.g., convergent and divergent validity) are another source of construct validity evidence that involves comparing scores on the test in question with scores on other reputable tests (AERA et al., 2014; Strauss & Smith, 2009).

     Guidelines for Reporting Validity Evidence. Counseling researchers should report existing evidence of at least internal structure and relations with theoretically relevant constructs (e.g., convergent or divergent validity) for each instrument they use. EFA results alone are inadequate for demonstrating internal structure validity evidence of scores, as EFA is a much less rigorous test of internal structure than CFA (Kalkbrenner, 2021b). In addition, EFA results can reveal multiple retainable factor solutions, which need to be tested/confirmed via CFA before even initial internal structure validity evidence of scores can be established. Thus, both EFA and CFA are necessary for reporting/demonstrating initial evidence of internal structure of test scores. In an extension of internal structure, counselors should also report existing convergent and/or divergent validity of scores. High correlations ( r > .50) demonstrate evidence of convergent validity and moderate-to-low correlations ( r < .30, preferably r < .10) support divergent validity evidence of scores (Sink & Stroh, 2006; Swank & Mullen, 2017).

In an ideal situation, a researcher will have the resources to test and report the internal structure (e.g., compute CFA firsthand) of scores on the instrumentation with their sample. However, CFA requires large sample sizes (Kalkbrenner, 2021b), which oftentimes is not feasible. It might be more practical for researchers to test and report relations with theoretically relevant constructs, though adding one or more questionnaire(s) to data collection efforts can come with the cost of increasing respondent fatigue. In these instances, researchers might consider reporting other forms of validity evidence (e.g., evidence based on test content, criterion validity, or response processes; AERA et al., 2014). In instances when computing firsthand validity evidence of scores is not logistically viable, researchers should be transparent about this limitation and pay especially careful attention to presenting evidence for cross-cultural fairness and norming.

Cross-Cultural Fairness and Norming      In a psychometric context, fairness (sometimes referred to as cross-cultural fairness) is a fundamental validity issue and a complex construct to define (AERA et al., 2014; Kane, 2010; Neukrug & Fawcett, 2015). I offer the following composite definition of cross-cultural fairness for the purposes of a quantitative Measures section: the degree to which test construction, administration procedures, interpretations, and uses of results are equitable and represent an accurate depiction of a diverse group of test takers’ abilities, achievement, attitudes, perceptions, values, and/or experiences (AERA et al., 2014; Educational Testing Service [ETS], 2016; Kane, 2010; Kane & Bridgeman, 2017). Counseling researchers should consider the following central fairness issues when selecting or developing instrumentation: measurement bias, accessibility, universal design, equivalent meaning (invariance), test content, opportunity to learn, test adaptations, and comparability (AERA et al., 2014; Kane & Bridgeman, 2017). Providing a comprehensive overview of fairness is beyond the scope of this article; however, readers are encouraged to read Chapter 3 in the AERA standards (2014) on Fairness in Testing.

In the Measures section, counseling researchers should include commentary on how and in what ways cross-cultural fairness guided their selection, administration, and interpretation of procedures and test results (AERA et al., 2014; Kalkbrenner, 2021b). Cross-cultural fairness and construct validity are related constructs (AERA et al., 2014). Accordingly, citing construct validity of test scores (see the previous section) with normative samples similar to the researcher’s target population is one way to provide evidence of cross-cultural fairness. However, construct validity evidence alone might not be a sufficient indication of cross-cultural fairness, as the latent meaning of test scores are a function of test takers’ cultural context (Kalkbrenner, 2021b). To this end, when selecting instrumentation, researchers should review original psychometric studies and consider the normative sample(s) from which test scores were derived.

Commentary on the Danger of Using Self-Developed and Untested Scales      Counseling researchers have an ethical duty to “carefully consider the validity, reliability, psychometric limitations, and appropriateness of instruments when selecting assessments” (ACA, 2014, p. 11). Quantitative researchers might encounter instances in which a scale is not available to measure their desired construct of measurement (latent/inferred variable). In these cases, the first step in the line of research is oftentimes to conduct an instrument development and score validation study (AERA et al., 2014; Kalkbrenner, 2021b). Detailing the protocol for conducting psychometric research is outside the scope of this article; however, readers can refer to the MEASURE Approach to Instrument Development (Kalkbrenner, 2021c) for a free (open access publishing) overview of the steps in an instrument development and score validation study. Adapting an existing scale can be option in lieu of instrument development; however, according to the AERA standards (2014), “an index that is constructed by manipulating and combining test scores should be subjected to the same validity, reliability, and fairness investigations that are expected for the test scores that underlie the index” (p. 210). Although it is not necessary that all quantitative researchers become psychometricians and conduct full-fledged psychometric studies to validate scores on instrumentation, researchers do have a responsibility to report evidence of the reliability, validity, and cross-cultural fairness of test scores for each instrument they used. Without at least initial construct validity testing of scores (calibration), researchers cannot determine what, if anything at all, an untested instrument actually measures.

Data Analysis      Counseling researchers should report and explain the selection of their data analytic procedures (e.g., statistical analyses) in a Data Analysis (or Statistical Analysis) subsection of the Methods or Results section (Giordano et al., 2021; Leedy & Ormrod, 2019). The placement of the Data Analysis section in either the Methods or Results section can vary between publication outlets; however, this section tends to include commentary on variables, statistical models and analyses, and statistical assumption checking procedures.

Operationalizing Variables and Corresponding Statistical Analyses      Clearly outlining each variable is an important first step in selecting the most appropriate statistical analysis for answering each research question (Creswell & Creswell, 2018). Researchers should specify the independent variable(s) and corresponding levels as well as the dependent variable(s); for example, “The first independent variable, time, was composed of the three following levels: pre, middle, and post. The dependent variables were participants’ scores on the burnout and compassion satisfaction subscales of the ProQOL 5.” After articulating the variables, counseling researchers are tasked with identifying each variable’s scale of measurement (Creswell & Creswell, 2018; Field, 2018; Flinn & Kalkbrenner, 2021). Researchers can select the most appropriate statistical test(s) for answering their research question(s) based on the scale of measurement for each variable and referring to Table 8.3 on page 159 in Creswell and Creswell (2018), Figure 1 in Flinn and Kalkbrenner (2021), or the chart on page 1072 in Field (2018).

Assumption Checking      Statistical analyses used in quantitative research are derived based on a set of underlying assumptions (Field, 2018; Giordano et al., 2021). Accordingly, it is essential that quantitative researchers outline their protocol for testing their sample data for the appropriate statistical assumptions. Assumptions of common statistical tests in counseling research include normality, absence of outliers (multivariate and/or univariate), homogeneity of covariance, homogeneity of regression slopes, homoscedasticity, independence, linearity, and absence of multicollinearity (Flinn & Kalkbrenner, 2021; Giordano et al., 2021). Readers can refer to Figure 2 in Flinn and Kalkbrenner (2021) for an overview of statistical assumptions for the major statistical analyses in counseling research.

Exemplar Quantitative Methods Section

The following section includes an exemplar quantitative methods section based on a hypothetical example and a practice data set. Producers and consumers of quantitative research can refer to the following section as an example for writing their own Methods section or for evaluating the rigor of an existing Methods section. As stated previously, a well-written literature review and research question(s) are essential for grounding the study and Methods section (Flinn & Kalkbrenner, 2021). The final piece of a literature review section is typically the research question(s). Accordingly, the following research question guided the following exemplar Methods section: To what extent are there differences in anxiety severity between college students who participate in deep breathing exercises with progressive muscle relaxation, group exercise program, or both group exercise and deep breathing with progressive muscle relaxation?

——-Exemplar——-

A quantitative group comparison research design was employed based on a post-positivist philosophy of science (Creswell & Creswell, 2018). Specifically, I implemented a quasi-experimental, control group pretest/posttest design to answer the research question (Leedy & Ormrod, 2019). Consistent with a post-positivist philosophy of science, I reflected on pursuing a probabilistic objective answer that is situated within the context of imperfect and fallible evidence. The rationale for the present study was grounded in Dr. David Servan-Schreiber’s (2009) theory of lifestyle practices for integrated mental and physical health. According to Servan-Schreiber, simultaneously focusing on improving one’s mental and physical health is more effective than focusing on either physical health or mental wellness in isolation. Consistent with Servan-Schreiber’s theory, the aim of the present study was to compare the utility of three different approaches for anxiety reduction: a behavioral approach alone, a physiological approach alone, and a combined behavioral approach and physiological approach.

I am in my late 30s and identify as a White man. I have a PhD in counselor education as well as an MS in clinical mental health counseling. I have a deep belief in and an active line of research on the utility of total wellness (combined mental and physical health). My research and clinical experience have informed my passion and interest in studying the utility of integrated physical and psychological health services. More specifically, my personal beliefs, values, and interest in total wellness influenced my decision to conduct the present study. I carefully followed the procedures outlined below to reduce the chances that my personal values biased the research design.

Participants and Procedures      Data collection began following approval from the IRB. Data were collected during the fall 2022 semester from undergraduate students who were at least 18 years or older and enrolled in at least one class at a land grant, research-intensive university located in the Southwestern United States. An a priori statistical power analysis was computed using G*Power (Faul et al., 2009). Results revealed that a sample size of at least 42 would provide an 80% power estimate, α = .05, with a moderate effect size, f = 0.25.

I obtained an email list from the registrar’s office of all students enrolled in a section of a Career Excellence course, which was selected to recruit students in a variety of academic majors because all undergraduate students in the College of Education are required to take this course. The focus of this study (mental and physical wellness) was also consistent with the purpose of the course (success in college). A non-probability, convenience sampling procedure was employed by sending a recruitment message to students’ email addresses via the Qualtrics online survey platform. The response rate was approximately 15%, with a total of 222 prospective participants indicating their interest in the study by clicking on the electronic recruitment link, which automatically sent them an invitation to attend an information session about the study. One hundred forty-four students showed up for the information session, 129 of which provided their voluntary informed consent to enroll in the study. Participants were given a confidential identification number to track their pretest/posttest responses, and then they completed the pretest (see the Measures section below). Respondents were randomly assigned in equal groups to either (a) deep breathing with progressive muscle relaxation condition, (b) group exercise condition, or (c) both exercise and deep breathing with progressive muscle relaxation condition.

A missing values analysis showed that less than 5% of data was missing for all cases. Expectation maximization was used to impute missing values, as Little’s Missing Completely at Random (MCAR) test revealed that the data could be treated as MCAR ( p = .367). Data from five participants who did not return to complete the posttest at the end of the semester were removed, yielding a robust sample of N = 124. Participants ( N = 124) ranged in age from 18 to 33 ( M = 21.64, SD = 3.70). In terms of gender identity, 65.0% ( n = 80) self-identified as female, 32.2% ( n = 40) as male, 0.8% ( n = 1) as transgender, and 2.4% ( n = 3) did not specify their gender identity. For ethnic identity, 50.0% ( n = 62) identified as White, 26.7% ( n = 33) as Latinx, 12.1% ( n = 15) as Asian, 9.6% ( n = 12) as Black, 0.8% ( n = 1) as Alaskan Native, and 0.8% ( n = 1) did not specify their ethnic identity. In terms of generational status, 36.3% ( n = 45) of participants were first-generation college students and 63.7% ( n = 79) were second-generation or beyond.

Group Exercise and Deep Breathing Programs      I was awarded a small grant to offer on-campus deep breathing with progressive muscle relaxation and group exercise programs. The structure of the group exercise program was based on Patterson et al. (2021), which consisted of more than 50 available exercise classes each week (e.g., cycling, yoga, swimming, dance). There was no limit to the number of classes that participants could attend; however, attending at least one class each week was required for participation in the study. Readers can refer to Patterson et al. for more information about the group exercise programming.

Neeru et al.’s (2015) deep breathing and progressive muscle relaxation programming was used in the present study. Participants completed daily deep breathing and Jacobson Progressive Muscle Relaxation (JPMR). JPMR was selected because of its documented success with treating anxiety disorders (Neeru et al., 2015). Specifically, the program consisted of four deep breathing steps completed five times and JPMR for approximately 25 minutes daily. Participants attended a weekly deep breathing and JPMR session facilitated by a licensed professional counselor. Participants also practiced deep breathing and JPMR on their own daily and kept a log to document their practice sessions. Readers can refer to Neeru et al. for more information about JPMR and the deep breathing exercises.

Measures      Prospective participants read an informed consent statement and indicated their voluntary informed consent by clicking on a checkbox. Next, participants confirmed that they met the following inclusion criteria: (a) at least 18 years old and (b) currently enrolled in at least one undergraduate college class. The instrumentation began with demographic items regarding participants’ gender identity, ethnic identity, age, and confidential identification number to track their pretest and posttest scores. Lastly, participants completed a convergent validity measure (Mental Health Inventory – 5) and the Generalized Anxiety Disorder (GAD)-7 to measure the outcome variable (anxiety severity).

Reliability and Validity Evidence of Test Scores      Tests of internal consistency were computed to test the reliability of scores on the screening tool for appraising anxiety severity with undergraduate students in the present sample. For internal consistency reliability of scores, coefficient alpha (α) and coefficient omega (ω) were computed with the following minimum thresholds for adults’ scores on attitudinal measures: α > .70 and ω > .65, based on the recommendations of Kalkbrenner (2021b).

The Mental Health Inventory–5. Participants completed the Mental Health Inventory (MHI)-5 to test the convergent validity of undergraduate students in the present samples’ scores on the GAD-7, which was used to measure the outcome variable in this study, anxiety severity. The MHI-5 is a 5-item measure for appraising overall mental health (Berwick et al., 1991). Higher MHI-5 scores reflect better mental health. Participants responded to test items (example: “How much of the time, during the past month, have you been a very nervous person?”) on the following Likert-type scale: 0 = none of the time , 1 = a little of the time , 2 = some of the time , 3 = a good bit of the time , 4 = most of the time , or 5 = all of the time . The MHI-5 has particular utility as a convergent validity measure because of its brief nature (5 items) coupled with the myriad of support for its psychometric properties (e.g., Berwick et al., 1991; Rivera-Riquelme et al., 2019; Thorsen et al., 2013). As just a few examples, Rivera-Riquelme et al. (2019) found acceptable internal consistency reliability evidence (α = .71, ω = .78) and internal structure validity evidence of MHI-5 scores. In addition, the findings of Thorsen et al. (2013) demonstrated convergent validity evidence of MHI-5 scores. Findings in the extant literature (e.g., Foster et al., 2016; Vijayan & Joseph, 2015) established an inverse relationship between anxiety and mental health. Thus, a strong negative correlation ( r > −.50; Sink & Stroh, 2006) between the MHI-5 and GAD-7 would support convergent validity evidence of scores.

     The Generalized Anxiety Disorder–7. The GAD-7 is a 7-item screening tool for appraising anxiety severity (Spitzer et al., 2006). Participants respond to test items based on the following prompt: “Over the last 2 weeks, how often have you been bothered by the following problems?” and anchor definitions: 0 = not at all , 1 = several days , 2 = more than half the days , or 3 = nearly every day (Spitzer et al., 2006, p. 1739). Sample test items include “being so restless that it’s hard to sit still” and “feeling afraid as if something awful might happen.” The GAD-7 items can be summed into an interval-level composite score, with higher scores indicating greater levels of Anxiety Severity. GAD-7 scores can range from 0 to 21 and are classified as mild (0–5), moderate (6–10), moderately severe (11–15), or severe (16–21).

In the initial score validation study, Spitzer et al. (2006) found evidence for internal consistency (α = .92) and test-retest reliability (intraclass correlation = .83) of GAD-7 scores among adults in the United States who were receiving services in primary care clinics. In more recent years, a number of additional investigators found internal consistency reliability evidence for GAD-7 scores, including samples of undergraduate college students in the southern United States (α = .91; Sriken et al., 2022), Black and Latinx adults in the United States (α = .93, ω = .93; Kalkbrenner, 2022), and English-speaking college students living in Ethiopia (ω = .77; Manzar et al., 2021). Similarly, the data set in the present study displayed acceptable internal consistency reliability evidence for GAD-7 scores (α = .82, ω = .81).

Spitzer et al. (2006) used factor analysis to establish internal structure validity, correlations with established screening tools for convergent validity, and criterion validity evidence by demonstrating the capacity of GAD-7 scores for detecting likely cases of generalized anxiety disorder. A number of subsequent investigators found internal structure validity evidence of GAD-7 scores via CFA and multiple-group CFA (Kalkbrenner, 2022; Sriken et al., 2022). In addition, the findings of Sriken et al. (2022) supported both the convergent and divergent validity of GAD-7 scores with other established tests. The data set in the present study ( N = 124) was not large enough for internal structure validity testing. However, a strong negative correlation ( r = −.78) between the GAD-7 and MHI-5 revealed convergent validity evidence of GAD-7 scores with the present sample of undergraduate students.

In terms of norming and cross-cultural fairness, there were qualitative differences between the normative GAD-7 sample in the original score validation study (adults in the United States receiving services in primary care clinics) and the non-clinical sample of young adult college students in the present study. However, the demographic profile of the present sample is consistent with Sriken et al. (2022), who validated GAD-7 scores with a large sample ( N = 414) of undergraduate college students. For example, the demographic profile of the sample in the current study for gender identity closely resembled the composition of Sriken et al.’s sample, which included 66.7% women, 33.1% men, and 0.2% transgender individuals. In terms of ethnic identity, the demographic profile of the present sample was consistent with Sriken et al. for White and Black participants, although the present sample reflected a somewhat smaller proportion of Asian students (19.6%) and a greater proportion of Latinx students (5.3%).

Data Analysis and Assumption Checking      The present study included two categorical-level independent variables and one continuous-level dependent variable. The first independent variable, program, consisted of three levels: (a) deep breathing with progressive muscle relaxation, (b) group exercise, or (c) both exercise and deep breathing with progressive muscle relaxation. The second independent variable, time, consisted of two levels: the beginning of the semester and the end of the semester. The dependent variable was participants’ interval-level score on the GAD-7. Accordingly, a 3 (program) X 2 (time) mixed-design analysis of variance (ANOVA) was the most appropriate statistical test for answering the research question (Field, 2018).

The data were examined for the following statistical assumptions for a mixed-design ANOVA: absence of outliers, normality, homogeneity of variance, and sphericity of the covariance matrix based on the recommendations of Field (2018). Standardized z -scores revealed an absence of univariate outliers ( z > 3.29). A review of skewness and kurtosis values were highly consistent with a normal distribution, with the majority of values less than ± 1.0. The results of a Levene’s test demonstrated that the data met the assumption of homogeneity of variance, F (2, 121) = 0.73, p = .486. Testing the data for sphericity was not applicable in this case, as the within-subjects IV (time) only comprised two levels.

——- End Exemplar ——-

The current article is a primer on guidelines, best practices, and recommendations for writing or evaluating the rigor of the Methods section of quantitative studies. Although the major elements of the Methods section summarized in this manuscript tend to be similar across the national peer-reviewed counseling journals, differences can exist between journals based on the content of the article and the editorial board members’ preferences. Accordingly, it can be advantageous for prospective authors to review recently published manuscripts in their target journal(s) to look for any similarities in the structure of the Methods (and other sections). For instance, in one journal, participants and procedures might be reported in a single subsection, whereas in other journals they might be reported separately. In addition, most journals post a list of guidelines for prospective authors on their websites, which can include instructions for writing the Methods section. The Methods section might be the most important section in a quantitative study, as in all likelihood methodological flaws cannot be resolved once data collection is complete, and serious methodological flaws will compromise the integrity of the entire study, rendering it unpublishable. It is also essential that consumers of quantitative research can proficiently evaluate the quality of a Methods section, as poor methods can make the results meaningless. Accordingly, the significance of carefully planning, executing, and writing a quantitative research Methods section cannot be understated.

Conflict of Interest and Funding Disclosure The authors reported no conflict of interest or funding contributions for the development of this manuscript.

American Counseling Association. (2014). ACA code of ethics .

American Psychological Association. (2020). Publication manual of the American Psychological Association: The official guide to APA style (7th ed.).

American Educational Research Association, American Psychological Association, & National Council on Measurement in Education. (2014). The standards for educational and psychological testing . https://www.aera.net/Publications/Books/Standards-for-Educational-Psychological-Testing-2014-Editio n

Balkin, R. S., & Sheperis, C. J. (2011). Evaluating and reporting statistical power in counseling research. Journal of Counseling & Development , 89 (3), 268–272. https://doi.org/10.1002/j.1556-6678.2011.tb00088.x

Bardhoshi, G., & Erford, B. T. (2017). Processes and procedures for estimating score reliability and precision. Measurement and Evaluation in Counseling and Development , 50 (4), 256–263. https://doi.org/10.1080/07481756.2017.1388680

Berwick, D. M., Murphy, J. M., Goldman, P. A., Ware, J. E., Jr., Barsky, A. J., & Weinstein, M. C. (1991). Performance of a five-item mental health screening test. Medical Care , 29 (2), 169–176. https://doi.org/10.1097/00005650-199102000-00008

Cook, R. M. (2021). Addressing missing data in quantitative counseling research. Counseling Outcome Research and Evaluation , 12 (1), 43–53. https://doi.org/10.1080/21501378.2019.171103

Creswell, J. W., & Creswell, J. D. (2018). Research design: Qualitative, quantitative, and mixed methods approaches (5th ed.). SAGE.

Educational Testing Service. (2016). ETS international principles for fairness review of assessments: A manual for developing locally appropriate fairness review guidelines for various countries . https://www.ets.org/content/dam/ets-org/pdfs/about/fairness-review-international.pdf

Faul, F., Erdfelder, E., Buchner, A., & Lang, A.-G. (2009). Statistical power analyses using G*Power 3.1: Tests for correlation and regression analyses. Behavior Research Methods , 41 (4), 1149–1160. https://doi.org/10.3758/BRM.41.4.1149

Field, A. (2018). Discovering statistics using IBM SPSS Statistics (5th ed.). SAGE.

Flinn, R. E., & Kalkbrenner, M. T. (2021). Matching variables with the appropriate statistical tests in counseling research. Teaching and Supervision in Counseling , 3 (3), Article 4. https://doi.org/10.7290/tsc030304

Foster, T., Steen, L., O’Ryan, L., & Nelson, J. (2016). Examining how the Adlerian life tasks predict anxiety in first-year counseling students. The Journal of Individual Psychology , 72 (2), 104–120. https://doi.org/10.1353/jip.2016.0009

Giordano, A. L., Schmit, M. K., & Schmit, E. L. (2021). Best practice guidelines for publishing rigorous research in counseling. Journal of Counseling & Development , 99 (2), 123–133. https://doi.org/10.1002/jcad.12360

Hays, D. G., & Singh, A. A. (2012). Qualitative inquiry in clinical and educational settings . Guilford.

Heppner, P. P., Wampold, B. E., Owen, J., Wang, K. T., & Thompson, M. N. (2016). Research design in counseling (4th ed.). Cengage.

Kalkbrenner, M. T. (2021a). Alpha, omega, and H internal consistency reliability estimates: Reviewing these options and when to use them. Counseling Outcome Research and Evaluation . https://doi.org/10.1080/21501378.2021.1940118

Kalkbrenner, M. T. (2021b). Enhancing assessment literacy in professional counseling: A practical overview of factor analysis. The Professional Counselor , 11 (3), 267–284. https://doi.org/10.15241/mtk.11.3.267

Kalkbrenner, M. T. (2021c). A practical guide to instrument development and score validation in the social sciences: The MEASURE Approach. Practical Assessment, Research & Evaluation , 26 (1), Article 1. https://doi.org/10.7275/svg4-e671

Kalkbrenner, M. T. (2022). Validation of scores on the Lifestyle Practices and Health Consciousness Inventory with Black and Latinx adults in the United States: A three-dimensional model. Measurement and Evaluation in Counseling and Development , 55 (2), 84–97. https://doi.org/10.1080/07481756.2021.1955214

Kane, M. (2010). Validity and fairness. Language Testing , 27 (2), 177–182. https://doi.org/10.1177/0265532209349467

Kane, M., & Bridgeman, B. (2017). Research on validity theory and practice at ETS . In R. E. Bennett & M. von Davier (Eds.), Advancing human assessment: The methodological, psychological and policy contributions of ETS (pp. 489–552). Springer. https://doi.org/10.1007/978-3-319-58689-2_16

Korn, J. H., & Bram, D. R. (1988). What is missing in the Method section of APA journal articles? American Psychologist , 43 (12), 1091–1092. https://doi.org/10.1037/0003-066X.43.12.1091

Leedy, P. D., & Ormrod, J. E. (2019). Practical research: Planning and design (12th ed.). Pearson.

Lutz, W., & Hill, C. E. (2009). Quantitative and qualitative methods for psychotherapy research: Introduction to special section. Psychotherapy Research , 19 (4–5), 369–373. https://doi.org/10.1080/10503300902948053

Manzar, M. D., Alghadir, A. H., Anwer, S., Alqahtani, M., Salahuddin, M., Addo, H. A., Jifar, W. W., & Alasmee, N. A. (2021). Psychometric properties of the General Anxiety Disorders-7 Scale using categorical data methods: A study in a sample of university attending Ethiopian young adults. Neuropsychiatric Disease and Treatment , 17 (1), 893–903. https://doi.org/10.2147/NDT.S295912

McNeish, D. (2018). Thanks coefficient alpha, we’ll take it from here. Psychological Methods , 23 (3), 412–433. https://doi.org/10.1037/met0000144

Myers, T. A. (2011). Goodbye, listwise deletion: Presenting hot deck imputation as an easy and effective tool for handling missing data. Communication Methods and Measures , 5 (4), 297–310. https://doi.org/10.1080/19312458.2011.624490

Neeru, Khakha, D. C., Satapathy, S., & Dey, A. B. (2015). Impact of Jacobson Progressive Muscle Relaxation (JPMR) and deep breathing exercises on anxiety, psychological distress and quality of sleep of hospitalized older adults. Journal of Psychosocial Research , 10 (2), 211–223.

Neukrug, E. S., & Fawcett, R. C. (2015). Essentials of testing and assessment: A practical guide for counselors, social workers, and psychologists (3rd ed.). Cengage.

Onwuegbuzie, A. J., & Leech, N. L. (2005). On becoming a pragmatic researcher: The importance of combining quantitative and qualitative research methodologies. International Journal of Social Research Methodology , 8 (5), 375–387. https://doi.org/10.1080/13645570500402447

Patterson, M. S., Gagnon, L. R., Vukelich, A., Brown, S. E., Nelon, J. L., & Prochnow, T. (2021). Social networks, group exercise, and anxiety among college students. Journal of American College Health , 69 (4), 361–369. https://doi.org/10.1080/07448481.2019.1679150

Rivera-Riquelme, M., Piqueras, J. A., & Cuijpers, P. (2019). The Revised Mental Health Inventory-5 (MHI-5) as an ultra-brief screening measure of bidimensional mental health in children and adolescents. Psychiatry Research , 247 (1), 247–253. https://doi.org/10.1016/j.psychres.2019.02.045

Rovai, A. P., Baker, J. D., & Ponton, M. K. (2013). Social science research design and statistics: A practitioner’s guide to research methods and SPSS analysis . Watertree Press.

Servan-Schreiber, D. (2009). Anticancer: A new way of life (3rd ed.). Viking Publishing.

Sink, C. A., & Mvududu, N. H. (2010). Statistical power, sampling, and effect sizes: Three keys to research relevancy. Counseling Outcome Research and Evaluation , 1 (2), 1–18. https://doi.org/10.1177/2150137810373613

Sink, C. A., & Stroh, H. R. (2006). Practical significance: The use of effect sizes in school counseling research. Professional School Counseling , 9 (5), 401–411. https://doi.org/10.1177/2156759X0500900406

Smagorinsky, P. (2008). The method section as conceptual epicenter in constructing social science research reports. Written Communication , 25 (3), 389–411. https://doi.org/10.1177/0741088308317815

Spitzer, R. L., Kroenke, K., Williams, J. B. W., & Löwe, B. (2006). A brief measure for assessing Generalized Anxiety Disorder: The GAD-7. Archives of Internal Medicine , 166 (10), 1092–1097. https://doi.org/10.1001/archinte.166.10.1092

Sriken, J., Johnsen, S. T., Smith, H., Sherman, M. F., & Erford, B. T. (2022). Testing the factorial validity and measurement invariance of college student scores on the Generalized Anxiety Disorder (GAD-7) Scale across gender and race. Measurement and Evaluation in Counseling and Development , 55 (1), 1–16. https://doi.org/10.1080/07481756.2021.1902239

Stamm, B. H. (2010). The Concise ProQOL Manual (2nd ed.). bit.ly/StammProQOL

Strauss, M. E., & Smith, G. T. (2009). Construct validity: Advances in theory and methodology. Annual Review of Clinical Psychology , 5 , 1–25. https://doi.org/10.1146/annurev.clinpsy.032408.153639

Swank, J. M., & Mullen, P. R. (2017). Evaluating evidence for conceptually related constructs using bivariate correlations. Measurement and Evaluation in Counseling and Development , 50 (4), 270–274. https://doi.org/10.1080/07481756.2017.1339562

Thorsen, S. V., Rugulies, R., Hjarsbech, P. U., & Bjorner, J. B. (2013). The predictive value of mental health for long-term sickness absence: The Major Depression Inventory (MDI) and the Mental Health Inventory (MHI-5) compared. BMC Medical Research Methodology , 13 (1), Article 115. https://doi.org/10.1186/1471-2288-13-115

Vijayan, P., & Joseph, M. I. (2015). Wellness and social interaction anxiety among adolescents. Indian Journal of Health and Wellbeing , 6 (6), 637–639.

Wester, K. L., Borders, L. D., Boul, S., & Horton, E. (2013). Research quality: Critique of quantitative articles in the Journal of Counseling & Development . Journal of Counseling & Development , 91 (3), 280–290. https://doi.org/10.1002/j.1556-6676.2013.00096.x

Appendix Outline and Brief Overview of a Quantitative Methods Section

  • Research design (e.g., group comparison [experimental, quasi-experimental, ex-post-facto], correlational/predictive) and conceptual framework
  • Researcher bias and reflexivity statement

Participants and Procedures

  • Recruitment procedures for data collection in enough detail for replication
  • Research ethics including but not limited to receiving institutional review board (IRB) approval
  • Sampling procedure: Researcher access to prospective participants, recruitment procedures, and data collection modality (e.g., online survey)
  • Sampling technique: Probability sampling (e.g., simple random sampling, systematic random sampling, stratified random sampling, cluster sampling) or non-probability sampling (e.g., volunteer sampling, convenience sampling, purposive sampling, quota sampling, snowball sampling, matched sampling)
  • A priori statistical power analysis
  • Sampling frame, response rate, raw sample, missing data, and the size of the final useable sample
  • Demographic breakdown for participants
  • Timeframe, setting, and location where data were collected
  • Introduction of the instrument and construct(s) of measurement (include sample test items)
  • * Note : At a minimum, internal structure validity evidence of scores should include both exploratory factor analysis (EFA) and confirmatory factor analysis (CFA).
  • *Note : Only using coefficient alpha without completing statistical assumption checking is insufficient. Compute both coefficient omega and alpha or alpha with proper assumption checking.
  • Review and citations of original psychometric studies and normative samples

Data Analysis

  • Operationalized variables and scales of measurement
  • Procedures for matching variables with appropriate statistical analyses
  • Assumption checking procedures

Note . This appendix is a brief summary and not a substitute for the narrative in the text of this article.

Michael T. Kalkbrenner , PhD, NCC, is an associate professor at New Mexico State University. Correspondence may be addressed to Michael T. Kalkbrenner, 1780 E. University Ave., Las Cruces, NM 88003, [email protected].

major sections of a research report used in counseling research

Recent Publications

  • Book Review—Cognitive Information Processing: Career Theory, Research, and Practice May 15, 2024
  • Bridging the Gap: From Awareness to Action Introduction to the Special Issue March 18, 2024
  • “A Learning Curve”: Counselors’ Experiences Working With Sex Trafficking January 17, 2024
  • Ableist Microaggressions, Disability Characteristics, and Nondominant Identities January 17, 2024

AllPsych

This type of report is likely to be discarded well before the end.  If it had contained important new knowledge this information never made it to its intended destination.  The journal would likely not publish it and had it gotten published, would have frustrated the reader to the point of confusion and disregard.  Therefore, we follow a specific writing style to avoid this type of mess.  And, while following a style may seem time consuming and frustrating in itself, it helps assure that your newfound knowledge makes its way into the world.

The American Psychological Association [APA] has developed what is the most well known and most used manual of publication style in any of the social sciences.  The most recent version was published in 2002 and marks the fifth edition.  While the text is somewhat daunting at first glance, the style does assure that your knowledge will be disseminated in an organized and understood fashion.

Most research reports follow a specific list a sections as recommended by this manual.  These sections include: Title Page, Abstract, Introduction, Methods, Results, Discussion, References, Appendices, and Author Note.  Each of these areas will be summarized below, but for any serious researcher understanding the specifics of the APA manual is imperative.

Title Page.

The title page of a research report serves two important functions.  First, it provides a quick summary of the research, including the title of the article, authors’ names, and affiliation.  Second, it provides a means for a blind evaluation.  When submitted to a professional journal, a short title is placed on the title page and carried throughout the remainder of the paper.  Since the authors’ names and affiliation are only on the title page, removing this page prior to review reduces the chance of bias by the journal reviewers.  Once the reviews are complete, the title page is once again attached and the recommendations of the reviewers can be returned to the authors.

The abstract is the second page of the research report.  Consider the abstract a short summary of the article.  It is typically between 100 and 150 words and includes a summary of the major areas of the paper.  Often included in an abstract are the problem or original theory, a one or two sentence explanation of previous research in this area, the characteristics of the present study, the results, and a brief discussion statement.  An abstract allows the reader to quickly understand what the article is about and help him or her decide if further reading will be helpful.

Introduction.

The main body of the paper has four sections, with the introduction being the first.  The purpose of the introduction is to introduce the reader to the topic and discuss the background of the issue at hand.  For instance, in our article on work experience, the introduction would likely include a statement of the problem, for example: “prior work experience may play an important role in student achievement in college.”

The introduction also includes a literature review, which typically follows the introduction of the topic.  All of the research you completed while developing your study goes here.  It is important to bring the reader up to date and lead them into why you decided to conduct this study.  You may cite research related to motivation and success after college and argue that gaining prior work experience may delay college graduation but also helps to improve the college experience and may ultimately further an individual’s career.  You may also review research that argues against your theory.  The goal of the introduction is to lead the reader into your study so that he has a solid background of the material and an understanding of your rationale.

The methods section is the second part of the body of the article.  Methods refers to the actual procedures used to perform the research.  Areas discussed will usually include subject recruitment and assignment to groups, subject attributes, and possibly pretest findings.  Any surveys or treatments will also be discussed in this section.  The main point of the methods section is to allow others to critique your research and replicate it if desired.  The methods section is often the most systematic section in that small details are typically included in order to help others critique, evaluate, and/or replicate the research process.

Most experimental studies include a statistical analysis of the results, which is the major focus of the results section.  Included here are the procedures and statistical analyses performed, the rationale for choosing specific procedures, and ultimately the results.  Charts, tables, and graphs are also often included to better explain the treatment effects or the differences and similarities between groups.  Ultimately, the end of the results section reports the acceptance or rejection of the null hypothesis.  For example, is there a difference between the grades of students with prior work experience and students without prior work experience?

Discussion.

While the first three sections of the body are specific in terms of what is included, the discussion section can be less formal.  This section allows the authors to critique the research, discuss how the results are applicable to real life or even how they don’t support the original theory.  Discussion refers to the authors opportunity to discuss in a less formal manner the results and implications of the research and is often used to suggest needs for additional research on specific areas related to the current study.

References.

Throughout the paper and especially in the introduction section, articles from other authors are cited.  The references section includes a list of all articles used in the development of the hypothesis that were cited in the literature review section.  You many also see a sections that includes recommended readings, referring to important articles related to the topic that were not cited in the actual paper.

Appendices.

Appendices are always included at the end of the paper.  Graphs, charts, and tables are also included at the end, in part due to changes that may take place when the paper is formatted for publication.  Appendices should include only material that is relevant and assists the reader in understanding the current study.  Actual raw data is rarely included in a research paper.

Author Note.

Finally, the authors are permitted to include a short note at the end of the paper.  This note is often personal and may be used to thank colleagues who assisted in the research but not to the degree of warranting co-authorship.  This section can also be used to inform the reader that the current study is part of a larger study or represents the results of a dissertation.  The author note is very short, usually no more than a few sentences.

major sections of a research report used in counseling research

11.2 Writing a Research Report in American Psychological Association (APA) Style

Learning objectives.

  • Identify the major sections of an APA-style research report and the basic contents of each section.
  • Plan and write an effective APA-style research report.

In this section, we look at how to write an APA-style empirical research report A type of journal article in which the author reports on a new empirical research study. , an article that presents the results of one or more new studies. Recall that the standard sections of an empirical research report provide a kind of outline. Here we consider each of these sections in detail, including what information it contains, how that information is formatted and organized, and tips for writing each section. At the end of this section is a sample APA-style research report that illustrates many of these principles.

Sections of a Research Report

Title page and abstract.

An APA-style research report begins with a title page The first page of an APA-style manuscript, containing the title, author names and affiliations, and author note. . The title is centered in the upper half of the page, with each important word capitalized. The title should clearly and concisely (in about 12 words or fewer) communicate the primary variables and research questions. This sometimes requires a main title followed by a subtitle that elaborates on the main title, in which case the main title and subtitle are separated by a colon. Here are some titles from recent issues of professional journals published by the American Psychological Association.

  • Sex Differences in Coping Styles and Implications for Depressed Mood
  • Effects of Aging and Divided Attention on Memory for Items and Their Contexts
  • Computer-Assisted Cognitive Behavioral Therapy for Child Anxiety: Results of a Randomized Clinical Trial
  • Virtual Driving and Risk Taking: Do Racing Games Increase Risk-Taking Cognitions, Affect, and Behavior?

Below the title are the authors’ names and, on the next line, their institutional affiliation—the university or other institution where the authors worked when they conducted the research. As we have already seen, the authors are listed in an order that reflects their contribution to the research. When multiple authors have made equal contributions to the research, they often list their names alphabetically or in a randomly determined order.

It’s Soooo Cute!

How Informal Should an Article Title Be?

In some areas of psychology, the titles of many empirical research reports are informal in a way that is perhaps best described as “cute.” They usually take the form of a play on words or a well-known expression that relates to the topic under study. Here are some examples from recent issues of the Journal of Personality and Social Psychology .

  • “Let’s Get Serious: Communicating Commitment in Romantic Relationships”
  • “Through the Looking Glass Clearly: Accuracy and Assumed Similarity in Well-Adjusted Individuals’ First Impressions”
  • “Don’t Hide Your Happiness! Positive Emotion Dissociation, Social Connectedness, and Psychological Functioning”
  • “Forbidden Fruit: Inattention to Attractive Alternatives Provokes Implicit Relationship Reactance”

Individual researchers differ quite a bit in their preference for such titles. Some use them regularly, while others never use them. What might be some of the pros and cons of using cute article titles?

For articles that are being submitted for publication, the title page also includes an author note that lists the authors’ full institutional affiliations, any acknowledgments the authors wish to make to agencies that funded the research or to colleagues who commented on it, and contact information for the authors. For student papers that are not being submitted for publication—including theses—author notes are generally not necessary.

The abstract A short summary (approximately 200 words) of a research article. In an APA-style manuscript, the abstract appears on the second page. is a summary of the study. It is the second page of the manuscript and is headed with the word Abstract . The first line is not indented. The abstract presents the research question, a summary of the method, the basic results, and the most important conclusions. Because the abstract is usually limited to about 200 words, it can be a challenge to write a good one.

Introduction

The introduction The first major section of an APA-style empirical research report. It typically includes an opening, a literature review, and a closing. begins on the third page of the manuscript. The heading at the top of this page is the full title of the manuscript, with each important word capitalized as on the title page. The introduction includes three distinct subsections, although these are typically not identified by separate headings. The opening introduces the research question and explains why it is interesting, the literature review discusses relevant previous research, and the closing restates the research question and comments on the method used to answer it.

The Opening

The opening The first paragraph or two of the introduction of an APA-style empirical report. It introduces the research question and explains why it is interesting. , which is usually a paragraph or two in length, introduces the research question and explains why it is interesting. To capture the reader’s attention, researcher Daryl Bem recommends starting with general observations about the topic under study, expressed in ordinary language (not technical jargon)—observations that are about people and their behavior (not about researchers or their research; Bem, 2003). Bem, D. J. (2003). Writing the empirical journal article. In J. M. Darley, M. P. Zanna, & H. R. Roediger III (Eds.), The compleat academic: A practical guide for the beginning social scientist (2nd ed.). Washington, DC: American Psychological Association. Concrete examples are often very useful here. According to Bem, this would be a poor way to begin a research report:

Festinger’s theory of cognitive dissonance received a great deal of attention during the latter part of the 20th century (p. 191)

The following would be much better:

The individual who holds two beliefs that are inconsistent with one another may feel uncomfortable. For example, the person who knows that he or she enjoys smoking but believes it to be unhealthy may experience discomfort arising from the inconsistency or disharmony between these two thoughts or cognitions. This feeling of discomfort was called cognitive dissonance by social psychologist Leon Festinger (1957), who suggested that individuals will be motivated to remove this dissonance in whatever way they can (p. 191).

After capturing the reader’s attention, the opening should go on to introduce the research question and explain why it is interesting. Will the answer fill a gap in the literature? Will it provide a test of an important theory? Does it have practical implications? Giving readers a clear sense of what the research is about and why they should care about it will motivate them to continue reading the literature review—and will help them make sense of it.

Breaking the Rules

Researcher Larry Jacoby reported several studies showing that a word that people see or hear repeatedly can seem more familiar even when they do not recall the repetitions—and that this tendency is especially pronounced among older adults. He opened his article with the following humorous anecdote (Jacoby, 1999).

A friend whose mother is suffering symptoms of Alzheimer’s disease (AD) tells the story of taking her mother to visit a nursing home, preliminary to her mother’s moving there. During an orientation meeting at the nursing home, the rules and regulations were explained, one of which regarded the dining room. The dining room was described as similar to a fine restaurant except that tipping was not required. The absence of tipping was a central theme in the orientation lecture, mentioned frequently to emphasize the quality of care along with the advantages of having paid in advance. At the end of the meeting, the friend’s mother was asked whether she had any questions. She replied that she only had one question: “Should I tip?” (p. 3).

Although both humor and personal anecdotes are generally discouraged in APA-style writing, this example is a highly effective way to start because it both engages the reader and provides an excellent real-world example of the topic under study.

The Literature Review

Immediately after the opening comes the literature review A written summary of previous research on a topic. It constitutes the bulk of the introduction of an APA-style empirical research report. , which describes relevant previous research on the topic and can be anywhere from several paragraphs to several pages in length. However, the literature review is not simply a list of past studies. Instead, it constitutes a kind of argument for why the research question is worth addressing. By the end of the literature review, readers should be convinced that the research question makes sense and that the present study is a logical next step in the ongoing research process.

Like any effective argument, the literature review must have some kind of structure. For example, it might begin by describing a phenomenon in a general way along with several studies that demonstrate it, then describing two or more competing theories of the phenomenon, and finally presenting a hypothesis to test one or more of the theories. Or it might describe one phenomenon, then describe another phenomenon that seems inconsistent with the first one, then propose a theory that resolves the inconsistency, and finally present a hypothesis to test that theory. In applied research, it might describe a phenomenon or theory, then describe how that phenomenon or theory applies to some important real-world situation, and finally suggest a way to test whether it does, in fact, apply to that situation.

Looking at the literature review in this way emphasizes a few things. First, it is extremely important to start with an outline of the main points that you want to make, organized in the order that you want to make them. The basic structure of your argument, then, should be apparent from the outline itself. Second, it is important to emphasize the structure of your argument in your writing. One way to do this is to begin the literature review by summarizing your argument even before you begin to make it. “In this article, I will describe two apparently contradictory phenomena, present a new theory that has the potential to resolve the apparent contradiction, and finally present a novel hypothesis to test the theory.” Another way is to open each paragraph with a sentence that summarizes the main point of the paragraph and links it to the preceding points. These opening sentences provide the “transitions” that many beginning researchers have difficulty with. Instead of beginning a paragraph by launching into a description of a previous study, such as “Williams (2004) found that…,” it is better to start by indicating something about why you are describing this particular study. Here are some simple examples:

Another example of this phenomenon comes from the work of Williams (2004).

Williams (2004) offers one explanation of this phenomenon.

An alternative perspective has been provided by Williams (2004).

We used a method based on the one used by Williams (2004).

Finally, remember that your goal is to construct an argument for why your research question is interesting and worth addressing—not necessarily why your favorite answer to it is correct. In other words, your literature review must be balanced. If you want to emphasize the generality of a phenomenon, then of course you should discuss various studies that have demonstrated it. However, if there are other studies that have failed to demonstrate it, you should discuss them too. Or if you are proposing a new theory, then of course you should discuss findings that are consistent with that theory. However, if there are other findings that are inconsistent with it, again, you should discuss them too. It is acceptable to argue that the balance of the research supports the existence of a phenomenon or is consistent with a theory (and that is usually the best that researchers in psychology can hope for), but it is not acceptable to ignore contradictory evidence. Besides, a large part of what makes a research question interesting is uncertainty about its answer.

The Closing

The closing The last paragraph or two of the introduction of an APA-style empirical research report. It restates the research question and comments on the method. of the introduction—typically the final paragraph or two—usually includes two important elements. The first is a clear statement of the main research question or hypothesis. This statement tends to be more formal and precise than in the opening and is often expressed in terms of operational definitions of the key variables. The second is a brief overview of the method and some comment on its appropriateness. Here, for example, is how Darley and Latané (1968) Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 4 , 377–383. concluded the introduction to their classic article on the bystander effect:

These considerations lead to the hypothesis that the more bystanders to an emergency, the less likely, or the more slowly, any one bystander will intervene to provide aid. To test this proposition it would be necessary to create a situation in which a realistic “emergency” could plausibly occur. Each subject should also be blocked from communicating with others to prevent his getting information about their behavior during the emergency. Finally, the experimental situation should allow for the assessment of the speed and frequency of the subjects’ reaction to the emergency. The experiment reported below attempted to fulfill these conditions (p. 378).

Thus the introduction leads smoothly into the next major section of the article—the method section.

The method section The section of an APA-style empirical research report in which the method is described in detail. At minimum, it includes a participants subsection and a design and procedure subsections. is where you describe how you conducted your study. An important principle for writing a method section is that it should be clear and detailed enough that other researchers could replicate the study by following your “recipe.” This means that it must describe all the important elements of the study—basic demographic characteristics of the participants, how they were recruited, whether they were randomly assigned, how the variables were manipulated or measured, how counterbalancing was accomplished, and so on. At the same time, it should avoid irrelevant details such as the fact that the study was conducted in Classroom 37B of the Industrial Technology Building or that the questionnaire was double-sided and completed using pencils.

The method section begins immediately after the introduction ends with the heading “Method” (not “Methods”) centered on the page. Immediately after this is the subheading “Participants,” left justified and in italics. The participants subsection indicates how many participants there were, the number of women and men, some indication of their age, other demographics that may be relevant to the study, and how they were recruited, including any incentives given for participation.

Figure 11.1 Three Ways of Organizing an APA-Style Method

major sections of a research report used in counseling research

After the participants section, the structure can vary a bit. Figure 11.1 "Three Ways of Organizing an APA-Style Method" shows three common approaches. In the first, the participants section is followed by a design and procedure subsection, which describes the rest of the method. This works well for methods that are relatively simple and can be described adequately in a few paragraphs. In the second approach, the participants section is followed by separate design and procedure subsections. This works well when both the design and the procedure are relatively complicated and each requires multiple paragraphs.

What is the difference between design and procedure? The design of a study is its overall structure. What were the independent and dependent variables? Was the independent variable manipulated, and if so, was it manipulated between or within subjects? How were the variables operationally defined? The procedure is how the study was carried out. It often works well to describe the procedure in terms of what the participants did rather than what the researchers did. For example, the participants gave their informed consent, read a set of instructions, completed a block of four practice trials, completed a block of 20 test trials, completed two questionnaires, and were debriefed and excused.

In the third basic way to organize a method section, the participants subsection is followed by a materials subsection before the design and procedure subsections. This works well when there are complicated materials to describe. This might mean multiple questionnaires, written vignettes that participants read and respond to, perceptual stimuli, and so on. The heading of this subsection can be modified to reflect its content. Instead of “Materials,” it can be “Questionnaires,” “Stimuli,” and so on.

The results section The section of an APA-style empirical research report in which the results are described in detail. is where you present the main results of the study, including the results of the statistical analyses. Although it does not include the raw data—individual participants’ responses or scores—researchers should save their raw data and make them available to other researchers who request them. Some journals now make the raw data available online.

Although there are no standard subsections, it is still important for the results section to be logically organized. Typically it begins with certain preliminary issues. One is whether any participants or responses were excluded from the analyses and why. The rationale for excluding data should be described clearly so that other researchers can decide whether it is appropriate. A second preliminary issue is how multiple responses were combined to produce the primary variables in the analyses. For example, if participants rated the attractiveness of 20 stimulus people, you might have to explain that you began by computing the mean attractiveness rating for each participant. Or if they recalled as many items as they could from study list of 20 words, did you count the number correctly recalled, compute the percentage correctly recalled, or perhaps compute the number correct minus the number incorrect? A third preliminary issue is the reliability of the measures. This is where you would present test-retest correlations, Cronbach’s α, or other statistics to show that the measures are consistent across time and across items. A final preliminary issue is whether the manipulation was successful. This is where you would report the results of any manipulation checks.

The results section should then tackle the primary research questions, one at a time. Again, there should be a clear organization. One approach would be to answer the most general questions and then proceed to answer more specific ones. Another would be to answer the main question first and then to answer secondary ones. Regardless, Bem (2003) Bem, D. J. (2003). Writing the empirical journal article. In J. M. Darley, M. P. Zanna, & H. R. Roediger III (Eds.), The compleat academic: A practical guide for the beginning social scientist (2nd ed.). Washington, DC: American Psychological Association. suggests the following basic structure for discussing each new result:

  • Remind the reader of the research question.
  • Give the answer to the research question in words.
  • Present the relevant statistics.
  • Qualify the answer if necessary.
  • Summarize the result.

Notice that only Step 3 necessarily involves numbers. The rest of the steps involve presenting the research question and the answer to it in words. In fact, the basic results should be clear even to a reader who skips over the numbers.

The discussion The final major section of an APA-style empirical research report. It typically includes a summary of the research, a discussion of theoretical and practical implications of the study, limitations of the study, and suggestions for future research. is the last major section of the research report. Discussions usually consist of some combination of the following elements:

  • Summary of the research
  • Theoretical implications
  • Practical implications
  • Limitations
  • Suggestions for future research

The discussion typically begins with a summary of the study that provides a clear answer to the research question. In a short report with a single study, this might require no more than a sentence. In a longer report with multiple studies, it might require a paragraph or even two. The summary is often followed by a discussion of the theoretical implications of the research. Do the results provide support for any existing theories? If not, how can they be explained? Although you do not have to provide a definitive explanation or detailed theory for your results, you at least need to outline one or more possible explanations. In applied research—and often in basic research—there is also some discussion of the practical implications of the research. How can the results be used, and by whom, to accomplish some real-world goal?

The theoretical and practical implications are often followed by a discussion of the study’s limitations. Perhaps there are problems with its internal or external validity. Perhaps the manipulation was not very effective or the measures not very reliable. Perhaps there is some evidence that participants did not fully understand their task or that they were suspicious of the intent of the researchers. Now is the time to discuss these issues and how they might have affected the results. But do not overdo it. All studies have limitations, and most readers will understand that a different sample or different measures might have produced different results. Unless there is good reason to think they would have, however, there is no reason to mention these routine issues. Instead, pick two or three limitations that seem like they could have influenced the results, explain how they could have influenced the results, and suggest ways to deal with them.

Most discussions end with some suggestions for future research. If the study did not satisfactorily answer the original research question, what will it take to do so? What new research questions has the study raised? This part of the discussion, however, is not just a list of new questions. It is a discussion of two or three of the most important unresolved issues. This means identifying and clarifying each question, suggesting some alternative answers, and even suggesting ways they could be studied.

Finally, some researchers are quite good at ending their articles with a sweeping or thought-provoking conclusion. Darley and Latané (1968), Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility. Journal of Personality and Social Psychology, 4 , 377–383. for example, ended their article on the bystander effect by discussing the idea that whether people help others may depend more on the situation than on their personalities. Their final sentence is, “If people understand the situational forces that can make them hesitate to intervene, they may better overcome them” (p. 383). However, this kind of ending can be difficult to pull off. It can sound overreaching or just banal and end up detracting from the overall impact of the article. It is often better simply to end when you have made your final point (although you should avoid ending on a limitation).

The references section begins on a new page with the heading “References” centered at the top of the page. All references cited in the text are then listed in the format presented earlier. They are listed alphabetically by the last name of the first author. If two sources have the same first author, they are listed alphabetically by the last name of the second author. If all the authors are the same, then they are listed chronologically by the year of publication. Everything in the reference list is double-spaced both within and between references.

Appendixes, Tables, and Figures

Appendixes, tables, and figures come after the references. An appendix An optional section at the end of an APA-style manuscript used to present important supplemental material. is appropriate for supplemental material that would interrupt the flow of the research report if it were presented within any of the major sections. An appendix could be used to present lists of stimulus words, questionnaire items, detailed descriptions of special equipment or unusual statistical analyses, or references to the studies that are included in a meta-analysis. Each appendix begins on a new page. If there is only one, the heading is “Appendix,” centered at the top of the page. If there is more than one, the headings are “Appendix A,” “Appendix B,” and so on, and they appear in the order they were first mentioned in the text of the report.

After any appendixes come tables and then figures. Tables and figures are both used to present results. Figures can also be used to illustrate theories (e.g., in the form of a flowchart), display stimuli, outline procedures, and present many other kinds of information. Each table and figure appears on its own page. Tables are numbered in the order that they are first mentioned in the text (“Table 1,” “Table 2,” and so on). Figures are numbered the same way (“Figure 1,” “Figure 2,” and so on). A brief explanatory title, with the important words capitalized, appears above each table. Each figure is given a brief explanatory caption, where (aside from proper nouns or names) only the first word of each sentence is capitalized. More details on preparing APA-style tables and figures are presented later in the book.

Sample APA-Style Research Report

Figure 11.2 "Title Page and Abstract" , Figure 11.3 "Introduction and Method" , Figure 11.4 "Results and Discussion" , and Figure 11.5 "References and Figure" show some sample pages from an APA-style empirical research report originally written by undergraduate student Tomoe Suyama at California State University, Fresno. The main purpose of these figures is to illustrate the basic organization and formatting of an APA-style empirical research report, although many high-level and low-level style conventions can be seen here too.

Figure 11.2 Title Page and Abstract

major sections of a research report used in counseling research

This student paper does not include the author note on the title page. The abstract appears on its own page.

Figure 11.3 Introduction and Method

major sections of a research report used in counseling research

Note that the introduction is headed with the full title, and the method section begins immediately after the introduction ends.

Figure 11.4 Results and Discussion

major sections of a research report used in counseling research

The discussion begins immediately after the results section ends.

Figure 11.5 References and Figure

major sections of a research report used in counseling research

If there were appendixes or tables, they would come before the figure.

Key Takeaways

  • An APA-style empirical research report consists of several standard sections. The main ones are the abstract, introduction, method, results, discussion, and references.
  • The introduction consists of an opening that presents the research question, a literature review that describes previous research on the topic, and a closing that restates the research question and comments on the method. The literature review constitutes an argument for why the current study is worth doing.
  • The method section describes the method in enough detail that another researcher could replicate the study. At a minimum, it consists of a participants subsection and a design and procedure subsection.
  • The results section describes the results in an organized fashion. Each primary result is presented in terms of statistical results but also explained in words.
  • The discussion typically summarizes the study, discusses theoretical and practical implications and limitations of the study, and offers suggestions for further research.
  • Practice: Look through an issue of a general interest professional journal (e.g., Psychological Science ). Read the opening of the first five articles and rate the effectiveness of each one from 1 ( very ineffective ) to 5 ( very effective ). Write a sentence or two explaining each rating.
  • Practice: Find a recent article in a professional journal and identify where the opening, literature review, and closing of the introduction begin and end.
  • Practice: Find a recent article in a professional journal and highlight in a different color each of the following elements in the discussion: summary, theoretical implications, practical implications, limitations, and suggestions for future research.

U.S. flag

An official website of the United States government

The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings

Preview improvements coming to the PMC website in October 2024. Learn More or Try it out now .

  • Advanced Search
  • Journal List
  • Wiley Open Access Collection

Logo of blackwellopen

How should we evaluate research on counselling and the treatment of depression? A case study on how the National Institute for Health and Care Excellence's draft 2018 guideline for depression considered what counts as best evidence

Michael barkham.

1 Centre for Psychological Services Research, University of Sheffield, Sheffield, UK

Naomi P. Moller

2 Open University, Milton Keynes, UK

3 British Association for Counselling and Psychotherapy, Lutterworth, UK

Joanne Pybis

Health guidelines are developed to improve patient care by ensuring the most recent and ‘best available evidence’ is used to guide treatment recommendations. The National Institute for Health and Care Excellence's ( NICE 's ) guideline development methodology acknowledges that evidence needed to answer one question (treatment efficacy) may be different from evidence needed to answer another (cost‐effectiveness, treatment acceptability to patients). This review uses counselling in the treatment of depression as a case study, and interrogates the constructs of ‘best’ evidence and ‘best’ guideline methodologies.

The review comprises six sections: (i) implications of diverse definitions of counselling in research; (ii) research findings from meta‐analyses and randomised controlled trials ( RCT s); (iii) limitations to trials‐based evidence; (iv) findings from large routine outcome datasets; (v) the inclusion of qualitative research that emphasises service‐user voices; and (vi) conclusions and recommendations.

Research from meta‐analyses and RCT s contained in the draft 2018 NICE Guideline is limited but positive in relation to the effectiveness of counselling in the treatment for depression. The weight of evidence suggests little, if any, advantage to cognitive behaviour therapy ( CBT ) over counselling once risk of bias and researcher allegiance are taken into account. A growing body of evidence from large NHS data sets also evidences that, for depression, counselling is as effective as CBT and cost‐effective when delivered in NHS settings.

Specifications in NICE 's updated guideline procedures allow for data other than RCT s and meta‐analyses to be included. Accordingly, there is a need to include large standardised collected data sets from routine practice as well as the voice of patients via high‐quality qualitative research.

Introduction

English health guidelines are created and regularly updated with the aim of improving patient care by ensuring that the most recent and ‘best available evidence’ is used to guide treatment (National Institute for Health and Care Excellence Guidance, 2017a ). As stated on its website: ‘National Institute for Health and Care Excellence (NICE) guidelines are evidence‐based recommendations for health and care in England’ (NICE Guidelines, 2017b ). Although some NICE guidance is also adopted by Wales, Scotland and Northern Ireland, a separate UK‐based body equivalent to NICE exists; namely the Scottish Intercollegiate Guidelines Network ( 2017 ). Mental health treatment guidelines are also developed by other international organisations, such as the World Health Organization ( 2017 ) and professional/scientific bodies such as the American Psychiatric Association ( 2017 ), and by European and other countries (Vlayen, Aertgeerts, Hannes, Sermeus & Ramaeker, 2005 ).

This article focuses on: (i) NICE guidelines because of the organisation's impact in shaping mental health care, not only in the UK but internationally (Hernandez‐Villafuerte, Garau & Devlin, 2014 ); (ii) depression, as NICE is currently updating their depression guideline (NICE, 2017d ), and; (iii) counselling as the intervention, as different guidelines have drawn different conclusions (Moriana, Gálvez‐Lara & Corpas, 2017 ). Specially, we focus on the selection and use of evidence. In terms of overall methodology, in their procedural manual NICE state: ‘Guidance is based on the best available evidence of what works, and what it costs’ (NICE, 2014 /2017, p. 14). Although the procedural manual states that randomised controlled trials (RCTs) are often the most appropriate design, it also states: ‘However, other study designs (including observational, experimental or qualitative) may also be used to assess effectiveness, or aspects of effectiveness’ (NICE, 2014 /2017, p. 15). Accordingly, we assess the extent to which NICE has adhered to its own methods manual in drawing up the draft guideline. While NICE's depression guideline is used as the example, arguments in this article are intended to have broad relevance for any organisation developing guidelines across mental health treatments.

The new revision of the NICE Guideline for Depression in Adults: Recognition and Management is scheduled to be published in January 2018 and available as a consultation document at the time of writing (NICE, 2017d ). The previous 2009 NICE Guideline stated: ‘ For people with depression who decline an antidepressant, CBT [cognitive behaviour therapy], IPT [interpersonal psychotherapy], behavioural activation and behavioural couples therapy, consider: counselling for people with persistent subthreshold depressive symptoms or mild to moderate depression’ (NICE, 2009 , p. 23). Counselling was included in the 2009 Guidelines but only for those who declined other recommended treatments; the guidelines were accordingly critiqued on the basis of limiting patient choice (British Association for Counselling and Psychotherapy, 2009 ). In addition, practitioners offering counselling to adults with depression were recommended to: ‘Discuss with the person the uncertainty of the effectiveness of counselling and psychodynamic psychotherapy in treating depression’ (p. 24). This recommendation was criticised as research suggests that both patient hope and a good therapeutic relationship are important in creating good patient outcomes (Barber, Connoll, Crits‐Christoph, Gladis & Siqueland, 2000 ). Accordingly, this recommendation would likely have negatively impacted on early engagement in counselling as well as on outcomes for counselling, if practitioners had implemented this guidance.

The consultation document for the 2018 proposed guideline states: ‘Consider counselling if a person with less severe depression would like help for significant psychosocial, relationship or employment problems and has had group CBT, exercise or facilitated self‐help, antidepressant medication, individual CBT or BA for a previous episode of depression, but this did not work well for them, or does not want group CBT, exercise or facilitated self‐help, antidepressant medication, individual CBT or BA’ (NICE, 2017d ; Recommendation 64, p. 252). It also recommends that the counselling ‘is based on a model developed specifically for depression, consists of up to 16 individual sessions each lasting up to an hour, and takes place over 12–16 weeks, including follow‐up’ (NICE, 2017d ; Recommendation 65, p. 252). Importantly, the ‘uncertainty’ directive has been removed. Hence, the proposed guideline is arguably an improvement on before, as it moves towards a principle of matching counselling with specific issues (i.e., psychosocial, relationship and employment) together with a crucial note about the specificity of the counselling model to be adopted.

Historically, the NICE Guideline for Depression has been highly influential in shaping healthcare provision for those experiencing depression. As described by Clark ( 2011 ), the NICE recommendations for depression from 2004 onwards contributed to the development and roll‐out of the Improving Access to Psychological Therapies (IAPT) programme, which in England now provides the bulk of treatment for depression in primary care (Gyani, Pumphrey, Parker, Shafran & Rose, 2012 ). One example of the impact of the revised 2009 Guideline appears to have been the cutting of counselling jobs in the NHS, with IAPT workforce census data suggesting a 35% decline in the number of qualified counsellors working as high‐intensity therapists between 2012 and 2015, in a period where the total IAPT workforce grew by almost 18% (IAPT Programme, 2013 ; NHS England & Health Education England, 2016 ). Workforce shifts that apparently follow revised NICE guidelines (e.g., counselling not being recommended as a first‐line treatment for depression) underline the importance of scrutinising guideline recommendations since a core assumption is that using ‘best’ evidence and guideline methodologies will lead to NICE recommendations that improve patient care. An implicit question in the remainder of this article is whether the positioning of counselling as a second‐tier treatment for mild‐to‐moderate depression (only) through NICE recommendations is likely to lead to improved outcomes for clients with depression.

Defining counselling as a psychological intervention

The NICE depression guidelines (2009, 2017d) have included recommendations for ‘counselling’, but the definition of ‘counselling’ is unclear. The British Association for Counselling and Psychotherapy (BACP) adopts a generic definition for both counselling and psychotherapy as umbrella terms for ‘a range of talking therapies’ (BACP, 2017 ). Equivalent professional organisations, such as the American Counseling Association (ACA) and the European Association for Counselling (EAC) define counselling in terms of a professional relationship that seeks to aid patients (ACA, 2017 ; EAC, 2017 ). What these definitions have in common is that they are nonspecific: counselling is a broad family of interventions that includes subtypes of counselling such as person‐centred therapy (PCT) or cognitive behaviour therapy (CBT). However – and problematically – the 2009 NICE Guideline for Depression directly compared ‘counselling’ with subtypes of counselling.

The 2009 NICE Guideline for Depression did not specify a definition of counselling; however, various definitions for counselling are provided in the empirical literature. For example, King, Marston and Bower ( 2014 ) reported on a reanalysis of the Health Technology Assessment‐funded trial (Ward et al., 2000 ), comprising a head‐to‐head RCT comparing ‘nondirective counselling’ and cognitive behaviour therapy (CBT), and defined the counselling used in their study as ‘a nondirective, inter‐personal approach’ (p. 1836) derived from the work of Carl Rogers. In this context, the therapy ‘counselling’ has clear theoretical and empirical roots and is a synonym for a type of talking therapy.

In contrast, a 2012 meta‐analytic study by Cuijpers et al. examined the efficacy of ‘nondirective supportive therapy’ (NDST) – which they stated is ‘commonly described in the literature as counselling’ (p. 281). They defined NDST as an approach that utilises the shared attributes (or common factors) of all talking therapies ‘without (utilizing) specific psychological techniques…’ (p. 281), which characterise particular types of therapy. Cuijpers et al. ( 2012 ) point out that many RCTs that include counselling do so as a nonspecific control group and suggest researchers appear to treat counselling as not being a bona fide active treatment. In this context ‘counselling’ is neither a category nor an example of a category, but a shared nonspecific attribute of psychological therapies in general.

The outcome of the 2009 NICE guidance recommendations spurred the development of a model of counselling for the treatment of depression designed to be effective as a high‐intensity intervention within IAPT that took the form of a person‐centred experiential therapy named Counselling for Depression (CfD; Sanders & Hill, 2014 ). The aim was to develop a bona fide psychological therapy using an established methodology that involved defining a range of basic, generic, specific and meta‐competencies for this model of therapy (Roth, Hill & Pilling, 2009 ). The CfD (person‐centred experiential) model, which is now available to IAPT patients (NHS England, 2017 ), also meets the recommendations in the 2018 draft guidelines for a model of counselling developed for depression.

The reviewed definitions suggest there are potentially two distinct forms of counselling: a nonspecific counselling that utilises generic and basic competences common to all forms of therapy, and a model‐specific form of counselling, such as person‐centred experiential counselling, which includes CfD. This distinction between generic counselling and a bona fide/active intervention potentially implies critical differences in the level of training and competencies of a practitioner (comparable to the differences between low and high‐intensity treatment in IAPT) and in the specificity of the model of intervention used. The 2018 proposed guideline does not utilise such distinctions, however, the only recommendation in the draft guidelines is that the counselling intervention should be one developed specifically for depression (yet CfD is not named). This suggests that guideline developers need to make a concerted effort to use definitions that specify the theoretical approach and potentially the level of professional training or competencies.

The current evidence for the clinical efficacy and effectiveness of counselling in the treatment of depression

NICE guidelines for depression draw on two main classes of data to arrive at clinical recommendations, namely meta‐analyses and RCTs. NICE's methodological procedures state: ‘NICE prefers data from head‐to‐head RCTs to compare the effectiveness of interventions’ (NICE, 2014 /2017, p. 103). Further, the procedures require the detailing of the methods and results of individual trials. If direct evidence from treatment comparisons is not available, then indirect comparisons can be made using network meta‐analysis (see Mills, Thorlund & Ioannidis, 2013 ). This procedure, which combines direct and indirect treatment comparisons, focuses on classes of interventions (i.e., broader headings of approaches rather than specific therapy brands) to arrive at recommendations when comparing multiple interventions. The interventions are judged against an appropriate comparator, that is, a common standard. The draft 2018 Guideline uses a pill placebo condition as the appropriate comparator. The Guideline also considers the cost‐effectiveness of interventions. In this section, we provide an overview of the current status of evidence regarding counselling as derived from meta‐analyses and RCTs.

Meta‐analyses of counselling in the treatment of depression

In terms of meta‐analyses, the aim is to combine data from multiple studies and to statistically synthesise the results to create conclusions that are more robust. There are three meta‐analyses of direct relevance.

First, Cape, Whittington, Buszewicz, Wallace and Underwood ( 2010 ) carried out a meta‐analysis and meta‐regression of 34 studies focusing on brief psychological interventions for anxiety and depression, involving 3962 patients. Most interventions were brief cognitive behaviour therapy (CBT; n  =   13), counselling ( n  =   8) or problem solving therapy (PST; n  =   12). Results showed effectiveness for all three types of therapy: studies of CBT for depression ( d: −.33, 95% CI: −.60 to −.06) and studies of CBT for mixed anxiety and depression ( d : −.26, 95% CI: −.44 to −.08); counselling in the treatment of depression alone as well as mixed anxiety and depression ( d : −.32, 95% CI: −.52 to −.11); and PST for depression and mixed anxiety and depression ( d : −.21, 95% CI: −.37 to −.05). Controlling for diagnosis, meta‐regression found no difference between CBT, counselling and PST. The authors concluded that brief CBT, counselling and PST are all effective treatments in primary care, but that effect sizes are low compared to longer length treatments. Nonetheless, it should be pointed out that for the analysis of the four studies of counselling for the treatment of depression only, the results were not statistically significant. However, four studies are not sufficient to yield reliable results.

Second, Cuijpers et al. ( 2012 ) found that studies in which NDST was compared with CBT resulted in a small and nonsignificant difference between NDST and CBT. The authors commented that NDST has been treated as a proxy for counselling, although it specifically excludes active elements that may be present in bona fide counselling interventions. However, they found that the studies with researcher allegiance in favour of the alternative psychotherapy resulted in a considerably larger effect size than studies without researcher allegiance. Moreover, in studies without an indication of researcher allegiance, the difference between NDST and other therapies was virtually zero. The authors argued that such results suggested that NDST is effective and deserved more respect from the research community.

Third, the most recent relevant study by Barth et al. ( 2013 ) adopted a network meta‐analysis – the same method used by the NICE Guideline Development Group – using 198 trials comparing seven forms of psychotherapeutic interventions, one of which was ‘supportive counselling’. The analysis found significant effects for supportive counselling compared against waitlist and that the evidence base for supportive counselling was broad. However, when that analysis focused only on the network of large trials, for four of the interventions, including supportive counselling, significant effects were no longer found. Barth et al. ( 2013 ) themselves invoked the results of the Cuijpers et al. ( 2012 ) meta‐analysis that found no difference between NDST and other treatments. They stated it was ‘unjustified’ to dismiss supportive counselling as a suboptimal treatment because, although the evidence for this intervention was less strong, the size of the differences between the interventions studied was small. They concluded that different psychotherapeutic interventions for depression have comparable, moderate‐to‐large effects.

In summary, when studies with a low researcher allegiance against counselling together with evidence from bona fide counselling interventions are considered, the meta‐analytic studies comparing counselling with CBT for depression suggest either broad equivalence of patient outcomes or, where differences do exist, that they are small.

RCTs of counselling in the treatment of depression

As a tradition, counselling in the UK is often associated with Humanistic/Experiential therapies, and there are a few RCTs which report evidence for the efficacy for these therapies with depressed patients (Goldman, Greenberg & Angus, 2006 ), including one that compared process‐experiential therapy (now referred to as emotion‐focused therapy) with CBT and found comparable outcomes (Watson, Gordon, Stermac, Kalogerakos & Steckley, 2003 ). However, only one recent report directly compared counselling (defined as nondirective person‐centred counselling) to CBT in the treatment of depression. The original study reported comparisons between nondirective counselling and CBT for mixed anxiety and depression and found no significant difference in outcomes for the two therapies (Ward et al., 2000 ). A subsequent reanalysis of the subsample of patients meeting a diagnosis of depression only, found similar results with both therapies being equally effective and both being superior to usual General Practice care at 4 months but not at 12 months (King et al., 2014 ).

The findings from this study are important because of the lack of RCT research that might provide direct head‐to‐head trial evidence for the efficacy of counselling. The 2009 NICE Guideline for Depression development process identified six relevant studies for consideration. One was excluded due to the mixed diagnosis (Ward et al., 2000 ) although, as stated, a subanalysis focusing on patients reporting depression only was considered (and subsequently published as King et al., 2014 ). Data from five other trials were also used (Bedi et al., 2000 ; Goldman et al., 2006 ; Greenberg & Watson, 1998 ; Simpson, Corney, Fitzgerald & Beecham, 2000 ; Watson et al., 2003 ). However, they were all either low powered in terms of patient numbers, had patient samples drawn from the mild‐to‐moderate range of depression only with some including subthreshold patients, or compared outcomes for similar (Humanistic/Experiential) therapies. The 2009 guideline recommendation was that counselling should not be considered as a first‐line intervention, as it had more limited evidence, and should only be considered for patients experiencing subthreshold, mild or moderate depression who declined the other treatments available. As stated, the guideline also added the qualification about the uncertainty of the evidence for counselling, and suggested patients should be advised on this matter.

In summary, while there is minimal recent RCT evidence comparing counselling as a bona fide intervention with CBT, the evidence that does exist supports the general efficacy of counselling. However, apart from the Ward/King reports, RCT studies are generally small‐scale and lack a standard comparator such as CBT. The lack of new data may explain why the recommendations for counselling in the 2009 published and 2018 draft guidelines are broadly similar. However, unlike the 2009 Guideline, the draft 2018 Guideline is based on network meta‐analyses. As some commentators have noted: ‘Nonetheless, a network meta‐analysis is not a substitute for a well conducted randomized controlled trial’ (Kanters et al., 2016 , p. 783). More immediately, perhaps, there needs to be a debate as to the appropriateness of using pill placebo as the appropriate comparator in relation to decision‐making. To use a nonclinically viable intervention as the appropriate comparator – something a patient experiencing depression would never be offered – does not appear to be the most useful benchmark for informing decision‐making regarding differing interventions (see Dias, 2013 ).

Yet, beyond meta‐analyses and RCTs, other potentially valuable sources of evidence exist that are defined by NICE as within the scope of evidence that could be considered but, unfortunately, have not been in the 2018 draft recommendations. In the next section, we argue that there has been an overreliance on the RCT design, before then presenting a case for including relevant non‐RCT data.

The limitations of currently considered evidence in guideline development

An overreliance on rcts.

Within the counselling and psychotherapy outcomes literature, there has been a long‐standing debate regarding what counts as evidence (Kazdin, 2008 ). Evidence from RCTs has traditionally been favoured due to specific features that control for systematic biases, leading them to be judged as providing the most stringent form of evidence. In short, randomisation protects against any systematic biases in the assignment of patients to treatments. The component of randomisation is probably the hallmark most often cited as underpinning the superiority of trials data in the field of the psychological therapies. However, the other central element of RCTs – participants being double‐blinded – can only be utilised in drug trials where the content of the drug can be hidden to patients and to the professional providing the medication. Hence, while trial designs in the psychological therapies are not the strongest form that the RCT design allows, it has long been held as the design that yields the most reliable and valid findings (Wessley, 2007 ).

While the strengths of RCT designs are well accepted, no research method is immune from criticism and one of the abiding criticisms of RCTs concerns their lack of generalisability (Kennedy‐Martin, Curtis, Faries, Robinson & Johnston, 2015 ). While statistical work is taking place to develop procedures in an attempt to address this issue (Stuart, Bradshaw & Leaf, 2015 ), by design, RCTs involve the careful screening of patients to ensure that all trial participants fully meet diagnostic criteria for the presenting condition under study. Typically, this involves screening out patients presenting with any comorbidities, something that leads to the criticism that RCT participants are atypical of patients in actual practice, since, for example, depression is highly comorbid with anxiety (Kaufman & Charney, 2000 ). In addition, by their very nature RCTs draw on a specific subgroup of the population of patients, namely those who are willing to be trial participants. A major reason patients decline to be participants in trials is that they do not wish to be research subjects (Barnes et al., 2013 ). In addition, there has been a long‐term concern about the lack or underrepresentation of minorities in research studies (Hussain‐Gambles, Atkin & Leese, 2004 ; Stronks, Wieringa & Hardon, 2013 ). Hence, while a well‐conducted RCT will state that the intention to offer treatment X (from an intent‐to‐treat analysis) or receipt of treatment X (from a per‐protocol analysis) is better than treatment Y in a specific setting, it will not address the question a commissioner asks, namely: will it work for us? (Cartwright & Munro, 2010 ).

Jadad and Enkin ( 2007 ), the authors of the standard guide to designing RCTs, state: ‘… randomized trials are not divine revelations, they are human constructs, and like all human constructs, are fallible. They are valuable, useful tools that should be used wisely and well’ (p. 44). Indeed, Jadad and Enkin list over 50 specific biases that are possible when carrying out a trial and go on to provide a strong warning that unless their weaknesses are acknowledged, there is a ‘risk of fundamentalism and intolerance of criticism, or alternative views (that) can discourage innovation’ (p. 45).

Despite such criticisms, trials have become the dominant source for informing clinical guidelines. Yet, as the previous Chairman of NICE, Sir Mike Rawlins, stated: ‘Awarding such prominence to the results of RCTs, however, is unreasonable’ (2008, p. 2159). Rawlins further argued in relation to the hierarchy of evidence used by NICE that privileges trials data, that ‘Hierarchies of evidence should be replaced by accepting a diversity of approaches.’ (p. 2159). And indeed, the word hierarchy does not appear at all in the NICE methods manual (NICE, 2014 /2017). Rawlins’ argument was not to abandon RCTs in favour of observational studies; rather what he sought was for researchers to improve their methods and for decision makers to avoid adopting entrenched positions about the nature of evidence. However, given the dominance of RCT evidence and the absence of relevant and available observational data in the draft 2018 guidelines, it would appear that Rawlins’ call has not been heeded.

Considering statistical power and nonindependence of patients in RCTs

A separate but major issue concerning trials, as identified earlier, is the extent to which they are appropriately powered to detect any hypothesised differences. To have confidence in the findings from RCTs that test the superiority, noninferiority or equivalence of one treatment condition against another, studies must have the required statistical power (sufficient numbers of patients in the trial) to detect such a difference if one exists. The standard criterion that defines sufficient power for a superiority trial requires that a study will have at least an 80% chance of detecting a difference at p  < .05 if one exists.

Cuijpers ( 2016 ) reviewed the statistical power needed both for individual RCTs and for meta‐analytic studies focused on adult depression. His analysis should be considered alongside the three classes of between‐group effect sizes traditionally postulated by Cohen ( 1992 ): small ( d  = .2), medium ( d  = .5), and large ( d  = .8). He identified that a sample size of 90 trial patients (i.e., 45 patients per arm) was required to find a differential effect size of d  = .6 (i.e., a medium effect size). Having established in an earlier article that an effect size of d  = .24 could be considered as a ‘minimally important difference’ from the patient's perspective (Cuijpers, Turner, Koole, van Dijke & Smit, 2014 ), he calculated that for a trial to determine such a minimally important difference between two active treatments for depression would require 548 patients – that is, 274 patients in each arm of the trial.

Yet in Cuijpers’ ( 2016 ) analysis, the mean number of patients included in RCT comparisons between CBT and another psychotherapy for depression was 52, with a range from 13 to 178. The effect size that can be detected with the average trial comprising 52 patients was d  = .79, an effect size similar to that comparing CBT with untreated control groups (i.e., d  = .71). For nondirective counselling, the analysis found that the largest study had sufficient power to detect a differential effect size of d  = .34. The largest comparative trial found in three comprehensive meta‐analyses of major types of psychotherapy comprised 221 patients. This is about 40% of the 548 patients needed to detect a clinically relevant effect size of d  = .24. Taking these statistics together, it is uncertain whether there can be sufficient confidence in the results of RCTs for adult depression conducted to date that compare CBT with another therapy because they likely lack sufficient statistical power (Cuijpers, 2016 ).

Meta‐analyses are, like single RCTs, subject to considerations of power. For meta‐analyses of RCTs focused on treatment of depression, Cuijpers ( 2016 ) suggests that for CBT (based on a mean of 52 patients per study), 18 trials would be needed to detect a significant effect of d  = .24 with a power of .8, or 24 trials with a power of .9. According to his analysis, the actual number of trials was 46, which was sufficient to detect a clinically relevant effect. However, he concluded that only 13 of these trials had a low risk of bias. This is important, as ‘bias’ is an agreed index of factors that reduce confidence in the results of RCTs. For example, a potential source of bias is the degree to which assessors or data analysts have prior knowledge of the specific intervention any individual study participant received. Hence, meta‐analyses are also vulnerable to low power once only studies with a low risk of bias are considered.

For nondirective supportive counselling (based on a slightly higher mean of 59 patients per trial), 16 trials would be needed to detect an effect of d  = .24 with a power of .8 or 21 trials with a power of .9. The 32 trials comparing counselling with other therapies therefore had sufficient power to detect a clinically relevant effect. However, only 14 trials had low risk of bias, yielding the same conclusion that there were not enough trials to detect such an effect.

In addition to issues of bias and low power, the statistical analysis applied to the data assumes that the data – that is, patients – are independent of each other. However, patients are not independent of each other as they are nested within therapists. Patient outcomes for one therapist will be correlated with the other patients from the same therapist and differ from the outcomes with other therapists. It is likely that there will be variability between the outcomes of therapists, a phenomenon known as therapist effects (Barkham, Lutz, Lambert & Saxon, 2017 ). Failure to take account of therapist effects results in this effect being attributed to the treatment effect and, thereby, inflating it (or deflating it if the therapists are not effective).

In summary, despite numerous comparative trials being conducted, from this data it is unclear whether one therapy for adult depression is more effective than another to an extent that is clinically relevant . Trials are underpowered and require much greater statistical power and less bias to determine differential effectiveness. In the light of this position, we now consider arguments for including very large data sets from routine practice.

Incorporating very large routine practice‐based data sets in guideline development for depression

As stated earlier, the NICE methods manual states that while RCTs may often be the most appropriate design, ‘other study designs (including observational, experimental or qualitative) may also be used to assess effectiveness, or aspects of effectiveness’ (NICE, 2014 /2017, p. 15). And in terms of the development work in network meta‐analysis, the aim is to move towards ‘the inclusion of studies of various designs, including observational studies, within one analysis’ (Kanters et al., 2016 , p. 783). Accordingly, there appears to be little reason, if any, for NICE not to consider high‐quality and relevant observational data.

One key development over the past decade or more has been the growth in the availability of very large data sets. For the psychological therapies, this is best exemplified by the implementation of the IAPT programme in England (London School of Economics and Political Science, 2006 ). The IAPT programme comprises a stepped care approach in which patients are initially referred for low‐intensity interventions such as psychoeducational interventions delivered by psychological wellbeing practitioners (PWPs). If not successful, these are ‘stepped up’ to high‐intensity interventions comprising CBT and several non‐CBT therapies, including CfD (person‐centred experiential therapy), a standardised model of intervention focused on depression with standards of training and supervision. Some patients, based on their presenting issues, are assigned directly to high‐intensity interventions. The IAPT programme, which was piloted in 2006 and independently evaluated (Parry et al., 2011 ), has been rolled out nationally and has focused largely on patients experiencing depression and anxiety but is being expanded to other patient groups.

A key feature of the IAPT programme is the administration of a common set of outcome measures – a minimum data set (MDS) – at each attended session. The MDS comprises the following: the Patient Health Questionnaire‐9 (PHQ‐9: Kroenke, Spitzer & Williams, 2001 ), which acts as a proxy measure for depression; the General Anxiety Disorder‐7 (GAD‐7; Spitzer, Kroenke, Williams & Löwe, 2006 ); and the Work and Social Adjustment Scale (WSAS; Mundt, Marks, Shear & Greist, 2002 ). The per‐session administration of the PHQ‐9, GAD‐7 and WSAS in IAPT has yielded potential standardised data sets from routine practice of unprecedented size. In 2015–2016 (the last year for which there is currently data), almost a million people entered IAPT treatment, with over half a million completing a course of treatment (NHS Digital, 2016 ).

The numbers of IAPT patients for whom systematic data has been collected potentially makes this one of the largest standardised data sets on the psychological therapies in the world. Kazdin ( 2008 ), on observing the general waste from data in practice settings not being used stated: ‘we are letting knowledge from practice drip through the holes of a colander’ (p. 155). Indeed, the collection and use of such large‐scale routinely collected standardised data are a hallmark of the research paradigm termed practice‐based evidence (Barkham & Margison, 2007 ; Barkham, Stiles, Lambert & Mellor‐Clark, 2010 ). While the privileging of trials data ahead of observational data may have been appropriate when the latter comprised small‐scale and unsystematic studies, this is no longer the case. In the same way that narrative reviews have developed a clear and systematic methodological underpinning to yield systematic reviews, the methods of collection and analyses of ‘routine data’ have developed a level of sophistication that can arguably no longer be dismissed (or labelled) as simply observational data.

Consistent with this practice‐based paradigm, the proposed 2018 Guideline states: ‘For all interventions for people with depression: use sessional outcome measures; review how well the treatment is working with the person; and monitor and evaluate treatment adherence’ (NICE, 2017d; Recommendation 37, p. 248). In addition, healthcare professionals delivering interventions for people with depression should: ‘receive regular high‐quality supervision; and have their competence monitored and evaluated, for example, by using video and audio tapes, and external audit’ (NICE, 2017d; Recommendation 38, p. 248). These recommendations provide the underpinning not only for enhancing the quality of clinical practice but also of ensuring the collection of high‐quality standardized data that would complement trials‐based data. However, despite the potential size of the IAPT data set and its quality, the data are not currently considered in NICE guideline developments. Given that the IAPT initiative was shaped by iterations of the NICE Guidelines for depression, the IAPT data itself may contribute to a better linkage between practice in routine settings, the yield from RCTs, and guideline development. It also enables practitioners in routine practice to contribute directly via their standardised data to informing the very guidelines that they will have to implement.

The IAPT data set: effectiveness of counselling in the treatment of depression in the NHS

The potential value of the IAPT data set in contributing to the evidence base on effective treatment for depression in adults is illustrated by examining reports and studies derived from IAPT data. Since 2013–14, IAPT have published annual reports comparing the number of referrals, average number of sessions and recovery rates between the available psychological therapies (NHS Digital, 2014 , 2015 , 2016 ). As demonstrated in Table  1 , whilst a greater proportion of referrals (approximately 60–65%) received CBT as compared with counselling, patient outcomes (i.e., recovery rates) have been virtually equivalent between the two interventions.

Data extracted from successive NHS digital reports on comparisons between cognitive behaviour therapy (CBT) and counselling/counselling for depression (CfD)

Research studies carried out by different academic groups that have accessed different portions of the IAPT data set to undertake more detailed analyses have also reported comparable outcomes between CBT and counselling in relation to the treatment of depression (Gyani, Shafran, Layard & Clark, 2013 ; Pybis, Saxon, Hill & Barkham, 2017 ). In more sophisticated studies using multilevel modelling to account for patient case mix and the nested nature of data, where differences have been observed these have been small and clinically insignificant (Pybis et al., 2017 ; Saxon, Firth & Barkham, 2017 ). These data demonstrate that for patients accessing psychological therapy throughout the NHS, counselling is, to all intents and purposes, as effective as CBT in the treatment of depression for both moderate and severe levels of depression. These studies, as well as the publicly available evidence from NHS Digital, confirm the findings of earlier studies using the Clinical Outcomes in Routine Evaluation measure (CORE‐OM; Evans et al., 2002 ). These studies used routinely collected CORE‐OM data from naturalistic settings before the implementation of IAPT and yielded comparable patient outcomes between counselling and CBT (Stiles, Barkham, Mellor‐Clark & Connell, 2008 ; Stiles, Barkham, Twigg, Mellor‐Clark & Cooper, 2006 ).

In summary, the evidence from the IAPT data set is that counselling is as effective as CBT as an intervention for depression. This evidence of effectiveness in NHS practice settings across England accords with the conclusions of Cuijpers ( 2017 ), who reviewed over 500 depression RCTs from four decades of research, and concluded that there were no significant differences between the main interventions, once biases and allegiances were considered. The consistency of the trials‐based and practice‐based findings is important in supporting the value of counselling as an intervention for depression offered in the NHS in England. However, we argue that the key conclusion for guideline development from these findings is that focus of research attention should not be on repeatedly re‐evaluating the evidence for different interventions. Instead the focus should move to other factors such as therapist effects or site effects where there appear to be noticeable differences in patients’ outcomes (e.g., Saxon & Barkham, 2012 ). This refocusing away from treatment differences and towards other factors is a position endorsed by the American Psychological Association ( 2012 ).

The IAPT data set: efficiency and cost‐effectiveness of counselling in the treatment of depression

A 2010 report calculated the annual cost of depression in England to be almost £11 billion in lost earnings, demands on the health service and the cost of prescribing drugs to address the depression (Cost of Depression in England, 2010 ). In this context, the cost‐effectiveness of treatment is important to consider. Determining cost‐effectiveness with acceptable degrees of certainty requires large samples, which the IAPT data set offers in a way that trials do not. Given the NICE procedural manual states that, for example, observational data can be used for ‘aspects of effectiveness’, the potential contribution of the IAPT data set to considerations of cost‐effectiveness is significant.

Improving Access to Psychological Therapies data suggest patients accessing counselling attend fewer sessions on average than those accessing CBT (NHS Digital, 2014 , 2015 , 2016 ; Pybis et al., 2017 ; Saxon, Firth et al., 2017 ). This suggests counselling may well be cheaper and therefore more cost‐efficient than CBT as it achieves comparable patient outcomes. To consider this in more detail, a study exploring the cost‐effectiveness of IAPT as a service reported data collected from five Primary Care Trusts and found the cost of a high‐intensity session was £177 (Radhakrishnan et al., 2013 ). Using this estimate alongside figures from the latest IAPT report that counselling is typically seeing patients for 5.9 sessions, whereas CBT is seeing patients for 7.1 sessions (NHS Digital, 2016 ), this would suggest counselling costs approximately £1044 per patient and CBT approximately £1256 per patient. In 2015–16, 152,452 patients completed a course of CBT at an estimated cost of £191 million. If those same patients had received counselling the cost saving could have been over £30 million.

The potential saving of £30 million is calculated only from the fewer sessions (on average) received by counselling patients in IAPT. However, given that counsellors in IAPT are often paid a grade lower than ‘IAPT‐qualified’ therapists (Perren, 2009 ), this figure may underestimate the potential saving. Moreover, while counselling training is typically self‐funded, IAPT CBT trainings have been government funded, initially centrally and more recently locally. This illustrates the potential financial implications of how research evidence is weighed up and then synthesised into guideline recommendations for the treatment of depression.

In summary, the vast data set derived from the IAPT programme needs to be used to complement data from RCTs. And this is particularly true for questions concerning cost‐effectiveness that cannot be adequately addressed by RCTs alone. Within years, there will be patient data on millions of patients within IAPT services. Its inclusion in the scope of NICE guideline reviews would be wholly consistent with the NICE guidelines procedure manual.

Considering the role of service users’ voices via qualitative research in guideline development

The previous section has argued for guideline developers to consider very large patient data sets. In this section, we argue for guideline developers to incorporate qualitative evidence that gives voice to service users. Doing so would be in accordance with NHS England's business plan for 2016/2017, which sets out a commitment: ‘to make a genuine shift to place patients at the centre, shaping services around their preferences and involving them at all stages’ (NHS England, 2016 , p. 49). NICE has a similar commitment (NICE Patient and Public Involvement Policy, 2017c ). Currently, while qualitative research is included in guideline development, NICE processes do not allow such data to be included in the final summative analyses that shape key recommendations. Yet a number of researchers (Hill, Chui & Baumann, 2013 ; Midgley, Ansaldo & Target, 2014 ) argue that qualitative outcome studies are important to consider because they ‘offer a significant challenge to assumptions about outcome that derive from mainstream quantitative research on this topic, in relation to two questions: how the outcome is conceptualised, and the overall effectiveness of therapy’ (McLeod, 2013 , p. 65). Reviewing existing literature, McLeod suggested patients themselves conceptualise outcome much more broadly than in terms of symptom or behavioural change (Binder, Holgersen & Nielsen, 2010 ). Typically, patients acknowledge ways in which therapy has been helpful but also where it has failed, suggesting that quantitative outcome research may overstate therapeutic effectiveness. Qualitative studies can also help answer questions about patient experience and expectations of NHS services, including whether treatments are credible and acceptable to them, which have an impact on outcomes.

Turning to qualitative research focused on depression, there is a growing literature on understanding the experiences of patient populations such as minority ethnic groups (e.g., Lawrence et al., 2006a ), women (e.g., Stoppard & McMullen, 1999 ), men (e.g., Emslie, Ridge, Ziebland & Hunt, 2006 ) and older adults (e.g., Lawrence et al., 2006b ). Such studies elucidate population‐specific depression experiences that can be useful in understanding why certain populations benefit less from treatment. There is also a literature that seeks to describe the experience of aspects of depression such as recovery (e.g., Ridge & Ziebland, 2006 ) or types of depression such as postnatal depression (e.g., Beck, 2002 ). However, currently relatively little research focuses on patients’ experiences of depression treatment. There is some research on depressed patients’ experiences of computer‐mediated depression treatment (e.g., Beattie, Shaw, Kaur & Kessler, 2009 ; Lillevoll et al., 2013 ), and mindfulness (e.g., Mason & Hargreaves, 2001 ; Smith, Graham & Senthinathan, 2007 ). However, there is less research on the major modalities such as CBT (e.g., Barnes et al., 2013 ), psychodynamic (e.g., Valkonen, Hänninen & Lindfors, 2011 ) and process‐experiential therapies (e.g., Timulak & Elliott, 2003 ). The lack matters because such qualitative research focusing on treatment experiences provides a method by which theoretical assumptions about how a therapy ‘works’ can be evaluated against the patient perspective.

Even more rare are comparative qualitative outcome studies (e.g., Nilsson, Svensson, Sandell & Clinton, 2007 ). Such studies focusing on depression are valuable because they can foster understanding of whether patients experience outcomes differently in different therapies. One example is Straarup and Poulsen's ( 2015 ) study, which compared patients’ experiences of CBT and metacognitive therapy and found evidence of different understandings of the causes of depression and what had changed as a result of therapy.

In summary, qualitative research has considerable value in terms of capturing patients’ experiences of psychotherapy that can inform practice (see Levitt, Pomerville & Surace, 2016 ). This suggests the need: (1) to consider qualitative outcome studies in guideline development and recommendations, and (2) encouraging further research focused on guideline‐recommended treatments and differential patient experiences.

Towards a broader spectrum of best evidence

Whatever the potential pool of data, guideline organisations need to establish and implement procedures for making recommendations. A recent review considered how different national organisations produce clinical guidelines. Moriana et al. ( 2017 ) analysed and compiled lists of evidence‐based psychological treatments by disorder using data provided by RCTs, meta‐analyses, guidelines and systematic reviews of NICE, Cochrane, Division 12 of the American Psychological Association and the Australian Psychological Society. For depression, they found poor agreement with no single intervention obtaining positive consensus agreement from all four organisations. The authors suggested one possible cause for the lack of agreement might be subtle biases in committee procedures, while evidence considered by both NICE and Cochrane may be overinfluenced by the key meta‐analyses that both organisations commission to support their decision‐making. Whilst one organisation might favour its own procedures in this way, the process lacks standardisation across the different bodies and leads to discrepancies in guidance.

The finding that guideline processes have led to different treatment recommendations for the same condition underlines the criticisms of an approach to synthesising evidence that rigidly prioritises RCTs. We argue that a rigorous and relevant knowledge base of the psychological therapies cannot be built on one research paradigm or type of data alone but should incorporate both evidence‐based practice (i.e., trials) and practice‐based evidence (i.e., routine practice data; Barkham & Margison, 2007 ). In this conceptualisation, trials provide evidence from a top‐down model (RCT evidence generating national guidelines that are implemented in practice settings) while practice‐based evidence builds upwards using data from routine practice settings to guide interventions and inform guideline development. Both paradigms are complementary and, most importantly, the results from one paradigm can be tested out in the other. Further, a synthesis of evidence from both paradigms ensures that the data from trials remain directly connected and relevant to routine practice, creating a continual cycle between practice and research and between practitioners and researchers.

Given the points made here, there is little justification for relying solely on trials data and dismissing evidence from large standardised routine datasets delivering NICE recommended and IAPT approved psychological therapies. There are issues and vulnerabilities with both paradigms and the evidence they provide, but it is no longer credible to suggest that the term best applies only to trials data. To abide by the advice of Rawlins ( 2008 ) as well as Jadad and Enkin ( 2007 ), views concerning nontrial data need to become more accommodating. Overall, a collective move to a position of considering the weight of evidence from a wider bandwidth or spectrum provides a more rounded and inclusive view of available high‐quality data. By applying the concept of teleoanalysis – that is, the synthesis of different categories of evidence to obtain a quantitative summary – it is possible to arrive at more robust and relevant conclusions (Clarke & Barkham, 2009 ; Wald & Morris, 2003 ). This, we would suggest, is an approach that would yield both better and more relevant evidence. Accordingly, IAPT data now needs to be considered alongside evidence from trials to form a more complete and accurate picture of the comparative effectiveness of psychological therapies. Further, high‐quality qualitative data require inclusion in arriving at recommendations, particularly as it is a primary source for patients’ perspectives and experiences.

Conclusions and recommendations

We have argued for greater precision in defining the profession and practice of counselling, provided an overview of research on counselling for the treatment of depression from meta‐analyses and RCTs, raised issues arising from a sole reliance on trials, and put the case for broadening the bandwidth of high‐quality evidence using large routine standardised data sets and the consideration of high‐quality qualitative studies. Overall, with regard to depression, counselling is effective. Some analyses suggest it is somewhat less effective than other therapies for depression (e.g., CBT), but when research findings are adjusted for researcher allegiance and low risk of bias, such differences are minimal and not clinically relevant (Cuijpers, 2017 ). Results from (very) large standardised data sets in routine practice show counselling to be as effective as CBT in the treatment of patient‐reported depression and with a suggestion that it may be more cost‐efficient. However, such data are not considered by NICE even though it is consistent with the scope of data defined in their guideline development procedural manual (NICE, 2014 /2017).

One clear observation concerning RCTs in the field of depression is the paucity of high‐quality head‐to‐head trials relating to counselling. In addition, there are calls from advocates of RCTs for trials to be larger and pragmatic (Wessley, 2007 ). In response to such calls, there is a large pragmatic noninferiority RCT comparing CfD (Person‐centred experiential therapy) with CBT as the benchmark treatment that will yield initial results late in 2018 (Saxon, Ashley et al., 2017 ). Particularly significant is the trial's focus on patients diagnosed as experiencing moderate or severe depression. The results regarding any differential effectiveness of counselling between moderate and severe depression will address a key issue as to whether CfD could be considered as a front‐line intervention. Funders should call for other therapeutic approaches to be evaluated using CBT as a benchmark – to determine whether another therapy is, in any clinically meaningful way, noninferior to CBT. In this way, a robust and relevant knowledge base will be constructed that aims to ensure quality and standards of psychological interventions for the treatment of depression while providing choice to patients. This is important giving the mounting empirical evidence that improving patient treatment choice improves therapy outcomes (Lindhiem, Bennett, Trentacosta & McLear, 2014 ; Williams et al., 2016 ).

Finally, in this article, we have sought to make an argument about re‐evaluating the definition of best evidence for guideline development. Using the evidence base for counselling in the treatment of depression as an example, we have argued that guideline developers should move towards integrating differing forms of high‐quality evidence rather than relying on trials alone. But this requires change for all stakeholders: for individual researchers in counselling to be strategic and ensure their work builds cumulatively on the work of others; for researchers in organisations to yield larger and more substantive studies; for service providers to collaborate in collating common data through, for example, building practice research networks; for counselling bodies to devise, fund and implement research strategies that will deliver a robust evidence base for practice; and for guideline developers to accept a diversity of substantive research approaches that, combined, will yield best evidence. In doing so, not only will it be possible to draw more robust conclusions about the cost‐effectiveness of depression treatment in the NHS and the clinical efficacy and effectiveness of different interventions, but also potentially the community, service, therapist, and patient variables that significantly impact on patient outcomes.

Acknowledgements

We would like to thank the anonymous reviewers for their helpful comments on an earlier draft.

Biographies

Michael Barkham is Professor of Clinical Psychology and Director of the Centre for Psychological Services Research at the University of Sheffield.

Naomi P. Moller is Joint Head of Research for the British Association for Counselling and Psychotherapy and Senior Lecturer in the School of Psychology at the Open University.

Joanne Pybis is Senior Research Fellow for the British Association for Counselling and Psychotherapy.

The views expressed in this article are our own and do not necessarily reflect the views of our respective organisations.

  • American Counseling Association (2017). What is counseling? Retrieved from https://www.counseling.org/aca-community/learn-about-counseling/what-is-counseling/overview
  • American Psychiatric Association (2017). APA practice guidelines . Retrieved from http://psychiatryonline.org/guidelines
  • American Psychological Association (2012). Recognition of psychotherapy effectiveness . Retrieved from http://www.apa.org/about/policy/resolution-psychotherapy.aspx
  • Barber, J. P. , Connoll, M. B. , Crits‐Christoph, P. , Gladis, L. , & Siqueland, L. (2000). Alliance predicts patients’ outcome beyond in‐treatment change in symptoms . Journal of Consulting and Clinical Psychology , 68 , 1027–1032. [ PubMed ] [ Google Scholar ]
  • Barkham, M. , Lutz, W. , Lambert, M. J. , & Saxon, D. (2017). Therapist effects, effective therapists, and the law of variability In Castonguay L. G., & Hill C. E. (Eds.), How and why are some therapists better than others?: Understanding therapist effects (pp. 13–36). Washington, DC: American Psychological Association. [ Google Scholar ]
  • Barkham, M. , & Margison, F. (2007). Practice‐based evidence as a complement to evidence‐based practice: from dichotomy to chiasmus In Freeman C., & Power M. (Eds.), Handbook of evidence‐based psychotherapies: A guide for research and practice (pp. 443–476). Chichester, United Kingdom: Wiley. [ Google Scholar ]
  • Barkham, M. , Stiles, W. B. , Lambert, M. J. , & Mellor‐Clark, J. (2010). Building a rigorous and relevant knowledge‐base for the psychological therapies In Barkham M., Hardy G. E., & Mellor‐Clark J. (Eds.), Developing and delivering practice‐based evidence: A guide for the psychological therapies (pp. 21–61). Chichester, United Kingdom: Wiley. [ Google Scholar ]
  • Barnes, M. , Sherlock, S. , Thomas, L. , Kessler, D. , Kuyken, W. , Owen‐Smith, A. , … Turner, K. (2013). No pain, no gain: depressed clients’ experiences of cognitive behavioural therapy . British Journal of Clinical Psychology , 52 , 347–364. [ PubMed ] [ Google Scholar ]
  • Barth, J. , Munder, T. , Gerger, H. , Nüesch, E. , Trelle, S. , Znoj, H. , … Cuijpers, P. (2013). Comparative efficacy of seven psychotherapeutic interventions for patients with depression: a network meta‐analysis . PLoS Medicine , 10 , e1001454. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Beattie, A. , Shaw, A. , Kaur, S. , & Kessler, D. (2009). Primary‐care patients’ expectations and experiences of online cognitive behavioural therapy for depression: a qualitative study . Health Expectations , 12 , 45–59. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Beck, C. T. (2002). Postpartum depression: a metasynthesis . Qualitative Health Research , 12 , 453–472. [ PubMed ] [ Google Scholar ]
  • Bedi, N. , Chilvers, C. , Churchill, R. , Dewey, M. , Duggan, C. , Fielding, K. , … Williams, I. (2000). Assessing effectiveness of treatment of depression in primary care . British Journal of Psychiatry , 177 , 312–328. [ PubMed ] [ Google Scholar ]
  • Binder, P. E. , Holgersen, H. , & Nielsen, G. H. S. (2010). What is a ‘good outcome’ in psychotherapy? A qualitative exploration of former patients’ point of view . Psychotherapy Research , 20 , 285–294. [ PubMed ] [ Google Scholar ]
  • British Association for Counselling and Psychotherapy (2009). BACP issues warning that new depression guidelines may harm patients , October 9 2009. Retrieved from http://www.bacp.co.uk/media/index.php?newsId=1610
  • British Association for Counselling and Psychotherapy (2017). What is counselling? Retrieved from http://www.bacp.co.uk/crs/Training/whatiscounselling.php
  • Cape, J. , Whittington, C. , Buszewicz, M. , Wallace, P. , & Underwood, L. (2010). Brief psychological therapies for anxiety and depression in primary care: meta‐analysis and meta‐regression . BMC Medicine , 8 , 38. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Cartwright, N. , & Munro, E. (2010). The limitations of randomized controlled trials in predicting effectiveness . Journal of Evaluation in Clinical Practice , 16 , 260–266. [ PubMed ] [ Google Scholar ]
  • Clark, D. M. (2011). Implementing NICE guidelines for the psychological treatment of depression and anxiety disorders: the IAPT experience . International Review of Psychiatry , 23 , 318–327. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Clarke, J. , & Barkham, M. (2009). Tribute to Phil Richardson ‐ Evidence de rigueur: the shape of evidence in psychological therapies and the modern practitioner as teleoanalyst . Clinical Psychology Form , 202 , 7–11. [ Google Scholar ]
  • Cohen, J. (1992). A power primer . Psychological Bulletin , 112 , 155–159. [ PubMed ] [ Google Scholar ]
  • Cost of Depression in England (2010). All party parliamentary Group on wellbeing economics . Retrieved from https://wellbeingeconomics.wordpress.com/reports/Retrieved July 19 2017.
  • Cuijpers, P. (2016). Are all psychotherapies equally effective in the treatment of adult depression? The lack of statistical power of comparative outcome studies . Evidence Based Mental Health , 19 , 39–42. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Cuijpers, P. (2017). Four decades of outcome research on psychotherapies for adult depression: an overview of a series of meta‐analyses . Canadian Psychology/Psychologie Canadienne , 58 , 7–19. [ Google Scholar ]
  • Cuijpers, P. , Driessen, E. , Hollon, S. D. , van Oppen, P. , Barth, J. , & Andersson, G. (2012). The efficacy of non‐directive supportive therapy for adult depression: a meta‐analysis . Clinical Psychology Review , 32 , 280–291. [ PubMed ] [ Google Scholar ]
  • Cuijpers, P. , Turner, E. H. , Koole, S. L. , van Dijke, A. , & Smit F. (2014). What is the threshold for a clinically relevant effect? The case of major depressive disorders Depression and Anxiety , 31 , 374–378. [ PubMed ] [ Google Scholar ]
  • Dias S. (2013). Using network meta‐analysis (NMA) for decision making . Paper presented at the 38th International Society for Clinical Biostatistics, Munich, Germany. [ Google Scholar ]
  • Emslie, C. , Ridge, D. , Ziebland, S. , & Hunt, K. (2006). Men's accounts of depression: reconstructing or resisting hegemonic masculinity? Social Science & Medicine , 62 , 2246–2257. [ PubMed ] [ Google Scholar ]
  • European Association for Counselling (2017). Definition of counselling . Retrieved from http://eac.eu.com/standards-ethics/definition-counselling/ . Cited 19 June 2017.
  • Evans, C. , Connell, J. , Barkham, M. , Margison, F. , Mellor‐Clark, J. , McGrath, G. , & Audin, K. (2002). Towards a standardised brief outcome measure: psychometric properties and utility of the CORE‐OM . British Journal of Psychiatry , 180 , 51–60. [ PubMed ] [ Google Scholar ]
  • Goldman, R. N. , Greenberg, L. S. , & Angus, L. (2006). The effects of adding emotion‐focussed interventions to the client‐centred relationship conditions in the treatment of depression . Psychotherapy Research , 16 , 537–549. [ Google Scholar ]
  • Greenberg, L. S. , & Watson, J. C. (1998). Experiential therapy of depression: differential effects of client‐centred relationship conditions and process experiential interventions . Psychotherapy Research , 8 , 210–224. [ Google Scholar ]
  • Gyani, A. , Pumphrey, N. , Parker, H. , Shafran, R. , & Rose, S. (2012). Investigating the use of NICE guidelines and IAPT services in the treatment of depression . Mental Health in Family Medicine , 9 , 149–160. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Gyani, A. , Shafran, R. , Layard, R. , & Clark, D. M. (2013). Enhancing recovery rates: lessons from year one of IAPT . Behaviour Research and Therapy , 51 , 597–606. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Hernandez‐Villafuerte, K. , Garau, M. , & Devlin, N. (2014). Do NICE decisions affect decisions in other countries? Value Health , 17 , A418. [ PubMed ] [ Google Scholar ]
  • Hill, C. E. , Chui, H. , & Baumann, E. (2013). Revisiting and reenvisioning the outcome problem in psychotherapy: an argument to include individualized and qualitative measurement . Psychotherapy , 50 , 68–76. [ PubMed ] [ Google Scholar ]
  • Hussain‐Gambles, M. , Atkin, K. , & Leese, B. (2004). Why ethnic minority groups are underrepresented in clinical trials: a review of the literature . Health and Social Care in the Community , 12 , 382–389. [ PubMed ] [ Google Scholar ]
  • IAPT Programme (2013). Census of the IAPT Workforce as at August 2012 . Retrieved from https://www.uea.ac.uk/documents/246046/11919343/iapt-workforce-education-and-training-2012-census-report.pdf/907e15d0-b36a-432c-8058-b2452d3628de . Cited 19 June 2017.
  • Jadad, A. R. , & Enkin, M. W. (2007). Randomized control trials: Questions, answers and musings , 2nd edn Oxford, United Kingdom: Blackwell. [ Google Scholar ]
  • Kanters, S. , Ford, N. , Druyts, E. , Thorlund, K. , Mills, E. J. , & Bansback, N. (2016). Use of network meta‐analysis in clinical guidelines . Bulletin of the World Health Organization , 94 , 782–784. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Kaufman, J. , & Charney, D. (2000). Comorbidity of mood and anxiety disorders . Depression and Anxiety , 12 ( Suppl. 1 ), 69–74. [ PubMed ] [ Google Scholar ]
  • Kazdin, A. E. (2008). Evidence‐based treatment and practice. New opportunities to bridge clinical research and practice, enhance the knowledge base and improve patient care . American Psychologist , 63 , 146–159. [ PubMed ] [ Google Scholar ]
  • Kennedy‐Martin, T. , Curtis, S. , Faries, D. , Robinson, S. , & Johnston, J. (2015). A literature review on the representativeness of randomized controlled trial samples and implications for the external validity of trial results . Trials , 16 , 495. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • King, M. , Marston, L. , & Bower, P. (2014). Comparison of non‐directive counselling and cognitive behaviour therapy for patients presenting in general practice with an ICD‐10 depressive episode: a randomized control trial . Psychological Medicine , 44 , 1835–1844. [ PubMed ] [ Google Scholar ]
  • Kroenke, K. , Spitzer, R. L. , & Williams, J. B. W. (2001). The PHQ‐9: validity of a brief depression severity measure . Journal of General Internal Medicine , 16 , 606–613. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Lawrence, V. , Banerjee, S. , Bhugra, D. , Sangha, K. , Turner, S. , & Murray, J. (2006a). Coping with depression in later life: a qualitative study of help‐seeking in three ethnic groups . Psychological Medicine , 36 , 1375–1383. [ PubMed ] [ Google Scholar ]
  • Lawrence, V. , Murray, J. , Banerjee, S. , Turner, S. , Sangha, K. , Byng, R. , … Macdonald, A. (2006b). Concepts and causation of depression: a cross‐cultural study of the beliefs of older adults . The Gerontologist , 46 , 23–32. [ PubMed ] [ Google Scholar ]
  • Levitt, H. M. , Pomerville, A. , & Surace, F. I. (2016). A qualitative meta‐analysis examining clients’ experiences of psychotherapy: a new agenda . Psychological Bulletin , 142 , 801–830. [ PubMed ] [ Google Scholar ]
  • Lillevoll, K. R. , Wilhelmsen, M. , Kolstrup, N. , Høifødt, R. S. , Waterloo, K. , Eisemann, M. , & Risør, M. B. (2013). Patients’ experiences of helpfulness in guided internet‐based treatment for depression: qualitative study of integrated therapeutic dimensions . Journal of Medical Internet Research , 15 ( 6 ), e126. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Lindhiem, O. , Bennett, C. B. , Trentacosta, C. J. , & McLear, C. (2014). Client preferences affect treatment satisfaction, completion, and clinical outcome: a meta‐analysis . Clinical Psychology Review , 34 , 506–517. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • London School of Economics and Political Science. Centre for Economic Performance. Mental Health Policy Group (2006). The depression report: A new deal for depression and anxiety disorders [online]. London, UK: LSE Research Online. [ Google Scholar ]
  • Mason, O. , & Hargreaves, I. (2001). A qualitative study of mindfulness‐based cognitive therapy for depression . Psychology and Psychotherapy: Theory, Research and Practice , 74 , 197–212. [ PubMed ] [ Google Scholar ]
  • McLeod, J. (2013). Qualitative research: methods and contributions In Lambert M. J. (Ed.), Bergin and Garfield's handbook of psychotherapy and behavior change , 6th edn (pp. 49–84). Hoboken, NJ: Wiley. [ Google Scholar ]
  • Midgley, N. , Ansaldo, F. , & Target, M. (2014). The meaningful assessment of therapy outcomes: incorporating a qualitative study into a randomized controlled trial evaluating the treatment of adolescent depression . Psychotherapy , 51 , 128–137. [ PubMed ] [ Google Scholar ]
  • Mills, E. J. , Thorlund, K. , & Ioannidis, J. P. A. (2013). Demystifying trial networks and network meta‐analysis . BMJ , 346 ( f2914 ), 1–6. [ PubMed ] [ Google Scholar ]
  • Moriana, J. A. , Gálvez‐Lara, M. , & Corpas, J. (2017). Psychological treatments for mental disorders in adults: a review of the evidence of leading international organizations . Clinical Psychology Review , 54 , 29–43. [ PubMed ] [ Google Scholar ]
  • Mundt, J. C. , Marks, I. M. , Shear, M. K. , & Greist, J. M. (2002). The work and social adjustment scale: a simple measure of impairment in functioning . British Journal of Psychiatry , 180 , 461–464. [ PubMed ] [ Google Scholar ]
  • National Institute for Health and Care Excellence (2009). NICE guidelines for depression in adults: recognition and management . Retrieved from https://www.nice.org.uk/guidance/cg90 [ PubMed ]
  • National Institute for Health and Care Excellence (2014/updated 2017). Developing NICE guidelines: the manual: Process and methods . Retrieved from https://www.nice.org.uk/process/pmg20 [ PubMed ]
  • National Institute for Health and Care Excellence (2017a). NICE guidance . Retrieved from https://www.nice.org.uk/about/what-we-do/our-programmes/nice-guidance
  • National Institute for Health and Care Excellence (2017b). NICE guidelines . Retrieved from https://www.nice.org.uk/About/What-we-do/Our-Programmes/NICE-guidance/NICE-guidelines
  • National Institute for Health and Care Excellence (2017c). Patient and public involvement policy . Retrieved from https://www.nice.org.uk/about/nice-communities/public-involvement/patient-and-public-involvement-policy
  • National Institute for Heath and Care Excellence (2017d). Depression in adults: recognition and management: Draft guidance consultation . Retrieved from https://www.nice.org.uk/guidance/GID-CGWAVE0725/documents/draft-guideline [ PubMed ]
  • NHS Digital (2014). Psychological therapies: Annual report on the use of IAPT services . England, 2013‐2014. Retrieved from http://content.digital.nhs.uk/catalogue/PUB14899 [ Google Scholar ]
  • NHS Digital (2015). Psychological therapies: Annual report on the use of IAPT services . England, 2014‐2015. Retrieved from http://content.digital.nhs.uk/catalogue/PUB19098 [ Google Scholar ]
  • NHS Digital (2016). Psychological therapies: Annual report on the use of IAPT services . England, 2015‐2016. Retrieved from http://content.digital.nhs.uk/pubs/psycther1516 [ Google Scholar ]
  • NHS England (2016). Our 2016/2017 business plan . Retrieved from https://www.england.nhs.uk/wp-content/uploads/2016/03/bus-plan-16.pdf
  • NHS England (2017). IAPT workforce . Retrieved from https://www.england.nhs.uk/mental-health/adults/iapt/workforce/
  • NHS England and Health Education England (2016). 2015 Adult IAPT workforce census report . Retrieved from https://www.england.nhs.uk/mentalhealth/wp-content/uploads/sites/29/2016/09/adult-iapt-workforce-census-report-15.pdf
  • Nilsson, T. , Svensson, M. , Sandell, R. , & Clinton, D. (2007). Patients’ experiences of change in cognitive–behavioral therapy and psychodynamic therapy: a qualitative comparative study . Psychotherapy Research , 17 , 553–566. [ Google Scholar ]
  • Parry, G. , Barkham, M. , Brazier, J. , Dent‐Brown, K. , Hardy, G. , Kendrick, T. , … Lovell, K. (2011). An evaluation of a new service model: Improving access to psychological therapies demonstration sites 2006–2009 . Final report. NIHR Service Delivery and Organisation programme. [ Google Scholar ]
  • Perren, S. (2009). Topics in training: thinking of applying for the IAPT high or low‐intensity training? Sara Perren has some advice . Healthcare Counselling and Psychotherapy Journal , 1 , 29–30. [ Google Scholar ]
  • Pybis, J. , Saxon, D. , Hill, A. , & Barkham, M. (2017). The comparative effectiveness and efficiency of cognitive behaviour therapy and counselling in the treatment of depression: evidence from the 2 nd UK national audit of psychological therapies . BMC Psychiatry , 17 , 215. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Radhakrishnan, M. , Hammond, G. , Jones, P. B. , Watson, A. , McMillan‐Shields, F. , & Lafortune, L. (2013). Cost of Improving Access to Psychological Therapies (IAPT) programme: an analysis of cost of session, treatment and recovery in selected primary care trusts in the east of England region . Behaviour Research and Therapy , 51 , 37–45. [ PubMed ] [ Google Scholar ]
  • Rawlins, M. (2008). De testimonio: on the evidence for decisions about the use of therapeutic interventions . The Lancet , 372 , 2152–2161. [ PubMed ] [ Google Scholar ]
  • Ridge, D. , & Ziebland, S. (2006). ‘The old me could never have done that’: how people give meaning to recovery following depression . Qualitative Health Research , 16 , 1038–1053. [ PubMed ] [ Google Scholar ]
  • Roth, A. D. , Hill, A. , & Pilling, S. (2009). The competences required to deliver effective humanistic psychological therapies . London, UK: Department of Health. [ Google Scholar ]
  • Sanders, P. , & Hill, A. (2014). Counselling for depression: A person‐centred and experiential approach to practice . London, United Kingdom: Sage. [ Google Scholar ]
  • Saxon, D. , Ashley, K. , Bishop‐Edwards, L. , Connell, J. , Harrison, P. , Ohlsen, S. , … Barkham, M. (2017). A pragmatic randomized controlled trial assessing the non‐inferiority of counselling for depression versus cognitive‐behaviour therapy for patients in primary care meeting a diagnosis of moderate or severe depression (PRaCTICED): a study protocol for a randomized controlled trial . Trials , 18 , 93. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Saxon, D. , & Barkham, M. (2012). Patterns of therapist variability: therapist effects and the contribution of patient severity and risk . Journal of Consulting and Clinical Psychology , 80 , 535–546. [ PubMed ] [ Google Scholar ]
  • Saxon, D. , Firth, N. , & Barkham, M. (2017). The relationship between therapist effects and therapy delivery factors: therapy modality, dosage, and non‐completion Administration and Policy in Mental Health , 44 , 705–715. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Scottish Intercollegiate Guidelines Network (2017). What we do . Retrieved from http://www.sign.ac.uk/what-we-do.html
  • Simpson, S. , Corney, R. , Fitzgerald, P. , & Beecham, J. (2000). A randomised controlled trial to evaluate the effectiveness and cost‐effectiveness of counselling patients with chronic depression . Health Technology Assessment , 4 ( 36 ), 1–83. [ PubMed ] [ Google Scholar ]
  • Smith, A. , Graham, L. , & Senthinathan, S. (2007). Mindfulness‐based cognitive therapy for recurring depression in older people: a qualitative study . Aging and Mental Health , 11 , 346–357. [ PubMed ] [ Google Scholar ]
  • Spitzer, R. L. , Kroenke, K. , Williams, J. B. W. , & Löwe, B. (2006). A brief measure for assessing generalized anxiety disorder: the GAD‐7 . Archives of Internal Medicine , 22 , 1092–1097. [ PubMed ] [ Google Scholar ]
  • Stiles, W. B. , Barkham, M. , Mellor‐Clark, J. , & Connell, J. (2008). Effectiveness of cognitive‐behavioural, person‐centred, and psychodynamic therapies in UK primary care routine practice: replication in a larger sample . Psychological Medicine , 38 , 677–688. [ PubMed ] [ Google Scholar ]
  • Stiles, W. B. , Barkham, M. , Twigg, E. , Mellor‐Clark, J. , & Cooper, M. (2006). Effectiveness of cognitive‐behavioural, person‐centred, and psychodynamic therapies as practiced in UK National Health Service settings . Psychological Medicine , 36 , 555–566. [ PubMed ] [ Google Scholar ]
  • Stoppard, J. M. , & McMullen, L. M. (1999). Toward an understanding of depression from the standpoint of women: exploring contributions of qualitative research approaches . Canadian Psychology , 40 , 75–76. [ Google Scholar ]
  • Straarup, N. S. , & Poulsen, S. (2015). Helpful aspects of metacognitive therapy and cognitive behaviour therapy for depression: a qualitative study . The Cognitive Behaviour Therapist , 8 , e22. [ Google Scholar ]
  • Stronks, K. , Wieringa, F. , & Hardon, A. (2013). Confronting diversity in the production of clinical evidence goes beyond merely including under‐represented groups in clinical trials . Trials , 14 , 177. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Stuart, E. A. , Bradshaw, C. P. , & Leaf, P. J. (2015). Assessing the generalizability of randomized trial results to target populations . Prevention Science , 16 , 475–485. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Timulak, L. , & Elliott, R. (2003). Empowerment events in process‐experiential psychotherapy of depression: an exploratory qualitative analysis . Psychotherapy Research , 13 , 443–460. [ PubMed ] [ Google Scholar ]
  • Valkonen, J. , Hänninen, V. , & Lindfors, O. (2011). Outcomes of psychotherapy from the perspective of the users . Psychotherapy Research , 21 , 227–240. [ PubMed ] [ Google Scholar ]
  • Vlayen, J. , Aertgeerts, B. , Hannes, K. , Sermeus, W. , & Ramaeker, D. (2005). A systematic review of appraisal tools for clinical practice guidelines: multiple similarities and one common deficit . International Journal for Quality in Health Care , 17 , 235–242. [ PubMed ] [ Google Scholar ]
  • Wald, N. J. , & Morris, J. K. (2003). Teleoanalysis: combining data from different types of study . BMJ , 327 , 616–618. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Ward, E. , King, M. , Lloyd, M. , Bower, P. , Sibbald, B. , Farrely, S. , … Addington‐Hall, J. (2000). Randomised controlled trial of non‐directive counselling, cognitive‐behaviour therapy, and usual general practitioner care for patients with depression. I: clinical effectiveness . BMJ , 321 , 1383–1388. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • Watson, J. C. , Gordon, L. B. , Stermac, L. , Kalogerakos, F. , & Steckley, P. (2003). Comparing the effectiveness of process‐experiential with cognitive‐behavioral psychotherapy in the treatment of depression . Journal of Consulting and Clinical Psychology , 71 , 773–781. [ PubMed ] [ Google Scholar ]
  • Wessley, S. (2007). Commentary: a defence of the randomized controlled trial in mental health . BioSocieties , 2 , 115–127. [ Google Scholar ]
  • Williams, R. , Farquharson, L. , Palmer, L. , Bassett, P. , Clarke, J. , Clark, D. M. , & Crawford, M. J. (2016). Patient preference in psychological treatment and associations with self‐reported outcome: national cross‐sectional survey in England and Wales . BMC Psychiatry , 16 , 4. [ PMC free article ] [ PubMed ] [ Google Scholar ]
  • World Health Organization (2017). WHO guidelines approved by the Guidelines Review Committee . Retrieved from http://www.who.int/publications/guidelines/en/

Uncomplicated Reviews of Educational Research Methods

  • Writing a Research Report

.pdf version of this page

This review covers the basic elements of a research report. This is a general guide for what you will see in journal articles or dissertations. This format assumes a mixed methods study, but you can leave out either quantitative or qualitative sections if you only used a single methodology.

This review is divided into sections for easy reference. There are five MAJOR parts of a Research Report:

1.    Introduction 2.    Review of Literature 3.    Methods 4.    Results 5.    Discussion

As a general guide, the Introduction, Review of Literature, and Methods should be about 1/3 of your paper, Discussion 1/3, then Results 1/3.

Section 1 : Cover Sheet (APA format cover sheet) optional, if required.

Section 2: Abstract (a basic summary of the report, including sample, treatment, design, results, and implications) (≤ 150 words) optional, if required.

Section 3 : Introduction (1-3 paragraphs) •    Basic introduction •    Supportive statistics (can be from periodicals) •    Statement of Purpose •    Statement of Significance

Section 4 : Research question(s) or hypotheses •    An overall research question (optional) •    A quantitative-based (hypotheses) •    A qualitative-based (research questions) Note: You will generally have more than one, especially if using hypotheses.

Section 5: Review of Literature ▪    Should be organized by subheadings ▪    Should adequately support your study using supporting, related, and/or refuting evidence ▪    Is a synthesis, not a collection of individual summaries

Section 6: Methods ▪    Procedure: Describe data gathering or participant recruitment, including IRB approval ▪    Sample: Describe the sample or dataset, including basic demographics ▪    Setting: Describe the setting, if applicable (generally only in qualitative designs) ▪    Treatment: If applicable, describe, in detail, how you implemented the treatment ▪    Instrument: Describe, in detail, how you implemented the instrument; Describe the reliability and validity associated with the instrument ▪    Data Analysis: Describe type of procedure (t-test, interviews, etc.) and software (if used)

Section 7: Results ▪    Restate Research Question 1 (Quantitative) ▪    Describe results ▪    Restate Research Question 2 (Qualitative) ▪    Describe results

Section 8: Discussion ▪    Restate Overall Research Question ▪    Describe how the results, when taken together, answer the overall question ▪    ***Describe how the results confirm or contrast the literature you reviewed

Section 9: Recommendations (if applicable, generally related to practice)

Section 10: Limitations ▪    Discuss, in several sentences, the limitations of this study. ▪    Research Design (overall, then info about the limitations of each separately) ▪    Sample ▪    Instrument/s ▪    Other limitations

Section 11: Conclusion (A brief closing summary)

Section 12: References (APA format)

Share this:

About research rundowns.

Research Rundowns was made possible by support from the Dewar College of Education at Valdosta State University .

  • Experimental Design
  • What is Educational Research?
  • Writing Research Questions
  • Mixed Methods Research Designs
  • Qualitative Coding & Analysis
  • Qualitative Research Design
  • Correlation
  • Effect Size
  • Instrument, Validity, Reliability
  • Mean & Standard Deviation
  • Significance Testing (t-tests)
  • Steps 1-4: Finding Research
  • Steps 5-6: Analyzing & Organizing
  • Steps 7-9: Citing & Writing

Create a free website or blog at WordPress.com.

' src=

  • Already have a WordPress.com account? Log in now.
  • Subscribe Subscribed
  • Copy shortlink
  • Report this content
  • View post in Reader
  • Manage subscriptions
  • Collapse this bar

major sections of a research report used in counseling research

Snapsolve any problem by taking a picture. Try it in the Numerade app?

Logo for BCcampus Open Publishing

Want to create or adapt books like this? Learn more about how Pressbooks supports open publishing practices.

Chapter 11: Presenting Your Research

Writing a Research Report in American Psychological Association (APA) Style

Learning Objectives

  • Identify the major sections of an APA-style research report and the basic contents of each section.
  • Plan and write an effective APA-style research report.

In this section, we look at how to write an APA-style empirical research report , an article that presents the results of one or more new studies. Recall that the standard sections of an empirical research report provide a kind of outline. Here we consider each of these sections in detail, including what information it contains, how that information is formatted and organized, and tips for writing each section. At the end of this section is a sample APA-style research report that illustrates many of these principles.

Sections of a Research Report

Title page and abstract.

An APA-style research report begins with a  title page . The title is centred in the upper half of the page, with each important word capitalized. The title should clearly and concisely (in about 12 words or fewer) communicate the primary variables and research questions. This sometimes requires a main title followed by a subtitle that elaborates on the main title, in which case the main title and subtitle are separated by a colon. Here are some titles from recent issues of professional journals published by the American Psychological Association.

  • Sex Differences in Coping Styles and Implications for Depressed Mood
  • Effects of Aging and Divided Attention on Memory for Items and Their Contexts
  • Computer-Assisted Cognitive Behavioural Therapy for Child Anxiety: Results of a Randomized Clinical Trial
  • Virtual Driving and Risk Taking: Do Racing Games Increase Risk-Taking Cognitions, Affect, and Behaviour?

Below the title are the authors’ names and, on the next line, their institutional affiliation—the university or other institution where the authors worked when they conducted the research. As we have already seen, the authors are listed in an order that reflects their contribution to the research. When multiple authors have made equal contributions to the research, they often list their names alphabetically or in a randomly determined order.

In some areas of psychology, the titles of many empirical research reports are informal in a way that is perhaps best described as “cute.” They usually take the form of a play on words or a well-known expression that relates to the topic under study. Here are some examples from recent issues of the Journal Psychological Science .

  • “Smells Like Clean Spirit: Nonconscious Effects of Scent on Cognition and Behavior”
  • “Time Crawls: The Temporal Resolution of Infants’ Visual Attention”
  • “Scent of a Woman: Men’s Testosterone Responses to Olfactory Ovulation Cues”
  • “Apocalypse Soon?: Dire Messages Reduce Belief in Global Warming by Contradicting Just-World Beliefs”
  • “Serial vs. Parallel Processing: Sometimes They Look Like Tweedledum and Tweedledee but They Can (and Should) Be Distinguished”
  • “How Do I Love Thee? Let Me Count the Words: The Social Effects of Expressive Writing”

Individual researchers differ quite a bit in their preference for such titles. Some use them regularly, while others never use them. What might be some of the pros and cons of using cute article titles?

For articles that are being submitted for publication, the title page also includes an author note that lists the authors’ full institutional affiliations, any acknowledgments the authors wish to make to agencies that funded the research or to colleagues who commented on it, and contact information for the authors. For student papers that are not being submitted for publication—including theses—author notes are generally not necessary.

The  abstract  is a summary of the study. It is the second page of the manuscript and is headed with the word  Abstract . The first line is not indented. The abstract presents the research question, a summary of the method, the basic results, and the most important conclusions. Because the abstract is usually limited to about 200 words, it can be a challenge to write a good one.

Introduction

The  introduction  begins on the third page of the manuscript. The heading at the top of this page is the full title of the manuscript, with each important word capitalized as on the title page. The introduction includes three distinct subsections, although these are typically not identified by separate headings. The opening introduces the research question and explains why it is interesting, the literature review discusses relevant previous research, and the closing restates the research question and comments on the method used to answer it.

The Opening

The  opening , which is usually a paragraph or two in length, introduces the research question and explains why it is interesting. To capture the reader’s attention, researcher Daryl Bem recommends starting with general observations about the topic under study, expressed in ordinary language (not technical jargon)—observations that are about people and their behaviour (not about researchers or their research; Bem, 2003 [1] ). Concrete examples are often very useful here. According to Bem, this would be a poor way to begin a research report:

Festinger’s theory of cognitive dissonance received a great deal of attention during the latter part of the 20th century (p. 191)

The following would be much better:

The individual who holds two beliefs that are inconsistent with one another may feel uncomfortable. For example, the person who knows that he or she enjoys smoking but believes it to be unhealthy may experience discomfort arising from the inconsistency or disharmony between these two thoughts or cognitions. This feeling of discomfort was called cognitive dissonance by social psychologist Leon Festinger (1957), who suggested that individuals will be motivated to remove this dissonance in whatever way they can (p. 191).

After capturing the reader’s attention, the opening should go on to introduce the research question and explain why it is interesting. Will the answer fill a gap in the literature? Will it provide a test of an important theory? Does it have practical implications? Giving readers a clear sense of what the research is about and why they should care about it will motivate them to continue reading the literature review—and will help them make sense of it.

Breaking the Rules

Researcher Larry Jacoby reported several studies showing that a word that people see or hear repeatedly can seem more familiar even when they do not recall the repetitions—and that this tendency is especially pronounced among older adults. He opened his article with the following humourous anecdote:

A friend whose mother is suffering symptoms of Alzheimer’s disease (AD) tells the story of taking her mother to visit a nursing home, preliminary to her mother’s moving there. During an orientation meeting at the nursing home, the rules and regulations were explained, one of which regarded the dining room. The dining room was described as similar to a fine restaurant except that tipping was not required. The absence of tipping was a central theme in the orientation lecture, mentioned frequently to emphasize the quality of care along with the advantages of having paid in advance. At the end of the meeting, the friend’s mother was asked whether she had any questions. She replied that she only had one question: “Should I tip?” (Jacoby, 1999, p. 3)

Although both humour and personal anecdotes are generally discouraged in APA-style writing, this example is a highly effective way to start because it both engages the reader and provides an excellent real-world example of the topic under study.

The Literature Review

Immediately after the opening comes the  literature review , which describes relevant previous research on the topic and can be anywhere from several paragraphs to several pages in length. However, the literature review is not simply a list of past studies. Instead, it constitutes a kind of argument for why the research question is worth addressing. By the end of the literature review, readers should be convinced that the research question makes sense and that the present study is a logical next step in the ongoing research process.

Like any effective argument, the literature review must have some kind of structure. For example, it might begin by describing a phenomenon in a general way along with several studies that demonstrate it, then describing two or more competing theories of the phenomenon, and finally presenting a hypothesis to test one or more of the theories. Or it might describe one phenomenon, then describe another phenomenon that seems inconsistent with the first one, then propose a theory that resolves the inconsistency, and finally present a hypothesis to test that theory. In applied research, it might describe a phenomenon or theory, then describe how that phenomenon or theory applies to some important real-world situation, and finally suggest a way to test whether it does, in fact, apply to that situation.

Looking at the literature review in this way emphasizes a few things. First, it is extremely important to start with an outline of the main points that you want to make, organized in the order that you want to make them. The basic structure of your argument, then, should be apparent from the outline itself. Second, it is important to emphasize the structure of your argument in your writing. One way to do this is to begin the literature review by summarizing your argument even before you begin to make it. “In this article, I will describe two apparently contradictory phenomena, present a new theory that has the potential to resolve the apparent contradiction, and finally present a novel hypothesis to test the theory.” Another way is to open each paragraph with a sentence that summarizes the main point of the paragraph and links it to the preceding points. These opening sentences provide the “transitions” that many beginning researchers have difficulty with. Instead of beginning a paragraph by launching into a description of a previous study, such as “Williams (2004) found that…,” it is better to start by indicating something about why you are describing this particular study. Here are some simple examples:

Another example of this phenomenon comes from the work of Williams (2004).

Williams (2004) offers one explanation of this phenomenon.

An alternative perspective has been provided by Williams (2004).

We used a method based on the one used by Williams (2004).

Finally, remember that your goal is to construct an argument for why your research question is interesting and worth addressing—not necessarily why your favourite answer to it is correct. In other words, your literature review must be balanced. If you want to emphasize the generality of a phenomenon, then of course you should discuss various studies that have demonstrated it. However, if there are other studies that have failed to demonstrate it, you should discuss them too. Or if you are proposing a new theory, then of course you should discuss findings that are consistent with that theory. However, if there are other findings that are inconsistent with it, again, you should discuss them too. It is acceptable to argue that the  balance  of the research supports the existence of a phenomenon or is consistent with a theory (and that is usually the best that researchers in psychology can hope for), but it is not acceptable to  ignore contradictory evidence. Besides, a large part of what makes a research question interesting is uncertainty about its answer.

The Closing

The  closing  of the introduction—typically the final paragraph or two—usually includes two important elements. The first is a clear statement of the main research question or hypothesis. This statement tends to be more formal and precise than in the opening and is often expressed in terms of operational definitions of the key variables. The second is a brief overview of the method and some comment on its appropriateness. Here, for example, is how Darley and Latané (1968) [2] concluded the introduction to their classic article on the bystander effect:

These considerations lead to the hypothesis that the more bystanders to an emergency, the less likely, or the more slowly, any one bystander will intervene to provide aid. To test this proposition it would be necessary to create a situation in which a realistic “emergency” could plausibly occur. Each subject should also be blocked from communicating with others to prevent his getting information about their behaviour during the emergency. Finally, the experimental situation should allow for the assessment of the speed and frequency of the subjects’ reaction to the emergency. The experiment reported below attempted to fulfill these conditions. (p. 378)

Thus the introduction leads smoothly into the next major section of the article—the method section.

The  method section  is where you describe how you conducted your study. An important principle for writing a method section is that it should be clear and detailed enough that other researchers could replicate the study by following your “recipe.” This means that it must describe all the important elements of the study—basic demographic characteristics of the participants, how they were recruited, whether they were randomly assigned, how the variables were manipulated or measured, how counterbalancing was accomplished, and so on. At the same time, it should avoid irrelevant details such as the fact that the study was conducted in Classroom 37B of the Industrial Technology Building or that the questionnaire was double-sided and completed using pencils.

The method section begins immediately after the introduction ends with the heading “Method” (not “Methods”) centred on the page. Immediately after this is the subheading “Participants,” left justified and in italics. The participants subsection indicates how many participants there were, the number of women and men, some indication of their age, other demographics that may be relevant to the study, and how they were recruited, including any incentives given for participation.

Three ways of organizing an APA-style method. Long description available.

After the participants section, the structure can vary a bit. Figure 11.1 shows three common approaches. In the first, the participants section is followed by a design and procedure subsection, which describes the rest of the method. This works well for methods that are relatively simple and can be described adequately in a few paragraphs. In the second approach, the participants section is followed by separate design and procedure subsections. This works well when both the design and the procedure are relatively complicated and each requires multiple paragraphs.

What is the difference between design and procedure? The design of a study is its overall structure. What were the independent and dependent variables? Was the independent variable manipulated, and if so, was it manipulated between or within subjects? How were the variables operationally defined? The procedure is how the study was carried out. It often works well to describe the procedure in terms of what the participants did rather than what the researchers did. For example, the participants gave their informed consent, read a set of instructions, completed a block of four practice trials, completed a block of 20 test trials, completed two questionnaires, and were debriefed and excused.

In the third basic way to organize a method section, the participants subsection is followed by a materials subsection before the design and procedure subsections. This works well when there are complicated materials to describe. This might mean multiple questionnaires, written vignettes that participants read and respond to, perceptual stimuli, and so on. The heading of this subsection can be modified to reflect its content. Instead of “Materials,” it can be “Questionnaires,” “Stimuli,” and so on.

The  results section  is where you present the main results of the study, including the results of the statistical analyses. Although it does not include the raw data—individual participants’ responses or scores—researchers should save their raw data and make them available to other researchers who request them. Several journals now encourage the open sharing of raw data online.

Although there are no standard subsections, it is still important for the results section to be logically organized. Typically it begins with certain preliminary issues. One is whether any participants or responses were excluded from the analyses and why. The rationale for excluding data should be described clearly so that other researchers can decide whether it is appropriate. A second preliminary issue is how multiple responses were combined to produce the primary variables in the analyses. For example, if participants rated the attractiveness of 20 stimulus people, you might have to explain that you began by computing the mean attractiveness rating for each participant. Or if they recalled as many items as they could from study list of 20 words, did you count the number correctly recalled, compute the percentage correctly recalled, or perhaps compute the number correct minus the number incorrect? A third preliminary issue is the reliability of the measures. This is where you would present test-retest correlations, Cronbach’s α, or other statistics to show that the measures are consistent across time and across items. A final preliminary issue is whether the manipulation was successful. This is where you would report the results of any manipulation checks.

The results section should then tackle the primary research questions, one at a time. Again, there should be a clear organization. One approach would be to answer the most general questions and then proceed to answer more specific ones. Another would be to answer the main question first and then to answer secondary ones. Regardless, Bem (2003) [3] suggests the following basic structure for discussing each new result:

  • Remind the reader of the research question.
  • Give the answer to the research question in words.
  • Present the relevant statistics.
  • Qualify the answer if necessary.
  • Summarize the result.

Notice that only Step 3 necessarily involves numbers. The rest of the steps involve presenting the research question and the answer to it in words. In fact, the basic results should be clear even to a reader who skips over the numbers.

The  discussion  is the last major section of the research report. Discussions usually consist of some combination of the following elements:

  • Summary of the research
  • Theoretical implications
  • Practical implications
  • Limitations
  • Suggestions for future research

The discussion typically begins with a summary of the study that provides a clear answer to the research question. In a short report with a single study, this might require no more than a sentence. In a longer report with multiple studies, it might require a paragraph or even two. The summary is often followed by a discussion of the theoretical implications of the research. Do the results provide support for any existing theories? If not, how  can  they be explained? Although you do not have to provide a definitive explanation or detailed theory for your results, you at least need to outline one or more possible explanations. In applied research—and often in basic research—there is also some discussion of the practical implications of the research. How can the results be used, and by whom, to accomplish some real-world goal?

The theoretical and practical implications are often followed by a discussion of the study’s limitations. Perhaps there are problems with its internal or external validity. Perhaps the manipulation was not very effective or the measures not very reliable. Perhaps there is some evidence that participants did not fully understand their task or that they were suspicious of the intent of the researchers. Now is the time to discuss these issues and how they might have affected the results. But do not overdo it. All studies have limitations, and most readers will understand that a different sample or different measures might have produced different results. Unless there is good reason to think they  would have, however, there is no reason to mention these routine issues. Instead, pick two or three limitations that seem like they could have influenced the results, explain how they could have influenced the results, and suggest ways to deal with them.

Most discussions end with some suggestions for future research. If the study did not satisfactorily answer the original research question, what will it take to do so? What  new  research questions has the study raised? This part of the discussion, however, is not just a list of new questions. It is a discussion of two or three of the most important unresolved issues. This means identifying and clarifying each question, suggesting some alternative answers, and even suggesting ways they could be studied.

Finally, some researchers are quite good at ending their articles with a sweeping or thought-provoking conclusion. Darley and Latané (1968) [4] , for example, ended their article on the bystander effect by discussing the idea that whether people help others may depend more on the situation than on their personalities. Their final sentence is, “If people understand the situational forces that can make them hesitate to intervene, they may better overcome them” (p. 383). However, this kind of ending can be difficult to pull off. It can sound overreaching or just banal and end up detracting from the overall impact of the article. It is often better simply to end when you have made your final point (although you should avoid ending on a limitation).

The references section begins on a new page with the heading “References” centred at the top of the page. All references cited in the text are then listed in the format presented earlier. They are listed alphabetically by the last name of the first author. If two sources have the same first author, they are listed alphabetically by the last name of the second author. If all the authors are the same, then they are listed chronologically by the year of publication. Everything in the reference list is double-spaced both within and between references.

Appendices, Tables, and Figures

Appendices, tables, and figures come after the references. An  appendix  is appropriate for supplemental material that would interrupt the flow of the research report if it were presented within any of the major sections. An appendix could be used to present lists of stimulus words, questionnaire items, detailed descriptions of special equipment or unusual statistical analyses, or references to the studies that are included in a meta-analysis. Each appendix begins on a new page. If there is only one, the heading is “Appendix,” centred at the top of the page. If there is more than one, the headings are “Appendix A,” “Appendix B,” and so on, and they appear in the order they were first mentioned in the text of the report.

After any appendices come tables and then figures. Tables and figures are both used to present results. Figures can also be used to illustrate theories (e.g., in the form of a flowchart), display stimuli, outline procedures, and present many other kinds of information. Each table and figure appears on its own page. Tables are numbered in the order that they are first mentioned in the text (“Table 1,” “Table 2,” and so on). Figures are numbered the same way (“Figure 1,” “Figure 2,” and so on). A brief explanatory title, with the important words capitalized, appears above each table. Each figure is given a brief explanatory caption, where (aside from proper nouns or names) only the first word of each sentence is capitalized. More details on preparing APA-style tables and figures are presented later in the book.

Sample APA-Style Research Report

Figures 11.2, 11.3, 11.4, and 11.5 show some sample pages from an APA-style empirical research report originally written by undergraduate student Tomoe Suyama at California State University, Fresno. The main purpose of these figures is to illustrate the basic organization and formatting of an APA-style empirical research report, although many high-level and low-level style conventions can be seen here too.

""

Key Takeaways

  • An APA-style empirical research report consists of several standard sections. The main ones are the abstract, introduction, method, results, discussion, and references.
  • The introduction consists of an opening that presents the research question, a literature review that describes previous research on the topic, and a closing that restates the research question and comments on the method. The literature review constitutes an argument for why the current study is worth doing.
  • The method section describes the method in enough detail that another researcher could replicate the study. At a minimum, it consists of a participants subsection and a design and procedure subsection.
  • The results section describes the results in an organized fashion. Each primary result is presented in terms of statistical results but also explained in words.
  • The discussion typically summarizes the study, discusses theoretical and practical implications and limitations of the study, and offers suggestions for further research.
  • Practice: Look through an issue of a general interest professional journal (e.g.,  Psychological Science ). Read the opening of the first five articles and rate the effectiveness of each one from 1 ( very ineffective ) to 5 ( very effective ). Write a sentence or two explaining each rating.
  • Practice: Find a recent article in a professional journal and identify where the opening, literature review, and closing of the introduction begin and end.
  • Practice: Find a recent article in a professional journal and highlight in a different colour each of the following elements in the discussion: summary, theoretical implications, practical implications, limitations, and suggestions for future research.

Long Descriptions

Figure 11.1 long description: Table showing three ways of organizing an APA-style method section.

In the simple method, there are two subheadings: “Participants” (which might begin “The participants were…”) and “Design and procedure” (which might begin “There were three conditions…”).

In the typical method, there are three subheadings: “Participants” (“The participants were…”), “Design” (“There were three conditions…”), and “Procedure” (“Participants viewed each stimulus on the computer screen…”).

In the complex method, there are four subheadings: “Participants” (“The participants were…”), “Materials” (“The stimuli were…”), “Design” (“There were three conditions…”), and “Procedure” (“Participants viewed each stimulus on the computer screen…”). [Return to Figure 11.1]

  • Bem, D. J. (2003). Writing the empirical journal article. In J. M. Darley, M. P. Zanna, & H. R. Roediger III (Eds.),  The compleat academic: A practical guide for the beginning social scientist  (2nd ed.). Washington, DC: American Psychological Association. ↵
  • Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility.  Journal of Personality and Social Psychology, 4 , 377–383. ↵

A type of research article which describes one or more new empirical studies conducted by the authors.

The page at the beginning of an APA-style research report containing the title of the article, the authors’ names, and their institutional affiliation.

A summary of a research study.

The third page of a manuscript containing the research question, the literature review, and comments about how to answer the research question.

An introduction to the research question and explanation for why this question is interesting.

A description of relevant previous research on the topic being discusses and an argument for why the research is worth addressing.

The end of the introduction, where the research question is reiterated and the method is commented upon.

The section of a research report where the method used to conduct the study is described.

The main results of the study, including the results from statistical analyses, are presented in a research article.

Section of a research report that summarizes the study's results and interprets them by referring back to the study's theoretical background.

Part of a research report which contains supplemental material.

Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.

Share This Book

major sections of a research report used in counseling research

Thank you for visiting nature.com. You are using a browser version with limited support for CSS. To obtain the best experience, we recommend you use a more up to date browser (or turn off compatibility mode in Internet Explorer). In the meantime, to ensure continued support, we are displaying the site without styles and JavaScript.

  • View all journals
  • My Account Login
  • Explore content
  • About the journal
  • Publish with us
  • Sign up for alerts
  • Open access
  • Published: 16 May 2024

The Egyptian pyramid chain was built along the now abandoned Ahramat Nile Branch

  • Eman Ghoneim   ORCID: orcid.org/0000-0003-3988-0335 1 ,
  • Timothy J. Ralph   ORCID: orcid.org/0000-0002-4956-606X 2 ,
  • Suzanne Onstine 3 ,
  • Raghda El-Behaedi 4 ,
  • Gad El-Qady 5 ,
  • Amr S. Fahil 6 ,
  • Mahfooz Hafez 5 ,
  • Magdy Atya 5 ,
  • Mohamed Ebrahim   ORCID: orcid.org/0000-0002-4068-5628 5 ,
  • Ashraf Khozym 5 &
  • Mohamed S. Fathy 6  

Communications Earth & Environment volume  5 , Article number:  233 ( 2024 ) Cite this article

61k Accesses

1725 Altmetric

Metrics details

  • Archaeology
  • Geomorphology
  • Hydrogeology
  • Sedimentology

The largest pyramid field in Egypt is clustered along a narrow desert strip, yet no convincing explanation as to why these pyramids are concentrated in this specific locality has been given so far. Here we use radar satellite imagery, in conjunction with geophysical data and deep soil coring, to investigate the subsurface structure and sedimentology in the Nile Valley next to these pyramids. We identify segments of a major extinct Nile branch, which we name The Ahramat Branch, running at the foothills of the Western Desert Plateau, where the majority of the pyramids lie. Many of the pyramids, dating to the Old and Middle Kingdoms, have causeways that lead to the branch and terminate with Valley Temples which may have acted as river harbors along it in the past. We suggest that The Ahramat Branch played a role in the monuments’ construction and that it was simultaneously active and used as a transportation waterway for workmen and building materials to the pyramids’ sites.

Similar content being viewed by others

major sections of a research report used in counseling research

Lidar reveals pre-Hispanic low-density urbanism in the Bolivian Amazon

major sections of a research report used in counseling research

Medieval demise of a Himalayan giant summit induced by mega-landslide

major sections of a research report used in counseling research

Quantitative assessment of the erosion and deposition effects of landslide-dam outburst flood, Eastern Himalaya

Introduction.

The landscape of the northern Nile Valley in Egypt, between Lisht in the south and the Giza Plateau in the north, was subject to a number of environmental and hydrological changes during the past few millennia 1 , 2 . In the Early Holocene (~12,000 years before present), the Sahara of North Africa transformed from a hyper-arid desert to a savannah-like environment, with large river systems and lake basins 3 , 4 due to an increase in global sea level at the end of the Last Glacial Maximum (LGM). The wet conditions of the Sahara provided a suitable habitat for people and wildlife, unlike in the Nile Valley, which was virtually inhospitable to humans because of the constantly higher river levels and swampy environment 5 . At this time, Nile River discharge was high, which is evident from the extensive deposition of organic-rich fluvial sediment in the Eastern Mediterranean basin 6 . Based on the interpretation of archeological material and pollen records, this period, known as the African Humid Period (AHP) (ca. 14,500–5000 years ago), was the most significant and persistent wet period from the early to mid-Holocene in the eastern Sahara region 7 , with an annual rainfall rate of 300–920 mm yr −1   8 . During this time the Nile would have had several secondary channels branching across the floodplain, similar to those described by early historians (e.g., Herodotus).

During the mid-Holocene (~10,000–6000 years ago), freshwater marshes were common within the Nile floodplain causing habitation to be more nucleated along the desert margins of the Nile Valley 9 . The desert margins provided a haven from the high Nile water. With the ending of the AHP and the beginning of the Late Holocene (~5500 years ago to present), rainfall greatly declined, and the region’s humid phase gradually came to an end with punctuated short wet episodes 10 . Due to increased aridity in the Sahara, more people moved out of the desert towards the Nile Valley and settled along the edge of the Nile floodplain. With the reduced precipitation, sedimentation increased in and around the Nile River channels causing the proximal floodplain to rise in height and adjacent marshland to decrease in the area 11 , 12 estimated the Nile flood levels to have ranged from 1 to 4 m above the baseline (~5000 BP). Inhabitants moved downhill to the Nile Valley and settled in the elevated areas on the floodplain, including the raised natural levees of the river and jeziras (islands). This was the beginning of the Old Kingdom Period (ca. 2686 BCE) and the time when early pyramid complexes, including the Step Pyramid of Djoser, were constructed at the margins of the floodplain. During this time the Nile discharge was still considerably higher than its present level. The high flow of the river, particularly during the short-wet intervals, enabled the Nile to maintain multiple branches, which meandered through its floodplain. Although the landscape of the Nile floodplain has greatly transformed due to river regulation associated with the construction of the Aswan High Dam in the 1960s, this region still retains some clear hydro-geomorphological traces of the abandoned river channels.

Since the beginning of the Pharaonic era, the Nile River has played a fundamental role in the rapid growth and expansion of the Egyptian civilization. Serving as their lifeline in a largely arid landscape, the Nile provided sustenance and functioned as the main water corridor that allowed for the transportation of goods and building materials. For this reason, most of the key cities and monuments were in close proximity to the banks of the Nile and its peripheral branches. Over time, however, the main course of the Nile River laterally migrated, and its peripheral branches silted up, leaving behind many ancient Egyptian sites distant from the present-day river course 9 , 13 , 14 , 15 . Yet, it is still unclear as to where exactly the ancient Nile courses were situated 16 , and whether different reaches of the Nile had single or multiple branches that were simultaneously active in the past. Given the lack of consensus amongst scholars regarding this subject, it is imperative to develop a comprehensive understanding of the Nile during the time of the ancient Egyptian civilization. Such a poor understanding of Nile River morphodynamics, particularly in the region that hosts the largest pyramid fields of Egypt, from Lisht to Giza, limits our understanding of how changes in the landscape influenced human activities and settlement patterns in this region, and significantly restricts our ability to understand the daily lives and stories of the ancient Egyptians.

Currently, much of the original surface of the ancient Nile floodplain is masked by either anthropogenic activity or broad silt and sand sheets. For this reason, singular approaches such as on-ground searches for the remains of hidden former Nile branches are both increasingly difficult and inauspicious. A number of studies have already been carried out in Egypt to locate segments of the ancient Nile course. For instance 9 , proposed that the axis of the Nile River ran far west of its modern course past ancient cities such as el-Ashmunein (Hermopolis) 13 . mapped the ancient hydrological landscape in the Luxor area and estimated both an eastward and westward Nile migration rate of 2–3 km per 1000 years. In the Nile Delta region 17 , detected several segments of buried Nile distributaries and elevated mounds using geoelectrical resistivity surveys. Similarly, a study by Bunbury and Lutley 14 identified a segment of an ancient Nile channel, about 5000 years old, near the ancient town of Memphis ( men-nefer ). More recently 15 , used cores taken around Memphis to reveal a section of a lateral ancient Nile branch that was dated to the Neolithic and Predynastic times (ca. 7000–5000 BCE). On the bank of this branch, Memphis, the first capital of unified Egypt, was founded in early Pharaonic times. Over the Dynastic period, this lateral branch then significantly migrated eastwards 15 . A study by Toonen et al. 18 , using borehole data and electrical resistivity tomography, further revealed a segment of an ancient Nile branch, dating to the New Kingdom Period, situated near the desert edge west of Luxor. This river branch would have connected important localities and thus played a significant role in the cultural landscape of this area. More recent research conducted further north by Sheisha et al. 2 , near the Giza Plateau, indicated the presence of a former river and marsh-like environment in the floodplain east of the three great Pyramids of Giza.

Even though the largest concentration of pyramids in Egypt are located along a narrow desert strip from south Lisht to Giza, no explanation has been offered as to why these pyramid fields were condensed in this particular area. Monumental structures, such as pyramids and temples, would logically be built near major waterways to facilitate the transportation of their construction materials and workers. Yet, no waterway has been found near the largest pyramid field in Egypt, with the Nile River lying several kilometers away. Even though many efforts to reconstruct the ancient Nile waterways have been conducted, they have largely been confined to small sites, which has led to the mapping of only fragmented sections of the ancient Nile channel systems.

In this work, we present remote sensing, geomorphological, soil coring and geophysical evidence to support the existence of a long-lost ancient river branch, the Ahramat Branch, and provide the first map of the paleohydrological setting in the Lisht-Giza area. The finding of the Ahramat Branch is not only crucial to our understanding of why the pyramids were built in these specific geographical areas, but also for understanding how the pyramids were accessed and constructed by the ancient population. It has been speculated by many scholars that the ancient Egyptians used the Nile River for help transporting construction materials to pyramid building sites, but until now, this ancient Nile branch was not fully uncovered or mapped. This work can help us better understand the former hydrological setting of this region, which would in turn help us learn more about the environmental parameters that may have influenced the decision to build these pyramids in their current locations during the time of Pharaonic Egypt.

Position and morphology of the Ahramat Branch

Synthetic Aperture Radar (SAR) imagery and radar high-resolution elevation data for the Nile floodplain and its desert margins, between south Lisht and the Giza Plateau area, provide evidence for the existence of segments of a major ancient river branch bordering 31 pyramids dating from the Old Kingdom to Second Intermediate Period (2686−1649 BCE) and spanning between Dynasties 3–13 (Fig.  1a ). This extinct branch is referred to hereafter as the Ahramat Branch, meaning the “Pyramids Branch” in Arabic. Although masked by the cultivated fields of the Nile floodplain, subtle topographic expressions of this former branch, now invisible in optical satellite data, can be traced on the ground surface by TanDEM-X (TDX) radar data and the Topographic Position Index (TPI). Data analysis indicates that this lateral distributary channel lies between 2.5 and 10.25 km west from the modern Nile River. The branch appears to have a surface channel depth between 2 and 8 m, a channel length of about 64 km and a channel width of 200–700 m, which is similar to the width of the contemporary neighboring Nile course. The size and longitudinal continuity of the Ahramat Branch and its proximity to all the pyramids in the study area implies a functional waterway of great significance.

figure 1

a Shows the Ahramat Branch borders a large number of pyramids dating from the Old Kingdom to the 2 nd Intermediate Period and spanning between Dynasties 3 and 13. b Shows Bahr el-Libeini canal and remnant of abandoned channel visible in the 1911 historical map (Egyptian Survey Department scale 1:50,000). c Bahr el-Libeini canal and the abandoned channel are overlain on satellite basemap. Bahr el-Libeini is possibly the last remnant of the Ahramat Branch before it migrated eastward. d A visible segment of the Ahramat Branch in TDX is now partially occupied by the modern Bahr el-Libeini canal. e A major segment of the Ahramat Branch, approximately 20 km long and 0.5 km wide, can be traced in the floodplain along the Western Desert Plateau south of the town of Jirza. Location of e is marked in white a box in a . (ESRI World Image Basemap, source: Esri, Maxar, Earthstar Geographics).

A trace of a 3 km river segment of the Ahramat Branch, with a width of about 260 m, is observable in the floodplain west of the Abu Sir pyramids field (Fig.  1b–d ). Another major segment of the Ahramat Branch, approximately 20 km long and 0.5 km wide can be traced in the floodplain along the Western Desert Plateau south of the town of Jirza (Fig.  1e ). The visible segments of the Ahramat Branch in TDX are now partially occupied by the modern Bahr el-Libeini canal. Such partial overlap between the courses of this canal, traced in the1911 historical maps (Egyptian Survey Department scale 1:50,000), and the Ahramat Branch is clear in areas where the Nile floodplain is narrower (Fig.  1b–d ), while in areas where the floodplain gets wider, the two water courses are about 2 km apart. In light of that, Bahr el-Libeini canal is possibly the last remnant of the Ahramat Branch before it migrated eastward, silted up, and vanished. In the course of the eastward migration over the Nile floodplain, the meandering Ahramat Branch would have left behind traces of abandoned channels (narrow oxbow lakes) which formed as a result of the river erosion through the neck of its meanders. A number of these abandoned channels can be traced in the 1911 historical maps near the foothill of the Western Desert plateau proving the eastward shifting of the branch at this locality (Fig.  1b–d ). The Dahshur Lake, southwest of the city of Dahshur, is most likely the last existing trace of the course of the Ahramat Branch.

Subsurface structure and sedimentology of the Ahramat Branch

Geophysical surveys using Ground Penetrating Radar (GPR) and Electromagnetic Tomography (EMT) along a 1.2 km long profile revealed a hidden river channel lying 1–1.5 m below the cultivated Nile floodplain (Fig.  2 ). The position and shape of this river channel is in an excellent match with those derived from radar satellite imagery for the Ahramat Branch. The EMT profile shows a distinct unconformity in the middle, which in this case indicates sediments that have a different texture than the overlying recent floodplain silt deposits and the sandy sediments that are adjacent to this former branch (Fig.  2 ). GPR overlapping the EMT profile from 600–1100 m on the transect confirms this. Here, we see evidence of an abandoned riverbed approximately 400 m wide and at least 25 m deep (width:depth ratio ~16) at this location. This branch has a symmetrical channel shape and has been infilled with sandy Neonile sediment different to other surrounding Neonile deposits and the underlying Eocene bedrock. The geophysical profile interpretation for the Ahramat Branch at this locality was validated using two sediment cores of depths 20 m (Core A) and 13 m (Core B) (Fig.  3 ). In Core A between the center and left bank of the former branch we found brown sandy mud at the floodplain surface and down to ~2.7 m with some limestone and chert fragments, a reddish sandy mud layer with gravel and handmade material inclusions at ~2.8 m, a gray sandy mud layer from ~3–5.8 m, another reddish sandy mud layer with gravel and freshwater mussel shells at ~6 m, black sandy mud from ~6–8 m, and sandy silt grading into clean, well-sorted medium sand dominated the profile from ~8 to >13 m. In Core B on the right bank of the former branch we found recently deposited brown sandy mud at the floodplain surface and down to ~1.5 m, alternating brown and gray layers of silty and sandy mud down to ~4 m (some reddish layers with gravel and handmade material inclusions), a black sandy mud layer from ~4–4.9 m, and another reddish sandy mud layer with gravel and freshwater mussel shells at ~5 m, before clean, well-sorted medium sand dominated the profile from 5 to >20 m. Shallow groundwater was encountered in both cores concurrently with the sand layers, indicating that the buried sedimentary structure of the abandoned Ahramat Branch acts as a conduit for subsurface water flow beneath the distal floodplain of the modern Nile River.

figure 2

a Locations of geophysical profile and soil drilling (ESRI World Image Basemap, source: Esri, Maxar, Earthstar Geographics). Photos taken from the field while using the b Electromagnetic Tomography (EMT) and c Ground Penetrating Radar (GPR). d Showing the apparent conductivity profile, e showing EMT profile, and f showing GPR profiles with overlain sketch of the channel boundary on the GPR graph. g Simplified interpretation of the buried channel with the location of the two-soil coring of A and B.

figure 3

It shows two-soil cores, A and core B, with soil profile descriptions, graphic core logs, sediment grain size charts, and example photographs.

Alignment of old and middle kingdom pyramids to the Ahramat Branch

The royal pyramids in ancient Egypt are not isolated monuments, but rather joined with several other structures to form complexes. Besides the pyramid itself, the pyramid complex includes the mortuary temple next to the pyramid, a valley temple farther away from the pyramid on the edge of a waterbody, and a long sloping causeway that connects the two temples. A causeway is a ceremonial raised walkway, which provides access to the pyramid site and was part of the religious aspects of the pyramid itself 19 . In the study area, it was found that many of the causeways of the pyramids run perpendicular to the course of the Ahramat Branch and terminate directly on its riverbank.

In Egyptian pyramid complexes, the valley temples at the end of causeways acted as river harbors. These harbors served as an entry point for the river borne visitors and ceremonial roads to the pyramid. Countless valley temples in Egypt have not yet been found and, therefore, might still be buried beneath the agricultural fields and desert sands along the riverbank of the Ahramat Branch. Five of these valley temples, however, partially survived and still exist in the study area. These temples include the valley temples of the Bent Pyramid, the Pyramid of Khafre, and the Pyramid of Menkaure from Dynasty 4; the valley temple of the Pyramid of Sahure from Dynasty 5, and the valley temple of the Pyramid of Pepi II from Dynasty 6. All the aforementioned temples are dated to the Old Kingdom. These five surviving temples were found to be positioned adjacent to the riverbank of the Ahramat Branch, which strongly implies that this river branch was contemporaneously functioning during the Old Kingdom, at the time of pyramid construction.

Analysis of the ground elevation of the 31 pyramids and their proximity to the floodplain, within the study area, helped explain the position and relative water level of the Ahramat Branch during the time between the Old Kingdom and Second Intermediate Period (ca. 2649–1540 BCE). Based on Fig. ( 4) , the Ahramat Branch had a high-water level during the first part of the Old Kingdom, especially during Dynasty 4. This is evident from the high ground elevation and long distance from the floodplain of the pyramids dated to that period. For instance, the remote position of the Bent and Red Pyramids in the desert, very far from the Nile floodplain, is a testament to the branch’s high-water level. On the contrary, during the Old Kingdom, our data demonstrated that the Ahramat Branch would have reached its lowest level during Dynasty 5. This is evident from the low altitudes and close proximity to the floodplain of most Dynasty 5 pyramids. The orientation of the Sahure Pyramid’s causeway (Dynasty 5) and the location of its valley temple in the low-lying floodplain provide compelling evidence for the relatively low water level proposition of the Ahramat Branch during this stage. The water level of the Ahramat Branch would have been slightly raised by the end of Dynasty 5 (the last 15–30 years), during the reign of King Unas and continued to rise during Dynasty 6. The position of Pepi II and Merenre Pyramids (Dynasty 6) deep in the desert, west of the Djedkare Isesi Pyramid (Dynasty 5), supports this notion.

figure 4

It explains the position and relative water level of the Ahramat Branch during the time between the Old Kingdom and Second Intermediate Period. a Shows positive correlation between the ground elevation of the pyramids and their proximity to the floodplain. b Shows positive correlation between the average ground elevation of the pyramids and their average proximity to the floodplain in each Dynasty. c Illustrates the water level interpretation by Hassan (1986) in Faiyum Lake in correlation to the average pyramids ground elevation and average distances to the floodplain in each Dynasty. d The data indicates that the Ahramat Branch had a high-water level during the first period of the Old Kingdom, especially during Dynasty 4. The water level reduced afterwards but was raised slightly in Dynasty 6. The position of the Middle Kingdom’s pyramids, which was at lower altitudes and in close proximity to the floodplain as compared to those of the Old Kingdom might be explained by the slight eastward migration of the Ahramat Branch.

In addition, our analysis in Fig. ( 4) , shows that the Qakare Ibi Pyramid of Dynasty 8 was constructed very close to the floodplain on very low elevation, which implies that the Nile water levels were very low at this time of the First Intermediate Period (2181–2055 BCE). This finding is in agreement with previous work conducted by Kitchen 20 which implies that the sudden collapse of the Old Kingdom in Egypt (after 4160 BCE) was largely caused by catastrophic failure of the annual flood of the Nile River for a period of 30–40 years. Data from soil cores near Memphis indicated that the Old Kingdom settlement is covered by about 3 m of sand 11 . Accordingly, the Ahramat Branch was initially positioned further west during the Old Kingdom and then shifted east during the Middle Kingdom due to the drought-induced sand encroachments of the First Intermediate Period, “a period of decentralization and weak pharaonic rule” in ancient Egypt, spanning about 125 years (2181–2055 BCE) post Old Kingdom era. Soil cores from the drilling program at Memphis show dominant dry conditions during the First Intermediate Period with massive eolian sand sheets extended over a distance of at least 0.5 km from the edge of the western desert escarpment 21 . The Ahramat Branch continued to move east during the Second Intermediate Period until it had gradually lost most of its water supply by the New Kingdom.

The western tributaries of the Ahramat Branch

Sentinal-1 radar data unveiled several wide channels (inlets) in the Western Desert Plateau connected to the Ahramat Branch. These inlets are currently covered by a layer of sand, thus partially invisible in multispectral satellite imagery. In Sentinal-1 radar imagery, the valley floors of these inlets appear darker than the surrounding surfaces, indicating subsurface fluvial deposits. These smooth deposits appear dark owing to the specular reflection of the radar signals away from the receiving antenna (Fig.  5a, b ) 22 . Considering that Sentinel-1’s C-Band has a penetration capability of approximately 50 cm in dry sand surface 23 , this would suggest that the riverbed of these channels is covered by at least half a meter of desert sand. Unlike these former inlets, the course of the Ahramat Branch is invisible in SAR data due in large part to the presence of dense farmlands in the floodplain, which limits radar penetration and the detection of underlying fluvial deposition. Moreover, the radar topographic data from TDX revealed the areal extent of these inlets. Their river courses were extracted from TDX data using the Topographic Position Index (TPI), an algorithm which is used to compute the topographic slope positions and to automate landform classifications (Fig.  5c, d ). Negative TPI values show the former riverbeds of the inlets, while positive TPI signify the riverbanks bordering them.

figure 5

a Conceptual sketch of the dependence of surface roughness on the sensor wavelength λ (modified after 48 ). b Expected backscatter characteristics in sandy desert areas with buried dry riverbeds. c Dry channels/inlets masked by desert sand in the Dahshur area. d The channels’ courses were extracted using TPI. Negative TPI values highlight the courses of the channels while positive TPI signify their banks.

Analysis indicated that several of the pyramid’s causeways, from Dynasties 4 and 6, lead to the inlet’s riverbanks (Fig.  6 ). Among these pyramids, are the Bent Pyramid, the first pyramid built by King Snefru in Dynasty 4 and among the oldest, largest, and best preserved ancient Egyptian pyramids that predates the Giza Pyramids. This pyramid is situated at the royal necropolis of Dahshur. The position of the Bent Pyramid, deep in the desert, far from the modern Nile floodplain, remained unexplained by researchers. This pyramid has a long causeway (~700 m) that is paved in the desert with limestone blocks and is attached to a large valley temple. Although all the pyramids’ valley temples in Egypt are connected to a water body and served as the landing point of all the river-borne visitors, the valley temple of the Bent Pyramid is oddly located deep in the desert, very distant from any waterways and more than 1 km away from the western edge of the modern Nile floodplain. Radar data revealed that this temple overlooked the bank of one of these extinct channels (called Wadi al-Taflah in historical maps). This extinct channel (referred to hereafter as the Dahshur Inlet due to its geographical location) is more than 200 m wide on average (Fig.  6 ). In light of this finding, the Dahshur Inlet, and the Ahramat Branch, are thus strongly argued to have been active during Dynasty 4 and must have played an important role in transporting building materials to the Bent Pyramid site. The Dahshur Inlet could have also served the adjacent Red Pyramid, the second pyramid built by the same king (King Snefru) in the Dahshur area. Yet, no traces of a causeway nor of a valley temple has been found thus far for the Red Pyramid. Interestingly, pyramids in this site dated to the Middle Kingdom, including the Amenemhat III pyramid, also known as the Black Pyramid, White Pyramid, and Pyramid of Senusret III, are all located at least 1 km far to the east of the Dynasty 4 pyramids (Bent and Red) near the floodplain (Fig.  6 ), which once again supports the notion of the eastward shift of the Ahramat Branch after the Old Kingdom.

figure 6

a The two inlets are presently covered by sand, thus invisible in optical satellite imagery. b Radar data, and c TDX topographic data reveal the riverbed of the Sakkara Inlet due to radar signals penetration capability in dry sand. b and c show the causeways of Pepi II and Merenre Pyramids, from Dynasty 6, leading to the Saqqara Inlet. The Valley Temple of Pepi II Pyramid overlooks the inlet riverbank, which indicates that the inlet, and thus Ahramat Branch, were active during Dynasty 6. d Radar data, and e TDX topographic data, reveal the riverbed of the Dahshur Inlet with the Bent Pyramid’s causeway of Dynasty 4 leading to the Inlet. The Valley Temple of the Bent Pyramid overlooks the riverbank of the Dahshur Inlet, which indicates that the inlet and the Ahramat Branch were active during Dynasty 4 of the Old Kingdom.

Radar satellite data revealed yet another sandy buried channel (tributary), about 6 km north of the Dahshur Inlet, to the west of the ancient city of Memphis. This former fluvial channel (referred to hereafter as the Saqqara Inlet due to its geographical location) connects to the Ahramat Branch with a broad river course of more than 600 m wide. Data shows that the causeways of the two pyramids of Pepi II and Merenre, situated at the royal necropolis of Saqqara and dated to Dynasty 6, lead directly to the banks of the Saqqara Inlet (see Fig.  6 ). The 400 m long causeway of Pepi II pyramid runs northeast over the southern Saqqara plateau and connects to the riverbank of the Saqqara Inlet from the south. The causeway terminates with a valley temple that lies on the inlet’s riverbank. The 250 long causeway of the Pyramid of Merenre runs southeast over the northern Saqqara plateau and connects to the riverbank of the Saqqara Inlet from the north. Since both pyramids dated to Dynasty 6, it can be argued that the water level of the Ahramat Branch was higher during this period, which would have flooded at least the entrance of its western inlets. This indicates that the downstream segment of the Saqqara Inlet was active during Dynasty 6 and played a vital role in transporting construction materials and workers to the two pyramids sites. The fact that none of the Dynasty 5 pyramids in this area (e.g., the Djedkare Isesi Pyramid) were positioned on the Saqqara Inlet suggests that the water level in the Ahramat Branch was not high enough to enter and submerge its inlets during this period.

In addition, our data analysis clearly shows that the causeways of the Khafre, Menkaure, and Khentkaus pyramids, in the Giza Plateau, lead to a smaller but equally important river bay associated with the Ahramat Branch. This lagoon-like river arm is referred to here as the Giza Inlet (Fig.  7 ). The Khufu Pyramid, the largest pyramid in Egypt, seems to be connected directly to the river course of the Ahramat Branch (Fig.  7 ). This finding proves once again that the Ahramat Branch and its western inlets were hydrologically active during Dynasty 4 of the Old Kingdom. Our ancient river inlet hypothesis is also in accordance with earlier research, conducted on the Giza Plateau, which indicates the presence of a river and marsh-like environment in the floodplain east of the Giza pyramids 2 .

figure 7

The causeways of the four Pyramids lead to an inlet, which we named the Giza Inlet, that connects from the west with the Ahramat Branch. These causeways connect the pyramids with valley temples which acted as river harbors in antiquity. These river segments are invisible in optical satellite imagery since they are masked by the cultivated lands of the Nile floodplain. The photo shows the valley temple of Khafre Pyramid (Photo source: Author Eman Ghoneim).

During the Old Kingdom Period, our analysis suggests that the Ahramat Branch had a high-water level during the first part, especially during Dynasty 4 whereas this water level was significantly decreased during Dynasty 5. This finding is in agreement with previous studies which indicate a high Nile discharge during Dynasty 4 (e.g., ref. 24 ). Sediment isotopic analysis of the Nile Delta indicated that Nile flows decrease more rapidly by the end of Dynasty 4 25 , in addition 26 reported that during Dynasties 5 and 6 the Nile flows were the lowest of the entire Dynastic period. This long-lost Ahramat Branch (possibly a former Yazoo tributary to the Nile) was large enough to carry a large volume of the Nile discharge in the past. The ancient channel segment uncovered by 1 , 15 west of the city of Memphis through borehole logs is most likely a small section of the large Ahramat Branch detected in this study. In the Middle Kingdom, although previous studies implied that the Nile witnessed abundant flood with occasional failures (e.g., ref. 27 ), our analysis shows that all the pyramids from the Middle Kingdom were built far east of their Old Kingdom counterparts, on lower altitudes and in close proximity to the floodplain as compared to those of the Old Kingdom. This paradox might be explained by the fact that the Ahramat Branch migrated eastward, slightly away from the Western Desert escarpment, prior to the construction of the Middle Kingdom pyramids, resulting in the pyramids being built eastward so that they could be near the waterway.

The eastward migration and abandonment of the Ahramat Branch could be attributed to gradual tilting of the Nile delta and floodplain in lower Egypt towards the northeast due to tectonic activity 28 . A topographic tilt such as this would have accelerated river movement eastward due to the river being located in the west at a relatively higher elevation of the floodplain. While near-channel floodplain deposition would naturally lead to alluvial ridge development around the active Ahramat Branch, and therefore to lower-lying tracts of adjacent floodplain to the east, regional tilting may explain the wholesale lateral migration of the river in that direction. The eastward migration and abandonment of the branch could also be ascribed to sand incursion due to the branch’s proximity to the Western Desert Plateau, where windblown sand is abundant. This would have increased sand deposition along the riverbanks and caused the river to silt up, particularly during periods of low flow. The region experienced drought during the First Intermediate Period, prior to the Middle Kingdom. In the area of Abu Rawash north 29 and Dahshur site 11 , settlements from the Early Dynastic and Old Kingdom were found to be covered by more than 3 m of desert sands. During this time, windblown sand engulfed the Old Kingdom settlements and desert sands extended eastward downhill over a distance of at least 0.5 km 21 . The abandonment of sites at Abusir (5 th Dynasty), where the early pottery-rich deposits are covered by wind-blown sand and then mud without sherds, can be used as evidence that the Ahramat Branch migrated eastward after the Old Kingdom. The increased sand deposition activity, during the end of the Old Kingdom, and throughout the First Intermediate Period, was most likely linked to the period of drought and desertification of the Sahara 30 . In addition, the reduced river discharge caused by decreased rainfall and increased aridity in the region would have gradually reduced the river course’s capacity, leading to silting and abandonment of the Ahramat Branch as the river migrated to the east.

The Dahshur, Saqqara, and Giza inlets, which were connected to the Ahramat Branch from the west, were remnants of past active drainage systems dated to the late Tertiary or the Pleistocene when rainwater was plentiful 31 . It is proposed that the downstream reaches of these former channels (wadis) were submerged during times of high-water levels of the Ahramat Branch, forming long narrow water arms (inlets) that gave a wedge-like shape to the western flank of the Ahramat Branch. During the Old Kingdom, the waters of these inlets would have flowed westward from the Ahramat Branch rather than from their headwaters. As the drought intensified during the First Intermediate Period, the water level of the Ahramat Branch was lowered and withdrew from its western inlets, causing them to silt up and eventually dry out. The Dahshur, Saqqara, and Giza inlets would have provided a bay environment where the water would have been calm enough for vessels and boats to dock far from the busy, open water of the Ahramat Branch.

Sediments from the Ahramat Branch riverbed, which were collected from the two deep soil cores (cores A and B), show an abrupt shift from well-sorted medium sands at depth to overlying finer materials with layers including gravel, shell, and handmade materials. This indicates a step-change from a relatively consistent higher-energy depositional regime to a generally lower-energy depositional regime with periodic flash floods at these sites. So, the Ahramat Branch in this region carried and deposited well-sorted medium sand during its last active phase, and over time became inactive, infilling with sand and mud until an abrupt change led the (by then) shallow depression fill with finer distal floodplain sediment (possibly in a wetland) that was utilized by people and experienced periodic flash flooding. Validation of the paleo-channel position and sediment type using these cores shows that the Ahramat Branch has similar morphological features and an upward-fining depositional sequence as that reported near Giza, where two cores were previously used to reconstruct late Holocene Nile floodplain paleo-environments 2 . Further deep soil coring could determine how consistent the geomorphological features are along the length of the Ahramat branch, and to help explain anomalies in areas where the branch has less surface expression and where remote sensing and geophysical techniques have limitations. Considering more core logs can give a better understanding of the floodplain and the buried paleo-channels.

The position of the Ahramat Branch along the western edge of the Nile floodplain suggests it to be the downstream extension of Bahr Yusef. In fact, Bahr Yusef’s course may have initially flowed north following the natural surface gradient of the floodplain before being forced to turn west to flow into the Fayum Depression. This assumption could be supported by the sharp westward bend of Bahr Yusef’s course at the entrance to the Fayum Depression, which could be a man-made attempt to change the waterflow direction of this branch. According to Römer 32 , during the Middle Kingdom, the Gadallah Dam located at the entrance of the Fayum, and a possible continuation running eastwards, blocked the flow of Bahr Yusef towards the north. However, a sluice, probably located near the village of el-Lahun, was created in order to better control the flow of water into the Fayum. When the sluice was locked, the water from Bahr Yusef was directed to the west and into the depression, and when the sluice was open, the water would flow towards the north via the course of the Ahramat Branch. Today, the abandoned Ahramat Branch north of Fayum appears to support subsurface water flow in the buried coarse sand bed layers, however these shallow groundwater levels are likely to be quite variable due to proximity of the bed layers to canals and other waterways that artificially maintain shallow groundwater. Groundwater levels in the region are known to be variable 33 , but data on shallow groundwater could be used to further validate the delineated paleo-channel of the Ahramat Branch.

The present work enabled the detection of segments of a major former Nile branch running at the foothills of the Western Desert Plateau, where the vast majority of the Ancient Egyptian pyramids lie. The enormity of this branch and its proximity to the pyramid complexes, in addition to the fact that the pyramids’ causeways terminate at its riverbank, all imply that this branch was active and operational during the construction phase of these pyramids. This waterway would have connected important locations in ancient Egypt, including cities and towns, and therefore, played an important role in the cultural landscape of the region. The eastward migration and abandonment of the Ahramat Branch could be attributed to gradual movement of the river to the lower-lying adjacent floodplain or tilting of the Nile floodplain toward the northeast as a result of tectonic activity, as well as windblown sand incursion due to the branch’s proximity to the Western Desert Plateau. The increased sand deposition was most likely related to periods of desertification of the Great Sahara in North Africa. In addition, the branch eastward movement and diminishing could be explained by the reduction of the river discharge and channel capacity caused by the decreased precipitation and increased aridity in the region, particularly during the end of the Old Kingdom.

The integration of radar satellite data with geophysical surveying and soil coring, which we utilized in this study, is a highly adaptable approach in locating similar former buried river systems in arid regions worldwide. Mapping the hidden course of the Ahramat Branch, allowed us to piece together a more complete picture of ancient Egypt’s former landscape and a possible water transportation route in Lower Egypt, in the area between Lisht and the Giza Plateau.

Revealing this extinct Nile branch can provide a more refined idea of where ancient settlements were possibly located in relation to it and prevent them from being lost to rapid urbanization. This could improve the protection measures of Egyptian cultural heritage. It is the hope that our findings can improve conservation measures and raise awareness of these sites for modern development planning. By understanding the landscape of the Nile floodplain and its environmental history, archeologists will be better equipped to prioritize locations for fieldwork investigation and, consequently, raise awareness of these sites for conservation purposes and modern development planning. Our finding has filled a much-needed knowledge gap related to the dominant waterscape in ancient Egypt, which could help inform and educate a wide array of global audiences about how earlier inhabitants were living and in what ways shifts in their landscape drove human activity in such an iconic region.

Materials and methods

The work comprised of two main elements: satellite remote sensing and historical maps and geophysical survey and sediment coring, complemented by archeological resources. Using this suite of investigative techniques provided insights into the nature and relationship of the former Ahramat Branch with the geographical location of the pyramid complexes in Egypt.

Satellite remote sensing and historical maps

Unlike optical sensors that image the land surface, radar sensors image the subsurface due to their unique ability to penetrate the ground and produce images of hidden paleo-rivers and structures. In this context, radar waves strip away the surface sand layer and expose previously unidentified buried channels. The penetration capability of radar waves in the hyper-arid regions of North Africa is well documented 4 , 34 , 35 , 36 , 37 . The penetration depth varies according to the radar wavelength used at the time of imaging. Radar signal penetration becomes possible without significant attenuation if the surface cover material is extremely dry (<1% moisture content), fine grained (<1/5 of the imaging wavelength) and physically homogeneous 23 . When penetrating desert sand, radar signals have the ability to detect subsurface soil roughness, texture, compactness, and dielectric properties 38 . We used the European Space Agency (ESA) Sentinel-1 data, a radar satellite constellation consisting of a C-Band synthetic aperture radar (SAR) sensor, operating at 5.405 GHz. The Sentinel-1 SAR image used here was acquired in a descending orbit with an interferometric wide swath mode (IW) at ground resolutions of 5 m × 20 m, and dual polarizations of VV + VH. Since Sentinal-1 is operated in the C-Band, it has an estimated penetration depth of 50 cm in very dry, sandy, loose soils 39 . We used ENVI v. 5.7 SARscape software for processing radar imagery. The used SAR processing sequences have generated geo-coded, orthorectified, terrain-corrected, noise free, radiometrically calibrated, and normalized Sentinel-1 images with a pixel size of 12.5 m. In SAR imagery subsurface fluvial deposits appear dark owing to specular reflection of the radar signals away from the receiving antenna, whereas buried coarse and compacted material, such as archeological remains appear bright due to diffuse reflection of radar signals 40 .

Other previous studies have shown that combining radar topographic imagery (e.g., Shuttle Radar Topography Mission-SRTM) with SAR images improves the extraction and delineation of mega paleo-drainage systems and lake basins concealed under present-day topographic signatures 3 , 4 , 22 , 41 . Topographic data represents a primary tool in investigating surface landforms and geomorphological change both spatially and temporally. This data is vital in mapping past river systems due to its ability to show subtle variations in landform morphology 37 . In low lying areas, such as the Nile floodplain, detailed elevation data can detect abandoned channels, fossilized natural levees, river meander scars and former islands, which are all crucial elements for reconstructing the ancient Nile hydrological network. In fact, the modern topography in many parts of the study area is still a good analog of the past landscape. In the present study, TanDEM-X (TDX) topographic data, from the German Aerospace Centre (DLR), has been utilized in ArcGIS Pro v. 3.1 software due to its fine spatial resolution of 0.4 arc-second ( ∼ 12 m). TDX is based on high frequency X-Band Synthetic Aperture Radar (SAR) (9.65 GHz) and has a relative vertical accuracy of 2 m for areas with a slope of ≤20% 42 . This data was found to be superior to other topographic DEMs (e.g., Shuttle Radar Topography Mission and ASTER Global Digital Elevation Map) in displaying fine topographic features even in the cultivated Nile floodplain, thus making it particularly well suited for this study. Similar archeological investigations using TDX elevation data in the flat terrains of the Seyhan River in Turkey and the Nile Delta 43 , 44 allowed for the detection of levees and other geomorphologic features in unprecedented spatial resolution. We used the Topographic Position Index (TPI) module of 45 with the TDX data by applying varying neighboring radiuses (20–100 m) to compute the difference between a cell elevation value and the average elevation of the neighborhood around that cell. TPI values of zero are either flat surfaces with minimal slope, or surfaces with a constant gradient. The TPI can be computed using the following expression 46 .

Where the scaleFactor is the outer radius in map units and Irad and Orad are the inner and outer radius of annulus in cells. Negative TPI values highlight abandoned riverbeds and meander scars, while positive TPI signify the riverbanks and natural levees bordering them.

The course of the Ahramat Branch was mapped from multiple data sources and used different approaches. For instance, some segments of the river course were derived automatically using the TPI approach, particularly in the cultivated floodplain, whereas others were mapped using radar roughness signatures specially in sandy desert areas. Moreover, a number of abandoned channel segments were digitized on screen from rectified historical maps (Egyptian Survey Department scale 1:50,000 collected on years 1910–1911) near the foothill of the Western Desert Plateau. These channel segments together with the former river course segments delineated from radar and topographic data were aggregated to generate the former Ahramat Branch. In addition to this and to ensure that none of the channel segments of the Ahramat Branch were left unmapped during the automated process, a systematic grid-based survey (through expert’s visual observation) was performed on the satellite data. Here, Landsat 8 and Sentinal-2 multispectral images, Sentinal-1 radar images and TDX topographic data were used as base layers, which were thoroughly examined, grid-square by grid-square (2*2 km per a square) at a full resolution, in order to identify small-scale fluvial landforms, anomalous agricultural field patterns and irregular ditches, and determine their spatial distributions. Here, ancient fluvial channels were identified using two key aspects: First, the sinuous geometry of natural and manmade features and, second the color tone variations in the satellite imagery. For example, clusters of contiguous pixels with darker tones and sinuous shapes may signify areas of a higher moisture content in optical imagery, and hence the possible existence of a buried riverbed. Stretching and edge detection were applied to enhance contrasts in satellite images brightness to enable the visualization of traces of buried river segments that would otherwise go unobserved. Lastly, all the pyramids and causeways in the study site, along with ancient harbors and valley temples, as indicators of preexisting river channels, were digitized from satellite data and available archeological resources and overlaid onto the delineated Ahramat Branch for geospatial analysis.

Geophysical survey and sediment coring

Geophysical measurements using Ground Penetrating Radar (GPR) and Electromagnetic Tomography (EMT) were utilized to map subsurface fluvial features and validate the satellite remote sensing findings. GPR is effective in detecting changes of dielectric constant properties of sediment layers, and its signal responses can be directly related to changes in relative porosity, material composition, and moisture content. Therefore, GPR can help in identifying transitional boundaries in subsurface layers. EMT, on the other hand, shows the variations and thickness of large-scale sedimentary deposits and is more useful in clay-rich soil than GPR. In summer 2022, a geophysical profile was measured using GPR and EMT units with a total length of approximately 1.2 km. The GPR survey was conducted with a central frequency antenna of 35 MHz and a trigger interval of 5 cm. The EMT survey was performed using the multi-frequency terrain conductivity (EM–34–3) measuring system with a spacing of 10–11 meters between stations. To validate the remote sensing and geophysical data, two sediment cores with depths of 20 m (Core A) and 13 m (Core B) were collected using a deep soil driller. These cores were collected from along the geophysical profile in the floodplain. Sieving and organic analysis were performed on the sediment samples at Tanta University sediment lab to extract information about grain size for soil texture and total organic carbon. In soil texture analysis medium to coarse sediment, such as sands, are typical for river channel sediments, loamy sand and sandy loam deposits can be interpreted as levees and crevasse splays, whereas fine texture deposits, such as silt loam, silty clay loam, and clay deposits, are representative of the more distal parts of the river floodplain 47 .

Data availability

Data for replicating the results of this study are available as supplementary files at: https://figshare.com/articles/journal_contribution/Pyramids_Elevations_and_Distances_xlsx/25216259 .

Bunbury, J., Tavares, A., Pennington, B. & Gonçalves, P. Development of the Memphite Floodplain: Landscape and Settlement Symbiosis in the Egyptian Capital Zone. In The Nile: Natural and Cultural Landscape in Egypt (eds. Willems, H. & Dahms, J.-M.) 71–96 (Transcript Verlag, 2017). https://doi.org/10.1515/9783839436158-003 .

Sheisha, H. et al. Nile waterscapes facilitated the construction of the Giza pyramids during the 3rd millennium BCE. Proc. Natl. Acad. Sci. 119 , e2202530119 (2022).

Article   CAS   Google Scholar  

Ghoneim, E. & El-Baz, F. K. DEM‐optical‐radar data integration for palaeohydrological mapping in the northern Darfur, Sudan: implication for groundwater exploration. Int. J. Remote Sens. 28 , 5001–5018 (2007).

Article   Google Scholar  

Ghoneim, E., Benedetti, M. M. & El-Baz, F. K. An integrated remote sensing and GIS analysis of the Kufrah Paleoriver, Eastern Sahara. Geomorphology 139 , 242–257 (2012).

Zaki, A. S. et al. Did increased flooding during the African Humid Period force migration of modern humans from the Nile Valley? Quat. Sci. Rev. 272 , 107200 (2021).

Rohling, E. J., Marino, G. & Grant, K. M. Mediterranean climate and oceanography, and the periodic development of anoxic events (sapropels). Earth Sci. Rev. 143 , 62–97 (2015).

DeMenocal, P. et al. Abrupt onset and termination of the African Humid Period: rapid climate responses to gradual insolation forcing. Quat. Sci. Rev. 19 , 347–361 (2000).

Ritchie, J. C. & Haynes, C. V. Holocene vegetation zonation in the eastern Sahara. Nature 330 , 645–647 (1987).

Butzer, K. W. Early Hydraulic Civilization in Egypt: A Study in Cultural Ecology (The University of Chicago press, Chicago [Ill.] London, 1976).

Kröpelin, S. et al. Climate-Driven Ecosystem Succession in the Sahara: The Past 6000 Years. Science 320 , 765–768 (2008).

Bunbury, J. & Jeffreys, D. Real and Literary Landscapes in Ancient Egypt. Camb. Archaeol. J. 21 , 65–76 (2011).

Sterling, S. Mortality Profiles as Indicators of Slowed Reproductive Rates: Evidence from Ancient Egypt. J. Anthropol. Archaeol. 18 , 319–343 (1999).

Hillier, J. K., Bunbury, J. M. & Graham, A. Monuments on a migrating Nile. J. Archaeol. Sci. 34 , 1011–1015 (2007).

Bunbury, J. & Lutley, K. The Nile on the move. https://api.semanticscholar.org/CorpusID:131474399 (2008).

Hassan, F. A., Hamdan, M. A., Flower, R. J., Shallaly, N. A. & Ebrahem, E. Holocene alluvial history and archaeological significance of the Nile floodplain in the Saqqara-Memphis region, Egypt. Quat. Sci. Rev. 176 , 51–70 (2017).

Bietak, M., Czerny, E. & Forstner-Müller, I. Cities and urbanism in ancient Egypt . Papers from a workshop in November 2006 at the Austrian Academy of Sciences (Austrian Academy of Sciences, 2010).

El-Qady, G., Shaaban, H., El-Said, A. A., Ghazala, H. & El-Shahat, A. Tracing of the defunct Canopic Nile branch using geoelectrical resistivity data around Itay El-Baroud area, Nile Delta, Egypt. J. Geophys. Eng. 8 , 83–91 (2011).

Toonen, W. H. J. et al. Holocene fluvial history of the Nile’s west bank at ancient Thebes, Luxor, Egypt, and its relation with cultural dynamics and basin-wide hydroclimatic variability. Geoarchaeology 33 , 273–290 (2018).

Lehner, M. The Complete Pyramids (Thames and Hudson, New York, 1997).

Kitchen, K. A. The chronology of ancient Egypt. World Archaeol. 23 , 201–208 (1991).

Giddy, L. & Jeffreys, D. Memphis, 1991. J. Egypt. Archaeol. 78 , 1–11 (1992).

Ghoneim, E., Robinson, C. & El‐Baz, F. Radar topography data reveal drainage relics in the eastern Sahara. Int. J. Remote Sens. 28 , 1759–1772 (2007).

Roth, L. & Elachi, C. Coherent electromagnetic losses by scattering from volume inhomogeneities. IEEE Trans. Antennas Propag. 23 , 674–675 (1975).

Hassan, F. A. Holocene lakes and prehistoric settlements of the Western Faiyum, Egypt. J. Archaeol. Sci. 13 , 483–501 (1986).

Woodward, J. C., Macklin, M. G., Krom, M. D. & Williams, M. A. J. The Nile: Evolution, Quaternary River Environments and Material Fluxes. In Large Rivers (ed. Gupta, A.) 261–292 (John Wiley & Sons, Ltd, Chichester, UK, 2007). https://doi.org/10.1002/9780470723722.ch13 .

Krom, M. D., Stanley, J. D., Cliff, R. A. & Woodward, J. C. Nile River sediment fluctuations over the past 7000 yr and their key role in sapropel development. Geology 30 , 71–74 (2002).

Stanley, J.-D., Krom, M. D., Cliff, R. A. & Woodward, J. C. Short contribution: Nile flow failure at the end of the Old Kingdom, Egypt: Strontium isotopic and petrologic evidence. Geoarchaeology 18 , 395–402 (2003).

Stanley, D. J. & Warne, A. G. Nile Delta: Recent Geological Evolution and Human Impact. Science 260 , 628–634 (1993).

Jones, M. A new old Kingdom settlement near Ausim: report of the archaeological discoveries made in the Barakat drain improvements project, https://api.semanticscholar.org/CorpusID:194486461 (1995).

Bunbury, J. M. The development of the River Nile and the Egyptian Civilization: A Water Historical Perspective with Focus on the First Intermediate Period. In A History of Water: Rivers and Society — From the Birth of Agriculture to Modern Times , Vol. 2 (eds. Tvedt, T. & Coopey, R) 50–69 (I.B. Tauris, 2010).

Bubenzer, O. & Riemer, H. Holocene climatic change and human settlement between the central Sahara and the Nile Valley: Archaeological and geomorphological results. Geoarchaeology 22 , 607–620 (2007).

Römer, C. The Nile in the Fayum: Strategies of Dominating and Using the Water Resources of the River in the Oasis in the Middle Kingdom and the Graeco-Roman Period. In The Nile: Natural and Cultural Landscape in Egypt (eds. Willems, H. & Dahms, J.-M.) 171–192 (transcript Verlag, 2017). https://doi.org/10.1515/9783839436158-006 .

Mansour, K. et al. Investigation of Groundwater Occurrences Along the Nile Valley Between South Cairo and Beni Suef, Egypt, Using Geophysical and Geodetic Techniques. Pure Appl. Geophys. 180 , 3071–3088 (2023).

McCauley, J. F. et al. Subsurface Valleys and Geoarcheology of the Eastern Sahara Revealed by Shuttle Radar. Science 218 , 1004–1020 (1982).

El-Baz, F. & Robinson, C. A. Paleo-channels revealed by SIR-C data in the Western Desert of Egypt: Implications to sand dune accumulations. In Proceedings of the 12th International Conference on Applied Geologic Remote Sensing , Vol. 1, I–469 (Environmental Research Institute of Michigan, Ann Arbor, 1997).

Robinson, C. A., El-Baz, F., Al-Saud, T. S. M. & Jeon, S. B. Use of radar data to delineate palaeodrainage leading to the Kufra Oasis in the eastern Sahara. J. Afr. Earth Sci. 44 , 229–240 (2006).

Ghoneim, E. Rimaal: A Sand Buried Structure of Possible Impact Origin in the Sahara: Optical and Radar Remote Sensing Investigation. Remote Sens. 10 , 880 (2018).

Ghoneim, E. M. Ibn-Batutah: A possible simple impact structure in southeastern Libya, a remote sensing study. Geomorphology 103 , 341–350 (2009).

Schaber, G. G., Kirk, R. L. & Strom, R. Data base of impact craters on Venus based on analysis of Magellan radar images and altimetry data. U.S. Geological Survey, Open-File Report, https://doi.org/10.3133/ofr98104 , https://pubs.usgs.gov/of/1998/0104/report.pdf (1998).

Ghoneim, E. & El-Baz, F. K. Satellite Image Data Integration for Groundwater Exploration in Egypt, https://api.semanticscholar.org/CorpusID:216495993 (2020).

Skonieczny, C. et al. African humid periods triggered the reactivation of a large river system in Western Sahara. Nat. Commun. 6 , 8751 (2015).

Wessel, B. et al. Accuracy assessment of the global TanDEM-X Digital Elevation Model with GPS data. ISPRS J. Photogramm. Remote Sens. 139 , 171–182 (2018).

Erasmi, S., Rosenbauer, R., Buchbach, R., Busche, T. & Rutishauser, S. Evaluating the Quality and Accuracy of TanDEM-X Digital Elevation Models at Archaeological Sites in the Cilician Plain, Turkey. Remote Sens. 6 , 9475–9493 (2014).

Ginau, A., Schiestl, R. & Wunderlich, J. Integrative geoarchaeological research on settlement patterns in the dynamic landscape of the northwestern Nile delta. Quat. Int. 511 , 51–67 (2019).

JENNESS, J. Topographic position index (tpi_jen.avx_extension for Arcview 3.x, v.1.3a, Jenness Enterprises [EB/OL], http://www.jennessent.com/arcview/tpi.htm (2006).

Weiss, A. D. Topographic position and landforms analysis, https://api.semanticscholar.org/CorpusID:131349144 (2001).

Verstraeten, G., Mohamed, I., Notebaert, B. & Willems, H. The Dynamic Nature of the Transition from the Nile Floodplain to the Desert in Central Egypt since the Mid-Holocene. In The Nile: Natural and Cultural Landscape in Egypt (eds. Willems, H. & Dahms, J.-M.) 239–254 (transcript Verlag, 2017). https://doi.org/10.1515/9783839436158-009 .

Meyer, F. Spaceborne Synthetic Aperture Radar: Principles, data access, and basic processing techniques. In Synthetic Aperture Radar the SAR Handbook: Comprehensive Methodologies for Forest Monitoring and Biomass Estimation. 21–64 (2019). https://doi.org/10.25966/nr2c-s697 , https://gis1.servirglobal.net/TrainingMaterials/SAR/SARHB_FullRes.pdf .

Download references

Acknowledgements

This work was funded by NSF grant # 2114295 awarded to E.G., S.O. and T.R. and partially supported by Research Momentum Fund, UNCW, to E.G. TanDEM-X data was awarded to E.G. and R.E by the German Aerospace Centre (DLR) (contract # DEM_OTHER2886). Permissions for collecting soil coring and sampling were obtained from the Faculty of Science, Tanta University, Egypt by coauthors Dr. Amr Fhail and Dr. Mohamed Fathy. Bradley Graves at Macquarie University assisted with preparation of the sedimentological figures. Hamada Salama at NRIAG assisted with the GPR field data collection.

Author information

Authors and affiliations.

Department of Earth and Ocean Sciences, University of North Carolina Wilmington, Wilmington, NC, 28403-5944, USA

Eman Ghoneim

School of Natural Sciences, Macquarie University, Macquarie, NSW, 2109, Australia

Timothy J. Ralph

Department of History, The University of Memphis, Memphis, TN, 38152-3450, USA

Suzanne Onstine

Near Eastern Languages and Civilizations, University of Chicago, Chicago, IL, 60637, USA

Raghda El-Behaedi

National Research Institute of Astronomy and Geophysics (NRIAG), Helwan, Cairo, 11421, Egypt

Gad El-Qady, Mahfooz Hafez, Magdy Atya, Mohamed Ebrahim & Ashraf Khozym

Geology Department, Faculty of Science, Tanta University, Tanta, 31527, Egypt

Amr S. Fahil & Mohamed S. Fathy

You can also search for this author in PubMed   Google Scholar

Contributions

Eman Ghoneim conceived the ideas, lead the research project, and conducted the data processing and interpretations. The manuscript was written and prepared by Eman Ghoneim. Timothy J. Ralph co-supervised the project, contributed to the geomorphological and sedimentological interpretations, edited the manuscript and the figures. Suzanne Onstine co-supervised the project, contributed to the archeological and historical interpretations, and edited the manuscript. Raghda El-Behaedi contributed to the remote sensing data processing and methodology and edited the manuscript. Gad El-Qady supervised the geophysical survey. Mahfooz Hafez, Magdy Atya, Mohamed Ebrahim, Ashraf Khozym designed, collected, and interpreted the GPR and EMT data. Amr S. Fahil and Mohamed S. Fathy supervised the soil coring, sediment analysis, drafted sedimentological figures and contributed to the interpretations. All authors reviewed the manuscript and participated in the fieldwork.

Corresponding author

Correspondence to Eman Ghoneim .

Ethics declarations

Competing interests.

The authors declare no competing interests.

Peer review

Peer review information.

Communications Earth & Environment thanks Ritambhara Upadhyay and Judith Bunbury for their contribution to the peer review of this work. Primary Handling Editors: Patricia Spellman and Joe Aslin. A peer review file is available.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary information

Peer review file, supplementary information file, rights and permissions.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

Reprints and permissions

About this article

Cite this article.

Ghoneim, E., Ralph, T.J., Onstine, S. et al. The Egyptian pyramid chain was built along the now abandoned Ahramat Nile Branch. Commun Earth Environ 5 , 233 (2024). https://doi.org/10.1038/s43247-024-01379-7

Download citation

Received : 06 December 2023

Accepted : 10 April 2024

Published : 16 May 2024

DOI : https://doi.org/10.1038/s43247-024-01379-7

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

By submitting a comment you agree to abide by our Terms and Community Guidelines . If you find something abusive or that does not comply with our terms or guidelines please flag it as inappropriate.

Quick links

  • Explore articles by subject
  • Guide to authors
  • Editorial policies

Sign up for the Nature Briefing newsletter — what matters in science, free to your inbox daily.

major sections of a research report used in counseling research

Chapter 11: Presenting Your Research

Writing a research report in american psychological association (apa) style, learning objectives.

  • Identify the major sections of an APA-style research report and the basic contents of each section.
  • Plan and write an effective APA-style research report.

In this section, we look at how to write an APA-style empirical research report , an article that presents the results of one or more new studies. Recall that the standard sections of an empirical research report provide a kind of outline. Here we consider each of these sections in detail, including what information it contains, how that information is formatted and organized, and tips for writing each section. At the end of this section is a sample APA-style research report that illustrates many of these principles.

Sections of a Research Report

Title page and abstract.

An APA-style research report begins with a  title page . The title is centred in the upper half of the page, with each important word capitalized. The title should clearly and concisely (in about 12 words or fewer) communicate the primary variables and research questions. This sometimes requires a main title followed by a subtitle that elaborates on the main title, in which case the main title and subtitle are separated by a colon. Here are some titles from recent issues of professional journals published by the American Psychological Association.

  • Sex Differences in Coping Styles and Implications for Depressed Mood
  • Effects of Aging and Divided Attention on Memory for Items and Their Contexts
  • Computer-Assisted Cognitive Behavioural Therapy for Child Anxiety: Results of a Randomized Clinical Trial
  • Virtual Driving and Risk Taking: Do Racing Games Increase Risk-Taking Cognitions, Affect, and Behaviour?

Below the title are the authors’ names and, on the next line, their institutional affiliation—the university or other institution where the authors worked when they conducted the research. As we have already seen, the authors are listed in an order that reflects their contribution to the research. When multiple authors have made equal contributions to the research, they often list their names alphabetically or in a randomly determined order.

It’s  Soooo  Cute!  How Informal Should an Article Title Be?

In some areas of psychology, the titles of many empirical research reports are informal in a way that is perhaps best described as “cute.” They usually take the form of a play on words or a well-known expression that relates to the topic under study. Here are some examples from recent issues of the Journal Psychological Science .

  • “Smells Like Clean Spirit: Nonconscious Effects of Scent on Cognition and Behavior”
  • “Time Crawls: The Temporal Resolution of Infants’ Visual Attention”
  • “Scent of a Woman: Men’s Testosterone Responses to Olfactory Ovulation Cues”
  • “Apocalypse Soon?: Dire Messages Reduce Belief in Global Warming by Contradicting Just-World Beliefs”
  • “Serial vs. Parallel Processing: Sometimes They Look Like Tweedledum and Tweedledee but They Can (and Should) Be Distinguished”
  • “How Do I Love Thee? Let Me Count the Words: The Social Effects of Expressive Writing”

Individual researchers differ quite a bit in their preference for such titles. Some use them regularly, while others never use them. What might be some of the pros and cons of using cute article titles?

For articles that are being submitted for publication, the title page also includes an author note that lists the authors’ full institutional affiliations, any acknowledgments the authors wish to make to agencies that funded the research or to colleagues who commented on it, and contact information for the authors. For student papers that are not being submitted for publication—including theses—author notes are generally not necessary.

The  abstract  is a summary of the study. It is the second page of the manuscript and is headed with the word  Abstract . The first line is not indented. The abstract presents the research question, a summary of the method, the basic results, and the most important conclusions. Because the abstract is usually limited to about 200 words, it can be a challenge to write a good one.

Introduction

The  introduction  begins on the third page of the manuscript. The heading at the top of this page is the full title of the manuscript, with each important word capitalized as on the title page. The introduction includes three distinct subsections, although these are typically not identified by separate headings. The opening introduces the research question and explains why it is interesting, the literature review discusses relevant previous research, and the closing restates the research question and comments on the method used to answer it.

The Opening

The  opening , which is usually a paragraph or two in length, introduces the research question and explains why it is interesting. To capture the reader’s attention, researcher Daryl Bem recommends starting with general observations about the topic under study, expressed in ordinary language (not technical jargon)—observations that are about people and their behaviour (not about researchers or their research; Bem, 2003 [1] ). Concrete examples are often very useful here. According to Bem, this would be a poor way to begin a research report:

Festinger’s theory of cognitive dissonance received a great deal of attention during the latter part of the 20th century (p. 191)

The following would be much better:

The individual who holds two beliefs that are inconsistent with one another may feel uncomfortable. For example, the person who knows that he or she enjoys smoking but believes it to be unhealthy may experience discomfort arising from the inconsistency or disharmony between these two thoughts or cognitions. This feeling of discomfort was called cognitive dissonance by social psychologist Leon Festinger (1957), who suggested that individuals will be motivated to remove this dissonance in whatever way they can (p. 191).

After capturing the reader’s attention, the opening should go on to introduce the research question and explain why it is interesting. Will the answer fill a gap in the literature? Will it provide a test of an important theory? Does it have practical implications? Giving readers a clear sense of what the research is about and why they should care about it will motivate them to continue reading the literature review—and will help them make sense of it.

Breaking the Rules

Researcher Larry Jacoby reported several studies showing that a word that people see or hear repeatedly can seem more familiar even when they do not recall the repetitions—and that this tendency is especially pronounced among older adults. He opened his article with the following humourous anecdote:

A friend whose mother is suffering symptoms of Alzheimer’s disease (AD) tells the story of taking her mother to visit a nursing home, preliminary to her mother’s moving there. During an orientation meeting at the nursing home, the rules and regulations were explained, one of which regarded the dining room. The dining room was described as similar to a fine restaurant except that tipping was not required. The absence of tipping was a central theme in the orientation lecture, mentioned frequently to emphasize the quality of care along with the advantages of having paid in advance. At the end of the meeting, the friend’s mother was asked whether she had any questions. She replied that she only had one question: “Should I tip?” (Jacoby, 1999, p. 3)

Although both humour and personal anecdotes are generally discouraged in APA-style writing, this example is a highly effective way to start because it both engages the reader and provides an excellent real-world example of the topic under study.

The Literature Review

Immediately after the opening comes the  literature review , which describes relevant previous research on the topic and can be anywhere from several paragraphs to several pages in length. However, the literature review is not simply a list of past studies. Instead, it constitutes a kind of argument for why the research question is worth addressing. By the end of the literature review, readers should be convinced that the research question makes sense and that the present study is a logical next step in the ongoing research process.

Like any effective argument, the literature review must have some kind of structure. For example, it might begin by describing a phenomenon in a general way along with several studies that demonstrate it, then describing two or more competing theories of the phenomenon, and finally presenting a hypothesis to test one or more of the theories. Or it might describe one phenomenon, then describe another phenomenon that seems inconsistent with the first one, then propose a theory that resolves the inconsistency, and finally present a hypothesis to test that theory. In applied research, it might describe a phenomenon or theory, then describe how that phenomenon or theory applies to some important real-world situation, and finally suggest a way to test whether it does, in fact, apply to that situation.

Looking at the literature review in this way emphasizes a few things. First, it is extremely important to start with an outline of the main points that you want to make, organized in the order that you want to make them. The basic structure of your argument, then, should be apparent from the outline itself. Second, it is important to emphasize the structure of your argument in your writing. One way to do this is to begin the literature review by summarizing your argument even before you begin to make it. “In this article, I will describe two apparently contradictory phenomena, present a new theory that has the potential to resolve the apparent contradiction, and finally present a novel hypothesis to test the theory.” Another way is to open each paragraph with a sentence that summarizes the main point of the paragraph and links it to the preceding points. These opening sentences provide the “transitions” that many beginning researchers have difficulty with. Instead of beginning a paragraph by launching into a description of a previous study, such as “Williams (2004) found that…,” it is better to start by indicating something about why you are describing this particular study. Here are some simple examples:

Another example of this phenomenon comes from the work of Williams (2004).

Williams (2004) offers one explanation of this phenomenon.

An alternative perspective has been provided by Williams (2004).

We used a method based on the one used by Williams (2004).

Finally, remember that your goal is to construct an argument for why your research question is interesting and worth addressing—not necessarily why your favourite answer to it is correct. In other words, your literature review must be balanced. If you want to emphasize the generality of a phenomenon, then of course you should discuss various studies that have demonstrated it. However, if there are other studies that have failed to demonstrate it, you should discuss them too. Or if you are proposing a new theory, then of course you should discuss findings that are consistent with that theory. However, if there are other findings that are inconsistent with it, again, you should discuss them too. It is acceptable to argue that the  balance  of the research supports the existence of a phenomenon or is consistent with a theory (and that is usually the best that researchers in psychology can hope for), but it is not acceptable to  ignore contradictory evidence. Besides, a large part of what makes a research question interesting is uncertainty about its answer.

The Closing

The  closing  of the introduction—typically the final paragraph or two—usually includes two important elements. The first is a clear statement of the main research question or hypothesis. This statement tends to be more formal and precise than in the opening and is often expressed in terms of operational definitions of the key variables. The second is a brief overview of the method and some comment on its appropriateness. Here, for example, is how Darley and Latané (1968) [2] concluded the introduction to their classic article on the bystander effect:

These considerations lead to the hypothesis that the more bystanders to an emergency, the less likely, or the more slowly, any one bystander will intervene to provide aid. To test this proposition it would be necessary to create a situation in which a realistic “emergency” could plausibly occur. Each subject should also be blocked from communicating with others to prevent his getting information about their behaviour during the emergency. Finally, the experimental situation should allow for the assessment of the speed and frequency of the subjects’ reaction to the emergency. The experiment reported below attempted to fulfill these conditions. (p. 378)

Thus the introduction leads smoothly into the next major section of the article—the method section.

The  method section  is where you describe how you conducted your study. An important principle for writing a method section is that it should be clear and detailed enough that other researchers could replicate the study by following your “recipe.” This means that it must describe all the important elements of the study—basic demographic characteristics of the participants, how they were recruited, whether they were randomly assigned, how the variables were manipulated or measured, how counterbalancing was accomplished, and so on. At the same time, it should avoid irrelevant details such as the fact that the study was conducted in Classroom 37B of the Industrial Technology Building or that the questionnaire was double-sided and completed using pencils.

The method section begins immediately after the introduction ends with the heading “Method” (not “Methods”) centred on the page. Immediately after this is the subheading “Participants,” left justified and in italics. The participants subsection indicates how many participants there were, the number of women and men, some indication of their age, other demographics that may be relevant to the study, and how they were recruited, including any incentives given for participation.

Figure 11.1 Three Ways of Organizing an APA-Style Method

Figure 11.1 Three Ways of Organizing an APA-Style Method

After the participants section, the structure can vary a bit. Figure 11.1 shows three common approaches. In the first, the participants section is followed by a design and procedure subsection, which describes the rest of the method. This works well for methods that are relatively simple and can be described adequately in a few paragraphs. In the second approach, the participants section is followed by separate design and procedure subsections. This works well when both the design and the procedure are relatively complicated and each requires multiple paragraphs.

What is the difference between design and procedure? The design of a study is its overall structure. What were the independent and dependent variables? Was the independent variable manipulated, and if so, was it manipulated between or within subjects? How were the variables operationally defined? The procedure is how the study was carried out. It often works well to describe the procedure in terms of what the participants did rather than what the researchers did. For example, the participants gave their informed consent, read a set of instructions, completed a block of four practice trials, completed a block of 20 test trials, completed two questionnaires, and were debriefed and excused.

In the third basic way to organize a method section, the participants subsection is followed by a materials subsection before the design and procedure subsections. This works well when there are complicated materials to describe. This might mean multiple questionnaires, written vignettes that participants read and respond to, perceptual stimuli, and so on. The heading of this subsection can be modified to reflect its content. Instead of “Materials,” it can be “Questionnaires,” “Stimuli,” and so on.

The  results section  is where you present the main results of the study, including the results of the statistical analyses. Although it does not include the raw data—individual participants’ responses or scores—researchers should save their raw data and make them available to other researchers who request them. Several journals now encourage the open sharing of raw data online.

Although there are no standard subsections, it is still important for the results section to be logically organized. Typically it begins with certain preliminary issues. One is whether any participants or responses were excluded from the analyses and why. The rationale for excluding data should be described clearly so that other researchers can decide whether it is appropriate. A second preliminary issue is how multiple responses were combined to produce the primary variables in the analyses. For example, if participants rated the attractiveness of 20 stimulus people, you might have to explain that you began by computing the mean attractiveness rating for each participant. Or if they recalled as many items as they could from study list of 20 words, did you count the number correctly recalled, compute the percentage correctly recalled, or perhaps compute the number correct minus the number incorrect? A third preliminary issue is the reliability of the measures. This is where you would present test-retest correlations, Cronbach’s α, or other statistics to show that the measures are consistent across time and across items. A final preliminary issue is whether the manipulation was successful. This is where you would report the results of any manipulation checks.

The results section should then tackle the primary research questions, one at a time. Again, there should be a clear organization. One approach would be to answer the most general questions and then proceed to answer more specific ones. Another would be to answer the main question first and then to answer secondary ones. Regardless, Bem (2003) [3] suggests the following basic structure for discussing each new result:

  • Remind the reader of the research question.
  • Give the answer to the research question in words.
  • Present the relevant statistics.
  • Qualify the answer if necessary.
  • Summarize the result.

Notice that only Step 3 necessarily involves numbers. The rest of the steps involve presenting the research question and the answer to it in words. In fact, the basic results should be clear even to a reader who skips over the numbers.

The  discussion  is the last major section of the research report. Discussions usually consist of some combination of the following elements:

  • Summary of the research
  • Theoretical implications
  • Practical implications
  • Limitations
  • Suggestions for future research

The discussion typically begins with a summary of the study that provides a clear answer to the research question. In a short report with a single study, this might require no more than a sentence. In a longer report with multiple studies, it might require a paragraph or even two. The summary is often followed by a discussion of the theoretical implications of the research. Do the results provide support for any existing theories? If not, how  can  they be explained? Although you do not have to provide a definitive explanation or detailed theory for your results, you at least need to outline one or more possible explanations. In applied research—and often in basic research—there is also some discussion of the practical implications of the research. How can the results be used, and by whom, to accomplish some real-world goal?

The theoretical and practical implications are often followed by a discussion of the study’s limitations. Perhaps there are problems with its internal or external validity. Perhaps the manipulation was not very effective or the measures not very reliable. Perhaps there is some evidence that participants did not fully understand their task or that they were suspicious of the intent of the researchers. Now is the time to discuss these issues and how they might have affected the results. But do not overdo it. All studies have limitations, and most readers will understand that a different sample or different measures might have produced different results. Unless there is good reason to think they  would have, however, there is no reason to mention these routine issues. Instead, pick two or three limitations that seem like they could have influenced the results, explain how they could have influenced the results, and suggest ways to deal with them.

Most discussions end with some suggestions for future research. If the study did not satisfactorily answer the original research question, what will it take to do so? What  new  research questions has the study raised? This part of the discussion, however, is not just a list of new questions. It is a discussion of two or three of the most important unresolved issues. This means identifying and clarifying each question, suggesting some alternative answers, and even suggesting ways they could be studied.

Finally, some researchers are quite good at ending their articles with a sweeping or thought-provoking conclusion. Darley and Latané (1968) [4] , for example, ended their article on the bystander effect by discussing the idea that whether people help others may depend more on the situation than on their personalities. Their final sentence is, “If people understand the situational forces that can make them hesitate to intervene, they may better overcome them” (p. 383). However, this kind of ending can be difficult to pull off. It can sound overreaching or just banal and end up detracting from the overall impact of the article. It is often better simply to end when you have made your final point (although you should avoid ending on a limitation).

The references section begins on a new page with the heading “References” centred at the top of the page. All references cited in the text are then listed in the format presented earlier. They are listed alphabetically by the last name of the first author. If two sources have the same first author, they are listed alphabetically by the last name of the second author. If all the authors are the same, then they are listed chronologically by the year of publication. Everything in the reference list is double-spaced both within and between references.

Appendices, Tables, and Figures

Appendices, tables, and figures come after the references. An  appendix  is appropriate for supplemental material that would interrupt the flow of the research report if it were presented within any of the major sections. An appendix could be used to present lists of stimulus words, questionnaire items, detailed descriptions of special equipment or unusual statistical analyses, or references to the studies that are included in a meta-analysis. Each appendix begins on a new page. If there is only one, the heading is “Appendix,” centred at the top of the page. If there is more than one, the headings are “Appendix A,” “Appendix B,” and so on, and they appear in the order they were first mentioned in the text of the report.

After any appendices come tables and then figures. Tables and figures are both used to present results. Figures can also be used to illustrate theories (e.g., in the form of a flowchart), display stimuli, outline procedures, and present many other kinds of information. Each table and figure appears on its own page. Tables are numbered in the order that they are first mentioned in the text (“Table 1,” “Table 2,” and so on). Figures are numbered the same way (“Figure 1,” “Figure 2,” and so on). A brief explanatory title, with the important words capitalized, appears above each table. Each figure is given a brief explanatory caption, where (aside from proper nouns or names) only the first word of each sentence is capitalized. More details on preparing APA-style tables and figures are presented later in the book.

Sample APA-Style Research Report

Figures 11.2, 11.3, 11.4, and 11.5 show some sample pages from an APA-style empirical research report originally written by undergraduate student Tomoe Suyama at California State University, Fresno. The main purpose of these figures is to illustrate the basic organization and formatting of an APA-style empirical research report, although many high-level and low-level style conventions can be seen here too.

Figure 11.2 Title Page and Abstract. This student paper does not include the author note on the title page. The abstract appears on its own page.

Figure 11.2 Title Page and Abstract. This student paper does not include the author note on the title page. The abstract appears on its own page.

Figure 11.3 Introduction and Method. Note that the introduction is headed with the full title, and the method section begins immediately after the introduction ends.

Figure 11.3 Introduction and Method. Note that the introduction is headed with the full title, and the method section begins immediately after the introduction ends.

Figure 11.4 Results and Discussion The discussion begins immediately after the results section ends.

Figure 11.4 Results and Discussion The discussion begins immediately after the results section ends.

Figure 11.5 References and Figure. If there were appendices or tables, they would come before the figure.

Figure 11.5 References and Figure. If there were appendices or tables, they would come before the figure.

Key Takeaways

  • An APA-style empirical research report consists of several standard sections. The main ones are the abstract, introduction, method, results, discussion, and references.
  • The introduction consists of an opening that presents the research question, a literature review that describes previous research on the topic, and a closing that restates the research question and comments on the method. The literature review constitutes an argument for why the current study is worth doing.
  • The method section describes the method in enough detail that another researcher could replicate the study. At a minimum, it consists of a participants subsection and a design and procedure subsection.
  • The results section describes the results in an organized fashion. Each primary result is presented in terms of statistical results but also explained in words.
  • The discussion typically summarizes the study, discusses theoretical and practical implications and limitations of the study, and offers suggestions for further research.
  • Practice: Look through an issue of a general interest professional journal (e.g.,  Psychological Science ). Read the opening of the first five articles and rate the effectiveness of each one from 1 ( very ineffective ) to 5 ( very effective ). Write a sentence or two explaining each rating.
  • Practice: Find a recent article in a professional journal and identify where the opening, literature review, and closing of the introduction begin and end.
  • Practice: Find a recent article in a professional journal and highlight in a different colour each of the following elements in the discussion: summary, theoretical implications, practical implications, limitations, and suggestions for future research.
  • Bem, D. J. (2003). Writing the empirical journal article. In J. M. Darley, M. P. Zanna, & H. R. Roediger III (Eds.),  The compleat academic: A practical guide for the beginning social scientist  (2nd ed.). Washington, DC: American Psychological Association. ↵
  • Darley, J. M., & Latané, B. (1968). Bystander intervention in emergencies: Diffusion of responsibility.  Journal of Personality and Social Psychology, 4 , 377–383. ↵
  • Research Methods in Psychology. Authored by : Paul C. Price, Rajiv S. Jhangiani, and I-Chant A. Chiang. Provided by : BCCampus. Located at : https://opentextbc.ca/researchmethods/ . License : CC BY-NC-SA: Attribution-NonCommercial-ShareAlike

Footer Logo Lumen Candela

Privacy Policy

IMAGES

  1. CNL-540 Week 2 DQ.docx

    major sections of a research report used in counseling research

  2. CNL-540 Topic 2 DQ 1.txt

    major sections of a research report used in counseling research

  3. Research Report Writing

    major sections of a research report used in counseling research

  4. Research 7 1.docx

    major sections of a research report used in counseling research

  5. Topic 7.docx

    major sections of a research report used in counseling research

  6. Sample Research Proposal On Guidance And Counseling

    major sections of a research report used in counseling research

VIDEO

  1. Counseling Research-Week 1-Intro and Syllabus

  2. Career Counseling: Research and Management Date : 30/04/2022 time: 03:00 pm

  3. Maximizing Your Membership: RCOT membership benefits webinar from 30 October 2023

  4. CSJ Webinar: FUBU

  5. Results Section #1: What to Include_Shorts

  6. How Much Can You Earn with a Psychology Degree?

COMMENTS

  1. Reporting Standards for Research in Psychology

    The standards are categorized into the sections of a research report used by APA journals. To illustrate how the tables would be used, note that the Method section in Table 1 is divided into subsections regarding participant characteristics, sampling procedures, sample size, measures and covariates, and an overall categorization of the research ...

  2. 11.2 Writing a Research Report in American Psychological Association

    An appendix is appropriate for supplemental material that would interrupt the flow of the research report if it were presented within any of the major sections. An appendix could be used to present lists of stimulus words, questionnaire items, detailed descriptions of special equipment or unusual statistical analyses, or references to the ...

  3. PDF Doing Research in Counselling and Psychotherapy

    Principle 1: The primary aim of research is to create knowledge products. Principle 2: The meaning, significance and value of any research study depend on where it fits within the existing literature. Principle 3: Developing reliable and practically useful research-based knowledge in the field of counselling and psychotherapy requires the ...

  4. Research Paper Structure

    A complete research paper in APA style that is reporting on experimental research will typically contain a Title page, Abstract, Introduction, Methods, Results, Discussion, and References sections. 1 Many will also contain Figures and Tables and some will have an Appendix or Appendices. These sections are detailed as follows (for a more in ...

  5. Guidelines and Recommendations for Writing a Rigorous Quantitative

    The Methods section is typically the second major section in a research manuscript and can begin with an overview of the theoretical framework and research paradigm that ground the study (Creswell & Creswell, 2018; Leedy & Ormrod, 2019). ... Counseling researchers should report and explain the selection of their data analytic procedures (e.g ...

  6. Writing guidelines for Professional Psychology: Research and Practice

    Rather, when a survey or research project is presented, this may be done in a middle section labeled "The Survey" or "The Exploration" or "The Evaluation." Brief presentations of the most critical aspects of method and the major or unexpected findings are made, along with discussion of the findings that actually warrant discussion.

  7. Journal of Counseling Psychology

    The Journal of Counseling Psychology® publishes empirical research in the areas of. counseling activities (including assessment, interventions, consultation, supervision, training, prevention, psychological education, and advocacy) career and educational development and vocational psychology. diversity and underrepresented populations in ...

  8. Chapter 2.5 The Research Report

    The title page of a research report serves two important functions. First, it provides a quick summary of the research, including the title of the article, authors' names, and affiliation. Second, it provides a means for a blind evaluation. When submitted to a professional journal, a short title is placed on the title page and carried ...

  9. Research in Counseling: Methodological and Professional Issues

    1 Unless explicitly noted, the terms counseling and psychotherapy shall be used interchangeably. While there are historical, political and sometimes substantive bases for differentiating these terms in some contexts, research issues, especially methodological ones, are so intertwined between the activities conventionally labeled counseling and psychotherapy, that differential use of the two ...

  10. Contemporary Issues in Reporting Statistical, Practical, and Clinical

    Null hypothesis statistical testing (NHST) represents a primary method of addressing quantitative results in counseling research. The use of NHST and the expression of results are often limited to the populations of interest, and many studies may not be replicable. Furthermore, the communication of results in terms of practical and clinical ...

  11. The Major Sections of a Research Study According to APA

    In the early 1900s, there was no established way to write a research paper, making each author's paper a little different and a little confusing. The sections acted as a way to unify and ease reading.

  12. Sections of a Research Report

    An APA-style research report begins with a title page. The title is centered in the upper half of the page, with each important word capitalized. The title should clearly and concisely (in about 12 words or fewer) communicate the primary variables and research questions. This sometimes requires a main title followed by a subtitle that ...

  13. PDF Qualitative Research in Counseling: A Reflection for Novice ...

    This paper is thus written to support novice counselor researchers, and to inspire an emerging research culture through sharing formative experiences and lessons learned during a qualitative research project exploring minority issues in counseling. Key Words: Counseling, Health, Qualitative, Methods, and Narrative.

  14. How should we evaluate research on counselling and the treatment of

    Introduction. English health guidelines are created and regularly updated with the aim of improving patient care by ensuring that the most recent and 'best available evidence' is used to guide treatment (National Institute for Health and Care Excellence Guidance, 2017a).As stated on its website: 'National Institute for Health and Care Excellence (NICE) guidelines are evidence‐based ...

  15. Writing a Research Report

    This review is divided into sections for easy reference. There are five MAJOR parts of a Research Report: 1. Introduction 2. Review of Literature 3. Methods 4. Results 5. Discussion. As a general guide, the Introduction, Review of Literature, and Methods should be about 1/3 of your paper, Discussion 1/3, then Results 1/3.

  16. Describe the major sections of a research report used

    Describe the major sections of a research report used in counseling research. What is the core content included in each section of the report? Distinguish between writing for research purposes versus writing for standard graduate level courses.

  17. a. Describe the major sections of a research report used in counseling

    Step 1/7 a. The major sections of a research report used in counseling research typically include: Step 2/7 1. Introduction: This section provides an overview of the research topic, states the research problem or question, and presents the purpose and significance of the study.

  18. Solved Define and describe the major sections of a research

    Psychology questions and answers. Define and describe the major sections of a research report used in counseling psychology research. What is the core content included in each section of the report? Distinguish between the writing for research purposes versus writing for standard graduate level courses. How are they similar?

  19. CNL 540 Topic 2 DQ 1.docx

    Describe the major sections of a research report used in counseling research. What is the core content included in each. AI Homework Help. Expert Help. Study Resources. ... (2022), the major sections of a research report used in counseling research include a title page and abstract, an introduction, method section, results, discussion, ...

  20. Writing a Research Report in American Psychological Association (APA

    At the end of this section is a sample APA-style research report that illustrates many of these principles. Sections of a Research Report Title Page and Abstract. An APA-style research report begins with a title page. The title is centred in the upper half of the page, with each important word capitalized.

  21. Solved Describe the major sections of a research report used

    Describe the major sections of a research report used in counseling research. What is the core content included in each section of the report? Distinguish between the writing for research purposes versus writing for standard graduate level courses.

  22. The Egyptian pyramid chain was built along the now abandoned Ahramat

    The pyramids of the Western desert in Egypt were built alongside a now extinct branch of the Nile River named as the Ahramat Branch and identified using a combination of radar satellite imagery ...

  23. Writing a Research Report in American Psychological Association (APA

    An appendix is appropriate for supplemental material that would interrupt the flow of the research report if it were presented within any of the major sections. An appendix could be used to present lists of stimulus words, questionnaire items, detailed descriptions of special equipment or unusual statistical analyses, or references to the ...