
Causal research: definition, examples and how to use it.

16 min read

Causal research enables market researchers to predict hypothetical occurrences and outcomes while improving existing strategies. Discover how this research can help your business reduce employee turnover and increase customer success.

What is causal research?

Causal research, also known as explanatory research or causal-comparative research, identifies the extent and nature of cause-and-effect relationships between two or more variables.

It’s often used by companies to determine the impact of changes in products, features, or services on critical company metrics. Some examples:

  • How does rebranding of a product influence intent to purchase?
  • How would expansion to a new market segment affect projected sales?
  • What would be the impact of a price increase or decrease on customer loyalty?

To maintain the accuracy of causal research, ‘confounding variables’ (influences that could distort the results) are controlled. This is done either by holding them constant during data collection or by adjusting for them with statistical methods. These variables are identified before the start of the research experiment.
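As one illustration of the "statistical methods" approach (not from the article), the minimal sketch below uses regression adjustment on simulated data, comparing a naive model against one that also includes the confounder. The variable names (price_change, loyalty, region_income) and all numbers are hypothetical.

```python
# Minimal sketch (hypothetical data): estimate the effect of a price change on
# loyalty while adjusting for a confounder with OLS regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
region_income = rng.normal(50, 10, n)                      # confounder: drives both variables
price_change = 0.2 * region_income + rng.normal(0, 2, n)   # the "cause" being studied
loyalty = -0.5 * price_change + 0.3 * region_income + rng.normal(0, 3, n)

df = pd.DataFrame({"loyalty": loyalty,
                   "price_change": price_change,
                   "region_income": region_income})

naive = smf.ols("loyalty ~ price_change", data=df).fit()
adjusted = smf.ols("loyalty ~ price_change + region_income", data=df).fit()

print(naive.params["price_change"])     # biased when the confounder is ignored
print(adjusted.params["price_change"])  # close to the simulated effect of -0.5
```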

As well as the above, research teams will outline several other variables and principles in causal research:

  • Independent variables

The variables that may cause direct changes in another variable. For example, in a study of how class attendance affects a student’s grade point average, the independent variable is class attendance.

  • Control variables

These are the components that remain unchanged during the experiment so researchers can better understand what conditions create a cause-and-effect relationship.  

  • Causation (cause-and-effect)

This describes the cause-and-effect relationship between variables. When researchers find causation (or the cause), they’ve conducted all the processes necessary to prove it exists.

  • Correlation

Any relationship between two variables in the experiment. It’s important to note that correlation doesn’t automatically mean causation. Researchers will typically establish correlation before proving cause-and-effect.

  • Experimental design

Researchers use experimental design to define the parameters of the experiment — e.g. categorizing participants into different groups.

  • Dependent variables

These are measurable variables that may change under the influence of the independent variable. For example, in an experiment about whether terrain influences running speed, the dependent variable is running speed (and terrain is the independent variable). The short sketch after this list shows how these variable types fit together in practice.
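To make the terminology concrete, here is a minimal, hypothetical sketch of the running-speed example above: terrain is the independent variable, running speed the dependent variable, and distance is held constant as a control variable. The data are simulated; a real study would collect them.

```python
# Hypothetical running-speed experiment: terrain is the independent variable,
# running speed the dependent variable, distance a control variable held constant.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(7)
n = 30
df = pd.DataFrame({
    "terrain": ["trail"] * n + ["road"] * n,                 # independent variable
    "distance_km": 5.0,                                      # control variable (constant)
    "speed_kmh": np.concatenate([rng.normal(10.5, 1.2, n),   # dependent variable
                                 rng.normal(12.0, 1.2, n)]),
})

print(df.groupby("terrain")["speed_kmh"].mean())   # association first ...
t_stat, p_value = stats.ttest_ind(df.loc[df.terrain == "trail", "speed_kmh"],
                                  df.loc[df.terrain == "road", "speed_kmh"])
print(p_value)   # ... causation only if the design rules out other explanations
```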

Why is causal research useful?

It’s useful because it enables market researchers to predict hypothetical occurrences and outcomes while improving existing strategies. This allows businesses to create plans that benefit the company. It’s also a great research method because researchers can immediately see how variables affect each other and under what circumstances.

Also, once the first experiment has been completed, researchers can use the learnings from the analysis to repeat the experiment or apply the findings to other scenarios. Because of this, it’s widely used to help understand the impact of changes in internal or commercial strategy on the business’s bottom line.

Some examples include:

  • Understanding how overall training levels are improved by introducing new courses
  • Examining which variations in wording make potential customers more interested in buying a product
  • Testing a market’s response to a brand-new line of products and/or services

So, how does causal research compare and differ from other research types?

Well, there are a few research types that are used to find answers to some of the examples above:

1. Exploratory research

As its name suggests, exploratory research involves assessing a situation (or situations) where the problem isn’t clear. Through this approach, researchers can test different avenues and ideas to establish facts and gain a better understanding.

Researchers can also use it to first navigate a topic and identify which variables are important. Because no area is off-limits, the research is flexible and adapts to the investigations as it progresses.

Finally, this approach is unstructured and often involves gathering qualitative data, giving the researcher freedom to progress the research according to their thoughts and assessment. However, this may make results susceptible to researcher bias and may limit the extent to which a topic is explored.

2. Descriptive research

Descriptive research is all about describing the characteristics of the population, phenomenon or scenario studied. It focuses more on the “what” of the research subject than the “why”.

For example, a clothing brand wants to understand the fashion purchasing trends amongst buyers in California — so they conduct a demographic survey of the region, gather population data and then run descriptive research. The study will help them to uncover purchasing patterns amongst fashion buyers in California, but not necessarily why those patterns exist.

As the research happens in a natural setting, variables can cross-contaminate other variables, making it harder to isolate cause and effect relationships. Therefore, further research will be required if more causal information is needed.


How is causal research different from the other two methods above?

Well, causal research looks at what variables are involved in a problem and ‘why’ they act a certain way. As the experiment takes place in a controlled setting (thanks to controlled variables) it’s easier to identify cause-and-effect amongst variables.

Furthermore, researchers can carry out causal research at any stage in the process, though it’s usually carried out in the later stages once more is known about a particular topic or situation.

Finally, compared to the other two methods, causal research is more structured, and researchers can combine it with exploratory and descriptive research to assist with research goals.

Summary of three research types

[Table: comparison of exploratory, descriptive, and causal research]

What are the advantages of causal research?

  • Improve experiences

By understanding which variables have positive impacts on target variables (like sales revenue or customer loyalty), businesses can improve their processes, return on investment, and the experiences they offer customers and employees.

  • Help companies improve internally

By conducting causal research, management can make informed decisions about improving their employee experience and internal operations. For example, understanding which variables led to an increase in staff turnover.

  • Repeat experiments to enhance reliability and accuracy of results

When variables are identified, researchers can replicate cause-and-effect with ease, providing them with reliable data and results to draw insights from.

  • Test out new theories or ideas

If causal research is able to pinpoint the exact outcome of mixing together different variables, research teams have the ability to test out ideas in the same way to create viable proof of concepts.

  • Fix issues quickly

Once an undesirable effect’s cause is identified, researchers and management can take action to reduce the impact of it or remove it entirely, resulting in better outcomes.

What are the disadvantages of causal research?

  • Provides information to competitors

If you plan to publish your research, it provides information about your plans to your competitors. For example, they might use your research outcomes to identify what you are up to and enter the market before you.

  • Difficult to administer

Causal research is often difficult to administer because it’s not possible to control the effects of extraneous variables.

  • Time and money constraints

Budgetary and time constraints can make this type of research expensive to conduct and repeat. Also, if an initial attempt doesn’t reveal a cause-and-effect relationship, the investment may feel wasted, which could dampen the appetite for future repeat experiments.

  • Requires additional research to ensure validity

You can’t rely on the outcomes of causal research alone, as it isn’t always accurate. It’s best to conduct other types of research alongside it to confirm its output.

  • Trouble establishing cause and effect

Researchers might identify that two variables are connected, but struggle to determine which is the cause and which variable is the effect.

  • Risk of contamination

There’s always the risk that people outside your market or area of study could affect the results of your research. For example, if you’re conducting a retail store study, shoppers outside your ‘test parameters’ might shop at your store and skew the results.

How can you use causal research effectively?

To better highlight how you can use causal research across functions or markets, here are a few examples:

Market and advertising research

A company might want to know if their new advertising campaign or marketing campaign is having a positive impact. So, their research team can carry out a causal research project to see which variables cause a positive or negative effect on the campaign.

For example, a cold-weather apparel company in a winter ski-resort town may see an increase in sales after a targeted campaign to skiers. To see if one caused the other, the research team could set up a duplicate experiment to see whether the same campaign would generate sales from non-skiers. If the results are weaker or different for non-skiers, it’s likely that the campaign had a direct effect on encouraging skiers to purchase products.
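One way such a duplicate experiment might be summarized (a sketch, not the company's actual method) is a comparison of conversion rates between the two audiences with a chi-square test. The counts below are invented for illustration.

```python
# Hypothetical counts for the ski-campaign follow-up: did the campaign convert
# skiers at a higher rate than it converted non-skiers?
from scipy.stats import chi2_contingency

#                purchased, did_not_purchase
skiers     = [180, 1820]      # campaign shown to skiers
non_skiers = [95, 1905]       # same campaign shown to non-skiers

chi2, p, dof, expected = chi2_contingency([skiers, non_skiers])
print(f"conversion, skiers:     {180 / 2000:.1%}")
print(f"conversion, non-skiers: {95 / 2000:.1%}")
print(f"p-value: {p:.4f}")    # a small p-value suggests the gap is unlikely to be chance
```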

Improving customer experiences and loyalty levels

Customers enjoy shopping with brands that align with their own values, and they’re more likely to buy and present the brand positively to other potential shoppers as a result. So, it’s in your best interest to deliver great experiences and retain your customers.

For example, the Harvard Business Review found that increasing customer retention rates by 5% increased profits by 25% to 95%. But let’s say you want to increase your own retention rate: how can you identify which variables contribute to it? Using causal research, you can test hypotheses about which processes, strategies or changes influence customer retention. For example, is it the streamlined checkout? What about the personalized product suggestions? Or maybe it was a new solution that solved their problem? Causal research will help you find out.
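As an illustrative sketch only (the variable names streamlined_checkout and personalized_recs are hypothetical), a logistic regression of retention on candidate drivers is one common way to test such hypotheses once the data are in:

```python
# Hypothetical sketch: test which candidate changes are associated with retention
# using a logistic regression (variable names are invented for illustration).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "streamlined_checkout": rng.integers(0, 2, n),   # customer saw the new checkout
    "personalized_recs": rng.integers(0, 2, n),      # customer got personalized suggestions
})
logit_p = -0.4 + 0.8 * df.streamlined_checkout + 0.2 * df.personalized_recs
df["retained"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

model = smf.logit("retained ~ streamlined_checkout + personalized_recs",
                  data=df).fit(disp=False)
print(model.params)   # positive coefficients flag likely retention drivers
```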


Improving problematic employee turnover rates

If your company has a high attrition rate, causal research can help you narrow down the variables or reasons which have the greatest impact on people leaving. This allows you to prioritize your efforts on tackling the issues in the right order, for the best positive outcomes.

For example, through causal research, you might find that employee dissatisfaction due to a lack of communication and transparency from upper management leads to poor morale, which in turn influences employee retention.

To rectify the problem, you could implement a routine feedback loop or session that enables your people to talk to your company’s C-level executives so that they feel heard and understood.

How to conduct causal research

The first steps to getting started are:

1. Define the purpose of your research

What questions do you have? What do you expect to come out of your research? Think about which variables you need to test out the theory.

2. Pick a random sampling method if participants are needed

Using a technology solution to support your sampling, like a database, can help you define who you want your target audience to be, and how random or representative they should be.
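A minimal sketch of what that might look like in practice, assuming you can export a customer list: draw a simple random sample and randomly assign participants to test and control groups. The generated IDs below stand in for a real database export.

```python
# Minimal sketch: simple random sample from a (hypothetical) customer list,
# followed by random assignment to test and control groups.
import numpy as np
import pandas as pd

rng = np.random.default_rng(2024)
customers = pd.DataFrame({"customer_id": range(1, 10_001)})   # stand-in for your database export

sample = customers.sample(n=500, random_state=2024)                  # simple random sample
sample["group"] = rng.choice(["test", "control"], size=len(sample))  # random assignment
print(sample["group"].value_counts())
```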

3. Set up the controlled experiment

Once you’ve defined which variables you’d like to measure to see if they interact, think about how best to set up the experiment. This could be in-person or in-house via interviews, or it could be done remotely using online surveys.

4. Carry out the experiment

Make sure to keep all irrelevant variables the same, and only change the causal variable (the one that causes the effect) to gather the correct data. Depending on your method, you could be collecting qualitative or quantitative data, so make sure you note your findings across each regularly.

5. Analyze your findings

Either manually or using technology, analyze your data to see if any trends, patterns or correlations emerge. By looking at the data, you’ll be able to see what changes you might need to make next time, or whether there are questions that require further research.
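For instance, a minimal analysis sketch might group the outcome by experimental condition and scan the numeric variables for correlations. The file name experiment_results.csv and the column names condition and outcome are placeholders for whatever your study actually records.

```python
# Minimal analysis sketch; "experiment_results.csv", "condition" and "outcome"
# are placeholders for whatever your study actually records.
import pandas as pd

results = pd.read_csv("experiment_results.csv")

# Group-level summary of the outcome by experimental condition
print(results.groupby("condition")["outcome"].agg(["count", "mean", "std"]))

# Quick correlation scan across the numeric variables you recorded
print(results.select_dtypes("number").corr().round(2))
```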

6. Verify your findings

Your first attempt gives you the baseline figures to compare the new results to. You can then run another experiment to verify your findings.

7. Do follow-up or supplemental research

You can supplement your original findings by carrying out research that goes deeper into causes or explores the topic in more detail. One of the best ways to do this is to use a survey. See ‘Use surveys to help your experiment’.

Identifying causal relationships between variables

To verify if a causal relationship exists, you have to satisfy the following criteria:

  • Nonspurious association

A clear correlation exists between the cause and the effect. In other words, no ‘third’ variable that relates to both cause and effect should exist.

  • Temporal sequence

The cause occurs before the effect. For example, increased ad spend on product marketing would contribute to higher product sales.

  • Concomitant variation

The variation between the two variables is systematic. For example, if a company doesn’t change its IT policies and technology stack, then changes in employee productivity cannot be attributed to IT policies or technology.

How can surveys help your causal research experiments?

There are some surveys that are perfect for assisting researchers with understanding cause and effect. These include:

  • Employee Satisfaction Survey – An introductory employee satisfaction survey that provides you with an overview of your current employee experience.
  • Manager Feedback Survey – An introductory manager feedback survey geared toward improving your skills as a leader with valuable feedback from your team.
  • Net Promoter Score (NPS) Survey – Measure customer loyalty and understand how your customers feel about your product or service using one of the world’s best-recognized metrics.
  • Employee Engagement Survey – An entry-level employee engagement survey that provides you with an overview of your current employee experience.
  • Customer Satisfaction Survey – Evaluate how satisfied your customers are with your company, including the products and services you provide and how they are treated when they buy from you.
  • Employee Exit Interview Survey – Understand why your employees are leaving and how they’ll speak about your company once they’re gone.
  • Product Research Survey – Evaluate your consumers’ reaction to a new product or product feature across every stage of the product development journey.
  • Brand Awareness Survey – Track the level of brand awareness in your target market, including current and potential future customers.
  • Online Purchase Feedback Survey – Find out how well your online shopping experience performs against customer needs and expectations.

That covers the fundamentals of causal research and should give you a foundation for ongoing studies to assess opportunities, problems, and risks across your market, product, customer, and employee segments.

If you want to transform your research, empower your teams and get insights on tap to get ahead of the competition, maybe it’s time to leverage Qualtrics CoreXM.

Qualtrics CoreXM provides a single platform for data collection and analysis across every part of your business — from customer feedback to product concept testing. What’s more, you can integrate it with your existing tools and services thanks to a flexible API.

Qualtrics CoreXM offers you as much or as little power and complexity as you need, so whether you’re running simple surveys or more advanced forms of research, it can deliver every time.


What is causal research design?

Last updated: 14 May 2023

Causal research design examines cause-and-effect relationships between two or more variables. Examining these relationships gives researchers valuable insights into the mechanisms that drive the phenomena they are investigating.

Organizations primarily use causal research design to identify, determine, and explore the impact of changes within an organization and the market. You can use a causal research design to evaluate the effects of certain changes on existing procedures, norms, and more.

This article explores causal research design, including its elements, advantages, and disadvantages.


Components of causal research

You can demonstrate the existence of cause-and-effect relationships between two factors or variables using specific causal information, allowing you to produce more meaningful results and research implications.

These are the key inputs for causal research:

The timeline of events

Ideally, the cause must occur before the effect. You should review the timeline of two or more separate events to distinguish the independent variable (cause) from the dependent variable (effect) before developing a hypothesis.

If the cause occurs before the effect, you can link cause and effect and develop a hypothesis.

For instance, an organization may notice a sales increase. Determining the cause would help them reproduce these results. 

Upon review, the business realizes that the sales boost occurred right after an advertising campaign. The business can leverage this time-based data to determine whether the advertising campaign is the independent variable that caused a change in sales. 

Evaluation of confounding variables

In most cases, you need to pinpoint the variables that comprise a cause-and-effect relationship when using a causal research design. This uncovers a more accurate conclusion. 

Covariation between a cause and effect must be genuine, and no third factor should relate to both the cause and the effect.

Observing changes

Variation links between two variables must be clear. A quantitative change in effect must happen solely due to a quantitative change in the cause. 

You can test whether the independent variable changes the dependent variable to evaluate the validity of a cause-and-effect relationship. A steady change between the two variables must occur to back up your hypothesis of a genuine causal effect. 
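Putting these components together, here is a minimal sketch on hypothetical monthly data: check that the rise in the presumed cause precedes the rise in the effect, check that the two variables co-vary, and check that the association survives adjustment for a possible confounder. All numbers and the seasonality variable are invented.

```python
# Hypothetical monthly data: check temporal sequence, concomitant variation,
# and whether the association survives adjusting for a possible confounder.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "month": pd.period_range("2023-01", periods=12, freq="M"),
    "ad_spend": [10, 10, 10, 25, 25, 25, 25, 25, 25, 25, 25, 25],
    "sales": [100, 98, 101, 104, 118, 121, 125, 124, 127, 126, 129, 131],
    "seasonality": [0, 0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0],   # invented confounder
})

# 1. Temporal sequence: the spend increase should come before the sales jump
print(df.loc[df.ad_spend.diff() > 0, "month"].min(),
      df.loc[df.sales.diff() > 5, "month"].min())

# 2. Concomitant variation: do the two variables move together?
print(round(df["ad_spend"].corr(df["sales"]), 2))

# 3. Nonspurious association: does the link survive controlling for seasonality?
print(smf.ols("sales ~ ad_spend + seasonality", data=df).fit().params)
```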

Why is causal research useful?

Causal research allows market researchers to predict hypothetical occurrences and outcomes while enhancing existing strategies. Organizations can use this concept to develop beneficial plans. 

Causal research is also useful as market researchers can immediately deduce the effect of the variables on each other under real-world conditions. 

Once researchers complete their first experiment, they can use their findings. Applying them to alternative scenarios or repeating the experiment to confirm its validity can produce further insights. 

Businesses widely use causal research to identify and comprehend the effect of strategic changes on their profits. 

How does causal research compare and differ from other research types?

Other research types that identify relationships between variables include exploratory and descriptive research.

Here’s how they compare and differ from causal research designs:

Exploratory research

An exploratory research design evaluates situations where a problem or opportunity's boundaries are unclear. You can use this research type to test various hypotheses and assumptions to establish facts and understand a situation more clearly.

You can also use exploratory research design to navigate a topic and discover the relevant variables. This research type allows flexibility and adaptability as the experiment progresses, particularly since no area is off-limits.

It’s worth noting that exploratory research is unstructured and typically involves collecting qualitative data . This provides the freedom to tweak and amend the research approach according to your ongoing thoughts and assessments. 

Unfortunately, this exposes the findings to the risk of bias and may limit the extent to which a researcher can explore a topic. 

This table compares the key characteristics of causal and exploratory research:

Descriptive research

This research design involves capturing and describing the traits of a population, situation, or phenomenon. Descriptive research focuses more on the "what" of the research subject and less on the "why."

Since descriptive research typically happens in a real-world setting, variables can cross-contaminate others. This increases the challenge of isolating cause-and-effect relationships. 

You may require further research if you need more causal links. 

This table compares the key characteristics of causal and descriptive research.  

Causal research examines a research question’s variables and how they interact. It’s easier to pinpoint cause and effect since the experiment often happens in a controlled setting. 

Researchers can conduct causal research at any stage, but they typically use it once they know more about the topic.

Compared with exploratory and descriptive research, causal research tends to be more structured, and you can combine it with both to help you attain your research goals.

How can you use causal research effectively?

Here are common ways that market researchers leverage causal research effectively:

Market and advertising research

Do you want to know if your new marketing campaign is affecting your organization positively? You can use causal research to determine the variables causing negative or positive impacts on your campaign. 

Improving customer experiences and loyalty levels

Consumers generally enjoy purchasing from brands aligned with their values. They’re more likely to purchase from such brands and positively represent them to others. 

You can use causal research to identify the variables contributing to increased or reduced customer acquisition and retention rates. 

Could the cause of increased customer retention rates be streamlined checkout? 

Perhaps you introduced a new solution geared towards directly solving their immediate problem. 

Whatever the reason, causal research can help you identify the cause-and-effect relationship. You can use this to enhance your customer experiences and loyalty levels.

Improving problematic employee turnover rates

Is your organization experiencing skyrocketing attrition rates? 

You can leverage the features and benefits of causal research to narrow down the possible explanations or variables with significant effects on employees quitting. 

This way, you can prioritize interventions, focusing on the highest priority causal influences, and begin to tackle high employee turnover rates. 

Advantages of causal research

The main benefits of causal research include the following:

Effectively test new ideas

If causal research can pinpoint the precise outcome through combinations of different variables, researchers can test ideas in the same manner to form viable proof of concepts.

Achieve more objective results

Market researchers typically use random sampling techniques to choose experiment participants or subjects in causal research. This reduces the possibility of exterior, sample, or demography-based influences, generating more objective results. 

Improved business processes

Causal research helps businesses understand which variables positively impact target variables, such as customer loyalty or sales revenues. This helps them improve their processes, ROI, and customer and employee experiences.

Guarantee reliable and accurate results

Upon identifying the correct variables, researchers can replicate cause and effect effortlessly. This creates reliable data and results to draw insights from. 

Internal organization improvements

Businesses that conduct causal research can make informed decisions about improving their internal operations and enhancing employee experiences. 

Disadvantages of causal research

Like any other research method, causal research has its own set of drawbacks, which include:

Extra research to ensure validity

Researchers can't simply rely on the outcomes of causal research since it isn't always accurate. There may be a need to conduct other research types alongside it to ensure accurate output.

Coincidence

Coincidence tends to be the most significant error in causal research. Researchers often misinterpret a coincidental link between a cause and effect as a direct causal link. 

Administration challenges

Causal research can be challenging to administer since it's impossible to control the impact of extraneous variables.

Giving away your competitive advantage

If you intend to publish your research, it exposes your information to the competition. 

Competitors may use your research outcomes to identify your plans and strategies to enter the market before you. 

Causal research examples

Causal research can be used in multiple fields and serves many different purposes, such as the following:

Customer loyalty research

Organizations and employees can use causal research to determine the best customer attraction and retention approaches. 

They monitor interactions between customers and employees to identify cause-and-effect patterns. That could be a product demonstration technique resulting in higher or lower sales from the same customers. 

Example: Business X introduces a new individual marketing strategy for a small customer group and notices a measurable increase in monthly subscriptions. 

Upon getting identical results from different groups, the business concludes that the individual marketing strategy resulted in the intended causal relationship.

Advertising research

Businesses can also use causal research to implement and assess advertising campaigns. 

Example: Business X notices a 7% increase in sales revenue a few months after introducing a new advertisement in a certain region. The business can run the same ad in other, randomly selected regions to compare sales data over the same period.

This will help the company determine whether the ad caused the sales increase. If sales increase in these randomly selected regions, the business could conclude that advertising campaigns and sales share a cause-and-effect relationship. 

Educational research

Academics, teachers, and learners can use causal research to explore the impact of politics on learners and pinpoint learner behavior trends. 

Example: College X notices that more IT students drop out of their program in the second year; the dropout rate is 8% higher than in any other year.

The college administration can interview a random group of IT students to identify factors leading to this situation, including personal factors and influences. 

With the help of in-depth statistical analysis, the institution's researchers can uncover the main factors causing dropout. They can create immediate solutions to address the problem.

Is a causal variable dependent or independent?

When two variables have a cause-and-effect relationship, the cause is often called the independent variable. As such, the effect variable is dependent, i.e., it depends on the independent causal variable. An independent variable is only causal under experimental conditions. 

What are the three criteria for causality?

The three conditions for causality are:

Temporality/temporal precedence: The cause must precede the effect.

Rationality: One event predicts the other with an explanation, and the effect must vary in proportion to changes in the cause.

Control for extraneous variables: The observed covariation must not result from other variables.

Is causal research experimental?

Causal research is mostly explanatory. Causal studies focus on analyzing a situation to explore and explain the patterns of relationships between variables. 

Further, experiments are the primary data collection methods in studies with causal research design. However, as a research design, causal research isn't entirely experimental.

What is the difference between experimental and causal research design?

One of the main differences between causal and experimental research is that in causal research, the research subjects are already in groups since the event has already happened. 

On the other hand, researchers randomly choose subjects in experimental research before manipulating the variables.


Causal Research: What it is, Tips & Examples

Causal research examines if there's a cause-and-effect relationship between two separate events. Learn everything you need to know about it.

Causal research is classified as conclusive research since it attempts to build a cause-and-effect link between two variables. This research is mainly used to determine the cause of particular behavior. We can use this research to determine what changes occur in a dependent variable due to a change in an independent variable.

It can assist you in evaluating marketing activities, improving internal procedures, and developing more effective business plans. Understanding how one circumstance affects another may help you determine the most effective methods for satisfying your business needs.


This post will explain causal research, define its essential components, describe its benefits and limitations, and provide some important tips.

Content Index

  • What is causal research?
  • Components of causal research: temporal sequence, non-spurious association, concomitant variation
  • Advantages and disadvantages
  • Causal research examples
  • Causal research tips

Causal research is also known as explanatory research . It’s a type of research that examines if there’s a cause-and-effect relationship between two separate events. This would occur when there is a change in one of the independent variables, which is causing changes in the dependent variable.

You can use causal research to evaluate the effects of particular changes on existing norms, procedures, and so on. This type of research examines a condition or a research problem to explain the patterns of interactions between variables.


Components of causal research

Only specific causal information can demonstrate the existence of cause-and-effect linkages. The three key components of causal research are as follows:

Temporal sequence

The cause must occur prior to the effect; cause and effect can only be linked if the cause comes first. For example, if the profit increase occurred before the advertisement aired, it cannot be attributed to an increase in advertising spending.

Non-spurious association

Linked fluctuations between two variables can only be trusted if no other variable is related to both cause and effect. For example, a notebook manufacturer has discovered a correlation between notebook sales and the autumn season: during this season, more people buy notebooks because students are buying them for the upcoming semester.

During the summer, the company launched an advertisement campaign for notebooks. To test their assumption, they can look up the campaign data to see if the increase in notebook sales was due to the student’s natural rhythm of buying notebooks or the advertisement.

Concomitant variation

Concomitant variation is defined as a quantitative change in effect that happens solely as a result of a quantitative change in the cause. This means that there must be a steady change between the two variables. You can examine the validity of a cause-and-effect connection by seeing whether the independent variable causes a change in the dependent variable.

For example, if any company does not make an attempt to enhance sales by acquiring skilled employees or offering training to them, then the hire of experienced employees cannot be credited for an increase in sales. Other factors may have contributed to the increase in sales.

Causal Research Advantages and Disadvantages

Causal or explanatory research has various advantages for both academics and businesses. As with any other research method, it has a few disadvantages that researchers should be aware of. Let’s look at some of the advantages and disadvantages of this research design .

Advantages:

  • Helps in the identification of the causes of system processes. This allows the researcher to take the required steps to resolve issues or improve outcomes.
  • It provides replication if it is required.
  • Causal research assists in determining the effects of changing procedures and methods.
  • Subjects are chosen in a methodical manner. As a result, it is beneficial for improving internal validity .
  • The ability to analyze the effects of changes on existing events, processes, phenomena, and so on.
  • Finds the sources of variable correlations, bridging the gap in correlational research .
Disadvantages:

  • It is not always possible to monitor the effects of all external factors, so causal research is challenging to do.
  • It is time-consuming and might be costly to execute.
  • The effect of a large range of factors and variables existing in a particular setting makes it difficult to draw results.
  • The most significant error in this research is coincidence. A coincidence between a cause and an effect can sometimes be misinterpreted as a causal relationship.
  • To corroborate the findings of the explanatory research , you must undertake additional types of research. You can’t just make conclusions based on the findings of a causal study.
  • It is sometimes simple for a researcher to see that two variables are related, but it can be difficult for a researcher to determine which variable is the cause and which variable is the effect.

Since different industries and fields can carry out causal-comparative research, it can serve many different purposes. Let’s discuss three examples of causal research:

Advertising Research

Companies can use causal research to enact and study advertising campaigns. For example, six months after a business debuts a new ad in a region, it sees a 5% increase in sales revenue.

To assess whether the ad has caused the lift, the business runs the same ad in randomly selected regions so it can compare sales data across regions over another six months. If sales pick up in these regions as well, it can conclude that the ad and sales have a genuine cause-and-effect relationship.
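A hedged sketch of how that regional comparison might be summarized (hypothetical numbers): compute each region's sales lift and compare the mean lift between regions that ran the ad and regions that did not.

```python
# Hypothetical regional ad test: compare the sales lift (after minus before)
# in regions that ran the ad against regions that did not.
import pandas as pd
from scipy import stats

regions = pd.DataFrame({
    "region": ["A", "B", "C", "D", "E", "F", "G", "H"],
    "ran_ad": [1, 1, 1, 1, 0, 0, 0, 0],
    "sales_before": [200, 180, 220, 210, 190, 205, 215, 185],
    "sales_after": [230, 205, 252, 240, 195, 207, 220, 188],
})
regions["lift"] = regions["sales_after"] - regions["sales_before"]

treated = regions.loc[regions.ran_ad == 1, "lift"]
control = regions.loc[regions.ran_ad == 0, "lift"]
t_stat, p_value = stats.ttest_ind(treated, control)
print(treated.mean(), control.mean(), p_value)   # a clearly larger treated lift supports a causal story
```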


Customer Loyalty Research

Businesses can use causal research to determine the best customer retention strategies. They monitor interactions between associates and customers to identify patterns of cause and effect, such as a product demonstration technique leading to increased or decreased sales from the same customers.

For example, a company implements a new individual marketing strategy for a small group of customers and sees a measurable increase in monthly subscriptions. After receiving identical results from several groups, they concluded that the one-to-one marketing strategy has the causal relationship they intended.

Educational Research

Learning specialists, academics, and teachers use causal research to learn more about how politics affects students and to identify possible student behavior trends. For example, a university administration notices that more science students drop out of their program in the third year; the dropout rate is 7% higher than in any other year.

They interview a random group of science students and discover many factors that could lead to these circumstances, including non-university components. Through in-depth statistical analysis, researchers uncover the top three factors, and management creates a committee to address them in the future.

Causal research is frequently the last type of research done during the research process and is considered definitive. As a result, it is critical to plan the research with specific parameters and goals in mind. Here are some tips for conducting causal research successfully:

1. Understand the parameters of your research

Identify any design strategies that change the way you understand your data. Determine how you acquired data and whether your conclusions are more applicable in practice in some cases than others.

2. Pick a random sampling strategy

Choosing a technique that works best for you when you have participants or subjects is critical. You can use a database to generate a random list, select random selections from sorted categories, or conduct a survey.

3. Determine all possible relations

Examine the different relationships between your independent and dependent variables to build more sophisticated insights and conclusions.

To summarize, causal or explanatory research helps organizations understand how their current activities and behaviors will impact them in the future. This is incredibly useful in a wide range of business scenarios, and it can help you anticipate the outcomes of various marketing activities, campaigns, and collateral. Using the findings of this research program, you will be able to design more successful business strategies that take advantage of every business opportunity.

At QuestionPro, we offer all kinds of necessary tools for researchers to carry out their projects. It can help you get the most out of your data by guiding you through the process.


Research-Methodology

Causal Research (Explanatory research)

Causal research, also known as explanatory research, is conducted in order to identify the extent and nature of cause-and-effect relationships. Causal research can be conducted in order to assess the impacts of specific changes on existing norms, various processes etc.

Causal studies focus on an analysis of a situation or a specific problem to explain the patterns of relationships between variables. Experiments  are the most popular primary data collection methods in studies with causal research design.

The presence of cause-and-effect relationships can be confirmed only if specific causal evidence exists. Causal evidence has three important components:

1. Temporal sequence . The cause must occur before the effect. For example, it would not be appropriate to credit the increase in sales to rebranding efforts if the increase had started before the rebranding.

2. Concomitant variation . The variation must be systematic between the two variables. For example, if a company doesn’t change its employee training and development practices, then changes in customer satisfaction cannot be caused by employee training and development.

3. Nonspurious association . Any covariation between a cause and an effect must be true and not simply due to another variable. In other words, there should be no ‘third’ factor that relates to both cause and effect.

The table below compares the main characteristics of causal research to exploratory and descriptive research designs: [1]

Main characteristics of research designs

 Examples of Causal Research (Explanatory Research)

The following are examples of research objectives for causal research design:

  • To assess the impacts of foreign direct investment on the levels of economic growth in Taiwan
  • To analyse the effects of re-branding initiatives on the levels of customer loyalty
  • To identify the nature of impact of work process re-engineering on the levels of employee motivation

Advantages of Causal Research (Explanatory Research)

  • Causal studies may play an instrumental role in terms of identifying reasons behind a wide range of processes, as well as, assessing the impacts of changes on existing norms, processes etc.
  • Causal studies usually offer the advantages of replication if necessity arises
  • This type of study is associated with greater levels of internal validity due to systematic selection of subjects

Disadvantages of Causal Research (Explanatory Research)

  • Coincidences in events may be perceived as cause-and-effect relationships. For example, Punxsutawney Phil was able to forecast the duration of winter for five consecutive years; nevertheless, it is just a rodent without intellect or forecasting powers, i.e. it was a coincidence.
  • It can be difficult to reach appropriate conclusions on the basis of causal research findings. This is due to the impact of a wide range of factors and variables in the social environment. In other words, while causality can be inferred, it cannot be proved with a high level of certainty.
  • In certain cases, while correlation between two variables can be effectively established, identifying which variable is the cause and which one is the effect can be a difficult task to accomplish.

My e-book,  The Ultimate Guide to Writing a Dissertation in Business Studies: a step by step assistance  contains discussions of theory and application of research designs. The e-book also explains all stages of the  research process  starting from the  selection of the research area  to writing personal reflection. Important elements of dissertations such as  research philosophy ,  research approach ,  methods of data collection ,  data analysis  and  sampling  are explained in this e-book in simple words.

John Dudovskiy


[1] Source: Zikmund, W.G., Babin, J., Carr, J. & Griffin, M. (2012) “Business Research Methods: with Qualtrics Printed Access Card” Cengage Learning

Applied Causal Analysis (with R)

3.1 Descriptive vs. causal questions

3.1.1 Descriptive questions

  • e.g. How are observations distributed across trust categories in Table 3.1?
  • How are observations distributed across trust and gender values?
  • How are observations distributed across values of trust (Y), gender (X1) and time (X2)?
  • Are trust values higher (on average) among females than males?

As the name suggests, descriptive research questions are about describing the data. For instance, we could measure trust within the German population using the question ‘ Would you say that most people can be trusted or that you can’t be too careful in dealing with people, if 0 means “Can’t be too careful” and 10 means “Most people can be trusted”? ’ Consequently, a descriptive question would be to ask whether there are more individuals with a high level of trust (defined as those with a value above 8) or more with a low level of trust (defined as those with a value below 2). In other words, descriptive questions are concerned with the distribution of observations (e.g. individuals) across values of a variable (or several variables), e.g., the variable trust (Y). Importantly, descriptive questions may involve as many variables as you like. We could add a second variable, gender (X1, male vs. female), and ask whether females have a higher level of trust, on average, than males. This already points to how we deal with the underlying distributions: normally, we summarize them using statistics such as the mean (or other statistics). And we can also develop hypotheses for our descriptive questions, e.g., we could hypothesize that females have a higher level of trust than males and subsequently test this hypothesis using the data we collect. Potentially, it makes sense to call hypotheses that simply concern the distribution of data across one or more dimensions descriptive hypotheses. Importantly, time (which will become important later on) is just another variable, and a corresponding descriptive question would be: was trust in politicians higher in January 2019 than in January 2020?
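A minimal illustration of these descriptive questions on simulated survey data (the book itself works in R; this Python sketch is only an analogue, and the generated values are not real survey results):

```python
# Simulated stand-in for the trust survey: distribution of trust values and a
# descriptive comparison of mean trust by gender (values are not real data).
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
n = 1000
df = pd.DataFrame({
    "trust": rng.integers(0, 11, n),             # 0 "can't be too careful" ... 10 "most people can be trusted"
    "gender": rng.choice(["female", "male"], size=n),
})

print(df["trust"].value_counts().sort_index())           # distribution across trust values
print((df["trust"] > 8).sum(), (df["trust"] < 2).sum())  # high- vs low-trust respondents
print(df.groupby("gender")["trust"].mean())              # descriptive comparison of means
```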

3.1.2 Causal questions

  • Is there a causal link between the distribution across values of Y and values of D?
  • Continuous variables: compare means
  • Categorical variables (several): compare probabilities for categories
  • Group level: Does victimization cause individuals to have a lower level of trust on average (than if they were not victimized)?
  • Individual level: Does (non-)victimization cause individual i to have a lower (or higher) trust level?

Causal research questions are of a different kind. From a distributional perspective we could ask whether the distribution of a first variable D is somehow causally related to the distribution of a second variable Y. Again we tend to summarize the corresponding distributions, e.g., we could take the mean of trust. In Table 3.2 we tabulate victimization (D), measured with the question ‘Have you been insulted or threatened verbally since (month, year)?’, against trust (Y). Take note that the victimization variable D is dichotomous (0,1) whereas the outcome variable Y has 11 values (0-10). The corresponding causal question would be: does victimization cause individuals to have lower levels of trust (on average, that is, comparing the means)? Ultimately, this question resides on the group level but is strongly related to the corresponding question on the individual level: does (non-)victimization cause individual i to have a lower (or higher) trust level? One important aspect that we will encounter later on: in asking our causal question we may focus on certain subsets in our sample once we have collected some data. For instance, we could ask whether the people that have actually been victimized would have had a higher level of trust if they had not been victimized. This question focuses on the subset of our sample that has been victimized.
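And a matching sketch of the causal question, again on simulated data: tabulate trust (Y) against victimization (D) and compare group means, keeping in mind that the comparison is only causal under additional assumptions.

```python
# Simulated analogue of Table 3.2: tabulate trust (Y) against victimization (D)
# and compare group means; the gap is causal only under further assumptions.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n = 1000
victimized = rng.integers(0, 2, n)                                          # D: 0/1
trust = np.clip(rng.normal(5.5 - 1.0 * victimized, 2.0, n).round(), 0, 10)  # Y: 0-10

df = pd.DataFrame({"D_victimized": victimized, "Y_trust": trust})
print(pd.crosstab(df.D_victimized, df.Y_trust))        # distribution of Y within each D group
print(df.groupby("D_victimized")["Y_trust"].mean())    # group-level comparison of means
# The individual-level counterfactual (trust had the person not been victimized)
# is never observed directly.
```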

See Gerring (2012) for a discussion of “What?” and “Why?” questions.

  • Histopathology
  • Medical Microbiology and Virology
  • Patient Education and Information
  • Browse content in Pharmacology
  • Psychopharmacology
  • Browse content in Popular Health
  • Caring for Others
  • Complementary and Alternative Medicine
  • Self-help and Personal Development
  • Browse content in Preclinical Medicine
  • Cell Biology
  • Molecular Biology and Genetics
  • Reproduction, Growth and Development
  • Primary Care
  • Professional Development in Medicine
  • Browse content in Psychiatry
  • Addiction Medicine
  • Child and Adolescent Psychiatry
  • Forensic Psychiatry
  • Learning Disabilities
  • Old Age Psychiatry
  • Psychotherapy
  • Browse content in Public Health and Epidemiology
  • Epidemiology
  • Public Health
  • Browse content in Radiology
  • Clinical Radiology
  • Interventional Radiology
  • Nuclear Medicine
  • Radiation Oncology
  • Reproductive Medicine
  • Browse content in Surgery
  • Cardiothoracic Surgery
  • Gastro-intestinal and Colorectal Surgery
  • General Surgery
  • Neurosurgery
  • Paediatric Surgery
  • Peri-operative Care
  • Plastic and Reconstructive Surgery
  • Surgical Oncology
  • Transplant Surgery
  • Trauma and Orthopaedic Surgery
  • Vascular Surgery
  • Browse content in Science and Mathematics
  • Browse content in Biological Sciences
  • Aquatic Biology
  • Biochemistry
  • Bioinformatics and Computational Biology
  • Developmental Biology
  • Ecology and Conservation
  • Evolutionary Biology
  • Genetics and Genomics
  • Microbiology
  • Molecular and Cell Biology
  • Natural History
  • Plant Sciences and Forestry
  • Research Methods in Life Sciences
  • Structural Biology
  • Systems Biology
  • Zoology and Animal Sciences
  • Browse content in Chemistry
  • Analytical Chemistry
  • Computational Chemistry
  • Crystallography
  • Environmental Chemistry
  • Industrial Chemistry
  • Inorganic Chemistry
  • Materials Chemistry
  • Medicinal Chemistry
  • Mineralogy and Gems
  • Organic Chemistry
  • Physical Chemistry
  • Polymer Chemistry
  • Study and Communication Skills in Chemistry
  • Theoretical Chemistry
  • Browse content in Computer Science
  • Artificial Intelligence
  • Computer Architecture and Logic Design
  • Game Studies
  • Human-Computer Interaction
  • Mathematical Theory of Computation
  • Programming Languages
  • Software Engineering
  • Systems Analysis and Design
  • Virtual Reality
  • Browse content in Computing
  • Business Applications
  • Computer Security
  • Computer Games
  • Computer Networking and Communications
  • Digital Lifestyle
  • Graphical and Digital Media Applications
  • Operating Systems
  • Browse content in Earth Sciences and Geography
  • Atmospheric Sciences
  • Environmental Geography
  • Geology and the Lithosphere
  • Maps and Map-making
  • Meteorology and Climatology
  • Oceanography and Hydrology
  • Palaeontology
  • Physical Geography and Topography
  • Regional Geography
  • Soil Science
  • Urban Geography
  • Browse content in Engineering and Technology
  • Agriculture and Farming
  • Biological Engineering
  • Civil Engineering, Surveying, and Building
  • Electronics and Communications Engineering
  • Energy Technology
  • Engineering (General)
  • Environmental Science, Engineering, and Technology
  • History of Engineering and Technology
  • Mechanical Engineering and Materials
  • Technology of Industrial Chemistry
  • Transport Technology and Trades
  • Browse content in Environmental Science
  • Applied Ecology (Environmental Science)
  • Conservation of the Environment (Environmental Science)
  • Environmental Sustainability
  • Environmentalist Thought and Ideology (Environmental Science)
  • Management of Land and Natural Resources (Environmental Science)
  • Natural Disasters (Environmental Science)
  • Nuclear Issues (Environmental Science)
  • Pollution and Threats to the Environment (Environmental Science)
  • Social Impact of Environmental Issues (Environmental Science)
  • History of Science and Technology
  • Browse content in Materials Science
  • Ceramics and Glasses
  • Composite Materials
  • Metals, Alloying, and Corrosion
  • Nanotechnology
  • Browse content in Mathematics
  • Applied Mathematics
  • Biomathematics and Statistics
  • History of Mathematics
  • Mathematical Education
  • Mathematical Finance
  • Mathematical Analysis
  • Numerical and Computational Mathematics
  • Probability and Statistics
  • Pure Mathematics
  • Browse content in Neuroscience
  • Cognition and Behavioural Neuroscience
  • Development of the Nervous System
  • Disorders of the Nervous System
  • History of Neuroscience
  • Invertebrate Neurobiology
  • Molecular and Cellular Systems
  • Neuroendocrinology and Autonomic Nervous System
  • Neuroscientific Techniques
  • Sensory and Motor Systems
  • Browse content in Physics
  • Astronomy and Astrophysics
  • Atomic, Molecular, and Optical Physics
  • Biological and Medical Physics
  • Classical Mechanics
  • Computational Physics
  • Condensed Matter Physics
  • Electromagnetism, Optics, and Acoustics
  • History of Physics
  • Mathematical and Statistical Physics
  • Measurement Science
  • Nuclear Physics
  • Particles and Fields
  • Plasma Physics
  • Quantum Physics
  • Relativity and Gravitation
  • Semiconductor and Mesoscopic Physics
  • Browse content in Psychology
  • Affective Sciences
  • Clinical Psychology
  • Cognitive Neuroscience
  • Cognitive Psychology
  • Criminal and Forensic Psychology
  • Developmental Psychology
  • Educational Psychology
  • Evolutionary Psychology
  • Health Psychology
  • History and Systems in Psychology
  • Music Psychology
  • Neuropsychology
  • Organizational Psychology
  • Psychological Assessment and Testing
  • Psychology of Human-Technology Interaction
  • Psychology Professional Development and Training
  • Research Methods in Psychology
  • Social Psychology
  • Browse content in Social Sciences
  • Browse content in Anthropology
  • Anthropology of Religion
  • Human Evolution
  • Medical Anthropology
  • Physical Anthropology
  • Regional Anthropology
  • Social and Cultural Anthropology
  • Theory and Practice of Anthropology
  • Browse content in Business and Management
  • Business Strategy
  • Business History
  • Business Ethics
  • Business and Government
  • Business and Technology
  • Business and the Environment
  • Comparative Management
  • Corporate Governance
  • Corporate Social Responsibility
  • Entrepreneurship
  • Health Management
  • Human Resource Management
  • Industrial and Employment Relations
  • Industry Studies
  • Information and Communication Technologies
  • International Business
  • Knowledge Management
  • Management and Management Techniques
  • Operations Management
  • Organizational Theory and Behaviour
  • Pensions and Pension Management
  • Public and Nonprofit Management
  • Strategic Management
  • Supply Chain Management
  • Browse content in Criminology and Criminal Justice
  • Criminal Justice
  • Criminology
  • Forms of Crime
  • International and Comparative Criminology
  • Youth Violence and Juvenile Justice
  • Development Studies
  • Browse content in Economics
  • Agricultural, Environmental, and Natural Resource Economics
  • Asian Economics
  • Behavioural Finance
  • Behavioural Economics and Neuroeconomics
  • Econometrics and Mathematical Economics
  • Economic Systems
  • Economic Methodology
  • Economic History
  • Economic Development and Growth
  • Financial Markets
  • Financial Institutions and Services
  • General Economics and Teaching
  • Health, Education, and Welfare
  • History of Economic Thought
  • International Economics
  • Labour and Demographic Economics
  • Law and Economics
  • Macroeconomics and Monetary Economics
  • Microeconomics
  • Public Economics
  • Urban, Rural, and Regional Economics
  • Welfare Economics
  • Browse content in Education
  • Adult Education and Continuous Learning
  • Care and Counselling of Students
  • Early Childhood and Elementary Education
  • Educational Equipment and Technology
  • Educational Strategies and Policy
  • Higher and Further Education
  • Organization and Management of Education
  • Philosophy and Theory of Education
  • Schools Studies
  • Secondary Education
  • Teaching of a Specific Subject
  • Teaching of Specific Groups and Special Educational Needs
  • Teaching Skills and Techniques
  • Browse content in Environment
  • Applied Ecology (Social Science)
  • Climate Change
  • Conservation of the Environment (Social Science)
  • Environmentalist Thought and Ideology (Social Science)
  • Natural Disasters (Environment)
  • Social Impact of Environmental Issues (Social Science)
  • Browse content in Human Geography
  • Cultural Geography
  • Economic Geography
  • Political Geography
  • Browse content in Interdisciplinary Studies
  • Communication Studies
  • Museums, Libraries, and Information Sciences
  • Browse content in Politics
  • African Politics
  • Asian Politics
  • Chinese Politics
  • Comparative Politics
  • Conflict Politics
  • Elections and Electoral Studies
  • Environmental Politics
  • European Union
  • Foreign Policy
  • Gender and Politics
  • Human Rights and Politics
  • Indian Politics
  • International Relations
  • International Organization (Politics)
  • International Political Economy
  • Irish Politics
  • Latin American Politics
  • Middle Eastern Politics
  • Political Methodology
  • Political Communication
  • Political Philosophy
  • Political Sociology
  • Political Theory
  • Political Behaviour
  • Political Economy
  • Political Institutions
  • Politics and Law
  • Public Administration
  • Public Policy
  • Quantitative Political Methodology
  • Regional Political Studies
  • Russian Politics
  • Security Studies
  • State and Local Government
  • UK Politics
  • US Politics
  • Browse content in Regional and Area Studies
  • African Studies
  • Asian Studies
  • East Asian Studies
  • Japanese Studies
  • Latin American Studies
  • Middle Eastern Studies
  • Native American Studies
  • Scottish Studies
  • Browse content in Research and Information
  • Research Methods
  • Browse content in Social Work
  • Addictions and Substance Misuse
  • Adoption and Fostering
  • Care of the Elderly
  • Child and Adolescent Social Work
  • Couple and Family Social Work
  • Developmental and Physical Disabilities Social Work
  • Direct Practice and Clinical Social Work
  • Emergency Services
  • Human Behaviour and the Social Environment
  • International and Global Issues in Social Work
  • Mental and Behavioural Health
  • Social Justice and Human Rights
  • Social Policy and Advocacy
  • Social Work and Crime and Justice
  • Social Work Macro Practice
  • Social Work Practice Settings
  • Social Work Research and Evidence-based Practice
  • Welfare and Benefit Systems
  • Browse content in Sociology
  • Childhood Studies
  • Community Development
  • Comparative and Historical Sociology
  • Economic Sociology
  • Gender and Sexuality
  • Gerontology and Ageing
  • Health, Illness, and Medicine
  • Marriage and the Family
  • Migration Studies
  • Occupations, Professions, and Work
  • Organizations
  • Population and Demography
  • Race and Ethnicity
  • Social Theory
  • Social Movements and Social Change
  • Social Research and Statistics
  • Social Stratification, Inequality, and Mobility
  • Sociology of Religion
  • Sociology of Education
  • Sport and Leisure
  • Urban and Rural Studies
  • Browse content in Warfare and Defence
  • Defence Strategy, Planning, and Research
  • Land Forces and Warfare
  • Military Administration
  • Military Life and Institutions
  • Naval Forces and Warfare
  • Other Warfare and Defence Issues
  • Peace Studies and Conflict Resolution
  • Weapons and Equipment

The Oxford Handbook of Causal Reasoning

22 Causal Explanation

Department of Psychology, University of California, Berkeley, California, USA

  • Published: 10 May 2017

Explanation and causation are intimately related. Explanations often appeal to causes, and causal claims are often answers to implicit or explicit questions about why or how something occurred. This chapter considers what we can learn about causal reasoning from research on explanation. In particular, it reviews an emerging body of work suggesting that explanatory considerations—such as the simplicity or scope of a causal hypothesis—can systematically influence causal inference and learning. It also discusses proposed distinctions among types of explanations and reviews the effects of each explanation type on causal reasoning and representation. Finally, it considers the relationship between explanations and causal mechanisms and raises important questions for future research.

A doctor encounters a patient: Why does she have a fever and a rash? An engineer investigates a failure: Why did the bridge collapse? A parent wonders about her child: Why did she throw a tantrum? In each of these cases, we seek an explanation for some event—an explanation that’s likely to appeal to one or more antecedent causes. The doctor might conclude that a virus caused the symptoms, the engineer that defects in cast iron caused the collapse, and the parent that the toy’s disappearance caused the tantrum.

Not all explanations are causal, and not all causes are explanatory. Explanations in mathematics, for example, are typically taken to be non-causal, and many causal factors are either not explanatory at all, or only explanatory under particular circumstances. (Consider, for instance, appealing to the big bang as an explanation for today’s inflation rates, or the presence of oxygen as an explanation for California wildfires.) Nonetheless, causation and explanation are closely related, with many instances of causal reasoning featuring explanations and explanatory considerations, and many instances of abductive inference and explanation appealing to causes and causal considerations. The goal of the present chapter is to identify some of the connections between explanation and causation, with a special focus on how the study of explanation can inform our understanding of causal reasoning.

The chapter is divided into five sections. In the first three, we review an emerging body of work on the role of explanation in three types of causal reasoning: drawing inferences about the causes of events, learning novel causal structures, and assigning causal responsibility. In the fourth section, we consider different kinds of explanations, including a discussion of whether each kind is properly “causal” and how different kinds of explanations can differentially influence causal judgments. In the fifth section, we focus on causal explanations that appeal to mechanisms, and consider the relationship between explanation, causal claims, and mechanisms. Finally, we conclude with some important questions for future research.

Causal Inference and Inference to the Best Explanation

Consider a doctor who infers, on the basis of a patient’s symptoms, that the patient has a particular disease—one known to cause that cluster of symptoms. We will refer to such instances of causal reasoning as “causal inference,” and differentiate them from two other kinds of causal reasoning that we will discuss in subsequent sections: causal learning (which involves learning about novel causes and relationships at the type level) and assigning causal responsibility (which involves attributing an effect to one or more causes, all of which could have occurred and could have contributed to the effect).

How might explanation influence causal inference? One possibility is that people engage in a process called “inference to the best explanation” (IBE). IBE was introduced into the philosophical literature by Gilbert Harman in a 1965 paper, but the idea is likely older, and closely related to what is sometimes called “abductive inference” ( Douven, 2011 ; Lombrozo, 2012 , 2016 ; Peirce, 1955 ). The basic idea is that one infers that a hypothesis is likely to be true based on the fact that it best explains the data. To borrow vocabulary from another influential philosopher of explanation, Peter Lipton, one uses an explanation’s “loveliness” as a guide to its “likeliness” ( Lipton, 2004 ).

A great deal of work has aimed to characterize how people go about inferring causes from patterns of evidence ( Cheng, 1997 ; Cheng & Novick, 1990 , 1992 ; Glymour & Cheng, 1998 ; Griffiths & Tenenbaum, 2005 ; Kelley, 1973 ; Perales & Shanks, 2003 ; Shanks & Dickinson, 1988 ; Waldmann & Hagmayer, 2001 ; see Buehner, 2005 ; Holyoak & Cheng, 2011 ; Waldmann & Hagmayer, 2013 , for reviews), and this work is summarized in other chapters of this volume (see Part I , “Theories of Causal Cognition,” and Meder & Mayrhofer, Chapter 23 , on diagnostic reasoning). Thus a question that immediately presents itself is whether IBE is distinct from the kinds of inference these models typically involve, such as analyses of covariation or Bayesian inference. For most advocates of IBE, the answer is “yes”: IBE is a distinct inferential process, where the key commitment is that explanatory considerations play a role in guiding judgments. These considerations can include the simplicity, scope, or other “virtues” of the explanatory hypotheses under consideration.

To provide evidence for IBE as a distinctly explanatory form of inference, it is thus important to identify explanatory virtues, and to demonstrate their role in inference. The most direct evidence of this form comes from research on simplicity ( Bonawitz & Lombrozo, 2012 ; Lombrozo, 2007 ; Pacer & Lombrozo, in preparation), scope ( Khemlani, Sussman, & Oppenheimer, 2011 ), and explanatory power (Douven & Schupbach, 2015a , 2015b ). We focus on this research for the remainder of the section.

In one study from Lombrozo (2007) , participants learned novel causal structures describing the relationships between diseases and symptoms on an alien planet. For example, the conjunction of two particular symptoms—say “sore minttels” and “purple spots”—could be explained by appeal to a single disease that caused both symptoms (Tritchet’s syndrome), or by appeal to the conjunction of two diseases that each caused one symptom (Morad’s disease and a Humel infection). Lombrozo set out to test whether participants would favor the explanation that was simpler in the sense that it invoked a single common cause over two independent causes, and whether they would do so even when probabilistic evidence, in the form of disease base rates, favored the more complex explanation. Lombrozo found that participants’ explanation choices were a function of both simplicity and probability, with a substantial proportion of participants selecting the simpler explanation even when it was less likely than the complex alternative. This is consistent with the idea that an explanation’s “loveliness”—in this case, its simplicity—is used as a basis for inferring its “likeliness.”
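To make the tension between simplicity and probability concrete, the following sketch uses hypothetical base rates (not the values used in the study) and assumes that each disease deterministically produces its associated symptom(s) and that diseases occur independently. Under those assumptions, the “complex” two-disease explanation can be objectively more probable than the “simple” single-disease one:

```python
# Hypothetical base rates; the actual values varied across conditions in Lombrozo (2007).
base_rate_tritchets = 0.02  # Tritchet's syndrome causes both symptoms
base_rate_morads = 0.20     # Morad's disease causes sore minttels only
base_rate_humel = 0.20      # a Humel infection causes purple spots only

# Assuming deterministic causation and independently occurring diseases:
p_simple = base_rate_tritchets                  # one common cause explains both symptoms
p_complex = base_rate_morads * base_rate_humel  # two independent causes, one per symptom

print(f"P(simple explanation)  = {p_simple:.3f}")   # 0.020
print(f"P(complex explanation) = {p_complex:.3f}")  # 0.040
```

With these illustrative numbers the complex explanation is twice as likely, yet a substantial proportion of participants still preferred the simpler, single-disease explanation.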

In subsequent work, Bonawitz and Lombrozo (2012) replicated the same basic pattern of results with 5-year-old children in a structurally parallel task: children observed a toy generating two effects (a light and a spinning fan), and had to infer whether one block (which generated both effects) or two blocks (which each generated one effect) fell into the toy’s activator bin. In this case, probabilistic information was manipulated across participants by varying the number of blocks of each type and the process by which they fell into the bin. Interestingly, adults did not show a preference for simplicity above and beyond probability in this task, while the 5-year-olds did. Bonawitz and Lombrozo suggest that in the face of probabilistic uncertainty—of the kind that is generated by a more complex task like the alien diagnosis problems used in Lombrozo (2007) —adults rely on explanatory considerations such as simplicity to guide assessments of probability. But when a task involves a transparent and seemingly deterministic causal system, and when the numbers involved are small (as was the case for the task developed for young children in Bonawitz and Lombrozo, 2012 ), adults may engage in more explicit probabilistic reasoning, and may bypass explanatory considerations altogether. Consistent with this idea, adults in Lombrozo (2007) also ceased to favor simplicity when they were explicitly told that the complex hypothesis was most likely to be true.

In more recent work, Pacer and Lombrozo (in preparation) provide a more precise characterization of how people assess an explanation’s simplicity. They differentiate two intuitive metrics for causal explanations, both of which are consistent with prior results: “node simplicity,” which involves counting the number of causes invoked in an explanation; and “root simplicity,” which involves counting the number of unexplained causes invoked in an explanation. For example, suppose that Dr. Node explains a patient’s symptoms by appeal to pneumonia and sarcoma—two diseases—and that Dr. Root explains the symptoms by appeal to pneumonia, sarcoma, and HIV, where HIV is a cause (or at least a contributing factor) for both pneumonia and sarcoma. Dr. Root has invoked more causes than Dr. Node (three versus two), and so her explanation is less simple according to node simplicity. But Dr. Root has explained the symptoms by appeal to only one unexplained cause (HIV), as opposed to Dr. Node’s two (pneumonia and sarcoma), so her explanation is simpler according to root simplicity. Extending the basic method developed by Lombrozo (2007), Pacer and Lombrozo found strong evidence that people favor explanations that invoke fewer unexplained causes (i.e., explanations that are simpler in the root sense), above and beyond what is warranted by the frequency information they were provided, but no evidence that people are sensitive to node simplicity. By using appropriate causal structures, they were able to rule out alternative explanations for these results (e.g., that people prefer explanations that involve intervening variables).
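The two metrics are easy to state precisely. The sketch below is our own encoding of the Dr. Node / Dr. Root example (not code from the study): node simplicity counts all invoked causes, while root simplicity counts only those invoked causes that have no invoked cause upstream of them.

```python
def node_simplicity(invoked_causes):
    """Node simplicity: the number of causes invoked by the explanation."""
    return len(invoked_causes)

def root_simplicity(invoked_causes, parents):
    """Root simplicity: the number of invoked causes that are left unexplained,
    i.e., that have no invoked cause upstream of them."""
    invoked = set(invoked_causes)
    return sum(1 for c in invoked if not (parents.get(c, set()) & invoked))

parents = {"pneumonia": {"HIV"}, "sarcoma": {"HIV"}}  # HIV contributes to both diseases

dr_node = ["pneumonia", "sarcoma"]          # two causes, both unexplained
dr_root = ["pneumonia", "sarcoma", "HIV"]   # three causes, only HIV unexplained

print(node_simplicity(dr_node), root_simplicity(dr_node, parents))  # 2 2
print(node_simplicity(dr_root), root_simplicity(dr_root, parents))  # 3 1
```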

These findings suggest that in drawing causal inferences, people do not simply engage in probabilistic inference on the basis of frequency information. In addition to frequency information, they use explanatory considerations (in this case, a preference for fewer unexplained root causes) to guide their judgments, at least in the face of probabilistic uncertainty. The findings therefore suggest that IBE plays a role in inferences concerning causal events. But is this effect restricted to simplicity, or do other explanatory considerations play a role as well? Research to date supports a role for two additional factors: narrow latent scope and explanatory power.

An explanation’s “latent scope” refers to the number of unverified effects that the explanation predicts. For example, an observed symptom could be explained by appeal to a disease that predicts that single symptom, or by appeal to a disease that additionally predicts an effect that has not yet been tested for and is hence unobserved (e.g., whether the person has low blood levels of some mineral). In this case, the former explanation has narrower latent scope. Khemlani, Sussman, and Oppenheimer (2011) found that people favor explanations with narrow latent scope, even if the two diseases are equally prevalent. Importantly, they also find that latent scope affects probability estimates: explanations with narrow latent scope are judged more likely than those with broader latent scope (see also Johnson, Johnston, Toig, & Keil, 2014 , for evidence that explanatory scope informs causal strength inferences, and Johnston, Johnson, Koven, & Keil, 2015 , for evidence of latent scope bias in children). Thus latent scope appears to be among the cues to explanatory “loveliness” that affects the perceived “likeliness” of explanatory hypotheses.

Finally, recent work by Douven and Schupbach ( 2015a , 2015b ) provides further evidence of a role for explanatory considerations in inference, with hints that the relevant consideration is “explanatory power.” Employing a quite different paradigm, Douven and Schupbach demonstrate that people’s explanatory judgments better predict their estimates of posterior probability than do objective probabilities on their own. In a study reported in Douven and Schupbach (2015a) , participants observed 10 balls successively drawn from one of two urns, which was selected by a coin flip. One urn contained 30 black balls and 10 white balls, and the other contained 15 black balls and 25 white ones. After each draw, participants were asked to consider the evidence so far, and to rate the “explanatory goodness” of each of two hypotheses: the hypothesis that the balls were drawn from the 30/10 urn, or the hypothesis that the balls were drawn from the 15/25 urn. Participants were also asked to estimate a posterior probability for each hypothesis after each draw. In a series of models, Douven and Schupbach tested whether people’s judgments of the explanatory “goodness” of each hypothesis improved model predictions of their subjective posterior probabilities, above and beyond the objective posteriors calculated on the basis of the data presented to each participant. They found that models incorporating these explanatory judgments outperformed alternatives, even when appropriately penalized for using additional predictors.

Douven and Schupbach’s (2015a) results suggest that explanatory considerations do inform assessments of probability, and that these considerations diverge from posterior probability. However, the findings do not pinpoint the nature of the explanatory considerations themselves. On what basis were participants judging one hypothesis more or less explanatory than the other? Additional analyses of these data, reported in Douven and Schupbach (2015b), provide some hints: models that took into account some measure of “explanatory power”—computed on the basis of the objective probabilities—outperformed the basic model that only considered posteriors. The best-performing model employed a measure based on Good (1960) that roughly tracks confirmation: it takes the log of the ratio of the probability of the data given the hypothesis to the probability of the data. In other work, Schupbach (2011) finds evidence that people’s judgments of an explanation’s “goodness” are related to another measure of explanatory power, proposed by Schupbach and Sprenger (2011), which is also related to Bayesian measures of confirmation.
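As a rough illustration of how posterior probability and this log-ratio measure come apart computationally, the sketch below applies both to the urn setup described above. We assume draws with replacement for simplicity (the original study sampled from finite urns), and the draw sequence is our own example, not data from the study.

```python
import math

p_black = {"urn_30_10": 30 / 40, "urn_15_25": 15 / 40}  # chance of drawing black from each urn
prior = {"urn_30_10": 0.5, "urn_15_25": 0.5}            # urn selected by a fair coin flip

def likelihood(draws, urn):
    """Probability of the observed sequence of draws under a given urn (with replacement)."""
    return math.prod(p_black[urn] if ball == "black" else 1 - p_black[urn] for ball in draws)

draws = ["black", "black", "white", "black"]  # an illustrative sequence of draws

p_data = sum(likelihood(draws, h) * prior[h] for h in prior)
for h in prior:
    posterior = likelihood(draws, h) * prior[h] / p_data
    # Good-style measure described above: log of P(data | hypothesis) / P(data)
    power = math.log(likelihood(draws, h) / p_data)
    print(f"{h}: posterior = {posterior:.3f}, explanatory power = {power:+.3f}")
```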

These findings suggest that explanatory considerations—in the form of root simplicity, latent scope, and explanatory power—inform causal inference, and in so doing reveal something potentially surprising: that while people’s responses to evidence are systematic, they do not (always) lead to causal inferences that track the posterior probabilities of each causal hypothesis. This not only supports a role for explanatory considerations in causal inference, but also challenges the idea that identifying causes to explain effects is essentially a matter of conditionalizing on the effects to infer the most likely cause. Further challenging this idea, Pacer, Williams, Chen, Lombrozo, and Griffiths (2013) compare judgments of explanatory goodness from human participants to those generated by four distinct computational models of explanation in causal Bayesian networks, and find that models that compute measures of evidence or information considerably outperform those that compute more direct measures of (posterior) probability.

In sum, there is good evidence that people engage in a process like IBE when drawing inferences about causal events: they use explanatory considerations to guide their assessments of which causes account for observed effects, and of how likely candidate hypotheses are to be true. The most direct evidence to date concerns root simplicity, latent scope, and explanatory power, but there is indirect evidence that other explanatory considerations, such as coherence, completeness, and manifest scope, may play a similar role ( Pennington & Hastie, 1992 ; Read & Marcus-Newhall, 1993 ; Preston & Epley, 2005 ; Thagard, 1989 ; Williams & Lombrozo, 2010 ).

Before concluding this section on IBE in causal inference, it is worth considering the normative implications of this work. It is typically assumed that Bayesian updating provides the normatively correct procedure for revising belief in causal hypotheses in light of the evidence. Do the findings reported in this section describe a true departure from Bayesian inference, and therefore a systematic source of error in human judgment? This is certainly one possibility. For example, it could be that IBE describes an imperfect algorithm by which people approximate Bayesian inference. If this is the case, it becomes an interesting project to spell out when and why explanatory considerations ever succeed in approximating more direct probabilistic inference.

There are other possibilities, however. In particular, an appropriately specified Bayesian model could potentially account for these results. In fact, some have argued that IBE-like inference could simply fall out of hierarchical Bayesian inference with suitably assigned priors and likelihoods ( Henderson, 2014 ), in which case there could be a justified, Bayesian account of this behavior. It could also be that the Bayesian models implicit in the comparisons between people’s judgments and posterior probabilities fail to describe the inference that people are actually making. In their chapter in this volume on diagnostic reasoning, for example, Meder and Mayrhofer (Chapter 23 ) make the important point that there can be more than one “Bayesian” model for a given inference, and in fact find different patterns of inference for models that make different assumptions when it comes to elemental diagnostic reasoning: inferring the value of a single binary cause from a single binary effect, which has clear parallels to the cases considered here. In particular, they argue for a model that takes into account uncertainty in causal structures over one that simply computes the empirical conditional probability of a cause given an effect. Similarly, it could be that the “departures” from Bayesian updating observed here reflect the consequences of a Bayesian inference that involves more than a straight calculation of posteriors.

Finally, some argue that IBE corresponds to a distinct but normatively justifiable alternative to Bayesianism (e.g., Douven & Schupbach, 2015a ). In particular, while Bayesian inference may be the best approach for minimizing expected inaccuracy in the long run, it could be that a process like IBE dominates Bayesian inference when the goal is, say, to get things mostly right in the short term, or to achieve some other aim ( Douven, 2013 ). It could also be that explanation judgments take considerations other than accuracy into account, such as the ease with which the explanation can be communicated, remembered, or used in subsequent processing. These are all important possibilities to explore in future research.

Causal Learning and the Process of Explaining

Consider a doctor who, when confronted with a recurring pattern of symptoms, posits a previously undocumented disease, or a previously unknown link between some pathogen and those symptoms. In each case, the inference involves a change in the doctor’s beliefs about the causal structure of the world, not only about the particular patient’s illness. This kind of inference, which we will refer to as causal model learning , differs from the kinds of causal inferences considered in the preceding section in that the learner posits a novel cause or causal relation, not (only) a new token of a known type.

Just as explanatory considerations can influence causal inference, it is likely that a process like IBE can guide causal model learning. In fact, “Occam’s Razor,” the classic admonition against positing unnecessary types of entities ( Baker, 2013 ), is typically formulated and invoked in the context of positing novel types, not tokens of known types. However, research to date has not (to our knowledge) directly explored IBE in the context of causal model learning. Doing so would require assessing whether novel causes or causal relations are more likely to be inferred when they provide better explanations.

What we do know is that engaging in explanation—the process—can affect the course of causal learning. In particular, a handful of studies with preschool-aged children suggest that being prompted to explain, even without feedback on the content or quality of explanations, can promote understanding of number conservation (Siegler, 1995) and of physical phenomena (e.g., a balance beam; Pine & Siegler, 2003), and recruit causal beliefs that are not invoked spontaneously to guide predictions (Amsterlaw & Wellman, 2006; Bartsch & Wellman, 1995; Legare, Wellman, & Gelman, 2009). Prompts to explain can also accelerate children’s understanding of false belief (Amsterlaw & Wellman, 2006; Wellman & Lagattuta, 2004; see Wellman & Liu, 2007, and Wellman, 2011, for reviews), which requires a revision from one causal model of behavior to a more complex model involving an unobserved variable (belief) and a causal link between beliefs and behavior (e.g., Goodman et al., 2006). Finally, there is evidence that prompting children to explain can lead them to preferentially learn about and remember causal mechanisms over causally irrelevant perceptual details (Legare & Lombrozo, 2014), and that prompting children to explain makes them more likely to generalize internal parts and category membership from some objects to others on the basis of shared causal affordances as opposed to perceptual similarity (Walker, Lombrozo, Legare, & Gopnik, 2014; see also Muentener & Bonawitz, Chapter 33 in this volume, for more on children’s causal learning).

To better understand the effects of explanation on children’s causal learning, Walker, Lombrozo, Williams, Rafferty, and Gopnik (2016) set out to isolate effects of explanation on two key factors in causal learning: evidence and prior beliefs. Walker et al. used the classic “blicket detector” paradigm ( Gopnik & Sobel, 2000 ), in which children observe blocks placed on a machine, where some of the blocks make the machine play music. Children have to learn which blocks activate the machine, which can involve positing a novel kind corresponding to a subset of blocks, and/or positing a novel causal relationship between those blocks (or some of their features) and the machine’s activation.

In Walker et al.’s studies, 5-year-old children observed eight blocks successively placed on the machine, where four activated the machine and four did not. Crucially, half the children were prompted to explain after each observation (“Why did [didn’t] this block make my machine play music?”), and the remaining children, in the control condition, were asked to report the outcome (“What happened to my machine when I put this block on it? Did it play music?”). This control task was intended to match the explanation condition in eliciting a verbal response and drawing attention to the relationship between each block and the machine, but without requiring that the child explain.

Across studies, Walker et al. (2016) varied the properties of the blocks to investigate whether prompting children to explain made them more likely to favor causal hypotheses that were more consistent with the data (i.e., one hypothesis accounted for 100% of observations and the other for 75%) and/or more consistent with prior beliefs (i.e., one hypothesis involved heavier blocks activating the machine, which matched children’s initial assumptions; the other involved blocks of a given color activating the machine). When competing causal hypotheses were matched in terms of prior beliefs but varied in the evidence they accounted for, children who were prompted to explain were significantly more likely than controls to favor the hypothesis with stronger evidence. And when competing causal hypotheses were matched in terms of evidence but varied in their consistency with prior beliefs, children who were prompted to explain were significantly more likely than controls to favor the hypothesis with a higher prior. In other words, explaining made children more responsive to both crucial ingredients of causal learning: evidence and prior beliefs.

In their final study, Walker et al. (2016) considered a case in which evidence and prior beliefs came into conflict: a hypothesis that accounted for 100% of the evidence (“blue blocks activate the machine”) was pitted against a hypothesis favored by prior beliefs (“big blocks activate the machine”), but that only accounted for 75% of the evidence. In this case, children who were prompted to explain were significantly more likely than controls to go with prior beliefs, guessing that a novel big block rather than a novel blue block would activate the machine. This pattern of responses was compared against the predictions of a Bayesian model that incorporated children’s own priors and likelihoods as estimated from an independent task. The results suggested that children who were prompted to explain were less likely than children in the control condition to conform to Bayesian inference. This result may seem surprising in light of explainers’ greater sensitivity to both evidence and prior beliefs, which suggests that explaining results in “better” performance. However, it is less surprising in light of the findings reported in the previous section, which consistently point to a divergence between explanation-based judgments and assessments of posterior probability.
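For readers who want a feel for the Bayesian benchmark, the sketch below compares the two hypotheses from this final study under Bayes’ rule. The prior and the noise parameter are illustrative placeholders, not the children’s measured estimates from the independent task.

```python
# Illustrative numbers only: in the study, priors and likelihoods were estimated
# from children's own responses on an independent task.
prior = {"big blocks activate": 0.7, "blue blocks activate": 0.3}

noise = 0.1  # assumed probability of an observation that contradicts the hypothesis
likelihood = {
    "big blocks activate": (1 - noise) ** 6 * noise ** 2,  # fits 6 of the 8 observations (75%)
    "blue blocks activate": (1 - noise) ** 8,              # fits all 8 observations (100%)
}

evidence = sum(prior[h] * likelihood[h] for h in prior)
for h in prior:
    print(f"P({h} | data) = {prior[h] * likelihood[h] / evidence:.3f}")
# -> roughly 0.03 for "big blocks" and 0.97 for "blue blocks" with these numbers.
```

With these illustrative numbers the evidence swamps the prior, so a learner following this computation favors the “blue blocks” hypothesis; children who were prompted to explain tended instead to side with the prior-favored “big blocks” hypothesis.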

While the evidence summarized thus far is restricted to preschool-aged children, it is likely that similar processes operate in older children and adults. For instance, Kuhn and Katz (2009) had fourth-grade children engage in a causal learning task that involved identifying the causes of earthquakes by observing evidence. The children subsequently participated in a structurally similar causal learning task involving an ocean voyage, where half were instructed to explain the basis for each prediction that they made, and those in a control group were not. When the same students completed the earthquake task in a post-test, those who had explained generated a smaller number of evidence-based inferences; instead, they seemed to rely more heavily on their (mistaken) prior beliefs, in line with the findings from Walker et al. (2016) . In a classic study with eighth-grade students, Chi, De Leeuw, Chiu, and LaVancher (1994) prompted students to “self-explain” as they read a passage about the circulatory system, with students in the control condition instead prompted to read the text twice. Students who explained were significantly more likely to acquire an accurate causal model of the circulatory system, in part, they suggest, because explaining “involved the integration of new information into existing knowledge”—that is, the coordination of evidence with prior beliefs. Finally, evidence with adults investigating the effects of explanation in categorization tasks mirrors the findings from Walker et al. (2016) , with participants who explain both more responsive to evidence ( Williams & Lombrozo, 2010 ) and more likely to recruit prior beliefs ( Williams & Lombrozo, 2013 ).

Why does the process of explaining affect causal learning? One possibility is that explaining simply leads to greater attention or engagement. This is unlikely for a variety of reasons. Prior work has found that while explaining leads to some improvements in performance, it also generates systematic impairments. In one study, children prompted to explain were significantly less likely than controls to remember the color of a gear in a gear toy ( Legare & Lombrozo, 2014 ); in another, they were significantly less likely to remember which sticker was placed on a block ( Walker et al., 2014 ). Research with adults has also found that a prompt to explain can slow learning and increase error rates in a category learning task ( Williams, Lombrozo, & Rehder, 2013 ). Moreover, the findings from the final study of Walker et al. (2016) suggest that prompting children to explain makes them look less, not more, like ideal Bayesian learners. Far from generating a global boost in performance, explanation seems to generate highly selective benefits.

A second possibility is that explaining plays a motivational role that is specifically tied to causal learning. In a provocatively titled paper (“Explanation as Orgasm and the Drive for Causal Understanding”), Gopnik (2000) argues that the phenomenological satisfaction that accompanies a good explanation is part of what motivates us to learn about the causal structure of the world. Prompting learners to explain could potentially ramp up this motivational process, directing children and adults to causal relationships over causally irrelevant details (consistent with Legare & Lombrozo, 2014 ; Walker et al., 2014 ). Explaining could also affect the course of causal inquiry itself, with effects on which data are acquired and how they inform beliefs (see Legare, 2012 , for preliminary evidence that explanation guides exploration).

Finally (and not mutually exclusively), it could be that effects of explanation on learning are effectively a consequence of IBE—that is, that in the course of explaining, children generate explanatory hypotheses, and these explanatory hypotheses are evaluated with “loveliness” as a proxy for “likeliness.” For instance, in Walker et al. (2016), children may have favored the hypothesis that accounted for more evidence because it had greater scope or coverage, and the hypothesis consistent with prior knowledge because it provided a specification of mechanism or greater coherence. We suspect that this is mostly, but only mostly, correct. Some studies have found that children who are prompted to explain outperform those in control conditions even when they fail to generate the right explanation, or any explanation at all (Walker et al., 2014). This suggests the existence of some effects of engaging in explanation that are not entirely reducible to the effects of having generated any particular explanation.

While such findings are puzzling on a classic interpretation of IBE, they can potentially be accommodated with a modified and augmented version (Lombrozo, 2012 , 2016 ; Wilkenfeld & Lombrozo, 2015 ). Wilkenfeld and Lombrozo (2015) argue for what they call “explaining for the best inference” (EBI), an inferential practice that differs from IBE in focusing on the process of explaining as opposed to candidate explanations themselves. While IBE and EBI are likely to go hand in hand, there could be cases in which the explanatory processes that generate the best inferences are not identical with those promoted by possessing the best explanations, and EBI allows for this possibility.

In sum, there is good evidence that the process of engaging in explanation influences causal learning. This is potentially driven by effects of explanation on the evaluation of both evidence and prior beliefs ( Walker et al., 2016 ). One possibility is that by engaging in explanation, learners are more likely to favor hypotheses that offer “lovely” explanations (Lombrozo, 2012 , 2016 ), and to engage in cognitive processes that affect learning even when a lovely or accurate explanation is not acquired ( Wilkenfeld & Lombrozo, 2015 ). It is not entirely clear, however, whether and when these effects of explanation lead to “better” causal learning. The findings from Amsterlaw and Wellman (2006) and Chi et al. (1994) suggest that effects can be positive, accelerating conceptual development and learning. Other findings are more mixed (e.g., Kuhn & Katz, 2009 ), with the modeling result from Walker et al. (2016) suggesting that prompting children to explain makes them integrate evidence and prior beliefs in a manner that corresponds less closely to Bayesian inference. Better delineating the contours of explanation’s beneficial and detrimental effects will be an important step for future research. It will also be important to investigate how people’s tendency to engage in explanation spontaneously corresponds to these effects. That is, are the conditions under which explaining is beneficial also the conditions under which people tend to spontaneously explain?

Assigning Causal Responsibility

The previous sections considered two kinds of causal reasoning, one involving novel causal structures and the other causal events generated by known structures. Another important class of causal judgments involves the assignment of causal responsibility : to which cause(s) do we attribute a given effect? For instance, a doctor might attribute her patient’s disease to his weak immune system or to a cold virus, when both are in fact present and play a causal role.

Causal attribution has received a great deal of attention within social psychology, with the classic conundrum concerning the attribution of some behavior to a person (“she’s so clumsy!”) versus a situation (“the staircase is so slippery!”) (for reviews, see Fiske & Taylor, 2013 ; Kelley & Michela, 1980 ; Malle, 2004 ). While this research is often framed in terms of causation, it is natural to regard attribution in terms of explanation, with attributions corresponding to an answer to the question of why some event occurred (“Why did Ava slip?”). In his classic “ANOVA model,” Kelley ( 1967 , 1973 ) proposed that people effectively carry out an analysis of covariation between the behavior and a number of internal and external factors, such as the person, stimulus, and situation. For example, to explain why Ava slipped on the staircase yesterday, one would consider how this behavior fares along the dimensions of consensus (did other people slip?), the distinctiveness of the stimulus (did she slip only on that staircase?), and consistency across situations (does she usually slip, or was it the only time she did so?). Subsequent work, however, has identified a variety of additional factors that influence people’s attributions (e.g., Ahn, Kalish, Medin, & Gelman, 1995 ; Försterling, 1992 ; Hewstone & Jaspars, 1987 ; McArthur, 1972 ), and some have challenged the basic dichotomy on which the person-versus-situation analysis is based (Malle, 1999 , 2004 ; Malle, Knobe, O’Laughlin, Pearce, & Nelson, 2000 ). (We direct readers interested in social attribution to Hilton, Chapter 32 in this volume.)
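As a toy illustration of this covariation logic, the sketch below encodes the canonical patterns usually associated with Kelley’s model; the mapping is our simplified rendering for the slipping example, not Kelley’s own formalism.

```python
def attribute(consensus: bool, distinctiveness: bool, consistency: bool) -> str:
    """Map the three covariation dimensions onto a canonical attribution."""
    if not consistency:
        return "circumstances (a one-off situation)"
    if consensus and distinctiveness:
        return "the stimulus (e.g., the slippery staircase)"
    if not consensus and not distinctiveness:
        return "the person (e.g., Ava's clumsiness)"
    return "mixed or underdetermined"

# Everyone slips there, Ava slips only there, and it happens whenever she uses those stairs:
print(attribute(consensus=True, distinctiveness=True, consistency=True))
# Others don't slip, Ava slips on many staircases, and she does so regularly:
print(attribute(consensus=False, distinctiveness=False, consistency=True))
```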

Assignments of causal responsibility also arise in the context of what is sometimes called “causal selection”: the problem of deciding which cause or causes in a chain or other causal structure best explain or account for some effect. Such judgments are especially relevant in moral and legal contexts, where they are closely tied to attributions of blame. For example, suppose that someone steps on a log, which pushes a boulder onto a picnic blanket, crushing a chocolate pie. The person, the log, and the boulder all played a causal role in the pie’s destruction, but various factors might influence our assignment of causal responsibility, including the location of each factor in the chain, whether and by how much it increased the probability of the outcome, and whether the person intended and foresaw the culinary catastrophe (see, e.g., Hart & Honoré, 1985 ; Hilton, McClure, & Sutton, 2009 ; Lagnado & Channon, 2008 ; McClure, Hilton, & Sutton, 2007 ; Spellman, 1997 ). (Chapter 29 in this volume, in which Lagnado and Gerstenberg discuss moral and legal reasoning, explores these issues in detail; also relevant is Chapter 12 by Danks on singular causation.)

While research has not (to our knowledge) investigated whether explanatory considerations such as simplicity and explanatory power influence judgments of causal responsibility, ideas from the philosophy and psychology of explanation can usefully inform research on this topic. For example, scholars of explanation often emphasize the ways in which an explanation request is underspecified by a why-question itself. When we ask, “Why did Ava slip on the stairs?” the appropriate response is quite different if we’re trying to get at why Ava slipped (as opposed to Boris) than if we’re trying to get at why Ava slipped on the stairs (as opposed to the landing). These questions involve a shift in what van Fraassen (1980) calls a “contrast class,” that is, the set of alternatives to the target event that the explanation should differentiate from the target via some appropriate relation (see also Cheng & Novick, 1991 ).

McGill (1989) showed in a series of studies that a number of previously established effects in causal attribution—effects of perspective (actor vs. observer; Jones & Nisbett, 1971 ), covariation information (consensus and distinctiveness; Kelley, 1967 ), and the valence of the behavior being explained (positive vs. negative; Weiner, 1985 )—are related to shifts in the contrast class. Specifically, by manipulating the contrast class adopted by participants, McGill was able to eliminate the actor–observer asymmetry, interfere with the roles of consensus and distinctiveness information, and counteract self-serving attributions of positive versus negative performance. These findings underscore the close relationship between attribution and explanation.

Focusing on explanation is also helpful in bringing to the foreground questions of causal relevance as distinct from probability . In a 1996 paper, Hilton and Erb presented a set of studies designed to clearly differentiate these notions. In one study, Hilton and Erb showed that contextual information can influence the perceived “goodness” and relevance of an explanation without necessarily affecting its probability. For example, participants were asked to rate the following explanation of why a watch broke (an example adapted from Einhorn & Hogarth, 1986 ): “the watch broke because the hammer hit it.” This explanation was rated as fairly good, relevant, and likely to be true; however, after learning that the hammer hit the watch during a routine testing procedure at a watch factory, participants’ ratings of explanation quality and relevance dropped. In contrast, ratings of probability remained high, suggesting that causal relevance and the probability of an explanation can diverge, and that these two factors differ in their susceptibility to this contextual manipulation. It is possible that these effects were generated by a shift in contrast, from “Why did this watch break now (as opposed to not breaking now)?” to “Why did this watch break (as opposed to some other watch breaking)?”

More recently, Chin-Parker and Bradner (2010) showed that effects of background knowledge and implicit contrasts extend to the generation of explanations. They manipulated participants’ background assumptions by presenting a sequence of causal events that either did or did not seem to unfold toward a particular functional outcome (when it did, the sequence appeared to represent a closed-loop system functioning in a self-sustaining manner). Participants’ explanations of an ambiguous observation at the end of the sequence tended to invoke a failure of a system to perform its function in the former case, but featured proximal causes in the latter case. (In contrast to prior research, context did not affect explanation evaluation in this design.)

Taken together, these studies offer another set of examples of how explanatory considerations (in this case, the contextually determined contrast class) can influence causal judgments, and suggest that ascriptions of causal responsibility may vary depending on how they are framed: in terms of causal relevance and explanation, or in terms of probability and truth. It is also possible that considerations such as simplicity and scope play a role in assigning causal responsibility, above and beyond their roles in causal inference and learning. These are interesting questions for future research.

The Varieties of Causal Explanation

There is no agreed-upon taxonomy for explanations; in fact, even the distinction between causal and non-causal explanation generates contested cases. For instance, consider an example from Putnam (1975) . A rigid board has a round hole and a square hole. A peg with a square cross-section passes through the square hole, but not the round hole. Why? Putnam suggests that this can be explained by appeal to the geometry of the rigid objects (which is not causal), without appeal to lower-level physical phenomena (which are presumably causal). Is this a case of non-causal explanation? Different scholars provide different answers.

One taxonomy that has proven especially fruitful in the psychological study of explanation has roots in Aristotle’s four causes (efficient, material, final, and formal), which are sometimes characterized not as causes per se, but in terms of explanation—as distinct answers to a “why?” question ( Falcon, 2015 ). Efficient causes, which identify “the primary source of the change or rest” (e.g., a carpenter who makes a table), seem like the most canonically causal. Material causes, which specify “that out of which” something is made (e.g., wood for a table), are not causal in a narrow sense (for instance, we wouldn’t say that the wood causes or is a cause of the table), but they nonetheless play a clear causal role in the production of an object. Final and formal causes are less clearly causal; but, as we consider in the following discussion, there are ways in which each could be understood causally, as well.

First, consider final causes, which offer “that for the sake of which a thing is done.” Final cause explanations (or perhaps more accurately, their contemporary counterparts) are also known as teleological or functional explanations, as they offer a goal or a function. For instance, we might explain the detour to the café by appeal to a goal (getting coffee), or the blade’s sharpness by appeal to its function (slicing vegetables). On the face of it, these explanations defy the direction of causal influence: they explain a current event (the detour) or property (the sharpness) by appeal to something that occurs only later (the coffee acquisition or the vegetable slicing). Nonetheless, some philosophers have argued that teleological explanations can be understood causally (e.g., Wright, 1976 ), and there is evidence that adults ( Lombrozo & Carey, 2006 ) and children ( Kelemen & DiYanni, 2005 ) treat them causally, as well (see also Chaigneau, Barsalou, & Sloman, 2004 , and Lombrozo & Rehder, 2012 , for more general investigations of the causal structure of functions).

How can teleological explanations be causal? On Wright’s view, teleological explanations do not explain the present by appeal to the future—rather, the appeal to an unrealized goal or function is a kind of shorthand for a complex causal process that brought about (and hence preceded ) what is being explained. In cases of intentional action, the function or goal could be a shorthand for the corresponding intention that came first: the detour to the café was caused by a preceding intention to get coffee, and the blade’s sharpness was caused by the designer’s antecedent intention to create a tool for slicing vegetables. Other cases, however, can be more complex. For instance, we might explain this zebra’s stripes by appeal to their biological function (camouflage) because its ancestors had stripes that produced effective camouflage, and in part for that reason, stripes were increased or maintained in the population. If past zebra stripes didn’t produce camouflage, then this zebra wouldn’t have stripes (indeed, this zebra might not exist at all). In this case, the function can be explanatory because it was produced by “a causal process sensitive to the consequences of changes it produces” ( Lombrozo & Carey, 2006 ; Wright, 1976 ), even in the absence of a preceding intention to realize the function.

Lombrozo and Carey (2006) tested these ideas as a descriptive account of the conditions under which adults accept teleological explanations. In one study, they presented participants with causal stories in which a functional property did or did not satisfy Wright’s conditions. For example, participants learned about genetically engineered gophers that eat weeds, and whose pointy claws damage the roots of weeds as they dig, making them popular among farmers. The causal role of “damaging roots” in bringing about the pointy claws varied across conditions, from no role (the genetic engineer accidentally introduced a gene sequence that resulted in gophers with pointy claws), to a causal role stemming from an intention to damage roots (the genetic engineer intended to help eliminate weeds, and to that end engineered pointy claws), to a causal role without an intention to damage roots (the genetic engineer didn’t realize that pointy claws damaged weed roots, but did notice that the pointy claws were popular and decided to create all of his gophers with pointy claws). Participants then rated the acceptability and quality of teleological (and other) explanations. For the vignette involving genetically engineered gophers, they were asked why the gophers had pointy claws, and rated “because the pointy claws damage weed roots” as a response.

In this and subsequent studies, Lombrozo and Carey (2006) found that teleological explanations are understood causally in the sense that participants only accepted teleological explanations when the function or goal invoked in the explanation played an appropriate causal role in bringing about what was being explained. More precisely, this causal requirement was necessary for teleological explanations to be accepted, but not sufficient . In the preceding examples, teleological explanations were accepted at high levels when the function was intended, at moderate levels when the function played a non-intentional causal role, and at low levels when the function played no causal role at all. Lombrozo and Carey suggest (and provide evidence) that in addition to satisfying certain causal requirements, teleological explanations might call for the existence of a general pattern that makes the function predictively useful.

Kelemen and DiYanni (2005) conducted a study with elementary school children (6–7 and 9–10-year-olds) investigating the relationship between their acceptance and generation of teleological explanations for natural phenomena, on the one hand, and their causal commitments concerning their origins, on the other hand—specifically, whether they believed that an intentional designer of some kind (“someone or something”) made them or they “just happened.” The tendency to endorse and generate teleological explanations of natural events, non-living natural objects, and animals was significantly correlated with belief in the existence of an intentional creator of some kind, be it God, a human, or an unspecified force or agent. While these findings do not provide direct support for the idea that teleological explanations are grounded in a preceding intention to produce the specific function in question, the link between teleological explanations and intentional design more generally is consistent with the idea that teleological explanations involve some basic causal commitments. Along the same lines, Kelemen, Rottman, and Seston (2013) found that adults (including professional scientists) who believe in God or “Gaia” are more likely to accept scientifically unwarranted teleological explanations (see also ojalehto, Waxman, & Medin, 2013 , for a relevant discussion). Thus, the findings to date suggest that teleological explanations are understood causally by both adults and children.

What about formal explanations? Within Aristotle’s framework, a formal explanation offers “the form” of something or “the account of what-it-is-to-be.” Within psychology, what little work there is on formal explanation has focused on explanations that appeal to category membership. For example, Prasada and Dillingham (2006) define formal explanations as stating that tokens of a type have certain properties because they are the kinds of things they are (i.e., tokens of the respective type): we can say that Zach diagnoses ailments because he is a doctor , or that a particular object is sharp because it is a knife .

In their original paper and in subsequent work, Prasada and Dillingham ( 2006 , 2009 ) argue that formal explanations are not causal, but instead are explanatory by virtue of a part–whole relationship. They show that only properties that are considered to be aspects of the kind support formal explanations, in contrast to “statistical” properties that are merely reliably associated with the kind. For example, people accepted a formal explanation of why something has four legs by reference to its category (“because it’s a dog”), and also accepted the claim that “having four legs” is one aspect of being a dog. In contrast, participants rejected formal explanations such as “that (pointing to a barn) is red because it’s a barn,” and also denied that being red is one aspect of being a barn (even though most barns are red). Prasada and Dillingham (2009) argue that the relationship underlying such formal explanation is constitutive (not causal): aspects are connected to kinds via a part–whole relationship, and such relationships are explanatory because the “existence of a whole presupposes the existence of its parts, and thus the existence of a part is rendered intelligible by identifying the whole of which it is a part” (p. 421).

Prasada and Dillingham offer two additional pieces of evidence for the proposal that formal explanations are constitutive, and not causal. First, they demonstrate the explanatory potential of the part–whole relationship by showing that when this relationship is made explicit, even statistical features can support formal explanations. For example, we can explain, “Why is that (pointing to a barn) red? Because it is a red barn,” where being red is understood as part of being a red barn ( Prasada & Dillingham, 2009 ). This explanation isn’t great, but neither is it tautological: it identifies the source of the redness in something about the red barn, as opposed, for instance, to the light that happens to be shining on it (see also Cimpian & Salomon, 2014 , on “inherent” explanations). Less convincingly, they attempt to differentiate formal explanations from causal-essentialist explanations. On causal-essentialist accounts, a category’s essence is viewed as the cause of the category members’ properties ( Gelman, 2003 ; Gelman & Hirschfeld, 1999 ; Medin & Ortony, 1989 ), which could ground formal explanations in a causal relationship. To test this, Prasada and Dillingham had participants evaluate explanations such as “Why does that (pointing to a dog) have four legs? Because it has the essence of a dog which causes it to have four legs” ( Prasada & Dillingham, 2006 ). While there was a trend for formal explanations to be rated more highly than causal-essentialist explanations for properties that were taken to be aspects of a given kind, the results were inconclusive. As Prasada and Dillingham acknowledge, the wording of the causal-essentialist explanations was awkward, which could partially account for their middling acceptance. It thus remains a possibility that at least some formal explanations are understood causally, as pointers to some category-associated essence or causal factor responsible for the properties being explained.

One reason it is valuable to recognize the diversity of explanations is that different kinds of explanations lead to systematically different patterns of causal judgment. For example, Lombrozo (2009) investigated the relationship between different kinds of causal explanations and the relative importance of features in classification (see also Ahn, 1998 ). Participants learned about novel artifacts and organisms with three causally related features. To illustrate, one item involved “holings,” a type of flower with “brom” compounds in its stem, which makes it bend over as it grows, which means its pollen can be spread to other flowers by wandering field mice. Participants were asked a why-question about the middle feature (e.g., “Why do holings typically bend over?”), which was ambiguous as a request for a mechanistic explanation (e.g., “Because of the brom compounds”) or a teleological explanation (e.g., “In order to spread their pollen”). Participants provided an explanation and were subsequently asked to decide whether novel flowers were holings, where some shared the mechanistic feature (brom compounds) and some shared the functional feature (bending over). Lombrozo found that participants who provided functional explanations in response to the ambiguous why-question were significantly more likely than participants who did not to then privilege the functional feature relative to the mechanistic feature when it came to classification. Similarly, a follow-up study found that experimentally prompting participants to generate a particular explanation type by disambiguating the why-question (“In other words, what purpose might bending over serve?”) had the same effect (see also Lombrozo & Rehder, 2012 , for additional evidence about the relationship between functions and kind classification).

Additional studies suggest that the effects of mechanistic versus functional explanations extend beyond judgments of category membership. Lombrozo and Gwynne (2014) employed a method similar to Lombrozo (2009) , presenting participants with causal chains consisting of three elements, such as a certain gene that causes a speckled pattern in a plant, which attracts butterflies that play a role in pollination. Participants explained the middle feature (the speckled pattern) and generalized a number of aspects of that feature (e.g., its density, contrast, and color) to novel entities that shared either a causal or a functional feature with the original. Lombrozo and Gwynne found that explaining a property functionally (versus mechanistically) promoted the corresponding type of generalization.

Vasilyeva and Coley (2013) demonstrated a similar link between explanation and generalization in an open-ended task. Participants learned about plants and animals possessing novel but informative properties (e.g., ducks have parasite X [or X-cells]) and generated hypotheses about which other organisms might share the property. In the course of generating these hypotheses, participants spontaneously produced formal, causal, and teleological explanations in a manner consistent with the property they reasoned about. Most important, the type of explanation predicted the type of generalization: for example, people were most likely to generalize properties to entities related via causal interactions (e.g., plants and insects that ducks eat, or things that eat ducks) after generating causal explanations (e.g., they got it from their food). In a separate set of studies, Vasilyeva and Coley (in preparation) ruled out an alternative account based exclusively on the direct effects of generalized properties on generalizations.

Beyond highlighting some causal relationships over others, different kinds of explanations could change the way participants represent and reason about causal structure. Indeed, findings from Lombrozo (2010) suggest that this is the case. In a series of studies, Lombrozo presented participants with causal structures drawn from the philosophical literature and intended to disambiguate two accounts of causation: those based on some kind of dependence relationship (see Le Pelley, Griffiths, and Beesley, Chapter 2 in this volume) and those based on some kind of transference (see Wolff and Thorstad, Chapter 9 in this volume). According to one version of the former view, C is a cause of E if it is the case that had C not occurred, E would not have occurred. In other words, E depends upon C in the appropriate way, in this case counterfactually. According to one version of transference views, C is a cause of E if there was a physical connection between C and E—some continuous mechanism or conserved physical quantity, such as momentum.
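To make the two accounts concrete, the counterfactual version of the dependence view and the transference view described above can be summarized as follows (our paraphrase of standard formulations, offered only as an illustrative gloss, not as the specific stimuli used by Lombrozo):

\[
\text{Dependence: } C \text{ causes } E \;\;\text{iff}\;\; C \text{ and } E \text{ occur, and } \neg C \;\Box\!\!\rightarrow\; \neg E
\]
\[
\text{Transference: } C \text{ causes } E \;\;\text{iff a continuous physical process transmits a conserved quantity (e.g., momentum) from } C \text{ to } E
\]

Here \(\Box\!\!\rightarrow\) denotes the counterfactual conditional ("had C not occurred, E would not have occurred").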

While dependence and transference often go hand in hand, they can come apart in cases of “double prevention” and “overdetermination.” Lombrozo presented participants with such cases and found that judgments were more closely aligned with dependence views than transference views when the causal structures were directed toward a function or goal, and therefore supported a teleological explanation. Lombrozo (2010) explains this result, in part, by appeal to the idea of equifinality : when a process is goal-directed, the end may be achieved despite variations in the means. To borrow William James’s famous example, Romeo will find his way to Juliet whatever obstacle is placed in his path ( James, 1890 ). He might scale a fence or wade through a river, but the end—reaching Juliet—will remain the same. When participants reason about a structure in teleological or goal-directed terms, they may similarly represent it as means- or mechanism-invariant, and therefore focus on dependence relationships irrespective of the specific transference that happened to obtain.

In sum, pluralism has long been recognized as a feature of explanation, with Aristotle’s taxonomy providing a useful starting point for charting variation in explanations (although it is by no means the only taxonomy of explanation; see, for example, Cimpian & Salomon, 2014 , on inherent versus extrinsic explanations). We have reviewed evidence that teleological explanations are causal explanations, but that they are nonetheless treated differently from mechanistic explanations, which do not appeal to functions or goals. The evidence concerning formal explanations is less conclusive, but points to a viable alternative to a causal interpretation, with formal explanation instead depending on constitutive part–whole relations.

Recognizing explanatory pluralism can provide a useful road map for thinking about pluralism when it comes to causation and causal relations. In fact, as we have seen, different kinds of explanations do lead to systematic differences in classification and inference, with evidence that causal relationships themselves may be represented differently under different “explanatory modes.” In the following section, we take a closer look at mechanistic explanations and their relationship to causation and mechanisms.

Explanation and Causal Mechanisms

The “mechanistic explanations” considered in the previous section concerned the identification of one or more causes that preceded some effect. Often, however, causal explanations do not simply identify causes, but instead aim to articulate how the cause brought about the effect. That is, they involve a mechanism . But what, precisely, is a mechanism? Are all mechanisms causal? And do mechanisms have a privileged relationship to explanation? In this section, we begin to address these questions about the relationship between mechanisms and explanations. For a more general discussion of mechanisms, we direct readers to the chapter on mechanisms by Johnson and Ahn (Chapter 8 in this volume).

Within psychology, there is growing interest in the role of mechanisms in causal reasoning. For example, Ahn, Kalish, Medin, and Gelman (1995) found that people seek “mechanistic” information in causal attribution. Park and Sloman (2013) found that people’s violations of the Markov assumption depended on their “mechanistic” beliefs about the underlying causal structure. Buehner and McGregor (2006) showed that beliefs about mechanism type moderate effects of temporal contiguity in causal judgments (see also Ahn & Bailenson, 1996 ; Buehner & May, 2004 ; Fugelsang & Thompson, 2000 ; Koslowski & Okagaki, 1986 ; Koslowski, Okagaki, Lorenz, & Umbach, 1989 ; for reviews, see Ahn & Kalish, 2000 ; Johnson & Ahn, Chapter 8 in this volume; Koslowski, 1996 , 2012 ; Koslowski & Masnik, 2010 ; Sloman & Lagnado, 2014 ; Waldmann & Hagmayer, 2013 ). Despite these frequent appeals to mechanisms and mechanistic information, however, there is no explicitly articulated and widely endorsed conception of “mechanism.”
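As a point of reference for the Markov result mentioned above, the causal Markov condition can be stated as follows (this is a standard formulation, not one specific to the studies just cited):

\[
X_i \;\perp\!\!\!\perp\; \mathrm{NonDescendants}(X_i) \;\big|\; \mathrm{Parents}(X_i)
\]

That is, conditional on its direct causes, each variable is independent of every variable that is not one of its effects. In a common-cause structure \(A \leftarrow C \rightarrow B\), for example, the condition implies that A and B are independent given C; Park and Sloman's finding was that whether people honor this screening-off relation depends on their beliefs about the mechanisms linking the variables.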

Most often, a mechanism is taken to spell out the intermediate steps between some cause and some effect. For example, Park and Sloman (2014) define a mechanism as “the set of causes, enablers, disablers, and preventers that are directly involved in producing an effect, along with information about how the effect comes about, including how it unfolds over time” (p. 807). Research that adopts a perspective along these lines often goes further in explicitly identifying such mechanisms as explanations (and these terms are often used interchangeably, as in Koslowski & Masnik, 2010 ). Other work operationalizes mechanisms using measures of explanation, implicitly suggesting a correspondence. For example, to validate a manipulation of mechanism, Park and Sloman asked participants whether the same explanation applies to both effects in a common-cause structure (see also Park & Sloman, 2013 ). Similarly, in a study examining mental representations of mechanisms, Johnson and Ahn (2015) considered (but did not ultimately endorse) an “explanatory” sense of mechanism, which they operationalized by asking participants to rate the extent to which some event B explains why event A led to event C.

Shifting from psychology to philosophy, we find a class of accounts of explanation that likewise associate explanations with a specification of mechanisms (e.g., Bechtel & Abrahamsen, 2005 ; Glennan, 1996 , 2002 ; Machamer, Darden, & Craver, 2000 ; Railton, 1978 ; Salmon, 1984 ). Consistent with the empirical work reviewed earlier, some of these accounts (e.g., Railton, 1978 ; Salmon, 1984 ) consider mechanisms to be “sequences of interconnected events” ( Glennan, 2002 , p. S345). Canonical examples include causal chains or networks of events leading to a specific outcome, such as a person who kicks a ball, which bounces off a pole, which breaks a window. On these views, explanation, causation, and mechanisms are not only intimately related, but potentially interdefined.

A second view of mechanisms within philosophy, however, departs more dramatically from work in psychology, and also suggests a more circumscribed role for causation. These views analyze mechanisms as complex systems that involve a (typically hierarchical) structure and arrangement of parts and processes, such as that exhibited by a watch, a cell, or a socioeconomic system (e.g., Bechtel & Abrahamsen, 2005 ; Glennan, 1996 , 2002 ; Machamer, Darden, & Craver, 2000 ). Within this framework, Craver and Bechtel (2007) offer an insightful analysis of causal and non-causal relationships within a multilevel mechanistic system. Specifically, they suggest that interlevel (i.e., “vertical”) relationships within a mechanism are not causal, but constitutive . For instance, a change in rhodopsin in retinal cells can partially explain how signal transduction occurs, but we wouldn’t say that this change causes signal transduction; it arguably is signal transduction (or one aspect of it). Craver and Bechtel point out that constitutive relations conflict with many common assumptions about event causation: that causes and effects must be distinct events, that causes precede their effects, that the causal relation is asymmetrical, and so on. Unlike causation, explanation can accommodate both causal (intralevel) relationships and constitutive (interlevel) relationships, of the kind documented by Prasada and Dillingham’s (2009) work on formal explanation.

Although Craver and Bechtel convincingly argue that the causal reading of interlevel relationships is erroneous (see also Glennan, 2010 , for related claims), as a descriptive matter, it could be that laypeople nonetheless interpret them in causal terms. An example from the Betty Crocker Cookbook , discussed by Patricia Churchland (1994) , illustrates the temptation. In the book, Crocker is correct to explain that microwave ovens work by accelerating the molecules comprising the food, but she wrongly states that the excited molecules rub against one another and that their friction generates heat. Crocker assumes that the increase in mean kinetic energy of the molecules causes heat, when in fact heat is constituted by the mean kinetic energy of the molecules ( Craver & Bechtel, 2007 ). A study by Chi, Roscoe, Slotta, Roy, and Chase (2012) showed that eighth and ninth graders, like Crocker, tended to misconstrue non-sequential, emergent processes as direct sequential causal relationships. It’s possible that adults might make similar errors as well, assimilating non-causal explanations to a causal mold.

There are thus many open questions about how best to define mechanisms for the purposes of psychological theory, and about the extent to which mechanisms are represented in terms of strictly causal relationships. What we do know, however, is that explanations and mechanisms seem to share a privileged relationship. More precisely, there is evidence that the association between mechanisms and explanation claims is closer than that between mechanisms and corresponding causal claims ( Vasilyeva & Lombrozo, 2015 ).

The studies by Vasilyeva and Lombrozo (2015) used “minimal pairs”: causal and explanatory claims that were matched as closely as possible. For example, participants read about a person, PK, who spent some time in the portrait section of a museum and made an optional donation to the museum. They were then asked to evaluate how good they found an explanation for the donation (“Why did PK make an optional donation to the museum? Because PK spent some time in the portrait section”), or how strongly they endorsed a causal relationship (“Do you think there exists a causal relationship between PK spending some time in a portrait section and PK making an optional donation to the museum?”).

Vasilyeva and Lombrozo varied two factors across items and participants: the strength of covariation evidence between the candidate cause and effect, and knowledge of a mediating mechanism. In the museum example, some participants learned the speculative hypothesis that “being surrounded by many portraits (as opposed to other kinds of paintings) creates a sense that one is surrounded by watchful others. This reminds the person of their social obligations, which in turn encourages them to donate money to the public museum.” Both explanation and causal judgments were affected by these manipulations of covariation and mechanism information. However, they were not affected equally: specifying a mechanism had a stronger effect on explanation ratings than on causal ratings, while the strength of covariation evidence had a stronger effect on causal ratings than on explanation ratings.
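Although Vasilyeva and Lombrozo describe their manipulation simply as varying the strength of covariation evidence, one standard way to quantify such strength is the probabilistic contrast ΔP (Cheng & Novick, 1990), included here only as a reminder of what covariation information tracks:

\[
\Delta P \;=\; P(e \mid c) \;-\; P(e \mid \neg c)
\]

the difference between the probability of the effect (e.g., making an optional donation) when the candidate cause is present (e.g., visiting the portrait section) and when it is absent. Larger positive values correspond to stronger covariation evidence for a generative causal relationship.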

The findings from Vasilyeva and Lombrozo (2015) support a special relationship between explanations and mechanisms. They also challenge views that treat explanations as equivalent to identifying causal relationships, since matched explanation and causal claims were differentially sensitive to mechanisms and covariation. The findings thus raise the possibility that explanatory and causal judgments are tuned to support different cognitive functions. For example, explanation could be especially geared toward reliable and broad generalizations ( Lombrozo & Carey, 2006 ), which can benefit from mechanistic information: when we understand the mechanism by which some cause generates some effect, we can more readily infer whether the same relationship will obtain across variations in circumstances. By learning the mechanism that mediates the relationship between visiting a portrait gallery and making an optional museum donation, for example, we are in a better position to predict whether visiting a figurative versus an abstract sculpture garden will have the same effect. This benefit can potentially be realized with quite skeletal mechanistic ( Rozenblit & Keil, 2002 ) or functional understanding ( Alter, Oppenheimer, & Zemla, 2010 ); people need not understand a mechanism in full detail to gain some inferential advantage. Causal claims, by contrast, could more closely track the evidence concerning a particular event or relationship, rather than the potential for broad generalization.

In sum, the picture that emerges is one of partial overlap between causality, explanation, and mechanisms. Work in philosophy offers a variety of proposals emphasizing different aspects of mechanisms: structure, functions, temporally unfolding processes connecting starting conditions to the end state, and so on. Explanatory and causal judgments could track different aspects of mechanisms, resulting in the patterns of association and divergence observed. We suspect that adopting more explicit and sophisticated notions of mechanism will help research in this area move forward. On a methodological note, we think the strategy adopted in Vasilyeva and Lombrozo (2015) —of contrasting the characteristics of causal explanation claims with “matched” causal claims—could be useful in driving a wedge between different kinds of judgments, thus shedding light on their unique characteristics and potentially unique roles in human cognition. This strategy can also generalize to other kinds of judgments. For example, Dehghani, Iliev, and Kaufmann (2012) and Rips and Edwards (2013) both report systematic patterns of divergence between explanations and counterfactual claims, another judgment with a potentially foundational relationship to both explanation and causation.

Conclusions

Throughout the chapter, we have presented good evidence that explanatory considerations affect causal reasoning, with implications for causal inference, causal learning, and attribution. We have also considered different kinds of explanations, including their differential effects on causal generalizations and causal representation, and the role of mechanisms in causal explanation. However, many questions remain open. We highlight four especially pressing questions here.

First, we have observed many instances in which explanation leads to departures from “normative” reasoning, at least on the assumption that one ought to infer causes and causal relationships by favoring causal hypotheses with the highest posterior probabilities. Are these departures truly errors? Or have we mischaracterized the relevant competence? In particular, could it be that explanatory judgments are well-tuned to some cognitive end, but that end is not the approximation of posterior probabilities?
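The normative benchmark invoked in this question is the Bayesian posterior over candidate causal hypotheses, which for completeness can be written as:

\[
P(h \mid d) \;=\; \frac{P(d \mid h)\,P(h)}{\sum_{h'} P(d \mid h')\,P(h')}
\]

where h ranges over the causal hypotheses under consideration and d is the observed evidence. The open question is whether explanatory virtues such as simplicity and scope are best understood as (sometimes imperfect) approximations to this quantity, or as serving a different cognitive end altogether, such as supporting broad and reliable generalization.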

Second, we have focused on a characterization of explanations and the effects of engaging in explanation, with little attention to underlying cognitive mechanisms. How do people actually go about generating and evaluating causal explanations? How do the mental representations that support explanation relate to those that represent causal structure? And how do explanatory capacities arise over the course of development?

Third, what is the relationship between causal and non-causal explanations? Are they both explanatory by virtue of some shared explanatory relationship, or are causal explanations explanatory by virtue of being causal, with non-causal explanations explanatory for some other reason (for instance, because they embody a part–whole relationship)? On each view, what are the implications for causation?

Finally, we have seen how debates in explanation (from both philosophy and psychology) can inform the study of causation, with examples including inference to the best explanation, the idea of a “contrast class,” and pluralism about explanatory kinds. Can the literature on levels of explanation (e.g., Potochnik, 2010 ) perhaps inspire some new debates about levels of causation (as in, e.g., Woodward, 2010 )? Recent work on hierarchical Bayesian models and hierarchical causal structures is beginning to move in this direction, with the promise of a richer and more powerful way to understand humans’ remarkable ability to reason about and explain the causal structure of the world.

Acknowledgments

The preparation of this chapter was partially supported by the Varieties of Understanding Project funded by the Templeton Foundation, as well as an NSF CAREER award to the first author (DRL-1056712). We are also grateful to David Danks, Samuel Johnson, and Michael Waldmann for helpful comments on a previous draft of this chapter.

Ahn, W. ( 1998 ). Why are different features central for natural kinds and artifacts? The role of causal status in determining feature centrality.   Cognition , 69 (2), 135–178.

Ahn, W. K. , & Bailenson, J. ( 1996 ). Causal attribution as a search for underlying mechanisms: An explanation of the conjunction fallacy and the discounting principle.   Cognitive Psychology , 31 (1), 82–123.

Ahn, W. K. , & Kalish, C. ( 2000 ). The role of mechanism beliefs in causal reasoning. In F. C. Keil (Ed.), Explanation and cognition . Cambridge, MA: MIT Press.

Ahn, W. K. , Kalish, C. W. , Medin, D. L. , & Gelman, S. A. ( 1995 ). The role of covariation versus mechanism information in causal attribution.   Cognition , 54 , 299–352.

Alter, A. L. , Oppenheimer, D. M. , & Zemla, J. C. ( 2010 ). Missing the trees for the forest: A construal level account of the illusion of explanatory depth.   Journal of Personality and Social Psychology , 99 , 436–451.

Amsterlaw, J. , & Wellman, H. M. ( 2006 ). Theories of mind in transition: A microgenetic study of the development of false belief understanding.   Journal of Cognition and Development , 7 (2), 139–172.

Baker, A. ( 2013 ). Simplicity. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Fall 2013 ed.). http://plato.stanford.edu/archives/fall2013/entries/simplicity/ .

Bartsch, K. , & Wellman, H. M. ( 1995 ). Children talk about the mind . Oxford: Oxford University Press.

Bechtel, W., & Abrahamsen, A. (2005). Explanation: A mechanist alternative. Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences, 36(2), 421–441.

Bonawitz, E. B. , & Lombrozo, T. ( 2012 ). Occam’s rattle: Children’s use of simplicity and probability to constrain inference.   Developmental Psychology , 48 (4), 1156–1164.

Buehner, M. J. ( 2005 ). Contiguity and covariation in human causal inference.   Learning & Behavior: A Psychonomic Society Publication , 33 (2), 230–238.

Buehner, M. J. , & May, J. ( 2004 ). Abolishing the effect of reinforcement delay on human causal learning.   The Quarterly Journal of Experimental Psychology , 57 B , 179–191.

Buehner, M. J. , & McGregor, S. ( 2006 ). Temporal delays can facilitate causal attribution: Towards a general timeframe bias in causal induction.   Thinking & Reasoning , 12 , 353–378.

Chaigneau, S. E. , Barsalou, L. W. , & Sloman, S. A. ( 2004 ). Assessing the causal structure of function.   Journal of Experimental Psychology: General , 133 (4), 601–25.

Cheng, P. W. ( 1997 ). From covariation to causation: A causal power theory.   Psychological Review , 104 , 367–405.

Cheng, P. W. , & Novick, L. R. ( 1990 ). A probabilistic contrast model of causal induction.   Journal of Personality and Social Psychology , 58 (4), 545.

Cheng, P. W. , & Novick, L. R. ( 1991 ). Causes versus enabling conditions.   Cognition , 40 (1–2), 83–120.

Cheng, P. W. , & Novick, L. R. ( 1992 ). Covariation in natural causal induction.   Psychological Review , 99 (2), 365–382.

Chi, M. T. H. , De Leeuw, N. , Chiu, M.-H. , & Lavancher, C. ( 1994 ). Eliciting self-explanations improves understanding.   Cognitive Science , 18 (3), 439–477.

Chi, M. T. H. , Roscoe, R. D. , Slotta, J. D. , Roy, M. , & Chase, C. C. ( 2012 ). Misconceived causal explanations for emergent processes.   Cognitive Science , 36 (1), 1–61.

Chin-Parker, S. , & Bradner, A. ( 2010 ). Background shifts affect explanatory style: How a pragmatic theory of explanation accounts for background effects in the generation of explanations.   Cognitive Processing , 11 (3), 227–249.

Churchland, P. S. ( 1994 ). Can neurobiology teach us anything about consciousness?   Proceedings and Addresses of the American Philosophical Association , 67 (4), 23–40.

Cimpian, A. , & Salomon, E. ( 2014 ). The inherence heuristic: An intuitive means of making sense of the world, and a potential precursor to psychological essentialism.   Behavioral and Brain Sciences , 37 (5), 461–480.

Craver, C. F. , & Bechtel, W. ( 2007 ). Top-down causation without top-down causes.   Biology and Philosophy , 22 , 547–563.

Dehghani, M. , Iliev, R. , & Kaufmann, S. ( 2012 ). Causal explanation and fact mutability in counterfactual reasoning.   Mind & Language , 27 (1), 55–85.

Douven, I. ( 2011 ). Abduction. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy . (Spring 2011 ed.). http://plato.stanford.edu/archives/spr2011/entries/abduction/ .

Douven, I. ( 2013 ). Inference to the best explanation, Dutch books, and inaccuracy minimisation.   Philosophical Quarterly , 63 (252), 428–444.

Douven, I. , & Schupbach, J. N. ( 2015 a). The role of explanatory considerations in updating.   Cognition , 142 , 299–311.

Douven, I. , & Schupbach, J. N. ( 2015 b). Probabilistic alternatives to Bayesianism: The case of explanationism.   Frontiers in Psychology , 6 , 1–9.

Einhorn, H. J. , & Hogarth, R. M. ( 1986 ). Judging probable cause.   Psychological Bulletin , 99 (1), 3–19.

Falcon, A. ( 2015 ). Aristotle on causality. In E. N. Zalta (Ed.), The Stanford encyclopedia of philosophy (Spring 2015 Edition). http://plato.stanford.edu/archives/spr2015/entries/aristotle-causality/ .

Fiske, S. T. , & Taylor, S. E. ( 2013 ). Social cognition: From brains to culture . Thousand Oaks, CA: Sage.

Försterling, F. ( 1992 ). The Kelley model as an analysis of variance analogy: How far can it be taken?   Journal of Experimental Social Psychology , 28 (5), 475–490.

Fugelsang, J. A. , & Thompson, V. A. ( 2000 ). Strategy selection in causal reasoning: When beliefs and covariation collide.   Canadian Journal of Experimental Psychology , 54 , 15–32.

Gelman, S. A. ( 2003 ). The essential child: Origins of essentialism in everyday thought . Oxford: Oxford University Press.

Gelman, S. A. , & Hirschfeld, L. A. ( 1999 ). How biological is essentialism.   Folkbiology , 9 , 403–446.

Glennan, S. ( 1996 ). Mechanisms and the nature of causation.   Erkenntnis , 44 (1), 49–71.

Glennan, S. ( 2002 ). Rethinking mechanistic explanation.   Philosophy of Science , 69 (3), S342–S353.

Glennan, S. ( 2010 ). Mechanisms, causes, and the layered model of the world.   Philosophy and Phenomenological Research , 81 (2), 362–381.

Glymour, C. , & Cheng, P. W. ( 1998 ). Causal mechanism and probablity: A normative approach. In M. Oaksford & N. Chater (Eds.), Rational models of cognition (pp. 295–313). Oxford: Oxford University Press.

Good, I. J. ( 1960 ). Weight of evidence, corroboration, explanatory power, information and the utility of experiments.   Journal of the Royal Statistical Society: Series B (Methodological) , 22 (2), 319–331.

Goodman, N. D. , Baker, C. L. , Bonawitz, E. B. , Mansinghka, V. K. , Gopnik, A. , Wellman, H. , et al. ( 2006 ). Intuitive theories of mind: A rational approach to false belief. In R. Sun & N. Miyake (Eds.), Proceedings of the 28th annual conference of the Cognitive Science Society (pp. 1382–1387). Mahwah, NJ: Lawrence Erlbaum Associates.

Gopnik, A. ( 2000 ). Explanation as orgasm and the drive for causal knowledge: The function, evolution, and phenomenology of the theory formation system. In F. Keil & R. A. Wilson (Eds.), Explanation and cognition (pp. 299–323). Cambridge, MA: MIT Press.

Gopnik, A. , & Sobel, D. M. ( 2000 ). Detecting blickets: How young children use information about novel causal powers in categorization and induction.   Child Development , 71 (5), 1205–1222.

Griffiths, T. L. , & Tenenbaum, J. B. ( 2005 ). Structure and strength in causal induction.   Cognitive Psychology , 51 (4), 334–384.

Harman, G. H. ( 1965 ). The inference to the best explanation.   Philosophical Review , 74 (1), 88–95.

Hart, H. L. A. , & Honoré, T. ( 1985 ). Causation in the Law . Oxford: Oxford University Press.

Henderson, L. ( 2014 ). Bayesianism and inference to the best explanation.   The British Journal for the Philosophy of Science , 65 , 687–715.

Hewstone, M. , & Jaspars, J. ( 1987 ). Covariation and causal attribution: A logical model of the intuitive analysis of variance.   Journal of Personality and Social Psychology , 53 (4), 663–672.

Hilton, D. J. , & Erb, H.-P. ( 1996 ). Mental models and causal explanation: Judgments of probable cause and explanatory relevance.   Thinking and Reasoning , 2 (4), 273–308.

Hilton, D. J. , McClure, J. , & Sutton, R. M. ( 2009 ). Selecting explanations from causal chains: Do statistical principles explain preferences for voluntary causes?   European Journal of Social Psychology , 39 , 1–18.

Holyoak, K. J. , & Cheng, P. W. ( 2011 ). Causal learning and inference as a rational process: The new synthesis.   Annual Review of Psychology , 62 , 135–163.

James, W. ( 1890 ). The principles of psychology . New York: H. Holt.

Johnson, S. G. B., & Ahn, W. (2015). Causal networks or causal islands? The representation of mechanisms and the transitivity of causal judgment. Cognitive Science, 1–36.

Johnson, S. G. B. , Johnston, A. M. , Toig, A. E. , & Keil, F. C. ( 2014 ). Explanatory scope informs causal strength inferences. In P. Bello , M. Guarini , M. McShane , & B. Scassellati (Eds.), Proceedings of the 36th annual conference of the Cognitive Science Society (pp. 2453–2458). Austin, TX: Cognitive Science Society.

Johnston, A. M. , Johnson, S. G. B. , Koven M. L. , & Keil, F. C. ( 2015 ). Probabilistic versus heuristic accounts of explanation in children: Evidence from a latent scope bias. In D. C. Noelle , R. Dale , A. S. Warlaumont , J. Yoshimi , T. Matlock , C. D. Jennings , & P. P. Maglio (Eds.), Proceedings of the 37th annual conference of the Cognitive Science Society (pp. 1021–1026). Austin, TX: Cognitive Science Society.

Jones, E. E. , & Nisbett, R. E. ( 1971 ). The actor and the observer: Divergent perceptions of the causes of behavior. In E. E. Jones et al. (Eds.), Attribution: Perceiving the causes of behavior . Morristown, N.J.: General Learning Press.

Kelemen, D. , & DiYanni, C. ( 2005 ). Intuitions about origins: Purpose and intelligent design in children’s reasoning about nature.   Journal of Cognition and Development , 6 , 3–31.

Kelemen, D. , Rottman, J. , & Seston, R. ( 2013 ). Professional physical scientists display tenacious teleological tendencies: Purpose-based reasoning as a cognitive default.   Journal of Experimental Psychology: General , 142 (4), 1074–1083.

Kelley, H. H. ( 1967 ). Attribution theory in social psychology.   Nebraska Symposium on Motivation , 15 , 192–238.

Kelley, H. H. ( 1973 ). The process of causal attributions.   American Psychologist , 28 , 107–128.

Kelley, H. H. , & Michela, J. L. ( 1980 ). Attribution theory and research.   Annual Review of Psychology , 31 , 457–501.

Kemp, C., Goodman, N., & Tenenbaum, J. (2010). Learning to learn causal models. Cognitive Science, 34(7), 1185–1243. http://www.psy.cmu.edu/%7Eckemp/papers/kempgt10_learningtolearncausalmodels.pdf

Khemlani, S. S. , Sussman, A. B. , & Oppenheimer, D. M. ( 2011 ). Harry Potter and the sorcerer’s scope: Latent scope biases in explanatory reasoning.   Memory & Cognition , 39 (3), 527–535.

Koslowski, B. ( 1996 ). Theory and evidence: The development of scientific reasoning . Cambridge, MA: MIT Press.

Koslowski, B. ( 2012 ). Scientific reasoning: Explanation, confirmation bias, and scientific practice. In G. Feist & M. Gorman (Eds.), Handbook of the psychology of science . New York: Springer.

Koslowski, B. , & Masnick, A. ( 2010 ). Causal reasoning and explanation. In U. C. Goswami (Ed.), The Wiley-Blackwell handbook of childhood cognitive development (2nd ed., pp. 377–398). Malden, MA: Wiley-Blackwell.

Koslowski, B. , & Okagaki, L. ( 1986 ). Non-Humean indices of causation in problem-solving situations: Causal mechanism, analogous effects, and the status of rival alternative accounts.   Child Development , 57 (5), 1100–1108.

Koslowski, B. , Okagaki, L. , Lorenz, C. , & Umbach, D. ( 1989 ). When covariation is not enough: The role of causal mechanism, sampling method, and sample size in causal reasoning.   Child Development , 60 (6), 1316–1327.

Kuhn, D. , & Katz, J. ( 2009 ). Are self-explanations always beneficial?   Journal of Experimental Child Psychology , 103 (3), 386–394.

Lagnado, D. A. , & Channon, S. ( 2008 ). Judgments of cause and blame: the effects of intentionality and foreseeability.   Cognition , 108 (3), 754–70.

Legare, C. H. ( 2012 ). Exploring explanation: Explaining inconsistent evidence informs exploratory, hypothesis-testing behavior in young children.   Child Development , 83 (1), 173–85.

Legare, C. H. , & Lombrozo, T. ( 2014 ). Selective effects of explanation on learning during early childhood.   Journal of Experimental Child Psychology , 126 , 198–212.

Legare, C. H. , Wellman, H. M. , & Gelman, S. A. ( 2009 ). Evidence for an explanation advantage in naïve biological reasoning.   Cognitive Psychology , 58 (2), 177–94.

Lipton, P. ( 2004 ). Inference to the best explanation . London: Routledge.

Lombrozo, T. ( 2007 ). Simplicity and probability in causal explanation.   Cognitive Psychology , 55 (3), 232–257.

Lombrozo, T. ( 2009 ). Explanation and categorization: How “why?” informs “what?.”   Cognition , 110 (2), 248–53.

Lombrozo, T. ( 2010 ). Causal-explanatory pluralism: How intentions, functions, and mechanisms influence causal ascriptions.   Cognitive Psychology , 61 (4), 303–32.

Lombrozo, T. ( 2012 ). Explanation and abductive inference. In Oxford handbook of thinking and reasoning (pp. 260–276). Oxford: Oxford University Press.

Lombrozo, T. ( 2016 ). Explanatory preferences shape learning and inference.   Trends in Cognitive Sciences , 20 , 748–759.

Lombrozo, T. , & Carey, S. ( 2006 ). Functional explanation and the function of explanation.   Cognition , 99 (2), 167–204.

Lombrozo, T. , & Gwynne, N. Z. (2014). Explanation and inference: Mechanistic and functional explanations guide property generalization.   Frontiers in Human Neuroscience , 8 (September), 700.

Lombrozo, T. , & Rehder, B. ( 2012 ). Functions in biological kind classification.   Cognitive Psychology , 65 (4), 457–485.

Machamer, P. , Darden, L. , & Craver, C. F. ( 2000 ). Thinking about mechanisms.   Philosophy of Science , 67 (1), 1–25.

Malle, B. F. ( 1999 ). How people explain behavior: A new theoretical framework.   Personality and Social Psychology Review , 3 (1), 23–48.

Malle, B. F. ( 2004 ). How the mind explains behavior: Folk explanations, meaning, and social interaction . Cambridge, MA: MIT Press.

Malle, B. F. , Knobe, J. , O’Laughlin, M. J. , Pearce, G. E. , & Nelson, S. E. ( 2000 ). Conceptual structure and social functions of behavior explanations: Beyond person-situation attributions.   Journal of Personality and Social Psychology , 79 (3), 309–326.

Mansinghka, V. K. , Kemp, C. , Tenenbaum, J. B. , & Griffiths, T. L. (2006). Structured priors for structure learning. Proceedings of the Twenty-Second Conference on Uncertainty in Artificial Intelligence (UAI 2006).

McArthur, L. A. ( 1972 ). The how and what of why: Some determinants and consequences of causal attribution.   Journal of Personality and Social Psychology , 22 (2), 171–193.

McClure, J. , Hilton, D. J. , & Sutton, R. M. ( 2007 ). Judgments of voluntary and physical causes in causal chains: Probabilistic and social functionalist criteria for attributions.   European Journal of Social Psychology , 37 , 879–901.

McGill, A. L. ( 1989 ). Context effects in judgments of causation.   Journal of Personality and Social Psychology , 57 (2), 189–200.

Medin, D. L. , & Ortony, A. ( 1989 ). Psychological essentialism. In S. Vosniadou & A. Ortony (Eds.), Similarity and Analogical Reasoning (pp. 179–195). Cambridge: Cambridge University Press.

ojalehto, b. , Waxman, S. R. , & Medin, D. L. ( 2013 ). Teleological reasoning about nature: Intentional design or relational perspectives?   Trends in Cognitive Sciences , 17 (4), 166–171.

Pacer, M. , & Lombrozo, T. ( 2015 ). Ockham’s Razor cuts to the root: simplicity in causal explanation. Manuscript in revision.

Pacer, M. , Williams, J. J. , Chen, X. , Lombrozo, T. , & Griffiths, T. L. ( 2013 ). Evaluating computational models of explanation using human judgments. In A. Nicholson & P. Smyth (Eds.), Proceedings of the Twenty-Ninth Conference on Uncertainty in Artificial Intelligence (pp. 498–507). Corvallis, Oregon: AUAI Press.

Park, J. , & Sloman, S. A. ( 2014 ). Causal explanation in the face of contradiction.   Memory & Cognition , 42 (5), 806–820.

Park, J. , & Sloman, S. A. ( 2013 ). Mechanistic beliefs determine adherence to the Markov property in causal reasoning.   Cognitive Psychology , 67 (4), 186–216.

Peirce, C. S. ( 1955 ). Abduction and induction. In Philosophical writings of Peirce (Vol. 11). New York: Dover.

Pennington, N. , & Hastie, R. ( 1992 ). Explaining the evidence: Tests of the Story Model for juror decision making.   Journal of Personality and Social Psychology , 62 (2), 189–206.

Perales, J. C. , & Shanks, D. R. ( 2003 ). Normative and descriptive accounts of the influence of power and contingency on causal judgement.   The Quarterly Journal of Experimental Psychology. A: Human Experimental Psychology , 56 (6), 977–1007.

Pine, K. J. , & Siegler, R. S. (2003). The role of explanatory activity in increasing the generality of thinking. Paper presented at the biennial meeting of the Society for Research in Child Development, Tampa, FL.

Potochnik, A. ( 2010 ). Levels of explanation reconceived.   Philosophy of Science , 77 (1), 59–72.

Prasada, S. , & Dillingham, E. M. ( 2006 ). Principled and statistical connections in common sense conception.   Cognition , 99 (1), 73–112.

Prasada, S. , & Dillingham, E. M. ( 2009 ). Representation of principled connections: A window onto the formal aspect of common sense conception.   Cognitive Science , 33 (3), 401–48.

Preston, J. , & Epley, N. ( 2005 ). Explanations versus applications: The explanatory power of valuable beliefs.   Psychological Science , 16 (10), 826–832.

Putnam, H. ( 1975 ). Philosophy and our mental life. In H. Putnam , Mind, Language and Reality: Philosophical Papers (Vol. 2). New York: Cambridge University Press.

Railton, P. ( 1978 ). A deductive-nomological model of probabilistic explanation.   Philosophy of Science , 44 (2), 206–226.

Read, S. J. , & Marcus-Newhall, A. ( 1993 ). Explanatory coherence in social explanations: A parallel distributed processing account.   Journal of Personality and Social Psychology , 65 (3), 429.

Rips, L. J. , & Edwards, B. J. ( 2013 ). Inference and explanation in counterfactual reasoning.   Cognitive Science , 37 (6), 1107–35.

Rozenblit, L. , & Keil, F. ( 2002 ). The misunderstood limits of folk science: An illusion of explanatory depth.   Cognitive Science , 26 , 521–562.

Salmon, W. ( 1984 ). Scientific explanation and the causal structure of the world . Princeton, NJ: Princeton University Press.

Schupbach, J. N. ( 2011 ). Comparing probabilistic measures of explanatory power.   Philosophy of Science , 78 (5), 813–829.

Schupbach, J. N. , & Sprenger, J. ( 2011 ). The logic of explanatory power.   Philosophy of Science , 78 (1), 105–127.

Shanks, D. R. , & Dickinson, A. ( 1988 ). Associative accounts of causality judgment.   Psychology of Learning and Motivation: Advances in Research and Theory , 21 (C), 229–261.

Siegler, R. S. ( 1995 ). How does change occur: A microgenetic study of number conservation.   Cognitive Psychology , 28 , 225–273.

Sloman, S. A , & Lagnado, D. ( 2014 ). Causality in thought.   Annual Review of Psychology , 66 , 223–247.

Spellman, B. A. ( 1997 ). Crediting causality.   Journal of Experimental Psychology: General , 126 (4), 323–348.

Thagard, P. ( 1989 ). Explanatory coherence.   Behavioral and Brain Sciences , 12 , 435–502.

van Fraassen, B. C. ( 1980 ). The scientific image . Oxford University Press.

Vasilyeva, N. , & Coley, J.C. ( 2013 ). Evaluating two mechanisms of flexible induction: Selective memory retrieval and evidence explanation. In M. Knauff , M. Pauen , N. Sebanz , & I. Wachsmuth (Eds.), Proceedings of the 35th annual conference of the Cognitive Science Society (pp. 3645–3650). Austin, TX: Cognitive Science Society.

Vasilyeva, N. , & Lombrozo, T. ( 2015 ). Explanation and causal judgments are differentially sensitive to covariation and mechanism information. In Proceedings of the 37th annual conference of the Cognitive Science Society (pp. 2663–2668). Austin, TX: Cognitive Science Society.

Waldmann, M. R. , & Hagmayer, Y. ( 2001 ). Estimating causal strength: The role of structural knowledge and processing effort.   Cognition , 82 (1), 27–58.

Waldmann, M. R. , & Hagmayer, Y. ( 2013 ). Causal reasoning. In D. Reisberg (Ed.), Oxford handbook of cognitive psychology (pp. 733–752). New York: Oxford University Press.

Walker, C. M. , Lombrozo, T. , Legare, C. H. , & Gopnik, A. ( 2014 ). Explaining prompts children to privilege inductively rich properties.   Cognition , 133 (2), 343–57.

Walker, C.M. , Lombrozo, T. , Williams, J. J. , Rafferty, A. , & Gopnik, A. ( 2016 ). Explaining constrains causal learning in childhood.   Child Development .

Weiner, B. ( 1985 ). An attributional theory of achievement motivation and emotion.   Psychological Review , 92 (4), 548–573.

Wellman, H. M. ( 2011 ). Reinvigorating explanations for the study of early cognitive development.   Child Development Perspectives , 5 (1), 33–38.

Wellman, H. M. , & Lagattuta, K. H. ( 2004 ). Theory of mind for learning and teaching: The nature and role of explanation.   Cognitive Development , 19 , 479–497.

Wellman, H. M. , & Liu, D. ( 2007 ). Causal reasoning as informed by the early development of explanations. In A. Gopnik & L. Schulz (Eds.), Causal Learning: Psychology, Philosophy, and Computation (pp. 261–279). Oxford: Oxford University Press.

Wilkenfeld, D. A., & Lombrozo, T. (2015). Inference to the Best Explanation (IBE) vs. Explaining for the Best Inference (EBI). Science and Education, 24(9–10), 1059–1077.

Williams, J. J. , & Lombrozo, T. ( 2010 ). The role of explanation in discovery and generalization: Evidence from category learning.   Cognitive Science , 34 (5), 776–806.

Williams, J. J. , & Lombrozo, T. ( 2013 ). Explanation and prior knowledge interact to guide learning.   Cognitive Psychology , 66 (1), 55–84.

Williams, J. J. , Lombrozo, T. , & Rehder, B. ( 2013 ). The hazards of explanation: Overgeneralization in the face of exceptions.   Journal of Experimental Psychology. General , 142 (4), 1006–14.

Woodward, J. ( 2010 ). Causation in biology: Stability, specificity, and the choice of levels of explanation.   Biology & Philosophy , 25 (3), 287–318.

Wright, L. ( 1976 ). Teleological explanations: An etiological analysis of goals and functions . Berkeley: University of California Press.


Formulating causal questions and principled statistical answers

  • PMID: 32964526
  • PMCID: PMC7756489
  • DOI: 10.1002/sim.8741

Although review papers on causal inference methods are now available, there is a lack of introductory overviews on what they can render and on the guiding criteria for choosing one particular method. This tutorial gives an overview in situations where an exposure of interest is set at a chosen baseline ("point exposure") and the target outcome arises at a later time point. We first phrase relevant causal questions and make a case for being specific about the possible exposure levels involved and the populations for which the question is relevant. Using the potential outcomes framework, we describe principled definitions of causal effects and of estimation approaches classified according to whether they invoke the no unmeasured confounding assumption (including outcome regression and propensity score-based methods) or an instrumental variable with added assumptions. We mainly focus on continuous outcomes and causal average treatment effects. We discuss interpretation, challenges, and potential pitfalls and illustrate application using a "simulation learner," that mimics the effect of various breastfeeding interventions on a child's later development. This involves a typical simulation component with generated exposure, covariate, and outcome data inspired by a randomized intervention study. The simulation learner further generates various (linked) exposure types with a set of possible values per observation unit, from which observed as well as potential outcome data are generated. It thus provides true values of several causal effects. R code for data generation and analysis is available on www.ofcaus.org, where SAS and Stata code for analysis is also provided.

Keywords: causation; instrumental variable; inverse probability weighting; matching; potential outcomes; propensity score.

© 2020 The Authors. Statistics in Medicine published by John Wiley & Sons Ltd.

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Computer Simulation
  • Propensity Score
  • Research Design*

Grants and funding

  • MR/R025215/1/MRC_/Medical Research Council/United Kingdom

Asking Questions to Provide a Causal Explanation – Do People Search for the Information Required by Cognitive Psychological Theories?

  • First Online: 28 July 2020


  • York Hagmayer & Neele Engelmann

Part of the book series: Jerusalem Studies in Philosophy and History of Science (JSPS)


In this paper, we give a brief overview of current cognitive-psychological theories that provide an account of how people explain facts: causal model theories (the predominant type of dependence theory) and mechanistic theories. These theories differ in (i) what they assume people explain and (ii) how they assume people provide an explanation. In consequence, they require different types of knowledge in order to explain. We work out predictions from the theoretical accounts for the questions people may ask to fill in gaps in knowledge. Two empirical studies are presented looking at the questions people ask in order to get or give an explanation. The first observational study explored the causal questions people ask on the internet, including questions asking for an explanation. We also analyzed the facts that people want to have explained and found that people inquire about tokens and types of events as well as tokens and types of causal relations. The second experimental study directly investigated which information people ask for in order to provide an explanation. Several scenarios describing tokens and types of events were presented to participants. As a second factor, we manipulated whether the facts were familiar to participants or not. Questions were analyzed and coded with respect to the information inquired about. We found that both factors affected the types of questions participants asked. Surprisingly, when a token event had to be explained, participants asked only a few questions about actual causation or about information that would have allowed them to infer actual causation. Overall, the findings fully supported neither causal model theories nor mechanistic theories. Hence, they contrast with many other studies, in which participants were provided with relevant information upfront and simply asked for an explanation or judgment. We conclude that more empirical and theoretical work is needed to reconcile the findings from these two lines of research into causal explanations.


It is important to mention that there are also dispositional theories of causal cognition (e.g., force dynamics, Wolff 2007 ). Due to length considerations we do not discuss them here.

By contrast, there is quite a bit of research on information search in causal learning and hypothesis testing (see Crupi et al. 2018 , for an overview). There is also some research on information search in decision making and problem solving (e.g., Huber et al. 1997 ).

Note that these questions can provide important information for causal attribution. Information about the time course of events can rule out certain causes as actual causes like in the Billy and Suzy case, and information about causal power or strength can also help to establish actual causation.

Three participants who were assigned to the token condition failed to respond to the unfamiliar token events; therefore, the degrees of freedom were smaller for this comparison.

Note that this difference would not be statistically significant when controlling for the number of analyses conducted (controlling for the number of analyses avoids an inflation of the risk for an alpha error in statistical analyses). All other statistically significant results would still be significant.
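To make the footnote's point about controlling for the number of analyses concrete, here is a generic multiple-comparison adjustment in Python. It is only an illustration of limiting the family-wise error rate across several tests; the p-values are invented and this is not necessarily the procedure the authors used.

```python
# Illustrative multiple-comparison adjustment (invented p-values, not the authors' analysis).
p_values = [0.012, 0.034, 0.049, 0.21, 0.63]
alpha = 0.05
m = len(p_values)

# Bonferroni: compare each p-value with alpha / m.
bonferroni = [p < alpha / m for p in p_values]

# Holm step-down: less conservative, still controls the family-wise error rate.
order = sorted(range(m), key=lambda i: p_values[i])
holm = [False] * m
for rank, i in enumerate(order):
    if p_values[i] < alpha / (m - rank):
        holm[i] = True
    else:
        break  # once one test fails, all larger p-values are declared non-significant

print(bonferroni)
print(holm)
```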

Ahn, W. K., Kalish, C. W., Medin, D. L., & Gelman, S. A. (1995). The role of covariation versus mechanism information in causal attribution. Cognition, 54 , 299–352.


Barrett, J. C. (1994). Cellular and molecular mechanisms of asbestos carcinogenicity: Implications for biopersistence. Environmental Health Perspectives, 102 (Suppl 5), 19–23.

Beebee, H., Hitchcock, C., & Menzies, P. (2009). The Oxford handbook of causation . New York: Oxford University Press.

Bertz, L. (2018). Asking questions to provide a causal explanation – The role of familiarity. Unpublished Bachelor’s thesis, Georg-August-University Göttingen, Göttingen, Germany.

Bullock, M., Gelman, R., & Baillargeon, R. (1982). The development of causal reasoning. The Developmental Psychology of Time , 209–254.

Cheng, P. W. (1997). From covariation to causation: A causal power theory. Psychological Review, 104 (2), 367–405.

Cheng, P. W., & Novick, L. R. (1990). A probabilistic contrast model of causal induction. Journal of Personality and Social Psychology, 58 (4), 545.

Cheng, P. W., & Novick, L. R. (2005). Constraints and nonconstraints in causal learning: Reply to White (2005) and to Luhmann & Ahn (2005). Psychological Review, 112 (3), 694–706.

Crupi, V., Nelson, J. D., Meder, B., Cevolani, G., & Tentori, K. (2018). Generalized information theory meets human cognition: Introducing a unified framework to model uncertainty and information search. Cognitive Science, 42 , 1410–1456.

Danks, D. (2014). Unifying the mind: Cognitive representations as graphical models . Cambridge: MIT Press.

Danks, D. (2016). Causal search, causal modeling, and the folk. In J. Sytsma & J. W. Buckwalter (Eds.), A companion to experimental philosophy (pp. 463–471). Oxford: Wiley Blackwell.

Danks, D. (2017). Singular causation. In M. R. Waldmann (Ed.), Oxford handbook of causal reasoning (pp. 201–215). Oxford: Oxford University Press.

Didkowska, J., Wojciechowska, U., Mańczuk, M., & Łobaszewski, J. (2016). Lung cancer epidemiology: Contemporary and future challenges worldwide. Annals of Translational Medicine, 4 (8), 150.

Dowe, P. (2000). Physical causation . Cambridge: Cambridge University Press.

Falcon, A. (2019). Aristotle on causality. In The Stanford encyclopedia of philosophy (Spring 2019 Edition). Retrieved from https://plato.stanford.edu/archives/spr2019/entries/aristotle-causality/

Gopnik, A., Glymour, C., Sobel, D. M., Schulz, L. E., Kushnir, T., & Danks, D. (2004). A theory of causal learning in children: Causal maps and Bayes nets. Psychological Review, 111 (1), 3–32.

Griffiths, T. L., & Tenenbaum, J. B. (2005). Structure and strength in causal induction. Cognitive Psychology, 51 (4), 334–384.

Hagmayer, Y., & Fernbach, P. (2017). Causality in decision-making. In M. Waldmann (Ed.), The Oxford handbook of causal reasoning (pp. 495–512). New York: Oxford University Press.

Hall, N. (2004). Two concepts of causation. In J. Collins, E. Hall, & L. Paul (Eds.), Causation and counterfactuals (pp. 225–276). Cambridge, Ma: MIT Press.

Halpern, J. Y. (2015). A modification of the Halpern-Pearl definition of causality. In Proceedings of the 24th international joint conference on artificial intelligence (IJCAI) (pp. 3022–3033).

Halpern, J. Y., & Pearl, J. (2005a). Causes and explanations: A structural-model approach. Part I: Causes. The British Journal for the Philosophy of Science, 56 (4), 843–887.

Halpern, J. Y., & Pearl, J. (2005b). Causes and explanations: A structural-model approach. Part II: Explanations. The British Journal for the Philosophy of Science, 56 (4), 889–911.

Hartmann, D. P., Barrios, B. A., & Wood, D. D. (2004). Principles of behavioral observation. In S. N. Haynes & E. M. Hieby (Eds.), Comprehensive handbook of psychological assessment. Vol. 3: Behavioral assessment (pp. 108–127). New York: Wiley.

Hecht, S. S. (2012). Lung carcinogenesis by tobacco smoke. International Journal of Cancer, 131 (12), 2724–2732.

Huber, O., Wider, R., & Huber, O. W. (1997). Active information search and complete information presentation in naturalistic decision tasks. Acta Psychologica, 95 , 15–29.

Huber, O., Huber, O. W., & Bär, A. S. (2011). Information search and mental representation in risky decision making: The advantages first principle. Journal of Behavioral Decision Making, 24 , 223–248.

Keil, F. C. (2006). Explanation and understanding. Annual Review of Psychology, 57 , 227–254.

Keim Campbell, J., O’Rouke, M., & Silverstein, H. (2007). Causation and explanation . Cambridge, MA: MIT Press.

Kelley, H. H. (1973). The processes of causal attribution. American Psychologist, 28 (2), 107.

Koslowski, B. (1996). Theory and evidence: The development of scientific reasoning . Cambridge, MA: MIT Press.

Koslowski, B., Okagaki, L., Lorenz, C., & Umbach, D. (1989). When covariation is not enough: The role of causal mechanism, sampling method, and sample size in causal reasoning. Child Development, 60 (6), 1316–1327.

Lagnado, D. A., Waldmann, M. R., Hagmayer, Y., & Sloman, S. A. (2007). Beyond covariation. In L. Schulz & A. Gopnik (Eds.), Causal learning: Psychology, philosophy, and computation (pp. 154–172). Oxford/New York: Oxford University Press.

Lagnado, D. A., Gerstenberg, T., & Zultan, R. I. (2013). Causal responsibility and counterfactuals. Cognitive Science, 37 (6), 1036–1073.

Lewis, D. (1973). Counterfactuals . Malden: Blackwell.

Lombrozo, T., & Vasilyeva, N. (2017). Causal explanation. In M. Waldmann (Ed.), Oxford handbook of causal reasoning (pp. 415–432). New York: Oxford University Press.

Machamer, P., Darden, L., & Craver, C. F. (2000). Thinking about mechanisms. Philosophy of Science, 67 , 1–25.

Meder, B., Mayrhofer, R., & Waldmann, M. R. (2014). Structure induction in diagnostic causal reasoning. Psychological Review, 121 (3), 277.

Menzies, P. (2017). Counterfactual theories of causation. In The Stanford encyclopedia of philosophy (Winter 2017 Edition). Retrieved from https://plato.stanford.edu/archives/win2017/entries/causation-counterfactual/

Michotte, A. E. (1946). The perception of causality . New York: Basic Books.

Nozick, R. (1993). The nature of rationality . Princeton: Princeton University Press.

Pearl, J. (2000). Causality . Cambridge, MA: Cambridge University Press.

Proctor, R. N. (2012). The history of the discovery of the cigarette–lung cancer link: Evidentiary traditions, corporate denial, global toll. Tobacco Control, 21 (2), 87–91.

Rottman, B. M., & Hastie, R. (2014). Reasoning about causal relationships: Inferences on causal networks. Psychological Bulletin, 140 (1), 109–139.

Rozenblit, L., & Keil, F. (2002). The misunderstood limits of folk science: An illusion of explanatory depth. Cognitive Science, 26 (5), 521–562.

Ruggeri, A., & Lombrozo, T. (2015). Children adapt their questions to achieve efficient search. Cognition, 143 , 203–216.

Sloman, S. (2005). Causal models: How people think about the world and its alternatives . New York: Oxford University Press.

Sloman, S., & Fernbach, P. (2017). The knowledge illusion: The myth of individual knowledge and the power of collective wisdom . New York: Penguin Random House.

Sloman, S. A., & Hagmayer, Y. (2006). The causal psycho-logic of choice. Trends in Cognitive Sciences, 10 (9), 407–412.

Spirtes, P., Glymour, C. N., Scheines, R., Heckerman, D., Meek, C., Cooper, G., & Richardson, T. (2000). Causation, prediction, and search . Cambridge, MA: MIT Press.

Stephan, S., & Waldmann, M. R. (2018). Preemption in singular causation judgments: A computational model. Topics in Cognitive Science, 10 , 242–257.

Tenenbaum, J. B., Kemp, C., Griffiths, T. L., & Goodman, N. D. (2011). How to grow a mind: Statistics, structure, and abstraction. Science, 331 (6022), 1279–1285.

Waldmann, M. R. (1996). Knowledge-based causal induction. Psychology of Learning and Motivation, 34 , 47–88.

Waldmann, M. R. (2000). Competition among causes but not effects in predictive and diagnostic learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 26 , 53–76.

Waldmann, M. R. (2017). The Oxford handbook of causal reasoning . New York: Oxford University Press.

Waldmann, M. R., Cheng, P. W., Hagmayer, Y., & Blaisdell, A. P. (2008). Causal learning in rats and humans: A minimal rational model. In N. Chater & M. Oaksford (Eds.), The probabilistic mind. Prospects for Bayesian cognitive science (pp. 453–484). Oxford: Oxford University Press.

Walsh, C. R., & Sloman, S. A. (2011). The meaning of cause and prevent: The role of causal mechanism. Mind & Language, 26 (1), 21–52.

Weiner, B. (1985). An attributional theory of achievement motivation and emotion. Psychological Review, 92 (4), 548–573.

Wolff, P. (2007). Representing causation. Journal of Experimental Psychology: General, 136 , 82–111.


Author information

Authors and affiliations.

Department of Cognitive and Decision Sciences, Institute of Psychology, University of Göttingen, Göttingen, Germany

York Hagmayer & Neele Engelmann


Corresponding author

Correspondence to York Hagmayer.

Editor information

Editors and affiliations.

Language, Logic and Cognition Center, The Department of Hebrew Language, Hebrew University of Jerusalem, Jerusalem, Israel

Elitzur A. Bar-Asher Siegal

Language, Logic and Cognition Center, The Linguistic Department, Hebrew University of Jerusalem, Jerusalem, Israel


Copyright information

© 2020 Springer Nature Switzerland AG

About this chapter

Hagmayer, Y., Engelmann, N. (2020). Asking Questions to Provide a Causal Explanation – Do People Search for the Information Required by Cognitive Psychological Theories? In: Bar-Asher Siegal, E., Boneh, N. (eds) Perspectives on Causation. Jerusalem Studies in Philosophy and History of Science. Springer, Cham. https://doi.org/10.1007/978-3-030-34308-8_4

DOI: https://doi.org/10.1007/978-3-030-34308-8_4

Published: 28 July 2020

Publisher Name: Springer, Cham

Print ISBN: 978-3-030-34307-1

Online ISBN: 978-3-030-34308-8


The Sourcebook for Teaching Science


Types of Research Questions

Check out the science fair sites for sample research questions.

Descriptive: Designed primarily to describe what is going on or what exists

  • What are the characteristics of a burning candle?
  • Which stage of mitosis is longest?
  • Which is more common, right-eye or left-eye dominance?
  • If two sounds have the same pitch, do they have the same frequency?
  • What complementary colors do color-blind individuals see?
  • Is there any pattern to the occurrence of earthquakes?
  • How can one determine the center of gravity?
  • Are all printed materials composed of the same colors?

Causal (cause and effect): Designed to determine whether one or more variables causes or affects one or more outcome variables

  • What is the effect of exercise on heart rate?
  • What is the effect of hand fatigue on reaction time?
  • What are the most potent vectors for disease transmission?
  • How does exercise affect the rate of carbon dioxide production?
  • How is the diffusion of air freshener influenced by temperature?
  • How does the concentration of silver nitrate affect the formation of silver crystals?

Norman Herr, Ph.D.


What Does the Proposed Causal Inference Framework for Observational Studies Mean for JAMA and the JAMA Network Journals?

  • 1 Executive Managing Editor, JAMA and JAMA Network
  • 2 Department of Emergency Medicine, Harbor-UCLA Medical Center, Torrance, California
  • 3 Statistical Editor, JAMA
  • 4 Deputy Editor, JAMA
  • 5 Executive Editor, JAMA and JAMA Network
  • Related Special Communication: Causal Inference and Effects of Interventions From Observational Studies in Medical Journals, by Issa J. Dahabreh, MD, ScD, and Kirsten Bibbins-Domingo, PhD, MD, MAS (JAMA)

The Special Communication “Causal Inferences About the Effects of Interventions From Observational Studies in Medical Journals,” published in this issue of JAMA , 1 provides a rationale and framework for considering causal inference from observational studies published by medical journals. Our intent is to invite discussion of this framework, explore its application in the context of specific study designs, and actively examine how this framework could be implemented and used by authors, peer reviewers, and editors of medical journals, including JAMA and the journals of the JAMA Network. Our overarching goal is to ensure that findings from observational designs may be appropriately interpreted in thoughtful and circumspect manners and applied by readers, other researchers, and clinicians, with the ultimate goal of improving patient care and public and global health.

Two points are worth underscoring in describing our intention with this publication. First, the proposal for causal interpretation of some observational studies should not be interpreted as diminished enthusiasm at JAMA for well-conducted randomized clinical trials that remain the foundation of evidence-based medicine. More than half of the Original Investigations published in JAMA last year were randomized clinical trials, and our examination of the reporting of observational studies does not signal an intent to depart from this practice. 2  JAMA and all of the journals of the JAMA Network also publish observational studies, many intending to provide evidence that addresses important causal clinical or public health questions, and some using designs and analytic approaches that produce results that may have a causal interpretation when key assumptions are plausible. Our responsibility to readers is to report scientific findings with precision and clarity. Part of this responsibility is to keep pace with methodological advances and to provide guidance and flexibility to authors to enable this precision and clarity in communicating the intent of the research and the carefully structured interpretation of the findings.

Second, while this framework could be applied to all observational studies published in JAMA and the JAMA Network, we anticipate that causal interpretations will be possible only in a select subset of these studies. Many observational studies do not address causal questions, and for some with this intent, causal inference may not be relevant, sufficiently well supported, or even possible. As acknowledged in the Special Communication: “For some observational studies that start with causal goals, causal inference may prove impossible; in these cases, estimates may be given associational interpretations. In addition, many important descriptive and predictive research questions can be answered by observational studies that do not require causal notions.” 1

So with excitement and trepidation, we will now consider how best to balance methodologic advances and semantic and interpretive flexibility in the reporting of research with the principles in our long-standing and often discussed reporting policy that generally limits use of causal language to well-done randomized clinical trials. 3 We anticipate that this will be a multistep process of considering potential changes, including how and when we can apply the proposed causal inference framework to select observational studies. Next steps will include reviewing when and how specific observational study designs and analyses can support causal inferences. To support this part of our process, JAMA will publish a new set of articles in the Guide to Statistics and Methods series that address how the proposed causal inference framework may be applied to specific study designs and methods. These guides will build on previous guidance 4-10 and include practical tips and concrete examples of specific study designs and analyses, such as target trial emulation, instrumental variable analysis, regression discontinuity, interrupted time series, difference-in-differences, and mediation analysis. Reports of these studies as well as nonrandomized controlled studies (or other "quasi-experimental" studies) will require that specific conditions be met to support causal inferences, including clearly discussing the necessary assumptions in view of background knowledge; incorporating design elements to improve plausibility of assumptions; implementing statistical methods to address bias due to confounding, selection, missing data, and measurement error; and properly quantifying uncertainty.
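As one concrete illustration of the kind of design listed above, the following Python sketch estimates a difference-in-differences effect from simulated two-group, two-period data. It is a generic textbook example with invented numbers, not JAMA guidance and not any particular study's analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-group, two-period data: the coefficient on the
# treated x post interaction is the difference-in-differences estimate.
n = 2000
treated = rng.integers(0, 2, n)          # 1 = exposed group
post = rng.integers(0, 2, n)             # 1 = after the intervention
true_effect = 2.0
y = (1.0 + 0.5 * treated + 0.8 * post
     + true_effect * treated * post
     + rng.normal(0, 1, n))

X = np.column_stack([np.ones(n), treated, post, treated * post])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("difference-in-differences estimate:", coef[3])  # ~2.0
```

The same regression logic extends to multiple periods and covariates; the key identifying assumption (parallel trends) cannot be verified from the data alone.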

In parallel, there will be continued discussion among our journal editors and statistical editors, engagement with researchers and authors, and plans to identify and retain additional peer reviewers who understand causal inference methods and can adequately judge whether these methods, as described in study reports, have been applied appropriately.

We look forward to readers’ and other stakeholders’ comments about the proposed framework as we continue exploring how best to apply causal inference concepts and provide recommendations for authors, reviewers, editors, and readers.

Published Online: May 9, 2024. doi:10.1001/jama.2024.8107

Corresponding Author: Annette Flanagin, RN, MA ( [email protected] ).

Conflict of Interest Disclosures: Dr Lewis reported serving as senior medical scientist at Berry Consultants LLC, a statistical consulting firm focusing on the design, implementation, and analysis of adaptive and platform clinical trials. No other disclosures were reported.


Flanagin A , Lewis RJ , Muth CC , Curfman G. What Does the Proposed Causal Inference Framework for Observational Studies Mean for JAMA and the JAMA Network Journals? JAMA. Published online May 09, 2024. doi:10.1001/jama.2024.8107


Causal overstatements in modern physical activity research

  • Eivind Schjelderup Skarpsno 1, 2 (http://orcid.org/0000-0002-4135-0408)
  • 1 Department of Public Health and Nursing, Norwegian University of Science and Technology, Trondheim, Norway
  • 2 Department of Neurology and Clinical Neurophysiology, St. Olav's University Hospital, Trondheim, Norway
  • Correspondence to Dr Eivind Schjelderup Skarpsno, Department of Public Health and Nursing, Norwegian University of Science and Technology, Trondheim, Norway; eivind.s.skarpsno@ntnu.no

https://doi.org/10.1136/bjsports-2023-108031


  • Epidemiology
  • Physical activity
  • Prospective Studies

The challenge of causation in physical activity research

Although advancements such as access to large datasets with device-measured physical behaviour and advances in statistics have improved our understanding of the associations between physical activity (PA) and health outcomes, PA research often contains causal overstatements. The line between correlational and causal PA research is narrow, and confounding and reverse causation may lead to false conclusions. We contend that data must be able to answer a causal question before implications for ‘24-hour’ PA guidelines and interventions are considered.

Observational studies published in high-ranking medical journals have shown how advances in PA measurement technologies have improved our understanding of the potential effects of PA on different health outcomes. 3–6 However, these technological advancements have also come with challenges concerning how we should handle the richness of ‘24-hour’ data (eg, different physical behaviours, dimensions and domains) when attempting to answer causal questions. 7 As a response, compositional data analysis (CoDa) has been suggested as an approach that integrates associations between different physical behaviours and health outcomes. 8 CoDa highlights the importance of accounting for the codependency of time spent in different behaviours and has led to a growing interest in using observational data to understand how reallocating time between different behaviours affects health. 9
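To make the CoDa idea concrete, here is a minimal Python sketch with simulated data and invented coefficients (not any cited study's analysis): a 24-hour composition of four behaviours is re-expressed in isometric log-ratio (ilr) coordinates, which respect the constraint that time must sum to 1440 minutes, before being used in an ordinary regression.

```python
import numpy as np

rng = np.random.default_rng(1)

def ilr_basis(d):
    # Orthonormal (d x (d-1)) contrast basis orthogonal to the 1-vector.
    V = np.zeros((d, d - 1))
    for j in range(1, d):
        V[:j, j - 1] = 1.0 / j
        V[j, j - 1] = -1.0
        V[:, j - 1] /= np.linalg.norm(V[:, j - 1])
    return V

def ilr(composition):
    # composition: (n, d) strictly positive parts of the 24-hour day.
    logc = np.log(composition)
    clr = logc - logc.mean(axis=1, keepdims=True)   # centred log-ratio
    return clr @ ilr_basis(composition.shape[1])

# Hypothetical daily compositions: sleep, sedentary, light PA, MVPA (minutes).
n = 1000
minutes = np.maximum(rng.dirichlet([8.0, 7.0, 3.0, 0.5], size=n), 1e-12) * 1440

# Simulated health outcome driven by the ilr coordinates (coefficients are invented).
Z = ilr(minutes)
true_beta = np.array([0.3, -0.2, 0.8])
y = 1.0 + Z @ true_beta + rng.normal(0, 0.5, n)

# OLS on the ilr coordinates recovers the simulated coefficients; interpretation
# is in terms of relative reallocations of time, not absolute minutes.
X = np.column_stack([np.ones(n), Z])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("estimated coefficients:", np.round(coef[1:], 2))   # ~ [0.3, -0.2, 0.8]
```

The transformation only respects the time-use constraint; as the editorial argues, it does not by itself address confounding or reverse causation, so a causal reading of such coefficients still rests on the assumptions discussed in the text.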

A recent review 10 included 103 studies that assessed how reallocating time between behaviours influenced different health outcomes. In this review, which included 77 cross-sectional studies, the overall conclusion was that moderate-to-vigorous PA, at the expense of light PA, sedentary behaviour or sleep, was associated with better health outcomes (eg, adiposity, biomarkers, mental health and chronic disease). However, since most of the included studies were cross-sectional, the authors expressed the obvious need for prospective and experimental study designs. Interestingly, the findings from the above-mentioned review coincide with a recent cross-sectional study published in a leading cardiology journal. 11 This latter paper, which is undeniably well written, with comprehensive analyses based on data from five countries, concludes that there exists a clear hierarchy of behaviours and that moderate-to-vigorous PA demonstrated the strongest, most time-efficient protective associations with cardiometabolic outcomes. 11 The study also implies an unfavourable association when sleep replaced any time spent active, for instance, standing. While the causal interpretation and consequently the causal language in the above-mentioned cross-sectional studies seem appealing, they are problematic because the impact of reverse causation is inevitable. The fact that the outcomes of interest are responsible for the variation in PA, rather than the other way around, is likely to lead to misinterpretation of the observed associations. It is impossible to estimate the causal effects of reallocating time between physical behaviours from studies of this nature.

Optimising PA research to determine causation

Suppose for simplicity that we want to conduct the first observational study ever on whether objectively measured PA is a causal risk factor for cardiovascular disease over a 10-year period. Say that our data show that inactive participants have an 80% increased risk of cardiovascular disease compared with the most active participants. It is likely that the active ones would have a lower risk of cardiovascular disease even if they did not exercise, because they live healthier lives (eg, they are less likely to smoke, can afford healthy food, and seek healthcare services). Suppose that we adjusted for all important confounders selected based on pre-existing knowledge of causal relations and that these confounders were accurately measured. Although no empirical finding can provide absolute certainty, we can talk about a potential causal relation because PA was validly measured before the participants developed cardiovascular disease and because we attempted to remove imbalances between the groups of inactive and active participants.
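The confounding problem in this hypothetical example can be made concrete with a toy simulation (all numbers invented): a "healthy lifestyle" variable drives both activity and cardiovascular risk, while activity itself is given no true effect, so the crude risk ratio is inflated purely by confounding and a simple standardisation over the confounder removes it.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200_000

# Hypothetical confounder: healthy lifestyle (non-smoking, diet, access to care combined).
healthy = rng.random(n) < 0.5

# Activity depends on lifestyle; in this toy, activity has NO true effect on CVD,
# so any crude association is driven entirely by confounding.
active = rng.random(n) < np.where(healthy, 0.7, 0.3)
p_cvd = np.where(healthy, 0.05, 0.10)
cvd = rng.random(n) < p_cvd

def risk(mask):
    return cvd[mask].mean()

crude_rr = risk(~active) / risk(active)

# Standardise the stratum-specific risks to the whole study population.
def std_risk(act_value):
    total = 0.0
    for h in (True, False):
        stratum = healthy == h
        total += risk(stratum & (active == act_value)) * stratum.mean()
    return total

adjusted_rr = std_risk(False) / std_risk(True)
print(f"crude RR (inactive vs active): {crude_rr:.2f}")   # inflated by confounding
print(f"lifestyle-adjusted RR:         {adjusted_rr:.2f}")  # ~1.0
```

With a true effect present the same logic applies; the point is only that the crude contrast mixes the effect of activity with the effect of lifestyle.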

At a time when detailed PA data are becoming more available and the statistical approaches for dealing with them more comprehensive, it is more important than ever to remember the fundamentals of epidemiology. Larger datasets and more precise measurement of PA reduce measurement error but do not remove problems of confounding and reverse causation. Indeed, the causal structure of the data may become more complex when we have a myriad of opportunities to define PA behaviour. Dealing with ‘24-hour’ observational data is challenging from a causal inference perspective and necessitates careful consideration of the joint effects of physical behaviours on health outcomes. 12 We should, therefore, establish a robust causal framework for how we can estimate causal effects in the context of compositional 24-hour PA data. We should also triangulate results across other causal approaches (eg, instrumental variables analysis (with and without genetic instruments), within-sibling comparisons, negative controls) and conduct randomised experiments when feasible (eg, short-term effects of actually replacing time in different behaviours). Like any other method, the above-mentioned approaches have biases and limitations (eg, invalid instruments, lack of power, selection issues, cross-sibling interactions, untestable assumptions) that must be carefully considered before starting a new study.
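As a sketch of one of the triangulation approaches mentioned above, the snippet below simulates an instrumental variable analysis (for example, a genetic instrument in the spirit of Mendelian randomisation) in which an unmeasured confounder biases the naive regression but the ratio (Wald) estimator recovers the simulated effect. The instrument validity assumptions are built into the simulation and, as the text stresses, cannot be taken for granted in real data; all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000

# Hypothetical instrument z (e.g. a genetic score): it shifts PA, has no direct
# effect on the outcome, and is independent of the unmeasured confounder u.
z = rng.normal(size=n)
u = rng.normal(size=n)                       # unmeasured confounder
pa = 0.5 * z + 0.8 * u + rng.normal(size=n)  # physical activity
true_effect = -0.3
y = true_effect * pa + 1.2 * u + rng.normal(size=n)

def slope(x, v):
    c = np.cov(x, v)
    return c[0, 1] / c[0, 0]

naive = slope(pa, y)               # biased by u (here it even has the wrong sign)
wald = slope(z, y) / slope(z, pa)  # instrumental variable (Wald ratio) estimate

print(f"naive regression slope: {naive:.2f}")
print(f"IV (Wald) estimate:     {wald:.2f}")  # ~ -0.3
```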

It is time to stress the need to take the causal question seriously in order to create better empirical evidence that truly can support ‘24-hour’ PA guidelines. To achieve this, authors should call descriptive studies by their names, and journal editors should discourage the use of misleading causal language. This will lead to better scientific reporting and to a better experience for the reader. Cross-sectional studies can provide useful descriptions of the distributions of physical behaviour in the population but should not provide the basis for 24-hour recommendations. Because of methodological limitations, not every research question can be causally motivated, irrespective of study design, sample size, objective measures or the biological plausibility of the association of interest.

Ethics statements

Patient consent for publication.

Not applicable.

  • Newman AB ,
  • Dodson JA ,
  • Church TS , et al
  • Stensvold D ,
  • Steinshamn SL , et al
  • Ekelund U ,
  • Steene-Johannessen J , et al
  • Ahmadi MN ,
  • Gill JMR , et al
  • Stamatakis E ,
  • Khurshid S ,
  • Al-Alusi MA ,
  • Churchill TW , et al
  • Migueles JH ,
  • Aadland E ,
  • Andersen LB , et al
  • Stanford TE ,
  • Martin-Fernández J-A , et al
  • Pedišić Ž ,
  • Stanford TE , et al
  • Maher C , et al
  • Blodgett JM ,
  • Atkin AJ , et al
  • Arnold KF ,
  • Tennant PWG , et al

X @E_Skarpsno

Contributors ESS is the sole author of this editorial.

Funding ESS is supported by a grant from the Liaison Committee between the Central Norway Regional Health Authority (RHA) and the Norwegian University of Science and Technology (NTNU).

Competing interests None declared.

Provenance and peer review Not commissioned; externally peer reviewed.



COMMENTS

  1. Causal Research: Definition, examples and how to use it

    Causal research, also known as explanatory research or causal-comparative research, identifies the extent and nature of cause-and-effect relationships between two or more variables. ... By looking at the data, you'll be able to see what changes you might need to do next time, or if there are questions that require further research. 6. Verify ...

  2. Types of Research Questions: Descriptive, Predictive, or Causal

Good-quality, clinically useful research begins with a question. Research questions fall into 1 of 3 mutually exclusive types: descriptive, predictive, or causal. Imagine you are seeking information about whiplash injuries. You might find studies that address the following questions.

  3. Causal Research Design: Definition, Benefits, Examples

    Causal research is also useful as market researchers can immediately deduce the effect of the variables on each other under real-world conditions. ... Causal research examines a research question's variables and how they interact. It's easier to pinpoint cause and effect since the experiment often happens in a controlled setting.

  4. Causal Research: What it is, Tips & Examples

    Causal research assists in determining the effects of changing procedures and methods. Subjects are chosen in a methodical manner. As a result, it is beneficial for improving internal validity. The ability to analyze the effects of changes on existing events, processes, phenomena, and so on. Finds the sources of variable correlations, bridging ...

  5. Causal Research (Explanatory research)

    Causal research, also known as explanatory research is conducted in order to identify the extent and nature of cause-and-effect relationships. Causal research can be conducted in order to assess impacts of specific changes on existing norms, various processes etc. Causal studies focus on an analysis of a situation or a specific problem to ...

  6. A Clarification on Causal Questions: We Ask Them More Often Than We

    Many statistical methods are valid tools for answering causal questions. 2,3 Researchers should not be constrained in describing their research questions by the methods they use to try to answer them. Second, some researchers may continue to avoid the c-word because they believe that their research question is truly not a causal one.

  7. Introduction to Causal Inference Principles

    Causal inference questions are formalized in terms of counterfactual queries expressing what would have occurred under different treatment conditions, including hypothetical interventions of the agents (e.g., individuals' exposure to pollutants) being evaluated. ... Causal and associational language in observational health research: A ...

  8. Designing a Research Question

    Abstract. This chapter discusses (1) the important role of research questions for descriptive, predictive, and causal studies across the three research paradigms (i.e., quantitative, qualitative, and mixed methods); (2) characteristics of quality research questions, and (3) three frameworks to support the development of research questions and ...

  9. An Introduction to Causal Inference

    3. Structural Models, Diagrams, Causal Effects, and Counterfactuals. Any conception of causation worthy of the title "theory" must be able to (1) represent causal questions in some mathematical language, (2) provide a precise language for communicating assumptions under which the questions need to be answered, (3) provide a systematic way of answering at least some of these questions and ...

  10. Types of Research Questions: Descriptive, Predictive, or Causal

    A previous Evidence in Practice article explained why a specific and answerable research question is important for clinicians and researchers. Determining whether a study aims to answer a descriptive, predictive, or causal question should be one of the first things a reader does when reading an article. Any type of question can be relevant and useful to support evidence-based practice, but ...

  11. 3.1 Descriptive vs. causal questions

Notes. Causal research questions are of a different kind. From a distributional perspective we could ask whether the distribution of a first variable D is somehow causally related to the distribution of a second variable Y. Again we tend to summarize the corresponding distributions, e.g., we could take the mean of trust.

  12. A Clinician's Guide to Conducting Research on Causal Effects

    Causality is at the heart of clinical decision-making, yet formal causal evidence is frequently unavailable to contribute to these decisions. A clinical researcher filling gaps in the evidence typically seeks an answer to a causal question. In practice, that clinician might be unable to conduct an RCT due to resource, ethical, or logistic barriers.

  13. Thinking Clearly About Correlations and Causation: Graphical Causal

    Causal inferences based on observational data require researchers to make very strong assumptions. Researchers who attempt to answer a causal research question with observational data should not only be aware that such an endeavor is challenging, but also understand the assumptions implied by their models and communicate them transparently.

  14. Formulating causal questions and principled statistical answers

    Statistical causal inference has made great progress over the last quarter century, deriving new estimators for well-defined estimands using new tools such as directed acyclic graphs (DAGs) and structural models for potential outcomes. 1-3 However, research papers—both theoretical and applied—tend to select an analysis method without ...

  15. Causal Explanation

    Abstract. Explanation and causation are intimately related. Explanations often appeal to causes, and causal claims are often answers to implicit or explicit questions about why or how something occurred. This chapter considers what we can learn about causal reasoning from research on explanation.

  16. What Is Causal Research? (With Examples, Benefits and Tips)

    Benefits of causal research. Common benefits of using causal research in your workplace include: Understanding more nuances of a system: Learning how each step of a process works can help you resolve issues and optimize your strategies. Developing a dependable process: You can create a repeatable process to use in multiple contexts, as you can ...

  17. The causal inference framework: a primer on concepts and methods for

    Directed acyclic graphs (DAGs) are causal diagrams to improve research design and also to identify an optimal analytic approach, particularly when the research question involves complex causal chains. 34,35 DAGs are akin to conceptual frameworks, with formal rules for defining causal effects and various forms of bias. As such, DAGs make ...

  18. Types of Research Questions: Descriptive, Predictive, or Causal

    Determining whether a study aims to answer a descriptive, predictive, or causal question should be one of the first things a reader does when reading an article. Any type of question can be relevant and useful to support evidence-based practice, but only if the question is well defined, matched to the right study design, and reported correctly ...

  19. Formulating causal questions and principled statistical answers

    This tutorial gives an overview in situations where an exposure of interest is set at a chosen baseline ("point exposure") and the target outcome arises at a later time point. We first phrase relevant causal questions and make a case for being specific about the possible exposure levels involved and the populations for which the question is ...

  20. Asking Questions to Provide a Causal Explanation

    Research on causal attribution (i.e., on how people determine the cause of a particular event) showed that people base their judgment on causal mechanisms at least when they are observable ... Of the causal questions, 29% were classified as asking for an explanation and another 16% as asking about causation. These questions presumably serve an ...

  21. Types of Research Questions

Types of Research Questions. ... Causal: Cause and Effect Questions. Designed to determine whether one or more variables causes or affects one or more outcome variables. What is the effect of exercise on heart rate? What is the effect of hand fatigue on reaction time? What are the most potent vectors for disease transmission? ...

  22. PDF 0 Causality: Models, Reasoning, and Inference

This book seeks to integrate research on cause and effect inference from cognitive science, econometrics, epidemiology, philosophy, and statistics. It puts ... Pearl asks two questions of the Goldberger model. First: "What is the ex- ... causal effect, where the expectation is over the C(N,n) possible sets of observations ...

  23. Research: Articulating Questions, Generating Hypotheses, and Choosing

    Articulating a clear and concise research question is fundamental to conducting a robust and useful research study. Although "getting stuck into" the data collection is the exciting part of research, this preparation stage is crucial. Clear and concise research questions are needed for a number of reasons. ... If it is a causal research ...

  24. Meaning of Proposed Causal Inference Framework for the JAMA Network

    Many observational studies do not address causal questions, and for some with this intent, causal inference may not be relevant, sufficiently well supported, or even possible. ... In addition, many important descriptive and predictive research questions can be answered by observational studies that do not require causal notions. ...

  25. Reformative concept analysis for applied psychology qualitative research

Critical realism accommodates social relationships and political contexts as significant causal influences. Critical realist research (as described by Fletcher, 2017) is flexibly deductive in that it commences with research questions derived from shortcomings identified in existing theory; relies on interpretive extraction of themes ...

  26. Causal overstatements in modern physical activity research

    We contend that data must be able to answer a causal question before implications for '24-hour' PA guidelines and interventions are considered. Ideally, all comparative-effectiveness-research questions around PA would be answered via a sufficiently powered, perfectly randomised experiment with relevant outcomes, long follow-up and perfect ...

  27. A guide to improve your causal inferences from observational data

The problem: tackling causal questions in observational studies. ... In sum, the RI-CLPM can be a useful tool for cardiovascular nursing researchers who want to answer causal research questions.

  28. Do social media experiments prove a link with mental health: A

    Whether social media influences the mental well-being of users remains controversial. Evidence from correlational and longitudinal studies has been inconsistent, with effect sizes weak at best. However, some commentators are more convinced by experimental studies, wherein experimental groups are asked to refrain from social media use for some length of time, compared to a control group of ...