Data Interpretation – Process, Methods and Questions

Data Interpretation

Definition:

Data interpretation refers to the process of making sense of data by analyzing and drawing conclusions from it. It involves examining data in order to identify patterns, relationships, and trends that can help explain the underlying phenomena being studied. Data interpretation can be used to make informed decisions and solve problems across a wide range of fields, including business, science, and social sciences.

Data Interpretation Process

Here are the steps involved in the data interpretation process:

  • Define the research question: The first step in data interpretation is to clearly define the research question. This will help you focus your analysis and ensure that you are interpreting the data in a way that is relevant to your research objectives.
  • Collect the data: The next step is to collect the data. This can be done through a variety of methods such as surveys, interviews, observation, or secondary data sources.
  • Clean and organize the data: Once the data has been collected, it is important to clean and organize it. This involves checking for errors, inconsistencies, and missing data. Data cleaning can be a time-consuming process, but it is essential to ensure that the data is accurate and reliable.
  • Analyze the data: The next step is to analyze the data. This can involve using statistical software or other tools to calculate summary statistics, create graphs and charts, and identify patterns in the data (a minimal end-to-end sketch follows this list).
  • Interpret the results: Once the data has been analyzed, it is important to interpret the results. This involves looking for patterns, trends, and relationships in the data, and drawing conclusions based on the results of the analysis.
  • Communicate the findings: The final step is to communicate the findings. This can involve creating reports, presentations, or visualizations that summarize the key findings of the analysis. It is important to communicate the findings in a way that is clear, concise, and tailored to the audience’s needs.
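
To make the steps concrete, here is a minimal Python sketch of the collect → clean → analyze → communicate flow. The file name survey.csv and the columns age and satisfaction are hypothetical placeholders, not part of any particular study.

```python
# Minimal sketch of the process above, assuming a hypothetical survey.csv
# with numeric columns "age" and "satisfaction" (both names are made up).
import pandas as pd
import matplotlib.pyplot as plt

# Collect: load the raw data.
df = pd.read_csv("survey.csv")

# Clean and organize: drop rows with missing values and obvious errors.
df = df.dropna(subset=["age", "satisfaction"])
df = df[(df["age"] >= 0) & (df["age"] <= 120)]

# Analyze: summary statistics and a simple relationship check.
print(df[["age", "satisfaction"]].describe())
print("Correlation:", round(df["age"].corr(df["satisfaction"]), 2))

# Interpret and communicate: a chart that can go into the final report.
df.plot.scatter(x="age", y="satisfaction", title="Satisfaction by age")
plt.savefig("satisfaction_by_age.png")
```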

Types of Data Interpretation

There are various types of data interpretation techniques used for analyzing and making sense of data. Here are some of the most common types:

Descriptive Interpretation

This type of interpretation involves summarizing and describing the key features of the data. This can involve calculating measures of central tendency (such as mean, median, and mode), measures of dispersion (such as range, variance, and standard deviation), and creating visualizations such as histograms, box plots, and scatterplots.
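
As a quick illustration, the sketch below computes the usual descriptive measures for a small set of made-up scores using only Python’s standard library.

```python
# Descriptive interpretation of a small numeric sample (illustrative values).
import statistics as st

scores = [12, 15, 15, 18, 20, 22, 22, 22, 25, 30]

print("mean:", st.mean(scores))             # central tendency
print("median:", st.median(scores))
print("mode:", st.mode(scores))
print("range:", max(scores) - min(scores))  # dispersion
print("variance:", st.variance(scores))     # sample variance
print("std dev:", st.stdev(scores))         # sample standard deviation
```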

Inferential Interpretation

This type of interpretation involves making inferences about a larger population based on a sample of the data. This can involve hypothesis testing, where you test a hypothesis about a population parameter using sample data, or confidence interval estimation, where you estimate a range of values for a population parameter based on sample data.
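
A hedged sketch of both ideas, assuming SciPy is available; the sample values are invented and the null hypothesis (a population mean of 5.0) is chosen purely for illustration.

```python
# One-sample hypothesis test and confidence interval on a made-up sample.
import numpy as np
from scipy import stats

sample = np.array([4.8, 5.1, 5.4, 4.9, 5.6, 5.2, 5.0, 5.3])

# One-sample t-test: is the population mean different from 5.0?
t_stat, p_value = stats.ttest_1samp(sample, popmean=5.0)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")

# 95% confidence interval for the population mean.
ci = stats.t.interval(
    0.95, df=len(sample) - 1,
    loc=sample.mean(), scale=stats.sem(sample),
)
print("95% CI:", tuple(round(x, 2) for x in ci))
```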

Predictive Interpretation

This type of interpretation involves using data to make predictions about future outcomes. This can involve building predictive models using statistical techniques such as regression analysis, time-series analysis, or machine learning algorithms.
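
For example, a simple regression-based forecast might look like the following scikit-learn sketch; the monthly sales figures are invented, and a straight-line trend is assumed only to keep the example short.

```python
# Predictive interpretation sketch: fit a linear trend and forecast ahead.
import numpy as np
from sklearn.linear_model import LinearRegression

months = np.arange(1, 13).reshape(-1, 1)   # months 1..12
sales = np.array([10, 12, 13, 15, 16, 18, 20, 21, 23, 25, 26, 28])

model = LinearRegression().fit(months, sales)
forecast = model.predict(np.array([[13], [14], [15]]))  # next quarter
print("Forecast for months 13-15:", forecast.round(1))
```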

Exploratory Interpretation

This type of interpretation involves exploring the data to identify patterns and relationships that were not previously known. This can involve data mining techniques such as clustering analysis, principal component analysis, or association rule mining.
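
The sketch below shows one possible exploratory pass, combining PCA and k-means from scikit-learn on its bundled iris data as a stand-in for your own dataset.

```python
# Exploratory interpretation sketch: reduce dimensions with PCA and look for
# groups with k-means.
from sklearn.datasets import load_iris
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

X = load_iris().data

# Project onto two principal components to expose the main structure.
X_2d = PCA(n_components=2).fit_transform(X)

# Group the observations into three clusters and inspect the sizes.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_2d)
for cluster in set(labels):
    print(f"cluster {cluster}: {(labels == cluster).sum()} observations")
```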

Causal Interpretation

This type of interpretation involves identifying causal relationships between variables in the data. This can involve experimental designs, such as randomized controlled trials, or observational studies, such as regression analysis or propensity score matching.
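
Causal analysis usually requires far more care than a snippet can convey, but the following sketch shows the basic idea of regression adjustment on simulated observational data with statsmodels; the variable names and the true effect size are invented for the simulation, and this is not a substitute for a properly designed study.

```python
# Rough sketch of regression adjustment on simulated observational data
# (not a full propensity-score or experimental analysis).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
age = rng.normal(40, 10, n)                                  # confounder
treated = (age + rng.normal(0, 10, n) > 45).astype(int)      # treatment depends on age
outcome = 2.0 * treated + 0.1 * age + rng.normal(0, 1, n)    # true effect = 2.0

# Adjusting for the confounder recovers an estimate close to the true effect.
X = sm.add_constant(np.column_stack([treated, age]))
fit = sm.OLS(outcome, X).fit()
print("estimated treatment effect:", round(fit.params[1], 2))
```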

Data Interpretation Methods

There are various methods for data interpretation that can be used to analyze and make sense of data. Here are some of the most common methods:

Statistical Analysis

This method involves using statistical techniques to analyze the data. Statistical analysis can involve descriptive statistics (such as measures of central tendency and dispersion), inferential statistics (such as hypothesis testing and confidence interval estimation), and predictive modeling (such as regression analysis and time-series analysis).

Data Visualization

This method involves using visual representations of the data to identify patterns and trends. Data visualization can involve creating charts, graphs, and other visualizations, such as heat maps or scatterplots.
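
A minimal matplotlib sketch; the months and sales figures are invented, and in an interactive session you would typically call plt.show() instead of saving a file.

```python
# Data-visualization sketch: a bar chart of invented monthly sales.
import matplotlib.pyplot as plt

months = ["Jan", "Feb", "Mar", "Apr", "May", "Jun"]
sales = [120, 135, 150, 145, 170, 190]

plt.bar(months, sales)
plt.title("Monthly sales")
plt.ylabel("Units sold")
plt.tight_layout()
plt.savefig("monthly_sales.png")
```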

Text Analysis

This method involves analyzing text data, such as survey responses or social media posts, to identify patterns and themes. Text analysis can involve techniques such as sentiment analysis, topic modeling, and natural language processing.
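
As a toy illustration of the counting logic behind sentiment analysis, the sketch below scores a few invented responses against a hand-made keyword lexicon; a real project would rely on a proper NLP library rather than this simplistic word matching.

```python
# Toy sentiment scoring with a hand-made keyword lexicon (illustrative only).
positive = {"great", "love", "fast", "friendly"}
negative = {"slow", "broken", "rude", "bad"}

responses = [
    "Great service and friendly staff",
    "Delivery was slow and the box arrived broken",
    "Love the product, fast shipping",
]

for text in responses:
    words = set(text.lower().split())
    score = len(words & positive) - len(words & negative)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    print(f"{label:8s} (score {score:+d}): {text}")
```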

Machine Learning

This method involves using algorithms to identify patterns in the data and make predictions or classifications. Machine learning can involve techniques such as decision trees, neural networks, and random forests.
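
For instance, a random forest classifier can be trained and evaluated in a few lines with scikit-learn, shown here on its bundled iris data as a neutral stand-in for real business data.

```python
# Machine-learning sketch: random forest with a simple train/test split.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("test accuracy:", round(model.score(X_test, y_test), 2))
```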

Qualitative Analysis

This method involves analyzing non-numeric data, such as interviews or focus group discussions, to identify themes and patterns. Qualitative analysis can involve techniques such as content analysis, grounded theory, and narrative analysis.

Geospatial Analysis

This method involves analyzing spatial data, such as maps or GPS coordinates, to identify patterns and relationships. Geospatial analysis can involve techniques such as spatial autocorrelation, hot spot analysis, and clustering.
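
One common geospatial technique, density-based clustering of coordinates, can be sketched with scikit-learn’s DBSCAN; the latitude/longitude points below are made up, and the 5 km radius is an arbitrary choice for the example.

```python
# Geospatial clustering sketch: group latitude/longitude points with DBSCAN
# using the haversine metric, which expects coordinates in radians.
import numpy as np
from sklearn.cluster import DBSCAN

points_deg = np.array([
    [52.52, 13.40], [52.53, 13.41], [52.51, 13.39],   # Berlin area
    [48.86, 2.35],  [48.85, 2.34],                    # Paris area
])

earth_radius_km = 6371.0
eps_km = 5.0  # points within ~5 km end up in the same cluster

db = DBSCAN(eps=eps_km / earth_radius_km, min_samples=2, metric="haversine")
labels = db.fit_predict(np.radians(points_deg))
print("cluster labels:", labels)   # e.g. [0 0 0 1 1]
```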

Applications of Data Interpretation

Data interpretation has a wide range of applications across different fields, including business, healthcare, education, social sciences, and more. Here are some examples of how data interpretation is used in different applications:

  • Business: Data interpretation is widely used in business to inform decision-making, identify market trends, and optimize operations. For example, businesses may analyze sales data to identify the most popular products or customer demographics, or use predictive modeling to forecast demand and adjust pricing accordingly.
  • Healthcare: Data interpretation is critical in healthcare for identifying disease patterns, evaluating treatment effectiveness, and improving patient outcomes. For example, healthcare providers may use electronic health records to analyze patient data and identify risk factors for certain diseases or conditions.
  • Education: Data interpretation is used in education to assess student performance, identify areas for improvement, and evaluate the effectiveness of instructional methods. For example, schools may analyze test scores to identify students who are struggling and provide targeted interventions to improve their performance.
  • Social sciences: Data interpretation is used in social sciences to understand human behavior, attitudes, and perceptions. For example, researchers may analyze survey data to identify patterns in public opinion or use qualitative analysis to understand the experiences of marginalized communities.
  • Sports: Data interpretation is increasingly used in sports to inform strategy and improve performance. For example, coaches may analyze performance data to identify areas for improvement or use predictive modeling to assess the likelihood of injuries or other risks.

When to use Data Interpretation

Data interpretation is used to make sense of complex data and to draw conclusions from it. It is particularly useful when working with large datasets or when trying to identify patterns or trends in the data. Data interpretation can be used in a variety of settings, including scientific research, business analysis, and public policy.

In scientific research, data interpretation is often used to draw conclusions from experiments or studies. Researchers use statistical analysis and data visualization techniques to interpret their data and to identify patterns or relationships between variables. This can help them to understand the underlying mechanisms of their research and to develop new hypotheses.

In business analysis, data interpretation is used to analyze market trends and consumer behavior. Companies can use data interpretation to identify patterns in customer buying habits, to understand market trends, and to develop marketing strategies that target specific customer segments.

In public policy, data interpretation is used to inform decision-making and to evaluate the effectiveness of policies and programs. Governments and other organizations use data interpretation to track the impact of policies and programs over time, to identify areas where improvements are needed, and to develop evidence-based policy recommendations.

In general, data interpretation is useful whenever large amounts of data need to be analyzed and understood in order to make informed decisions.

Data Interpretation Examples

Here are some real-time examples of data interpretation:

  • Social media analytics: Social media platforms generate vast amounts of data every second, and businesses can use this data to analyze customer behavior, track sentiment, and identify trends. Data interpretation in social media analytics involves analyzing data in real-time to identify patterns and trends that can help businesses make informed decisions about marketing strategies and customer engagement.
  • Healthcare analytics: Healthcare organizations use data interpretation to analyze patient data, track outcomes, and identify areas where improvements are needed. Real-time data interpretation can help healthcare providers make quick decisions about patient care, such as identifying patients who are at risk of developing complications or adverse events.
  • Financial analysis: Real-time data interpretation is essential for financial analysis, where traders and analysts need to make quick decisions based on changing market conditions. Financial analysts use data interpretation to track market trends, identify opportunities for investment, and develop trading strategies.
  • Environmental monitoring: Real-time data interpretation is important for environmental monitoring, where data is collected from various sources such as satellites, sensors, and weather stations. Data interpretation helps to identify patterns and trends that can help predict natural disasters, track changes in the environment, and inform decision-making about environmental policies.
  • Traffic management: Real-time data interpretation is used for traffic management, where traffic sensors collect data on traffic flow, congestion, and accidents. Data interpretation helps to identify areas where traffic congestion is high, and helps traffic management authorities make decisions about road maintenance, traffic signal timing, and other strategies to improve traffic flow.

Data Interpretation Questions

Here are some sample data interpretation questions:

  • Medical: What is the correlation between a patient’s age and their risk of developing a certain disease?
  • Environmental Science: What is the trend in the concentration of a certain pollutant in a particular body of water over the past 10 years?
  • Finance: What is the correlation between a company’s stock price and its quarterly revenue?
  • Education: What is the trend in graduation rates for a particular high school over the past 5 years?
  • Marketing: What is the correlation between a company’s advertising budget and its sales revenue?
  • Sports: What is the trend in the number of home runs hit by a particular baseball player over the past 3 seasons?
  • Social Science: What is the correlation between a person’s level of education and their income level?

In order to answer these questions, you would need to analyze and interpret the data using statistical methods, graphs, and other visualization tools.
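
For instance, the education-versus-income question could be approached with a correlation coefficient, as in the sketch below; the figures are invented solely to show the mechanics.

```python
# Sketch of answering a correlation question like the ones above,
# using NumPy's correlation coefficient on invented figures.
import numpy as np

years_of_education = np.array([10, 12, 12, 14, 16, 16, 18, 20])
income_thousands = np.array([28, 33, 35, 40, 52, 55, 62, 70])

r = np.corrcoef(years_of_education, income_thousands)[0, 1]
print(f"Pearson correlation: {r:.2f}")  # values near +1 suggest a strong positive association
```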

Purpose of Data Interpretation

The purpose of data interpretation is to make sense of complex data by analyzing and drawing insights from it. The process of data interpretation involves identifying patterns and trends, making comparisons, and drawing conclusions based on the data. The ultimate goal of data interpretation is to use the insights gained from the analysis to inform decision-making.

Data interpretation is important because it allows individuals and organizations to:

  • Understand complex data: Data interpretation helps individuals and organizations to make sense of complex data sets that would otherwise be difficult to understand.
  • Identify patterns and trends: Data interpretation helps to identify patterns and trends in data, which can reveal important insights about the underlying processes and relationships.
  • Make informed decisions: Data interpretation provides individuals and organizations with the information they need to make informed decisions based on the insights gained from the data analysis.
  • Evaluate performance: Data interpretation helps individuals and organizations to evaluate their performance over time and to identify areas where improvements can be made.
  • Communicate findings: Data interpretation allows individuals and organizations to communicate their findings to others in a clear and concise manner, which is essential for informing stakeholders and making changes based on the insights gained from the analysis.

Characteristics of Data Interpretation

Here are some characteristics of data interpretation:

  • Contextual: Data interpretation is always contextual, meaning that the interpretation of data is dependent on the context in which it is analyzed. The same data may have different meanings depending on the context in which it is analyzed.
  • Iterative: Data interpretation is an iterative process, meaning that it often involves multiple rounds of analysis and refinement as more data becomes available or as new insights are gained from the analysis.
  • Subjective: Data interpretation is often subjective, as it involves the interpretation of data by individuals who may have different perspectives and biases. It is important to acknowledge and address these biases when interpreting data.
  • Analytical: Data interpretation involves the use of analytical tools and techniques to analyze and draw insights from data. These may include statistical analysis, data visualization, and other data analysis methods.
  • Evidence-based: Data interpretation is evidence-based, meaning that it is based on the data and the insights gained from the analysis. It is important to ensure that the data used in the analysis is accurate, relevant, and reliable.
  • Actionable: Data interpretation is actionable, meaning that it provides insights that can be used to inform decision-making and to drive action. The ultimate goal of data interpretation is to use the insights gained from the analysis to improve performance or to achieve specific goals.

Advantages of Data Interpretation

Data interpretation has several advantages, including:

  • Improved decision-making: Data interpretation provides insights that can be used to inform decision-making. By analyzing data and drawing insights from it, individuals and organizations can make informed decisions based on evidence rather than intuition.
  • Identification of patterns and trends: Data interpretation helps to identify patterns and trends in data, which can reveal important insights about the underlying processes and relationships. This information can be used to improve performance or to achieve specific goals.
  • Evaluation of performance: Data interpretation helps individuals and organizations to evaluate their performance over time and to identify areas where improvements can be made. By analyzing data, organizations can identify strengths and weaknesses and make changes to improve their performance.
  • Communication of findings: Data interpretation allows individuals and organizations to communicate their findings to others in a clear and concise manner, which is essential for informing stakeholders and making changes based on the insights gained from the analysis.
  • Better resource allocation: Data interpretation can help organizations allocate resources more efficiently by identifying areas where resources are needed most. By analyzing data, organizations can identify areas where resources are being underutilized or where additional resources are needed to improve performance.
  • Improved competitiveness: Data interpretation can give organizations a competitive advantage by providing insights that help to improve performance, reduce costs, or identify new opportunities for growth.

Limitations of Data Interpretation

Data interpretation has some limitations, including:

  • Limited by the quality of data: The quality of data used in data interpretation can greatly impact the accuracy of the insights gained from the analysis. Poor quality data can lead to incorrect conclusions and decisions.
  • Subjectivity: Data interpretation can be subjective, as it involves the interpretation of data by individuals who may have different perspectives and biases. This can lead to different interpretations of the same data.
  • Limited by analytical tools: The analytical tools and techniques used in data interpretation can also limit the accuracy of the insights gained from the analysis. Different analytical tools may yield different results, and some tools may not be suitable for certain types of data.
  • Time-consuming: Data interpretation can be a time-consuming process, particularly for large and complex data sets. This can make it difficult to quickly make decisions based on the insights gained from the analysis.
  • Incomplete data: Data interpretation can be limited by incomplete data sets, which may not provide a complete picture of the situation being analyzed. Incomplete data can lead to incorrect conclusions and decisions.
  • Limited by context: Data interpretation is always contextual, meaning that the interpretation of data is dependent on the context in which it is analyzed. The same data may have different meanings depending on the context in which it is analyzed.

Difference between Data Interpretation and Data Analysis

Data interpretation and data analysis are two different but closely related processes in data-driven decision-making.

Data analysis refers to the process of examining data using statistical and computational methods to derive insights and conclusions from it. It involves cleaning, transforming, and modeling the data to uncover patterns, relationships, and trends that can help in understanding the underlying phenomena.

Data interpretation, on the other hand, refers to the process of making sense of the findings from the data analysis by contextualizing them within the larger problem domain. It involves identifying the key takeaways from the data analysis, assessing their relevance and significance to the problem at hand, and communicating the insights in a clear and actionable manner.

In short, data analysis is about uncovering insights from the data, while data interpretation is about making sense of those insights and translating them into actionable recommendations.



Data Interpretation: Definition and Steps with Examples

Data interpretation is the process of collecting data from one or more sources, analyzing it using appropriate methods, & drawing conclusions.

A good data interpretation process is key to making your data usable. It will help you make sure you’re drawing the correct conclusions and acting on your information.

No matter what, data is everywhere in the modern world. Organizations fall into two groups: those drowning in data or not using it appropriately, and those benefiting from it.

In this blog, you will learn the definition of data interpretation and its primary steps and examples.

What is Data Interpretation?

Data interpretation is the process of reviewing data and arriving at relevant conclusions using various analytical research methods. Data analysis assists researchers in categorizing, manipulating, and summarizing data to answer critical questions.


In business terms, the interpretation of data is the execution of various processes. This process analyzes and revises data to gain insights and recognize emerging patterns and behaviors. These conclusions will assist you as a manager in making an informed decision based on numbers while having all of the facts at your disposal.

Importance of Data Interpretation

Raw data is useless unless it’s interpreted. Data interpretation is important to businesses and people. The collected data helps make informed decisions.

Make better decisions

Any decision is based on the information that is available at the time. People used to think that many diseases were caused by bad blood, which was one of the four humors. So, the solution was to get rid of the bad blood. We now know that things like viruses, bacteria, and immune responses can cause illness and can act accordingly.

In the same way, when you know how to collect and understand data well, you can make better decisions. You can confidently choose a path for your organization or even your life instead of working with assumptions.

The most important thing is to follow a transparent process to reduce mistakes and fatigue when making decisions.

Find trends and take action

Another practical use of data interpretation is to get ahead of trends before they reach their peak. Some people have made a living by researching industries, spotting trends, and then making big bets on them.


With the proper data interpretations and a little bit of work, you can catch the start of trends and use them to help your business or yourself grow. 

Better resource allocation

The last benefit of data interpretation we will discuss is the ability to use people, tools, money, etc., more efficiently. For example, if you know via strong data interpretation that a market is underserved, you’ll go after it with more energy and win.

In the same way, you may find out that a market you thought was a good fit is actually bad. This could be because the market is too big for your products to serve, there is too much competition, or something else.

No matter what, you can move the resources you need faster and better to get better results.

What are the steps in interpreting data?

Here are some steps to interpreting data correctly.

Gather the data

The very first step in data interpretation is gathering all relevant data. You can do this by first visualizing it in a bar chart, graph, or pie chart. This step aims to analyze the data accurately and without bias. Now is the time to recall how you conducted your research.

Here are two question patterns that will help you to understand better.

  • Were there any flaws or changes that occurred during the data collection process?
  • Have you saved any observatory notes or indicators?

You can proceed to the next stage when you have all of your data.

Develop your discoveries

This is a summary of your findings. Here, you thoroughly examine the data to identify trends, patterns, or behavior. If you are researching a group of people using a sample population, this is the section where you examine behavioral patterns. You can compare these deductions to previous data sets, similar data sets, or general hypotheses in your industry. This step’s goal is to compare these deductions before drawing any conclusions.

Draw conclusions

After you’ve developed your findings from your data sets, you can draw conclusions based on your discovered trends. Your findings should address the questions that prompted your research. If they do not respond, inquire about why; it may produce additional research or questions.


Give recommendations

The data interpretation procedure comes to a close with this stage. Every research conclusion must include a recommendation. As recommendations are a summary of your findings and conclusions, they should be brief. There are only two options for recommendations; you can either recommend a course of action or suggest additional research.

Data interpretation examples

Here are two examples of data interpretations to help you understand it better:

Let’s say your users fall into four age groups. A company can then see which age group engages most with its content or product. Based on bar charts or pie charts, it can develop a marketing strategy to reach uninvolved groups or an outreach strategy to grow its core user base.
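
A hedged sketch of that age-group example: the age_group labels and the user records below are hypothetical, and the chart is only meant to show how the counts could be produced.

```python
# Count hypothetical users per age group with pandas and chart the shares.
import pandas as pd
import matplotlib.pyplot as plt

users = pd.DataFrame({
    "age_group": ["18-24", "25-34", "25-34", "35-44", "18-24",
                  "45+", "25-34", "35-44", "18-24", "25-34"]
})

counts = users["age_group"].value_counts()
print(counts)

counts.plot.pie(autopct="%1.0f%%")
plt.ylabel("")
plt.title("Users by age group")
plt.savefig("users_by_age_group.png")
```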

Another example of data analysis is the use of recruitment CRM by businesses. They utilize it to find candidates, track their progress, and manage their entire hiring process to determine how they can better automate their workflow.

Overall, data interpretation is an essential factor in data-driven decision-making. It should be performed on a regular basis as part of an iterative interpretation process. Investors, developers, and sales and acquisition professionals can benefit from routine data interpretation. It is what you do with those insights that determine the success of your business.

Contact QuestionPro experts if you need assistance conducting research or creating a data analysis. We can walk you through the process and help you make the most of your data.


A Guide To The Methods, Benefits & Problems of The Interpretation of Data


Table of Contents

1) What Is Data Interpretation?

2) How To Interpret Data?

3) Why Data Interpretation Is Important?

4) Data Interpretation Skills

5) Data Analysis & Interpretation Problems

6) Data Interpretation Techniques & Methods

7) The Use of Dashboards For Data Interpretation

8) Business Data Interpretation Examples

Data analysis and interpretation have now taken center stage with the advent of the digital age… and the sheer amount of data can be frightening. In fact, a Digital Universe study found that the total data supply in 2012 was 2.8 trillion gigabytes! Based on that amount of data alone, it is clear the calling card of any successful enterprise in today’s global world will be the ability to analyze complex data, produce actionable insights, and adapt to new market needs… all at the speed of thought.

Business dashboards are the digital age tools for big data. Capable of displaying key performance indicators (KPIs) for both quantitative and qualitative data analyses, they are ideal for making the fast-paced and data-driven market decisions that push today’s industry leaders to sustainable success. Through the art of streamlined visual communication, data dashboards permit businesses to engage in real-time and informed decision-making and are key instruments in data interpretation. First of all, let’s find a definition to understand what lies behind this practice.

What Is Data Interpretation?

Data interpretation refers to the process of using diverse analytical methods to review data and arrive at relevant conclusions. The interpretation of data helps researchers to categorize, manipulate, and summarize the information in order to answer critical questions.

The importance of data interpretation is evident, and this is why it needs to be done properly. Data is very likely to arrive from multiple sources and has a tendency to enter the analysis process with haphazard ordering. Data analysis tends to be extremely subjective. That is to say, the nature and goal of interpretation will vary from business to business, likely correlating to the type of data being analyzed. While there are several types of processes that are implemented based on the nature of individual data, the two broadest and most common categories are “quantitative and qualitative analysis.”

Yet, before any serious data interpretation inquiry can begin, a sound decision must be made regarding measurement scales: visual presentations of data findings are irrelevant unless the scale of measurement is settled first, and this choice will have a long-term impact on data interpretation ROI. The varying scales include:

  • Nominal Scale: non-numeric categories that cannot be ranked or compared quantitatively. Variables are exclusive and exhaustive.
  • Ordinal Scale: exhaustive, mutually exclusive categories with a logical order. Quality ratings and agreement ratings are examples of ordinal scales (i.e., good, very good, fair, etc., OR agree, strongly agree, disagree, etc.).
  • Interval: a measurement scale where data is grouped into categories with orderly and equal distances between the categories. The zero point is arbitrary.
  • Ratio: contains the features of all three scales and, in addition, has a true (non-arbitrary) zero point (a short pandas sketch of the nominal/ordinal distinction follows this list).
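
The distinction between nominal and ordinal data can be made concrete in pandas, as in the short sketch below; the rating and color values are arbitrary examples.

```python
# Nominal vs. ordinal in pandas: an ordered categorical can be compared and
# sorted, while a nominal one has no meaningful ranking.
import pandas as pd

ratings = pd.Categorical(
    ["good", "fair", "very good", "good"],
    categories=["fair", "good", "very good"],  # logical order
    ordered=True,
)
print(ratings.min(), "->", ratings.max())        # fair -> very good

colors = pd.Categorical(["red", "blue", "red"])  # nominal: no order
print(colors.categories.tolist())
```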

For a more in-depth review of scales of measurement, read our article on data analysis questions . Once measurement scales have been selected, it is time to select which of the two broad interpretation processes will best suit your data needs. Let’s take a closer look at those specific methods and possible data interpretation problems.

How To Interpret Data? Top Methods & Techniques


When interpreting data, an analyst must try to discern the differences between correlation, causation, and coincidence, as well as many other biases, while also considering all the factors involved that may have led to a result. There are various data interpretation types and methods one can use to achieve this.

The interpretation of data is designed to help people make sense of numerical data that has been collected, analyzed, and presented. Having a baseline method for interpreting data will provide your analyst teams with a structure and consistent foundation. Indeed, if several departments have different approaches to interpreting the same data while sharing the same goals, some mismatched objectives can result. Disparate methods will lead to duplicated efforts, inconsistent solutions, wasted energy, and inevitably – time and money. In this part, we will look at the two main methods of interpretation of data: qualitative and quantitative analysis.

Qualitative Data Interpretation

Qualitative data analysis can be summed up in one word – categorical. With this type of analysis, data is not described through numerical values or patterns but through the use of descriptive context (i.e., text). Typically, narrative data is gathered by employing a wide variety of person-to-person techniques. These techniques include:

  • Observations: detailing behavioral patterns that occur within an observation group. These patterns could be the amount of time spent in an activity, the type of activity, and the method of communication employed.
  • Focus groups: Group people and ask them relevant questions to generate a collaborative discussion about a research topic.
  • Secondary Research: much like how patterns of behavior can be observed, various types of documentation resources can be coded and divided based on the type of material they contain.
  • Interviews: one of the best collection methods for narrative data. Inquiry responses can be grouped by theme, topic, or category. The interview approach allows for highly focused data segmentation.

A key difference between qualitative and quantitative analysis is clearly noticeable in the interpretation stage. The first one is widely open to interpretation and must be “coded” so as to facilitate the grouping and labeling of data into identifiable themes. As person-to-person data collection techniques can often result in disputes pertaining to proper analysis, qualitative data analysis is often summarized through three basic principles: notice things, collect things, and think about things.

After qualitative data has been collected through transcripts, questionnaires, audio and video recordings, or the researcher’s notes, it is time to interpret it. For that purpose, there are some common methods used by researchers and analysts.

  • Content analysis : As its name suggests, this is a research method used to identify frequencies and recurring words, subjects, and concepts in image, video, or audio content. It transforms qualitative information into quantitative data to help discover trends and conclusions that will later support important research or business decisions. This method is often used by marketers to understand brand sentiment from the mouths of customers themselves. Through that, they can extract valuable information to improve their products and services. It is recommended to use content analytics tools for this method as manually performing it is very time-consuming and can lead to human error or subjectivity issues. Having a clear goal in mind before diving into it is another great practice for avoiding getting lost in the fog.  
  • Thematic analysis: This method focuses on analyzing qualitative data, such as interview transcripts, survey questions, and others, to identify common patterns and separate the data into different groups according to found similarities or themes. For example, imagine you want to analyze what customers think about your restaurant. For this purpose, you do a thematic analysis on 1000 reviews and find common themes such as “fresh food”, “cold food”, “small portions”, “friendly staff”, etc. With those recurring themes in hand, you can extract conclusions about what could be improved or enhanced based on your customer’s experiences. Since this technique is more exploratory, be open to changing your research questions or goals as you go. 
  • Narrative analysis: A bit more specific and complicated than the two previous methods, it is used to analyze stories and discover their meaning. These stories can be extracted from testimonials, case studies, and interviews, as these formats give people more space to tell their experiences. Given that collecting this kind of data is harder and more time-consuming, sample sizes for narrative analysis are usually smaller, which makes it harder to reproduce its findings. However, it is still a valuable technique for understanding customers' preferences and mindsets.  
  • Discourse analysis : This method is used to draw the meaning of any type of visual, written, or symbolic language in relation to a social, political, cultural, or historical context. It is used to understand how context can affect how language is carried out and understood. For example, if you are doing research on power dynamics, using discourse analysis to analyze a conversation between a janitor and a CEO and draw conclusions about their responses based on the context and your research questions is a great use case for this technique. That said, like all methods in this section, discourse analytics is time-consuming as the data needs to be analyzed until no new insights emerge.  
  • Grounded theory analysis: The grounded theory approach aims to create or discover a new theory by carefully testing and evaluating the data available. Unlike all other qualitative approaches on this list, grounded theory helps extract conclusions and hypotheses from the data instead of going into the analysis with a defined hypothesis. This method is very popular amongst researchers, analysts, and marketers as the results are completely data-backed, providing a factual explanation of any scenario. It is often used when researching a completely new topic or one with little existing knowledge, as it gives space to start from the ground up. (A minimal keyword-based coding sketch follows this list.)
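
Here is the promised keyword-based coding sketch in Python, loosely mirroring the restaurant-review thematic analysis example; the themes, keywords, and reviews are all invented, and real thematic analysis involves far more human judgment than simple keyword matching.

```python
# Toy thematic coding: tag each invented review with the themes whose
# keywords it mentions.
themes = {
    "fresh food": ["fresh", "crisp"],
    "cold food": ["cold", "lukewarm"],
    "small portions": ["small", "tiny", "portion"],
    "friendly staff": ["friendly", "welcoming", "staff"],
}

reviews = [
    "The salad was fresh but the soup arrived cold",
    "Tiny portions for the price, although the staff were friendly",
]

for review in reviews:
    text = review.lower()
    found = [t for t, words in themes.items() if any(w in text for w in words)]
    print(found, "<-", review)
```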

Quantitative Data Interpretation

If quantitative data interpretation could be summed up in one word (and it really can’t), that word would be “numerical.” There are few certainties when it comes to data analysis, but you can be sure that if the research you are engaging in has no numbers involved, it is not quantitative research, as this analysis refers to a set of processes by which numerical data is analyzed. More often than not, it involves the use of statistical modeling such as standard deviation, mean, and median. Let’s quickly review the most common statistical terms:

  • Mean: A mean represents a numerical average for a set of responses. When dealing with a data set (or multiple data sets), a mean will represent the central value of a specific set of numbers. It is the sum of the values divided by the number of values within the data set. Other terms that can be used to describe the concept are arithmetic mean, average, and mathematical expectation.
  • Standard deviation: This is another statistical term commonly used in quantitative analysis. Standard deviation reveals the distribution of the responses around the mean. It describes the degree of consistency within the responses; together with the mean, it provides insight into data sets.
  • Frequency distribution: This is a measurement gauging the rate of a response appearance within a data set. When using a survey, for example, frequency distribution can determine the number of times a specific ordinal scale response appears (i.e., agree, strongly agree, disagree, etc.). Frequency distribution is extremely useful in determining the degree of consensus among data points (a short NumPy sketch of these terms follows this list).
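
The short sketch below computes all three terms for a handful of invented survey responses.

```python
# Mean, standard deviation, and frequency distribution on invented scores.
import numpy as np
from collections import Counter

responses = np.array([4, 5, 3, 4, 4, 5, 2, 4, 5, 3])

print("mean:", responses.mean())                                 # central value
print("standard deviation:", responses.std(ddof=1).round(2))     # spread around the mean
print("frequency distribution:", Counter(responses.tolist()))    # counts per response
```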

Typically, quantitative data is measured by visually presenting correlation tests between two or more variables of significance. Different processes can be used together or separately, and comparisons can be made to ultimately arrive at a conclusion. Other signature interpretation processes of quantitative data include:

  • Regression analysis: Essentially, it uses historical data to understand the relationship between a dependent variable and one or more independent variables. Knowing which variables are related and how they developed in the past allows you to anticipate possible outcomes and make better decisions going forward. For example, if you want to predict your sales for next month, you can use regression to understand what factors will affect them, such as products on sale and the launch of a new campaign, among many others. 
  • Cohort analysis: This method identifies groups of users who share common characteristics during a particular time period. In a business scenario, cohort analysis is commonly used to understand customer behaviors. For example, a cohort could be all users who have signed up for a free trial on a given day. An analysis would be carried out to see how these users behave, what actions they carry out, and how their behavior differs from other user groups.
  • Predictive analysis: As its name suggests, the predictive method aims to predict future developments by analyzing historical and current data. Powered by technologies such as artificial intelligence and machine learning, predictive analytics practices enable businesses to identify patterns or potential issues and plan informed strategies in advance.
  • Prescriptive analysis: Also powered by predictions, the prescriptive method uses techniques such as graph analysis, complex event processing, and neural networks, among others, to try to unravel the effect that future decisions will have in order to adjust them before they are actually made. This helps businesses to develop responsive, practical business strategies.
  • Conjoint analysis: Typically applied to survey analysis, the conjoint approach is used to analyze how individuals value different attributes of a product or service. This helps researchers and businesses to define pricing, product features, packaging, and many other attributes. A common use is menu-based conjoint analysis, in which individuals are given a “menu” of options from which they can build their ideal concept or product. Through this, analysts can understand which attributes they would pick above others and drive conclusions.
  • Cluster analysis: Last but not least, cluster analysis is a method used to group objects into categories. Since there is no target variable when using cluster analysis, it is a useful method to find hidden trends and patterns in the data. In a business context, clustering is used for audience segmentation to create targeted experiences. In market research, it is often used to identify age groups, geographical information, and earnings, among others. (A minimal cohort-analysis sketch follows this list.)
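
Following up on the cohort analysis method above, here is a minimal pandas sketch; the signup months and conversion flags are invented, and a real cohort study would track behavior over many periods.

```python
# Minimal cohort-analysis sketch: group made-up users by signup month and
# compare how many converted in each cohort.
import pandas as pd

users = pd.DataFrame({
    "signup_month": ["2024-01", "2024-01", "2024-01", "2024-02", "2024-02", "2024-03"],
    "converted":    [1,          0,         1,         0,         1,         0],
})

cohorts = users.groupby("signup_month")["converted"].agg(["count", "mean"])
cohorts = cohorts.rename(columns={"count": "users", "mean": "conversion_rate"})
print(cohorts)
```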

Now that we have seen how to interpret data, let's move on and ask ourselves some questions: What are some of the benefits of data interpretation? Why do all industries engage in data research and analysis? These are basic questions, but they often don’t receive adequate attention.


Why Data Interpretation Is Important


The purpose of collection and interpretation is to acquire useful and usable information and to make the most informed decisions possible. From businesses to newlyweds researching their first home, data collection and interpretation provide limitless benefits for a wide range of institutions and individuals.

Data analysis and interpretation, regardless of the method and qualitative/quantitative status, may include the following characteristics:

  • Data identification and explanation
  • Comparing and contrasting data
  • Identification of data outliers
  • Future predictions

Data analysis and interpretation, in the end, help improve processes and identify problems. It is difficult to grow and make dependable improvements without, at the very least, minimal data collection and interpretation. What is the keyword? Dependable. Vague ideas regarding performance enhancement exist within all institutions and industries. Yet, without proper research and analysis, an idea is likely to remain in a stagnant state forever (i.e., minimal growth). So… what are a few of the business benefits of digital age data analysis and interpretation? Let’s take a look!

1) Informed decision-making: A decision is only as good as the knowledge that formed it. Informed data decision-making can potentially set industry leaders apart from the rest of the market pack. Studies have shown that companies in the top third of their industries are, on average, 5% more productive and 6% more profitable when implementing informed data decision-making processes. Most decisive actions will arise only after a problem has been identified or a goal defined. Data analysis should include identification, thesis development, and data collection, followed by data communication.

If institutions only follow that simple order, one that we should all be familiar with from grade school science fairs, then they will be able to solve issues as they emerge in real-time. Informed decision-making has a tendency to be cyclical. This means there is really no end, and eventually, new questions and conditions arise within the process that need to be studied further. The monitoring of data results will inevitably return the process to the start with new data and insights.

2) Anticipating needs with trends identification: data insights provide knowledge, and knowledge is power. The insights obtained from market and consumer data analyses have the ability to set trends for peers within similar market segments. A perfect example of how data analytics can impact trend prediction is evidenced in the music identification application Shazam. The application allows users to upload an audio clip of a song they like but can’t seem to identify. Users make 15 million song identifications a day. With this data, Shazam has been instrumental in predicting future popular artists.

When industry trends are identified, they can then serve a greater industry purpose. For example, the insights from Shazam’s monitoring benefit not only Shazam in understanding how to meet consumer needs but also grant music executives and record label companies an insight into the pop-culture scene of the day. Data gathering and interpretation processes can allow for industry-wide climate prediction and result in greater revenue streams across the market. For this reason, all institutions should follow the basic data cycle of collection, interpretation, decision-making, and monitoring.

3) Cost efficiency: Proper implementation of analytics processes can provide businesses with profound cost advantages within their industries. A recent data study performed by Deloitte vividly demonstrates this in finding that data analysis ROI is driven by efficient cost reductions. Often, this benefit is overlooked because making money is typically viewed as “sexier” than saving money. Yet, sound data analyses have the ability to alert management to cost-reduction opportunities without any significant exertion of effort on the part of human capital.

A great example of the potential for cost efficiency through data analysis is Intel. Prior to 2012, Intel would conduct over 19,000 manufacturing function tests on their chips before they could be deemed acceptable for release. To cut costs and reduce test time, Intel implemented predictive data analyses. By using historical and current data, Intel now avoids testing each chip 19,000 times by focusing on specific and individual chip tests. After its implementation in 2012, Intel saved over $3 million in manufacturing costs. Cost reduction may not be as “sexy” as data profit, but as Intel proves, it is a benefit of data analysis that should not be neglected.

4) Clear foresight: companies that collect and analyze their data gain better knowledge about themselves, their processes, and their performance. They can identify performance challenges when they arise and take action to overcome them. Data interpretation through visual representations lets them process their findings faster and make better-informed decisions on the company's future.

Key Data Interpretation Skills You Should Have

Just like any other process, data interpretation and analysis require researchers or analysts to have some key skills to be able to perform successfully. It is not enough just to apply some methods and tools to the data; the person who is managing it needs to be objective and have a data-driven mind, among other skills. 

It is a common misconception to think that the required skills are mostly number-related. While data interpretation is heavily analytically driven, it also requires communication and narrative skills, as the results of the analysis need to be presented in a way that is easy to understand for all types of audiences. 

Luckily, with the rise of self-service tools and AI-driven technologies, data interpretation is no longer restricted to analysts only. However, the topic still remains a big challenge for businesses that make big investments in data and tools to support it, as the interpretation skills required are still lacking. It is worthless to put massive amounts of money into extracting information if you are not going to be able to interpret what that information is telling you. For that reason, below we list the top 5 data interpretation skills your employees or researchers should have to extract the maximum potential from the data.

  • Data Literacy: The first and most important skill to have is data literacy. This means having the ability to understand, work, and communicate with data. It involves knowing the types of data sources, methods, and ethical implications of using them. In research, this skill is often a given. However, in a business context, there might be many employees who are not comfortable with data. The issue is that the interpretation of data cannot be the sole responsibility of the data team, as that is not sustainable in the long run. Experts advise business leaders to carefully assess the literacy level across their workforce and implement training instances to ensure everyone can interpret their data.
  • Data Tools: The data interpretation and analysis process involves using various tools to collect, clean, store, and analyze the data. The complexity of the tools varies depending on the type of data and the analysis goals. Going from simple ones like Excel to more complex ones like databases, such as SQL, or programming languages, such as R or Python. It also involves visual analytics tools to bring the data to life through the use of graphs and charts. Managing these tools is a fundamental skill as they make the process faster and more efficient. As mentioned before, most modern solutions are now self-service, enabling less technical users to use them without problem.
  • Critical Thinking: Another very important skill is to have critical thinking. Data hides a range of conclusions, trends, and patterns that must be discovered. It is not just about comparing numbers; it is about putting a story together based on multiple factors that will lead to a conclusion. Therefore, having the ability to look further from what is right in front of you is an invaluable skill for data interpretation. 
  • Data Ethics: In the information age, being aware of the legal and ethical responsibilities that come with the use of data is of utmost importance. In short, data ethics involves respecting the privacy and confidentiality of data subjects, as well as ensuring accuracy and transparency for data usage. It requires the analyzer or researcher to be completely objective with its interpretation to avoid any biases or discrimination. Many countries have already implemented regulations regarding the use of data, including the GDPR or the ACM Code Of Ethics. Awareness of these regulations and responsibilities is a fundamental skill that anyone working in data interpretation should have. 
  • Domain Knowledge: Another skill that is considered important when interpreting data is to have domain knowledge. As mentioned before, data hides valuable insights that need to be uncovered. To do so, the analyst needs to know about the industry or domain from which the information is coming and use that knowledge to explore it and put it into a broader context. This is especially valuable in a business context, where most departments are now analyzing data independently with the help of a live dashboard instead of relying on the IT department, which can often overlook some aspects due to a lack of expertise in the topic. 

Common Data Analysis And Interpretation Problems


The oft-repeated mantra of those who fear data advancements in the digital age is “big data equals big trouble.” While that statement is not accurate, it is safe to say that certain data interpretation problems or “pitfalls” exist and can occur when analyzing data, especially at the speed of thought. Let’s identify some of the most common data misinterpretation risks and shed some light on how they can be avoided:

1) Correlation mistaken for causation: our first misinterpretation of data refers to the tendency of data analysts to mix the cause of a phenomenon with correlation. It is the assumption that because two actions occurred together, one caused the other. This is inaccurate, as actions can occur together, absent a cause-and-effect relationship.

  • Digital age example: assuming that increased revenue results from increased social media followers… there might be a definitive correlation between the two, especially with today’s multi-channel purchasing experiences. But that does not mean an increase in followers is the direct cause of increased revenue. There could be both a common cause and an indirect causality.
  • Remedy: attempt to eliminate the variable you believe to be causing the phenomenon.

2) Confirmation bias: our second problem is data interpretation bias. It occurs when you have a theory or hypothesis in mind but are intent on only discovering data patterns that support it while rejecting those that do not.

  • Digital age example: your boss asks you to analyze the success of a recent multi-platform social media marketing campaign. While analyzing the potential data variables from the campaign (one that you ran and believe performed well), you see that the share rate for Facebook posts was great, while the share rate for Twitter Tweets was not. Using only Facebook posts to prove your hypothesis that the campaign was successful would be a perfect manifestation of confirmation bias.
  • Remedy: as this pitfall is often based on subjective desires, one remedy would be to analyze data with a team of objective individuals. If this is not possible, another solution is to resist the urge to make a conclusion before data exploration has been completed. Remember to always try to disprove a hypothesis, not prove it.

3) Irrelevant data: the third data misinterpretation pitfall is especially important in the digital age. As large data is no longer centrally stored and as it continues to be analyzed at the speed of thought, it is inevitable that analysts will focus on data that is irrelevant to the problem they are trying to correct.

  • Digital age example: in attempting to gauge the success of an email lead generation campaign, you notice that the number of homepage views directly resulting from the campaign increased, but the number of monthly newsletter subscribers did not. Based on the number of homepage views, you decide the campaign was a success when really it generated zero leads.
  • Remedy: proactively and clearly frame any data analysis variables and KPIs prior to engaging in a data review. If the metric you use to measure the success of a lead generation campaign is newsletter subscribers, there is no need to review the number of homepage visits. Be sure to focus on the data variable that answers your question or solves your problem and not on irrelevant data.

4) Truncating an axis: When creating a graph to start interpreting the results of your analysis, it is important to keep the axes truthful and avoid generating misleading visualizations. Starting the axes at a value that doesn’t portray the actual truth about the data can lead to false conclusions.

  • Digital age example: In the image below, we can see a graph from Fox News in which the Y-axis starts at 34%, making it seem that the difference between 35% and 39.6% is far larger than it actually is. This could lead to a misinterpretation of the tax rate changes.

[Figure: Fox News chart with the Y-axis truncated at 34%. Source: www.venngage.com]

  • Remedy: Be careful with how your data is visualized. Be respectful and realistic with axes to avoid misinterpretation of your data. See below how the Fox News chart looks when using the correct axis values; a minimal plotting sketch of the same comparison follows the figure. This chart was created with datapine's modern online data visualization tool.

[Figure: the same chart with the Y-axis starting at zero]
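
A minimal matplotlib sketch of the same comparison, using the 35% and 39.6% figures from the example above (labels and layout are illustrative):

```python
import matplotlib.pyplot as plt

labels = ["Now", "Jan 1, 2013"]
top_tax_rate = [35.0, 39.6]  # the two values from the example above

fig, (ax_truncated, ax_honest) = plt.subplots(1, 2, figsize=(8, 3))

# Misleading: the Y-axis starts at 34, exaggerating a 4.6-point difference.
ax_truncated.bar(labels, top_tax_rate)
ax_truncated.set_ylim(34, 42)
ax_truncated.set_title("Truncated axis (misleading)")

# Honest: the Y-axis starts at zero, so the difference appears in proportion.
ax_honest.bar(labels, top_tax_rate)
ax_honest.set_ylim(0, 42)
ax_honest.set_title("Axis starting at zero")

plt.tight_layout()
plt.show()
```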

5) (Small) sample size: Another common problem is using a small sample size. Logically, the bigger the sample size, the more accurate and reliable the results. However, this also depends on the size of the effect of the study. For example, the sample size in a survey about the quality of education will not be the same as for one about people doing outdoor sports in a specific area. 

  • Digital age example: Imagine you ask 30 people a question, and 29 answer “yes,” roughly 97% of the total. Now imagine you ask the same question to 1,000 people, and 970 of them answer “yes,” which is again about 97%. While these percentages might look the same, they certainly do not mean the same thing, as a 30-person sample is too small to establish a trustworthy conclusion.
  • Remedy: Researchers say that in order to determine the correct sample size and get truthful, meaningful results, it is necessary to define a margin of error that represents the maximum amount they want the results to deviate from the statistical mean. Paired with this, they need to define a confidence level, usually between 90% and 99%. With these two values in hand, researchers can calculate an accurate sample size for their study, as in the sketch below.
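
As a rough illustration, here is a minimal Python sketch of that calculation for a survey estimating a proportion, assuming scipy is available; the confidence levels and margins of error below are illustrative defaults, not recommendations:

```python
import math
from scipy import stats


def required_sample_size(confidence_level=0.95, margin_of_error=0.05, proportion=0.5):
    """Sample size for estimating a proportion: n = z^2 * p * (1 - p) / e^2."""
    z = stats.norm.ppf(1 - (1 - confidence_level) / 2)  # two-sided critical value
    return math.ceil(z ** 2 * proportion * (1 - proportion) / margin_of_error ** 2)


print(required_sample_size(0.95, 0.05))  # roughly 385 respondents
print(required_sample_size(0.99, 0.03))  # a tighter requirement needs a larger sample
```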

6) Reliability, subjectivity, and generalizability : When performing qualitative analysis, researchers must consider practical and theoretical limitations when interpreting the data. In some cases, this type of research can be considered unreliable because of uncontrolled factors that might or might not affect the results. This is paired with the fact that the researcher has a primary role in the interpretation process, meaning he or she decides what is relevant and what is not, and as we know, interpretations can be very subjective.

Generalizability is also an issue that researchers face when dealing with qualitative analysis. As mentioned in the point about having a small sample size, it is difficult to draw conclusions that are 100% representative because the results might be biased or unrepresentative of a wider population. 

While these factors are mostly present in qualitative research, they can also affect the quantitative analysis. For example, when choosing which KPIs to portray and how to portray them, analysts can also be biased and represent them in a way that benefits their analysis.

  • Digital age example: Biased questions in a survey are a great example of reliability and subjectivity issues. Imagine you are sending a survey to your clients to see how satisfied they are with your customer service with this question: “How amazing was your experience with our customer service team?”. Here, we can see that this question clearly influences the response of the individual by putting the word “amazing” on it. 
  • Remedy: A solution to avoid these issues is to keep your research honest and neutral. Keep the wording of the questions as objective as possible. For example: “On a scale of 1-10, how satisfied were you with our customer service team?”. This does not lead the respondent to any specific answer, meaning the results of your survey will be reliable. 

Data Interpretation Best Practices & Tips


Data analysis and interpretation are critical to developing sound conclusions and making better-informed decisions. As we have seen with this article, there is an art and science to the interpretation of data. To help you with this purpose, we will list a few relevant techniques, methods, and tricks you can implement for a successful data management process. 

As mentioned at the beginning of this post, the first step to interpreting data successfully is to identify the type of analysis you will perform and apply the methods accordingly. Clearly differentiate between qualitative analysis (observe, document, and interview; notice, collect, and think about things) and quantitative analysis (research based on large amounts of numerical data to be analyzed through various statistical methods).

1) Ask the right data interpretation questions

The first data interpretation technique is to define a clear baseline for your work. This can be done by answering some critical questions that will serve as a useful guideline to start. Some of them include: what are the goals and objectives of my analysis? What type of data interpretation method will I use? Who will use this data in the future? And most importantly, what general question am I trying to answer?

Once all this information has been defined, you will be ready for the next step: collecting your data. 

2) Collect and assimilate your data

Now that a clear baseline has been established, it is time to collect the information you will use. Always remember that your methods for data collection will vary depending on what type of analysis method you use, which can be qualitative or quantitative. Based on that, relying on professional online data analysis tools to facilitate the process is a great practice in this regard, as manually collecting and assessing raw data is not only very time-consuming and expensive but is also at risk of errors and subjectivity. 

Once your data is collected, you need to carefully assess it to understand if the quality is appropriate to be used during a study. This means, is the sample size big enough? Were the procedures used to collect the data implemented correctly? Is the date range from the data correct? If coming from an external source, is it a trusted and objective one? 

With all the needed information in hand, you are ready to start the interpretation process, but first, you need to visualize your data. 

3) Use the right data visualization type 

Data visualizations such as business graphs , charts, and tables are fundamental to successfully interpreting data. This is because data visualization via interactive charts and graphs makes the information more understandable and accessible. As you might be aware, there are different types of visualizations you can use, but not all of them are suitable for any analysis purpose. Using the wrong graph can lead to misinterpretation of your data, so it’s very important to carefully pick the right visual for it. Let’s look at some use cases of common data visualizations. 

  • Bar chart: One of the most used chart types, the bar chart uses rectangular bars to show the relationship between 2 or more variables. There are different types of bar charts for different interpretations, including the horizontal bar chart, column bar chart, and stacked bar chart. 
  • Line chart: Most commonly used to show trends, acceleration or decelerations, and volatility, the line chart aims to show how data changes over a period of time, for example, sales over a year. A few tips to keep this chart ready for interpretation are not using many variables that can overcrowd the graph and keeping your axis scale close to the highest data point to avoid making the information hard to read. 
  • Pie chart: Although it doesn’t do a lot in terms of analysis due to its simple nature, pie charts are widely used to show the proportional composition of a variable. Visually speaking, showing a percentage in a bar chart is more complicated than showing it in a pie chart. However, this also depends on the number of variables you are comparing. If your pie chart needs to be divided into 10 portions, then it is better to use a bar chart instead.
  • Tables: While they are not a specific type of chart, tables are widely used when interpreting data. Tables are especially useful when you want to portray data in its raw format. They give you the freedom to easily look up or compare individual values while also displaying grand totals. 

With the use of data visualizations becoming more and more critical for businesses’ analytical success, many tools have emerged to help users visualize their data in a cohesive and interactive way. One of the most popular ones is the use of BI dashboards . These visual tools provide a centralized view of various graphs and charts that paint a bigger picture of a topic. We will discuss the power of dashboards for an efficient data interpretation practice in the next portion of this post. If you want to learn more about different types of graphs and charts , take a look at our complete guide on the topic. 

4) Start interpreting 

After the tedious preparation part, you can start extracting conclusions from your data. As mentioned many times throughout the post, the way you decide to interpret the data will solely depend on the methods you initially decided to use. If you had initial research questions or hypotheses, then you should look for ways to prove their validity. If you are going into the data with no defined hypothesis, then start looking for relationships and patterns that will allow you to extract valuable conclusions from the information. 

During the process of interpretation, stay curious and creative, dig into the data, and determine if there are any other critical questions that should be asked. If any new questions arise, you need to assess if you have the necessary information to answer them. Being able to identify if you need to dedicate more time and resources to the research is a very important step. No matter if you are studying customer behaviors or a new cancer treatment, the findings from your analysis may dictate important decisions in the future. Therefore, taking the time to really assess the information is key. For that purpose, data interpretation software proves to be very useful.

5) Keep your interpretation objective

As mentioned above, objectivity is one of the most important data interpretation skills but also one of the hardest. Being the person closest to the investigation, it is easy to become subjective when looking for answers in the data. A good way to stay objective is to show the information related to the study to other people, for example, research partners or even the people who will use your findings once they are done. This can help avoid confirmation bias and any reliability issues with your interpretation. 

Remember, using a visualization tool such as a modern dashboard will make the interpretation process way easier and more efficient as the data can be navigated and manipulated in an easy and organized way. And not just that, using a dashboard tool to present your findings to a specific audience will make the information easier to understand and the presentation way more engaging thanks to the visual nature of these tools. 

6) Mark your findings and draw conclusions

Findings are the observations you extracted from your data. They are the facts that will help you drive deeper conclusions about your research. For example, findings can be trends and patterns you found during your interpretation process. To put your findings into perspective, you can compare them with other resources that use similar methods and use them as benchmarks.

Reflect on your own thinking and reasoning and be aware of the many pitfalls data analysis and interpretation carry—correlation versus causation, subjective bias, false information, inaccurate data, etc. Once you are comfortable with interpreting the data, you will be ready to develop conclusions, see if your initial questions were answered, and suggest recommendations based on them.

Interpretation of Data: Using Dashboards to Bridge the Gap

As we have seen, quantitative and qualitative methods are distinct types of data interpretation and analysis. Both offer a varying degree of return on investment (ROI) regarding data investigation, testing, and decision-making. But how do you mix the two and prevent a data disconnect? The answer is professional data dashboards. 

For a few years now, dashboards have become invaluable tools to visualize and interpret data. These tools offer a centralized and interactive view of data and provide the perfect environment for exploration and extracting valuable conclusions. They bridge the quantitative and qualitative information gap by unifying all the data in one place with the help of stunning visuals. 

Not only that, but these powerful tools offer a large list of benefits, and we will discuss some of them below. 

1) Connecting and blending data. With today’s pace of innovation, it is no longer feasible (nor desirable) to have bulk data centrally located. As businesses continue to globalize and borders continue to dissolve, it will become increasingly important for businesses to possess the capability to run diverse data analyses absent the limitations of location. Data dashboards decentralize data without compromising on the necessary speed of thought while blending both quantitative and qualitative data. Whether you want to measure customer trends or organizational performance, you now have the capability to do both without the need for a singular selection.

2) Mobile Data. Related to the notion of “connected and blended data” is that of mobile data. In today’s digital world, employees are spending less time at their desks and simultaneously increasing production. This is made possible because mobile solutions for analytical tools are no longer standalone. Today, mobile analysis applications seamlessly integrate with everyday business tools. In turn, both quantitative and qualitative data are now available on-demand where they’re needed, when they’re needed, and how they’re needed via interactive online dashboards .

3) Visualization. Data dashboards merge the data gap between qualitative and quantitative data interpretation methods through the science of visualization. Dashboard solutions come “out of the box” and are well-equipped to create easy-to-understand data demonstrations. Modern online data visualization tools provide a variety of color and filter patterns, encourage user interaction, and are engineered to help enhance future trend predictability. All of these visual characteristics make for an easy transition among data methods – you only need to find the right types of data visualization to tell your data story the best way possible.

4) Collaboration. Whether in a business environment or a research project, collaboration is key in data interpretation and analysis. Dashboards are online tools that can be easily shared through a password-protected URL or automated email. Through them, users can collaborate and communicate through the data in an efficient way, eliminating the need for countless file versions with lost updates. Tools such as datapine offer real-time updates, meaning your dashboards will update on their own as soon as new information is available.

Examples Of Data Interpretation In Business

To give you an idea of how a dashboard can fulfill the need to bridge quantitative and qualitative analysis and help in understanding how to interpret data in research thanks to visualization, below, we will discuss three valuable examples to put their value into perspective.

1. Customer Satisfaction Dashboard 

This market research dashboard brings together both qualitative and quantitative data that are knowledgeably analyzed and visualized in a meaningful way that everyone can understand, thus empowering any viewer to interpret it. Let’s explore it below. 

[Figure: customer satisfaction dashboard example]

The value of this template lies in its highly visual nature. As mentioned earlier, visuals make the interpretation process easier and more efficient. Having critical pieces of data represented with colorful and interactive icons and graphs makes it possible to uncover insights at a glance. For example, the green, yellow, and red colors on the charts for the NPS and the customer effort score allow us to conclude at a glance that most respondents are satisfied with this brand. The line chart below them deepens this conclusion, as we can see both metrics developed positively over the past 6 months.

The bottom part of the template provides visually stunning representations of different satisfaction scores for quality, pricing, design, and service. By looking at these, we can conclude that, overall, customers are satisfied with this company in most areas. 

2. Brand Analysis Dashboard

Next, in our list of data interpretation examples, we have a template that shows the answers to a survey on awareness for Brand D. The sample size is listed on top to get a perspective of the data, which is represented using interactive charts and graphs. 

[Figure: brand awareness analysis dashboard example]

When interpreting information, context is key to understanding it correctly. For that reason, the dashboard starts by offering insights into the demographics of the surveyed audience. In general, we can see ages and gender are diverse. Therefore, we can conclude these brands are not targeting customers from a specified demographic, an important aspect to put the surveyed answers into perspective. 

Looking at the awareness portion, we can see that brand B is the most popular one, with brand D coming second on both questions. This means brand D is not doing badly, but there is still room for improvement compared to brand B. To see where brand D could improve, the researcher could go to the bottom part of the dashboard and consult the answers for branding themes and celebrity analysis. These are important as they give clear insight into which people and messages the audience associates with brand D, and they represent an opportunity to build on these topics and achieve further growth.

3. Product Innovation Dashboard 

Our third and last dashboard example shows the answers to a survey on product innovation for a technology company. Just like the previous templates, the interactive and visual nature of the dashboard makes it the perfect tool to interpret data efficiently and effectively. 

[Figure: product innovation dashboard with purchase intention, usage intention, and willingness-to-pay metrics]

Starting from right to left, we first get a list of the top 5 products by purchase intention. This information lets us understand if the product being evaluated resembles what the audience already intends to purchase. It is a great starting point to see how customers would respond to the new product. This information can be complemented with other key metrics displayed in the dashboard. For example, the usage and purchase intention track how the market would receive the product and if they would purchase it, respectively. Interpreting these values as positive or negative will depend on the company and its expectations regarding the survey. 

Complementing these metrics, we have the willingness to pay, arguably one of the most important metrics for defining pricing strategies. Here, we can see that most respondents think the suggested price offers good value for money. Therefore, we can interpret that the product would sell at that price.

To see more data analysis and interpretation examples for different industries and functions, visit our library of business dashboards .

To Conclude…

As we reach the end of this insightful post about data interpretation and analysis, we hope you have a clear understanding of the topic. We've covered the definition and given some examples and methods to perform a successful interpretation process.

The importance of data interpretation is undeniable. Dashboards not only bridge the information gap between traditional data interpretation methods and technology, but they can help remedy and prevent the major pitfalls of the process. As a digital age solution, they combine the best of the past and the present to allow for informed decision-making with maximum data interpretation ROI.


What is Data Interpretation? + [Types, Method & Tools]

By busayo.longe


Data interpretation and analysis are fast becoming more valuable with the prominence of digital communication, which is responsible for a large amount of data being churned out daily. According to the WEF’s “A Day in Data” Report , the accumulated digital universe of data is set to reach 44 ZB (Zettabyte) in 2020.

Based on this report, it is clear that for any business to be successful in today’s digital world, the founders need to know or employ people who know how to analyze complex data, produce actionable insights and adapt to new market trends. Also, all these need to be done in milliseconds.

So, what is data interpretation and analysis, and how do you leverage this knowledge to help your business or research? All this and more will be revealed in this article.

What is Data Interpretation?

Data interpretation is the process of reviewing data through some predefined processes which will help assign some meaning to the data and arrive at a relevant conclusion. It involves taking the result of data analysis, making inferences on the relations studied, and using them to conclude.

Therefore, before one can talk about interpreting data, they need to be analyzed first. What then, is data analysis?

Data analysis is the process of ordering, categorizing, manipulating, and summarizing data to obtain answers to research questions. It is usually the first step taken towards data interpretation.

It is evident that the interpretation of data is very important, and as such needs to be done properly. Therefore, researchers have identified some data interpretation methods to aid this process.

What are Data Interpretation Methods?

Data interpretation methods are how analysts help people make sense of numerical data that has been collected, analyzed and presented. Data, when collected in raw form, may be difficult for the layman to understand, which is why analysts need to break down the information gathered so that others can make sense of it.

For example, when founders are pitching to potential investors, they must interpret data (e.g. market size, growth rate, etc.) for better understanding. There are 2 main methods in which this can be done, namely; quantitative methods and qualitative methods . 

Qualitative Data Interpretation Method 

The qualitative data interpretation method is used to analyze qualitative data, which is also known as categorical data . This method uses texts, rather than numbers or patterns to describe data.

Qualitative data is usually gathered using a wide variety of person-to-person techniques , which may be difficult to analyze compared to the quantitative research method .

Unlike quantitative data, which can be analyzed directly after it has been collected and sorted, qualitative data needs to first be coded into numbers before it can be analyzed. This is because texts are usually cumbersome and would take more time and produce more errors if analyzed in their original state. The coding done by the analyst should also be documented so that it can be reused and checked by others.

There are 2 main types of qualitative data, namely: nominal and ordinal data. These 2 data types are both interpreted using the same method, but ordinal data is somewhat easier to interpret than nominal data.

In most cases, ordinal data is usually labeled with numbers during the process of data collection, and coding may not be required. This is different from nominal data that still needs to be coded for proper interpretation.
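
To make the coding step concrete, here is a minimal pandas sketch; the column names and categories are hypothetical survey answers, not a prescribed scheme:

```python
import pandas as pd

responses = pd.DataFrame({
    # Ordinal data: the categories have a natural order, so they map to numbers directly.
    "satisfaction": ["Low", "High", "Medium", "High", "Low"],
    # Nominal data: no inherent order, so it must be coded before analysis.
    "favourite_channel": ["Email", "Social", "Email", "Search", "Social"],
})

# Ordinal coding: preserve the order explicitly.
satisfaction_scale = {"Low": 1, "Medium": 2, "High": 3}
responses["satisfaction_code"] = responses["satisfaction"].map(satisfaction_scale)

# Nominal coding: one-hot (dummy) encoding, since the labels have no ranking.
coded = pd.get_dummies(responses, columns=["favourite_channel"])
print(coded)
```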

Quantitative Data Interpretation Method

The quantitative data interpretation method is used to analyze quantitative data, which is also known as numerical data . This data type contains numbers and is therefore analyzed with the use of numbers and not texts.

Quantitative data are of 2 main types, namely; discrete and continuous data. Continuous data is further divided into interval data and ratio data, with all the data types being numeric .

Due to its natural existence as numbers, analysts do not need to employ the coding technique on quantitative data before it is analyzed. The process of analyzing quantitative data involves descriptive statistical techniques such as the mean, median, and standard deviation.

Some of the statistical methods used in analyzing quantitative data are highlighted below:

  • Mean

The mean is a numerical average for a set of data and is calculated by dividing the sum of the values by the number of values in a dataset. It is used to get an estimate of a large population from the dataset obtained from a sample of the population.

For example, online job boards in the US use the data collected from a group of registered users to estimate the salary paid to people of a particular profession. The estimate is usually made using the average salary submitted on their platform for each profession.

  • Standard deviation

This technique is used to measure how well the responses align with or deviate from the mean. It describes the degree of consistency within the responses; together with the mean, it provides insight into the data set.

In the job board example highlighted above, if the average salary of writers in the US is $20,000 per annum and the standard deviation is $5,000, we can deduce that individual salaries are spread quite far from the average. This raises further questions, such as why the salaries deviate from each other that much.

Digging into this question, we might find that the sample contains people with few years of experience, which translates to a lower salary, and people with many years of experience, translating to a higher salary, but few people with mid-level experience.

  • Frequency distribution

This technique is used to assess the demography of the respondents or the number of times a particular response appears in research. It is especially useful for determining how responses are distributed across categories and where data points overlap. A short sketch combining the mean, standard deviation, and a frequency distribution follows.
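
Here is a minimal pandas sketch tying these three measures together, using a hypothetical sample of annual salaries (the figures are illustrative, not real job-board data):

```python
import pandas as pd

# Hypothetical annual salaries (USD) submitted to a job board.
salaries = pd.Series([15_000, 16_000, 17_000, 20_000, 20_000, 23_000, 24_000, 25_000])

print(salaries.mean())  # the numerical average of the sample
print(salaries.std())   # how far individual salaries spread around that average

# Frequency distribution: how many respondents fall into each salary band.
bands = pd.cut(salaries, bins=[10_000, 18_000, 22_000, 26_000])
print(bands.value_counts().sort_index())
```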

Some other interpretation processes of quantitative data include:

  • Regression analysis
  • Cohort analysis
  • Predictive and prescriptive analysis

Tips for Collecting Accurate Data for Interpretation  

  • Identify the Required Data Type

 Researchers need to identify the type of data required for particular research. Is it nominal, ordinal, interval, or ratio data ? 

The key to collecting the required data to conduct research is to properly understand the research question. If the researcher can understand the research question, then he can identify the kind of data that is required to carry out the research.

For example, when collecting customer feedback, the best data type to use is the ordinal data type. Ordinal data can be used to assess a customer’s feelings about a brand and is also easy to interpret.

  • Avoid Biases

There are different kinds of biases a researcher might encounter when collecting data for analysis. Although biases sometimes come from the researcher, most of the biases encountered during the data collection process are caused by the respondent.

There are 2 main respondent-driven biases, namely: response bias and non-response bias. Researchers may not be able to eliminate these biases, but there are ways in which they can be avoided and reduced to a minimum.

Response biases are biases that are caused by respondents intentionally giving wrong answers to responses, while non-response bias occurs when the respondents don’t give answers to questions at all. Biases are capable of affecting the process of data interpretation .

  • Use Close Ended Surveys

Although open-ended surveys are capable of giving detailed information about the questions and allowing respondents to fully express themselves, they are not the best kind of survey for data interpretation, as they require a lot of coding before the data can be analyzed.

Close-ended surveys , on the other hand, restrict the respondents’ answers to some predefined options, while simultaneously eliminating irrelevant data.  This way, researchers can easily analyze and interpret data.

However, close-ended surveys may not be applicable in some cases, like when collecting respondents’ personal information like name, credit card details, phone number, etc.

Visualization Techniques in Data Analysis

One of the best practices of data interpretation is the visualization of the dataset. Visualization makes it easy for a layman to understand the data, and also encourages people to view the data, as it provides a visually appealing summary of the data.

There are different techniques of data visualization, some of which are highlighted below.

Bar Graph

Bar graphs are graphs that show the relationship between 2 or more variables using rectangular bars. These rectangular bars can be drawn either vertically or horizontally, but they are mostly drawn vertically.

The graph contains the horizontal axis (x) and the vertical axis (y), with the former representing the independent variable while the latter is the dependent variable. Bar graphs can be grouped into different types, depending on how the rectangular bars are placed on the graph.

Some types of bar graphs are highlighted below:

  • Grouped Bar Graph

The grouped bar graph is used to show more information about variables that are subgroups of the same group with each subgroup bar placed side-by-side like in a histogram.

  • Stacked Bar Graph

A stacked bar graph is a grouped bar graph with its rectangular bars stacked on top of each other rather than placed side by side.

  • Segmented Bar Graph

Segmented bar graphs are stacked bar graphs where each rectangular bar shows 100% of the dependent variable. It is mostly used when there is an intersection between the variable categories.

Advantages of a Bar Graph

  • It helps to summarize large data sets.
  • Estimations of key values can be made at a glance.
  • It can be easily understood.

Disadvantages of a Bar Graph

  • It may require additional explanation.
  • It can be easily manipulated.
  • It doesn’t properly describe the dataset.

Pie Chart

A pie chart is a circular graph used to represent the percentage of occurrence of a variable using sectors. The size of each sector is dependent on the frequency or percentage of the corresponding variables.

There are different variants of the pie charts, but for the sake of this article, we will be restricting ourselves to only 3. For better illustration of these types, let us consider the following examples.

Pie Chart Example : There are a total of 50 students in a class, and out of them, 10 students like Football, 25 students like snooker, and 15 students like Badminton. 

  • Simple Pie Chart

The simple pie chart is the most basic type of pie chart and is used to give a general proportional representation of the data.

  • Doughnut Pie Chart

Doughnut pie is a variant of the pie chart, with a blank center allowing for additional information about the data as a whole to be included.

  • 3D Pie Chart

The 3D pie chart gives the chart a three-dimensional look and is often used for aesthetic purposes. It is usually more difficult to read because the third dimension distorts the perspective of the sectors. A plotting sketch of the class example above, drawn as both a simple pie chart and a doughnut chart, follows.
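
Here is a minimal matplotlib sketch plotting the class example above, once as a simple pie chart and once as a doughnut chart:

```python
import matplotlib.pyplot as plt

sports = ["Football", "Snooker", "Badminton"]
students = [10, 25, 15]  # out of the 50 students in the example above

fig, (ax_pie, ax_doughnut) = plt.subplots(1, 2, figsize=(8, 4))

# Simple pie chart: each sector's size reflects its share of the class.
ax_pie.pie(students, labels=sports, autopct="%1.0f%%")
ax_pie.set_title("Simple pie chart")

# Doughnut chart: the same data with a blank centre (wedge width < 1).
ax_doughnut.pie(students, labels=sports, autopct="%1.0f%%",
                wedgeprops={"width": 0.4})
ax_doughnut.set_title("Doughnut chart")

plt.show()
```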

Advantages of a Pie Chart 

  • It is visually appealing.
  • Best for comparing small data samples.

Disadvantages of a Pie Chart

  • It can only compare small sample sizes.
  • Unhelpful with observing trends over time.

Tables

Tables are used to represent statistical data by placing them in rows and columns. They are one of the most common statistical visualization techniques and are of 2 main types, namely: simple and complex tables.

  • Simple Tables

Simple tables summarize information on a single characteristic and may also be called univariate tables. An example is a simple table showing the number of employed people in a community by age group.

  • Complex Tables

As its name suggests, a complex table summarizes complex information and presents it across two or more intersecting categories. An example is a table showing the number of employed people in a population by both age group and sex.

Advantages of Tables

  • Can contain large data sets
  • Helpful in comparing 2 or more similar things

Disadvantages of Tables

  • They do not give detailed information.
  • May be time-consuming to read.

Line Graphs

Line graphs or charts are a type of graph that displays information as a series of points, usually connected by straight lines. Some of the types of line graphs are highlighted below.

  • Simple Line Graphs

Simple line graphs show the trend of data over time, and may also be used to compare categories. Let us assume we got the sales data of a firm for each quarter and are to visualize it using a line graph to estimate sales for the next year.

  • Line Graphs with Markers

These are similar to simple line graphs but have visible markers highlighting the individual data points.

  • Stacked Line Graphs

Stacked line graphs plot several series on top of one another, with each series stacked on the one below it so the lines do not overlap. Consider that we got the quarterly sales data for each product sold by the company and are to visualize it to predict company sales for the next year.

Advantages of a Line Graph

  • Great for visualizing trends and changes over time.
  • It is simple to construct and read.

Disadvantage of a Line Graph

  • It can not compare different variables at a single place or time.
Read: 11 Types of Graphs & Charts + [Examples]

What are the Steps in Interpreting Data?

After data collection, you’ll want to know what your findings mean. Ultimately, the interpretation of your data will depend largely on the questions you asked in your survey or your initial study questions. Here are the four steps for accurately interpreting data:

1. Gather the data

The very first step in interpreting data is having all the relevant data assembled. You can do this by visualizing it first, for example in a bar graph or pie chart. The purpose of this step is to accurately analyze the data without any bias.

Now is the time to remember the details of how you conducted the research. Were there any flaws or changes that occurred when gathering this data? Did you keep any observatory notes and indicators?

Once you have your complete data, you can move to the next stage.

2. Develop your findings

This is the summary of your observations. Here, you examine the data thoroughly to find trends, patterns, or behaviors. If you are researching a group of people through a sample population, this is where you analyze behavioral patterns. The purpose of this step is to compare these deductions before drawing any conclusions. You can compare these deductions with each other, with similar data sets from the past, or with general deductions in your industry.

3. Derive Conclusions

Once you’ve developed your findings from your data sets, you can then draw conclusions based on the trends you’ve discovered. Your conclusions should answer the questions that led you to your research. If they do not answer those questions, ask why; this may lead to further research or subsequent questions.

4. Give recommendations

For every research conclusion, there should be a recommendation; this is the final step in data interpretation because recommendations summarize your findings and conclusions. A recommendation can go one of two ways: you can either recommend a line of action or recommend that further research be conducted.

How to Collect Data with Surveys or Questionnaires

As a business owner who wants to regularly track the number of sales made in your business, you need to know how to collect data. Follow these 4 easy steps to collect real-time sales data for your business using Formplus.

Step 1 – Register on Formplus

  • Visit Formplus on your PC or mobile device.
  • Click on the Start for Free button to start collecting data for your business.

Step 2 – Start Creating Surveys For Free

  • Go to the Forms tab beside your Dashboard in the Formplus menu.
  • Click on Create Form to start creating your survey
  • Take advantage of the dynamic form fields to add questions to your survey.
  • You can also add payment options that allow you to receive payments using Paypal, Flutterwave, and Stripe.

Step 3 – Customize Your Survey and Start Collecting Data

  • Go to the Customise tab to beautify your survey by adding colours, background images, fonts, or even a custom CSS.
  • You can also add your brand logo, colour and other things to define your brand identity.
  • Preview your form, share, and start collecting data.

Step 4 – Track Responses Real-time

  • Track your sales data in real-time in the Analytics section.

Why Use Formplus to Collect Data?  

The responses to each form can be accessed through the analytics section, which automatically analyzes the responses collected through Formplus forms. This section visualizes the collected data using tables and graphs, allowing analysts to easily arrive at an actionable insight without going through the rigorous process of analyzing the data.

  • 30+ Form Fields

There is no restriction on the kind of data that can be collected by researchers through the available form fields. Researchers can collect both quantitative and qualitative data types simultaneously through a single questionnaire.

  • Data Storage

 The data collected through Formplus are safely stored and secured in the Formplus database. You can also choose to store this data in an external storage device.

  • Real-time access

Formplus gives real-time access to information, making sure researchers are always informed of the current trends and changes in data. That way, researchers can easily measure a shift in market trends that inform important decisions.  

  • WordPress Integration

Users can now embed Formplus forms into their WordPress posts and pages using a shortcode. This can be done by installing the Formplus plugin into your WordPress websites.

Advantages and Importance of Data Interpretation  

  • Data interpretation is important because it helps make data-driven decisions.
  • It saves costs by revealing cost-saving opportunities.
  • The insights and findings gained from interpretation can be used to spot trends in a sector or industry.

Conclusion   

Data interpretation and analysis is an important aspect of working with data sets in any field or research and statistics. They both go hand in hand, as the process of data interpretation involves the analysis of data.

The process of data interpretation is usually cumbersome and will naturally become more difficult with the vast amount of data being churned out daily. However, with the accessibility of data analysis tools and machine learning techniques, analysts are gradually finding it easier to interpret data.

Data interpretation is very important, as it helps extract useful information from a pool of irrelevant data and supports informed decision-making. It is useful for individuals, businesses, and researchers alike.



What is Data Interpretation? Tools, Techniques, Examples

By Hady ElHady

July 14, 2023


In today’s data-driven world, the ability to interpret and extract valuable insights from data is crucial for making informed decisions. Data interpretation involves analyzing and making sense of data to uncover patterns, relationships, and trends that can guide strategic actions.

Whether you’re a business professional, researcher, or data enthusiast, this guide will equip you with the knowledge and techniques to master the art of data interpretation.

What is Data Interpretation?

Data interpretation is the process of analyzing and making sense of data to extract valuable insights and draw meaningful conclusions. It involves examining patterns, relationships, and trends within the data to uncover actionable information. Data interpretation goes beyond merely collecting and organizing data; it is about extracting knowledge and deriving meaningful implications from the data at hand.

Why is Data Interpretation Important?

In today’s data-driven world, data interpretation holds immense importance across various industries and domains. Here are some key reasons why data interpretation is crucial:

  • Informed Decision-Making: Data interpretation enables informed decision-making by providing evidence-based insights. It helps individuals and organizations make choices supported by data-driven evidence, rather than relying on intuition or assumptions .
  • Identifying Opportunities and Risks: Effective data interpretation helps identify opportunities for growth and innovation. By analyzing patterns and trends within the data, organizations can uncover new market segments, consumer preferences, and emerging trends. Simultaneously, data interpretation also helps identify potential risks and challenges that need to be addressed proactively.
  • Optimizing Performance: By analyzing data and extracting insights, organizations can identify areas for improvement and optimize their performance. Data interpretation allows for identifying bottlenecks, inefficiencies, and areas of optimization across various processes, such as supply chain management, production, and customer service.
  • Enhancing Customer Experience: Data interpretation plays a vital role in understanding customer behavior and preferences. By analyzing customer data, organizations can personalize their offerings, improve customer experience, and tailor marketing strategies to target specific customer segments effectively.
  • Predictive Analytics and Forecasting: Data interpretation enables predictive analytics and forecasting, allowing organizations to anticipate future trends and make strategic plans accordingly. By analyzing historical data patterns, organizations can make predictions and forecast future outcomes, facilitating proactive decision-making and risk mitigation.
  • Evidence-Based Research and Policy Making: In fields such as healthcare, social sciences, and public policy, data interpretation plays a crucial role in conducting evidence-based research and policy-making. By analyzing relevant data, researchers and policymakers can identify trends, assess the effectiveness of interventions, and make informed decisions that impact society positively.
  • Competitive Advantage: Organizations that excel in data interpretation gain a competitive edge. By leveraging data insights, organizations can make informed strategic decisions, innovate faster, and respond promptly to market changes. This enables them to stay ahead of their competitors in today’s fast-paced business environment.

In summary, data interpretation is essential for leveraging the power of data and transforming it into actionable insights. It enables organizations and individuals to make informed decisions, identify opportunities and risks, optimize performance, enhance customer experience, predict future trends, and gain a competitive advantage in their respective domains.

The Role of Data Interpretation in Decision-Making Processes

Data interpretation plays a crucial role in decision-making processes across organizations and industries. It empowers decision-makers with valuable insights and helps guide their actions. Here are some key roles that data interpretation fulfills in decision-making:

  • Informing Strategic Planning : Data interpretation provides decision-makers with a comprehensive understanding of the current state of affairs and the factors influencing their organization or industry. By analyzing relevant data, decision-makers can assess market trends, customer preferences, and competitive landscapes. These insights inform the strategic planning process, guiding the formulation of goals, objectives, and action plans.
  • Identifying Problem Areas and Opportunities: Effective data interpretation helps identify problem areas and opportunities for improvement. By analyzing data patterns and trends, decision-makers can identify bottlenecks, inefficiencies, or underutilized resources. This enables them to address challenges and capitalize on opportunities, enhancing overall performance and competitiveness.
  • Risk Assessment and Mitigation: Data interpretation allows decision-makers to assess and mitigate risks. By analyzing historical data, market trends, and external factors, decision-makers can identify potential risks and vulnerabilities. This understanding helps in developing risk management strategies and contingency plans to mitigate the impact of risks and uncertainties.
  • Facilitating Evidence-Based Decision-Making: Data interpretation enables evidence-based decision-making by providing objective insights and factual evidence. Instead of relying solely on intuition or subjective opinions, decision-makers can base their choices on concrete data-driven evidence. This leads to more accurate and reliable decision-making, reducing the likelihood of biases or errors.
  • Measuring and Evaluating Performance: Data interpretation helps decision-makers measure and evaluate the performance of various aspects of their organization. By analyzing key performance indicators (KPIs) and relevant metrics, decision-makers can track progress towards goals, assess the effectiveness of strategies and initiatives, and identify areas for improvement. This data-driven evaluation enables evidence-based adjustments and ensures that resources are allocated optimally.
  • Enabling Predictive Analytics and Forecasting: Data interpretation plays a critical role in predictive analytics and forecasting. Decision-makers can analyze historical data patterns to make predictions and forecast future trends. This capability empowers organizations to anticipate market changes, customer behavior, and emerging opportunities. By making informed decisions based on predictive insights, decision-makers can stay ahead of the curve and proactively respond to future developments.
  • Supporting Continuous Improvement: Data interpretation facilitates a culture of continuous improvement within organizations. By regularly analyzing data, decision-makers can monitor performance, identify areas for enhancement, and implement data-driven improvements. This iterative process of analyzing data, making adjustments, and measuring outcomes enables organizations to continuously refine their strategies and operations.

In summary, data interpretation is integral to effective decision-making. It informs strategic planning, identifies problem areas and opportunities, assesses and mitigates risks, facilitates evidence-based decision-making, measures performance, enables predictive analytics, and supports continuous improvement. By harnessing the power of data interpretation, decision-makers can make well-informed, data-driven decisions that lead to improved outcomes and success in their endeavors.

Understanding Data

Before delving into data interpretation, it’s essential to understand the fundamentals of data. Data can be categorized into qualitative and quantitative types, each requiring different analysis methods. Qualitative data represents non-numerical information, such as opinions or descriptions, while quantitative data consists of measurable quantities.

Types of Data

  • Qualitative data: Includes observations, interviews, survey responses, and other subjective information.
  • Quantitative data: Comprises numerical data collected through measurements, counts, or ratings.

Data Collection Methods

To perform effective data interpretation, you need to be aware of the various methods used to collect data. These methods can include surveys, experiments, observations, interviews, and more. Proper data collection techniques ensure the accuracy and reliability of the data.

Data Sources and Reliability

When working with data, it’s important to consider the source and reliability of the data. Reliable sources include official statistics, reputable research studies, and well-designed surveys. Assessing the credibility of the data source helps you determine its accuracy and validity.

Data Preprocessing and Cleaning

Before diving into data interpretation, it’s crucial to preprocess and clean the data to remove any inconsistencies or errors. This step involves identifying missing values, outliers, and data inconsistencies, as well as handling them appropriately. Data preprocessing ensures that the data is in a suitable format for analysis.
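
As a small illustration, here is a minimal pandas sketch of typical preprocessing steps; the dataset, column names, and thresholds are hypothetical:

```python
import numpy as np
import pandas as pd

# Hypothetical raw sales records with typical quality problems.
raw = pd.DataFrame({
    "order_id": [1, 2, 2, 3, 4, 5],
    "region": ["North", "north ", "north ", "South", None, "East"],
    "amount": [120.0, 95.5, 95.5, np.nan, 130.0, 9_999.0],  # a missing value and an extreme value
})

clean = (
    raw.drop_duplicates(subset="order_id")                            # remove duplicate records
       .assign(region=lambda d: d["region"].str.strip().str.title())  # fix inconsistent labels
)
clean["amount"] = clean["amount"].fillna(clean["amount"].median())    # handle missing values

# Flag extreme outliers (here: more than 3 standard deviations from the mean).
z_scores = (clean["amount"] - clean["amount"].mean()) / clean["amount"].std()
clean["is_outlier"] = z_scores.abs() > 3
print(clean)
```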

Exploratory Data Analysis: Unveiling Insights from Data

Exploratory Data Analysis (EDA) is a vital step in data interpretation, helping you understand the data’s characteristics and uncover initial insights. By employing various graphical and statistical techniques, you can gain a deeper understanding of the data patterns and relationships.

Univariate Analysis

Univariate analysis focuses on examining individual variables in isolation, revealing their distribution and basic characteristics. Here are some common techniques used in univariate analysis:

  • Histograms: Graphical representations of the frequency distribution of a variable. Histograms display data in bins or intervals, providing a visual depiction of the data’s distribution.
  • Box plots: Box plots summarize the distribution of a variable by displaying its quartiles, median, and any potential outliers. They offer a concise overview of the data’s central tendency and spread.
  • Frequency distributions: Tabular representations that show the number of occurrences or frequencies of different values or ranges of a variable.

Bivariate Analysis

Bivariate analysis explores the relationship between two variables, examining how they interact and influence each other. By visualizing and analyzing the connections between variables, you can identify correlations and patterns. Some common techniques for bivariate analysis, with a short code sketch after the list, include:

  • Scatter plots: Graphical representations that display the relationship between two continuous variables. Scatter plots help identify potential linear or nonlinear associations between the variables.
  • Correlation analysis: Statistical measure of the strength and direction of the relationship between two variables. Correlation coefficients, such as Pearson’s correlation coefficient, range from -1 to 1, with higher absolute values indicating stronger correlations.
  • Heatmaps: Visual representations that use color intensity to show the strength of relationships between two categorical variables. Heatmaps help identify patterns and associations between variables.
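
Here is a minimal bivariate-analysis sketch using scipy and matplotlib; the two variables (ad spend and sales) are simulated purely for illustration:

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical continuous variables: advertising spend and monthly sales.
ad_spend = rng.normal(50, 10, size=200)
sales = 3.0 * ad_spend + rng.normal(0, 15, size=200)

# Pearson's r quantifies the strength and direction of the linear relationship.
r, p_value = stats.pearsonr(ad_spend, sales)
print(f"Pearson r = {r:.2f}, p-value = {p_value:.3g}")

# A scatter plot makes the same relationship visible at a glance.
plt.scatter(ad_spend, sales, alpha=0.5)
plt.xlabel("Ad spend")
plt.ylabel("Sales")
plt.title(f"Scatter plot (r = {r:.2f})")
plt.show()
```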

Multivariate Analysis

Multivariate analysis involves the examination of three or more variables simultaneously. This analysis technique provides a deeper understanding of complex relationships and interactions among multiple variables. Some common methods used in multivariate analysis include:

  • Dimensionality reduction techniques: Approaches like Principal Component Analysis (PCA) or t-Distributed Stochastic Neighbor Embedding (t-SNE) reduce high-dimensional data into lower dimensions, simplifying analysis and visualization.
  • Cluster analysis: Grouping data points based on similarities or dissimilarities. Cluster analysis helps identify patterns or subgroups within the data.

Descriptive Statistics: Understanding Data’s Central Tendency and Variability

Descriptive statistics provides a summary of the main features of a dataset, focusing on measures of central tendency and variability. These statistics offer a comprehensive overview of the data’s characteristics and aid in understanding its distribution and spread.

Measures of Central Tendency

Measures of central tendency describe the central or average value around which the data tends to cluster. Here are some commonly used measures of central tendency:

  • Mean: The arithmetic average of a dataset, calculated by summing all values and dividing by the total number of observations.
  • Median: The middle value in a dataset when arranged in ascending or descending order. The median is less sensitive to extreme values than the mean.
  • Mode: The most frequently occurring value in a dataset.

Measures of Dispersion

Measures of dispersion quantify the spread or variability of the data points. Understanding variability is essential for assessing the data’s reliability and drawing meaningful conclusions. Common measures of dispersion include:

  • Range: The difference between the maximum and minimum values in a dataset, providing a simple measure of spread.
  • Variance: The average squared deviation from the mean, measuring the dispersion of data points around the mean.
  • Standard Deviation: The square root of the variance, representing the average distance between each data point and the mean.

Percentiles and Quartiles

Percentiles and quartiles divide the dataset into equal parts, allowing you to understand the distribution of values within specific ranges. They provide insights into the relative position of individual data points in comparison to the entire dataset.

  • Percentiles: Divisions of data into 100 equal parts, indicating the percentage of values that fall below a given value. The median corresponds to the 50th percentile.
  • Quartiles: Divisions of data into four equal parts, denoted as the first quartile (Q1), median (Q2), and third quartile (Q3). The interquartile range (IQR) measures the spread between Q1 and Q3.

Skewness and Kurtosis

Skewness and kurtosis measure the shape of the data distribution. They provide insights into its symmetry, tail heaviness, and peakedness; a short code sketch covering these descriptive measures follows the list.

  • Skewness: Measures the asymmetry of the data distribution. Positive skewness indicates a longer tail on the right side, while negative skewness suggests a longer tail on the left side.
  • Kurtosis: Measures the peakedness or flatness of the data distribution. Positive kurtosis indicates a sharper peak and heavier tails, while negative kurtosis suggests a flatter peak and lighter tails.
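
The descriptive measures above can all be computed in a few lines; here is a minimal sketch with numpy and scipy, using a simulated right-skewed sample for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
data = rng.lognormal(mean=3.0, sigma=0.5, size=1_000)  # a right-skewed sample

print("mean:", np.mean(data))
print("median:", np.median(data))          # lower than the mean for right-skewed data
print("std dev:", np.std(data, ddof=1))    # sample standard deviation
print("quartiles:", np.percentile(data, [25, 50, 75]))
print("IQR:", np.percentile(data, 75) - np.percentile(data, 25))
print("skewness:", stats.skew(data))       # positive: longer right tail
print("kurtosis:", stats.kurtosis(data))   # excess kurtosis; positive: heavier tails
```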

Inferential Statistics: Drawing Inferences and Making Hypotheses

Inferential statistics involves making inferences and drawing conclusions about a population based on a sample of data. It allows you to generalize findings beyond the observed data and make predictions or test hypotheses. This section covers key techniques and concepts in inferential statistics.

Hypothesis Testing

Hypothesis testing involves making statistical inferences about population parameters based on sample data. It helps determine the validity of a claim or hypothesis by examining the evidence provided by the data. The hypothesis testing process typically involves the following steps, illustrated in the short sketch after the list:

  • Formulate hypotheses: Define the null hypothesis (H0) and alternative hypothesis (Ha) based on the research question or claim.
  • Select a significance level: Determine the acceptable level of error (alpha) to guide the decision-making process.
  • Collect and analyze data: Gather and analyze the sample data using appropriate statistical tests.
  • Calculate the test statistic: Compute the test statistic based on the selected test and the sample data.
  • Determine the critical region: Identify the critical region based on the significance level and the test statistic’s distribution.
  • Make a decision: Compare the test statistic with the critical region and either reject or fail to reject the null hypothesis.
  • Draw conclusions: Interpret the results and make conclusions based on the decision made in the previous step.
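As an illustration of these steps, the sketch below runs a two-sample t-test with SciPy; the group values, the 0.05 significance level, and the choice of Welch's t-test are assumptions made purely for the example:

```python
import numpy as np
from scipy import stats

# Hypothetical data: task completion times (seconds) for two versions of a web page
group_a = np.array([12.1, 11.8, 13.0, 12.5, 11.9, 12.7, 12.3, 13.1])
group_b = np.array([11.2, 11.5, 10.9, 11.8, 11.1, 11.4, 11.7, 11.0])

alpha = 0.05  # significance level chosen before looking at the data

# H0: the two group means are equal; Ha: they differ (two-sided test)
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)  # Welch's t-test

print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value < alpha:
    print("Reject H0: the difference in means is statistically significant.")
else:
    print("Fail to reject H0: no significant difference detected.")
```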

Confidence Intervals

Confidence intervals provide a range of values within which the population parameter is likely to fall. They quantify the uncertainty associated with estimating population parameters based on sample data. The construction of a confidence interval involves:

  • Select a confidence level: Choose the desired level of confidence, typically expressed as a percentage (e.g., 95% confidence level).
  • Compute the sample statistic: Calculate the sample statistic (e.g., sample mean) from the sample data.
  • Determine the margin of error: Determine the margin of error, which represents the maximum likely distance between the sample statistic and the population parameter.
  • Construct the confidence interval: Establish the upper and lower bounds of the confidence interval using the sample statistic and the margin of error.
  • Interpret the confidence interval: Interpret the confidence interval in the context of the problem, acknowledging the level of confidence and the potential range of population values.
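For example, a t-based 95% confidence interval for a mean can be constructed as in this short sketch; the sample values and the 95% confidence level are assumptions chosen for illustration:

```python
import numpy as np
from scipy import stats

# Hypothetical sample of measurements
sample = np.array([4.1, 3.9, 4.4, 4.0, 4.2, 3.8, 4.3, 4.1, 4.0, 4.2])

confidence = 0.95
mean = sample.mean()
sem = stats.sem(sample)                  # standard error of the mean
n = len(sample)

# Margin of error = critical t value * standard error
t_crit = stats.t.ppf((1 + confidence) / 2, df=n - 1)
margin = t_crit * sem

print(f"Sample mean: {mean:.3f}")
print(f"{int(confidence * 100)}% CI: ({mean - margin:.3f}, {mean + margin:.3f})")
```

The interpretation would be that, at the 95% confidence level, the interval is likely to cover the true population mean.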

Parametric and Non-parametric Tests

In inferential statistics, different tests are used based on the nature of the data and the assumptions made about the population distribution. Parametric tests assume specific population distributions, such as the normal distribution, while non-parametric tests make fewer assumptions. Some commonly used parametric and non-parametric tests include:

  • t-tests: Compare means between two groups or assess differences in paired observations.
  • Analysis of Variance (ANOVA): Compare means among multiple groups.
  • Chi-square test: Assess the association between categorical variables.
  • Mann-Whitney U test: Compare medians between two independent groups.
  • Kruskal-Wallis test: Compare medians among multiple independent groups.
  • Spearman’s rank correlation: Measure the strength and direction of monotonic relationships between variables.
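The following sketch shows how a few of these tests might be run with SciPy; the contingency table and group values are invented purely for illustration:

```python
import numpy as np
from scipy import stats

# Chi-square test of independence on a hypothetical 2x2 contingency table
# (rows: two customer segments; columns: purchased / did not purchase)
table = np.array([[30, 70],
                  [45, 55]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)
print(f"Chi-square = {chi2:.2f}, p = {p_chi:.4f}")

# Mann-Whitney U test comparing two independent groups (no normality assumption)
group_1 = [7, 9, 6, 8, 10, 7, 9]
group_2 = [5, 6, 4, 7, 5, 6, 4]
u_stat, p_u = stats.mannwhitneyu(group_1, group_2, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_u:.4f}")

# Spearman's rank correlation for a monotonic relationship
x = [1, 2, 3, 4, 5, 6, 7]
y = [2, 3, 5, 4, 8, 9, 12]
rho, p_rho = stats.spearmanr(x, y)
print(f"Spearman's rho = {rho:.2f}, p = {p_rho:.4f}")
```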

Correlation and Regression Analysis

Correlation and regression analysis explore the relationship between variables, helping you understand how changes in one variable relate to changes in another. These analyses are particularly useful for predicting and modeling outcomes based on explanatory variables.

  • Correlation analysis: Determines the strength and direction of the linear relationship between two continuous variables using correlation coefficients, such as Pearson’s correlation coefficient.
  • Regression analysis: Models the relationship between a dependent variable and one or more independent variables, allowing you to estimate the impact of the independent variables on the dependent variable. It provides insights into the direction, magnitude, and significance of these relationships.
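A brief illustration of both ideas, using SciPy and made-up advertising-spend and sales figures:

```python
import numpy as np
from scipy import stats

# Hypothetical data: advertising spend (x) and sales (y)
x = np.array([10, 15, 20, 25, 30, 35, 40, 45])
y = np.array([25, 31, 38, 42, 49, 53, 60, 65])

# Pearson correlation: strength and direction of the linear relationship
r, p_value = stats.pearsonr(x, y)
print(f"Pearson r = {r:.3f} (p = {p_value:.4f})")

# Simple linear regression: model sales as a function of spend
result = stats.linregress(x, y)
print(f"Estimated model: sales = {result.intercept:.2f} + {result.slope:.2f} * spend")
print(f"R-squared = {result.rvalue ** 2:.3f}")
```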

Data Interpretation Techniques: Unlocking Insights for Informed Decisions

Data interpretation techniques enable you to extract actionable insights from your data, empowering you to make informed decisions. We’ll explore key techniques that facilitate pattern recognition, trend analysis, comparative analysis, predictive modeling, and causal inference.

Pattern Recognition and Trend Analysis

Identifying patterns and trends in data helps uncover valuable insights that can guide decision-making. Several techniques aid in recognizing patterns and analyzing trends:

  • Time series analysis: Analyzes data points collected over time to identify recurring patterns and trends.
  • Moving averages: Smooths out fluctuations in data, highlighting underlying trends and patterns.
  • Seasonal decomposition: Separates a time series into its seasonal, trend, and residual components.
  • Cluster analysis: Groups similar data points together, identifying patterns or segments within the data.
  • Association rule mining: Discovers relationships and dependencies between variables, uncovering valuable patterns and trends.
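As a small illustration of trend analysis, the pandas sketch below smooths a made-up monthly series with a moving average; the series, window size, and random noise are assumptions for the example, and libraries such as statsmodels offer seasonal decomposition along the same lines:

```python
import numpy as np
import pandas as pd

# Hypothetical monthly time series with an upward trend plus noise
dates = pd.date_range("2022-01-01", periods=24, freq="MS")
values = np.linspace(100, 160, 24) + np.random.default_rng(0).normal(0, 5, 24)
series = pd.Series(values, index=dates)

# A 3-month centered moving average smooths short-term fluctuations
smoothed = series.rolling(window=3, center=True).mean()

trend = pd.DataFrame({"observed": series, "moving_avg": smoothed})
print(trend.head(6))
```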

Comparative Analysis

Comparative analysis involves comparing different subsets of data or variables to identify similarities, differences, or relationships. This analysis helps uncover insights into the factors that contribute to variations in the data.

  • Cross-tabulation: Compares two or more categorical variables to understand the relationships and dependencies between them.
  • ANOVA (Analysis of Variance): Assesses differences in means among multiple groups to identify significant variations.
  • Comparative visualizations: Graphical representations, such as bar charts or box plots, help compare data across categories or groups.
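For instance, a cross-tabulation and a one-way ANOVA could be run as in this sketch, where the survey records are made up for illustration:

```python
import pandas as pd
from scipy import stats

# Hypothetical survey data: region, preferred channel, and spend per customer
df = pd.DataFrame({
    "region":  ["North", "North", "South", "South", "East", "East", "North", "South", "East"],
    "channel": ["web", "store", "web", "web", "store", "web", "store", "store", "web"],
    "spend":   [120, 90, 150, 140, 80, 130, 95, 85, 125],
})

# Cross-tabulation of two categorical variables
print(pd.crosstab(df["region"], df["channel"]))

# One-way ANOVA: does mean spend differ across regions?
groups = [g["spend"].values for _, g in df.groupby("region")]
f_stat, p_value = stats.f_oneway(*groups)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```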

Predictive Modeling and Forecasting

Predictive modeling uses historical data to build mathematical models that can predict future outcomes. This technique leverages machine learning algorithms to uncover patterns and relationships in data, enabling accurate predictions.

  • Regression models: Build mathematical equations to predict the value of a dependent variable based on independent variables.
  • Time series forecasting: Utilizes historical time series data to predict future values, considering factors like trend, seasonality, and cyclical patterns.
  • Machine learning algorithms: Employ advanced algorithms, such as decision trees, random forests, or neural networks, to generate accurate predictions based on complex data patterns.
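A minimal predictive-modeling sketch with scikit-learn; the features and target are synthetic, and a held-out test set is used to check that the model generalizes beyond the data it was trained on:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

# Synthetic features (e.g., price, ad spend, seasonality index) and a sales-like target
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
y = 50 + 10 * X[:, 0] - 5 * X[:, 1] + rng.normal(scale=2, size=200)

# Hold out unseen data for evaluation
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

predictions = model.predict(X_test)
print("Mean absolute error on unseen data:", mean_absolute_error(y_test, predictions))
```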

Causal Inference and Experimentation

Causal inference aims to establish cause-and-effect relationships between variables, helping determine the impact of certain factors on an outcome. Experimental design and controlled studies are essential for establishing causal relationships.

  • Randomized controlled trials (RCTs): Divide participants into treatment and control groups to assess the causal effects of an intervention.
  • Quasi-experimental designs: Apply treatment to specific groups, allowing for some level of control but not full randomization.
  • Difference-in-differences analysis: Compares changes in outcomes between treatment and control groups before and after an intervention or treatment.
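A toy difference-in-differences calculation might look like the following pandas sketch; the group averages before and after the hypothetical intervention are invented for illustration:

```python
import pandas as pd

# Hypothetical average outcomes (e.g., weekly sales) by group and period
data = pd.DataFrame({
    "group":   ["treatment", "treatment", "control", "control"],
    "period":  ["before", "after", "before", "after"],
    "outcome": [100.0, 130.0, 95.0, 105.0],
})

pivot = data.pivot(index="group", columns="period", values="outcome")

# DiD estimate: (treatment after - before) - (control after - before)
did = (
    (pivot.loc["treatment", "after"] - pivot.loc["treatment", "before"])
    - (pivot.loc["control", "after"] - pivot.loc["control", "before"])
)
print(f"Estimated treatment effect (DiD): {did:.1f}")  # 30 - 10 = 20 in this toy example
```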

Data Visualization Techniques: Communicating Insights Effectively

Data visualization is a powerful tool for presenting data in a visually appealing and informative manner. Visual representations help simplify complex information, enabling effective communication and understanding.

Importance of Data Visualization

Data visualization serves multiple purposes in data interpretation and analysis. It allows you to:

  • Simplify complex data: Visual representations simplify complex information, making it easier to understand and interpret.
  • Spot patterns and trends: Visualizations help identify patterns, trends, and anomalies that may not be apparent in raw data.
  • Communicate insights: Visualizations are effective in conveying insights to different stakeholders and audiences.
  • Support decision-making: Well-designed visualizations facilitate informed decision-making by providing a clear understanding of the data.

Choosing the Right Visualization Method

Selecting the appropriate visualization method is crucial to effectively communicate your data. Different types of data and insights are best represented using specific visualization techniques. Consider the following factors when choosing a visualization method:

  • Data type: Determine whether the data is categorical, ordinal, or numerical.
  • Insights to convey: Identify the key messages or patterns you want to communicate.
  • Audience and context: Consider the knowledge level and preferences of the audience, as well as the context in which the visualization will be presented.

Common Data Visualization Tools and Software

Several tools and software applications simplify the process of creating visually appealing and interactive data visualizations. Some widely used tools include:

  • Tableau: A powerful business intelligence and data visualization tool that allows you to create interactive dashboards, charts, and maps.
  • Power BI: Microsoft’s business analytics tool that enables data visualization, exploration, and collaboration.
  • Python libraries: Matplotlib, Seaborn, and Plotly are popular Python libraries for creating static and interactive visualizations.
  • R programming: R offers a wide range of packages, such as ggplot2 and Shiny, for creating visually appealing data visualizations.
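As a simple example of the Python route, the following Matplotlib sketch plots two made-up revenue series with labels, a title, and a legend:

```python
import matplotlib.pyplot as plt
import numpy as np

# Hypothetical monthly revenue for two product lines
months = np.arange(1, 13)
product_a = np.array([12, 14, 13, 16, 18, 21, 20, 22, 24, 23, 26, 28])
product_b = np.array([10, 11, 13, 12, 14, 15, 17, 16, 18, 20, 19, 22])

fig, ax = plt.subplots(figsize=(8, 4))
ax.plot(months, product_a, marker="o", label="Product A")
ax.plot(months, product_b, marker="s", label="Product B")
ax.set_xlabel("Month")
ax.set_ylabel("Revenue (thousand USD)")
ax.set_title("Monthly revenue by product line")
ax.legend()
plt.tight_layout()
plt.show()
```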

Best Practices for Creating Effective Visualizations

Creating effective visualizations requires attention to design principles and best practices. By following these guidelines, you can ensure that your visualizations effectively communicate insights:

  • Simplify and declutter: Eliminate unnecessary elements, labels, or decorations that may distract from the main message.
  • Use appropriate chart types: Select chart types that best represent your data and the relationships you want to convey.
  • Highlight important information: Use color, size, or annotations to draw attention to key insights or trends in your data.
  • Ensure readability and accessibility: Use clear labels, appropriate font sizes, and sufficient contrast to make your visualizations easily readable.
  • Tell a story: Organize your visualizations in a logical order and guide the viewer’s attention to the most important aspects of the data.
  • Iterate and refine: Continuously refine and improve your visualizations based on feedback and testing.

Data Interpretation in Specific Domains: Unlocking Domain-Specific Insights

Data interpretation plays a vital role across various industries and domains. Let’s explore how data interpretation is applied in specific fields, providing real-world examples and applications.

Marketing and Consumer Behavior

In the marketing field, data interpretation helps businesses understand consumer behavior, market trends, and the effectiveness of marketing campaigns. Key applications include:

  • Customer segmentation: Identifying distinct customer groups based on demographics, preferences, or buying patterns.
  • Market research: Analyzing survey data or social media sentiment to gain insights into consumer opinions and preferences.
  • Campaign analysis: Assessing the impact and ROI of marketing campaigns through data analysis and interpretation.

Financial Analysis and Investment Decisions

Data interpretation is crucial in financial analysis and investment decision-making. It enables the identification of market trends, risk assessment, and portfolio optimization. Key applications include:

  • Financial statement analysis: Interpreting financial statements to assess a company’s financial health, profitability, and growth potential.
  • Risk analysis: Evaluating investment risks by analyzing historical data, market trends, and financial indicators.
  • Portfolio management: Utilizing data analysis to optimize investment portfolios based on risk-return trade-offs and diversification.

Healthcare and Medical Research

Data interpretation plays a significant role in healthcare and medical research, aiding in understanding patient outcomes, disease patterns, and treatment effectiveness. Key applications include:

  • Clinical trials: Analyzing clinical trial data to assess the safety and efficacy of new treatments or interventions.
  • Epidemiological studies: Interpreting population-level data to identify disease risk factors and patterns.
  • Healthcare analytics: Leveraging patient data to improve healthcare delivery, optimize resource allocation, and enhance patient outcomes.

Social Sciences and Public Policy

Data interpretation is integral to social sciences and public policy, informing evidence-based decision-making and policy formulation. Key applications include:

  • Survey analysis: Interpreting survey data to understand public opinion, social attitudes, and behavior patterns.
  • Policy evaluation: Analyzing data to assess the effectiveness and impact of public policies or interventions.
  • Crime analysis: Utilizing data interpretation techniques to identify crime patterns, hotspots, and trends, aiding law enforcement and policy formulation.

Data Interpretation Tools and Software: Empowering Your Analysis

Several software tools facilitate data interpretation, analysis, and visualization, providing a range of features and functionalities. Understanding and leveraging these tools can enhance your data interpretation capabilities.

Spreadsheet Software

Spreadsheet software like Excel and Google Sheets offer a wide range of data analysis and interpretation functionalities. These tools allow you to:

  • Perform calculations: Use formulas and functions to compute descriptive statistics, create pivot tables, or analyze data.
  • Visualize data: Create charts, graphs, and tables to visualize and summarize data effectively.
  • Manipulate and clean data: Utilize built-in functions and features to clean, transform, and preprocess data.

Statistical Software

Statistical software packages, such as R and Python, provide a more comprehensive and powerful environment for data interpretation. These tools offer advanced statistical analysis capabilities, including:

  • Data manipulation: Perform data transformations, filtering, and merging to prepare data for analysis.
  • Statistical modeling: Build regression models, conduct hypothesis tests, and perform advanced statistical analyses.
  • Visualization: Generate high-quality visualizations and interactive plots to explore and present data effectively.
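For instance, a typical preparation step in Python might filter, merge, and aggregate tables before interpretation, as in this sketch with invented customer and order data:

```python
import pandas as pd

# Hypothetical customer and order tables to be merged and summarized
customers = pd.DataFrame({"customer_id": [1, 2, 3],
                          "segment": ["retail", "wholesale", "retail"]})
orders = pd.DataFrame({"customer_id": [1, 1, 2, 3, 3, 3],
                       "amount": [120, 80, 450, 60, 75, 90]})

# Merge the tables, then aggregate spend by customer segment
merged = orders.merge(customers, on="customer_id", how="left")
summary = (merged.groupby("segment")["amount"]
                 .agg(total="sum", average="mean", n_orders="count")
                 .reset_index())
print(summary)
```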

Business Intelligence Tools

Business intelligence (BI) tools, such as Tableau and Power BI, enable interactive data exploration, analysis, and visualization. These tools provide:

  • Drag-and-drop functionality: Easily create interactive dashboards, reports, and visualizations without extensive coding.
  • Data integration: Connect to multiple data sources and perform data blending for comprehensive analysis.
  • Real-time data analysis: Analyze and visualize live data streams for up-to-date insights and decision-making.

Data Mining and Machine Learning Tools

Data mining and machine learning tools offer advanced algorithms and techniques for extracting insights from complex datasets. Some popular tools include:

  • Python libraries: Scikit-learn, TensorFlow, and PyTorch provide comprehensive machine learning and data mining functionalities.
  • R packages: Packages like caret, randomForest, and xgboost offer a wide range of algorithms for predictive modeling and data mining.
  • Big data tools: Apache Spark, Hadoop, and Apache Flink provide distributed computing frameworks for processing and analyzing large-scale datasets.
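As a small data-mining example, the following scikit-learn sketch clusters synthetic customer data with k-means after standardizing the features; the feature values and the choice of two clusters are assumptions made for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Synthetic customer features: annual spend and number of visits (two loose groups)
rng = np.random.default_rng(1)
spend = np.concatenate([rng.normal(200, 30, 50), rng.normal(800, 60, 50)])
visits = np.concatenate([rng.normal(5, 1, 50), rng.normal(20, 3, 50)])
X = np.column_stack([spend, visits])

# Standardize so both features contribute equally to the distance metric
X_scaled = StandardScaler().fit_transform(X)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X_scaled)
print("Cluster sizes:", np.bincount(kmeans.labels_))
print("Cluster centers (scaled units):")
print(kmeans.cluster_centers_)
```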

Common Challenges and Pitfalls in Data Interpretation: Navigating the Data Maze

Data interpretation comes with its own set of challenges and potential pitfalls. Being aware of these challenges can help you avoid common errors and ensure the accuracy and validity of your interpretations.

Sampling Bias and Data Quality Issues

Sampling bias occurs when the sample data is not representative of the population, leading to biased interpretations. Common types of sampling bias include selection bias, non-response bias, and volunteer bias. To mitigate these issues, consider:

  • Random sampling: Implement random sampling techniques to ensure representativeness.
  • Sample size: Use appropriate sample sizes to reduce sampling errors and increase the accuracy of interpretations.
  • Data quality checks: Scrutinize data for completeness, accuracy, and consistency before analysis.

Overfitting and Spurious Correlations

Overfitting occurs when a model fits the noise or random variations in the data instead of the underlying patterns. Spurious correlations, on the other hand, arise when variables appear to be related but are not causally connected. To avoid these issues:

  • Use appropriate model complexity: Avoid overcomplicating models and select the level of complexity that best fits the data.
  • Validate models: Test the model’s performance on unseen data to ensure generalizability.
  • Consider causal relationships: Be cautious in interpreting correlations and explore causal mechanisms before inferring causation.
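One common safeguard is cross-validation, sketched below with scikit-learn on synthetic data: a deliberately unrestricted decision tree is compared with a depth-limited one on held-out folds (the data and depth settings are assumptions for the example):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeRegressor

# Synthetic data with a simple underlying relationship plus noise
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(150, 1))
y = X[:, 0] ** 2 + rng.normal(scale=1.0, size=150)

# Compare model complexities using 5-fold cross-validation on unseen folds
for depth in (None, 3):
    model = DecisionTreeRegressor(max_depth=depth, random_state=0)
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    label = "unrestricted depth" if depth is None else f"max_depth={depth}"
    print(f"{label}: mean cross-validated R^2 = {scores.mean():.3f}")
```

A model that fits the training data closely but scores poorly on the held-out folds is likely overfitting.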

Misinterpretation of Statistical Results

Misinterpretation of statistical results can lead to inaccurate conclusions and misguided actions. Common pitfalls include misreading p-values, misinterpreting confidence intervals, and misattributing causality. To prevent misinterpretation:

  • Understand statistical concepts: Familiarize yourself with key statistical concepts, such as p-values, confidence intervals, and effect sizes.
  • Provide context: Consider the broader context, study design, and limitations when interpreting statistical results.
  • Consult experts: Seek guidance from statisticians or domain experts to ensure accurate interpretation.

Simpson’s Paradox and Confounding Variables

Simpson’s paradox occurs when a trend or relationship observed within subgroups of data reverses when the groups are combined. Confounding variables, also called lurking variables, can distort the interpretation of relationships between variables. To address these challenges:

  • Account for confounding variables: Identify and account for potential confounders when analyzing relationships between variables.
  • Analyze subgroups: Analyze data within subgroups to identify patterns and trends, ensuring the validity of interpretations.
  • Contextualize interpretations: Consider the potential impact of confounding variables and provide nuanced interpretations.

Best Practices for Effective Data Interpretation: Making Informed Decisions

Effective data interpretation relies on following best practices throughout the entire process, from data collection to drawing conclusions. By adhering to these best practices, you can enhance the accuracy and validity of your interpretations.

Clearly Define Research Questions and Objectives

Before embarking on data interpretation, clearly define your research questions and objectives. This clarity will guide your analysis, ensuring you focus on the most relevant aspects of the data.

Use Appropriate Statistical Methods for the Data Type

Select the appropriate statistical methods based on the nature of your data. Different data types require different analysis techniques, so choose the methods that best align with your data characteristics.

Conduct Sensitivity Analysis and Robustness Checks

Perform sensitivity analysis and robustness checks to assess the stability and reliability of your results. Varying assumptions, sample sizes, or methodologies can help validate the robustness of your interpretations.

Communicate Findings Accurately and Effectively

When communicating your data interpretations, consider your audience and their level of understanding. Present your findings in a clear, concise, and visually appealing manner to effectively convey the insights derived from your analysis.

Data Interpretation Examples: Applying Techniques to Real-World Scenarios

To gain a better understanding of how data interpretation techniques can be applied in practice, let’s explore some real-world examples. These examples demonstrate how different industries and domains leverage data interpretation to extract meaningful insights and drive decision-making.

Example 1: Retail Sales Analysis

A retail company wants to analyze its sales data to uncover patterns and optimize its marketing strategies. By applying data interpretation techniques, they can:

  • Perform sales trend analysis: Analyze sales data over time to identify seasonal patterns, peak sales periods, and fluctuations in customer demand.
  • Conduct customer segmentation: Segment customers based on purchase behavior, demographics, or preferences to personalize marketing campaigns and offers.
  • Analyze product performance: Examine sales data for each product category to identify top-selling items, underperforming products, and opportunities for cross-selling or upselling.
  • Evaluate marketing campaigns: Analyze the impact of marketing initiatives on sales by comparing promotional periods, advertising channels, or customer responses.
  • Forecast future sales: Utilize historical sales data and predictive models to forecast future sales trends, helping the company optimize inventory management and resource allocation.

Example 2: Healthcare Outcome Analysis

A healthcare organization aims to improve patient outcomes and optimize resource allocation. Through data interpretation, they can:

  • Analyze patient data: Extract insights from electronic health records, medical history, and treatment outcomes to identify factors impacting patient outcomes.
  • Identify risk factors: Analyze patient populations to identify common risk factors associated with specific medical conditions or adverse events.
  • Conduct comparative effectiveness research: Compare different treatment methods or interventions to assess their impact on patient outcomes and inform evidence-based treatment decisions.
  • Optimize resource allocation: Analyze healthcare utilization patterns to allocate resources effectively, optimize staffing levels, and improve operational efficiency.
  • Evaluate intervention effectiveness: Analyze intervention programs to assess their effectiveness in improving patient outcomes, such as reducing readmission rates or hospital-acquired infections.

Example 3: Financial Investment Analysis

An investment firm wants to make data-driven investment decisions and assess portfolio performance. By applying data interpretation techniques, they can:

  • Perform market trend analysis: Analyze historical market data, economic indicators, and sector performance to identify investment opportunities and predict market trends.
  • Conduct risk analysis: Assess the risk associated with different investment options by analyzing historical returns, volatility, and correlations with market indices.
  • Perform portfolio optimization: Utilize quantitative models and optimization techniques to construct diversified portfolios that maximize returns while managing risk.
  • Monitor portfolio performance: Analyze portfolio returns, compare them against benchmarks, and conduct attribution analysis to identify the sources of portfolio performance.
  • Perform scenario analysis: Assess the impact of potential market scenarios, economic changes, or geopolitical events on investment portfolios to inform risk management strategies.

These examples illustrate how data interpretation techniques can be applied across various industries and domains. By leveraging data effectively, organizations can unlock valuable insights, optimize strategies, and make informed decisions that drive success.

Data interpretation is a fundamental skill for unlocking the power of data and making informed decisions. By understanding the various techniques, best practices, and challenges in data interpretation, you can confidently navigate the complex landscape of data analysis and uncover valuable insights.

As you embark on your data interpretation journey, remember to embrace curiosity, rigor, and a continuous learning mindset. The ability to extract meaningful insights from data will empower you to drive positive change in your organization or field.

Data Interpretation: Definition, Method, Benefits & Examples

In today's digital world, any business owner understands the importance of collecting, analyzing, and interpreting data. Some statistical methods are always employed in this process. Continue reading to learn how to make the most of your data.


Syracuse University defined data interpretation as the process of assigning meaning to the collected information and determining the conclusions, significance, and implications of the findings. In other words, it is the step in which you give meaning to the cleaned raw data you have collected.

Data interpretation is the final step of data analysis. This is where you turn results into actionable items. To better understand it, here are two examples of interpreting data:


Let's say you have segmented your user base into four age groups. A company can then see which age group is most engaged with its content or product. Based on bar charts or pie charts, it can either develop a marketing strategy to make the product more appealing to less-engaged groups or develop an outreach strategy that expands on its core user base.

Another case of data interpretation is how companies use recruitment CRM. They use it to source, track, and manage their entire hiring pipeline to see how they can automate their workflow better. This helps companies save time and improve productivity.


Data interpretation is conducted in four steps:

  • Assembling the information you need (like bar graphs and pie charts);
  • Developing findings or isolating the most relevant inputs;
  • Developing conclusions;
  • Coming up with recommendations or actionable solutions.

Considering how these findings dictate the course of action, data analysts must be accurate with their conclusions and examine the raw data from multiple angles. Different variables may point to various problems, so being able to backtrack the data and repeat the analysis using different templates is an integral part of a successful business strategy.

To interpret data accurately, users should be aware of potential pitfalls present within this process. You need to ask yourself if you are mistaking correlation for causation. If two things occur together, it does not indicate that one caused the other.


The second thing you need to be aware of is your own confirmation bias. This occurs when you try to prove a point or a theory and focus only on the patterns or findings that support that theory while discarding those that do not.

The third problem is irrelevant data. To be specific, you need to make sure that the data you have collected and analyzed is relevant to the problem you are trying to solve.

Data analysts or data analytics tools help people make sense of the numerical data that has been aggregated, transformed, and displayed. There are two main methods for data interpretation: quantitative and qualitative.

Qualitative Data Interpretation Method

This is a method for breaking down or analyzing so-called qualitative data, also known as categorical data. It is important to note that no bar graphs or line charts are used in this method. Instead, it relies on text. Because qualitative data is collected through person-to-person techniques, it isn't easy to present using a numerical approach.

Surveys are used to collect data because they allow you to assign numerical values to answers, making them easier to analyze. Relying solely on the text would be a time-consuming and error-prone process. This is why the data must be transformed.

Quantitative Data Interpretation Method

This data interpretation method is applied when we are dealing with quantitative or numerical data. Since we are dealing with numbers, the values can be displayed in a bar chart or pie chart. There are two main types: discrete and continuous. Moreover, numbers are easier to analyze since they support statistical measures such as the mean and standard deviation.

The mean is the average value of a data set, calculated by dividing the sum of the values in the set by the number of values in that set.

Standard deviation is a measure used to ascertain how responses align with or deviate from the mean. It relies on the mean to describe the consistency of the replies within a particular data set. You can use it, for example, when calculating the average pay for a certain profession and then displaying the upper and lower values in the data set.
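For example, a short Python sketch with made-up salary figures shows how the mean and standard deviation summarize a data set in this way:

```python
import numpy as np

# Hypothetical annual salaries (USD) for one profession
salaries = np.array([48000, 52000, 50000, 61000, 47000, 55000, 53000, 49000])

mean = salaries.mean()
std = salaries.std(ddof=1)  # sample standard deviation

print(f"Average pay: {mean:,.0f}")
print(f"Typical range (mean +/- 1 SD): {mean - std:,.0f} to {mean + std:,.0f}")
```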

As stated, some tools can do this automatically, especially when it comes to quantitative data. Whatagraph is one such tool, as it can aggregate data from multiple sources using different system integrations. It will also automatically organize and analyze the data, which can later be displayed in pie charts, line charts, or bar charts, however you wish.


Several benefits of data interpretation explain its significance within the corporate world, the medical industry, and the financial industry:


Anticipating needs and identifying trends. Data analysis provides users with relevant insights that they can use to forecast trends based on customer concerns and expectations.

For example, a large number of people are concerned about privacy and the leakage of personal information. Products that provide greater protection and anonymity are more likely to become popular.


Clear foresight. Companies that analyze and aggregate data better understand their own performance and how consumers perceive them. This provides them with a better understanding of their shortcomings, allowing them to work on solutions that will significantly improve their performance.


What is data interpretation? Tricks & techniques

Raw data by itself isn't helpful to research without data interpretation. The need to organize and analyze data so that research can produce actionable insights and develop new knowledge affirms the importance of the data interpretation process.


Let's look at why data interpretation is important to the research process, how you can interpret data, and how the tools in ATLAS.ti can help you look at your data in meaningful ways.

The data collection process is just one part of research, and one that can often provide a lot of data without any easy answers that instantly stick out to researchers or their audiences. An example of data that requires an interpretation process is a corpus, or a large body of text, meant to represent some language use (e.g., literature, conversation). A corpus of text can collect millions of words from written texts and spoken interactions.

Challenge of data interpretation

While this is an impressive body of data, sifting through this corpus can be difficult. If you are trying to make assertions about language based on the corpus data, what data is useful to you? How do you separate irrelevant data from valuable insights? How can you persuade your audience to understand your research?

Data interpretation is a process that involves assigning meaning to the data. A researcher's responsibility is to explain and persuade their research audience on how they see the data and what insights can be drawn from their interpretation.

Interpreting raw data to produce insights

Unstructured data is any sort of data that is not organized by some predetermined structure or that is in its raw, naturally occurring form. Without data analysis, the data is difficult to interpret and use to generate useful insights.

This unstructured data is not always mindless noise, however. The importance of data interpretation can be seen in examples like a blog with a series of articles on a particular subject or a cookbook with a collection of recipes. These pieces of writing are useful and perhaps interesting to readers of various backgrounds or knowledge bases.

Data interpretation starting with research inquiry

People can read a set of information, such as a blog article or a recipe, in different ways (some may read the ingredients first while others skip to the directions). Data interpretation grounds the understanding and reporting of the research in clearly defined terms such that, even if different scholars disagree on the findings of the research, they at least share a foundational understanding of how the research is interpreted.

Moreover, suppose someone is reading a set of recipes to understand the food culture of a particular place or group of people. A straightforward recipe may not explicitly or neatly convey this information. Still, a thorough reader can analyze bits and pieces of each recipe in that cookbook to understand the ingredients, tools, and methods used in that particular food culture.

As a result, your research inquiry may require you to reorganize the data in a way that allows for easier data interpretation. Analyzing data as a part of the interpretation process, especially in qualitative research, means looking for the relevant data, summarizing the data for the insights it holds, and discarding any irrelevant data that is not useful to the given research inquiry.


Let's look at a fairly straightforward process that can be used to turn data into valuable insights through data interpretation.

Sorting the data

Think about our previous example with a collection of recipes. You can break down a recipe into various "data points," which you might consider categories or points of measurement. A recipe can be broken down into ingredients, directions, or even preparation time, things that are often written into a recipe. Or you might look at recipes from a different angle using less observed categories, such as the cost to make the recipe or skills required to make the recipe. Whatever categories you choose, however, will determine how you interpret the data.

As a result, think about what you are trying to examine and identify what categories or measures should be used to analyze and understand the data. These data points will form your "buckets" to sort your collected data into more meaningful information for data interpretation.

Identifying trends and patterns

Once you've sorted enough of the data into your categorical buckets, you might begin to notice some telling patterns. Suppose you are analyzing a cookbook of barbecue recipes for nutritional value. In that case, you might find an abundance of recipes with high fat and sugar, while a collection of salad recipes might yield patterns of dishes with low carbohydrates. These patterns will form the basis for answering your research inquiry.

Drawing connections

The meaning of these trends and patterns is not always self-evident. When people wear the same trendy clothes or listen to the same popular music, they may do so because the clothing or music is genuinely good or because they are following the crowd. They may even be trying to impress someone they know.

As you look at the patterns in your data, you can start to look at whether the patterns coincide (or co-occur) to determine a starting point for discussion about whether they are related to each other. Whether these co-occurrences share a meaningful relationship or are only loosely correlated with each other, all data interpretation of patterns starts by looking within and across patterns and co-occurrences among them.


Quantitative Data Interpretation

Quantitative analysis through statistical methods benefits researchers who are looking to measure a particular phenomenon. Numerical data can measure the different degrees of a concept, such as temperature, speed, wealth, or even academic achievement.

Quantitative data analysis is a matter of rearranging the data to make it easier to measure. Imagine sorting a child's piggy bank full of coins into different types of coins (e.g., pennies, nickels, dimes, and quarters). Without sorting these coins for measurement, it becomes difficult to efficiently measure the value of the coins in that piggy bank.

Quantitative data interpretation method

A good data interpretation question regarding that child's piggy bank might be, "Has the child saved up enough money?" Then it's a matter of deciding what "enough money" might be, whether it's $20, $50, or even $100. Once that determination has been made, you can then answer your question after your quantitative analysis (i.e., counting the coins).

Although counting the money in a child’s piggy bank is a simple example, it illustrates the fact that a lot of quantitative data interpretation depends on having a particular value or set of values in mind against which your analysis will be compared. The number of calories or the amount of sodium you might consider healthy will allow you to determine whether a particular food is healthy. At the same time, your monthly income will inform whether you see a certain product as cheap or expensive. In any case, interpreting quantitative data often starts with having a set theory or prediction that you apply to the data.


Qualitative Data Interpretation

Data interpretation refers to the process of examining and reviewing data for the purpose of describing the aspects of a phenomenon or concept. Qualitative research seldom has numerical data arising from data collection; instead, qualities of a phenomenon are often generated from this research. With this in mind, the role of data interpretation is to persuade research audiences as to what qualities in a particular concept or phenomenon are significant.

While there are many different ways to analyze complex data that is qualitative in nature, here is a simple process for data interpretation that might be persuasive to your research audience:

  • Describe data in explicit detail - what is happening in the data?
  • Describe the meaning of the data - why is it important?
  • Describe the significance - what can this meaning be used for?

Qualitative data interpretation method

Coding remains one of the most important data interpretation methods in qualitative research. Coding provides a structure to the data that facilitates empirical analysis. Without this coding, a researcher can give their impression of what the data means but may not be able to persuade their audience with the sufficient evidence that structured data can provide.

Ultimately, coding reduces the breadth of the collected data to make it more manageable. Instead of thousands of lines of raw data, effective coding can produce a couple of dozen codes that can be analyzed for frequency or used to organize categorical data along the lines of themes or patterns. Analyzing qualitative data through coding involves closely looking at the data and summarizing data segments into short but descriptive phrases. These phrases or codes, when applied throughout entire data sets, can help to restructure the data in a manner that allows for easier analysis or greater clarity as to the meaning of the data relevant to the research inquiry.

Code-Document Analysis

A comparison of data sets can be useful to interpret patterns in the data. Code-Document Analysis in ATLAS.ti looks for code frequencies in particular documents or document groups. This is useful for many tasks, such as interpreting perspectives across multiple interviews or survey records. Where each document represents the opinions of a distinct person, how do perspectives differ from person to person? Understanding these differences, in this case, starts with determining where the interpretive codes in your project are applied.

Software is great at accomplishing mechanical tasks that would otherwise take time and effort better spent on analysis. Such tasks include searching for words or phrases across documents, completing complicated queries to organize the relevant information in one place, and employing statistical methods to allow the researcher to reach relevant conclusions about their data. What technology cannot do is interpret data for you; it can reorganize the data in a way that allows you to more easily reach a conclusion as to the insights you can draw from the research, but ultimately it is up to you to make the final determination as to the meaning of the patterns in the data.

This is true whether you are engaged in qualitative or quantitative research. Whether you are trying to define "happiness" or "hot" (because a "hot day" will mean different things to different people, regardless of the number representing the temperature), it is inevitably your decision to interpret the data you're given, regardless of the help a computer may provide to you.

Think of qualitative data analysis software like ATLAS.ti as an assistant to support you through the research process so you can identify key insights from your data, as opposed to identifying those insights for you. This is especially preferable in the social sciences, where human interaction and cultural practices are subjectively and socially constructed in a way that only humans can adequately understand. Human interpretation of qualitative data is not merely unavoidable; in the social sciences, it is an outright necessity.


With this in mind, ATLAS.ti has several tools that can help make interpreting data easier and more insightful. These tools can facilitate the reporting and visualization of the data analysis for your benefit and the benefit of your research audience.

Code Co-Occurrence Analysis

The overlapping of codes in qualitative data is a useful starting point to determine relationships between phenomena. ATLAS.ti's Code Co-Occurrence Analysis tool helps researchers identify relationships between codes so that data interpretation regarding any possible connections can contribute to a greater understanding of the data.


Memos are an important part of any research, which is why ATLAS.ti provides a space separate from your data and codes for research notes and reflection memos. Especially in the social sciences or any field that explores socially constructed concepts, a reflective memo can provide essential documentation of how researchers are involved in data gathering and data interpretation.


With memos, the steps of analysis can be traced, and the entire process is open to view. Detailed documentation of the data analysis and data interpretation process can also facilitate the reporting and visualization of research when it comes time to share the research with audiences.


Data Visualization

In research, the main objective in explicitly conducting and detailing your data interpretation process is to report your research in a manner that is meaningful and persuasive to your audience. Where possible, researchers benefit from visualizing their data interpretation to provide research audiences with the necessary clarity to understand the findings of the research.

Ultimately, the various data analysis processes you employ should lead to some form of reporting where the research audience can easily understand the data interpretation. Otherwise, data interpretation holds no value if it is not understood, let alone accepted, by the research audience.

Data visualization tools in ATLAS.ti

ATLAS.ti has a number of tools that can assist with creating illustrations that contribute to explaining your data interpretation to your research audience.


A TreeMap of your codes can be a useful visualization if you are conducting a thematic analysis of your data. Codes in ATLAS.ti can be marked by different colors, which is illustrative if you use colors to distinguish between different themes in your research. As codes are applied to your data, the more frequently occurring codes take up more space in the TreeMap, allowing you to examine which codes and, by use of colors, which themes are more and less apparent and help you generate theory.


Sankey diagrams

The Code Co-Occurrence and Code-Document Analyses in ATLAS.ti can produce tables, graphs, and also Sankey diagrams, which are useful for visualizing the relative relationships between different codes or between codes and documents. While numerical data generated for tables can tell one story of your data interpretation, the visual information in a Sankey diagram, where higher frequencies are represented by thicker lines, can be particularly persuasive to your research audience.


When it comes time to report actionable insights contributing to a theory or conceptualization, you can benefit from a visualization of the theory you have generated from your data interpretation. Networks are made up of elements of your project, usually codes, but also other elements such as documents, code groups, document groups, quotations, and memos. Researchers can then define links between these elements to illustrate connections that arise from your data interpretation.



  • Biochemistry
  • Bioinformatics and Computational Biology
  • Developmental Biology
  • Ecology and Conservation
  • Evolutionary Biology
  • Genetics and Genomics
  • Microbiology
  • Molecular and Cell Biology
  • Natural History
  • Plant Sciences and Forestry
  • Research Methods in Life Sciences
  • Structural Biology
  • Systems Biology
  • Zoology and Animal Sciences
  • Browse content in Chemistry
  • Analytical Chemistry
  • Computational Chemistry
  • Crystallography
  • Environmental Chemistry
  • Industrial Chemistry
  • Inorganic Chemistry
  • Materials Chemistry
  • Medicinal Chemistry
  • Mineralogy and Gems
  • Organic Chemistry
  • Physical Chemistry
  • Polymer Chemistry
  • Study and Communication Skills in Chemistry
  • Theoretical Chemistry
  • Browse content in Computer Science
  • Artificial Intelligence
  • Computer Architecture and Logic Design
  • Game Studies
  • Human-Computer Interaction
  • Mathematical Theory of Computation
  • Programming Languages
  • Software Engineering
  • Systems Analysis and Design
  • Virtual Reality
  • Browse content in Computing
  • Business Applications
  • Computer Security
  • Computer Games
  • Computer Networking and Communications
  • Digital Lifestyle
  • Graphical and Digital Media Applications
  • Operating Systems
  • Browse content in Earth Sciences and Geography
  • Atmospheric Sciences
  • Environmental Geography
  • Geology and the Lithosphere
  • Maps and Map-making
  • Meteorology and Climatology
  • Oceanography and Hydrology
  • Palaeontology
  • Physical Geography and Topography
  • Regional Geography
  • Soil Science
  • Urban Geography
  • Browse content in Engineering and Technology
  • Agriculture and Farming
  • Biological Engineering
  • Civil Engineering, Surveying, and Building
  • Electronics and Communications Engineering
  • Energy Technology
  • Engineering (General)
  • Environmental Science, Engineering, and Technology
  • History of Engineering and Technology
  • Mechanical Engineering and Materials
  • Technology of Industrial Chemistry
  • Transport Technology and Trades
  • Browse content in Environmental Science
  • Applied Ecology (Environmental Science)
  • Conservation of the Environment (Environmental Science)
  • Environmental Sustainability
  • Environmentalist Thought and Ideology (Environmental Science)
  • Management of Land and Natural Resources (Environmental Science)
  • Natural Disasters (Environmental Science)
  • Nuclear Issues (Environmental Science)
  • Pollution and Threats to the Environment (Environmental Science)
  • Social Impact of Environmental Issues (Environmental Science)
  • History of Science and Technology
  • Browse content in Materials Science
  • Ceramics and Glasses
  • Composite Materials
  • Metals, Alloying, and Corrosion
  • Nanotechnology
  • Browse content in Mathematics
  • Applied Mathematics
  • Biomathematics and Statistics
  • History of Mathematics
  • Mathematical Education
  • Mathematical Finance
  • Mathematical Analysis
  • Numerical and Computational Mathematics
  • Probability and Statistics
  • Pure Mathematics
  • Browse content in Neuroscience
  • Cognition and Behavioural Neuroscience
  • Development of the Nervous System
  • Disorders of the Nervous System
  • History of Neuroscience
  • Invertebrate Neurobiology
  • Molecular and Cellular Systems
  • Neuroendocrinology and Autonomic Nervous System
  • Neuroscientific Techniques
  • Sensory and Motor Systems
  • Browse content in Physics
  • Astronomy and Astrophysics
  • Atomic, Molecular, and Optical Physics
  • Biological and Medical Physics
  • Classical Mechanics
  • Computational Physics
  • Condensed Matter Physics
  • Electromagnetism, Optics, and Acoustics
  • History of Physics
  • Mathematical and Statistical Physics
  • Measurement Science
  • Nuclear Physics
  • Particles and Fields
  • Plasma Physics
  • Quantum Physics
  • Relativity and Gravitation
  • Semiconductor and Mesoscopic Physics
  • Browse content in Psychology
  • Affective Sciences
  • Clinical Psychology
  • Cognitive Psychology
  • Cognitive Neuroscience
  • Criminal and Forensic Psychology
  • Developmental Psychology
  • Educational Psychology
  • Evolutionary Psychology
  • Health Psychology
  • History and Systems in Psychology
  • Music Psychology
  • Neuropsychology
  • Organizational Psychology
  • Psychological Assessment and Testing
  • Psychology of Human-Technology Interaction
  • Psychology Professional Development and Training
  • Research Methods in Psychology
  • Social Psychology
  • Browse content in Social Sciences
  • Browse content in Anthropology
  • Anthropology of Religion
  • Human Evolution
  • Medical Anthropology
  • Physical Anthropology
  • Regional Anthropology
  • Social and Cultural Anthropology
  • Theory and Practice of Anthropology
  • Browse content in Business and Management
  • Business Ethics
  • Business Strategy
  • Business History
  • Business and Technology
  • Business and Government
  • Business and the Environment
  • Comparative Management
  • Corporate Governance
  • Corporate Social Responsibility
  • Entrepreneurship
  • Health Management
  • Human Resource Management
  • Industrial and Employment Relations
  • Industry Studies
  • Information and Communication Technologies
  • International Business
  • Knowledge Management
  • Management and Management Techniques
  • Operations Management
  • Organizational Theory and Behaviour
  • Pensions and Pension Management
  • Public and Nonprofit Management
  • Strategic Management
  • Supply Chain Management
  • Browse content in Criminology and Criminal Justice
  • Criminal Justice
  • Criminology
  • Forms of Crime
  • International and Comparative Criminology
  • Youth Violence and Juvenile Justice
  • Development Studies
  • Browse content in Economics
  • Agricultural, Environmental, and Natural Resource Economics
  • Asian Economics
  • Behavioural Finance
  • Behavioural Economics and Neuroeconomics
  • Econometrics and Mathematical Economics
  • Economic History
  • Economic Systems
  • Economic Methodology
  • Economic Development and Growth
  • Financial Markets
  • Financial Institutions and Services
  • General Economics and Teaching
  • Health, Education, and Welfare
  • History of Economic Thought
  • International Economics
  • Labour and Demographic Economics
  • Law and Economics
  • Macroeconomics and Monetary Economics
  • Microeconomics
  • Public Economics
  • Urban, Rural, and Regional Economics
  • Welfare Economics
  • Browse content in Education
  • Adult Education and Continuous Learning
  • Care and Counselling of Students
  • Early Childhood and Elementary Education
  • Educational Equipment and Technology
  • Educational Strategies and Policy
  • Higher and Further Education
  • Organization and Management of Education
  • Philosophy and Theory of Education
  • Schools Studies
  • Secondary Education
  • Teaching of a Specific Subject
  • Teaching of Specific Groups and Special Educational Needs
  • Teaching Skills and Techniques
  • Browse content in Environment
  • Applied Ecology (Social Science)
  • Climate Change
  • Conservation of the Environment (Social Science)
  • Environmentalist Thought and Ideology (Social Science)
  • Natural Disasters (Environment)
  • Social Impact of Environmental Issues (Social Science)
  • Browse content in Human Geography
  • Cultural Geography
  • Economic Geography
  • Political Geography
  • Browse content in Interdisciplinary Studies
  • Communication Studies
  • Museums, Libraries, and Information Sciences
  • Browse content in Politics
  • African Politics
  • Asian Politics
  • Chinese Politics
  • Comparative Politics
  • Conflict Politics
  • Elections and Electoral Studies
  • Environmental Politics
  • European Union
  • Foreign Policy
  • Gender and Politics
  • Human Rights and Politics
  • Indian Politics
  • International Relations
  • International Organization (Politics)
  • International Political Economy
  • Irish Politics
  • Latin American Politics
  • Middle Eastern Politics
  • Political Behaviour
  • Political Economy
  • Political Institutions
  • Political Methodology
  • Political Communication
  • Political Philosophy
  • Political Sociology
  • Political Theory
  • Politics and Law
  • Public Policy
  • Public Administration
  • Quantitative Political Methodology
  • Regional Political Studies
  • Russian Politics
  • Security Studies
  • State and Local Government
  • UK Politics
  • US Politics
  • Browse content in Regional and Area Studies
  • African Studies
  • Asian Studies
  • East Asian Studies
  • Japanese Studies
  • Latin American Studies
  • Middle Eastern Studies
  • Native American Studies
  • Scottish Studies
  • Browse content in Research and Information
  • Research Methods
  • Browse content in Social Work
  • Addictions and Substance Misuse
  • Adoption and Fostering
  • Care of the Elderly
  • Child and Adolescent Social Work
  • Couple and Family Social Work
  • Developmental and Physical Disabilities Social Work
  • Direct Practice and Clinical Social Work
  • Emergency Services
  • Human Behaviour and the Social Environment
  • International and Global Issues in Social Work
  • Mental and Behavioural Health
  • Social Justice and Human Rights
  • Social Policy and Advocacy
  • Social Work and Crime and Justice
  • Social Work Macro Practice
  • Social Work Practice Settings
  • Social Work Research and Evidence-based Practice
  • Welfare and Benefit Systems
  • Browse content in Sociology
  • Childhood Studies
  • Community Development
  • Comparative and Historical Sociology
  • Economic Sociology
  • Gender and Sexuality
  • Gerontology and Ageing
  • Health, Illness, and Medicine
  • Marriage and the Family
  • Migration Studies
  • Occupations, Professions, and Work
  • Organizations
  • Population and Demography
  • Race and Ethnicity
  • Social Theory
  • Social Movements and Social Change
  • Social Research and Statistics
  • Social Stratification, Inequality, and Mobility
  • Sociology of Religion
  • Sociology of Education
  • Sport and Leisure
  • Urban and Rural Studies
  • Browse content in Warfare and Defence
  • Defence Strategy, Planning, and Research
  • Land Forces and Warfare
  • Military Administration
  • Military Life and Institutions
  • Naval Forces and Warfare
  • Other Warfare and Defence Issues
  • Peace Studies and Conflict Resolution
  • Weapons and Equipment

The Oxford Handbook of Qualitative Research (2nd edn)

31 Interpretation In Qualitative Research: What, Why, How

Allen Trent, College of Education, University of Wyoming

Jeasik Cho, Department of Educational Studies, University of Wyoming

  • Published: 02 September 2020

This chapter addresses a wide range of concepts related to interpretation in qualitative research, examines the meaning and importance of interpretation in qualitative inquiry, and explores the ways methodology, data, and the self/researcher as instrument interact and impact interpretive processes. Additionally, the chapter presents a series of strategies for qualitative researchers engaged in the process of interpretation and closes by presenting a framework for qualitative researchers designed to inform their interpretations. The framework includes attention to the key qualitative research concepts transparency, reflexivity, analysis, validity, evidence, and literature. Four questions frame the chapter: What is interpretation, and why are interpretive strategies important in qualitative research? How do methodology, data, and the researcher/self impact interpretation in qualitative research? How do qualitative researchers engage in the process of interpretation? And, in what ways can a framework for interpretation strategies support qualitative researchers across multiple methodologies and paradigms?

“ All human knowledge takes the form of interpretation.” In this seemingly simple statement, the late German philosopher Walter Benjamin asserted that all knowledge is mediated and constructed. In doing so, he situates himself as an interpretivist, one who believes that human subjectivity, individuals’ characteristics, feelings, opinions, and experiential backgrounds impact observations, analysis of these observations, and resultant knowledge/truth constructions. Hammersley ( 2013 ) noted,

People—unlike atoms … actively interpret or make sense of their environment and of themselves; the ways in which they do this are shaped by the particular cultures in which they live; and these distinctive cultural orientations will strongly influence not only what they believe but also what they do. (p. 26)

Contrast this perspective with positivist claims that knowledge is based exclusively on external facts, objectively observed and recorded. Interpretivists, then, reason that if positivistic notions of knowledge and truth are inadequate to explain social phenomena, then positivist, hard science approaches to research (i.e., the scientific method and its variants) are also inadequate and can even have a detrimental impact. According to Polanyi (1967), “The ideal of exact science would turn out to be fundamentally misleading and possibly a source of devastating fallacies” (as cited in Packer, 2018 , p. 71). So, although the literature often contrasts quantitative and qualitative research as largely a difference in the kinds of data employed (numerical vs. linguistic), the primary differentiation is instead in the foundational, paradigmatic assumptions about truth, knowledge, and objectivity.

This chapter is about interpretation and the strategies that qualitative researchers use to interpret a wide variety of “texts.” Knowledge, we assert, is constructed, both individually (constructivism) and socially (constructionism). We accept this as our starting point. Our aim here is to share our perspective on a broad set of concepts associated with the interpretive, or meaning-making, process. Although it may happen at different times and in different ways, interpretation is part of almost all qualitative research.

Qualitative research is an umbrella term that encompasses a wide array of paradigmatic views, goals, and methods. Still, there are key unifying elements that include a generally constructionist epistemological standpoint, attention to primarily linguistic data, and generally accepted protocols or syntax for conducting research. Typically, qualitative researchers begin with a starting point—a curiosity, a problem in need of solutions, a research question, and/or a desire to better understand a situation from the “native” perspectives of the individuals who inhabit that context. This is what anthropologists call the emic , or insider’s, perspective. Olivier de Sardan ( 2015 ) wrote, “It evokes the meaning that social facts have for the actors concerned. It is opposed to the term etic , which, at times, designates more external or ‘objective’ data, and, at others, the researcher’s interpretive analysis” (p. 65).

From this starting point, researchers determine the appropriate kinds of data to collect, engage in fieldwork as participant observers to gather these data, organize the data, look for patterns, and attempt to understand the emic perspectives while integrating their own emergent interpretations. Researchers construct meaning from data by synthesizing research “findings,” “assertions,” or “theories” that can be shared so that others may also gain insights from the conducted inquiry. This interpretive process has a long history; hermeneutics, the theory of interpretation, blossomed in the 17th century in the form of biblical exegesis (Packer, 2018 ).

Although there are commonalities that cut across most forms of qualitative research, this is not to say that there is an accepted, linear, standardized approach. To be sure, there are an infinite number of variations and nuances in the qualitative research process. For example, some forms of inquiry begin with a firm research question; others start without even a clear focus for study. Grounded theorists begin data analysis and interpretation very early in the research process, whereas some case study researchers, for example, may collect data in the field for a period of time before seriously considering the data and its implications. Some ethnographers may be a part of the context (e.g., observing in classrooms), but they may assume more observer-like roles, as opposed to actively participating in the context. Alternatively, action researchers, in studying issues related to their own practice, are necessarily situated toward the participant end of the participant–observer continuum.

Our focus here is on one integrated part of the qualitative research process, interpretation, the hermeneutic process of collective and individual “meaning making.” Like Willig ( 2017 ), we believe “interpretation is at the heart of qualitative research because qualitative research is concerned with meaning and the process of meaning-making … qualitative data … needs to be given meaning by the researcher” (p. 276). As we discuss throughout this chapter, researchers take a variety of approaches to interpretation in qualitative work. Four general questions guide our explorations:

What is interpretation, and why are interpretive strategies important in qualitative research?

How do methodology, data, and the researcher/self impact interpretation in qualitative research?

How do qualitative researchers engage in the process of interpretation?

In what ways can a framework for interpretation strategies support qualitative researchers across multiple methodological and paradigmatic views?

We address each of these guiding questions in our attempt to explicate our interpretation of “interpretation” and, as educational researchers, we include examples from our own work to illustrate some key concepts.

What Is Interpretation, and Why Are Interpretive Strategies Important in Qualitative Research?

Qualitative researchers and those writing about qualitative methods often intertwine the terms analysis and interpretation . For example, Hubbard and Power ( 2003 ) described data analysis as “bringing order, structure, and meaning to the data” (p. 88). To us, this description combines analysis with interpretation. Although there is nothing wrong with this construction, our understanding aligns more closely with Mills’s ( 2018 ) claim that, “put simply, analysis involves summarizing what’s in the data, whereas interpretation involves making sense of—finding meaning in—that data” (p. 176). Hesse-Biber ( 2017 ) also separated out the essential process of interpretation. She described the steps in qualitative analysis and interpretation as data preparation, data exploration, and data reduction (all part of Mills’s “analysis” processes), followed by interpretation (pp. 307–328). Willig ( 2017 ) elaborated: analysis, she claims, is “sober and systematic,” whereas interpretation is associated with “creativity and the imagination … interpretation is seen as stimulating, it is interesting and it can be illuminating” (p. 276). For the purpose of this chapter, we will adhere to Mills’s distinction, understanding analysis as summarizing and organizing and interpretation as meaning making. Unavoidably, these closely related processes overlap and interact, but our focus will be primarily on the more complex of these endeavors, interpretation. Interpretation, in this sense, is in part translation, but translation is not an objective act. Instead, translation necessarily involves selectivity and the ascribing of meaning. Qualitative researchers “aim beneath manifest behavior to the meaning events have for those who experience them” (Eisner, 1991 , p. 35). The presentation of these insider/emic perspectives, coupled with researchers’ own interpretations, is a hallmark of qualitative research.

Qualitative researchers have long borrowed from extant models for fieldwork and interpretation. Approaches from anthropology and the arts have become especially prominent. For example, Eisner’s ( 1991 ) form of qualitative inquiry, educational criticism , draws heavily on accepted models of art criticism. T. Barrett ( 2011 ), an authority on art criticism, described interpretation as a complex set of processes based on a set of principles. We believe many of these principles apply as readily to qualitative research as they do to critique. The following principles, adapted from T. Barrett’s principles of interpretation (2011), inform our examination:

Qualitative phenomena have “aboutness” : All social phenomena have meaning, but meanings in this context can be multiple, even contradictory.

Interpretations are persuasive arguments : All interpretations are arguments, and qualitative researchers, like critics, strive to build strong arguments grounded in the information, or data, available.

Some interpretations are better than others : Barrett noted that “some interpretations are better argued, better grounded with evidence, and therefore more reasonable, more certain, and more acceptable than others.” This contradicts the argument that “all interpretations are equal,” heard in the common refrain, “Well, that’s just your interpretation.”

There can be different, competing, and contradictory interpretations of the same phenomena : As noted at the beginning of this chapter, we acknowledge that subjectivity matters, and, unavoidably, it impacts one’s interpretations. As Barrett noted, “Interpretations are often based on a worldview.”

Interpretations are not (and cannot be) “right,” but instead, they can be more or less reasonable, convincing, and informative : There is never one “true” interpretation, but some interpretations are more compelling than others.

Interpretations can be judged by coherence, correspondence, and inclusiveness : Does the argument/interpretation make sense (coherence)? Does the interpretation fit the data (correspondence)? Have all data been attended to, including outlier data that do not necessarily support identified themes (inclusiveness)?

Interpretation is ultimately a communal endeavor : Initial interpretations may be incomplete, nearsighted, and/or narrow, but eventually these interpretations become richer, broader, and more inclusive. Feminist revisionist history projects are an exemplary case. Over time, the writing, art, and cultural contributions of countless women, previously ignored, diminished, or distorted, have come to be accepted as prominent contributions given serious consideration.

So, meaning is conferred; interpretations are socially constructed arguments; multiple interpretations are to be expected; and some interpretations are better than others. As we discuss later in this chapter, what makes an interpretation “better” often hinges on the purpose/goals of the research in question. Interpretations designed to generate theory, or generalizable rules, will be better for responding to research questions aligned with the aims of more traditional quantitative/positivist research, whereas interpretations designed to construct meanings through social interaction, to generate multiple perspectives, and to represent the context-specific perspectives of the research participants are better for researchers constructing thick, contextually rich descriptions, stories, or narratives. The former relies on more atomistic interpretive strategies, whereas the latter adheres to a more holistic approach (Willis, 2007 ). Both approaches to analysis/interpretation are addressed in more detail later in this chapter.

At this point, readers might ask, Why does interpretation matter, anyway? Our response to this question involves the distinctive nature of interpretation and the ability of the interpretive process to put unique fingerprints on an otherwise relatively static set of data. Once interview data are collected and transcribed (and we realize that even the process of transcription is, in part, interpretive), documents are collected, and observations are recorded, qualitative researchers could just, in good faith and with fidelity, represent the data in as straightforward ways as possible, allowing readers to “see for themselves” by sharing as much actual data (e.g., the transcribed words of the research participants) as possible. This approach, however, includes analysis, what we have defined as summarizing and organizing data for presentation, but it falls short of what we reference and define as interpretation—attempting to explain the meaning of others’ words and actions. According to Lichtman ( 2013 ),

While early efforts at qualitative research might have stopped at description, it is now more generally accepted that a qualitative researcher goes beyond pure description.… Many believe that it is the role of the researcher to bring understanding, interpretation, and meaning. (p. 17)

Because we are fond of the arts and arts-based approaches to qualitative research, an example from the late jazz drummer, Buddy Rich, seems fitting. Rich explains the importance of having the flexibility to interpret: “I don’t think any arranger should ever write a drum part for a drummer, because if a drummer can’t create his own interpretation of the chart, and he plays everything that’s written, he becomes mechanical; he has no freedom.” The same is true for qualitative researchers: without the freedom to interpret, the researcher merely regurgitates, attempting to share with readers/reviewers exactly what the research subjects shared with him or her. It is only through interpretation that the researcher, as collaborator with unavoidable subjectivities, is able to construct unique, contextualized meaning. Interpretation, then, in this sense, is knowledge construction.

In closing this section, we will illustrate the analysis-versus-interpretation distinction with the following transcript excerpt. In this study, the authors (Trent & Zorko, 2006 ) were studying student teaching from the perspective of K–12 students. This quote comes from a high school student in a focus group interview. She is describing a student teacher she had:

The right-hand column contains codes or labels applied to parts of the transcript text. Coding will be discussed in more depth later in this chapter, but for now, note that the codes are mostly summarizing the main ideas of the text, sometimes using the exact words of the research participant. This type of coding is a part of what we have called analysis—organizing and summarizing the data. It is a way of beginning to say “what is” there. As noted, though, most qualitative researchers go deeper. They want to know more than what is; they also ask, What does it mean? This is a question of interpretation.

Specific to the transcript excerpt, researchers might next begin to cluster the early codes into like groups. For example, the teacher “felt targeted,” “assumed kids were going to behave inappropriately,” and appeared to be “overwhelmed.” A researcher might cluster this group of codes in a category called “teacher feelings and perceptions” and may then cluster the codes “could not control class” and “students off task” into a category called “classroom management.” The researcher then, in taking a fresh look at these categories and the included codes, may begin to conclude that what is going on in this situation is that the student teacher does not have sufficient training in classroom management models and strategies and may also be lacking the skills she needs to build relationships with her students. These then would be interpretations, persuasive arguments connected to the study’s data. In this specific example, the researchers might proceed to write a memo about these emerging interpretations. In this memo, they might more clearly define their early categories and may also look through other data to see if there are other codes or categories that align with or overlap this initial analysis. They may write further about their emergent interpretations and, in doing so, may inform future data collection in ways that will allow them to either support or refute their early interpretations. These researchers will also likely find that the processes of analysis and interpretation are inextricably intertwined. Good interpretations very often depend on thorough and thoughtful analyses.
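To make the mechanics of this clustering step concrete, the sketch below (in Python) shows one way the code-to-category decisions described above might be recorded. The code and category labels are taken from the worked example in the preceding paragraph; the data structure itself is simply our illustrative assumption, not a prescribed tool or format.

    # A minimal sketch of recording code-to-category clustering decisions.
    # Code and category labels follow the worked example above; the structure
    # is illustrative only, not a prescribed tool or format.
    from collections import defaultdict

    # Codes applied to transcript segments during analysis (segment text abridged).
    coded_segments = [
        {"segment": "focus group excerpt 1", "code": "felt targeted"},
        {"segment": "focus group excerpt 2", "code": "assumed kids were going to behave inappropriately"},
        {"segment": "focus group excerpt 3", "code": "overwhelmed"},
        {"segment": "focus group excerpt 4", "code": "could not control class"},
        {"segment": "focus group excerpt 5", "code": "students off task"},
    ]

    # The researcher's clustering decisions: which code belongs to which category.
    category_of = {
        "felt targeted": "teacher feelings and perceptions",
        "assumed kids were going to behave inappropriately": "teacher feelings and perceptions",
        "overwhelmed": "teacher feelings and perceptions",
        "could not control class": "classroom management",
        "students off task": "classroom management",
    }

    # Grouping is analysis; interpretation begins when the researcher argues what
    # the pattern across categories means (e.g., insufficient preparation in
    # classroom management) and tests that argument against further data.
    categories = defaultdict(list)
    for item in coded_segments:
        categories[category_of[item["code"]]].append(item["code"])

    for name, codes in categories.items():
        print(name, "->", codes)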

How Do Methodology, Data, and the Researcher/Self Impact Interpretation in Qualitative Research?

Methodological conventions guide interpretation and the use of interpretive strategies. For example, in grounded theory and in similar methodological traditions, “formal analysis begins early in the study and is nearly completed by the end of data collection” (Bogdan & Biklen, 2007 , p. 73). Alternatively, for researchers from other traditions, for example, case study researchers, “formal analysis and theory development [interpretation] do not occur until after the data collection is near complete” (p. 73).

Researchers subscribing to methodologies that prescribe early data analysis and interpretation may employ methods like analytic induction or the constant comparison method. In using analytic induction, researchers develop a rough definition of the phenomena under study; collect data to compare to this rough definition; modify the definition as needed, based on cases that both fit and do not fit the definition; and, finally, establish a clear, universal definition (theory) of the phenomena (Robinson, 1951, cited in Bogdan & Biklen, 2007 , p. 73). Generally, those using a constant comparison approach begin data collection immediately; identify key issues, events, and activities related to the study that then become categories of focus; collect data that provide incidents of these categories; write about and describe the categories, accounting for specific incidents and seeking others; discover basic processes and relationships; and, finally, code and write about the categories as theory, “grounded” in the data (Glaser, 1965 ). Although processes like analytic induction and constant comparison can be listed as steps to follow, in actuality, these are more typically recursive processes in which the researcher repeatedly goes back and forth between the data and emerging analyses and interpretations.
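As a rough illustration of the recursive shape of constant comparison, the Python sketch below loops over successive batches of data, compares each incident with existing categories, and revises the category set as it goes. The keyword-overlap rule that stands in for researcher judgment, and the invented field-note incidents, are our assumptions; actual constant comparison is an interpretive, not mechanical, process.

    # A rough, runnable sketch of the constant-comparison loop described above.
    # Researcher judgment is crudely stood in for by keyword overlap, purely to
    # show the recursive shape of the process; the incidents are invented.

    def best_fitting_category(incident, categories):
        """Return an existing category whose incidents share a word with this one, else None."""
        words = set(incident.lower().split())
        for label, incidents in categories.items():
            if any(words & set(existing.lower().split()) for existing in incidents):
                return label
        return None

    def constant_comparison(batches):
        categories = {}   # category label -> list of incidents (data excerpts)
        memos = []        # running analytic notes about emerging relationships
        for batch in batches:                       # data collection begins immediately
            for incident in batch:
                label = best_fitting_category(incident, categories)
                if label is None:                   # incident does not fit: new category of focus
                    label = "category: " + incident.split()[0].lower()
                    categories[label] = []
                categories[label].append(incident)  # compare against prior incidents
            memos.append("after batch: " + ", ".join(sorted(categories)))
        return categories, memos                    # later written up as theory grounded in the data

    # Two invented rounds of field-note incidents.
    batches = [
        ["teacher repeats directions", "students whisper during lecture"],
        ["teacher raises voice", "students pass notes during lecture"],
    ]
    categories, memos = constant_comparison(batches)
    print(categories)
    print(memos)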

In addition to methodological conventions that prescribe data analysis early (e.g., grounded theory) or later (e.g., case study) in the inquiry process, methodological approaches also impact the general approach to analysis and interpretation. Ellingson ( 2011 ) situated qualitative research methodologies on a continuum spanning “science”-like approaches on one end juxtaposed with “art”-like approaches on the other.

Researchers pursuing a more science-oriented approach seek valid, reliable, generalizable knowledge; believe in neutral, objective researchers; and ultimately claim single, authoritative interpretations. Researchers adhering to these science-focused, postpositivistic approaches may count frequencies, emphasize the validity of the employed coding system, and point to intercoder reliability and random sampling as criteria that bolster the research credibility. Researchers at or near the science end of the continuum might employ analysis and interpretation strategies that include “paired comparisons,” “pile sorts,” “word counts,” identifying “key words in context,” and “triad tests” (Bernard, Wutich, & Ryan, 2017 , pp. 112, 381, 113, 170). These researchers may ultimately seek to develop taxonomies or other authoritative final products that organize and explain the collected data.
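For readers unfamiliar with these strategies, the short Python sketch below shows what two of them, word counts and key words in context (KWIC), look like in practice on a single invented interview snippet. The snippet and the keyword are ours; a real analysis would run over full, cleaned transcripts.

    # A small sketch of word counts and key-words-in-context (KWIC), two of the
    # more "science"-end strategies named above. The interview snippet and the
    # keyword are invented for illustration.
    from collections import Counter
    import re

    transcript = ("I felt supported by my mentor, but the mentor rarely observed my "
                  "teaching, so feedback on my teaching came late.")

    tokens = re.findall(r"[a-z']+", transcript.lower())

    # Word counts: frequency of each token across the transcript.
    print(Counter(tokens).most_common(5))

    # KWIC: print each occurrence of a keyword with a window of surrounding words.
    def kwic(tokens, keyword, window=3):
        for i, tok in enumerate(tokens):
            if tok == keyword:
                left = " ".join(tokens[max(0, i - window):i])
                right = " ".join(tokens[i + 1:i + 1 + window])
                print(left, "[" + keyword + "]", right)

    kwic(tokens, "teaching")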

For example, in a study we conducted about preservice teachers’ experiences learning to teach second-language learners, the researchers collected larger data sets and used a statistical analysis package to analyze survey data, and the resultant findings included descriptive statistics. These survey results were supported with open-ended, qualitative data. For example, one of the study’s findings was that “a strong majority of candidates (96%) agreed that an immersion approach alone will not guarantee academic or linguistic success for second language learners.” In narrative explanations, one preservice teacher, representative of many others, remarked, “There has to be extra instructional efforts to help their students learn English … they won’t learn English by merely sitting in the classrooms” (Cho, Rios, Trent, & Mayfield, 2012 , p. 75).
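The descriptive statistic quoted here is, computationally, no more than a percentage of agreement across survey responses. The toy Python lines below show the form of that calculation on an invented set of responses; the 96% figure itself comes from the study's actual data, which we do not reproduce.

    # Toy illustration of the kind of descriptive statistic reported above:
    # the percentage of respondents agreeing with a survey item. The responses
    # here are invented; only the form of the calculation is the point.
    responses = ["agree", "strongly agree", "agree", "disagree", "agree",
                 "strongly agree", "agree", "agree", "strongly agree", "agree"]

    agreeing = sum(r in ("agree", "strongly agree") for r in responses)
    print(f"{100 * agreeing / len(responses):.0f}% of candidates agreed")  # 90% for this invented sample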

Methodologies on the art side of Ellingson’s ( 2011 ) continuum, alternatively, “value humanistic, openly subjective knowledge, such as that embodied in stories, poetry, photography, and painting” (p. 599). Analysis and interpretation in these (often more contemporary) methodological approaches do not strive for “social scientific truth,” but instead are formulated to “enable us to learn about ourselves, each other, and the world through encountering the unique lens of a person’s (or a group’s) passionate rendering of a reality into a moving, aesthetic expression of meaning” (p. 599). For these “artistic/interpretivists, truths are multiple, fluctuating and ambiguous” (p. 599). Methodologies taking more subjective approaches to analysis and interpretation include autoethnography, testimonio, performance studies, feminist theorists/researchers, and others from related critical methodological forms of qualitative practice. More specifically arts-based approaches include poetic inquiry, fiction-based research, music as method, and dance and movement as inquiry (Leavy, 2017 ). Interpretation in these approaches is inherent. For example, “ interpretive poetry is understood as a method of merging the participant’s words with the researcher’s perspective” (Leavy, 2017 , p. 82).

As an example, one of us engaged in an artistic inquiry with a group of students in an art class for elementary teachers. We called it “Dreams as Data” and, among the project aims, we wanted to gather participants’ “dreams for education in the future” and display these dreams in an accessible, interactive, artistic display (see Trent, 2002 ). The intent was not to statistically analyze the dreams/data; instead, it was more universal. We wanted, as Ellingson ( 2011 , p. 599) noted, to use participant responses in ways that “enable us to learn about ourselves, each other, and the world.” The decision was made to leave responses intact and to share the whole/raw data set in the artistic display in ways that allowed the viewers to holistically analyze and interpret for themselves. Additionally, the researcher (Trent, 2002 ) collaborated with his students to construct their own contextually situated interpretations of the data. The following text is an excerpt from one participant’s response:

Almost a century ago, John Dewey eloquently wrote about the need to imagine and create the education that ALL children deserve, not just the richest, the Whitest, or the easiest to teach. At the dawn of this new century, on some mornings, I wake up fearful that we are further away from this ideal than ever.… Collective action, in a critical, hopeful, joyful, anti-racist and pro-justice spirit, is foremost in my mind as I reflect on and act in my daily work.… Although I realize the constraints on teachers and schools in the current political arena, I do believe in the power of teachers to stand next to, encourage, and believe in the students they teach—in short, to change lives. (Trent, 2002 , p. 49)

In sum, researchers whom Ellingson ( 2011 ) characterized as being on the science end of the continuum typically use more detailed or atomistic strategies to analyze and interpret qualitative data, whereas those toward the artistic end most often employ more holistic strategies. Both general approaches to qualitative data analysis and interpretation, atomistic and holistic, will be addressed later in this chapter.

As noted, qualitative researchers attend to data in a wide variety of ways depending on paradigmatic and epistemological beliefs, methodological conventions, and the purpose/aims of the research. These factors impact the kinds of data collected and the ways these data are ultimately analyzed and interpreted. For example, life history or testimonio researchers conduct extensive individual interviews, ethnographers record detailed observational notes, critical theorists may examine documents from pop culture, and ethnomethodologists may collect videotapes of interaction for analysis and interpretation.

In addition to the wide range of data types that are collected by qualitative researchers (and most qualitative researchers collect multiple forms of data), qualitative researchers, again influenced by the factors noted earlier, employ a variety of approaches to analyzing and interpreting data. As mentioned earlier in this chapter, some advocate for a detailed/atomistic, fine-grained approach to data (see, e.g., Bernard et al., 2017 ); others prefer a more broad-based, holistic, “eyeballing” of the data. According to Willis ( 2007 ), “Eyeballers reject the more structured approaches to analysis that break down the data into small units and, from the perspective of the eyeballers, destroy the wholeness and some of the meaningfulness of the data” (p. 298).

Regardless, we assert, as illustrated in Figure 31.1 , that as the inquiry evolves, data collection becomes less prominent while interpretation and making sense/meaning of the data become more prominent. It is through this emphasis on interpretation that qualitative researchers put their individual imprints on the data, allowing for the emergence of multiple, rich perspectives. This space for interpretation allows researchers the freedom Buddy Rich alluded to in his quote about interpreting musical charts. Without this freedom, Rich noted that the process would simply be “mechanical.” Furthermore, allowing space for multiple interpretations nourishes the perspectives of many others in the community. Writer and theorist Meg Wheatley explained, “Everyone in a complex system has a slightly different interpretation. The more interpretations we gather, the easier it becomes to gain a sense of the whole.” In qualitative research, “there is no ‘getting it right’ because there could be many ‘rights’ ” (as cited in Lichtman, 2013 ).

Figure 31.1. Increasing Role of Interpretation in Data Analysis

In addition to the roles methodology and data play in the interpretive process, perhaps the most important is the role of the self/the researcher in the interpretive process. According to Lichtman ( 2013 ), “Data are collected, information is gathered, settings are viewed, and realities are constructed through his or her eyes and ears … the qualitative researcher interprets and makes sense of the data” (p. 21). Eisner ( 1991 ) supported the notion of the researcher “self as instrument,” noting that expert researchers know not simply what to attend to, but also what to neglect. He describes the researcher’s role in the interpretive process as combining sensibility , the ability to observe and ascertain nuances, with schema , a deep understanding or cognitive framework of the phenomena under study.

J. Barrett ( 2007 ) described self/researcher roles as “transformations” (p. 418) at multiple points throughout the inquiry process: early in the process, researchers create representations through data generation, conducting observations and interviews and collecting documents and artifacts. Then,

transformation occurs when the “raw” data generated in the field are shaped into data records by the researcher. These data records are produced through organizing and reconstructing the researcher’s notes and transcribing audio and video recordings in the form of permanent records that serve as the “evidentiary warrants” of the generated data. The researcher strives to capture aspects of the phenomenal world with fidelity by selecting salient aspects to incorporate into the data record. (J. Barrett, 2007 , p. 418)

Transformation continues when the researcher codes, categorizes, and explores patterns in the data (the process we call analysis).

Transformations also involve interpreting what the data mean and relating these interpretations to other sources of insight about the phenomena, including findings from related research, conceptual literature, and common experience.… Data analysis and interpretation are often intertwined and rely upon the researcher’s logic, artistry, imagination, clarity, and knowledge of the field under study. (J. Barrett, 2007 , p. 418)

We mentioned the often-blended roles of participation and observation earlier in this chapter. The role(s) of the self/researcher are often described as points along a participant–observer continuum (see, e.g., Bogdan & Biklen, 2007 ). On the far observer end of this continuum, the researcher situates as detached, tries to be inconspicuous (so as not to impact/disrupt the phenomena under study), and approaches the studied context as if viewing it from behind a one-way mirror. On the opposite, participant end, the researcher is completely immersed and involved in the context. It would be difficult for an outsider to distinguish between researcher and subjects. For example, “some feminist researchers and postmodernists take a political stance and have an agenda that places the researcher in an activist posture. These researchers often become quite involved with the individuals they study and try to improve their human condition” (Lichtman, 2013 , p. 17).

We assert that most researchers fall somewhere between these poles. We believe that complete detachment is both impossible and misguided. In doing so, we, along with many others, acknowledge (and honor) the role of subjectivity, the researcher’s beliefs, opinions, biases, and predispositions. Positivist researchers seeking objective data and accounts either ignore subjectivity or attempt to drastically diminish/eliminate its impact. Even qualitative researchers have developed methods intended to keep researcher subjectivity from affecting data collection, analysis, and interpretation. For example, foundational phenomenologist Husserl ( 1913/1962 ) developed the concept of bracketing , what Lichtman describes as “trying to identify your views on the topic and then putting them aside” (2013, p. 22). Like Slotnick and Janesick ( 2011 ), we ultimately claim “it is impossible to bracket yourself” (p. 1358). Instead, we take a balanced approach, like Eisner, understanding that subjectivity allows researchers to produce the rich, idiosyncratic, insightful, and yet data-based interpretations and accounts of lived experience that accomplish the primary purposes of qualitative inquiry. Eisner ( 1991 ) wrote, “Rather than regarding uniformity and standardization as the summum bonum, educational criticism [Eisner’s form of qualitative research] views unique insight as the higher good” (p. 35). That said, even though we acknowledge and value the role of researcher subjectivity, researchers are still obligated to ground their findings in reasonable interpretations of the data. Eisner ( 1991 ) explained:

This appreciation for personal insight as a source of meaning does not provide a license for freedom. Educational critics must provide evidence and reasons. But they reject the assumption that unique interpretation is a conceptual liability in understanding, and they see the insights secured from multiple views as more attractive than the comforts provided by a single right one. (p. 35)

Connected to this participant–observer continuum is the way the researcher positions him- or herself in relation to the “subjects” of the study. Traditionally, researchers, including early qualitative researchers, anthropologists, and ethnographers, referenced those studied as subjects . More recently, qualitative researchers better understand that research should be a reciprocal process in which both researcher and the foci of the research should derive meaningful benefit. Researchers aligned with this thinking frequently use the term participants to describe those groups and individuals included in a study. Going a step further, some researchers view research participants as experts on the studied topic and as equal collaborators in the meaning-making process. In these instances, researchers often use the terms co-researchers or co-investigators .

The qualitative researcher, then, plays significant roles throughout the inquiry process. These roles include transforming data, collaborating with research participants or co-researchers, determining appropriate points to situate along the participant–observer continuum, and ascribing personal insights, meanings, and interpretations that are both unique and justified with data exemplars. Performing these roles unavoidably impacts and changes the researcher. Slotnick and Janesick ( 2011 ) noted, “Since, in qualitative research the individual is the research instrument through which all data are passed, interpreted, and reported, the scholar’s role is constantly evolving as self evolves” (p. 1358).

As we note later, key in all this is for researchers to be transparent about the topics discussed in the preceding section: What methodological conventions have been employed and why? How have data been treated throughout the inquiry to arrive at assertions and findings that may or may not be transferable to other idiosyncratic contexts? And, finally, in what ways has the researcher/self been situated in and impacted the inquiry? Unavoidably, we assert, the self lies at the critical intersection of data and theory, and, as such, two legs of this stool, data and researcher, interact to create the third, theory.

How Do Qualitative Researchers Engage in the Process of Interpretation?

Theorists seem to have a propensity to dichotomize concepts, pulling them apart and placing binary opposites on the far ends of conceptual continuums. Qualitative research theorists are no different, and we have already mentioned some of these continua in this chapter. For example, in the previous section, we discussed the participant–observer continuum. Earlier, we referenced both Willis’s ( 2007 ) conceptualization of atomistic versus holistic approaches to qualitative analysis and interpretation and Ellingson’s ( 2011 ) science–art continuum. Each of these latter two conceptualizations inform how qualitative researchers engage in the process of interpretation.

Willis ( 2007 ) shared that the purpose of a qualitative project might be explained as “what we expect to gain from research” (p. 288). The purpose, or what we expect to gain, then guides and informs the approaches researchers might take to interpretation. Some researchers, typically positivist/postpositivist, conduct studies that aim to test theories about how the world works and/or how people behave. These researchers attempt to discover general laws, truths, or relationships that can be generalized. Others, less confident in the ability of research to attain a single, generalizable law or truth, might seek “local theory.” These researchers still seek truths, but “instead of generalizable laws or rules, they search for truths about the local context … to understand what is really happening and then to communicate the essence of this to others” (Willis, 2007 , p. 291). In both these purposes, researchers employ atomistic strategies in an inductive process in which researchers “break the data down into small units and then build broader and broader generalizations as the data analysis proceeds” (p. 317). The earlier mentioned processes of analytic induction, constant comparison, and grounded theory fit within this conceptualization of atomistic approaches to interpretation. For example, a line-by-line coding of a transcript might begin an atomistic approach to data analysis.

Alternatively, other researchers pursue distinctly different aims. Researchers with an objective description purpose focus on accurately describing the people and context under study. These researchers adhere to standards and practices designed to achieve objectivity, and their approach to interpretation falls within the binary atomistic/holistic distinction.

The purpose of hermeneutic approaches to research is to “understand the perspectives of humans. And because understanding is situational, hermeneutic research tends to look at the details of the context in which the study occurred. The result is generally rich data reports that include multiple perspectives” (Willis, 2007 , p. 293).

Still other researchers see their purpose as the creation of stories or narratives that utilize “a social process that constructs meaning through interaction … it is an effort to represent in detail the perspectives of participants … whereas description produces one truth about the topic of study, storytelling may generate multiple perspectives, interpretations, and analyses by the researcher and participants” (Willis, 2007 , p. 295).

In these latter purposes (hermeneutic, storytelling, narrative production), researchers typically employ more holistic strategies. According to Willis ( 2007 ), “Holistic approaches tend to leave the data intact and to emphasize that meaning must be derived for a contextual reading of the data rather than the extraction of data segments for detailed analysis” (p. 297). This was the case with the Dreams as Data project mentioned earlier.

We understand the propensity to dichotomize, situate concepts as binary opposites, and create neat continua between these polar descriptors. These sorts of reduction and deconstruction support our understandings and, hopefully, enable us to eventually reconstruct these ideas in meaningful ways. Still, in reality, we realize most of us will, and should, work in the middle of these conceptualizations in fluid ways that allow us to pursue strategies, processes, and theories most appropriate for the research task at hand. As noted, Ellingson ( 2011 ) set up another conceptual continuum, but, like ours, her advice was to “straddle multiple points across the field of qualitative methods” (p. 595). She explained, “I make the case for qualitative methods to be conceptualized as a continuum anchored by art and science, with vast middle spaces that embody infinite possibilities for blending artistic, expository, and social scientific ways of analysis and representation” (p. 595).

We explained at the beginning of this chapter that we view analysis as organizing and summarizing qualitative data and interpretation as constructing meaning. In this sense, analysis allows us to describe the phenomena under study. It enables us to succinctly answer what and how questions and ensures that our descriptions are grounded in the data collected. Descriptions, however, rarely respond to questions of why . Why questions are the domain of interpretation, and, as noted throughout this text, interpretation is complex. Gubrium and Holstein ( 2000 ) noted, “Traditionally, qualitative inquiry has concerned itself with what and how questions … qualitative researchers typically approach why questions cautiously, explanation is tricky business” (p. 502). Eisner ( 1991 ) described this distinctive nature of interpretation: “It means that inquirers try to account for [interpretation] what they have given account of ” (p. 35).

Our focus here is on interpretation, but interpretation requires analysis, because without clear understandings of the data and its characteristics, derived through systematic examination and organization (e.g., coding, memoing, categorizing), “interpretations” resulting from inquiry will likely be incomplete, uninformed, and inconsistent with the constructed perspectives of the study participants. Fortunately for qualitative researchers, we have many sources that lead us through analytic processes. We earlier mentioned the accepted processes of analytic induction and the constant comparison method. These detailed processes (see, e.g., Bogdan & Biklen, 2007 ) combine the inextricably linked activities of analysis and interpretation, with analysis more typically appearing as earlier steps in the process and meaning construction—interpretation—happening later.

A wide variety of resources support researchers engaged in the processes of analysis and interpretation. Saldaña ( 2011 ), for example, provided a detailed description of coding types and processes. He showed researchers how to use process coding (uses gerunds, “-ing” words to capture action), in vivo coding (uses the actual words of the research participants/subjects), descriptive coding (uses nouns to summarize the data topics), versus coding (uses “vs” to identify conflicts and power issues), and values coding (identifies participants’ values, attitudes, and/or beliefs). To exemplify some of these coding strategies, we include an excerpt from a transcript of a meeting of a school improvement committee. In this study, the collaborators were focused on building “school community.” This excerpt illustrates the application of a variety of codes described by Saldaña to this text:

To connect and elaborate the ideas developed in coding, Saldaña ( 2011 ) suggested researchers categorize the applied codes, write memos to deepen understandings and illuminate additional questions, and identify emergent themes. To begin the categorization process, Saldaña recommended all codes be “classified into similar clusters … once the codes have been classified, a category label is applied to them” (p. 97). So, continuing with the school community study example, the researcher might create a cluster/category called “Value of Collaboration” and in this category might include the codes “relationships,” “building community,” and “effective strategies.”
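For researchers who keep their codebook in software rather than on paper, this clustering step can be represented with a very simple data structure. The sketch below is purely illustrative and is not part of Saldaña’s text or the original study; the transcript segments and code labels are hypothetical stand-ins that echo the school-community example.

```python
from collections import Counter

# Hypothetical coded excerpts: each tuple pairs a transcript segment with the
# code a researcher applied to it (codes echo the school-community example).
coded_segments = [
    ("We should plan the family literacy night together.", "building community"),
    ("I trust the grade-level team to follow through.", "relationships"),
    ("Pairing veteran and new teachers worked well last year.", "effective strategies"),
    ("Parents felt welcome when we greeted them at the door.", "building community"),
]

# A category label applied to a cluster of similar codes (the categorization step).
categories = {
    "Value of Collaboration": {"relationships", "building community", "effective strategies"},
}

# Count how often each category's codes appear across the data set.
code_counts = Counter(code for _, code in coded_segments)
for category, codes in categories.items():
    total = sum(code_counts[c] for c in codes)
    print(f"{category}: {total} coded segments ({', '.join(sorted(codes))})")
```

Tallies of this kind only organize the data; the interpretive work of deciding what the category means still rests with the researcher.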

Having coded and categorized a study’s various data forms, researchers typically next write memos, or analytic memos. Writing analytic memos allows the researcher(s) to

set in words your interpretation of the data … an analytic memo further articulates your … thinking processes on what things may mean … as the study proceeds, however, initial and substantive analytic memos can be revisited and revised for eventual integration into the report itself. (Saldaña, 2011 , p. 98)

In the study of student teaching from K–12 students’ perspectives (Trent & Zorko, 2006 ), we noticed throughout our analysis a series of focus group interview quotes coded “names.” The following quote from a high school student is representative of many others:

I think that, ah, they [student teachers] should like know your face and your name because, uh, I don’t like it if they don’t and they’ll just like … cause they’ll blow you off a lot easier if they don’t know, like our new principal is here … he is, like, he always, like, tries to make sure to say hi even to the, like, not popular people if you can call it that, you know, and I mean, yah, and the people that don’t usually socialize a lot, I mean he makes an effort to know them and know their name like so they will cooperate better with him.

Although we did not ask the focus groups a specific question about whether student teachers knew the K–12 students’ names, the topic came up in every focus group interview. We coded the above excerpt and the others “knowing names,” and these data were grouped with others under the category “relationships.” In an initial analytic memo about this, the researchers wrote,

STUDENT TEACHING STUDY—MEMO #3 “Knowing Names as Relationship Building”

Most groups made unsolicited mentions of student teachers knowing, or not knowing, their names. We haven’t asked students about this, but it must be important to them because it always seems to come up. Students expected student teachers to know their names. When they did, students noticed and seemed pleased. When they didn’t, students seemed disappointed, even annoyed. An elementary student told us that early in the semester, “she knew our names … cause when we rose [sic] our hands, she didn’t have to come and look at our name tags … it made me feel very happy.” A high schooler, expressing displeasure that his student teacher didn’t know students’ names, told us, “They should like know your name because it shows they care about you as a person. I mean, we know their names, so they should take the time to learn ours too.” Another high school student said that even after 3 months, she wasn’t sure the student teacher knew her name. Another student echoed, “Same here.” Each of these students asserted that this (knowing students’ names) had impacted their relationship with the student teacher. This high school student focus group stressed that a good relationship, built early, directly impacts classroom interaction and student learning. A student explained it like this: “If you get to know each other, you can have fun with them … they seem to understand you more, you’re more relaxed, and learning seems easier.”

As noted in these brief examples, coding, categorizing, and writing memos about a study’s data are all accepted processes for data analysis and allow researchers to begin constructing new understandings and forming interpretations of the studied phenomena. We find the qualitative research literature to be particularly strong in offering support and guidance for researchers engaged in these analytic practices. In addition to those already noted in this chapter, we have found that the following resources provide practical, yet theoretically grounded approaches to qualitative data analysis. For more detailed, procedural, or atomistic approaches to data analysis, we direct researchers to Miles and Huberman’s classic 1994 text, Qualitative Data Analysis, and Bernard et al.’s 2017 book Analyzing Qualitative Data: Systematic Approaches. For analysis and interpretation strategies falling somewhere between the atomistic and holistic poles, we suggest Hesse-Biber and Leavy’s ( 2011 ) chapter, “Analysis and Interpretation of Qualitative Data,” in their book, The Practice of Qualitative Research (second edition); Lichtman’s chapter, “Making Meaning From Your Data,” in her 2013 book Qualitative Research in Education: A User’s Guide (third edition); and “Processing Fieldnotes: Coding and Memoing,” a chapter in Emerson, Fretz, and Shaw’s ( 1995 ) book, Writing Ethnographic Fieldnotes. Each of these sources succinctly describes the processes of data preparation, data reduction, coding and categorizing data, and writing memos about emergent ideas and findings. For more holistic approaches, we have found Denzin and Lincoln’s ( 2007 ) Collecting and Interpreting Qualitative Materials and Ellis and Bochner’s ( 2000 ) chapter “Autoethnography, Personal Narrative, Reflexivity” both to be very informative. Finally, Leavy’s 2017 book, Method Meets Art: Arts-Based Research Practice, provides support and guidance to researchers engaged in arts-based research.

Even after reviewing the multiple resources for treating data included here, qualitative researchers might still be wondering, But exactly how do we interpret? In the remainder of this section and in the concluding section of this chapter, we more concretely provide responses to this question and, in closing, we propose a framework for researchers to utilize as they engage in the complex, ambiguous, and yet exciting process of constructing meanings and new understandings from qualitative sources.

These meanings and understandings are often presented as theory, but theories in this sense should be viewed more as “guides to perception” as opposed to “devices that lead to the tight control or precise prediction of events” (Eisner, 1991 , p. 95). Perhaps Erickson’s ( 1986 ) concept of assertions is a more appropriate aim for qualitative researchers. He claimed that assertions are declarative statements; they include a summary of the new understandings, and they are supported by evidence/data. These assertions are open to revision and are revised when disconfirming evidence requires modification. Assertions, theories, or other explanations resulting from interpretation in research are typically presented as “findings” in written research reports. Belgrave and Smith ( 2002 ) emphasized the importance of these interpretations (as opposed to descriptions): “The core of the report is not the events reported by the respondent, but rather the subjective meaning of the reported events for the respondent” (p. 248).

Mills ( 2018 ) viewed interpretation as responding to the question, So what? He provided researchers a series of concrete strategies for both analysis and interpretation. Specific to interpretation, Mills (pp. 204–207) suggested a variety of techniques, including the following:

“ Extend the analysis ”: In doing so, researchers ask additional questions about the research. The data appear to say X , but could it be otherwise? In what ways do the data support emergent finding X ? And, in what ways do they not?

“ Connect findings with personal experience ”: Using this technique, researchers share interpretations based on their intimate knowledge of the context, the observed actions of the individuals in the studied context, and the data points that support emerging interpretations, as well as their awareness of discrepant events or outlier data. In a sense, the researcher is saying, “Based on my experiences in conducting this study, this is what I make of it all.”

“ Seek the advice of ‘critical’ friends ”: In doing so, researchers utilize trusted colleagues, fellow researchers, experts in the field of study, and others to offer insights, alternative interpretations, and the application of their own unique lenses to a researcher’s initial findings. We especially like this strategy because we acknowledge that, too often, qualitative interpretation is a “solo” affair.

“ Contextualize findings in the literature ”: This allows researchers to compare their interpretations to those of others writing about and studying the same or similar phenomena. The current study’s findings may correspond with, or alternatively differ from, the findings of other researchers; in either instance, the researcher can highlight his or her unique contributions to our understanding of the topic under study.

“ Turn to theory ”: Mills defined theory as “an analytical and interpretive framework that helps the researcher make sense of ‘what is going on’ in the social setting being studied.” In turning to theory, researchers search for increasing levels of abstraction and move beyond purely descriptive accounts. Connecting to extant or generating new theory enables researchers to link their work to the broader contemporary issues in the field.

Other theorists offer additional advice for researchers engaged in the act of interpretation. Richardson ( 1995 ) reminded us to account for the power dynamics in the researcher–researched relationship and noted that, in doing so, we can allow for oppressed and marginalized voices to be heard in context. Bogdan and Biklen ( 2007 ) suggested that researchers engaged in interpretation revisit foundational writing about qualitative research, read studies related to the current research, ask evaluative questions (e.g., Is what I’m seeing here good or bad?), ask about implications of particular findings/interpretations, think about the audience for interpretations, look for stories and incidents that illustrate a specific finding/interpretation, and attempt to summarize key interpretations in a succinct paragraph. All these suggestions can be pertinent in certain situations and with particular methodological approaches. In the next and closing section of this chapter, we present a framework for interpretive strategies we believe will support, guide, and be applicable to qualitative researchers across multiple methodologies and paradigms.

In What Ways Can a Framework for Interpretation Strategies Support Qualitative Researchers across Multiple Methodological and Paradigmatic Views?

The process of qualitative research is often compared to a journey, one without a detailed itinerary or fixed ending, but with a general direction and aims, and with an open-endedness that adds excitement and thrives on curiosity. Qualitative researchers are travelers. They travel physically to field sites; they travel mentally through various epistemological, theoretical, and methodological grounds; they travel through a series of problem-finding, access, data collection, and data analysis processes; and, finally—the topic of this chapter—they travel through the process of making meaning of all this physical and cognitive travel via interpretation.

Although travel is an appropriate metaphor to describe the journey of qualitative researchers, we will also use “travel” to symbolize a framework for qualitative research interpretation strategies. By design, this framework applies across multiple paradigmatic, epistemological, and methodological traditions. The application of this framework is not formulaic or highly prescriptive; it is also not an anything-goes approach. It falls, and is applicable, between these poles, giving concrete (suggested) direction to qualitative researchers wanting to make the most of the interpretations that result from their research and yet allowing the necessary flexibility for researchers to employ the methods, theories, and approaches they deem most appropriate to the research problem(s) under study.

TRAVEL, a Comprehensive Approach to Qualitative Interpretation

In using the word TRAVEL as a mnemonic device, our aim is to highlight six essential concepts we argue all qualitative researchers should attend to in the interpretive process: transparency, reflexivity, analysis, validity, evidence, and literature. The importance of each is addressed here.

Transparency, as a research concept, seems, well, transparent. But, too often, we read qualitative research reports and are left with many questions: How were research participants and the topic of study selected/excluded? How were the data collected, when, and for how long? Who analyzed and interpreted these data? A single researcher? Multiple? What interpretive strategies were employed? Are there data points that substantiate these interpretations/findings? What analytic procedures were used to organize the data prior to making the presented interpretations? In being transparent about data collection, analysis, and interpretation processes, researchers allow reviewers/readers insight into the research endeavor, and this transparency lends credibility to both the researcher and the researcher’s claims. Altheide and Johnson ( 2011 ) explained,

There is great diversity of qualitative research.… While these approaches differ, they also share an ethical obligation to make public their claims, to show the reader, audience, or consumer why they should be trusted as faithful accounts of some phenomenon. (p. 584)

This includes, they noted, articulating

what the different sources of data were, how they were interwoven, and … how subsequent interpretations and conclusions are more or less closely tied to the various data … the main concern is that the connection be apparent, and to the extent possible, transparent. (p. 590)

In the Dreams as Data art and research project noted earlier, transparency was addressed in multiple ways. Readers of the project write-up were informed that interpretations resulting from the study, framed as themes , were a result of collaborative analysis that included insights from both students and instructor. Viewers of the art installation/data display had the rare opportunity to see all participant responses. In other words, viewers had access to the entire raw data set (see Trent, 2002 ). More frequently, we encounter only research “findings” already distilled, analyzed, and interpreted in research accounts, often by a single researcher. Allowing research consumers access to the data to interpret for themselves in the Dreams project was an intentional attempt at transparency.

Reflexivity , the second of our concepts for interpretive researcher consideration, has garnered a great deal of attention in qualitative research literature. Some have called this increased attention the reflexive turn (see, e.g., Denzin & Lincoln, 2004 ).

Although you can find many meanings for the term reflexivity, it is usually associated with a critical reflection on the practice and process of research and the role of the researcher. It concerns itself with the impact of the researcher on the system and the system on the researcher. It acknowledges the mutual relationships between the researcher and who and what is studied … by acknowledging the role of the self in qualitative research, the researcher is able to sort through biases and think about how they affect various aspects of the research, especially interpretation of meanings. (Lichtman, 2013 , p. 165)

As with transparency, attending to reflexivity allows researchers to attach credibility to presented findings. Providing a reflexive account of researcher subjectivity and the interactions of this subjectivity within the research process is a way for researchers to communicate openly with their audience. Instead of trying to expunge inherent bias from the process, qualitative researchers share with readers the value of having a specific, idiosyncratic positionality. As a result, situated, contextualized interpretations are viewed as an asset, as opposed to a liability.

LaBanca ( 2011 ), acknowledging the often solitary nature of qualitative research, called for researchers to engage others in the reflexive process. Like many other researchers, LaBanca utilized a researcher journal to chronicle reflexive thoughts, explorations, and understandings, but he took it a step farther. Realizing the value of others’ input, LaBanca posts his reflexive journal entries on a blog (what he calls an online reflexivity blog ) and invites critical friends, other researchers, and interested members of the community to audit his reflexive moves, providing insights, questions, and critique that inform his research and study interpretations.

We agree this is a novel approach worth considering. We, too, understand that multiple interpreters will undoubtedly produce multiple interpretations, a richness of qualitative research. So, we suggest researchers consider bringing others in before the production of the report. This could be fruitful in multiple stages of the inquiry process, but especially in the complex, idiosyncratic processes of reflexivity and interpretation. We are both educators and educational researchers. Historically, each of these roles has tended to be constructed as an isolated endeavor, the solitary teacher, the solo researcher/fieldworker. As noted earlier and in the analysis section that follows, introducing collaborative processes to what has often been a solitary activity offers much promise for generating rich interpretations that benefit from multiple perspectives.

Being consciously reflexive throughout our practice as researchers has benefitted us in many ways. In a study of teacher education curricula designed to prepare preservice teachers to support second-language learners, we realized hard truths that caused us to reflect on and adapt our own practices as teacher educators. Reflexivity can inform a researcher at all parts of the inquiry, even in early stages. For example, one of us was beginning a study of instructional practices in an elementary school. The communicated methods of the study indicated that the researcher would be largely an observer. Early fieldwork revealed that the researcher became much more involved as a participant than anticipated. Deep reflection and writing about the classroom interactions allowed the researcher to realize that the initial purpose of the research was not being accomplished, and the researcher believed he was having a negative impact on the classroom culture. Reflexivity in this instance prompted the researcher to leave the field and abandon the project as it was just beginning. Researchers should plan to openly engage in reflexive activities, including writing about their ongoing reflections and subjectivities. Including excerpts of this writing in the research account supports our earlier recommendation of transparency.

Early in this chapter, for the purposes of discussion and examination, we defined analysis as “summarizing and organizing” data in a qualitative study and interpretation as “meaning making.” Although our focus has been on interpretation as the primary topic, the importance of good analysis cannot be overstated, because without it, resultant interpretations are likely incomplete and potentially uninformed. Comprehensive analysis puts researchers in a position to be deeply familiar with collected data and to organize these data into forms that lead to rich, unique interpretations, and yet interpretations that are clearly connected to data exemplars. Although we find it advantageous to examine analysis and interpretation as different but related practices, in reality, the lines blur as qualitative researchers engage in these recursive processes.

We earlier noted our affinity for a variety of approaches to analysis (see, e.g., Hesse-Biber & Leavy, 2011 ; Lichtman, 2013 ; or Saldaña, 2011 ). Emerson et al. ( 1995 ) presented a grounded approach to qualitative data analysis: In early stages, researchers engage in a close, line-by-line reading of data/collected text and accompany this reading with open coding , a process of categorizing and labeling the inquiry data. Next, researchers write initial memos to describe and organize the data under analysis. These analytic phases allow the researcher(s) to prepare, organize, summarize, and understand the data, in preparation for the more interpretive processes of focused coding and the writing up of interpretations and themes in the form of integrative memos .

Similarly, Mills ( 2018 ) provided guidance on the process of analysis for qualitative action researchers. His suggestions for organizing and summarizing data include coding (labeling data and looking for patterns); identifying themes by considering the big picture while looking for recurrent phrases, descriptions, or topics; asking key questions about the study data (who, what, where, when, why, and how); developing concept maps (graphic organizers that show initial organization and relationships in the data); and stating what’s missing by articulating what data are not present (pp. 179–189).

Many theorists, like Emerson et al. ( 1995 ) and Mills ( 2018 ) noted here, provide guidance for individual researchers engaged in individual data collection, analysis, and interpretation; others, however, invite us to consider the benefits of collaboratively engaging in these processes through the use of collaborative research and analysis teams. Paulus, Woodside, and Ziegler ( 2008 ) wrote about their experiences in collaborative qualitative research: “Collaborative research often refers to collaboration among the researcher and the participants. Few studies investigate the collaborative process among researchers themselves” (p. 226).

Paulus et al. ( 2008 ) claimed that the collaborative process “challenged and transformed our assumptions about qualitative research” (p. 226). Engaging in reflexivity, analysis, and interpretation as a collaborative enabled these researchers to reframe their views about the research process, finding that the process was much more recursive, as opposed to following a linear progression. They also found that cooperatively analyzing and interpreting data yielded “collaboratively constructed meanings” as opposed to “individual discoveries.” And finally, instead of the traditional “individual products” resulting from solo research, collaborative interpretation allowed researchers to participate in an “ongoing conversation” (p. 226).

These researchers explained that engaging in collaborative analysis and interpretation of qualitative data challenged their previously held assumptions. They noted,

through collaboration, procedures are likely to be transparent to the group and can, therefore, be made public. Data analysis benefits from an iterative, dialogic, and collaborative process because thinking is made explicit in a way that is difficult to replicate as a single researcher. (Paulus et al., 2008 , p. 236)

They shared that, during the collaborative process, “we constantly checked our interpretation against the text, the context, prior interpretations, and each other’s interpretations” (p. 234).

We, too, have engaged in analysis similar to these described processes, including working on research teams. We encourage other researchers to find processes that fit with the methodology and data of a particular study, use the techniques and strategies most appropriate, and then cite the utilized authority to justify the selected path. We urge traditionally solo researchers to consider trying a collaborative approach. Generally, we suggest researchers be familiar with a wide repertoire of practices. In doing so, they will be in better positions to select and use strategies most appropriate for their studies and data. Succinctly preparing, organizing, categorizing, and summarizing data sets the researcher(s) up to construct meaningful interpretations in the forms of assertions, findings, themes, and theories.

Researchers want their findings to be sound, backed by evidence, and justifiable and to accurately represent the phenomena under study. In short, researchers seek validity for their work. We assert that qualitative researchers should attend to validity concepts as a part of their interpretive practices. We have previously written and theorized about validity, and, in doing so, we have highlighted and labeled what we consider two distinctly different approaches, transactional and transformational (Cho & Trent, 2006 ). We define transactional validity in qualitative research as an interactive process occurring among the researcher, the researched, and the collected data, one that is aimed at achieving a relatively higher level of accuracy. Techniques, methods, and/or strategies are employed during the conduct of the inquiry. These techniques, such as member checking and triangulation, are seen as a medium with which to ensure an accurate reflection of reality (or, at least, participants’ constructions of reality). Lincoln and Guba’s ( 1985 ) widely known notion of trustworthiness in “naturalistic inquiry” is grounded in this approach. In seeking trustworthiness, researchers attend to research credibility, transferability, dependability, and confirmability. Validity approaches described by Maxwell ( 1992 ) as “descriptive” and “interpretive” also rely on transactional processes.

For example, in the write-up of a study on the facilitation of teacher research, one of us (Trent, 2012 ) wrote about the use of transactional processes:

“Member checking is asking the members of the population being studied for their reaction to the findings” (Sagor, 2000 , p. 136). Interpretations and findings of this research, in draft form, were shared with teachers (for member checking) on multiple occasions throughout the study. Additionally, teachers reviewed and provided feedback on the final draft of this article. (p. 44)

This member checking led to changes in some resultant interpretations (called findings in this particular study) and to adaptations of others that shaped these findings in ways that made them both richer and more contextualized.

Alternatively, in transformational approaches, validity is not so much something that can be achieved solely by employing certain techniques. Transformationalists assert that because traditional or positivist inquiry is no longer seen as an absolute means to truth in the realm of human science, alternative notions of validity should be considered to achieve social justice, deeper understandings, broader visions, and other legitimate aims of qualitative research. In this sense, it is the ameliorative aspects of the research that achieve (or do not achieve) its validity. Validity is determined by the resultant actions prompted by the research endeavor.

Lather ( 1993 ), Richardson ( 1997 ), and others (e.g., Lenzo, 1995 ; Scheurich, 1996 ) proposed a transgressive approach to validity that emphasized a higher degree of self-reflexivity. For example, Lather proposed a “catalytic validity” described as “the degree to which the research empowers and emancipates the research subjects” (Scheurich, 1996 , p. 4). Beverley ( 2000 , p. 556) proposed testimonio as a qualitative research strategy. These first-person narratives find their validity in their ability to raise consciousness and thus provoke political action to remedy problems of oppressed peoples (e.g., poverty, marginality, exploitation).

We, too, have pursued research with transformational aims. In the earlier mentioned study of preservice teachers’ experiences learning to teach second-language learners (Cho et al., 2012 ), our aims were to empower faculty members, evolve the curriculum, and, ultimately, better serve preservice teachers so that they might better serve English-language learners in their classrooms. As program curricula and activities have changed as a result, we claim a degree of transformational validity for this research.

Important, then, for qualitative researchers throughout the inquiry, but especially when engaged in the process of interpretation, is to determine the type(s) of validity applicable to the study. What are the aims of the study? Providing an “accurate” account of studied phenomena? Empowering participants to take action for themselves and others? The determination of this purpose will, in turn, inform researchers’ analysis and interpretation of data. Understanding and attending to the appropriate validity criteria will bolster researcher claims to meaningful findings and assertions.

Regardless of purpose or chosen validity considerations, qualitative research depends on evidence . Researchers in different qualitative methodologies rely on different types of evidence to support their claims. Qualitative researchers typically utilize a variety of forms of evidence including texts (written notes, transcripts, images, etc.), audio and video recordings, cultural artifacts, documents related to the inquiry, journal entries, and field notes taken during observations of social contexts and interactions. Schwandt ( 2001 ) wrote,

Evidence is essential to justification, and justification takes the form of an argument about the merit(s) of a given claim. It is generally accepted that no evidence is conclusive or unassailable (and hence, no argument is foolproof). Thus, evidence must often be judged for its credibility, and that typically means examining its source and the procedures by which it was produced [thus the need for transparency discussed earlier]. (p. 82)

Altheide and Johnson ( 2011 ) drew a distinction between evidence and facts:

Qualitative researchers distinguish evidence from facts. Evidence and facts are similar but not identical. We can often agree on facts, e.g., there is a rock, it is harder than cotton candy. Evidence involves an assertion that some facts are relevant to an argument or claim about a relationship. Since a position in an argument is likely tied to an ideological or even epistemological position, evidence is not completely bound by facts, but it is more problematic and subject to disagreement. (p. 586)

Inquirers should make every attempt to link evidence to claims (or findings, interpretations, assertions, conclusions, etc.). There are many strategies for making these connections. Induction involves accumulating multiple data points to infer a general conclusion. Confirmation entails directly linking evidence to resultant interpretations. Testability/falsifiability means illustrating that evidence does not necessarily contradict the claim/interpretation and so increases the credibility of the claim (Schwandt, 2001 ). In the study about learning to teach second-language learners, for example, a study finding (Cho et al., 2012 ) was that “as a moral claim , candidates increasingly [in higher levels of the teacher education program] feel more responsible and committed to … [English language learners]” (p. 77). We supported this finding with a series of data points that included the following preservice teacher response: “It is as much the responsibility of the teacher to help teach second-language learners the English language as it is our responsibility to teach traditional English speakers to read or correctly perform math functions.” Claims supported by evidence allow readers to see for themselves and to both examine researcher assertions in tandem with evidence and form further interpretations of their own.

Some postmodernists reject the notion that qualitative interpretations are arguments based on evidence. Instead, they argue that qualitative accounts are not intended to faithfully represent that experience, but instead are designed to evoke some feelings or reactions in the reader of the account (Schwandt, 2001 ). We argue that, even in these instances where transformational validity concerns take priority over transactional processes, evidence still matters. Did the assertions accomplish the evocative aims? What evidence/arguments were used to evoke these reactions? Does the presented claim correspond with the study’s evidence? Is the account inclusive? In other words, does it attend to all evidence or selectively compartmentalize some data while capitalizing on other evidentiary forms?

Researchers, we argue, should be both transparent and reflexive about these questions and, regardless of research methodology or purpose, should share with readers of the account their evidentiary moves and aims. Altheide and Johnson ( 2011 ) called this an evidentiary narrative and explained:

Ultimately, evidence is bound up with our identity in a situation.… An “evidentiary narrative” emerges from a reconsideration of how knowledge and belief systems in everyday life are tied to epistemic communities that provide perspectives, scenarios, and scripts that reflect symbolic and social moral orders. An “evidentiary narrative” symbolically joins an actor, an audience, a point of view (definition of a situation), assumptions, and a claim about a relationship between two or more phenomena. If any of these factors are not part of the context of meaning for a claim, it will not be honored, and thus, not seen as evidence. (p. 686)

In sum, readers/consumers of a research account deserve to know how evidence was treated and viewed in an inquiry. They want and should be aware of accounts that aim to evoke versus represent, and then they can apply their own criteria (including the potential transferability to their situated context). Renowned ethnographer and qualitative research theorist Harry Wolcott ( 1990 ) urged researchers to “let readers ‘see’ for themselves” by providing more detail rather than less and by sharing primary data/evidence to support interpretations. In the end, readers do not expect perfection. Writer Eric Liu ( 2010 ) explained, “We don’t expect flawless interpretation. We expect good faith. We demand honesty.”

Last, in this journey through concepts we assert are pertinent to researchers engaged in interpretive processes, we include attention to the literature . In discussing literature, qualitative researchers typically mean publications about the prior research conducted on topics aligned with or related to a study. Most often, this research/literature is reviewed and compiled by researchers in a section of the research report titled “Literature Review.” It is here we find others’ studies, methods, and theories related to our topics of study, and it is here we hope the assertions and theories that result from our studies will someday reside.

We acknowledge the value of being familiar with research related to topics of study. This familiarity can inform multiple phases of the inquiry process. Understanding the extant knowledge base can inform research questions and topic selection, data collection and analysis plans, and the interpretive process. In what ways do the interpretations from this study correspond with other research conducted on this topic? Do findings/interpretations corroborate, expand, or contradict other researchers’ interpretations of similar phenomena? In any of these scenarios (correspondence, expansion, contradiction), new findings and interpretations from a study add to and deepen the knowledge base, or literature, on a topic of investigation.

For example, in our literature review for the study of student teaching, we quickly determined that the knowledge base and extant theories related to the student teaching experience were immense, but also quickly realized that few, if any, studies had examined student teaching from the perspective of the K–12 students who had the student teachers. This focus on the literature related to our topic of student teaching prompted us to embark on a study that would fill a gap in this literature: Most of the knowledge base focused on the experiences and learning of the student teachers themselves. Our study, then, by focusing on the K–12 students’ perspectives, added literature/theories/assertions to a previously untapped area. The “literature” in this area (at least we would like to think) is now more robust as a result.

In another example, a research team (Trent et al., 2003 ) focused on institutional diversity efforts, mined the literature, found an appropriate existing (a priori) set of theories/assertions, and then used that existing theoretical framework to analyze data, in this case, a variety of institutional activities related to diversity.

Conducting a literature review to explore extant theories on a topic of study can serve a variety of purposes. As evidenced in these examples, consulting the literature/extant theory can reveal gaps in the literature. A literature review might also lead researchers to existing theoretical frameworks that support analysis and interpretation of their data (as in the use of the a priori framework example). Finally, a review of current theories related to a topic of inquiry might confirm that much theory already exists, but that further study may add to, bolster, and/or elaborate on the current knowledge base.

Guidance for researchers conducting literature reviews is plentiful. Lichtman ( 2013 ) suggested researchers conduct a brief literature review, begin research, and then update and modify the literature review as the inquiry unfolds. She suggested reviewing a wide range of related materials (not just scholarly journals) and additionally suggested that researchers attend to literature on methodology, not just the topic of study. She also encouraged researchers to bracket and write down thoughts on the research topic as they review the literature, and, important for this chapter, that researchers “integrate your literature review throughout your writing rather than using a traditional approach of placing it in a separate chapter” (p. 173).

We agree that the power of a literature review to provide context for a study can be maximized when this information is not compartmentalized apart from a study’s findings. Integrating (or at least revisiting) reviewed literature juxtaposed alongside findings can illustrate how new interpretations add to an evolving story. Eisenhart ( 1998 ) expanded the traditional conception of the literature review and discussed the concept of an interpretive review . By taking this interpretive approach, Eisenhart claimed that reviews, alongside related interpretations/findings on a specific topic, have the potential to allow readers to see the studied phenomena in entirely new ways, through new lenses, revealing heretofore unconsidered perspectives. Reviews that offer surprising and enriching perspectives on meanings and circumstances “shake things up, break down boundaries, and cause things (or thinking) to expand” (p. 394). Coupling reviews of this sort with current interpretations will “give us stories that startle us with what we have failed to notice” (p. 395).

In reviews of research studies, it can certainly be important to evaluate the findings in light of established theories and methods [the sorts of things typically included in literature reviews]. However, it also seems important to ask how well the studies disrupt conventional assumptions and help us to reconfigure new, more inclusive, and more promising perspectives on human views and actions. From an interpretivist perspective, it would be most important to review how well methods and findings permit readers to grasp the sense of unfamiliar perspectives and actions. (Eisenhart, 1998 , p. 397)

Though our interpretation-related journey in this chapter nears an end, we are hopeful it is just the beginning of multiple new conversations among ourselves and in concert with other qualitative researchers. Our aims have been to circumscribe interpretation in qualitative research; emphasize the importance of interpretation in achieving the aims of the qualitative project; discuss the interactions of methodology, data, and the researcher/self as these concepts and theories intertwine with interpretive processes; describe some concrete ways that qualitative inquirers engage the process of interpretation; and, finally, provide a framework of interpretive strategies that may serve as a guide for ourselves and other researchers.

In closing, we note that the TRAVEL framework, construed as a journey to be undertaken by researchers engaged in interpretive processes, is not designed to be rigid or prescriptive, but instead is designed to be a flexible set of concepts that will inform researchers across multiple epistemological, methodological, and theoretical paradigms. We chose the concepts of transparency, reflexivity, analysis, validity, evidence, and literature (TRAVEL) because they are applicable to the infinite journeys undertaken by qualitative researchers who have come before and to those who will come after us. As we journeyed through our interpretations of interpretation, we have discovered new things about ourselves and our work. We hope readers also garner insights that enrich their interpretive excursions. Happy travels!

Altheide, D. , & Johnson, J. M. ( 2011 ). Reflections on interpretive adequacy in qualitative research. In N. M. Denzin & Y. S. Lincoln (Eds.), The Sage handbook of qualitative research (pp. 581–594). Thousand Oaks, CA: Sage.

Barrett, J. ( 2007 ). The researcher as instrument: Learning to conduct qualitative research through analyzing and interpreting a choral rehearsal.   Music Education Research, 9, 417–433.

Barrett, T. ( 2011 ). Criticizing art: Understanding the contemporary (3rd ed.). New York, NY: McGraw–Hill.

Belgrave, L. L. , & Smith, K. J. ( 2002 ). Negotiated validity in collaborative ethnography. In N. M. Denzin & Y. S. Lincoln (Eds.), The qualitative inquiry reader (pp. 233–255). Thousand Oaks, CA: Sage.

Bernard, H. R. , Wutich, A. , & Ryan, G. W. ( 2017 ). Analyzing qualitative data: Systematic approaches (2nd ed.). Thousand Oaks, CA: Sage.

Beverley, J. ( 2000 ). Testimonio, subalternity, and narrative authority. In N. M. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 555–566). Thousand Oaks, CA: Sage.

Bogdan, R. C. , & Biklen, S. K. ( 2007 ). Qualitative research for education: An introduction to theories and methods (5th ed.). Boston, MA: Allyn & Bacon.

Cho, J. , Rios, F. , Trent, A. , & Mayfield, K. ( 2012 ). Integrating language diversity into teacher education curricula in a rural context: Candidates’ developmental perspectives and understandings.   Teacher Education Quarterly, 39(2), 63–85.

Cho, J. , & Trent, A. ( 2006 ). Validity in qualitative research revisited.   QR—Qualitative Research Journal, 6, 319–340.

Denzin, N. M. , & Lincoln, Y. S . (Eds.). ( 2004 ). Handbook of qualitative research . Newbury Park, CA: Sage.

Denzin, N. M. , & Lincoln, Y. S. ( 2007 ). Collecting and interpreting qualitative materials . Thousand Oaks, CA: Sage.

Eisenhart, M. ( 1998 ). On the subject of interpretive reviews. Review of Educational Research, 68, 391–399.

Eisner, E. ( 1991 ). The enlightened eye: Qualitative inquiry and the enhancement of educational practice . New York, NY: Macmillan.

Ellingson, L. L. ( 2011 ). Analysis and representation across the continuum. In N. M. Denzin & Y. S. Lincoln (Eds.), The Sage handbook of qualitative research (pp. 595–610). Thousand Oaks, CA: Sage.

Ellis, C. , & Bochner, A. P. ( 2000 ). Autoethnography, personal narrative, reflexivity: Researcher as subject. In N. M. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 733–768). Thousand Oaks, CA: Sage.

Emerson, R. , Fretz, R. , & Shaw, L. ( 1995 ). Writing ethnographic fieldnotes. Chicago, IL: University of Chicago Press.

Erickson, F. ( 1986 ). Qualitative methods in research in teaching and learning. In M. C. Wittrock (Ed.), Handbook of research on teaching (3rd ed., pp 119–161). New York, NY: Macmillan.

Glaser, B. ( 1965 ). The constant comparative method of qualitative analysis.   Social Problems, 12, 436–445.

Gubrium, J. F. , & Holstein, J. A. ( 2000 ). Analyzing interpretive practice. In N. M. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (2nd ed., pp. 487–508). Thousand Oaks, CA: Sage.

Hammersley, M. ( 2013 ). What is qualitative research? London, England: Bloomsbury Academic.

Hesse-Biber, S. N. ( 2017 ). The practice of qualitative research (3rd ed.). Thousand Oaks, CA: Sage.

Hesse-Biber, S. N. , & Leavy, P. ( 2011 ). The practice of qualitative research (2nd ed.). Thousand Oaks, CA: Sage.

Hubbard, R. S. , & Power, B. M. ( 2003 ). The art of classroom inquiry: A handbook for teacher researchers . Portsmouth, NH: Heinemann.

Husserl, E. ( 1913 /1962). Ideas: General introduction to pure phenomenology (W. R. Boyce Gibson, Trans.). London, England: Collier.

LaBanca, F. ( 2011 ). Online dynamic asynchronous audit strategy for reflexivity in the qualitative paradigm.   Qualitative Report, 16, 1160–1171.

Lather, P. ( 1993 ). Fertile obsession: Validity after poststructuralism.   Sociological Quarterly, 34, 673–693.

Leavy, P. ( 2017 ). Method meets art: Arts-based research practice (2nd ed.). New York, NY: Guilford Press.

Lenzo, K. ( 1995 ). Validity and self reflexivity meet poststructuralism: Scientific ethos and the transgressive self.   Educational Researcher, 24(4), 17–23, 45.

Lichtman, M. ( 2013 ). Qualitative research in education: A user’s guide (3rd ed.). Thousand Oaks, CA: Sage.

Lincoln, Y. S. , & Guba, E. G. ( 1985 ). Naturalistic inquiry . Beverly Hills, CA: Sage.

Liu, E. (2010). The real meaning of balls and strikes . Retrieved from http://www.huffingtonpost.com/eric-liu/the-real-meaning-of-balls_b_660915.html

Maxwell, J. ( 1992 ). Understanding and validity in qualitative research.   Harvard Educational Review, 62, 279–300.

Miles, M. B. , & Huberman, A. M. ( 1994 ). Qualitative data analysis . Thousand Oaks, CA: Sage.

Mills, G. E. ( 2018 ). Action research: A guide for the teacher researcher (6th ed.). New York, NY: Pearson.

Olivier de Sardan, J. P. ( 2015 ). Epistemology, fieldwork, and anthropology. New York, NY: Palgrave Macmillan.

Packer, M. J. ( 2018 ). The science of qualitative research (2nd ed.). Cambridge, England: Cambridge University Press.

Paulus, T. , Woodside, M. , & Ziegler, M. ( 2008 ). Extending the conversation: Qualitative research as dialogic collaborative process.   Qualitative Report, 13, 226–243.

Richardson, L. ( 1995 ). Writing stories: Co-authoring the “sea monster,” a writing story.   Qualitative Inquiry, 1, 189–203.

Richardson, L. ( 1997 ). Fields of play: Constructing an academic life . New Brunswick, NJ: Rutgers University Press.

Sagor, R. ( 2000 ). Guiding school improvement with action research . Alexandria, VA: ASCD.

Saldaña, J. ( 2011 ). Fundamentals of qualitative research . New York, NY: Oxford University Press.

Scheurich, J. ( 1996 ). The masks of validity: A deconstructive investigation.   Qualitative Studies in Education, 9, 49–60.

Schwandt, T. A. ( 2001 ). Dictionary of qualitative inquiry . Thousand Oaks, CA: Sage.

Slotnick, R. C. , & Janesick, V. J. ( 2011 ). Conversations on method: Deconstructing policy through the researcher reflective journal.   Qualitative Report, 16, 1352–1360.

Trent, A. ( 2002 ). Dreams as data: Art installation as heady research. Teacher Education Quarterly, 29(4), 39–51.

Trent, A. ( 2012 ). Action research on action research: A facilitator’s account.   Action Learning and Action Research Journal, 18, 35–67.

Trent, A. , Rios, F. , Antell, J. , Berube, W. , Bialostok, S. , Cardona, D. , … Rush, T. ( 2003 ). Problems and possibilities in the pursuit of diversity: An institutional analysis.   Equity & Excellence, 36, 213–224.

Trent, A. , & Zorko, L. ( 2006 ). Listening to students: “New” perspectives on student teaching.   Teacher Education & Practice, 19, 55–70.

Willig, C. ( 2017 ). Interpretation in qualitative research. In C. Willig & W. Stainton-Rogers (Eds.), The Sage handbook of qualitative research in psychology (2nd ed., pp. 267–290). London, England: Sage.

Willis, J. W. ( 2007 ). Foundations of qualitative research: Interpretive and critical approaches . Thousand Oaks, CA: Sage.

Wolcott, H. ( 1990 ). On seeking-and rejecting-validity in qualitative research. In E. Eisner & A. Peshkin (Eds.), Qualitative inquiry in education: The continuing debate (pp. 121–152). New York, NY: Teachers College Press.

Understanding statistical analysis: A beginner’s guide to data interpretation

Statistical analysis is a crucial part of research in many fields. It is used to analyze data and draw conclusions about the population being studied. However, statistical analysis can be complex and intimidating for beginners. In this article, we will provide a beginner’s guide to statistical analysis and data interpretation, with the aim of helping researchers understand the basics of statistical methods and their application in research.

What is Statistical Analysis?

Statistical analysis is a collection of methods used to analyze data. These methods are used to summarize data, make predictions, and draw conclusions about the population being studied. Statistical analysis is used in a variety of fields, including medicine, social sciences, economics, and more.

Statistical analysis can be broadly divided into two categories: descriptive statistics and inferential statistics. Descriptive statistics are used to summarize data, while inferential statistics are used to draw conclusions about the population based on a sample of data.

Descriptive Statistics

Descriptive statistics are used to summarize data. This includes measures such as the mean, median, mode, and standard deviation. These measures provide information about the central tendency and variability of the data. For example, the mean provides information about the average value of the data, while the standard deviation provides information about the variability of the data.
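As a concrete illustration, the short Python snippet below computes these summary measures for a small, invented sample. The numbers are made up purely for demonstration; the same summary could be produced in any statistical package.

```python
import statistics

# A small, made-up sample of exam scores (illustrative data only).
scores = [72, 85, 85, 90, 64, 78, 85, 69, 91, 77]

print("mean:", statistics.mean(scores))      # central tendency: average value
print("median:", statistics.median(scores))  # central tendency: middle value
print("mode:", statistics.mode(scores))      # most frequently occurring value
print("std dev:", statistics.stdev(scores))  # variability: sample standard deviation
```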

Inferential Statistics

Inferential statistics are used to draw conclusions about a population based on a sample of data. For example, a researcher might use inferential statistics to test whether there is a significant difference between two groups in a study.

Statistical Analysis Techniques

There are many different statistical analysis techniques that can be used in research. Some of the most common techniques include:

Correlation Analysis: This involves analyzing the relationship between two or more variables.

Regression Analysis: This involves analyzing the relationship between a dependent variable and one or more independent variables.

T-Tests: This is a statistical test used to compare the means of two groups.

Analysis of Variance (ANOVA): This is a statistical test used to compare the means of three or more groups.

Chi-Square Test: This is a statistical test used to determine whether there is a significant association between two categorical variables.
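To make these techniques more concrete, the sketch below runs several of the tests listed above on small, invented data sets using Python's SciPy library. The data and variable names are hypothetical, and in practice you would check each test's assumptions (normality, independence, expected cell counts, and so on) before relying on its p-value.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented measurements for three groups (illustrative only).
group_a = rng.normal(loc=50, scale=5, size=30)
group_b = rng.normal(loc=53, scale=5, size=30)
group_c = rng.normal(loc=55, scale=5, size=30)

# Correlation: strength of the linear relationship between two variables.
r, p_corr = stats.pearsonr(group_a, group_b)

# Simple linear regression of one variable on another.
slope, intercept, r_value, p_reg, stderr = stats.linregress(group_a, group_b)

# Independent-samples t-test: do two group means differ?
t_stat, p_t = stats.ttest_ind(group_a, group_b)

# One-way ANOVA: do three or more group means differ?
f_stat, p_anova = stats.f_oneway(group_a, group_b, group_c)

# Chi-square test of independence for a 2x2 table of counts.
observed = np.array([[30, 10], [20, 25]])
chi2, p_chi, dof, expected = stats.chi2_contingency(observed)

print(f"correlation r={r:.2f} (p={p_corr:.3f})")
print(f"regression slope={slope:.2f} (p={p_reg:.3f})")
print(f"t-test t={t_stat:.2f} (p={p_t:.3f})")
print(f"ANOVA F={f_stat:.2f} (p={p_anova:.3f})")
print(f"chi-square={chi2:.2f} (p={p_chi:.3f})")
```

Each call returns a test statistic and a p-value, which must then be interpreted in the context of the research question, the sample, and the test's assumptions.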

Data Interpretation

Once data has been analyzed, it must be interpreted. This involves making sense of the data and drawing conclusions based on the results of the analysis. Data interpretation is a crucial part of statistical analysis, as it is used to draw conclusions and make recommendations based on the data.

When interpreting data, it is important to consider the context in which the data was collected. This includes factors such as the sample size, the sampling method, and the population being studied. It is also important to consider the limitations of the data and the statistical methods used.

Best Practices for Statistical Analysis

To ensure that statistical analysis is conducted correctly and effectively, there are several best practices that should be followed. These include:

Clearly define the research question: This is the foundation of the study and will guide the analysis.

Choose appropriate statistical methods: Different statistical methods are appropriate for different types of data and research questions.

Use reliable and valid data: The data used for analysis should be reliable and valid. This means that it should accurately represent the population being studied and be collected using appropriate methods.

Ensure that the data is representative: The sample used for analysis should be representative of the population being studied. This helps to ensure that the results of the analysis are applicable to the population.

Follow ethical guidelines: Researchers should follow ethical guidelines when conducting research. This includes obtaining informed consent from participants, protecting their privacy, and ensuring that the study does not cause harm.

Statistical analysis and data interpretation are essential tools for any researcher. Whether you are conducting research in the social sciences, natural sciences, or humanities, understanding statistical methods and interpreting data correctly is crucial to drawing accurate conclusions and making informed decisions. By following the best practices for statistical analysis and data interpretation outlined in this article, you can ensure that your research is based on sound statistical principles and is therefore more credible and reliable. Remember to start with a clear research question, use appropriate statistical methods, and always interpret your data in context. With these guidelines in mind, you can confidently approach statistical analysis and data interpretation and make meaningful contributions to your field of study.

Research Methods

  • Getting Started
  • What is Research Design?
  • Research Approach
  • Research Methodology
  • Data Collection
  • Data Analysis & Interpretation
  • Population & Sampling
  • Theories, Theoretical Perspective & Theoretical Framework
  • Useful Resources

Further Resources

Cover Art

Data Analysis & Interpretation


You will need to tidy, analyse and interpret the data you collected to give meaning to it, and to answer your research question.  Your choice of methodology points the way to the most suitable method of analysing your data.


If the data are numeric you can use a software package such as SPSS, Excel or "R" to do statistical analysis. You can identify measures such as the mean, median and mode, or identify a causal or correlational relationship between variables.
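For instance, a minimal Python sketch (offered here as an alternative illustration to SPSS, Excel or R, using invented values for two hypothetical variables) might compute these summary statistics and a correlation:

```python
# Minimal sketch of descriptive statistics and a correlation on invented data.
import pandas as pd

df = pd.DataFrame({
    "hours_studied": [2, 4, 6, 8, 10, 12],    # hypothetical variable
    "exam_score":    [55, 60, 68, 74, 82, 90]  # hypothetical variable
})

print(df["exam_score"].mean())    # mean
print(df["exam_score"].median())  # median
print(df["exam_score"].mode())    # mode
print(df["exam_score"].std())     # standard deviation

# Correlation between the two variables (does not establish causation).
print(df["hours_studied"].corr(df["exam_score"]))
```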

The University of Connecticut has useful information on statistical analysis.

If your research set out to test a hypothesis your research will either support or refute it, and you will need to explain why this is the case.  You should also highlight and discuss any issues or actions that may have impacted on your results, either positively or negatively.  To fully contribute to the body of knowledge in your area be sure to discuss and interpret your results within the context of your research and the existing literature on the topic.

Data analysis for a qualitative study can be complex because of the variety of types of data that can be collected. Qualitative researchers aren’t attempting to measure observable characteristics; they are often attempting to capture an individual’s interpretation of a phenomenon or situation in a particular context or setting. This data could be captured in text from an interview or focus group, a movie, images, or documents. Analysis of this type of data is usually done by analysing each artefact according to predefined criteria and then by using a coding system. The code can be developed by the researcher before analysis, or the researcher may develop a code from the research data. This can be done by hand or by using thematic analysis software such as NVivo.
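Purely as a toy illustration of a deductive coding pass, the sketch below applies a small, invented coding frame to invented interview excerpts and tallies how often each code appears; genuine qualitative coding, whether by hand or in NVivo, rests on researcher judgement rather than keyword matching.

```python
# Toy sketch of applying a predefined coding frame to interview excerpts
# and tallying how often each code occurs. Codes, keywords and excerpts
# are invented; real qualitative coding involves researcher judgement.
from collections import Counter

coding_frame = {  # code -> keywords that trigger it (hypothetical)
    "workload":  ["busy", "overtime", "deadline"],
    "support":   ["mentor", "team", "help"],
    "wellbeing": ["stress", "tired", "balance"],
}

excerpts = [
    "I was constantly busy and the deadline pressure caused a lot of stress.",
    "My mentor and the wider team were a huge help during the project.",
    "Working overtime left me tired, with no work-life balance.",
]

counts = Counter()
for text in excerpts:
    lowered = text.lower()
    for code, keywords in coding_frame.items():
        if any(word in lowered for word in keywords):
            counts[code] += 1

print(counts)  # e.g. Counter({'workload': 2, 'wellbeing': 2, 'support': 1})
```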

Interpretation of qualitative data can be presented as a narrative. The themes identified from the research can be organised and integrated with themes in the existing literature to give further weight and meaning to the research. The interpretation should also state whether the aims and objectives of the research were met. Any shortcomings of the research, or areas for further research, should also be discussed (Creswell, 2009)*.

For further information on analysing and presenting qualitative data, read this article in Nature.

Mixed Methods Data

Data analysis for mixed methods involves aspects of both quantitative and qualitative methods.  However, the sequencing of data collection and analysis is important in terms of the mixed method approach that you are taking.  For example, you could be using a convergent, sequential or transformative model which directly impacts how you use different data to inform, support or direct the course of your study.

The intention in using mixed methods is to produce a synthesis of both quantitative and qualitative information to give a detailed picture of a phenomenon in a particular context or setting. To fully understand how best to produce this synthesis it might be worth looking at why researchers choose this method. Bergin (2018)** states that researchers choose mixed methods because it allows them to triangulate, illuminate or discover a more diverse set of findings. Therefore, when it comes to interpretation you will need to return to the purpose of your research and discuss and interpret your data in that context. As with quantitative and qualitative methods, interpretation of data should be discussed within the context of the existing literature.


For more information on data analysis look at Sage Research Methods database on the library website.

*Creswell, J. W. (2009). Research design: Qualitative, quantitative, and mixed methods approaches. Sage, Los Angeles, p. 183.

**Bergin, T. (2018). Data analysis: Quantitative, qualitative and mixed methods. Sage, Los Angeles, p. 182.

Source: TU Dublin research methods guide, https://tudublin.libguides.com/research_methods

Data Analysis and Interpretation: Revealing and explaining trends

by Anne E. Egger, Ph.D., Anthony Carpi, Ph.D.


Did you know that scientists don't always agree on what data mean? Different scientists can look at the same set of data and come up with different explanations for it, and disagreement among scientists doesn't point to bad science.

Data collection is the systematic recording of information; data analysis involves working to uncover patterns and trends in datasets; data interpretation involves explaining those patterns and trends.

Scientists interpret data based on their background knowledge and experience; thus, different scientists can interpret the same data in different ways.

By publishing their data and the techniques they used to analyze and interpret those data, scientists give the community the opportunity to both review the data and use them in future research.

Before you decide what to wear in the morning, you collect a variety of data: the season of the year, what the forecast says the weather is going to be like, which clothes are clean and which are dirty, and what you will be doing during the day. You then analyze those data . Perhaps you think, "It's summer, so it's usually warm." That analysis helps you determine the best course of action, and you base your apparel decision on your interpretation of the information. You might choose a t-shirt and shorts on a summer day when you know you'll be outside, but bring a sweater with you if you know you'll be in an air-conditioned building.

Though this example may seem simplistic, it reflects the way scientists pursue data collection, analysis , and interpretation . Data (the plural form of the word datum) are scientific observations and measurements that, once analyzed and interpreted, can be developed into evidence to address a question. Data lie at the heart of all scientific investigations, and all scientists collect data in one form or another. The weather forecast that helped you decide what to wear, for example, was an interpretation made by a meteorologist who analyzed data collected by satellites. Data may take the form of the number of bacteria colonies growing in soup broth (see our Experimentation in Science module), a series of drawings or photographs of the different layers of rock that form a mountain range (see our Description in Science module), a tally of lung cancer victims in populations of cigarette smokers and non-smokers (see our Comparison in Science module), or the changes in average annual temperature predicted by a model of global climate (see our Modeling in Science module).

Scientific data collection involves more care than you might use in a casual glance at the thermometer to see what you should wear. Because scientists build on their own work and the work of others, it is important that they are systematic and consistent in their data collection methods and make detailed records so that others can see and use the data they collect.

But collecting data is only one step in a scientific investigation, and scientific knowledge is much more than a simple compilation of data points. The world is full of observations that can be made, but not every observation constitutes a useful piece of data. For example, your meteorologist could record the outside air temperature every second of the day, but would that make the forecast any more accurate than recording it once an hour? Probably not. All scientists make choices about which data are most relevant to their research and what to do with those data: how to turn a collection of measurements into a useful dataset through processing and analysis , and how to interpret those analyzed data in the context of what they already know. The thoughtful and systematic collection, analysis, and interpretation of data allow them to be developed into evidence that supports scientific ideas, arguments, and hypotheses .

Data collection, analysis, and interpretation: Weather and climate

The weather has long been a subject of widespread data collection, analysis, and interpretation. Accurate measurements of air temperature became possible in the early 1700s, when Daniel Gabriel Fahrenheit invented the first standardized mercury thermometer in 1714 (see our Temperature module). Air temperature, wind speed, and wind direction are all critical navigational information for sailors on the ocean, but in the late 1700s and early 1800s, as sailing expeditions became common, this information was not easy to come by. The lack of reliable data was of great concern to Matthew Fontaine Maury, the superintendent of the Depot of Charts and Instruments of the US Navy. As a result, Maury organized the first international Maritime Conference, held in Brussels, Belgium, in 1853. At this meeting, international standards for taking weather measurements on ships were established and a system for sharing this information between countries was founded.

Defining uniform data collection standards was an important step in producing a truly global dataset of meteorological information, allowing data collected by many different people in different parts of the world to be gathered together into a single database. Maury's compilation of sailors' standardized data on wind and currents is shown in Figure 1. The early international cooperation and investment in weather-related data collection has produced a valuable long-term record of air temperature that goes back to the 1850s.

Figure 1: Plate XV from Maury, Matthew F. 1858. The Winds. Chapter in Explanations and Sailing Directions. Washington: Hon. Isaac Toucey.

This vast store of information is considered "raw" data: tables of numbers (dates and temperatures), descriptions (cloud cover), location, etc. Raw data can be useful in and of itself – for example, if you wanted to know the air temperature in London on June 5, 1801. But the data alone cannot tell you anything about how temperature has changed in London over the past two hundred years, or how that information is related to global-scale climate change. In order for patterns and trends to be seen, data must be analyzed and interpreted first. The analyzed and interpreted data may then be used as evidence in scientific arguments, to support a hypothesis or a theory .

Good data are a potential treasure trove – they can be mined by scientists at any time – and thus an important part of any scientific investigation is accurate and consistent recording of data and the methods used to collect those data. The weather data collected since the 1850s have been just such a treasure trove, based in part upon the standards established by Matthew Maury. These standards provided guidelines for data collection and recording that assured consistency within the dataset. At the time, ship captains were able to utilize the data to determine the most reliable routes to sail across the oceans. Many modern scientists studying climate change have taken advantage of this same dataset to understand how global air temperatures have changed over the recent past. In neither case can one simply look at the table of numbers and observations and answer the question – which route to take, or how global climate has changed. Instead, both questions require analysis and interpretation of the data.


Data analysis: A complex and challenging process

Though it may sound straightforward to take 150 years of air temperature data and describe how global climate has changed, the process of analyzing and interpreting those data is actually quite complex. Consider the range of temperatures around the world on any given day in January (see Figure 2): In Johannesburg, South Africa, where it is summer, the air temperature can reach 35° C (95° F), while in Fairbanks, Alaska, at that same time of year, it is the middle of winter and air temperatures might be -35° C (-31° F). Now consider that over huge expanses of the ocean, no consistent measurements are available at all. One could simply take an average of all of the available measurements for a single day to get a global air temperature average for that day, but that number would not take into account the natural variability within, and the uneven distribution of, those measurements.

Figure 2: Satellite image composite of average air temperatures (in degrees Celsius) across the globe on January 2, 2008 (http://www.ssec.wisc.edu/data/).

Defining a single global average temperature requires scientists to make several decisions about how to process all of those data into a meaningful set of numbers. In 1986, climatologists Phil Jones, Tom Wigley, and Peter Wright published one of the first attempts to assess changes in global mean surface air temperature from 1861 to 1984 (Jones, Wigley, & Wright, 1986). The majority of their paper – three out of five pages – describes the processing techniques they used to correct for the problems and inconsistencies in the historical data that would not be related to climate. For example, the authors note:

Early SSTs [sea surface temperatures] were measured using water collected in uninsulated, canvas buckets, while more recent data come either from insulated bucket or cooling water intake measurements, with the latter considered to be 0.3-0.7° C warmer than uninsulated bucket measurements.

Correcting for this bias may seem simple, just adding ~0.5° C to early canvas bucket measurements, but it becomes more complicated than that because, the authors continue, the majority of SST data do not include a description of what kind of bucket or system was used.

Similar problems were encountered with marine air temperature data . Historical air temperature measurements over the ocean were taken aboard ships, but the type and size of ship could affect the measurement because size "determines the height at which observations were taken." Air temperature can change rapidly with height above the ocean. The authors therefore applied a correction for ship size in their data. Once Jones, Wigley, and Wright had made several of these kinds of corrections, they analyzed their data using a spatial averaging technique that placed measurements within grid cells on the Earth's surface in order to account for the fact that there were many more measurements taken on land than over the oceans.

Developing this grid required many decisions based on their experience and judgment, such as how large each grid cell needed to be and how to distribute the cells over the Earth. They then calculated the mean temperature within each grid cell, and combined all of these means to calculate a global average air temperature for each year. Statistical techniques such as averaging are commonly used in the research process and can help identify trends and relationships within and between datasets (see our Statistics in Science module). Once these spatially averaged global mean temperatures were calculated, the authors compared the means over time from 1861 to 1984.
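A much-simplified sketch of this kind of spatial averaging is shown below with invented measurements: readings are binned into latitude/longitude grid cells, averaged within each cell, and the cell means are then combined using an approximate area weight (the cosine of latitude). The 30-degree grid and the weighting scheme are illustrative choices, not those used by Jones and colleagues.

```python
# Simplified sketch of spatial (grid-cell) averaging with invented data.
# Measurements are binned into lat/lon cells, averaged within each cell,
# and cell means are combined with approximate area weights (cos(latitude)).
# The 30-degree grid and the weighting are illustrative only.
import math
from collections import defaultdict

# (latitude, longitude, temperature in deg C) -- invented measurements
measurements = [
    (51.5, -0.1, 11.2), (52.2, 0.1, 10.8), (48.9, 2.3, 12.0),  # European cluster
    (-26.2, 28.0, 25.5),                                        # Johannesburg
    (64.8, -147.7, -30.0),                                      # Fairbanks
]

CELL = 30.0  # grid cell size in degrees

def cell_of(lat, lon):
    return (math.floor(lat / CELL), math.floor(lon / CELL))

# Average within each grid cell so densely sampled regions do not dominate.
cell_values = defaultdict(list)
for lat, lon, temp in measurements:
    cell_values[cell_of(lat, lon)].append((lat, temp))

weighted_sum = 0.0
weight_total = 0.0
for cell, values in cell_values.items():
    cell_mean = sum(t for _, t in values) / len(values)
    mean_lat = sum(lat for lat, _ in values) / len(values)
    weight = math.cos(math.radians(mean_lat))  # crude area weight
    weighted_sum += weight * cell_mean
    weight_total += weight

print(f"Grid-weighted mean: {weighted_sum / weight_total:.1f} deg C")
```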

A common method for analyzing data that occur in a series, such as temperature measurements over time, is to look at anomalies, or differences from a pre-defined reference value . In this case, the authors compared their temperature values to the mean of the years 1970-1979 (see Figure 3). This reference mean is subtracted from each annual mean to produce the jagged lines in Figure 3, which display positive or negative anomalies (values greater or less than zero). Though this may seem to be a circular or complex way to display these data, it is useful because the goal is to show change in mean temperatures rather than absolute values.

Figure 3: The black line shows global temperature anomalies, or differences between averaged yearly temperature measurements and the reference value for the entire globe. The smooth, red line is a filtered 10-year average. (Based on Figure 5 in Jones et al., 1986).

Putting data into a visual format can facilitate additional analysis (see our Using Graphs and Visual Data module). Figure 3 shows a lot of variability in the data: There are a number of spikes and dips in global temperature throughout the period examined. It can be challenging to see trends in data that have so much variability; our eyes are drawn to the extreme values in the jagged lines like the large spike in temperature around 1876 or the significant dip around 1918. However, these extremes do not necessarily reflect long-term trends in the data.

In order to more clearly see long-term patterns and trends, Jones and his co-authors used another processing technique and applied a filter to the data by calculating a 10-year running average to smooth the data. The smooth line in the graph represents the filtered data; it follows the data closely but does not reach the extreme values.
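A brief sketch of these two steps, using invented annual values: anomalies are computed relative to a 1970-1979 reference mean, and a 10-year rolling average then smooths out the short-term spikes.

```python
# Sketch: temperature anomalies relative to a 1970-1979 reference mean,
# followed by a 10-year running average. The annual values are invented.
import pandas as pd

years = range(1960, 1985)
temps = pd.Series(
    [14.0 + 0.01 * (y - 1960) + 0.2 * ((-1) ** y) for y in years],  # fake data
    index=years,
)

# Anomaly = annual mean minus the mean of the 1970-1979 reference period.
reference_mean = temps.loc[1970:1979].mean()
anomalies = temps - reference_mean

# 10-year running (rolling) average to smooth short-term variability.
smoothed = anomalies.rolling(window=10, center=True).mean()

print(anomalies.round(2))
print(smoothed.round(2))
```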

Data processing and analysis are sometimes misinterpreted as manipulating data to achieve the desired results, but in reality, the goal of these methods is to make the data clearer, not to change it fundamentally. As described above, in addition to reporting data, scientists report the data processing and analysis methods they use when they publish their work (see our Understanding Scientific Journals and Articles module), allowing their peers the opportunity to assess both the raw data and the techniques used to analyze them.

Data interpretation: Uncovering and explaining trends in the data

The analyzed data can then be interpreted and explained. In general, when scientists interpret data, they attempt to explain the patterns and trends uncovered through analysis , bringing all of their background knowledge, experience, and skills to bear on the question and relating their data to existing scientific ideas. Given the personal nature of the knowledge they draw upon, this step can be subjective, but that subjectivity is scrutinized through the peer review process (see our Peer Review in Science module). Based on the smoothed curves, Jones, Wigley, and Wright interpreted their data to show a long-term warming trend. They note that the three warmest years in the entire dataset are 1980, 1981, and 1983. They do not go further in their interpretation to suggest possible causes for the temperature increase, however, but merely state that the results are "extremely interesting when viewed in the light of recent ideas of the causes of climate change."

Making data available

The process of data collection, analysis , and interpretation happens on multiple scales. It occurs over the course of a day, a year, or many years, and may involve one or many scientists whose priorities change over time. One of the fundamentally important components of the practice of science is therefore the publication of data in the scientific literature (see our Utilizing the Scientific Literature module). Properly collected and archived data continues to be useful as new research questions emerge. In fact, some research involves re-analysis of data with new techniques, different ways of looking at the data, or combining the results of several studies.

For example, in 1997, the Collaborative Group on Hormonal Factors in Breast Cancer published a widely publicized study in the prestigious medical journal The Lancet entitled, "Breast cancer and hormone replacement therapy: collaborative reanalysis of data from 51 epidemiological studies of 52,705 women with breast cancer and 108,411 women without breast cancer" (Collaborative Group on Hormonal Factors in Breast Cancer, 1997). The possible link between breast cancer and hormone replacement therapy (HRT) had been studied for years, with mixed results: Some scientists suggested a small increase of cancer risk associated with HRT as early as 1981 (Brinton et al., 1981), but later research suggested no increased risk (Kaufman et al., 1984). By bringing together results from numerous studies and reanalyzing the data together, the researchers concluded that women who were treated with hormone replacement therapy were more likely to develop breast cancer. In describing why the reanalysis was used, the authors write:

The increase in the relative risk of breast cancer associated with each year of [HRT] use in current and recent users is small, so inevitably some studies would, by chance alone, show significant associations and others would not. Combination of the results across many studies has the obvious advantage of reducing such random fluctuations.

In many cases, data collected for other purposes can be used to address new questions. The initial reason for collecting weather data, for example, was to better predict winds and storms to help assure safe travel for trading ships. It is only more recently that interest shifted to long-term changes in the weather, but the same data easily contribute to answering both of those questions.

Technology for sharing data advances science

One of the most exciting advances in science today is the development of public databases of scientific information that can be accessed and used by anyone. For example, climatic and oceanographic data , which are generally very expensive to obtain because they require large-scale operations like drilling ice cores or establishing a network of buoys across the Pacific Ocean, are shared online through several web sites run by agencies responsible for maintaining and distributing those data, such as the Carbon Dioxide Information Analysis Center run by the US Department of Energy (see Research under the Resources tab). Anyone can download those data to conduct their own analyses and make interpretations . Likewise, the Human Genome Project has a searchable database of the human genome, where researchers can both upload and download their data (see Research under the Resources tab).

The number of these widely available datasets has grown to the point where the National Institute of Standards and Technology actually maintains a database of databases. Some organizations require their participants to make their data publicly available, such as the Incorporated Research Institutions for Seismology (IRIS): The instrumentation branch of IRIS provides support for researchers by offering seismic instrumentation, equipment maintenance and training, and logistical field support for experiments . Anyone can apply to use the instruments as long as they provide IRIS with the data they collect during their seismic experiments. IRIS then makes these data available to the public.

Making data available to other scientists is not a new idea, but having those data available on the Internet in a searchable format has revolutionized the way that scientists can interact with the data, allowing for research efforts that would have been impossible before. This collective pooling of data also allows for new kinds of analysis and interpretation on global scales and over long periods of time. In addition, making data easily accessible helps promote interdisciplinary research by opening the doors to exploration by diverse scientists in many fields.


Data Collection, Analysis, and Interpretation

by Mark F. McEntee

Often it has been said that proper prior preparation prevents poor performance. Many of the mistakes made in research have their origins back at the point of data collection. Perhaps it is natural human instinct not to plan; we learn from our experiences. However, it is crucial when it comes to the endeavours of science that we do plan our data collection with analysis and interpretation in mind. In this section on data collection, we will review some fundamental concepts of experimental design, sample size estimation, the assumptions that underlie most statistical processes, and ethical principles.
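As a small example of planning analysis in advance, the sketch below shows a textbook sample-size calculation for comparing two group means using the normal-approximation formula; the significance level, power and assumed effect size are illustrative inputs, not recommendations.

```python
# Sketch: approximate sample size per group for comparing two means,
# using the standard normal-approximation formula
#   n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 / d^2
# where d is the standardized effect size. All inputs are illustrative.
import math
from scipy.stats import norm

alpha = 0.05   # two-sided significance level
power = 0.80   # 1 - beta
d = 0.5        # assumed standardized effect size (Cohen's d)

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)
n_per_group = 2 * (z_alpha + z_beta) ** 2 / d ** 2

print(math.ceil(n_per_group))  # roughly 63 participants per group
```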



About this chapter

McEntee, M.F. (2021). Data Collection, Analysis, and Interpretation. In: Seeram, E., Davidson, R., England, A., McEntee, M.F. (eds) Research for Medical Imaging and Radiation Sciences. Springer, Cham. https://doi.org/10.1007/978-3-030-79956-4_6



Chapter 15: Interpreting results and drawing conclusions

Holger J Schünemann, Gunn E Vist, Julian PT Higgins, Nancy Santesso, Jonathan J Deeks, Paul Glasziou, Elie A Akl, Gordon H Guyatt; on behalf of the Cochrane GRADEing Methods Group

Key Points:

  • This chapter provides guidance on interpreting the results of synthesis in order to communicate the conclusions of the review effectively.
  • Methods are presented for computing, presenting and interpreting relative and absolute effects for dichotomous outcome data, including the number needed to treat (NNT).
  • For continuous outcome measures, review authors can present summary results for studies using natural units of measurement or as minimal important differences when all studies use the same scale. When studies measure the same construct but with different scales, review authors will need to find a way to interpret the standardized mean difference, or to use an alternative effect measure for the meta-analysis such as the ratio of means.
  • Review authors should not describe results as ‘statistically significant’, ‘not statistically significant’ or ‘non-significant’ or unduly rely on thresholds for P values, but report the confidence interval together with the exact P value.
  • Review authors should not make recommendations about healthcare decisions, but they can – after describing the certainty of evidence and the balance of benefits and harms – highlight different actions that might be consistent with particular patterns of values and preferences and other factors that determine a decision such as cost.

Cite this chapter as: Schünemann HJ, Vist GE, Higgins JPT, Santesso N, Deeks JJ, Glasziou P, Akl EA, Guyatt GH. Chapter 15: Interpreting results and drawing conclusions. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA (editors). Cochrane Handbook for Systematic Reviews of Interventions version 6.4 (updated August 2023). Cochrane, 2023. Available from www.training.cochrane.org/handbook .

15.1 Introduction

The purpose of Cochrane Reviews is to facilitate healthcare decisions by patients and the general public, clinicians, guideline developers, administrators and policy makers. They also inform future research. A clear statement of findings, a considered discussion and a clear presentation of the authors’ conclusions are, therefore, important parts of the review. In particular, the following issues can help people make better informed decisions and increase the usability of Cochrane Reviews:

  • information on all important outcomes, including adverse outcomes;
  • the certainty of the evidence for each of these outcomes, as it applies to specific populations and specific interventions; and
  • clarification of the manner in which particular values and preferences may bear on the desirable and undesirable consequences of the intervention.

A ‘Summary of findings’ table, described in Chapter 14 , Section 14.1 , provides key pieces of information about health benefits and harms in a quick and accessible format. It is highly desirable that review authors include a ‘Summary of findings’ table in Cochrane Reviews alongside a sufficient description of the studies and meta-analyses to support its contents. This description includes the rating of the certainty of evidence, also called the quality of the evidence or confidence in the estimates of the effects, which is expected in all Cochrane Reviews.

‘Summary of findings’ tables are usually supported by full evidence profiles which include the detailed ratings of the evidence (Guyatt et al 2011a, Guyatt et al 2013a, Guyatt et al 2013b, Santesso et al 2016). The Discussion section of the text of the review provides space to reflect and consider the implications of these aspects of the review’s findings. Cochrane Reviews include five standard subheadings to ensure the Discussion section places the review in an appropriate context: ‘Summary of main results (benefits and harms)’; ‘Potential biases in the review process’; ‘Overall completeness and applicability of evidence’; ‘Certainty of the evidence’; and ‘Agreements and disagreements with other studies or reviews’. Following the Discussion, the Authors’ conclusions section is divided into two standard subsections: ‘Implications for practice’ and ‘Implications for research’. The assessment of the certainty of evidence facilitates a structured description of the implications for practice and research.

Because Cochrane Reviews have an international audience, the Discussion and Authors’ conclusions should, so far as possible, assume a broad international perspective and provide guidance for how the results could be applied in different settings, rather than being restricted to specific national or local circumstances. Cultural differences and economic differences may both play an important role in determining the best course of action based on the results of a Cochrane Review. Furthermore, individuals within societies have widely varying values and preferences regarding health states, and use of societal resources to achieve particular health states. For all these reasons, and because information that goes beyond that included in a Cochrane Review is required to make fully informed decisions, different people will often make different decisions based on the same evidence presented in a review.

Thus, review authors should avoid specific recommendations that inevitably depend on assumptions about available resources, values and preferences, and other factors such as equity considerations, feasibility and acceptability of an intervention. The purpose of the review should be to present information and aid interpretation rather than to offer recommendations. The discussion and conclusions should help people understand the implications of the evidence in relation to practical decisions and apply the results to their specific situation. Review authors can aid this understanding of the implications by laying out different scenarios that describe certain value structures.

In this chapter, we address first one of the key aspects of interpreting findings that is also fundamental in completing a ‘Summary of findings’ table: the certainty of evidence related to each of the outcomes. We then provide a more detailed consideration of issues around applicability and around interpretation of numerical results, and provide suggestions for presenting authors’ conclusions.

15.2 Issues of indirectness and applicability

15.2.1 The role of the review author

“A leap of faith is always required when applying any study findings to the population at large” or to a specific person. “In making that jump, one must always strike a balance between making justifiable broad generalizations and being too conservative in one’s conclusions” (Friedman et al 1985). In addition to issues about risk of bias and other domains determining the certainty of evidence, this leap of faith is related to how well the identified body of evidence matches the posed PICO ( Population, Intervention, Comparator(s) and Outcome ) question. As to the population, no individual can be entirely matched to the population included in research studies. At the time of decision, there will always be differences between the study population and the person or population to whom the evidence is applied; sometimes these differences are slight, sometimes large.

The terms applicability, generalizability, external validity and transferability are related, sometimes used interchangeably and have in common that they lack a clear and consistent definition in the classic epidemiological literature (Schünemann et al 2013). However, all of the terms describe one overarching theme: whether or not available research evidence can be directly used to answer the health and healthcare question at hand, ideally supported by a judgement about the degree of confidence in this use (Schünemann et al 2013). GRADE’s certainty domains include a judgement about ‘indirectness’ to describe all of these aspects including the concept of direct versus indirect comparisons of different interventions (Atkins et al 2004, Guyatt et al 2008, Guyatt et al 2011b).

To address adequately the extent to which a review is relevant for the purpose to which it is being put, there are certain things the review author must do, and certain things the user of the review must do to assess the degree of indirectness. Cochrane and the GRADE Working Group suggest using a very structured framework to address indirectness. We discuss here and in Chapter 14 what the review author can do to help the user. Cochrane Review authors must be extremely clear on the population, intervention and outcomes that they intend to address. Chapter 14, Section 14.1.2 , also emphasizes a crucial step: the specification of all patient-important outcomes relevant to the intervention strategies under comparison.

In considering whether the effect of an intervention applies equally to all participants, and whether different variations on the intervention have similar effects, review authors need to make a priori hypotheses about possible effect modifiers, and then examine those hypotheses (see Chapter 10, Section 10.10 and Section 10.11 ). If they find apparent subgroup effects, they must ultimately decide whether or not these effects are credible (Sun et al 2012). Differences between subgroups, particularly those that correspond to differences between studies, should be interpreted cautiously. Some chance variation between subgroups is inevitable so, unless there is good reason to believe that there is an interaction, review authors should not assume that the subgroup effect exists. If, despite due caution, review authors judge subgroup effects in terms of relative effect estimates as credible (i.e. the effects differ credibly), they should conduct separate meta-analyses for the relevant subgroups, and produce separate ‘Summary of findings’ tables for those subgroups.

The user of the review will be challenged with ‘individualization’ of the findings, whether they seek to apply the findings to an individual patient or a policy decision in a specific context. For example, even if relative effects are similar across subgroups, absolute effects will differ according to baseline risk. Review authors can help provide this information by identifying identifiable groups of people with varying baseline risks in the ‘Summary of findings’ tables, as discussed in Chapter 14, Section 14.1.3 . Users can then identify their specific case or population as belonging to a particular risk group, if relevant, and assess their likely magnitude of benefit or harm accordingly. A description of the identifying prognostic or baseline risk factors in a brief scenario (e.g. age or gender) will help users of a review further.

Another decision users must make is whether their individual case or population of interest is so different from those included in the studies that they cannot use the results of the systematic review and meta-analysis at all. Rather than rigidly applying the inclusion and exclusion criteria of studies, it is better to ask whether or not there are compelling reasons why the evidence should not be applied to a particular patient. Review authors can sometimes help decision makers by identifying important variation where divergence might limit the applicability of results (Rothwell 2005, Schünemann et al 2006, Guyatt et al 2011b, Schünemann et al 2013), including biologic and cultural variation, and variation in adherence to an intervention.

In addressing these issues, review authors cannot be aware of, or address, the myriad of differences in circumstances around the world. They can, however, address differences of known importance to many people and, importantly, they should avoid assuming that other people’s circumstances are the same as their own in discussing the results and drawing conclusions.

15.2.2 Biological variation

Issues of biological variation that may affect the applicability of a result to a reader or population include divergence in pathophysiology (e.g. biological differences between women and men that may affect responsiveness to an intervention) and divergence in a causative agent (e.g. for infectious diseases such as malaria, which may be caused by several different parasites). The discussion of the results in the review should make clear whether the included studies addressed all or only some of these groups, and whether any important subgroup effects were found.

15.2.3 Variation in context

Some interventions, particularly non-pharmacological interventions, may work in some contexts but not in others; the situation has been described as program by context interaction (Hawe et al 2004). Contextual factors might pertain to the host organization in which an intervention is offered, such as the expertise, experience and morale of the staff expected to carry out the intervention, the competing priorities for the clinician’s or staff’s attention, the local resources such as service and facilities made available to the program and the status or importance given to the program by the host organization. Broader context issues might include aspects of the system within which the host organization operates, such as the fee or payment structure for healthcare providers and the local insurance system. Some interventions, in particular complex interventions (see Chapter 17 ), can be only partially implemented in some contexts, and this requires judgements about indirectness of the intervention and its components for readers in that context (Schünemann 2013).

Contextual factors may also pertain to the characteristics of the target group or population, such as cultural and linguistic diversity, socio-economic position, rural/urban setting. These factors may mean that a particular style of care or relationship evolves between service providers and consumers that may or may not match the values and technology of the program.

For many years these aspects have been acknowledged when decision makers have argued that results of evidence reviews from other countries do not apply in their own country or setting. Whilst some programmes/interventions have been successfully transferred from one context to another, others have not (Resnicow et al 1993, Lumley et al 2004, Coleman et al 2015). Review authors should be cautious when making generalizations from one context to another. They should report on the presence (or otherwise) of context-related information in intervention studies, where this information is available.

15.2.4 Variation in adherence

Variation in the adherence of the recipients and providers of care can limit the certainty in the applicability of results. Predictable differences in adherence can be due to divergence in how recipients of care perceive the intervention (e.g. the importance of side effects), economic conditions or attitudes that make some forms of care inaccessible in some settings, such as in low-income countries (Dans et al 2007). It should not be assumed that high levels of adherence in closely monitored randomized trials will translate into similar levels of adherence in normal practice.

15.2.5 Variation in values and preferences

Decisions about healthcare management strategies and options involve trading off health benefits and harms. The right choice may differ for people with different values and preferences (i.e. the importance people place on the outcomes and interventions), and it is important that decision makers ensure that decisions are consistent with a patient or population’s values and preferences. The importance placed on outcomes, together with other factors, will influence whether the recipients of care will or will not accept an option that is offered (Alonso-Coello et al 2016) and, thus, can be one factor influencing adherence. In Section 15.6 , we describe how the review author can help this process and the limits of supporting decision making based on intervention reviews.

15.3 Interpreting results of statistical analyses

15.3.1 Confidence intervals

Results for both individual studies and meta-analyses are reported with a point estimate together with an associated confidence interval. For example, ‘The odds ratio was 0.75 with a 95% confidence interval of 0.70 to 0.80’. The point estimate (0.75) is the best estimate of the magnitude and direction of the experimental intervention’s effect compared with the comparator intervention. The confidence interval describes the uncertainty inherent in any estimate, and describes a range of values within which we can be reasonably sure that the true effect actually lies. If the confidence interval is relatively narrow (e.g. 0.70 to 0.80), the effect size is known precisely. If the interval is wider (e.g. 0.60 to 0.93) the uncertainty is greater, although there may still be enough precision to make decisions about the utility of the intervention. Intervals that are very wide (e.g. 0.50 to 1.10) indicate that we have little knowledge about the effect and this imprecision affects our certainty in the evidence, and that further information would be needed before we could draw a more certain conclusion.

A 95% confidence interval is often interpreted as indicating a range within which we can be 95% certain that the true effect lies. This statement is a loose interpretation, but is useful as a rough guide. The strictly correct interpretation of a confidence interval is based on the hypothetical notion of considering the results that would be obtained if the study were repeated many times. If a study were repeated infinitely often, and on each occasion a 95% confidence interval calculated, then 95% of these intervals would contain the true effect (see Section 15.3.3 for further explanation).
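This repeated-sampling idea can be illustrated with a small simulation (a sketch with an arbitrary true mean, spread and sample size): across many simulated studies, roughly 95% of the computed 95% confidence intervals contain the true value.

```python
# Simulation sketch of the repeated-sampling interpretation of a 95% CI:
# across many simulated studies, about 95% of intervals contain the truth.
# The true mean, spread and sample size are arbitrary choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mean, sd, n, n_studies = 10.0, 2.0, 50, 10_000

covered = 0
for _ in range(n_studies):
    sample = rng.normal(true_mean, sd, n)
    mean = sample.mean()
    sem = sample.std(ddof=1) / np.sqrt(n)
    half_width = stats.t.ppf(0.975, df=n - 1) * sem
    if mean - half_width <= true_mean <= mean + half_width:
        covered += 1

print(f"Coverage: {covered / n_studies:.1%}")  # close to 95%
```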

The width of the confidence interval for an individual study depends to a large extent on the sample size. Larger studies tend to give more precise estimates of effects (and hence have narrower confidence intervals) than smaller studies. For continuous outcomes, precision depends also on the variability in the outcome measurements (i.e. how widely individual results vary between people in the study, measured as the standard deviation); for dichotomous outcomes it depends on the risk of the event (more frequent events allow more precision, and narrower confidence intervals), and for time-to-event outcomes it also depends on the number of events observed. All these quantities are used in computation of the standard errors of effect estimates from which the confidence interval is derived.

The width of a confidence interval for a meta-analysis depends on the precision of the individual study estimates and on the number of studies combined. In addition, for random-effects models, precision will decrease with increasing heterogeneity and confidence intervals will widen correspondingly (see Chapter 10, Section 10.10.4 ). As more studies are added to a meta-analysis the width of the confidence interval usually decreases. However, if the additional studies increase the heterogeneity in the meta-analysis and a random-effects model is used, it is possible that the confidence interval width will increase.

Confidence intervals and point estimates have different interpretations in fixed-effect and random-effects models. While the fixed-effect estimate and its confidence interval address the question ‘what is the best (single) estimate of the effect?’, the random-effects estimate assumes there to be a distribution of effects, and the estimate and its confidence interval address the question ‘what is the best estimate of the average effect?’ A confidence interval may be reported for any level of confidence (although they are most commonly reported for 95%, and sometimes 90% or 99%). For example, the odds ratio of 0.80 could be reported with an 80% confidence interval of 0.73 to 0.88; a 90% interval of 0.72 to 0.89; and a 95% interval of 0.70 to 0.92. As the confidence level increases, the confidence interval widens.

There is logical correspondence between the confidence interval and the P value (see Section 15.3.3 ). The 95% confidence interval for an effect will exclude the null value (such as an odds ratio of 1.0 or a risk difference of 0) if and only if the test of significance yields a P value of less than 0.05. If the P value is exactly 0.05, then either the upper or lower limit of the 95% confidence interval will be at the null value. Similarly, the 99% confidence interval will exclude the null if and only if the test of significance yields a P value of less than 0.01.
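
This correspondence can be checked directly: computing the two-sided P value for a Z-test of ‘no effect’ from the same point estimate and standard error used for the 95% interval gives P < 0.05 exactly when the interval excludes the null. The odds ratios and standard errors in the sketch below are hypothetical.

```python
import math

def two_sided_p(log_effect, se):
    """Two-sided P value for a Z-test of the null hypothesis of no effect (log effect = 0)."""
    z = abs(log_effect) / se
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

for odds_ratio, se in [(0.75, 0.10), (0.90, 0.10)]:
    log_or = math.log(odds_ratio)
    lower, upper = math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se)
    p = two_sided_p(log_or, se)
    print(f"OR {odds_ratio}: 95% CI {lower:.2f} to {upper:.2f}; "
          f"excludes 1.0: {upper < 1.0 or lower > 1.0}; P = {p:.3f}; P < 0.05: {p < 0.05}")
```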

Together, the point estimate and confidence interval provide information to assess the effects of the intervention on the outcome. For example, suppose that we are evaluating an intervention that reduces the risk of an event and we decide that it would be useful only if it reduced the risk of an event from 30% by at least 5 percentage points to 25% (these values will depend on the specific clinical scenario and outcomes, including the anticipated harms). If the meta-analysis yielded an effect estimate of a reduction of 10 percentage points with a tight 95% confidence interval, say, from 7% to 13%, we would be able to conclude that the intervention was useful since both the point estimate and the entire range of the interval exceed our criterion of a reduction of 5% for net health benefit. However, if the meta-analysis reported the same risk reduction of 10% but with a wider interval, say, from 2% to 18%, although we would still conclude that our best estimate of the intervention effect is that it provides net benefit, we could not be so confident as we still entertain the possibility that the effect could be between 2% and 5%. If the confidence interval was wider still, and included the null value of a difference of 0%, we would still consider the possibility that the intervention has no effect on the outcome whatsoever, and would need to be even more sceptical in our conclusions.

Review authors may use the same general approach to conclude that an intervention is not useful. Continuing with the above example where the criterion for an important difference that should be achieved to provide more benefit than harm is a 5% risk difference, an effect estimate of 2% with a 95% confidence interval of 1% to 4% suggests that the intervention does not provide net health benefit.

15.3.2 P values and statistical significance

A P value is the standard result of a statistical test, and is the probability of obtaining the observed effect (or larger) under a ‘null hypothesis’. In the context of Cochrane Reviews there are two commonly used statistical tests. The first is a test of overall effect (a Z-test), and its null hypothesis is that there is no overall effect of the experimental intervention compared with the comparator on the outcome of interest. The second is the chi-squared (Chi²) test for heterogeneity, and its null hypothesis is that there are no differences in the intervention effects across studies.

A P value that is very small indicates that the observed effect is very unlikely to have arisen purely by chance, and therefore provides evidence against the null hypothesis. It has been common practice to interpret a P value by examining whether it is smaller than particular threshold values. In particular, P values less than 0.05 are often reported as ‘statistically significant’, and interpreted as being small enough to justify rejection of the null hypothesis. However, the 0.05 threshold is an arbitrary one that became commonly used in medical and psychological research largely because P values were determined by comparing the test statistic against tabulations of specific percentage points of statistical distributions. If review authors decide to present a P value with the results of a meta-analysis, they should report a precise P value (as calculated by most statistical software), together with the 95% confidence interval. Review authors should not describe results as ‘statistically significant’, ‘not statistically significant’ or ‘non-significant’ or unduly rely on thresholds for P values , but report the confidence interval together with the exact P value (see MECIR Box 15.3.a ).

We discuss interpretation of the test for heterogeneity in Chapter 10, Section 10.10.2 ; the remainder of this section refers mainly to tests for an overall effect. For tests of an overall effect, the computation of P involves both the effect estimate and precision of the effect estimate (driven largely by sample size). As precision increases, the range of plausible effects that could occur by chance is reduced. Correspondingly, the statistical significance of an effect of a particular magnitude will usually be greater (the P value will be smaller) in a larger study than in a smaller study.

P values are commonly misinterpreted in two ways. First, a moderate or large P value (e.g. greater than 0.05) may be misinterpreted as evidence that the intervention has no effect on the outcome. There is an important difference between this statement and the correct interpretation that there is a high probability that the observed effect on the outcome is due to chance alone. To avoid such a misinterpretation, review authors should always examine the effect estimate and its 95% confidence interval.

The second misinterpretation is to assume that a result with a small P value for the summary effect estimate implies that an experimental intervention has an important benefit. Such a misinterpretation is more likely to occur in large studies and meta-analyses that accumulate data over dozens of studies and thousands of participants. The P value addresses the question of whether the experimental intervention effect is precisely nil; it does not examine whether the effect is of a magnitude of importance to potential recipients of the intervention. In a large study, a small P value may represent the detection of a trivial effect that may not lead to net health benefit when compared with the potential harms (i.e. harmful effects on other important outcomes). Again, inspection of the point estimate and confidence interval helps correct interpretations (see Section 15.3.1 ).

MECIR Box 15.3.a Relevant expectations for conduct of intervention reviews

15.3.3 Relation between confidence intervals, statistical significance and certainty of evidence

The confidence interval (and imprecision) is only one domain that influences overall uncertainty about effect estimates. Uncertainty resulting from imprecision (i.e. statistical uncertainty) may be no less important than uncertainty from indirectness, or any other GRADE domain, in the context of decision making (Schünemann 2016). Thus, the extent to which interpretations of the confidence interval described in Sections 15.3.1 and 15.3.2 correspond to conclusions about overall certainty of the evidence for the outcome of interest depends on these other domains. If there are no concerns about other domains that determine the certainty of the evidence (i.e. risk of bias, inconsistency, indirectness or publication bias), then the interpretation in Sections 15.3.1 and 15.3.2 about the relation of the confidence interval to the true effect may be carried forward to the overall certainty. However, if there are concerns about the other domains that affect the certainty of the evidence, the interpretation about the true effect needs to be seen in the context of further uncertainty resulting from those concerns.

For example, nine randomized controlled trials in almost 6000 cancer patients indicated that the administration of heparin reduces the relative risk of venous thromboembolism (VTE) by 43% (95% CI 19% to 60%) (Akl et al 2011a). For patients with a plausible baseline risk of approximately 4.6% per year, this relative effect suggests that heparin leads to an absolute risk reduction of 20 fewer VTEs (95% CI 9 fewer to 27 fewer) per 1000 people per year (Akl et al 2011a). Now consider that the review authors or those applying the evidence in a guideline have lowered the certainty in the evidence as a result of indirectness. While the confidence intervals would remain unchanged, the certainty in that confidence interval and in the point estimate as reflecting the truth for the question of interest will be lowered. In fact, the certainty range will have unknown width so there will be unknown likelihood of a result within that range because of this indirectness. The lower the certainty in the evidence, the less we know about the width of the certainty range, although methods for quantifying risk of bias and understanding potential direction of bias may offer insight when lowered certainty is due to risk of bias. Nevertheless, decision makers must consider this uncertainty, and must do so in relation to the effect measure that is being evaluated (e.g. a relative or absolute measure). We will describe the impact on interpretations for dichotomous outcomes in Section 15.4.

15.4 Interpreting results from dichotomous outcomes (including numbers needed to treat)

15.4.1 Relative and absolute risk reductions

Clinicians may be more inclined to prescribe an intervention that reduces the relative risk of death by 25% than one that reduces the risk of death by 1 percentage point, although both presentations of the evidence may relate to the same benefit (i.e. a reduction in risk from 4% to 3%). The former refers to the relative reduction in risk and the latter to the absolute reduction in risk. As described in Chapter 6, Section 6.4.1 , there are several measures for comparing dichotomous outcomes in two groups. Meta-analyses are usually undertaken using risk ratios (RR), odds ratios (OR) or risk differences (RD), but there are several alternative ways of expressing results.

Relative risk reduction (RRR) is a convenient way of re-expressing a risk ratio as a percentage reduction:

RRR = 100% × (1 − RR)

For example, a risk ratio of 0.75 translates to a relative risk reduction of 25%, as in the example above.
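
As a minimal sketch of this re-expression, using the formula above:

```python
def relative_risk_reduction(risk_ratio):
    """Re-express a risk ratio as a percentage relative risk reduction."""
    return 100 * (1 - risk_ratio)

print(relative_risk_reduction(0.75))  # 25.0, i.e. a 25% relative risk reduction
```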

The risk difference is often referred to as the absolute risk reduction (ARR) or absolute risk increase (ARI), and may be presented as a percentage (e.g. 1%), as a decimal (e.g. 0.01), or as a count (e.g. 10 out of 1000). We consider different choices for presenting absolute effects in Section 15.4.3. We then describe computations for obtaining these numbers from the results of individual studies and of meta-analyses in Section 15.4.4.

15.4.2 Number needed to treat (NNT)

The number needed to treat (NNT) is a common alternative way of presenting information on the effect of an intervention. The NNT is defined as the expected number of people who need to receive the experimental rather than the comparator intervention for one additional person to either incur or avoid an event (depending on the direction of the result) in a given time frame. Thus, for example, an NNT of 10 can be interpreted as ‘it is expected that one additional (or one fewer) person will incur an event for every 10 participants receiving the experimental intervention rather than comparator over a given time frame’. It is important to be clear that:

  • since the NNT is derived from the risk difference, it is still a comparative measure of effect (experimental versus a specific comparator) and not a general property of a single intervention; and
  • the NNT gives an ‘expected value’. For example, NNT = 10 does not imply that one additional event will occur in each and every group of 10 people.

NNTs can be computed for both beneficial and detrimental events, and for interventions that cause both improvements and deteriorations in outcomes. In all instances NNTs are expressed as positive whole numbers. Some authors use the term ‘number needed to harm’ (NNH) when an intervention leads to an adverse outcome, or a decrease in a positive outcome, rather than improvement. However, this phrase can be misleading (most notably, it can easily be read to imply the number of people who will experience a harmful outcome if given the intervention), and it is strongly recommended that ‘number needed to harm’ and ‘NNH’ are avoided. The preferred alternative is to use phrases such as ‘number needed to treat for an additional beneficial outcome’ (NNTB) and ‘number needed to treat for an additional harmful outcome’ (NNTH) to indicate direction of effect.

As NNTs refer to events, their interpretation needs to be worded carefully when the binary outcome is a dichotomization of a scale-based outcome. For example, if the outcome is pain measured on a ‘none, mild, moderate or severe’ scale it may have been dichotomized as ‘none or mild’ versus ‘moderate or severe’. It would be inappropriate for an NNT from these data to be referred to as an ‘NNT for pain’. It is an ‘NNT for moderate or severe pain’.

15.4.3 Expressing risk differences

Users of reviews are liable to be influenced by the choice of statistical presentations of the evidence. Hoffrage and colleagues suggest that physicians’ inferences about statistical outcomes are more appropriate when they deal with ‘natural frequencies’ – whole numbers of people, both treated and untreated (e.g. treatment results in a drop from 20 out of 1000 to 10 out of 1000 women having breast cancer) – than when effects are presented as percentages (e.g. 1% absolute reduction in breast cancer risk) (Hoffrage et al 2000). Probabilities may be more difficult to understand than frequencies, particularly when events are rare. While standardization may be important in improving the presentation of research evidence (and participation in healthcare decisions), current evidence suggests that the presentation of natural frequencies for expressing differences in absolute risk is best understood by consumers of healthcare information (Akl et al 2011b). This evidence provides the rationale for presenting absolute risks in ‘Summary of findings’ tables as numbers of people with events per 1000 people receiving the intervention (see Chapter 14 ).

RRs and RRRs remain crucial because relative effects tend to be substantially more stable across risk groups than absolute effects (see Chapter 10, Section 10.4.3 ). Review authors can use their own data to study this consistency (Cates 1999, Smeeth et al 1999). Risk differences from studies are least likely to be consistent across baseline event rates; thus, they are rarely appropriate for computing numbers needed to treat in systematic reviews. If a relative effect measure (OR or RR) is chosen for meta-analysis, then a comparator group risk needs to be specified as part of the calculation of an RD or NNT. In addition, if there are several different groups of participants with different levels of risk, it is crucial to express absolute benefit for each clinically identifiable risk group, clarifying the time period to which this applies. Studies in patients with differing severity of disease, or studies with different lengths of follow-up will almost certainly have different comparator group risks. In these cases, different comparator group risks lead to different RDs and NNTs (except when the intervention has no effect). A recommended approach is to re-express an odds ratio or a risk ratio as a variety of RD or NNTs across a range of assumed comparator risks (ACRs) (McQuay and Moore 1997, Smeeth et al 1999). Review authors should bear these considerations in mind not only when constructing their ‘Summary of findings’ table, but also in the text of their review.

For example, a review of oral anticoagulants to prevent stroke presented information to users by describing absolute benefits for various baseline risks (Aguilar and Hart 2005, Aguilar et al 2007). They presented their principal findings as “The inherent risk of stroke should be considered in the decision to use oral anticoagulants in atrial fibrillation patients, selecting those who stand to benefit most for this therapy” (Aguilar and Hart 2005). Among high-risk atrial fibrillation patients with prior stroke or transient ischaemic attack who have stroke rates of about 12% (120 per 1000) per year, warfarin prevents about 70 strokes yearly per 1000 patients, whereas for low-risk atrial fibrillation patients (with a stroke rate of about 2% per year or 20 per 1000), warfarin prevents only 12 strokes. This presentation helps users to understand the important impact that typical baseline risks have on the absolute benefit that they can expect.
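
The arithmetic behind such a presentation is simply the assumed relative effect applied to each baseline risk. In the sketch below, the relative risk reduction of roughly 60% is an assumption back-calculated from the quoted ‘about 70 strokes prevented per 1000’ at a 12% baseline risk, so the output only approximately reproduces the numbers in the example.

```python
def events_prevented_per_1000(baseline_risk, relative_risk_reduction):
    """Absolute benefit per 1000 people per year implied by a relative effect at a given baseline risk."""
    return round(1000 * baseline_risk * relative_risk_reduction)

assumed_rrr = 0.60  # back-calculated from roughly 70 fewer strokes per 1000 at a 12% yearly risk
for label, yearly_risk in [("high-risk (prior stroke or TIA)", 0.12), ("low-risk", 0.02)]:
    prevented = events_prevented_per_1000(yearly_risk, assumed_rrr)
    print(f"{label}: about {prevented} strokes prevented per 1000 patients per year")
```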

15.4.4 Computations

Direct computation of risk difference (RD) or a number needed to treat (NNT) depends on the summary statistic (odds ratio, risk ratio or risk differences) available from the study or meta-analysis. When expressing results of meta-analyses, review authors should use, in the computations, whatever statistic they determined to be the most appropriate summary for meta-analysis (see Chapter 10, Section 10.4.3 ). Here we present calculations to obtain RD as a reduction in the number of participants per 1000. For example, a risk difference of –0.133 corresponds to 133 fewer participants with the event per 1000.

RDs and NNTs should not be computed from the aggregated total numbers of participants and events across the trials. This approach ignores the randomization within studies, and may produce seriously misleading results if there is unbalanced randomization in any of the studies. Using the pooled result of a meta-analysis is more appropriate. When computing NNTs, the values obtained are by convention always rounded up to the next whole number.

15.4.4.1 Computing NNT from a risk difference (RD)

An NNT may be computed from a risk difference as

NNT = 1 / |RD|

where the vertical bars (‘absolute value of’) in the denominator indicate that any minus sign should be ignored. It is convention to round the NNT up to the next whole number. For example, if the risk difference is –0.12 the NNT is 9; if the risk difference is –0.22 the NNT is 5. Cochrane Review authors should qualify the NNT as referring to benefit (improvement) or harm by denoting the NNT as NNTB or NNTH. Note that this approach, although feasible, should be used only for the results of a meta-analysis of risk differences. In most cases meta-analyses will be undertaken using a relative measure of effect (RR or OR), and those statistics should be used to calculate the NNT (see Section 15.4.4.2 and 15.4.4.3 ).
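
A minimal sketch of this calculation, including the rounding-up convention, using the risk differences from the example above:

```python
import math

def nnt_from_rd(risk_difference):
    """Number needed to treat from a risk difference, rounded up to the next whole number."""
    return math.ceil(1 / abs(risk_difference))

print(nnt_from_rd(-0.12))  # 9
print(nnt_from_rd(-0.22))  # 5
```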

15.4.4.2 Computing risk differences or NNT from a risk ratio

To aid interpretation of the results of a meta-analysis of risk ratios, review authors may compute an absolute risk reduction or NNT. In order to do this, an assumed comparator risk (ACR) (otherwise known as a baseline risk, or risk that the outcome of interest would occur with the comparator intervention) is required. It will usually be appropriate to do this for a range of different ACRs. The computation proceeds as follows:

risk difference = ACR × (1 − RR)
NNT = 1 / (ACR × |1 − RR|)

As an example, suppose the risk ratio is RR = 0.92, and an ACR = 0.3 (300 per 1000) is assumed. Then the effect on risk is 24 fewer per 1000:

risk difference = 0.3 × (1 − 0.92) = 0.024, i.e. 24 fewer per 1000

The NNT is 42:

NNT = 1 / (0.3 × (1 − 0.92)) = 1 / 0.024 = 41.7, rounded up to 42
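
The same calculation can be scripted; the sketch below reproduces the worked example (RR = 0.92, ACR = 0.3).

```python
import math

def rd_from_rr(risk_ratio, acr):
    """Risk difference (comparator risk minus intervention risk) implied by a risk ratio at an assumed comparator risk."""
    return acr * (1 - risk_ratio)

def nnt_from_rr(risk_ratio, acr):
    return math.ceil(1 / abs(rd_from_rr(risk_ratio, acr)))

rr, acr = 0.92, 0.3
print(round(1000 * rd_from_rr(rr, acr)))  # 24 fewer per 1000
print(nnt_from_rr(rr, acr))               # 42
```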

15.4.4.3 Computing risk differences or NNT from an odds ratio

Review authors may wish to compute a risk difference or NNT from the results of a meta-analysis of odds ratios. In order to do this, an ACR is required. It will usually be appropriate to do this for a range of different ACRs. The computation proceeds as follows:

risk difference = ACR − (OR × ACR) / (1 − ACR × (1 − OR))
NNT = 1 / |risk difference|

As an example, suppose the odds ratio is OR = 0.73, and a comparator risk of ACR = 0.3 is assumed. Then the effect on risk is 62 fewer per 1000:

risk difference = 0.3 − (0.73 × 0.3) / (1 − 0.3 × (1 − 0.73)) = 0.300 − 0.238 = 0.062, i.e. 62 fewer per 1000

The NNT is 17:

NNT = 1 / 0.062 = 16.2, rounded up to 17
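
A corresponding sketch for the odds ratio calculation, reproducing the worked example (OR = 0.73, ACR = 0.3):

```python
import math

def rd_from_or(odds_ratio, acr):
    """Risk difference implied by an odds ratio at an assumed comparator risk (ACR)."""
    intervention_risk = odds_ratio * acr / (1 - acr * (1 - odds_ratio))
    return acr - intervention_risk

def nnt_from_or(odds_ratio, acr):
    return math.ceil(1 / abs(rd_from_or(odds_ratio, acr)))

odds_ratio, acr = 0.73, 0.3
print(round(1000 * rd_from_or(odds_ratio, acr)))  # about 62 fewer per 1000
print(nnt_from_or(odds_ratio, acr))               # 17
```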

15.4.4.4 Computing risk ratio from an odds ratio

Because risk ratios are easier to interpret than odds ratios, but odds ratios have favourable mathematical properties, a review author may decide to undertake a meta-analysis based on odds ratios, but to express the result as a summary risk ratio (or relative risk reduction). This requires an ACR. Then

RR = OR / (1 − ACR × (1 − OR))

It will often be reasonable to perform this transformation using the median comparator group risk from the studies in the meta-analysis.
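
A minimal sketch of this conversion, reusing the odds ratio from the previous worked example and a hypothetical median comparator group risk of 0.3 as the ACR:

```python
def rr_from_or(odds_ratio, acr):
    """Risk ratio implied by an odds ratio at an assumed comparator risk (ACR)."""
    return odds_ratio / (1 - acr * (1 - odds_ratio))

print(round(rr_from_or(0.73, 0.3), 2))  # 0.79, i.e. a relative risk reduction of about 21%
```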

15.4.4.5 Computing confidence limits

Confidence limits for RDs and NNTs may be calculated by applying the above formulae to the upper and lower confidence limits for the summary statistic (RD, RR or OR) (Altman 1998). Note that this confidence interval does not incorporate uncertainty around the ACR.

If the 95% confidence interval of OR or RR includes the value 1, one of the confidence limits will indicate benefit and the other harm. Thus, appropriate use of the words ‘fewer’ and ‘more’ is required for each limit when presenting results in terms of events. For NNTs, the two confidence limits should be labelled as NNTB and NNTH to indicate the direction of effect in each case. The confidence interval for the NNT will include a ‘discontinuity’, because increasingly smaller risk differences that approach zero will lead to NNTs approaching infinity. Thus, the confidence interval will include both an infinitely large NNTB and an infinitely large NNTH.
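
The sketch below applies the risk-ratio formula from Section 15.4.4.2 to two hypothetical 95% confidence limits, one on each side of the null, and labels the resulting NNT limits as NNTB or NNTH; the limits and the comparator risk are assumptions for illustration.

```python
import math

def rd_from_rr(risk_ratio, acr):
    """Risk difference (comparator minus intervention risk); positive values mean fewer events with the intervention."""
    return acr * (1 - risk_ratio)

def nnt_label(risk_difference):
    """Label an NNT limit as NNTB (benefit) or NNTH (harm) according to the sign of the risk difference."""
    if risk_difference == 0:
        return "NNT approaches infinity (no difference)"
    nnt = math.ceil(1 / abs(risk_difference))
    return f"NNTB {nnt}" if risk_difference > 0 else f"NNTH {nnt}"

acr = 0.3
for rr_limit in (0.80, 1.05):  # hypothetical lower and upper 95% confidence limits for the risk ratio
    rd = rd_from_rr(rr_limit, acr)
    print(f"RR limit {rr_limit}: risk difference {rd:+.3f} -> {nnt_label(rd)}")
```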

15.5 Interpreting results from continuous outcomes (including standardized mean differences)

15.5.1 Meta-analyses with continuous outcomes

Review authors should describe in the study protocol how they plan to interpret results for continuous outcomes. When outcomes are continuous, review authors have a number of options to present summary results. These options differ if studies report the same measure that is familiar to the target audiences, studies report the same or very similar measures that are less familiar to the target audiences, or studies report different measures.

15.5.2 Meta-analyses with continuous outcomes using the same measure

If all studies have used the same familiar units (for instance, where results are expressed as durations of events, such as symptoms of conditions including diarrhoea, sore throat, otitis media or influenza, or as duration of hospitalization), a meta-analysis may generate a summary estimate in those units, as a difference in mean response (see, for instance, the row summarizing results for duration of diarrhoea in Chapter 14, Figure 14.1.b and the row summarizing oedema in Chapter 14, Figure 14.1.a ). For such outcomes, the ‘Summary of findings’ table should include a difference of means between the two interventions. However, the units of such outcomes may be difficult to interpret, particularly when they relate to rating scales (again, see the oedema row of Chapter 14, Figure 14.1.a ). In such cases, ‘Summary of findings’ tables should include the minimum and maximum of the scale of measurement, and the direction. Knowledge of the smallest change in instrument score that patients perceive is important – the minimal important difference (MID) – and can greatly facilitate the interpretation of results (Guyatt et al 1998, Schünemann and Guyatt 2005). Knowing the MID allows review authors and users to place results in context. Review authors should state the MID – if known – in the Comments column of their ‘Summary of findings’ table. For example, the chronic respiratory questionnaire has possible scores in health-related quality of life ranging from 1 to 7 and 0.5 represents a well-established MID (Jaeschke et al 1989, Schünemann et al 2005).

15.5.3 Meta-analyses with continuous outcomes using different measures

When studies have used different instruments to measure the same construct, a standardized mean difference (SMD) may be used in meta-analysis for combining continuous data. Without guidance, clinicians and patients may have little idea how to interpret results presented as SMDs. Review authors should therefore consider issues of interpretability when planning their analysis at the protocol stage and should consider whether there will be suitable ways to re-express the SMD or whether alternative effect measures, such as a ratio of means, or possibly as minimal important difference units (Guyatt et al 2013b) should be used. Table 15.5.a and the following sections describe these options.

Table 15.5.a Approaches and their implications to presenting results of continuous variables when primary studies have used different instruments to measure the same construct. Adapted from Guyatt et al (2013b)

15.5.3.1 Presenting and interpreting SMDs using generic effect size estimates

The SMD expresses the intervention effect in standard units rather than the original units of measurement. The SMD is the difference in mean effects between the experimental and comparator groups divided by the pooled standard deviation of participants’ outcomes, or external SDs when studies are very small (see Chapter 6, Section 6.5.1.2 ). The value of a SMD thus depends on both the size of the effect (the difference between means) and the standard deviation of the outcomes (the inherent variability among participants or based on an external SD).

If review authors use the SMD, they might choose to present the results directly as SMDs (row 1a, Table 15.5.a and Table 15.5.b ). However, absolute values of the intervention and comparison groups are typically not useful because studies have used different measurement instruments with different units. Guiding rules for interpreting SMDs (or ‘Cohen’s effect sizes’) exist, and have arisen mainly from researchers in the social sciences (Cohen 1988). One example is as follows: 0.2 represents a small effect, 0.5 a moderate effect and 0.8 a large effect (Cohen 1988). Variations exist (e.g. <0.40=small, 0.40 to 0.70=moderate, >0.70=large). Review authors might consider including such a guiding rule in interpreting the SMD in the text of the review, and in summary versions such as the Comments column of a ‘Summary of findings’ table. However, some methodologists believe that such interpretations are problematic because patient importance of a finding is context-dependent and not amenable to generic statements.

15.5.3.2 Re-expressing SMDs using a familiar instrument

The second possibility for interpreting the SMD is to express it in the units of one or more of the specific measurement instruments used by the included studies (row 1b, Table 15.5.a and Table 15.5.b ). The approach is to calculate an absolute difference in means by multiplying the SMD by an estimate of the SD associated with the most familiar instrument. To obtain this SD, a reasonable option is to calculate a weighted average across all intervention groups of all studies that used the selected instrument (preferably a pre-intervention or post-intervention SD as discussed in Chapter 10, Section 10.5.2 ). To better reflect among-person variation in practice, or to use an instrument not represented in the meta-analysis, it may be preferable to use a standard deviation from a representative observational study. The summary effect is thus re-expressed in the original units of that particular instrument and the clinical relevance and impact of the intervention effect can be interpreted using that familiar instrument.
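
A minimal sketch of this re-expression follows. The summary SMD, the study groups and their SDs are all hypothetical, and the sample-size-weighted mean of the SDs is just one simple way of forming the ‘weighted average’ SD mentioned above.

```python
def weighted_average_sd(groups):
    """Sample-size-weighted average SD across intervention groups that used the familiar instrument."""
    total_n = sum(n for n, _ in groups)
    return sum(n * sd for n, sd in groups) / total_n

# Hypothetical groups measured on a familiar 0-100 instrument: (sample size, SD)
groups = [(60, 17.0), (45, 15.5), (80, 16.2)]
smd = -0.45  # hypothetical summary SMD from the meta-analysis

mean_difference = smd * weighted_average_sd(groups)
print(round(mean_difference, 1))  # re-expressed as roughly -7.3 points on the 0-100 scale
```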

The same approach of re-expressing the results for a familiar instrument can also be used for other standardized effect measures such as when standardizing by MIDs (Guyatt et al 2013b): see Section 15.5.3.5 .

Table 15.5.b Application of approaches when studies have used different measures: effects of dexamethasone for pain after laparoscopic cholecystectomy (Karanicolas et al 2008). Reproduced with permission of Wolters Kluwer

1. Certainty rated according to GRADE from very low to high certainty.
2. Substantial unexplained heterogeneity in study results.
3. Imprecision due to wide confidence intervals.
4. The 20% comes from the proportion in the control group requiring rescue analgesia.
5. Crude (arithmetic) means of the post-operative pain mean responses across all five trials when transformed to a 100-point scale.

15.5.3.3 Re-expressing SMDs through dichotomization and transformation to relative and absolute measures

A third approach (row 1c, Table 15.5.a and Table 15.5.b ) relies on converting the continuous measure into a dichotomy and thus allows calculation of relative and absolute effects on a binary scale. A transformation of a SMD to a (log) odds ratio is available, based on the assumption that an underlying continuous variable has a logistic distribution with equal standard deviation in the two intervention groups, as discussed in Chapter 10, Section 10.6  (Furukawa 1999, Guyatt et al 2013b). The assumption is unlikely to hold exactly and the results must be regarded as an approximation. The log odds ratio is estimated as

ln(OR) = (π / √3) × SMD

(or approximately 1.81✕SMD). The resulting odds ratio can then be presented as normal, and in a ‘Summary of findings’ table, combined with an assumed comparator group risk to be expressed as an absolute risk difference. The comparator group risk in this case would refer to the proportion of people who have achieved a specific value of the continuous outcome. In randomized trials this can be interpreted as the proportion who have improved by some (specified) amount (responders), for instance by 5 points on a 0 to 100 scale. Table 15.5.c shows some illustrative results from this method. The risk differences can then be converted to NNTs or to people per thousand using methods described in Section 15.4.4 .
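
A sketch of the full chain, from an SMD to an odds ratio and then to absolute effects for a range of assumed comparator ‘proportions improved’; the SMD and comparator proportions are hypothetical, and the conversion relies on the logistic-distribution assumption described above.

```python
import math

def odds_ratio_from_smd(smd):
    """Approximate odds ratio implied by an SMD under the logistic-distribution assumption."""
    return math.exp(math.pi / math.sqrt(3) * smd)

def intervention_proportion(odds_ratio, comparator_proportion):
    """Proportion with the outcome in the intervention group implied by an odds ratio."""
    return odds_ratio * comparator_proportion / (1 - comparator_proportion * (1 - odds_ratio))

smd = 0.5  # hypothetical summary SMD on an 'improvement' outcome, favouring the intervention
odds_ratio = odds_ratio_from_smd(smd)  # roughly exp(1.81 * 0.5), about 2.5

for acr in (0.2, 0.4, 0.6):  # assumed proportions improved in the comparator group
    difference = intervention_proportion(odds_ratio, acr) - acr
    print(f"comparator proportion improved {acr:.0%}: about {round(1000 * difference)} more improved per 1000")
```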

Table 15.5.c Risk difference derived for specific SMDs for various given ‘proportions improved’ in the comparator group (Furukawa 1999, Guyatt et al 2013b). Reproduced with permission of Elsevier 

15.5.3.4 Ratio of means

A more frequently used approach is based on calculation of a ratio of means between the intervention and comparator groups (Friedrich et al 2008) as discussed in Chapter 6, Section 6.5.1.3 . Interpretational advantages of this approach include the ability to pool studies with outcomes expressed in different units directly, to avoid the vulnerability of heterogeneous populations that limits approaches that rely on SD units, and for ease of clinical interpretation (row 2, Table 15.5.a and Table 15.5.b ). This method is currently designed for post-intervention scores only. However, it is possible to calculate a ratio of change scores if both intervention and comparator groups change in the same direction in each relevant study, and this ratio may sometimes be informative.

Limitations to this approach include its limited applicability to change scores (since it is unlikely that both intervention and comparator group changes are in the same direction in all studies) and the possibility of misleading results if the comparator group mean is very small, in which case even a modest difference from the intervention group will yield a large and therefore misleading ratio of means. It also requires that separate ratios of means be calculated for each included study, and then entered into a generic inverse variance meta-analysis (see Chapter 10, Section 10.3 ).
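
A minimal sketch of how per-study ratios of means might be combined with a generic inverse variance (fixed-effect) meta-analysis; the study summaries are hypothetical, and the standard error of each log ratio of means uses the usual delta-method approximation (in the spirit of Friedrich et al 2008).

```python
import math

def log_rom_and_se(mean_exp, sd_exp, n_exp, mean_comp, sd_comp, n_comp):
    """Log ratio of means and its approximate (delta-method) standard error."""
    log_rom = math.log(mean_exp / mean_comp)
    se = math.sqrt(sd_exp**2 / (n_exp * mean_exp**2) + sd_comp**2 / (n_comp * mean_comp**2))
    return log_rom, se

# Hypothetical post-intervention summaries (mean, SD, n) for experimental and comparator groups;
# the studies may use different scales, because the ratio is unit-free.
studies = [
    (32.0, 14.0, 40, 38.0, 15.0, 40),
    (3.1, 1.2, 55, 3.6, 1.3, 50),
    (41.0, 20.0, 30, 45.0, 19.0, 32),
]

weighted_sum = total_weight = 0.0
for study in studies:
    log_rom, se = log_rom_and_se(*study)
    weight = 1 / se**2  # generic inverse variance weight
    weighted_sum += weight * log_rom
    total_weight += weight

pooled_rom = math.exp(weighted_sum / total_weight)
print(f"Pooled ratio of means: {pooled_rom:.2f}")  # below 1 means lower scores with the intervention (a benefit for outcomes such as pain)
```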

The ratio of means approach illustrated in Table 15.5.b suggests a relative reduction in pain of only 13%, meaning that those receiving steroids have a pain severity 87% of those in the comparator group, an effect that might be considered modest.

15.5.3.5 Presenting continuous results as minimally important difference units

To express results in MID units, review authors have two options. First, the results can be combined across studies in the same way as the SMD, but instead of dividing the mean difference of each study by its SD, review authors divide by the MID associated with that outcome (Johnston et al 2010, Guyatt et al 2013b). Instead of SD units, the pooled results represent MID units (row 3, Table 15.5.a and Table 15.5.b ), and may be more easily interpretable. This approach avoids the problem of varying SDs across studies that may distort estimates of effect in approaches that rely on the SMD. The approach, however, relies on having well-established MIDs. The approach is also risky in that a difference less than the MID may be interpreted as trivial when a substantial proportion of patients may have achieved an important benefit.
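
A small sketch of this first option: each (hypothetical) study’s mean difference is divided by the MID for its own instrument, giving results in MID units that could then be pooled with a generic inverse variance meta-analysis.

```python
# Hypothetical studies using different instruments, each with its own established MID
studies = [
    {"mean_difference": 0.6, "mid": 0.5},   # e.g. a 1-7 quality-of-life scale with an MID of 0.5
    {"mean_difference": 6.0, "mid": 10.0},  # e.g. a 0-100 scale with an MID of 10
]

for study in studies:
    mid_units = study["mean_difference"] / study["mid"]
    print(round(mid_units, 2))  # 1.2 and 0.6 MID units
```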

The other approach makes a simple conversion (not shown in Table 15.5.b ), before undertaking the meta-analysis, of the means and SDs from each study to means and SDs on the scale of a particular familiar instrument whose MID is known. For example, one can rescale the mean and SD of other chronic respiratory disease instruments (e.g. rescaling a 0 to 100 score of an instrument) to the 1 to 7 scale of the Chronic Respiratory Disease Questionnaire (CRQ) (by assuming 0 equals 1 and 100 equals 7 on the CRQ). Given the MID of the CRQ of 0.5, a mean difference in change of 0.71 after rescaling of all studies suggests a substantial effect of the intervention (Guyatt et al 2013b). This approach, presenting in units of the most familiar instrument, may be the most desirable when the target audiences have extensive experience with that instrument, particularly if the MID is well established.
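
A sketch of this second option for a difference measured on a 0 to 100 instrument: because the CRQ runs from 1 to 7 (a range of 6), a difference or SD is rescaled by a factor of 6/100, and the rescaled difference can then be compared with the CRQ MID of 0.5. The 10-point difference and 18-point SD below are hypothetical.

```python
def rescale_to_crq_units(value, original_range=100.0, crq_range=6.0):
    """Rescale a difference (or SD) from a 0-100 instrument to CRQ units (1-7 scale, range of 6)."""
    return value * crq_range / original_range

mean_difference_crq = rescale_to_crq_units(10.0)  # a hypothetical 10-point difference -> 0.6 CRQ units
sd_crq = rescale_to_crq_units(18.0)               # a hypothetical 18-point SD -> 1.08 CRQ units
print(mean_difference_crq, sd_crq)                # compare 0.6 with the CRQ MID of 0.5
```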

15.6 Drawing conclusions

15.6.1 Conclusions sections of a Cochrane Review

Authors’ conclusions in a Cochrane Review are divided into implications for practice and implications for research. While Cochrane Reviews about interventions can provide meaningful information and guidance for practice, decisions about the desirable and undesirable consequences of healthcare options require evidence and judgements for criteria that most Cochrane Reviews do not provide (Alonso-Coello et al 2016). In describing the implications for practice and the development of recommendations, however, review authors may consider the certainty of the evidence, the balance of benefits and harms, and assumed values and preferences.

15.6.2 Implications for practice

Drawing conclusions about the practical usefulness of an intervention entails making trade-offs, either implicitly or explicitly, between the estimated benefits, harms and the values and preferences. Making such trade-offs, and thus making specific recommendations for an action in a specific context, goes beyond a Cochrane Review and requires additional evidence and informed judgements that most Cochrane Reviews do not provide (Alonso-Coello et al 2016). Such judgements are typically the domain of clinical practice guideline developers for which Cochrane Reviews will provide crucial information (Graham et al 2011, Schünemann et al 2014, Zhang et al 2018a). Thus, authors of Cochrane Reviews should not make recommendations.

If review authors feel compelled to lay out actions that clinicians and patients could take, they should – after describing the certainty of evidence and the balance of benefits and harms – highlight different actions that might be consistent with particular patterns of values and preferences. Other factors that might influence a decision should also be highlighted, including any known factors that would be expected to modify the effects of the intervention, the baseline risk or status of the patient, costs and who bears those costs, and the availability of resources. Review authors should ensure they consider all patient-important outcomes, including those for which limited data may be available. In the context of public health reviews the focus may be on population-important outcomes as the target may be an entire (non-diseased) population and include outcomes that are not measured in the population receiving an intervention (e.g. a reduction of transmission of infections from those receiving an intervention). This process implies a high level of explicitness in judgements about values or preferences attached to different outcomes and the certainty of the related evidence (Zhang et al 2018b, Zhang et al 2018c); this and a full cost-effectiveness analysis is beyond the scope of most Cochrane Reviews (although they might well be used for such analyses; see Chapter 20 ).

A review on the use of anticoagulation in cancer patients to increase survival (Akl et al 2011a) provides an example for laying out clinical implications for situations where there are important trade-offs between desirable and undesirable effects of the intervention: “The decision for a patient with cancer to start heparin therapy for survival benefit should balance the benefits and downsides and integrate the patient’s values and preferences. Patients with a high preference for a potential survival prolongation, limited aversion to potential bleeding, and who do not consider heparin (both UFH or LMWH) therapy a burden may opt to use heparin, while those with aversion to bleeding may not.”

15.6.3 Implications for research

The second category for authors’ conclusions in a Cochrane Review is implications for research. To help people make well-informed decisions about future healthcare research, the ‘Implications for research’ section should comment on the need for further research, and the nature of the further research that would be most desirable. It is helpful to consider the population, intervention, comparison and outcomes that could be addressed, or addressed more effectively in the future, in the context of the certainty of the evidence in the current review (Brown et al 2006):

  • P (Population): diagnosis, disease stage, comorbidity, risk factor, sex, age, ethnic group, specific inclusion or exclusion criteria, clinical setting;
  • I (Intervention): type, frequency, dose, duration, prognostic factor;
  • C (Comparison): placebo, routine care, alternative treatment/management;
  • O (Outcome): which clinical or patient-related outcomes will the researcher need to measure, improve, influence or accomplish? Which methods of measurement should be used?

While Cochrane Review authors will find the PICO domains helpful, the domains of the GRADE certainty framework further support understanding and describing what additional research will improve the certainty in the available evidence. Note that as the certainty of the evidence is likely to vary by outcome, these implications will be specific to certain outcomes in the review. Table 15.6.a shows how review authors may be aided in their interpretation of the body of evidence and drawing conclusions about future research and practice.

Table 15.6.a Implications for research and practice suggested by individual GRADE domains

The review of compression stockings for prevention of deep vein thrombosis (DVT) in airline passengers described in Chapter 14 provides an example where there is some convincing evidence of a benefit of the intervention: “This review shows that the question of the effects on symptomless DVT of wearing versus not wearing compression stockings in the types of people studied in these trials should now be regarded as answered. Further research may be justified to investigate the relative effects of different strengths of stockings or of stockings compared to other preventative strategies. Further randomised trials to address the remaining uncertainty about the effects of wearing versus not wearing compression stockings on outcomes such as death, pulmonary embolism and symptomatic DVT would need to be large.” (Clarke et al 2016).

A review of therapeutic touch for anxiety disorder provides an example of the implications for research when no eligible studies had been found: “This review highlights the need for randomized controlled trials to evaluate the effectiveness of therapeutic touch in reducing anxiety symptoms in people diagnosed with anxiety disorders. Future trials need to be rigorous in design and delivery, with subsequent reporting to include high quality descriptions of all aspects of methodology to enable appraisal and interpretation of results.” (Robinson et al 2007).

15.6.4 Reaching conclusions

A common mistake is to confuse ‘no evidence of an effect’ with ‘evidence of no effect’. When the confidence intervals are too wide (e.g. including no effect), it is wrong to claim that the experimental intervention has ‘no effect’ or is ‘no different’ from the comparator intervention. Review authors may also incorrectly ‘positively’ frame results for some effects but not others. For example, when the effect estimate is positive for a beneficial outcome but confidence intervals are wide, review authors may describe the effect as promising. However, when the effect estimate is negative for an outcome that is considered harmful but the confidence intervals include no effect, review authors report no effect. Another mistake is to frame the conclusion in wishful terms. For example, review authors might write, “there were too few people in the analysis to detect a reduction in mortality” when the included studies showed a reduction or even increase in mortality that was not ‘statistically significant’. One way of avoiding errors such as these is to consider the results blinded; that is, consider how the results would be presented and framed in the conclusions if the direction of the results was reversed. If the confidence interval for the estimate of the difference in the effects of the interventions overlaps with no effect, the analysis is compatible with both a true beneficial effect and a true harmful effect. If one of the possibilities is mentioned in the conclusion, the other possibility should be mentioned as well. Table 15.6.b suggests narrative statements for drawing conclusions based on the effect estimate from the meta-analysis and the certainty of the evidence.

Table 15.6.b Suggested narrative statements for phrasing conclusions

Another common mistake is to reach conclusions that go beyond the evidence. Often this is done implicitly, without referring to the additional information or judgements that are used in reaching conclusions about the implications of a review for practice. Even when additional information and explicit judgements support conclusions about the implications of a review for practice, review authors rarely conduct systematic reviews of the additional information. Furthermore, implications for practice are often dependent on specific circumstances and values that must be taken into consideration. As we have noted, review authors should always be cautious when drawing conclusions about implications for practice and they should not make recommendations.

15.7 Chapter information

Authors: Holger J Schünemann, Gunn E Vist, Julian PT Higgins, Nancy Santesso, Jonathan J Deeks, Paul Glasziou, Elie Akl, Gordon H Guyatt; on behalf of the Cochrane GRADEing Methods Group

Acknowledgements: Andrew Oxman, Jonathan Sterne, Michael Borenstein and Rob Scholten contributed text to earlier versions of this chapter.

Funding: This work was in part supported by funding from the Michael G DeGroote Cochrane Canada Centre and the Ontario Ministry of Health. JJD receives support from the National Institute for Health Research (NIHR) Birmingham Biomedical Research Centre at the University Hospitals Birmingham NHS Foundation Trust and the University of Birmingham. JPTH receives support from the NIHR Biomedical Research Centre at University Hospitals Bristol NHS Foundation Trust and the University of Bristol. The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR or the Department of Health.

15.8 References

Aguilar MI, Hart R. Oral anticoagulants for preventing stroke in patients with non-valvular atrial fibrillation and no previous history of stroke or transient ischemic attacks. Cochrane Database of Systematic Reviews 2005; 3 : CD001927.

Aguilar MI, Hart R, Pearce LA. Oral anticoagulants versus antiplatelet therapy for preventing stroke in patients with non-valvular atrial fibrillation and no history of stroke or transient ischemic attacks. Cochrane Database of Systematic Reviews 2007; 3 : CD006186.

Akl EA, Gunukula S, Barba M, Yosuico VE, van Doormaal FF, Kuipers S, Middeldorp S, Dickinson HO, Bryant A, Schünemann H. Parenteral anticoagulation in patients with cancer who have no therapeutic or prophylactic indication for anticoagulation. Cochrane Database of Systematic Reviews 2011a; 1 : CD006652.

Akl EA, Oxman AD, Herrin J, Vist GE, Terrenato I, Sperati F, Costiniuk C, Blank D, Schünemann H. Using alternative statistical formats for presenting risks and risk reductions. Cochrane Database of Systematic Reviews 2011b; 3 : CD006776.

Alonso-Coello P, Schünemann HJ, Moberg J, Brignardello-Petersen R, Akl EA, Davoli M, Treweek S, Mustafa RA, Rada G, Rosenbaum S, Morelli A, Guyatt GH, Oxman AD, Group GW. GRADE Evidence to Decision (EtD) frameworks: a systematic and transparent approach to making well informed healthcare choices. 1: Introduction. BMJ 2016; 353 : i2016.

Altman DG. Confidence intervals for the number needed to treat. BMJ 1998; 317 : 1309-1312.

Atkins D, Best D, Briss PA, Eccles M, Falck-Ytter Y, Flottorp S, Guyatt GH, Harbour RT, Haugh MC, Henry D, Hill S, Jaeschke R, Leng G, Liberati A, Magrini N, Mason J, Middleton P, Mrukowicz J, O'Connell D, Oxman AD, Phillips B, Schünemann HJ, Edejer TT, Varonen H, Vist GE, Williams JW, Jr., Zaza S. Grading quality of evidence and strength of recommendations. BMJ 2004; 328 : 1490.

Brown P, Brunnhuber K, Chalkidou K, Chalmers I, Clarke M, Fenton M, Forbes C, Glanville J, Hicks NJ, Moody J, Twaddle S, Timimi H, Young P. How to formulate research recommendations. BMJ 2006; 333 : 804-806.

Cates C. Confidence intervals for the number needed to treat: Pooling numbers needed to treat may not be reliable. BMJ 1999; 318 : 1764-1765.

Clarke MJ, Broderick C, Hopewell S, Juszczak E, Eisinga A. Compression stockings for preventing deep vein thrombosis in airline passengers. Cochrane Database of Systematic Reviews 2016; 9 : CD004002.

Cohen J. Statistical Power Analysis in the Behavioral Sciences. 2nd edition. Hillsdale (NJ): Lawrence Erlbaum Associates, Inc.; 1988.

Coleman T, Chamberlain C, Davey MA, Cooper SE, Leonardi-Bee J. Pharmacological interventions for promoting smoking cessation during pregnancy. Cochrane Database of Systematic Reviews 2015; 12 : CD010078.

Dans AM, Dans L, Oxman AD, Robinson V, Acuin J, Tugwell P, Dennis R, Kang D. Assessing equity in clinical practice guidelines. Journal of Clinical Epidemiology 2007; 60 : 540-546.

Friedman LM, Furberg CD, DeMets DL. Fundamentals of Clinical Trials. 2nd edition. Littleton (MA): John Wright PSG, Inc.; 1985.

Friedrich JO, Adhikari NK, Beyene J. The ratio of means method as an alternative to mean differences for analyzing continuous outcome variables in meta-analysis: a simulation study. BMC Medical Research Methodology 2008; 8 : 32.

Furukawa T. From effect size into number needed to treat. Lancet 1999; 353 : 1680.

Graham R, Mancher M, Wolman DM, Greenfield S, Steinberg E. Committee on Standards for Developing Trustworthy Clinical Practice Guidelines, Board on Health Care Services: Clinical Practice Guidelines We Can Trust. Washington, DC: National Academies Press; 2011.

Guyatt G, Oxman AD, Akl EA, Kunz R, Vist G, Brozek J, Norris S, Falck-Ytter Y, Glasziou P, DeBeer H, Jaeschke R, Rind D, Meerpohl J, Dahm P, Schünemann HJ. GRADE guidelines: 1. Introduction-GRADE evidence profiles and summary of findings tables. Journal of Clinical Epidemiology 2011a; 64 : 383-394.

Guyatt GH, Juniper EF, Walter SD, Griffith LE, Goldstein RS. Interpreting treatment effects in randomised trials. BMJ 1998; 316 : 690-693.

Guyatt GH, Oxman AD, Vist GE, Kunz R, Falck-Ytter Y, Alonso-Coello P, Schünemann HJ. GRADE: an emerging consensus on rating quality of evidence and strength of recommendations. BMJ 2008; 336 : 924-926.

Guyatt GH, Oxman AD, Kunz R, Woodcock J, Brozek J, Helfand M, Alonso-Coello P, Falck-Ytter Y, Jaeschke R, Vist G, Akl EA, Post PN, Norris S, Meerpohl J, Shukla VK, Nasser M, Schünemann HJ. GRADE guidelines: 8. Rating the quality of evidence--indirectness. Journal of Clinical Epidemiology 2011b; 64 : 1303-1310.

Guyatt GH, Oxman AD, Santesso N, Helfand M, Vist G, Kunz R, Brozek J, Norris S, Meerpohl J, Djulbegovic B, Alonso-Coello P, Post PN, Busse JW, Glasziou P, Christensen R, Schünemann HJ. GRADE guidelines: 12. Preparing summary of findings tables-binary outcomes. Journal of Clinical Epidemiology 2013a; 66 : 158-172.

Guyatt GH, Thorlund K, Oxman AD, Walter SD, Patrick D, Furukawa TA, Johnston BC, Karanicolas P, Akl EA, Vist G, Kunz R, Brozek J, Kupper LL, Martin SL, Meerpohl JJ, Alonso-Coello P, Christensen R, Schünemann HJ. GRADE guidelines: 13. Preparing summary of findings tables and evidence profiles-continuous outcomes. Journal of Clinical Epidemiology 2013b; 66 : 173-183.

Hawe P, Shiell A, Riley T, Gold L. Methods for exploring implementation variation and local context within a cluster randomised community intervention trial. Journal of Epidemiology and Community Health 2004; 58 : 788-793.

Hoffrage U, Lindsey S, Hertwig R, Gigerenzer G. Medicine. Communicating statistical information. Science 2000; 290 : 2261-2262.

Jaeschke R, Singer J, Guyatt GH. Measurement of health status. Ascertaining the minimal clinically important difference. Controlled Clinical Trials 1989; 10 : 407-415.

Johnston B, Thorlund K, Schünemann H, Xie F, Murad M, Montori V, Guyatt G. Improving the interpretation of health-related quality of life evidence in meta-analysis: the application of minimal important difference units. Health and Quality of Life Outcomes 2010; 11 : 116.

Karanicolas PJ, Smith SE, Kanbur B, Davies E, Guyatt GH. The impact of prophylactic dexamethasone on nausea and vomiting after laparoscopic cholecystectomy: a systematic review and meta-analysis. Annals of Surgery 2008; 248 : 751-762.

Lumley J, Oliver SS, Chamberlain C, Oakley L. Interventions for promoting smoking cessation during pregnancy. Cochrane Database of Systematic Reviews 2004; 4 : CD001055.

McQuay HJ, Moore RA. Using numerical results from systematic reviews in clinical practice. Annals of Internal Medicine 1997; 126 : 712-720.

Resnicow K, Cross D, Wynder E. The Know Your Body program: a review of evaluation studies. Bulletin of the New York Academy of Medicine 1993; 70 : 188-207.

Robinson J, Biley FC, Dolk H. Therapeutic touch for anxiety disorders. Cochrane Database of Systematic Reviews 2007; 3 : CD006240.

Rothwell PM. External validity of randomised controlled trials: "to whom do the results of this trial apply?". Lancet 2005; 365 : 82-93.

Santesso N, Carrasco-Labra A, Langendam M, Brignardello-Petersen R, Mustafa RA, Heus P, Lasserson T, Opiyo N, Kunnamo I, Sinclair D, Garner P, Treweek S, Tovey D, Akl EA, Tugwell P, Brozek JL, Guyatt G, Schünemann HJ. Improving GRADE evidence tables part 3: detailed guidance for explanatory footnotes supports creating and understanding GRADE certainty in the evidence judgments. Journal of Clinical Epidemiology 2016; 74 : 28-39.

Schünemann HJ, Puhan M, Goldstein R, Jaeschke R, Guyatt GH. Measurement properties and interpretability of the Chronic respiratory disease questionnaire (CRQ). COPD: Journal of Chronic Obstructive Pulmonary Disease 2005; 2 : 81-89.

Schünemann HJ, Guyatt GH. Commentary--goodbye M(C)ID! Hello MID, where do you come from? Health Services Research 2005; 40 : 593-597.

Schünemann HJ, Fretheim A, Oxman AD. Improving the use of research evidence in guideline development: 13. Applicability, transferability and adaptation. Health Research Policy and Systems 2006; 4 : 25.

Schünemann HJ. Methodological idiosyncracies, frameworks and challenges of non-pharmaceutical and non-technical treatment interventions. Zeitschrift für Evidenz, Fortbildung und Qualität im Gesundheitswesen 2013; 107 : 214-220.

Schünemann HJ, Tugwell P, Reeves BC, Akl EA, Santesso N, Spencer FA, Shea B, Wells G, Helfand M. Non-randomized studies as a source of complementary, sequential or replacement evidence for randomized controlled trials in systematic reviews on the effects of interventions. Research Synthesis Methods 2013; 4 : 49-62.

Schünemann HJ, Wiercioch W, Etxeandia I, Falavigna M, Santesso N, Mustafa R, Ventresca M, Brignardello-Petersen R, Laisaar KT, Kowalski S, Baldeh T, Zhang Y, Raid U, Neumann I, Norris SL, Thornton J, Harbour R, Treweek S, Guyatt G, Alonso-Coello P, Reinap M, Brozek J, Oxman A, Akl EA. Guidelines 2.0: systematic development of a comprehensive checklist for a successful guideline enterprise. CMAJ: Canadian Medical Association Journal 2014; 186 : E123-142.

Schünemann HJ. Interpreting GRADE's levels of certainty or quality of the evidence: GRADE for statisticians, considering review information size or less emphasis on imprecision? Journal of Clinical Epidemiology 2016; 75 : 6-15.

Smeeth L, Haines A, Ebrahim S. Numbers needed to treat derived from meta-analyses--sometimes informative, usually misleading. BMJ 1999; 318 : 1548-1551.

Sun X, Briel M, Busse JW, You JJ, Akl EA, Mejza F, Bala MM, Bassler D, Mertz D, Diaz-Granados N, Vandvik PO, Malaga G, Srinathan SK, Dahm P, Johnston BC, Alonso-Coello P, Hassouneh B, Walter SD, Heels-Ansdell D, Bhatnagar N, Altman DG, Guyatt GH. Credibility of claims of subgroup effects in randomised controlled trials: systematic review. BMJ 2012; 344 : e1553.

Zhang Y, Akl EA, Schünemann HJ. Using systematic reviews in guideline development: the GRADE approach. Research Synthesis Methods 2018a: doi: 10.1002/jrsm.1313.

Zhang Y, Alonso-Coello P, Guyatt GH, Yepes-Nunez JJ, Akl EA, Hazlewood G, Pardo-Hernandez H, Etxeandia-Ikobaltzeta I, Qaseem A, Williams JW, Jr., Tugwell P, Flottorp S, Chang Y, Zhang Y, Mustafa RA, Rojas MX, Schünemann HJ. GRADE Guidelines: 19. Assessing the certainty of evidence in the importance of outcomes or values and preferences-Risk of bias and indirectness. Journal of Clinical Epidemiology 2018b: doi: 10.1016/j.jclinepi.2018.1001.1013.

Zhang Y, Alonso Coello P, Guyatt G, Yepes-Nunez JJ, Akl EA, Hazlewood G, Pardo-Hernandez H, Etxeandia-Ikobaltzeta I, Qaseem A, Williams JW, Jr., Tugwell P, Flottorp S, Chang Y, Zhang Y, Mustafa RA, Rojas MX, Xie F, Schünemann HJ. GRADE Guidelines: 20. Assessing the certainty of evidence in the importance of outcomes or values and preferences - Inconsistency, Imprecision, and other Domains. Journal of Clinical Epidemiology 2018c: doi: 10.1016/j.jclinepi.2018.1005.1011.



Data Interpretation: Definition, Importance, and Processes


Susanne Morris

March 3, 2021

Many investors and organizations alike rely on data to enrich their decision-making process. From development to sales, quality data can provide professionals with insights into every aspect of their business operations. While this may seem rather straightforward, there are quite a few processes that must be followed before you can utilize data’s full potential. This is where data interpretation comes in.

What is data interpretation?

Ultimately, data interpretation is a data review process that uses analysis, evaluation, and visualization to produce in-depth findings that enhance data-driven decision-making. There are many steps involved in data interpretation, as well as different types of data and data analysis processes that shape the larger interpretation process. This article explains the different data interpretation methods, the data interpretation process, and its benefits. Let's start with an overview of data interpretation and its importance.

Why is data interpretation important?

Data interpretation matters for much the same reasons as other data processes. Much like data normalization and data quality management, proper data interpretation delivers timely answers and deeper insights than raw data alone. In particular, data interpretation can improve data identification, uncover hidden correlations between datasets, find data outliers, and even help forecast trends.

Additionally, proper data interpretation offers substantial benefits such as cost efficiency, better decision-making, and improved AI predictions. For example, a Business Intelligence survey reported that companies that applied data analysis and interpretation to big data saw a ten percent reduction in costs.

While the importance of data interpretation is undeniable, it is important to note that this process is no easy feat. To unlock the full potential of your data, you must integrate the data interpretation process into your workflow in its entirety. So what does that process look like? Let's take a closer look.

The data interpretation process

Data interpretation is a five-step process, with the primary step being data analysis. Without data analysis, there can be no data interpretation. The analysis portion of data interpretation, which is discussed in more detail below, includes two different approaches: qualitative analysis and quantitative analysis.

Qualitative analysis

Qualitative analysis is defined as examining and explaining non-quantifiable data through a subjective lens. Further, in terms of data interpretation, qualitative analysis is the process of analyzing categorical data (data that cannot be represented numerically) while applying a contextual lens. Data that cannot be represented numerically includes information such as observations, documentation, and questionnaires.

Ultimately, this data type is analyzed through a contextual lens that accounts for biases, emotions, behaviors, and more. A company review, for instance, is read with attention to human sentiment, narrative, and previous behaviors, which helps summarize large amounts of qualitative data for further analysis. Because of the person-centered nature of qualitative analysis, a variety of techniques are used to collect this data, including interviews, questionnaires, and information exchanges. Not unlike many lead generation techniques, companies often offer free resources in exchange for information in the form of qualitative data. In practice, for example, companies offer quality resources such as e-books in exchange for completed product or demographic surveys.

Quantitative analysis

On the other hand, quantitative analysis refers to the examination and explanation of numerical values through a statistical lens. With regard to data interpretation, quantitative analysis involves analyzing numerical data that can then be fed into statistical models for prediction.

Typically, this type of analysis involves collecting large amounts of numerical data that are then analyzed mathematically to produce summary results such as the mean, standard deviation, median, and ratios. As with the qualitative process, collecting quantitative data can involve a variety of methods. For example, web scraping is a common extraction technique used to collect public online quantitative and qualitative data. In the same way web scraping can be used to extract qualitative data, such as social sentiment, it can also be used to extract numerical data, such as financial data.
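
To make the summary measures above concrete, here is a minimal sketch using Python's built-in statistics module; the monthly revenue figures are hypothetical and serve only to illustrate the calculations.

# Minimal sketch: descriptive statistics for a small quantitative dataset.
# The revenue figures are hypothetical.
import statistics

monthly_revenue = [12500, 13200, 12900, 14100, 15050, 14800]

mean = statistics.mean(monthly_revenue)
median = statistics.median(monthly_revenue)
stdev = statistics.stdev(monthly_revenue)                # sample standard deviation
growth_ratio = monthly_revenue[-1] / monthly_revenue[0]  # simple last-to-first ratio

print(f"mean={mean:.1f}, median={median:.1f}, stdev={stdev:.1f}, ratio={growth_ratio:.2f}")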

If you're looking for data to identify business opportunities, you can perform both types of analysis with Coresignal's raw data.

How to interpret data

Now that we’ve examined the two types of analysis used in the data interpretation process, we can take a closer look at the interpretation process from beginning to end. The five key steps involved in the larger data interpretation process include baseline establishment, data collection, interpretation (qualitative or quantitative analysis), visualization, and reflection. Let’s take a look at each of these steps.

1. Baseline establishment

Similar to the first step when conducting a competitive analysis, it is important to establish your baseline when conducting data interpretation. This can include setting objectives and outlining long-term and short-term goals that will be directly affected by any actions that result from your data interpretation. For example, investors utilizing data interpretation may want to set goals regarding the ROI of companies they are evaluating. It is important to note that this step also includes the determination of which data type you wish to analyze and interpret.

2. Data collection

Now that a baseline is established and the goals of your data interpretation process are known, you can start collecting data. As previously mentioned, data collection relies on two main methods: web scraping and information exchange. Both methods can collect qualitative and quantitative data, but depending on the scope of your data interpretation process, you will most likely need only one of them.
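
As a rough illustration of the web scraping route, here is a minimal Python sketch using the requests and beautifulsoup4 packages; the URL and the CSS class name are placeholders, and any real scraping should respect the target site's terms of use and robots.txt.

# Minimal web-scraping sketch. The URL and the "review-text" class are
# hypothetical placeholders; adapt them to a page you are allowed to scrape.
import requests
from bs4 import BeautifulSoup

url = "https://example.com/reviews"
response = requests.get(url, timeout=10)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
# Collect the text of every element tagged with the (hypothetical) review class.
reviews = [tag.get_text(strip=True) for tag in soup.find_all(class_="review-text")]

print(f"Collected {len(reviews)} review snippets")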

For example, if you are looking for specific information within a narrow demographic, you will want to target particular attributes within the larger demographic you are interested in. Say you want to collect sentiment about an application used by people in a certain role; you would target individuals with that job-type attribute and use information exchange.

Both collection methods can be quite extensive, and for that reason you may want to enrich your data collection or even rely entirely on high-quality data from a data provider. Notably, once your data is collected, you must clean and organize it before you can proceed to analysis. This can be achieved through data cleansing and data normalization processes.
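
Here is a minimal sketch of what that cleansing and normalization step can look like in Python with pandas; the column names and values are hypothetical.

# Minimal sketch: data cleansing plus min-max normalization with pandas.
# The company and headcount values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "company": ["Acme", "Acme", "Globex", "Initech", None],
    "headcount": [120, 120, 950, None, 40],
})

df = df.drop_duplicates()                  # remove exact duplicate rows
df = df.dropna(subset=["company"])         # drop rows missing a key field
df["headcount"] = df["headcount"].fillna(df["headcount"].median())

# Min-max normalization rescales headcount to the 0-1 range.
col = df["headcount"]
df["headcount_norm"] = (col - col.min()) / (col.max() - col.min())

print(df)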

3. Interpretation (qualitative or quantitative)

This step is arguably the most crucial one in the data interpretation process, and it involves the analysis of the data you’ve collected. This is where your decision to conduct a qualitative or quantitative analysis comes into play.

Qualitative analysis requires a more subjective lens. Even if you are using AI-based data analysis tools, extensive “coding” (labeling responses with themes or categories) will be necessary so that sentiment and other content that cannot be defined numerically can be understood and compared.
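
As a toy example of what “coding” can mean in practice, here is a minimal Python sketch that applies a keyword-based coding scheme to free-text responses; the themes, keywords, and responses are hypothetical, and real qualitative coding is usually iterative and far more nuanced.

# Minimal sketch: keyword-based coding of qualitative responses into themes.
# The coding scheme and the responses are hypothetical.
from collections import Counter

coding_scheme = {
    "usability": ["easy", "intuitive", "confusing"],
    "performance": ["slow", "fast", "lag"],
    "support": ["helpdesk", "support", "response time"],
}

responses = [
    "The app is easy to use but feels slow on large files.",
    "Support never answered; the helpdesk is confusing.",
]

theme_counts = Counter()
for text in responses:
    lowered = text.lower()
    for theme, keywords in coding_scheme.items():
        if any(keyword in lowered for keyword in keywords):
            theme_counts[theme] += 1

print(theme_counts)  # themes ranked by how many responses mention them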

Quantitative analysis, on the other hand, requires that the data be analyzed through a numerical and mathematical approach. As previously mentioned, raw numerical data is analyzed to produce the mean, standard deviation, and ratios, which can then be analyzed further via statistical modeling to better understand and predict behavior.
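
For instance, a very simple form of statistical modeling is fitting a least-squares trend line and extrapolating one period ahead. The sketch below assumes NumPy and uses hypothetical monthly sign-up counts.

# Minimal sketch: least-squares trend line and a naive one-step-ahead forecast.
# The sign-up counts are hypothetical.
import numpy as np

months = np.arange(1, 7)                          # periods 1..6
signups = np.array([110, 125, 138, 150, 166, 180])

slope, intercept = np.polyfit(months, signups, deg=1)  # ordinary least squares
forecast = slope * 7 + intercept                       # extrapolate to month 7

print(f"trend: {slope:.1f} signups/month, forecast for month 7: {forecast:.0f}")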

4. Visualization

When your analysis is complete, you can start to visualize your data and draw insights from various perspectives. Today, many companies have implemented dashboards as part of the visualization stage; dashboards provide quick insights via programmable algorithms. Even without dashboards, preparing your data for visualization is relatively straightforward: you put it into a format that a charting tool can read (a minimal plotting sketch follows the list below). Some of the more common visualization formats include:

  • Scatter plots
  • Line graphs
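
Here is the plotting sketch mentioned above: a minimal matplotlib example that renders the same hypothetical series as a scatter plot and a line graph.

# Minimal sketch: a scatter plot and a line graph with matplotlib.
# The data points are hypothetical.
import matplotlib.pyplot as plt

months = [1, 2, 3, 4, 5, 6]
signups = [110, 125, 138, 150, 166, 180]

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))

ax1.scatter(months, signups)
ax1.set_title("Scatter plot")
ax1.set_xlabel("Month")
ax1.set_ylabel("Signups")

ax2.plot(months, signups, marker="o")
ax2.set_title("Line graph")
ax2.set_xlabel("Month")

plt.tight_layout()
plt.show()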

5. Reflection

Lastly, once you have created visualizations that meet the objectives you set earlier, you can reflect. While relatively simple compared with the earlier steps, the reflection stage can make or break your data interpretation process. During this step, you should reflect on the data analysis process as a whole, look for hidden correlations, and identify outliers or errors that may have affected your charts but were missed during the data cleansing stage. It is crucial that you differentiate between correlation and causation, identify bias, and take note of any missed insights.
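
As a small illustration of that reflection step, the sketch below (assuming NumPy and hypothetical figures) computes a Pearson correlation and flags values more than two standard deviations from the mean as potential outliers; note that even a strong correlation says nothing about causation.

# Minimal sketch: checking a correlation and flagging potential outliers.
# Correlation does not imply causation; the figures are hypothetical.
import numpy as np

ad_spend = np.array([1.0, 1.2, 1.1, 1.5, 1.8, 9.0])   # 9.0 looks suspicious
revenue = np.array([10.2, 11.0, 10.8, 12.1, 13.0, 13.4])

corr = np.corrcoef(ad_spend, revenue)[0, 1]
print(f"Pearson correlation: {corr:.2f}")

# Flag points more than 2 standard deviations from the mean as potential outliers.
z_scores = (ad_spend - ad_spend.mean()) / ad_spend.std()
outliers = np.where(np.abs(z_scores) > 2)[0]
print(f"Potential outlier indices in ad_spend: {outliers.tolist()}")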

Wrapping up

In all, data interpretation is an extremely important part of data-driven decision-making and should be performed regularly as part of a larger iterative interpretation process. Investors, developers, and sales and acquisition teams alike can surface hidden insights through regularly performed data interpretation. It is what you do with those insights that brings your company success.

Frequently asked questions

What is qualitative data interpretation?

Qualitative data interpretation is the process of analyzing categorical data (data that cannot be represented numerically, such as observations, documentation, and questionnaires) through a contextual lens.

What is quantitative data interpretation?

Quantitative data interpretation refers to the examination and explanation of numerical data through a statistical lens.

What are the steps in data interpretation?

There are five main steps in data interpretation: baseline establishment (similar to data discovery), data collection, data interpretation, data visualization, and reflection.


Statsomat

  • Exploratory Data Analysis (R)
  • Exploratory Data Analysis (Python)
  • Correlation Analysis (Statsomat/CORRANA)
  • Principal Components Analysis (Statsomat/PCA)
  • Confirmatory Factor Analysis (Statsomat/CFA)
  • Multiple Comparison Procedures

Statsomat is a web platform that provides automated guidance and apps for statistical analysis of data, designed for adult learners of data analysis and data literacy, who are often students and young researchers. It aims to stand in for academic consultancy on statistical data analysis where such consultancy is unavailable, and it supports data literacy education free of charge.

Statsomat is a project in continuous progress. The core contributor authoring the content of the apps is Dr. Denise Welsch. Other contributors are students of the University of Applied Sciences Koblenz, Department of Mathematics and Technology.

What do users say

Getting the code is essential

It’s very important for me as a junior to get the source code at the end, which is something many students struggle with. I’ve used this app for my Bachelor thesis, testing several datasets; it’s flexible, easy, effective, and beneficial. I hope this website will be extended in the future so that it covers as many statistical topics as possible. I would recommend that everyone try it.

A great tool

Statsomat is a great tool for analyzing data. I hope that more and more methods will become available in the near future. For a user, obtaining the R code is extremely helpful: one can modify the code without needing to start from scratch, and one has code for documentation and possible replication.

Excellent for beginners

Thank you very much for the Statsomat applications, they are excellent for beginners!

Massive Help

Thank you for this application. It will be a massive help to people who have little time (or are scared of coding). I tried it, and the graphs look really nice.

Awesome App

This is a new and awesome app; this is the future! Please add CFA for categorical variables. Thanks.



COMMENTS

  1. Data Interpretation

    Data interpretation and data analysis are two different but closely related processes in data-driven decision-making. Data analysis refers to the process of examining data using statistical and computational methods to derive insights and conclusions from it. It involves cleaning, transforming, and modeling the data to uncover ...

  2. Data Interpretation: Definition and Steps with Examples

    Data interpretation is the process of reviewing data and arriving at relevant conclusions using various analytical research methods. Data analysis assists researchers in categorizing, manipulating, and summarizing data to answer critical questions.

  3. What Is Data Interpretation? Meaning & Analysis Examples

    7) The Use of Dashboards For Data Interpretation. 8) Business Data Interpretation Examples. Data analysis and interpretation have now taken center stage with the advent of the digital age… and the sheer amount of data can be frightening. In fact, a Digital Universe study found that the total data supply in 2012 was 2.8 trillion gigabytes!

  4. What is Data Interpretation? + [Types, Method & Tools]

    The quantitative data interpretation method is used to analyze quantitative data, which is also known as numerical data. This data type contains numbers and is therefore analyzed with the use of numbers and not texts. Quantitative data are of 2 main types, namely; discrete and continuous data. Continuous data is further divided into interval ...

  5. What is Data Interpretation? Tools, Techniques, Examples

    Data interpretation is the process of analyzing and making sense of data to extract valuable insights and draw meaningful conclusions. It involves examining patterns, relationships, and trends within the data to uncover actionable information. Data interpretation goes beyond merely collecting and organizing data; it is about extracting ...

  6. Data Interpretation: Definition, Method, Benefits & Examples

    Qualitative Data Interpretation Method. This is a method for breaking down or analyzing so-called qualitative data, also known as categorical data. It is important to note that no bar graphs or line charts are used in this method. Instead, they rely on text. Because qualitative data is collected through person-to-person techniques, it isn't ...

  7. Data Interpretation in Research

    The role of data interpretation. The data collection process is just one part of research, and one that can often provide a lot of data without any easy answers that instantly stick out to researchers or their audiences. An example of data that requires an interpretation process is a corpus, or a large body of text, meant to represent some language use (e.g., literature, conversation).

  8. Interpretation In Qualitative Research: What, Why, How

    Abstract. This chapter addresses a wide range of concepts related to interpretation in qualitative research, examines the meaning and importance of interpretation in qualitative inquiry, and explores the ways methodology, data, and the self/researcher as instrument interact and impact interpretive processes.

  9. Understanding statistical analysis: A beginner's guide to data

    Data interpretation is a crucial part of statistical analysis, as it is used to draw conclusions and make recommendations based on the data. When interpreting data, it is important to consider the context in which the data was collected. This includes factors such as the sample size, the sampling method, and the population being studied.

  10. LibGuides: Research Methods: Data Analysis & Interpretation

    Interpretation of qualitative data can be presented as a narrative. The themes identified from the research can be organised and integrated with themes in the existing literature to give further weight and meaning to the research. The interpretation should also state if the aims and objectives of the research were met.

  11. Data Analysis and Interpretation

    Data analysis: A complex and challenging process. Though it may sound straightforward to take 150 years of air temperature data and describe how global climate has changed, the process of analyzing and interpreting those data is actually quite complex. Consider the range of temperatures around the world on any given day in January (see Figure 2): In Johannesburg, South Africa, where it is ...

  12. Data Collection, Analysis, and Interpretation

    6.1.1 Preparation for a Data Collection. A first step in any research project is the research proposal (Sudheesh et al., 2016). The research proposal should set out the background to the work and the reason why the work is necessary. It should set out a hypothesis or a research question.

  13. From Analysis to Interpretation in Qualitative Studies

    Ricoeur's theory of interpretation, as a tool for the interpretation of data in studies whose philosophical underpinning is hermeneutic phenomenology, deserves consideration by human sciences researchers who seek to provide a rigorous foundation for their work. Thorne, S., Kirkham, S. R., & O'Flynn-Magee, K. (2004).

  14. Chapter 15: Interpreting results and drawing conclusions

    Key Points: This chapter provides guidance on interpreting the results of synthesis in order to communicate the conclusions of the review effectively. Methods are presented for computing, presenting and interpreting relative and absolute effects for dichotomous outcome data, including the number needed to treat (NNT).

  15. What is Data Interpretation? All You Need to Know

    Ultimately, data interpretation is a data review process that utilizes analysis, evaluation, and visualization to provide in-depth findings to enhance data-driven decision-making. Further, there are many steps involved in data interpretation, as well as different types of data and data analysis processes that influence the larger data ...

  16. Data analysis

    data analysis, the process of systematically collecting, cleaning, transforming, describing, modeling, and interpreting data, generally employing statistical techniques. Data analysis is an important part of both scientific research and business, where demand has grown in recent years for data-driven decision making. Data analysis techniques are used to gain useful insights from datasets, which ...

  17. Interpretation and display of research results

    It is important to properly collect, code, clean and edit the data before interpreting and displaying the research results. Computers play a major role in different phases of research, starting from the conceptual, design and planning, data collection, data analysis and research publication phases. The main objective of data display is to summarize the ...

  18. (PDF) Qualitative Data Analysis and Interpretation: Systematic Search

    Qualitative data analysis is concerned with transforming raw data by searching, evaluating, recognising, coding, mapping, exploring and describing patterns, trends, themes and categories in ...

  19. An Overview of Data Analysis and Interpretations in Research

    Research is a scientific field which helps to generate new knowledge and solve existing problems. So, data analysis is a crucial part of research, which makes the results of the study more ...

  20. PDF An Overview of Data Analysis and Interpretations in Research

    Data analysis is the central step in both qualitative and quantitative research. Whatever the data are, it is their analysis that, in a decisive way, forms the outcomes of the research. The purpose of analyzing data is to obtain usable and useful information. The analysis, irrespective of whether the

  21. PDF Chapter 4: Analysis and Interpretation of Results

    The analysis and interpretation of data is carried out in two phases. The first part, which is based on the results of the questionnaire, deals with a quantitative analysis of data. The second, which is based on the results of the interview and focus group discussions, is a qualitative interpretation.

  22. PDF Chapter 6: Data Analysis and Interpretation 6.1. Introduction

    research matched by the many approaches to data analysis, while quantitative researchers choose from a specialised, standard set of data analysis techniques; ... analysis and interpretation of data, when he posits that the process and products of analysis provide the bases for interpretation and analysis. It is therefore not an empty ritual ...

  23. STATSOMAT

    About. Statsomat is a web platform that aims to provide automated guidance and apps for automated statistical analysis of data, specifically designed for adult learners of data analysis and data literacy, who are often students and young researchers. Statsomat aims to simulate unavailable academic consultancy for statistical data analysis.