Encyclopedia Britannica

  • History & Society
  • Science & Tech
  • Biographies
  • Animals & Nature
  • Geography & Travel
  • Arts & Culture
  • Games & Quizzes
  • On This Day
  • One Good Fact
  • New Articles
  • Lifestyles & Social Issues
  • Philosophy & Religion
  • Politics, Law & Government
  • World History
  • Health & Medicine
  • Browse Biographies
  • Birds, Reptiles & Other Vertebrates
  • Bugs, Mollusks & Other Invertebrates
  • Environment
  • Fossils & Geologic Time
  • Entertainment & Pop Culture
  • Sports & Recreation
  • Visual Arts
  • Demystified
  • Image Galleries
  • Infographics
  • Top Questions
  • Britannica Kids
  • Saving Earth
  • Space Next 50
  • Student Center
  • Introduction

Data collection

Data analysis at the Armstrong Flight Research Center in Palmdale, California

data analysis

Our editors will review what you’ve submitted and determine whether to revise the article.

  • Academia - Data Analysis
  • U.S. Department of Health and Human Services - Office of Research Integrity - Data Analysis
  • Chemistry LibreTexts - Data Analysis
  • IBM - What is Exploratory Data Analysis?
  • Table Of Contents

data analysis , the process of systematically collecting, cleaning, transforming, describing, modeling, and interpreting data , generally employing statistical techniques. Data analysis is an important part of both scientific research and business, where demand has grown in recent years for data-driven decision making . Data analysis techniques are used to gain useful insights from datasets, which can then be used to make operational decisions or guide future research . With the rise of “ big data ,” the storage of vast quantities of data in large databases and data warehouses, there is increasing need to apply data analysis techniques to generate insights about volumes of data too large to be manipulated by instruments of low information-processing capacity.

Datasets are collections of information. Generally, data and datasets are themselves collected to help answer questions, make decisions, or otherwise inform reasoning. The rise of information technology has led to the generation of vast amounts of data of many kinds, such as text, pictures, videos, personal information, account data, and metadata, the last of which provide information about other data. It is common for apps and websites to collect data about how their products are used or about the people using their platforms. Consequently, there is vastly more data being collected today than at any other time in human history. A single business may track billions of interactions with millions of consumers at hundreds of locations with thousands of employees and any number of products. Analyzing that volume of data is generally only possible using specialized computational and statistical techniques.

The desire for businesses to make the best use of their data has led to the development of the field of business intelligence , which covers a variety of tools and techniques that allow businesses to perform data analysis on the information they collect.

For data to be analyzed, it must first be collected and stored. Raw data must be processed into a format that can be used for analysis and be cleaned so that errors and inconsistencies are minimized. Data can be stored in many ways, but one of the most useful is in a database . A database is a collection of interrelated data organized so that certain records (collections of data related to a single entity) can be retrieved on the basis of various criteria . The most familiar kind of database is the relational database , which stores data in tables with rows that represent records (tuples) and columns that represent fields (attributes). A query is a command that retrieves a subset of the information in the database according to certain criteria. A query may retrieve only records that meet certain criteria, or it may join fields from records across multiple tables by use of a common field.

Frequently, data from many sources is collected into large archives of data called data warehouses. The process of moving data from its original sources (such as databases) to a centralized location (generally a data warehouse) is called ETL (which stands for extract , transform , and load ).

  • The extraction step occurs when you identify and copy or export the desired data from its source, such as by running a database query to retrieve the desired records.
  • The transformation step is the process of cleaning the data so that they fit the analytical need for the data and the schema of the data warehouse. This may involve changing formats for certain fields, removing duplicate records, or renaming fields, among other processes.
  • Finally, the clean data are loaded into the data warehouse, where they may join vast amounts of historical data and data from other sources.

After data are effectively collected and cleaned, they can be analyzed with a variety of techniques. Analysis often begins with descriptive and exploratory data analysis. Descriptive data analysis uses statistics to organize and summarize data, making it easier to understand the broad qualities of the dataset. Exploratory data analysis looks for insights into the data that may arise from descriptions of distribution, central tendency, or variability for a single data field. Further relationships between data may become apparent by examining two fields together. Visualizations may be employed during analysis, such as histograms (graphs in which the length of a bar indicates a quantity) or stem-and-leaf plots (which divide data into buckets, or “stems,” with individual data points serving as “leaves” on the stem).

what is analysis of data in research

Data analysis frequently goes beyond descriptive analysis to predictive analysis, making predictions about the future using predictive modeling techniques. Predictive modeling uses machine learning , regression analysis methods (which mathematically calculate the relationship between an independent variable and a dependent variable), and classification techniques to identify trends and relationships among variables. Predictive analysis may involve data mining , which is the process of discovering interesting or useful patterns in large volumes of information. Data mining often involves cluster analysis , which tries to find natural groupings within data, and anomaly detection , which detects instances in data that are unusual and stand out from other patterns. It may also look for rules within datasets, strong relationships among variables in the data.

Data Analysis

  • Introduction to Data Analysis
  • Quantitative Analysis Tools
  • Qualitative Analysis Tools
  • Mixed Methods Analysis
  • Geospatial Analysis
  • Further Reading

Profile Photo

What is Data Analysis?

According to the federal government, data analysis is "the process of systematically applying statistical and/or logical techniques to describe and illustrate, condense and recap, and evaluate data" ( Responsible Conduct in Data Management ). Important components of data analysis include searching for patterns, remaining unbiased in drawing inference from data, practicing responsible  data management , and maintaining "honest and accurate analysis" ( Responsible Conduct in Data Management ). 

In order to understand data analysis further, it can be helpful to take a step back and understand the question "What is data?". Many of us associate data with spreadsheets of numbers and values, however, data can encompass much more than that. According to the federal government, data is "The recorded factual material commonly accepted in the scientific community as necessary to validate research findings" ( OMB Circular 110 ). This broad definition can include information in many formats. 

Some examples of types of data are as follows:

  • Photographs 
  • Hand-written notes from field observation
  • Machine learning training data sets
  • Ethnographic interview transcripts
  • Sheet music
  • Scripts for plays and musicals 
  • Observations from laboratory experiments ( CMU Data 101 )

Thus, data analysis includes the processing and manipulation of these data sources in order to gain additional insight from data, answer a research question, or confirm a research hypothesis. 

Data analysis falls within the larger research data lifecycle, as seen below. 

( University of Virginia )

Why Analyze Data?

Through data analysis, a researcher can gain additional insight from data and draw conclusions to address the research question or hypothesis. Use of data analysis tools helps researchers understand and interpret data. 

What are the Types of Data Analysis?

Data analysis can be quantitative, qualitative, or mixed methods. 

Quantitative research typically involves numbers and "close-ended questions and responses" ( Creswell & Creswell, 2018 , p. 3). Quantitative research tests variables against objective theories, usually measured and collected on instruments and analyzed using statistical procedures ( Creswell & Creswell, 2018 , p. 4). Quantitative analysis usually uses deductive reasoning. 

Qualitative  research typically involves words and "open-ended questions and responses" ( Creswell & Creswell, 2018 , p. 3). According to Creswell & Creswell, "qualitative research is an approach for exploring and understanding the meaning individuals or groups ascribe to a social or human problem" ( 2018 , p. 4). Thus, qualitative analysis usually invokes inductive reasoning. 

Mixed methods  research uses methods from both quantitative and qualitative research approaches. Mixed methods research works under the "core assumption... that the integration of qualitative and quantitative data yields additional insight beyond the information provided by either the quantitative or qualitative data alone" ( Creswell & Creswell, 2018 , p. 4). 

  • Next: Planning >>
  • Last Updated: Sep 4, 2024 11:49 AM
  • URL: https://guides.library.georgetown.edu/data-analysis

Creative Commons

what is analysis of data in research

What is Data Analysis? (Types, Methods, and Tools)

' src=

  • Couchbase Product Marketing December 17, 2023

Data analysis is the process of cleaning, transforming, and interpreting data to uncover insights, patterns, and trends. It plays a crucial role in decision making, problem solving, and driving innovation across various domains. 

In addition to further exploring the role data analysis plays this blog post will discuss common data analysis techniques, delve into the distinction between quantitative and qualitative data, explore popular data analysis tools, and discuss the steps involved in the data analysis process.

By the end, you should have a deeper understanding of data analysis and its applications, empowering you to harness the power of data to make informed decisions and gain actionable insights.

Why Is Data Analysis Important?

Data analysis is important across various domains and industries. It helps with:

  • Decision Making : Data analysis provides valuable insights that support informed decision making, enabling organizations to make data-driven choices for better outcomes.
  • Problem Solving : Data analysis helps identify and solve problems by uncovering root causes, detecting anomalies, and optimizing processes for increased efficiency.
  • Performance Evaluation : Data analysis allows organizations to evaluate performance, track progress, and measure success by analyzing key performance indicators (KPIs) and other relevant metrics.
  • Gathering Insights : Data analysis uncovers valuable insights that drive innovation, enabling businesses to develop new products, services, and strategies aligned with customer needs and market demand.
  • Risk Management : Data analysis helps mitigate risks by identifying risk factors and enabling proactive measures to minimize potential negative impacts.

By leveraging data analysis, organizations can gain a competitive advantage, improve operational efficiency, and make smarter decisions that positively impact the bottom line.

Quantitative vs. Qualitative Data

In data analysis, you’ll commonly encounter two types of data: quantitative and qualitative. Understanding the differences between these two types of data is essential for selecting appropriate analysis methods and drawing meaningful insights. Here’s an overview of quantitative and qualitative data:

Quantitative Data

Quantitative data is numerical and represents quantities or measurements. It’s typically collected through surveys, experiments, and direct measurements. This type of data is characterized by its ability to be counted, measured, and subjected to mathematical calculations. Examples of quantitative data include age, height, sales figures, test scores, and the number of website users.

Quantitative data has the following characteristics:

  • Numerical : Quantitative data is expressed in numerical values that can be analyzed and manipulated mathematically.
  • Objective : Quantitative data is objective and can be measured and verified independently of individual interpretations.
  • Statistical Analysis : Quantitative data lends itself well to statistical analysis. It allows for applying various statistical techniques, such as descriptive statistics, correlation analysis, regression analysis, and hypothesis testing.
  • Generalizability : Quantitative data often aims to generalize findings to a larger population. It allows for making predictions, estimating probabilities, and drawing statistical inferences.

Qualitative Data

Qualitative data, on the other hand, is non-numerical and is collected through interviews, observations, and open-ended survey questions. It focuses on capturing rich, descriptive, and subjective information to gain insights into people’s opinions, attitudes, experiences, and behaviors. Examples of qualitative data include interview transcripts, field notes, survey responses, and customer feedback.

Qualitative data has the following characteristics:

  • Descriptive : Qualitative data provides detailed descriptions, narratives, or interpretations of phenomena, often capturing context, emotions, and nuances.
  • Subjective : Qualitative data is subjective and influenced by the individuals’ perspectives, experiences, and interpretations.
  • Interpretive Analysis : Qualitative data requires interpretive techniques, such as thematic analysis, content analysis, and discourse analysis, to uncover themes, patterns, and underlying meanings.
  • Contextual Understanding : Qualitative data emphasizes understanding the social, cultural, and contextual factors that shape individuals’ experiences and behaviors.
  • Rich Insights : Qualitative data enables researchers to gain in-depth insights into complex phenomena and explore research questions in greater depth.

In summary, quantitative data represents numerical quantities and lends itself well to statistical analysis, while qualitative data provides rich, descriptive insights into subjective experiences and requires interpretive analysis techniques. Understanding the differences between quantitative and qualitative data is crucial for selecting appropriate analysis methods and drawing meaningful conclusions in research and data analysis.

Types of Data Analysis

Different types of data analysis techniques serve different purposes. In this section, we’ll explore four types of data analysis: descriptive, diagnostic, predictive, and prescriptive, and go over how you can use them.

Descriptive Analysis

Descriptive analysis involves summarizing and describing the main characteristics of a dataset. It focuses on gaining a comprehensive understanding of the data through measures such as central tendency (mean, median, mode), dispersion (variance, standard deviation), and graphical representations (histograms, bar charts). For example, in a retail business, descriptive analysis may involve analyzing sales data to identify average monthly sales, popular products, or sales distribution across different regions.

Diagnostic Analysis

Diagnostic analysis aims to understand the causes or factors influencing specific outcomes or events. It involves investigating relationships between variables and identifying patterns or anomalies in the data. Diagnostic analysis often uses regression analysis, correlation analysis, and hypothesis testing to uncover the underlying reasons behind observed phenomena. For example, in healthcare, diagnostic analysis could help determine factors contributing to patient readmissions and identify potential improvements in the care process.

Predictive Analysis

Predictive analysis focuses on making predictions or forecasts about future outcomes based on historical data. It utilizes statistical models, machine learning algorithms, and time series analysis to identify patterns and trends in the data. By applying predictive analysis, businesses can anticipate customer behavior, market trends, or demand for products and services. For example, an e-commerce company might use predictive analysis to forecast customer churn and take proactive measures to retain customers.

Prescriptive Analysis

Prescriptive analysis takes predictive analysis a step further by providing recommendations or optimal solutions based on the predicted outcomes. It combines historical and real-time data with optimization techniques, simulation models, and decision-making algorithms to suggest the best course of action. Prescriptive analysis helps organizations make data-driven decisions and optimize their strategies. For example, a logistics company can use prescriptive analysis to determine the most efficient delivery routes, considering factors like traffic conditions, fuel costs, and customer preferences.

In summary, data analysis plays a vital role in extracting insights and enabling informed decision making. Descriptive analysis helps understand the data, diagnostic analysis uncovers the underlying causes, predictive analysis forecasts future outcomes, and prescriptive analysis provides recommendations for optimal actions. These different data analysis techniques are valuable tools for businesses and organizations across various industries.

Data Analysis Methods

In addition to the data analysis types discussed earlier, you can use various methods to analyze data effectively. These methods provide a structured approach to extract insights, detect patterns, and derive meaningful conclusions from the available data. Here are some commonly used data analysis methods:

Statistical Analysis 

Statistical analysis involves applying statistical techniques to data to uncover patterns, relationships, and trends. It includes methods such as hypothesis testing, regression analysis, analysis of variance (ANOVA), and chi-square tests. Statistical analysis helps organizations understand the significance of relationships between variables and make inferences about the population based on sample data. For example, a market research company could conduct a survey to analyze the relationship between customer satisfaction and product price. They can use regression analysis to determine whether there is a significant correlation between these variables.

Data Mining

Data mining refers to the process of discovering patterns and relationships in large datasets using techniques such as clustering, classification, association analysis, and anomaly detection. It involves exploring data to identify hidden patterns and gain valuable insights. For example, a telecommunications company could analyze customer call records to identify calling patterns and segment customers into groups based on their calling behavior. 

Text Mining

Text mining involves analyzing unstructured data , such as customer reviews, social media posts, or emails, to extract valuable information and insights. It utilizes techniques like natural language processing (NLP), sentiment analysis, and topic modeling to analyze and understand textual data. For example, consider how a hotel chain might analyze customer reviews from various online platforms to identify common themes and sentiment patterns to improve customer satisfaction.

Time Series Analysis

Time series analysis focuses on analyzing data collected over time to identify trends, seasonality, and patterns. It involves techniques such as forecasting, decomposition, and autocorrelation analysis to make predictions and understand the underlying patterns in the data.

For example, an energy company could analyze historical electricity consumption data to forecast future demand and optimize energy generation and distribution.

Data Visualization

Data visualization is the graphical representation of data to communicate patterns, trends, and insights visually. It uses charts, graphs, maps, and other visual elements to present data in a visually appealing and easily understandable format. For example, a sales team might use a line chart to visualize monthly sales trends and identify seasonal patterns in their sales data.

These are just a few examples of the data analysis methods you can use. Your choice should depend on the nature of the data, the research question or problem, and the desired outcome.

How to Analyze Data

Analyzing data involves following a systematic approach to extract insights and derive meaningful conclusions. Here are some steps to guide you through the process of analyzing data effectively:

Define the Objective : Clearly define the purpose and objective of your data analysis. Identify the specific question or problem you want to address through analysis.

Prepare and Explore the Data : Gather the relevant data and ensure its quality. Clean and preprocess the data by handling missing values, duplicates, and formatting issues. Explore the data using descriptive statistics and visualizations to identify patterns, outliers, and relationships.

Apply Analysis Techniques : Choose the appropriate analysis techniques based on your data and research question. Apply statistical methods, machine learning algorithms, and other analytical tools to derive insights and answer your research question.

Interpret the Results : Analyze the output of your analysis and interpret the findings in the context of your objective. Identify significant patterns, trends, and relationships in the data. Consider the implications and practical relevance of the results.

Communicate and Take Action : Communicate your findings effectively to stakeholders or intended audiences. Present the results clearly and concisely, using visualizations and reports. Use the insights from the analysis to inform decision making.

Remember, data analysis is an iterative process, and you may need to revisit and refine your analysis as you progress. These steps provide a general framework to guide you through the data analysis process and help you derive meaningful insights from your data.

Data Analysis Tools

Data analysis tools are software applications and platforms designed to facilitate the process of analyzing and interpreting data . These tools provide a range of functionalities to handle data manipulation, visualization, statistical analysis, and machine learning. Here are some commonly used data analysis tools:

Spreadsheet Software

Tools like Microsoft Excel, Google Sheets, and Apple Numbers are used for basic data analysis tasks. They offer features for data entry, manipulation, basic statistical functions, and simple visualizations.

Business Intelligence (BI) Platforms

BI platforms like Microsoft Power BI, Tableau, and Looker integrate data from multiple sources, providing comprehensive views of business performance through interactive dashboards, reports, and ad hoc queries.

Programming Languages and Libraries

Programming languages like R and Python, along with their associated libraries (e.g., NumPy, SciPy, scikit-learn), offer extensive capabilities for data analysis. They provide flexibility, customizability, and access to a wide range of statistical and machine-learning algorithms.

Cloud-Based Analytics Platforms

Cloud-based platforms like Google Cloud Platform (BigQuery, Data Studio), Microsoft Azure (Azure Analytics, Power BI), and Amazon Web Services (AWS Analytics, QuickSight) provide scalable and collaborative environments for data storage, processing, and analysis. They have a wide range of analytical capabilities for handling large datasets.

Data Mining and Machine Learning Tools

Tools like RapidMiner, KNIME, and Weka automate the process of data preprocessing, feature selection, model training, and evaluation. They’re designed to extract insights and build predictive models from complex datasets.

Text Analytics Tools

Text analytics tools, such as Natural Language Processing (NLP) libraries in Python (NLTK, spaCy) or platforms like RapidMiner Text Mining Extension, enable the analysis of unstructured text data . They help extract information, sentiment, and themes from sources like customer reviews or social media.

Choosing the right data analysis tool depends on analysis complexity, dataset size, required functionalities, and user expertise. You might need to use a combination of tools to leverage their combined strengths and address specific analysis needs. 

By understanding the power of data analysis, you can leverage it to make informed decisions, identify opportunities for improvement, and drive innovation within your organization. Whether you’re working with quantitative data for statistical analysis or qualitative data for in-depth insights, it’s important to select the right analysis techniques and tools for your objectives.

To continue learning about data analysis, review the following resources:

  • What is Big Data Analytics?
  • Operational Analytics
  • JSON Analytics + Real-Time Insights
  • Database vs. Data Warehouse: Differences, Use Cases, Examples
  • Couchbase Capella Columnar Product Blog

Couchbase Product Marketing

  • Posted in: Analytics , Application Design , Best Practices and Tutorials
  • Tagged in: data analytics , data visualization , time series

Posted by Couchbase Product Marketing

Leave a reply cancel reply.

You must be logged in to post a comment.

Check your inbox or spam folder to confirm your subscription.

U.S. flag

An official website of the United States government

The .gov means it’s official. Federal government websites often end in .gov or .mil. Before sharing sensitive information, make sure you’re on a federal government site.

The site is secure. The https:// ensures that you are connecting to the official website and that any information you provide is encrypted and transmitted securely.

  • Publications
  • Account settings

The PMC website is updating on October 15, 2024. Learn More or Try it out now .

  • Advanced Search
  • Journal List
  • Can J Hosp Pharm
  • v.68(4); Jul-Aug 2015

Logo of cjhp

Creating a Data Analysis Plan: What to Consider When Choosing Statistics for a Study

There are three kinds of lies: lies, damned lies, and statistics. – Mark Twain 1

INTRODUCTION

Statistics represent an essential part of a study because, regardless of the study design, investigators need to summarize the collected information for interpretation and presentation to others. It is therefore important for us to heed Mr Twain’s concern when creating the data analysis plan. In fact, even before data collection begins, we need to have a clear analysis plan that will guide us from the initial stages of summarizing and describing the data through to testing our hypotheses.

The purpose of this article is to help you create a data analysis plan for a quantitative study. For those interested in conducting qualitative research, previous articles in this Research Primer series have provided information on the design and analysis of such studies. 2 , 3 Information in the current article is divided into 3 main sections: an overview of terms and concepts used in data analysis, a review of common methods used to summarize study data, and a process to help identify relevant statistical tests. My intention here is to introduce the main elements of data analysis and provide a place for you to start when planning this part of your study. Biostatistical experts, textbooks, statistical software packages, and other resources can certainly add more breadth and depth to this topic when you need additional information and advice.

TERMS AND CONCEPTS USED IN DATA ANALYSIS

When analyzing information from a quantitative study, we are often dealing with numbers; therefore, it is important to begin with an understanding of the source of the numbers. Let us start with the term variable , which defines a specific item of information collected in a study. Examples of variables include age, sex or gender, ethnicity, exercise frequency, weight, treatment group, and blood glucose. Each variable will have a group of categories, which are referred to as values , to help describe the characteristic of an individual study participant. For example, the variable “sex” would have values of “male” and “female”.

Although variables can be defined or grouped in various ways, I will focus on 2 methods at this introductory stage. First, variables can be defined according to the level of measurement. The categories in a nominal variable are names, for example, male and female for the variable “sex”; white, Aboriginal, black, Latin American, South Asian, and East Asian for the variable “ethnicity”; and intervention and control for the variable “treatment group”. Nominal variables with only 2 categories are also referred to as dichotomous variables because the study group can be divided into 2 subgroups based on information in the variable. For example, a study sample can be split into 2 groups (patients receiving the intervention and controls) using the dichotomous variable “treatment group”. An ordinal variable implies that the categories can be placed in a meaningful order, as would be the case for exercise frequency (never, sometimes, often, or always). Nominal-level and ordinal-level variables are also referred to as categorical variables, because each category in the variable can be completely separated from the others. The categories for an interval variable can be placed in a meaningful order, with the interval between consecutive categories also having meaning. Age, weight, and blood glucose can be considered as interval variables, but also as ratio variables, because the ratio between values has meaning (e.g., a 15-year-old is half the age of a 30-year-old). Interval-level and ratio-level variables are also referred to as continuous variables because of the underlying continuity among categories.

As we progress through the levels of measurement from nominal to ratio variables, we gather more information about the study participant. The amount of information that a variable provides will become important in the analysis stage, because we lose information when variables are reduced or aggregated—a common practice that is not recommended. 4 For example, if age is reduced from a ratio-level variable (measured in years) to an ordinal variable (categories of < 65 and ≥ 65 years) we lose the ability to make comparisons across the entire age range and introduce error into the data analysis. 4

A second method of defining variables is to consider them as either dependent or independent. As the terms imply, the value of a dependent variable depends on the value of other variables, whereas the value of an independent variable does not rely on other variables. In addition, an investigator can influence the value of an independent variable, such as treatment-group assignment. Independent variables are also referred to as predictors because we can use information from these variables to predict the value of a dependent variable. Building on the group of variables listed in the first paragraph of this section, blood glucose could be considered a dependent variable, because its value may depend on values of the independent variables age, sex, ethnicity, exercise frequency, weight, and treatment group.

Statistics are mathematical formulae that are used to organize and interpret the information that is collected through variables. There are 2 general categories of statistics, descriptive and inferential. Descriptive statistics are used to describe the collected information, such as the range of values, their average, and the most common category. Knowledge gained from descriptive statistics helps investigators learn more about the study sample. Inferential statistics are used to make comparisons and draw conclusions from the study data. Knowledge gained from inferential statistics allows investigators to make inferences and generalize beyond their study sample to other groups.

Before we move on to specific descriptive and inferential statistics, there are 2 more definitions to review. Parametric statistics are generally used when values in an interval-level or ratio-level variable are normally distributed (i.e., the entire group of values has a bell-shaped curve when plotted by frequency). These statistics are used because we can define parameters of the data, such as the centre and width of the normally distributed curve. In contrast, interval-level and ratio-level variables with values that are not normally distributed, as well as nominal-level and ordinal-level variables, are generally analyzed using nonparametric statistics.

METHODS FOR SUMMARIZING STUDY DATA: DESCRIPTIVE STATISTICS

The first step in a data analysis plan is to describe the data collected in the study. This can be done using figures to give a visual presentation of the data and statistics to generate numeric descriptions of the data.

Selection of an appropriate figure to represent a particular set of data depends on the measurement level of the variable. Data for nominal-level and ordinal-level variables may be interpreted using a pie graph or bar graph . Both options allow us to examine the relative number of participants within each category (by reporting the percentages within each category), whereas a bar graph can also be used to examine absolute numbers. For example, we could create a pie graph to illustrate the proportions of men and women in a study sample and a bar graph to illustrate the number of people who report exercising at each level of frequency (never, sometimes, often, or always).

Interval-level and ratio-level variables may also be interpreted using a pie graph or bar graph; however, these types of variables often have too many categories for such graphs to provide meaningful information. Instead, these variables may be better interpreted using a histogram . Unlike a bar graph, which displays the frequency for each distinct category, a histogram displays the frequency within a range of continuous categories. Information from this type of figure allows us to determine whether the data are normally distributed. In addition to pie graphs, bar graphs, and histograms, many other types of figures are available for the visual representation of data. Interested readers can find additional types of figures in the books recommended in the “Further Readings” section.

Figures are also useful for visualizing comparisons between variables or between subgroups within a variable (for example, the distribution of blood glucose according to sex). Box plots are useful for summarizing information for a variable that does not follow a normal distribution. The lower and upper limits of the box identify the interquartile range (or 25th and 75th percentiles), while the midline indicates the median value (or 50th percentile). Scatter plots provide information on how the categories for one continuous variable relate to categories in a second variable; they are often helpful in the analysis of correlations.

In addition to using figures to present a visual description of the data, investigators can use statistics to provide a numeric description. Regardless of the measurement level, we can find the mode by identifying the most frequent category within a variable. When summarizing nominal-level and ordinal-level variables, the simplest method is to report the proportion of participants within each category.

The choice of the most appropriate descriptive statistic for interval-level and ratio-level variables will depend on how the values are distributed. If the values are normally distributed, we can summarize the information using the parametric statistics of mean and standard deviation. The mean is the arithmetic average of all values within the variable, and the standard deviation tells us how widely the values are dispersed around the mean. When values of interval-level and ratio-level variables are not normally distributed, or we are summarizing information from an ordinal-level variable, it may be more appropriate to use the nonparametric statistics of median and range. The first step in identifying these descriptive statistics is to arrange study participants according to the variable categories from lowest value to highest value. The range is used to report the lowest and highest values. The median or 50th percentile is located by dividing the number of participants into 2 groups, such that half (50%) of the participants have values above the median and the other half (50%) have values below the median. Similarly, the 25th percentile is the value with 25% of the participants having values below and 75% of the participants having values above, and the 75th percentile is the value with 75% of participants having values below and 25% of participants having values above. Together, the 25th and 75th percentiles define the interquartile range .

PROCESS TO IDENTIFY RELEVANT STATISTICAL TESTS: INFERENTIAL STATISTICS

One caveat about the information provided in this section: selecting the most appropriate inferential statistic for a specific study should be a combination of following these suggestions, seeking advice from experts, and discussing with your co-investigators. My intention here is to give you a place to start a conversation with your colleagues about the options available as you develop your data analysis plan.

There are 3 key questions to consider when selecting an appropriate inferential statistic for a study: What is the research question? What is the study design? and What is the level of measurement? It is important for investigators to carefully consider these questions when developing the study protocol and creating the analysis plan. The figures that accompany these questions show decision trees that will help you to narrow down the list of inferential statistics that would be relevant to a particular study. Appendix 1 provides brief definitions of the inferential statistics named in these figures. Additional information, such as the formulae for various inferential statistics, can be obtained from textbooks, statistical software packages, and biostatisticians.

What Is the Research Question?

The first step in identifying relevant inferential statistics for a study is to consider the type of research question being asked. You can find more details about the different types of research questions in a previous article in this Research Primer series that covered questions and hypotheses. 5 A relational question seeks information about the relationship among variables; in this situation, investigators will be interested in determining whether there is an association ( Figure 1 ). A causal question seeks information about the effect of an intervention on an outcome; in this situation, the investigator will be interested in determining whether there is a difference ( Figure 2 ).

An external file that holds a picture, illustration, etc.
Object name is cjhp-68-311f1.jpg

Decision tree to identify inferential statistics for an association.

An external file that holds a picture, illustration, etc.
Object name is cjhp-68-311f2.jpg

Decision tree to identify inferential statistics for measuring a difference.

What Is the Study Design?

When considering a question of association, investigators will be interested in measuring the relationship between variables ( Figure 1 ). A study designed to determine whether there is consensus among different raters will be measuring agreement. For example, an investigator may be interested in determining whether 2 raters, using the same assessment tool, arrive at the same score. Correlation analyses examine the strength of a relationship or connection between 2 variables, like age and blood glucose. Regression analyses also examine the strength of a relationship or connection; however, in this type of analysis, one variable is considered an outcome (or dependent variable) and the other variable is considered a predictor (or independent variable). Regression analyses often consider the influence of multiple predictors on an outcome at the same time. For example, an investigator may be interested in examining the association between a treatment and blood glucose, while also considering other factors, like age, sex, ethnicity, exercise frequency, and weight.

When considering a question of difference, investigators must first determine how many groups they will be comparing. In some cases, investigators may be interested in comparing the characteristic of one group with that of an external reference group. For example, is the mean age of study participants similar to the mean age of all people in the target group? If more than one group is involved, then investigators must also determine whether there is an underlying connection between the sets of values (or samples ) to be compared. Samples are considered independent or unpaired when the information is taken from different groups. For example, we could use an unpaired t test to compare the mean age between 2 independent samples, such as the intervention and control groups in a study. Samples are considered related or paired if the information is taken from the same group of people, for example, measurement of blood glucose at the beginning and end of a study. Because blood glucose is measured in the same people at both time points, we could use a paired t test to determine whether there has been a significant change in blood glucose.

What Is the Level of Measurement?

As described in the first section of this article, variables can be grouped according to the level of measurement (nominal, ordinal, or interval). In most cases, the independent variable in an inferential statistic will be nominal; therefore, investigators need to know the level of measurement for the dependent variable before they can select the relevant inferential statistic. Two exceptions to this consideration are correlation analyses and regression analyses ( Figure 1 ). Because a correlation analysis measures the strength of association between 2 variables, we need to consider the level of measurement for both variables. Regression analyses can consider multiple independent variables, often with a variety of measurement levels. However, for these analyses, investigators still need to consider the level of measurement for the dependent variable.

Selection of inferential statistics to test interval-level variables must include consideration of how the data are distributed. An underlying assumption for parametric tests is that the data approximate a normal distribution. When the data are not normally distributed, information derived from a parametric test may be wrong. 6 When the assumption of normality is violated (for example, when the data are skewed), then investigators should use a nonparametric test. If the data are normally distributed, then investigators can use a parametric test.

ADDITIONAL CONSIDERATIONS

What is the level of significance.

An inferential statistic is used to calculate a p value, the probability of obtaining the observed data by chance. Investigators can then compare this p value against a prespecified level of significance, which is often chosen to be 0.05. This level of significance represents a 1 in 20 chance that the observation is wrong, which is considered an acceptable level of error.

What Are the Most Commonly Used Statistics?

In 1983, Emerson and Colditz 7 reported the first review of statistics used in original research articles published in the New England Journal of Medicine . This review of statistics used in the journal was updated in 1989 and 2005, 8 and this type of analysis has been replicated in many other journals. 9 – 13 Collectively, these reviews have identified 2 important observations. First, the overall sophistication of statistical methodology used and reported in studies has grown over time, with survival analyses and multivariable regression analyses becoming much more common. The second observation is that, despite this trend, 1 in 4 articles describe no statistical methods or report only simple descriptive statistics. When inferential statistics are used, the most common are t tests, contingency table tests (for example, χ 2 test and Fisher exact test), and simple correlation and regression analyses. This information is important for educators, investigators, reviewers, and readers because it suggests that a good foundational knowledge of descriptive statistics and common inferential statistics will enable us to correctly evaluate the majority of research articles. 11 – 13 However, to fully take advantage of all research published in high-impact journals, we need to become acquainted with some of the more complex methods, such as multivariable regression analyses. 8 , 13

What Are Some Additional Resources?

As an investigator and Associate Editor with CJHP , I have often relied on the advice of colleagues to help create my own analysis plans and review the plans of others. Biostatisticians have a wealth of knowledge in the field of statistical analysis and can provide advice on the correct selection, application, and interpretation of these methods. Colleagues who have “been there and done that” with their own data analysis plans are also valuable sources of information. Identify these individuals and consult with them early and often as you develop your analysis plan.

Another important resource to consider when creating your analysis plan is textbooks. Numerous statistical textbooks are available, differing in levels of complexity and scope. The titles listed in the “Further Reading” section are just a few suggestions. I encourage interested readers to look through these and other books to find resources that best fit their needs. However, one crucial book that I highly recommend to anyone wanting to be an investigator or peer reviewer is Lang and Secic’s How to Report Statistics in Medicine (see “Further Reading”). As the title implies, this book covers a wide range of statistics used in medical research and provides numerous examples of how to correctly report the results.

CONCLUSIONS

When it comes to creating an analysis plan for your project, I recommend following the sage advice of Douglas Adams in The Hitchhiker’s Guide to the Galaxy : Don’t panic! 14 Begin with simple methods to summarize and visualize your data, then use the key questions and decision trees provided in this article to identify relevant statistical tests. Information in this article will give you and your co-investigators a place to start discussing the elements necessary for developing an analysis plan. But do not stop there! Use advice from biostatisticians and more experienced colleagues, as well as information in textbooks, to help create your analysis plan and choose the most appropriate statistics for your study. Making careful, informed decisions about the statistics to use in your study should reduce the risk of confirming Mr Twain’s concern.

Appendix 1. Glossary of statistical terms * (part 1 of 2)

  • 1-way ANOVA: Uses 1 variable to define the groups for comparing means. This is similar to the Student t test when comparing the means of 2 groups.
  • Kruskall–Wallis 1-way ANOVA: Nonparametric alternative for the 1-way ANOVA. Used to determine the difference in medians between 3 or more groups.
  • n -way ANOVA: Uses 2 or more variables to define groups when comparing means. Also called a “between-subjects factorial ANOVA”.
  • Repeated-measures ANOVA: A method for analyzing whether the means of 3 or more measures from the same group of participants are different.
  • Freidman ANOVA: Nonparametric alternative for the repeated-measures ANOVA. It is often used to compare rankings and preferences that are measured 3 or more times.
  • Fisher exact: Variation of chi-square that accounts for cell counts < 5.
  • McNemar: Variation of chi-square that tests statistical significance of changes in 2 paired measurements of dichotomous variables.
  • Cochran Q: An extension of the McNemar test that provides a method for testing for differences between 3 or more matched sets of frequencies or proportions. Often used as a measure of heterogeneity in meta-analyses.
  • 1-sample: Used to determine whether the mean of a sample is significantly different from a known or hypothesized value.
  • Independent-samples t test (also referred to as the Student t test): Used when the independent variable is a nominal-level variable that identifies 2 groups and the dependent variable is an interval-level variable.
  • Paired: Used to compare 2 pairs of scores between 2 groups (e.g., baseline and follow-up blood pressure in the intervention and control groups).

Lang TA, Secic M. How to report statistics in medicine: annotated guidelines for authors, editors, and reviewers. 2nd ed. Philadelphia (PA): American College of Physicians; 2006.

Norman GR, Streiner DL. PDQ statistics. 3rd ed. Hamilton (ON): B.C. Decker; 2003.

Plichta SB, Kelvin E. Munro’s statistical methods for health care research . 6th ed. Philadelphia (PA): Wolters Kluwer Health/ Lippincott, Williams & Wilkins; 2013.

This article is the 12th in the CJHP Research Primer Series, an initiative of the CJHP Editorial Board and the CSHP Research Committee. The planned 2-year series is intended to appeal to relatively inexperienced researchers, with the goal of building research capacity among practising pharmacists. The articles, presenting simple but rigorous guidance to encourage and support novice researchers, are being solicited from authors with appropriate expertise.

Previous articles in this series:

  • Bond CM. The research jigsaw: how to get started. Can J Hosp Pharm . 2014;67(1):28–30.
  • Tully MP. Research: articulating questions, generating hypotheses, and choosing study designs. Can J Hosp Pharm . 2014;67(1):31–4.
  • Loewen P. Ethical issues in pharmacy practice research: an introductory guide. Can J Hosp Pharm. 2014;67(2):133–7.
  • Tsuyuki RT. Designing pharmacy practice research trials. Can J Hosp Pharm . 2014;67(3):226–9.
  • Bresee LC. An introduction to developing surveys for pharmacy practice research. Can J Hosp Pharm . 2014;67(4):286–91.
  • Gamble JM. An introduction to the fundamentals of cohort and case–control studies. Can J Hosp Pharm . 2014;67(5):366–72.
  • Austin Z, Sutton J. Qualitative research: getting started. C an J Hosp Pharm . 2014;67(6):436–40.
  • Houle S. An introduction to the fundamentals of randomized controlled trials in pharmacy research. Can J Hosp Pharm . 2014; 68(1):28–32.
  • Charrois TL. Systematic reviews: What do you need to know to get started? Can J Hosp Pharm . 2014;68(2):144–8.
  • Sutton J, Austin Z. Qualitative research: data collection, analysis, and management. Can J Hosp Pharm . 2014;68(3):226–31.
  • Cadarette SM, Wong L. An introduction to health care administrative data. Can J Hosp Pharm. 2014;68(3):232–7.

Competing interests: None declared.

Further Reading

  • Devor J, Peck R. Statistics: the exploration and analysis of data. 7th ed. Boston (MA): Brooks/Cole Cengage Learning; 2012. [ Google Scholar ]
  • Lang TA, Secic M. How to report statistics in medicine: annotated guidelines for authors, editors, and reviewers. 2nd ed. Philadelphia (PA): American College of Physicians; 2006. [ Google Scholar ]
  • Mendenhall W, Beaver RJ, Beaver BM. Introduction to probability and statistics. 13th ed. Belmont (CA): Brooks/Cole Cengage Learning; 2009. [ Google Scholar ]
  • Norman GR, Streiner DL. PDQ statistics. 3rd ed. Hamilton (ON): B.C. Decker; 2003. [ Google Scholar ]
  • Plichta SB, Kelvin E. Munro’s statistical methods for health care research. 6th ed. Philadelphia (PA): Wolters Kluwer Health/Lippincott, Williams & Wilkins; 2013. [ Google Scholar ]

what is analysis of data in research

  • Book a Demo

Data Analysis in Qualitative Research

Discover essential qualitative data analysis methods in this comprehensive guide. Learn how to analyze qualitative data effectively using various techniques and software tools. From coding strategies to thematic analysis, this article covers key aspects of data analysis in qualitative research.

what is analysis of data in research

While numbers tell us "what" and "how much," qualitative data reveals the crucial "why" and "how." But let's face it - turning mountains of text, images, and observations into meaningful insights can be daunting.

This guide dives deep into the art and science of how to analyze qualitative data. We'll explore cutting-edge techniques, free qualitative data analysis software, and strategies to make your analysis more rigorous and insightful. Expect practical, actionable advice on qualitative data analysis methods, whether you're a seasoned researcher looking to refine your skills or a team leader aiming to extract more value from your qualitative data.

What is qualitative data?

Qualitative data is non-numerical information that describes qualities or characteristics. It includes text, images, audio, and video. 

This data type captures complex human experiences, behaviors, and opinions that numbers alone can't express.

A qualitative data example can include interview transcripts, open-ended survey responses, field notes from observations, social media posts and customer reviews

Importance of qualitative data

Qualitative data is vital for several reasons:

  • It provides a deep, nuanced understanding of complex phenomena.
  • It captures the 'why' behind behaviors and opinions.
  • It allows for unexpected discoveries and new research directions.
  • It puts people's experiences and perspectives at the forefront.
  • It enhances quantitative findings with depth and detail.

What is data analysis in qualitative research?

Data analysis in qualitative research is the process of examining and interpreting non-numerical data to uncover patterns, themes, and insights. It aims to make sense of rich, detailed information gathered through methods like interviews, focus groups, or observations.

This analysis moves beyond simple description. It seeks to understand the underlying meanings, contexts, and relationships within the data. The goal is to create a coherent narrative that answers research questions and generates new knowledge.

How is qualitative data analysis different from quantitative data analysis?

Qualitative and quantitative data analyses differ in several key ways:

  • Data type: Qualitative analysis uses non-numerical data (text, images), while quantitative analysis uses numerical data.
  • Approach: Qualitative analysis is inductive and exploratory. Quantitative analysis is deductive and confirmatory.
  • Sample size: Qualitative studies often use smaller samples. Quantitative studies typically need larger samples for statistical validity.
  • Depth vs. breadth: Qualitative analysis provides in-depth insights about a few cases. Quantitative analysis offers broader insights across many cases.
  • Subjectivity: Qualitative analysis involves more subjective interpretation. Quantitative analysis aims for objective, statistical measures.

What are the 3 main components of qualitative data analysis?

The three main components of qualitative data analysis are:

  • Data reduction: Simplifying and focusing the raw data through coding and categorization.
  • Data display: Organizing the reduced data into visual formats like matrices, charts, or networks.
  • Conclusion drawing/verification: Interpreting the displayed data and verifying the conclusions.

These components aren't linear steps. Instead, they form an iterative process where researchers move back and forth between them throughout the analysis.

How do you write a qualitative analysis?

Step 1: organize your data.

Start with bringing all your qualitative research data in one place. A repository can be of immense help here. Transcribe interviews , compile field notes, and gather all relevant materials.

Immerse yourself in the data. Read through everything multiple times.

Step 2: Code & identify themes

Identify and label key concepts, themes, or patterns. Group related codes into broader themes or categories. Try to connect themes to tell a coherent story that answers your research questions.

Pick out direct quotes from your data to illustrate key points.

Step 3: Interpret and reflect

Explain what your results mean in the context of your research and existing literature.

Als discuss, identify and try to eliminate potential biases or limitations in your analysis. 

Summarize main insights and their implications.

What are the 5 qualitative data analysis methods?

Thematic Analysis Identifying, analyzing, and reporting patterns (themes) within data.

Content Analysis Systematically categorizing and counting the occurrence of specific elements in text.

Grounded Theory Developing theory from data through iterative coding and analysis.

Discourse Analysis Examining language use and meaning in social contexts.

Narrative Analysis Interpreting stories and personal accounts to understand experiences and meanings.

Each method suits different research goals and data types. Researchers often combine methods for comprehensive analysis.

What are the 4 data collection methods in qualitative research?

When it comes to collecting qualitative data, researchers primarily rely on four methods.

  • Interviews : One-on-one conversations to gather in-depth information.
  • Focus Groups : Group discussions to explore collective opinions and experiences.
  • Observations : Watching and recording behaviors in natural settings.
  • Document Analysis : Examining existing texts, images, or artifacts.

Researchers often use multiple methods to gain a comprehensive understanding of their topic.

How is qualitative data analysis measured?

Unlike quantitative data, qualitative data analysis isn't measured in traditional numerical terms. Instead, its quality is evaluated based on several criteria. 

Trustworthiness is key, encompassing the credibility, transferability, dependability, and confirmability of the findings. The rigor of the analysis - the thoroughness and care taken in data collection and analysis - is another crucial factor. 

Transparency in documenting the analysis process and decision-making is essential, as is reflexivity - acknowledging and examining the researcher's own biases and influences. 

Employing techniques like member checking and triangulation all contribute to the strength of qualitative analysis.

Benefits of qualitative data analysis

The benefits of qualitative data analysis are numerous. It uncovers rich, nuanced understanding of complex phenomena and allows for unexpected discoveries and new research directions. 

By capturing the 'why' behind behaviors and opinions, qualitative data analysis methods provide crucial context. 

Qualitative analysis can also lead to new theoretical frameworks or hypotheses and enhances quantitative findings with depth and detail. It's particularly adept at capturing cultural nuances that might be missed in quantitative studies.

Challenges of Qualitative Data Analysis

Researchers face several challenges when conducting qualitative data analysis. 

Managing and making sense of large volumes of rich, complex data can lead to data overload. Maintaining consistent coding across large datasets or between multiple coders can be difficult. 

There's a delicate balance to strike between providing enough context and maintaining focus on analysis. Recognizing and mitigating researcher biases in data interpretation is an ongoing challenge. 

The learning curve for qualitative data analysis software can be steep and time-consuming. Ethical considerations, particularly around protecting participant anonymity while presenting rich, detailed data, require careful navigation. Integrating different types of data from various sources can be complex. Time management is crucial, as researchers must balance the depth of analysis with project timelines and resources. Finally, communicating complex qualitative insights in clear, compelling ways can be challenging.

Best Software to Analyze Qualitative Data

G2 rating: 4.6/5

Pricing: Starts at $30 monthly.

Looppanel is an AI-powered research assistant and repository platform that can make it 5x faster to get to insights, by automating all the manual, tedious parts of your job. 

Here’s how Looppanel’s features can help with qualitative data analysis:

  • Automatic Transcription: Quickly turn speech into accurate text; it works across 8 languages and even heavy accents, with over 90% accuracy.
  • AI Note-Taking: The research assistant can join you on calls and take notes, as well as automatically sort your notes based on your interview questions.
  • Automatic Tagging: Easily tag and organize your data with free AI tools.
  • Insight Generation: Create shareable insights that fit right into your other tools.
  • Repository Search: Run Google-like searches within your projects and calls to find a data snippet/quote in seconds
  • Smart Summary: Ask the AI a question on your research, and it will give you an answer, using extracts from your data as citations.

Looppanel’s focus on automating research tasks makes it perfect for researchers who want to save time and work smarter.

G2 rating: 4.7/5

Pricing: Free version available, with the Plus version costing $20 monthly.

ChatGPT, developed by OpenAI, offers a range of capabilities for qualitative data analysis including:

  • Document analysis : It can easily extract and analyze text from various file formats.
  • Summarization : GPT can condense lengthy documents into concise summaries.
  • Advanced Data Analysis (ADA) : For paid users, Chat-GPT offers quantitative analysis of data documents.
  • Sentiment analysis: Although not Chat-GPT’s specialty, it can still perform basic sentiment analysis on text data.

ChatGPT's versatility makes it valuable for researchers who need quick insights from diverse text sources.

How to use ChatGPT for qualitative data analysis

ChatGPT can be a handy sidekick in your qualitative analysis, if you do the following:

  • Use it to summarize long documents or transcripts
  • Ask it to identify key themes in your data
  • Use it for basic sentiment analysis
  • Have it generate potential codes based on your research questions
  • Use it to brainstorm interpretations of your findings

G2 rating: 4.7/5 Pricing: Custom

Atlas.ti is a powerful platform built for detailed qualitative and mixed-methods research, offering a lot of capabilities for running both quantitative and qualitative research.

It’s key data analysis features include:

  • Multi-format Support: Analyze text, PDFs, images, audio, video, and geo data all within one platform.
  • AI-Powered Coding: Uses AI to suggest codes and summarize documents.
  • Collaboration Tools: Ideal for teams working on complex research projects.
  • Data Visualization: Create network views and other visualizations to showcase relationships in your data.

G2 rating: 4.1/5 Pricing: Custom

NVivo is another powerful platform for qualitative and mixed-methods research. It’s analysis features include:

  • Data Import and Organization: Easily manage different data types, including text, audio, and video.
  • AI-Powered Coding: Speeds up the coding process with machine learning.
  • Visualization Tools: Create charts, graphs, and diagrams to represent your findings.
  • Collaboration Features: Suitable for team-based research projects.

NVivo combines AI capabilities with traditional qualitative analysis tools, making it versatile for various research needs.

Can Excel do qualitative data analysis?

Excel can be a handy tool for qualitative data analysis, especially if you're just starting out or working on a smaller project. While it's not specialized qualitative data analysis software, you can use it to organize your data, maybe putting different themes in different columns. It's good for basic coding, where you label bits of text with keywords. You can use its filter feature to focus on specific themes. Excel can also create simple charts to visualize your findings. But for bigger or more complex projects, you might want to look into software designed specifically for qualitative data analysis. These tools often have more advanced features that can save you time and help you dig deeper into your data.

How do you show qualitative analysis?

Showing qualitative data analysis is about telling the story of your data. In qualitative data analysis methods, we use quotes from interviews or documents to back up our points. Create charts or mind maps to show how different ideas connect, which is a common practice in data analysis in qualitative research. Group your findings into themes that make sense. Then, write it all up in a way that flows, explaining what you found and why it matters.

What is the best way to analyze qualitative data?

There's no one-size-fits-all approach to how to analyze qualitative data, but there are some tried-and-true steps. 

Start by getting your data in order. Then, read through it a few times to get familiar with it. As you go, start marking important bits with codes - this is a fundamental qualitative data analysis method. Group similar codes into bigger themes. Look for patterns in these themes - how do they connect? 

Finally, think about what it all means in the bigger picture of your research. Remember, it's okay to go back and forth between these steps as you dig deeper into your data. Qualitative data analysis software can be a big help in this process, especially for managing large amounts of data.

In qualitative methods of test analysis, what do test developers do to generate data?

Test developers in qualitative research might sit down with people for in-depth chats or run group discussions, which are key qualitative data analysis methods. They often use surveys with open-ended questions that let people express themselves freely. Sometimes, they'll observe people in their natural environment, taking notes on what they see. They might also dig into existing documents or artifacts that relate to their topic. The goal is to gather rich, detailed information that helps them understand the full picture, which is crucial in data analysis in qualitative research.

Which is not a purpose of reflexivity during qualitative data analysis?

Reflexivity in qualitative data analysis isn't about proving you're completely objective. That's not the goal. Instead, it's about being honest about who you are as a researcher. It's recognizing that your own experiences and views might influence how you see the data. By being upfront about this, you actually make your research more trustworthy. It's also a way to dig deeper into your data, seeing things you might have missed at first glance. This self-awareness is a crucial part of qualitative data analysis methods.

What is a qualitative data analysis example?

A simple example is analyzing customer feedback for a new product. You might collect feedback, read through responses, create codes like "ease of use" or "design," and group similar codes into themes. You'd then identify patterns and support findings with specific quotes. This process helps transform raw feedback into actionable insights.

How to analyze qualitative data from a survey?

First, gather all your responses in one place. Read through them to get a feel for what people are saying. Then, start labeling responses with codes - short descriptions of what each bit is about. This coding process is a fundamental qualitative data analysis method. Group similar codes into bigger themes. Look for patterns in these themes. Are certain ideas coming up a lot? Do different groups of people have different views? Use actual quotes from your survey to back up what you're seeing. Think about how your findings relate to your original research questions. 

Which one is better, NVivo or Atlas.ti?

NVivo is known for being user-friendly and great for team projects. Atlas.ti shines when it comes to visual mapping of concepts and handling geographic data. Both can handle a variety of data types and have powerful tools for qualitative data analysis. The best way to decide is to try out both if you can. 

While these are powerful tools, the core of qualitative data analysis still relies on your analytical skills and understanding of qualitative data analysis methods.

Do I need to use NVivo for qualitative data analysis?

You don't necessarily need NVivo for qualitative data analysis, but it can definitely make your life easier, especially for bigger projects. Think of it like using a power tool versus a hand tool - you can get the job done either way, but the power tool might save you time and effort. For smaller projects or if you're just starting out, you might be fine with simpler tools or even free qualitative data analysis software. But if you're dealing with lots of data, or if you need to collaborate with a team, or if you want to do more complex analysis, then specialized qualitative data analysis software like NVivo can be a big help. It's all about finding the right tool for your specific research needs and the qualitative data analysis methods you're using.

Here’s a guide that can help you decide.

How to use NVivo for qualitative data analysis

First, you import all your data - interviews, documents, videos, whatever you've got. Then you start creating "nodes," which are like folders for different themes or ideas in your data. As you read through your material, you highlight bits that relate to these themes and file them under the right nodes. NVivo lets you easily search through all this organized data, find connections between different themes, and even create visual maps of how everything relates.

How much does NVivo cost?

NVivo's pricing isn't one-size-fits-all. They offer different plans for individuals, teams, and large organizations, but they don't publish their prices openly. Contact the team here for a custom quote.

What are the four steps of qualitative data analysis?

While qualitative data analysis is often iterative, it generally follows these four main steps:

1. Data Collection: Gathering raw data through interviews, observations, or documents.

2. Data Preparation: Organizing and transcribing the collected data.

3. Data Coding: Identifying and labeling important concepts or themes in the data.

4. Interpretation: Drawing meaning from the coded data and developing insights.

Table of Contents

Twitter

Get the best resources for UX Research, in your inbox

More from looppanel, how to use ai for qualitative data analysis.

what is analysis of data in research

Transcription in Qualitative Research: A Comprehensive Guide for UX Researchers

what is analysis of data in research

Triangulation in Qualitative Research: A Comprehensive Guide [2024]

what is analysis of data in research

An Overview of Data Analysis and Interpretations in Research

  • January 2020

Dawit Dibekulu at Bahir Dar University

  • Bahir Dar University

Discover the world's research

  • 25+ million members
  • 160+ million publication pages
  • 2.3+ billion citations
  • Joan P. Almerol
  • Josephine B. Baguio

Thuli Gladys Ntuli

  • Mpho Kenneth Madavha

Awelani V Mudau

  • Samiya Telli

Houria Ghodbane

  • Yassine Kadmi

Duncan Cramer

  • John W Creswell
  • TECHNOMETRICS
  • Leone Y. Low
  • Matthew Hassett

Daniel Muijs

  • C. A. Moser
  • R. L. Ackoff
  • A. M. Wilkinson
  • Stuart W. Cook

Morton Deutsch

  • Marie Jahoda
  • Recruit researchers
  • Join for free
  • Login Email Tip: Most researchers use their institutional email address as their ResearchGate login Password Forgot password? Keep me logged in Log in or Continue with Google Welcome back! Please log in. Email · Hint Tip: Most researchers use their institutional email address as their ResearchGate login Password Forgot password? Keep me logged in Log in or Continue with Google No account? Sign up

PW Skills | Blog

Data Analysis Techniques in Research – Methods, Tools & Examples

' src=

Varun Saharawat is a seasoned professional in the fields of SEO and content writing. With a profound knowledge of the intricate aspects of these disciplines, Varun has established himself as a valuable asset in the world of digital marketing and online content creation.

Data analysis techniques in research are essential because they allow researchers to derive meaningful insights from data sets to support their hypotheses or research objectives.

data analysis techniques in research

Data Analysis Techniques in Research : While various groups, institutions, and professionals may have diverse approaches to data analysis, a universal definition captures its essence. Data analysis involves refining, transforming, and interpreting raw data to derive actionable insights that guide informed decision-making for businesses.

A straightforward illustration of data analysis emerges when we make everyday decisions, basing our choices on past experiences or predictions of potential outcomes.

If you want to learn more about this topic and acquire valuable skills that will set you apart in today’s data-driven world, we highly recommend enrolling in the Data Analytics Course by Physics Wallah . And as a special offer for our readers, use the coupon code “READER” to get a discount on this course.

Table of Contents

What is Data Analysis?

Data analysis is the systematic process of inspecting, cleaning, transforming, and interpreting data with the objective of discovering valuable insights and drawing meaningful conclusions. This process involves several steps:

  • Inspecting : Initial examination of data to understand its structure, quality, and completeness.
  • Cleaning : Removing errors, inconsistencies, or irrelevant information to ensure accurate analysis.
  • Transforming : Converting data into a format suitable for analysis, such as normalization or aggregation.
  • Interpreting : Analyzing the transformed data to identify patterns, trends, and relationships.

Types of Data Analysis Techniques in Research

Data analysis techniques in research are categorized into qualitative and quantitative methods, each with its specific approaches and tools. These techniques are instrumental in extracting meaningful insights, patterns, and relationships from data to support informed decision-making, validate hypotheses, and derive actionable recommendations. Below is an in-depth exploration of the various types of data analysis techniques commonly employed in research:

1) Qualitative Analysis:

Definition: Qualitative analysis focuses on understanding non-numerical data, such as opinions, concepts, or experiences, to derive insights into human behavior, attitudes, and perceptions.

  • Content Analysis: Examines textual data, such as interview transcripts, articles, or open-ended survey responses, to identify themes, patterns, or trends.
  • Narrative Analysis: Analyzes personal stories or narratives to understand individuals’ experiences, emotions, or perspectives.
  • Ethnographic Studies: Involves observing and analyzing cultural practices, behaviors, and norms within specific communities or settings.

2) Quantitative Analysis:

Quantitative analysis emphasizes numerical data and employs statistical methods to explore relationships, patterns, and trends. It encompasses several approaches:

Descriptive Analysis:

  • Frequency Distribution: Represents the number of occurrences of distinct values within a dataset.
  • Central Tendency: Measures such as mean, median, and mode provide insights into the central values of a dataset.
  • Dispersion: Techniques like variance and standard deviation indicate the spread or variability of data.

Diagnostic Analysis:

  • Regression Analysis: Assesses the relationship between dependent and independent variables, enabling prediction or understanding causality.
  • ANOVA (Analysis of Variance): Examines differences between groups to identify significant variations or effects.

Predictive Analysis:

  • Time Series Forecasting: Uses historical data points to predict future trends or outcomes.
  • Machine Learning Algorithms: Techniques like decision trees, random forests, and neural networks predict outcomes based on patterns in data.

Prescriptive Analysis:

  • Optimization Models: Utilizes linear programming, integer programming, or other optimization techniques to identify the best solutions or strategies.
  • Simulation: Mimics real-world scenarios to evaluate various strategies or decisions and determine optimal outcomes.

Specific Techniques:

  • Monte Carlo Simulation: Models probabilistic outcomes to assess risk and uncertainty.
  • Factor Analysis: Reduces the dimensionality of data by identifying underlying factors or components.
  • Cohort Analysis: Studies specific groups or cohorts over time to understand trends, behaviors, or patterns within these groups.
  • Cluster Analysis: Classifies objects or individuals into homogeneous groups or clusters based on similarities or attributes.
  • Sentiment Analysis: Uses natural language processing and machine learning techniques to determine sentiment, emotions, or opinions from textual data.

Also Read: AI and Predictive Analytics: Examples, Tools, Uses, Ai Vs Predictive Analytics

Data Analysis Techniques in Research Examples

To provide a clearer understanding of how data analysis techniques are applied in research, let’s consider a hypothetical research study focused on evaluating the impact of online learning platforms on students’ academic performance.

Research Objective:

Determine if students using online learning platforms achieve higher academic performance compared to those relying solely on traditional classroom instruction.

Data Collection:

  • Quantitative Data: Academic scores (grades) of students using online platforms and those using traditional classroom methods.
  • Qualitative Data: Feedback from students regarding their learning experiences, challenges faced, and preferences.

Data Analysis Techniques Applied:

1) Descriptive Analysis:

  • Calculate the mean, median, and mode of academic scores for both groups.
  • Create frequency distributions to represent the distribution of grades in each group.

2) Diagnostic Analysis:

  • Conduct an Analysis of Variance (ANOVA) to determine if there’s a statistically significant difference in academic scores between the two groups.
  • Perform Regression Analysis to assess the relationship between the time spent on online platforms and academic performance.

3) Predictive Analysis:

  • Utilize Time Series Forecasting to predict future academic performance trends based on historical data.
  • Implement Machine Learning algorithms to develop a predictive model that identifies factors contributing to academic success on online platforms.

4) Prescriptive Analysis:

  • Apply Optimization Models to identify the optimal combination of online learning resources (e.g., video lectures, interactive quizzes) that maximize academic performance.
  • Use Simulation Techniques to evaluate different scenarios, such as varying student engagement levels with online resources, to determine the most effective strategies for improving learning outcomes.

5) Specific Techniques:

  • Conduct Factor Analysis on qualitative feedback to identify common themes or factors influencing students’ perceptions and experiences with online learning.
  • Perform Cluster Analysis to segment students based on their engagement levels, preferences, or academic outcomes, enabling targeted interventions or personalized learning strategies.
  • Apply Sentiment Analysis on textual feedback to categorize students’ sentiments as positive, negative, or neutral regarding online learning experiences.

By applying a combination of qualitative and quantitative data analysis techniques, this research example aims to provide comprehensive insights into the effectiveness of online learning platforms.

Also Read: Learning Path to Become a Data Analyst in 2024

Data Analysis Techniques in Quantitative Research

Quantitative research involves collecting numerical data to examine relationships, test hypotheses, and make predictions. Various data analysis techniques are employed to interpret and draw conclusions from quantitative data. Here are some key data analysis techniques commonly used in quantitative research:

1) Descriptive Statistics:

  • Description: Descriptive statistics are used to summarize and describe the main aspects of a dataset, such as central tendency (mean, median, mode), variability (range, variance, standard deviation), and distribution (skewness, kurtosis).
  • Applications: Summarizing data, identifying patterns, and providing initial insights into the dataset.

2) Inferential Statistics:

  • Description: Inferential statistics involve making predictions or inferences about a population based on a sample of data. This technique includes hypothesis testing, confidence intervals, t-tests, chi-square tests, analysis of variance (ANOVA), regression analysis, and correlation analysis.
  • Applications: Testing hypotheses, making predictions, and generalizing findings from a sample to a larger population.

3) Regression Analysis:

  • Description: Regression analysis is a statistical technique used to model and examine the relationship between a dependent variable and one or more independent variables. Linear regression, multiple regression, logistic regression, and nonlinear regression are common types of regression analysis .
  • Applications: Predicting outcomes, identifying relationships between variables, and understanding the impact of independent variables on the dependent variable.

4) Correlation Analysis:

  • Description: Correlation analysis is used to measure and assess the strength and direction of the relationship between two or more variables. The Pearson correlation coefficient, Spearman rank correlation coefficient, and Kendall’s tau are commonly used measures of correlation.
  • Applications: Identifying associations between variables and assessing the degree and nature of the relationship.

5) Factor Analysis:

  • Description: Factor analysis is a multivariate statistical technique used to identify and analyze underlying relationships or factors among a set of observed variables. It helps in reducing the dimensionality of data and identifying latent variables or constructs.
  • Applications: Identifying underlying factors or constructs, simplifying data structures, and understanding the underlying relationships among variables.

6) Time Series Analysis:

  • Description: Time series analysis involves analyzing data collected or recorded over a specific period at regular intervals to identify patterns, trends, and seasonality. Techniques such as moving averages, exponential smoothing, autoregressive integrated moving average (ARIMA), and Fourier analysis are used.
  • Applications: Forecasting future trends, analyzing seasonal patterns, and understanding time-dependent relationships in data.

7) ANOVA (Analysis of Variance):

  • Description: Analysis of variance (ANOVA) is a statistical technique used to analyze and compare the means of two or more groups or treatments to determine if they are statistically different from each other. One-way ANOVA, two-way ANOVA, and MANOVA (Multivariate Analysis of Variance) are common types of ANOVA.
  • Applications: Comparing group means, testing hypotheses, and determining the effects of categorical independent variables on a continuous dependent variable.

8) Chi-Square Tests:

  • Description: Chi-square tests are non-parametric statistical tests used to assess the association between categorical variables in a contingency table. The Chi-square test of independence, goodness-of-fit test, and test of homogeneity are common chi-square tests.
  • Applications: Testing relationships between categorical variables, assessing goodness-of-fit, and evaluating independence.

These quantitative data analysis techniques provide researchers with valuable tools and methods to analyze, interpret, and derive meaningful insights from numerical data. The selection of a specific technique often depends on the research objectives, the nature of the data, and the underlying assumptions of the statistical methods being used.

Also Read: Analysis vs. Analytics: How Are They Different?

Data Analysis Methods

Data analysis methods refer to the techniques and procedures used to analyze, interpret, and draw conclusions from data. These methods are essential for transforming raw data into meaningful insights, facilitating decision-making processes, and driving strategies across various fields. Here are some common data analysis methods:

  • Description: Descriptive statistics summarize and organize data to provide a clear and concise overview of the dataset. Measures such as mean, median, mode, range, variance, and standard deviation are commonly used.
  • Description: Inferential statistics involve making predictions or inferences about a population based on a sample of data. Techniques such as hypothesis testing, confidence intervals, and regression analysis are used.

3) Exploratory Data Analysis (EDA):

  • Description: EDA techniques involve visually exploring and analyzing data to discover patterns, relationships, anomalies, and insights. Methods such as scatter plots, histograms, box plots, and correlation matrices are utilized.
  • Applications: Identifying trends, patterns, outliers, and relationships within the dataset.

4) Predictive Analytics:

  • Description: Predictive analytics use statistical algorithms and machine learning techniques to analyze historical data and make predictions about future events or outcomes. Techniques such as regression analysis, time series forecasting, and machine learning algorithms (e.g., decision trees, random forests, neural networks) are employed.
  • Applications: Forecasting future trends, predicting outcomes, and identifying potential risks or opportunities.

5) Prescriptive Analytics:

  • Description: Prescriptive analytics involve analyzing data to recommend actions or strategies that optimize specific objectives or outcomes. Optimization techniques, simulation models, and decision-making algorithms are utilized.
  • Applications: Recommending optimal strategies, decision-making support, and resource allocation.

6) Qualitative Data Analysis:

  • Description: Qualitative data analysis involves analyzing non-numerical data, such as text, images, videos, or audio, to identify themes, patterns, and insights. Methods such as content analysis, thematic analysis, and narrative analysis are used.
  • Applications: Understanding human behavior, attitudes, perceptions, and experiences.

7) Big Data Analytics:

  • Description: Big data analytics methods are designed to analyze large volumes of structured and unstructured data to extract valuable insights. Technologies such as Hadoop, Spark, and NoSQL databases are used to process and analyze big data.
  • Applications: Analyzing large datasets, identifying trends, patterns, and insights from big data sources.

8) Text Analytics:

  • Description: Text analytics methods involve analyzing textual data, such as customer reviews, social media posts, emails, and documents, to extract meaningful information and insights. Techniques such as sentiment analysis, text mining, and natural language processing (NLP) are used.
  • Applications: Analyzing customer feedback, monitoring brand reputation, and extracting insights from textual data sources.

These data analysis methods are instrumental in transforming data into actionable insights, informing decision-making processes, and driving organizational success across various sectors, including business, healthcare, finance, marketing, and research. The selection of a specific method often depends on the nature of the data, the research objectives, and the analytical requirements of the project or organization.

Also Read: Quantitative Data Analysis: Types, Analysis & Examples

Data Analysis Tools

Data analysis tools are essential instruments that facilitate the process of examining, cleaning, transforming, and modeling data to uncover useful information, make informed decisions, and drive strategies. Here are some prominent data analysis tools widely used across various industries:

1) Microsoft Excel:

  • Description: A spreadsheet software that offers basic to advanced data analysis features, including pivot tables, data visualization tools, and statistical functions.
  • Applications: Data cleaning, basic statistical analysis, visualization, and reporting.

2) R Programming Language :

  • Description: An open-source programming language specifically designed for statistical computing and data visualization.
  • Applications: Advanced statistical analysis, data manipulation, visualization, and machine learning.

3) Python (with Libraries like Pandas, NumPy, Matplotlib, and Seaborn):

  • Description: A versatile programming language with libraries that support data manipulation, analysis, and visualization.
  • Applications: Data cleaning, statistical analysis, machine learning, and data visualization.

4) SPSS (Statistical Package for the Social Sciences):

  • Description: A comprehensive statistical software suite used for data analysis, data mining, and predictive analytics.
  • Applications: Descriptive statistics, hypothesis testing, regression analysis, and advanced analytics.

5) SAS (Statistical Analysis System):

  • Description: A software suite used for advanced analytics, multivariate analysis, and predictive modeling.
  • Applications: Data management, statistical analysis, predictive modeling, and business intelligence.

6) Tableau:

  • Description: A data visualization tool that allows users to create interactive and shareable dashboards and reports.
  • Applications: Data visualization , business intelligence , and interactive dashboard creation.

7) Power BI:

  • Description: A business analytics tool developed by Microsoft that provides interactive visualizations and business intelligence capabilities.
  • Applications: Data visualization, business intelligence, reporting, and dashboard creation.

8) SQL (Structured Query Language) Databases (e.g., MySQL, PostgreSQL, Microsoft SQL Server):

  • Description: Database management systems that support data storage, retrieval, and manipulation using SQL queries.
  • Applications: Data retrieval, data cleaning, data transformation, and database management.

9) Apache Spark:

  • Description: A fast and general-purpose distributed computing system designed for big data processing and analytics.
  • Applications: Big data processing, machine learning, data streaming, and real-time analytics.

10) IBM SPSS Modeler:

  • Description: A data mining software application used for building predictive models and conducting advanced analytics.
  • Applications: Predictive modeling, data mining, statistical analysis, and decision optimization.

These tools serve various purposes and cater to different data analysis needs, from basic statistical analysis and data visualization to advanced analytics, machine learning, and big data processing. The choice of a specific tool often depends on the nature of the data, the complexity of the analysis, and the specific requirements of the project or organization.

Also Read: How to Analyze Survey Data: Methods & Examples

Importance of Data Analysis in Research

The importance of data analysis in research cannot be overstated; it serves as the backbone of any scientific investigation or study. Here are several key reasons why data analysis is crucial in the research process:

  • Data analysis helps ensure that the results obtained are valid and reliable. By systematically examining the data, researchers can identify any inconsistencies or anomalies that may affect the credibility of the findings.
  • Effective data analysis provides researchers with the necessary information to make informed decisions. By interpreting the collected data, researchers can draw conclusions, make predictions, or formulate recommendations based on evidence rather than intuition or guesswork.
  • Data analysis allows researchers to identify patterns, trends, and relationships within the data. This can lead to a deeper understanding of the research topic, enabling researchers to uncover insights that may not be immediately apparent.
  • In empirical research, data analysis plays a critical role in testing hypotheses. Researchers collect data to either support or refute their hypotheses, and data analysis provides the tools and techniques to evaluate these hypotheses rigorously.
  • Transparent and well-executed data analysis enhances the credibility of research findings. By clearly documenting the data analysis methods and procedures, researchers allow others to replicate the study, thereby contributing to the reproducibility of research findings.
  • In fields such as business or healthcare, data analysis helps organizations allocate resources more efficiently. By analyzing data on consumer behavior, market trends, or patient outcomes, organizations can make strategic decisions about resource allocation, budgeting, and planning.
  • In public policy and social sciences, data analysis is instrumental in developing and evaluating policies and interventions. By analyzing data on social, economic, or environmental factors, policymakers can assess the effectiveness of existing policies and inform the development of new ones.
  • Data analysis allows for continuous improvement in research methods and practices. By analyzing past research projects, identifying areas for improvement, and implementing changes based on data-driven insights, researchers can refine their approaches and enhance the quality of future research endeavors.

However, it is important to remember that mastering these techniques requires practice and continuous learning. That’s why we highly recommend the Data Analytics Course by Physics Wallah . Not only does it cover all the fundamentals of data analysis, but it also provides hands-on experience with various tools such as Excel, Python, and Tableau. Plus, if you use the “ READER ” coupon code at checkout, you can get a special discount on the course.

For Latest Tech Related Information, Join Our Official Free Telegram Group : PW Skills Telegram Group

Data Analysis Techniques in Research FAQs

What are the 5 techniques for data analysis.

The five techniques for data analysis include: Descriptive Analysis Diagnostic Analysis Predictive Analysis Prescriptive Analysis Qualitative Analysis

What are techniques of data analysis in research?

Techniques of data analysis in research encompass both qualitative and quantitative methods. These techniques involve processes like summarizing raw data, investigating causes of events, forecasting future outcomes, offering recommendations based on predictions, and examining non-numerical data to understand concepts or experiences.

What are the 3 methods of data analysis?

The three primary methods of data analysis are: Qualitative Analysis Quantitative Analysis Mixed-Methods Analysis

What are the four types of data analysis techniques?

The four types of data analysis techniques are: Descriptive Analysis Diagnostic Analysis Predictive Analysis Prescriptive Analysis

  • Full Form Of OLAP

OLAP online Analytical Processing

Want to learn about What OLAP is? You are at the right place, This article will help you understand what…

  • Data Modeling – Overview, Concepts, and Types

what is analysis of data in research

Data modeling is the process of creating visual representations of data structures to define how Data is stored, connected, and…

  • Mastering Business Analytics: Strategies and Insights

business analytics

Business analytics can be defined as the systematic exploration of an organization's data to extract insights that aid in making…

right adv

Related Articles

  • Big Data Classes – Top 10 Big Data Classes
  • Applications of Big Data
  • How to Analysis of Survey Data: Methods & Examples
  • Best BI Tool: Top 15 Business Intelligence Tools (BI Tools)
  • What is Advanced Analytics: Types, Tools, Examples, Jobs, Salary
  • Visual Analytics: Transforming Data into Actionable Insights
  • 10 Most Popular Big Data Analytics Tools

bottom banner

Banner

Research Guide: Data analysis and reporting findings

  • Postgraduate Online Training subject guide This link opens in a new window
  • Open Educational Resources (OERs)
  • Library support
  • Research ideas
  • You and your supervisor
  • Researcher skills
  • Research Data Management This link opens in a new window
  • Literature review
  • Plagiarism This link opens in a new window
  • Research Methods
  • Data analysis and reporting findings
  • Statistical support
  • Writing support
  • Researcher visibility
  • Conferences and Presentations
  • Postgraduate Forums
  • Soft skills development
  • Emotional support
  • The Commons Informer (blog)
  • Research Tip Archives
  • RC Newsletter Archives
  • Evaluation Forms
  • Editing FAQs

Data analysis and findings

Data analysis is the most crucial part of any research. Data analysis summarizes collected data. It involves the interpretation of data gathered through the use of analytical and logical reasoning to determine patterns, relationships or trends. 

Data Analysis Checklist

Cleaning  data

* Did you capture and code your data in the right manner?

*Do you have all data or missing data?

* Do you have enough observations?

* Do you have any outliers? If yes, what is the remedy for outlier?

* Does your data have the potential to answer your questions?

Analyzing data

* Visualize your data, e.g. charts, tables, and graphs, to mention a few.

*  Identify patterns, correlations, and trends

* Test your hypotheses

* Let your data tell a story

Reports the results

* Communicate and interpret the results

* Conclude and recommend

* Your targeted audience must understand your results

* Use more datasets and samples

* Use accessible and understandable data analytical tool

* Do not delegate your data analysis

* Clean data to confirm that they are complete and free from errors

* Analyze cleaned data

* Understand your results

* Keep in mind who will be reading your results and present it in a way that they will understand it

* Share the results with the supervisor oftentimes

Past presentations

  • PhD Writing Retreat - Analysing_Fieldwork_Data by Cori Wielenga A clear and concise presentation on the ‘now what’ and ‘so what’ of data collection and analysis - compiled and originally presented by Cori Wielenga.

Online Resources

what is analysis of data in research

  • Qualitative analysis of interview data: A step-by-step guide
  • Qualitative Data Analysis - Coding & Developing Themes

Beginner's Guide to SPSS

  • SPSS Guideline for Beginners Presented by Hennie Gerber

Recommended Quantitative Data Analysis books

what is analysis of data in research

Recommended Qualitative Data Analysis books

what is analysis of data in research

  • << Previous: Data collection techniques
  • Next: Statistical support >>
  • Last Updated: Aug 23, 2024 12:44 PM
  • URL: https://library.up.ac.za/c.php?g=485435
  • Skip to main content
  • Skip to primary sidebar
  • Skip to footer
  • QuestionPro

survey software icon

  • Solutions Industries Gaming Automotive Sports and events Education Government Travel & Hospitality Financial Services Healthcare Cannabis Technology Use Case AskWhy Communities Audience Contactless surveys Mobile LivePolls Member Experience GDPR Positive People Science 360 Feedback Surveys
  • Resources Blog eBooks Survey Templates Case Studies Training Help center

what is analysis of data in research

Home Market Research

Qualitative Data Analysis: What is it, Methods + Examples

Explore qualitative data analysis with diverse methods and real-world examples. Uncover the nuances of human experiences with this guide.

In a world rich with information and narrative, understanding the deeper layers of human experiences requires a unique vision that goes beyond numbers and figures. This is where the power of qualitative data analysis comes to light.

In this blog, we’ll learn about qualitative data analysis, explore its methods, and provide real-life examples showcasing its power in uncovering insights.

What is Qualitative Data Analysis?

Qualitative data analysis is a systematic process of examining non-numerical data to extract meaning, patterns, and insights.

In contrast to quantitative analysis, which focuses on numbers and statistical metrics, the qualitative study focuses on the qualitative aspects of data, such as text, images, audio, and videos. It seeks to understand every aspect of human experiences, perceptions, and behaviors by examining the data’s richness.

Companies frequently conduct this analysis on customer feedback. You can collect qualitative data from reviews, complaints, chat messages, interactions with support centers, customer interviews, case notes, or even social media comments. This kind of data holds the key to understanding customer sentiments and preferences in a way that goes beyond mere numbers.

Importance of Qualitative Data Analysis

Qualitative data analysis plays a crucial role in your research and decision-making process across various disciplines. Let’s explore some key reasons that underline the significance of this analysis:

In-Depth Understanding

It enables you to explore complex and nuanced aspects of a phenomenon, delving into the ‘how’ and ‘why’ questions. This method provides you with a deeper understanding of human behavior, experiences, and contexts that quantitative approaches might not capture fully.

Contextual Insight

You can use this analysis to give context to numerical data. It will help you understand the circumstances and conditions that influence participants’ thoughts, feelings, and actions. This contextual insight becomes essential for generating comprehensive explanations.

Theory Development

You can generate or refine hypotheses via qualitative data analysis. As you analyze the data attentively, you can form hypotheses, concepts, and frameworks that will drive your future research and contribute to theoretical advances.

Participant Perspectives

When performing qualitative research, you can highlight participant voices and opinions. This approach is especially useful for understanding marginalized or underrepresented people, as it allows them to communicate their experiences and points of view.

Exploratory Research

The analysis is frequently used at the exploratory stage of your project. It assists you in identifying important variables, developing research questions, and designing quantitative studies that will follow.

Types of Qualitative Data

When conducting qualitative research, you can use several qualitative data collection methods , and here you will come across many sorts of qualitative data that can provide you with unique insights into your study topic. These data kinds add new views and angles to your understanding and analysis.

Interviews and Focus Groups

Interviews and focus groups will be among your key methods for gathering qualitative data. Interviews are one-on-one talks in which participants can freely share their thoughts, experiences, and opinions.

Focus groups, on the other hand, are discussions in which members interact with one another, resulting in dynamic exchanges of ideas. Both methods provide rich qualitative data and direct access to participant perspectives.

Observations and Field Notes

Observations and field notes are another useful sort of qualitative data. You can immerse yourself in the research environment through direct observation, carefully documenting behaviors, interactions, and contextual factors.

These observations will be recorded in your field notes, providing a complete picture of the environment and the behaviors you’re researching. This data type is especially important for comprehending behavior in their natural setting.

Textual and Visual Data

Textual and visual data include a wide range of resources that can be qualitatively analyzed. Documents, written narratives, and transcripts from various sources, such as interviews or speeches, are examples of textual data.

Photographs, films, and even artwork provide a visual layer to your research. These forms of data allow you to investigate what is spoken and the underlying emotions, details, and symbols expressed by language or pictures.

When to Choose Qualitative Data Analysis over Quantitative Data Analysis

As you begin your research journey, understanding why the analysis of qualitative data is important will guide your approach to understanding complex events. If you analyze qualitative data, it will provide new insights that complement quantitative methodologies, which will give you a broader understanding of your study topic.

It is critical to know when to use qualitative analysis over quantitative procedures. You can prefer qualitative data analysis when:

  • Complexity Reigns: When your research questions involve deep human experiences, motivations, or emotions, qualitative research excels at revealing these complexities.
  • Exploration is Key: Qualitative analysis is ideal for exploratory research. It will assist you in understanding a new or poorly understood topic before formulating quantitative hypotheses.
  • Context Matters: If you want to understand how context affects behaviors or results, qualitative data analysis provides the depth needed to grasp these relationships.
  • Unanticipated Findings: When your study provides surprising new viewpoints or ideas, qualitative analysis helps you to delve deeply into these emerging themes.
  • Subjective Interpretation is Vital: When it comes to understanding people’s subjective experiences and interpretations, qualitative data analysis is the way to go.

You can make informed decisions regarding the right approach for your research objectives if you understand the importance of qualitative analysis and recognize the situations where it shines.

Qualitative Data Analysis Methods and Examples

Exploring various qualitative data analysis methods will provide you with a wide collection for making sense of your research findings. Once the data has been collected, you can choose from several analysis methods based on your research objectives and the data type you’ve collected.

There are five main methods for analyzing qualitative data. Each method takes a distinct approach to identifying patterns, themes, and insights within your qualitative data. They are:

Method 1: Content Analysis

Content analysis is a methodical technique for analyzing textual or visual data in a structured manner. In this method, you will categorize qualitative data by splitting it into manageable pieces and assigning the manual coding process to these units.

As you go, you’ll notice ongoing codes and designs that will allow you to conclude the content. This method is very beneficial for detecting common ideas, concepts, or themes in your data without losing the context.

Steps to Do Content Analysis

Follow these steps when conducting content analysis:

  • Collect and Immerse: Begin by collecting the necessary textual or visual data. Immerse yourself in this data to fully understand its content, context, and complexities.
  • Assign Codes and Categories: Assign codes to relevant data sections that systematically represent major ideas or themes. Arrange comparable codes into groups that cover the major themes.
  • Analyze and Interpret: Develop a structured framework from the categories and codes. Then, evaluate the data in the context of your research question, investigate relationships between categories, discover patterns, and draw meaning from these connections.

Benefits & Challenges

There are various advantages to using content analysis:

  • Structured Approach: It offers a systematic approach to dealing with large data sets and ensures consistency throughout the research.
  • Objective Insights: This method promotes objectivity, which helps to reduce potential biases in your study.
  • Pattern Discovery: Content analysis can help uncover hidden trends, themes, and patterns that are not always obvious.
  • Versatility: You can apply content analysis to various data formats, including text, internet content, images, etc.

However, keep in mind the challenges that arise:

  • Subjectivity: Even with the best attempts, a certain bias may remain in coding and interpretation.
  • Complexity: Analyzing huge data sets requires time and great attention to detail.
  • Contextual Nuances: Content analysis may not capture all of the contextual richness that qualitative data analysis highlights.

Example of Content Analysis

Suppose you’re conducting market research and looking at customer feedback on a product. As you collect relevant data and analyze feedback, you’ll see repeating codes like “price,” “quality,” “customer service,” and “features.” These codes are organized into categories such as “positive reviews,” “negative reviews,” and “suggestions for improvement.”

According to your findings, themes such as “price” and “customer service” stand out and show that pricing and customer service greatly impact customer satisfaction. This example highlights the power of content analysis for obtaining significant insights from large textual data collections.

Method 2: Thematic Analysis

Thematic analysis is a well-structured procedure for identifying and analyzing recurring themes in your data. As you become more engaged in the data, you’ll generate codes or short labels representing key concepts. These codes are then organized into themes, providing a consistent framework for organizing and comprehending the substance of the data.

The analysis allows you to organize complex narratives and perspectives into meaningful categories, which will allow you to identify connections and patterns that may not be visible at first.

Steps to Do Thematic Analysis

Follow these steps when conducting a thematic analysis:

  • Code and Group: Start by thoroughly examining the data and giving initial codes that identify the segments. To create initial themes, combine relevant codes.
  • Code and Group: Begin by engaging yourself in the data, assigning first codes to notable segments. To construct basic themes, group comparable codes together.
  • Analyze and Report: Analyze the data within each theme to derive relevant insights. Organize the topics into a consistent structure and explain your findings, along with data extracts that represent each theme.

Thematic analysis has various benefits:

  • Structured Exploration: It is a method for identifying patterns and themes in complex qualitative data.
  • Comprehensive knowledge: Thematic analysis promotes an in-depth understanding of the complications and meanings of the data.
  • Application Flexibility: This method can be customized to various research situations and data kinds.

However, challenges may arise, such as:

  • Interpretive Nature: Interpreting qualitative data in thematic analysis is vital, and it is critical to manage researcher bias.
  • Time-consuming: The study can be time-consuming, especially with large data sets.
  • Subjectivity: The selection of codes and topics might be subjective.

Example of Thematic Analysis

Assume you’re conducting a thematic analysis on job satisfaction interviews. Following your immersion in the data, you assign initial codes such as “work-life balance,” “career growth,” and “colleague relationships.” As you organize these codes, you’ll notice themes develop, such as “Factors Influencing Job Satisfaction” and “Impact on Work Engagement.”

Further investigation reveals the tales and experiences included within these themes and provides insights into how various elements influence job satisfaction. This example demonstrates how thematic analysis can reveal meaningful patterns and insights in qualitative data.

Method 3: Narrative Analysis

The narrative analysis involves the narratives that people share. You’ll investigate the histories in your data, looking at how stories are created and the meanings they express. This method is excellent for learning how people make sense of their experiences through narrative.

Steps to Do Narrative Analysis

The following steps are involved in narrative analysis:

  • Gather and Analyze: Start by collecting narratives, such as first-person tales, interviews, or written accounts. Analyze the stories, focusing on the plot, feelings, and characters.
  • Find Themes: Look for recurring themes or patterns in various narratives. Think about the similarities and differences between these topics and personal experiences.
  • Interpret and Extract Insights: Contextualize the narratives within their larger context. Accept the subjective nature of each narrative and analyze the narrator’s voice and style. Extract insights from the tales by diving into the emotions, motivations, and implications communicated by the stories.

There are various advantages to narrative analysis:

  • Deep Exploration: It lets you look deeply into people’s personal experiences and perspectives.
  • Human-Centered: This method prioritizes the human perspective, allowing individuals to express themselves.

However, difficulties may arise, such as:

  • Interpretive Complexity: Analyzing narratives requires dealing with the complexities of meaning and interpretation.
  • Time-consuming: Because of the richness and complexities of tales, working with them can be time-consuming.

Example of Narrative Analysis

Assume you’re conducting narrative analysis on refugee interviews. As you read the stories, you’ll notice common themes of toughness, loss, and hope. The narratives provide insight into the obstacles that refugees face, their strengths, and the dreams that guide them.

The analysis can provide a deeper insight into the refugees’ experiences and the broader social context they navigate by examining the narratives’ emotional subtleties and underlying meanings. This example highlights how narrative analysis can reveal important insights into human stories.

Method 4: Grounded Theory Analysis

Grounded theory analysis is an iterative and systematic approach that allows you to create theories directly from data without being limited by pre-existing hypotheses. With an open mind, you collect data and generate early codes and labels that capture essential ideas or concepts within the data.

As you progress, you refine these codes and increasingly connect them, eventually developing a theory based on the data. Grounded theory analysis is a dynamic process for developing new insights and hypotheses based on details in your data.

Steps to Do Grounded Theory Analysis

Grounded theory analysis requires the following steps:

  • Initial Coding: First, immerse yourself in the data, producing initial codes that represent major concepts or patterns.
  • Categorize and Connect: Using axial coding, organize the initial codes, which establish relationships and connections between topics.
  • Build the Theory: Focus on creating a core category that connects the codes and themes. Regularly refine the theory by comparing and integrating new data, ensuring that it evolves organically from the data.

Grounded theory analysis has various benefits:

  • Theory Generation: It provides a one-of-a-kind opportunity to generate hypotheses straight from data and promotes new insights.
  • In-depth Understanding: The analysis allows you to deeply analyze the data and reveal complex relationships and patterns.
  • Flexible Process: This method is customizable and ongoing, which allows you to enhance your research as you collect additional data.

However, challenges might arise with:

  • Time and Resources: Because grounded theory analysis is a continuous process, it requires a large commitment of time and resources.
  • Theoretical Development: Creating a grounded theory involves a thorough understanding of qualitative data analysis software and theoretical concepts.
  • Interpretation of Complexity: Interpreting and incorporating a newly developed theory into existing literature can be intellectually hard.

Example of Grounded Theory Analysis

Assume you’re performing a grounded theory analysis on workplace collaboration interviews. As you open code the data, you will discover notions such as “communication barriers,” “team dynamics,” and “leadership roles.” Axial coding demonstrates links between these notions, emphasizing the significance of efficient communication in developing collaboration.

You create the core “Integrated Communication Strategies” category through selective coding, which unifies new topics.

This theory-driven category serves as the framework for understanding how numerous aspects contribute to effective team collaboration. This example shows how grounded theory analysis allows you to generate a theory directly from the inherent nature of the data.

Method 5: Discourse Analysis

Discourse analysis focuses on language and communication. You’ll look at how language produces meaning and how it reflects power relations, identities, and cultural influences. This strategy examines what is said and how it is said; the words, phrasing, and larger context of communication.

The analysis is precious when investigating power dynamics, identities, and cultural influences encoded in language. By evaluating the language used in your data, you can identify underlying assumptions, cultural standards, and how individuals negotiate meaning through communication.

Steps to Do Discourse Analysis

Conducting discourse analysis entails the following steps:

  • Select Discourse: For analysis, choose language-based data such as texts, speeches, or media content.
  • Analyze Language: Immerse yourself in the conversation, examining language choices, metaphors, and underlying assumptions.
  • Discover Patterns: Recognize the dialogue’s reoccurring themes, ideologies, and power dynamics. To fully understand the effects of these patterns, put them in their larger context.

There are various advantages of using discourse analysis:

  • Understanding Language: It provides an extensive understanding of how language builds meaning and influences perceptions.
  • Uncovering Power Dynamics: The analysis reveals how power dynamics appear via language.
  • Cultural Insights: This method identifies cultural norms, beliefs, and ideologies stored in communication.

However, the following challenges may arise:

  • Complexity of Interpretation: Language analysis involves navigating multiple levels of nuance and interpretation.
  • Subjectivity: Interpretation can be subjective, so controlling researcher bias is important.
  • Time-Intensive: Discourse analysis can take a lot of time because careful linguistic study is required in this analysis.

Example of Discourse Analysis

Consider doing discourse analysis on media coverage of a political event. You notice repeating linguistic patterns in news articles that depict the event as a conflict between opposing parties. Through deconstruction, you can expose how this framing supports particular ideologies and power relations.

You can illustrate how language choices influence public perceptions and contribute to building the narrative around the event by analyzing the speech within the broader political and social context. This example shows how discourse analysis can reveal hidden power dynamics and cultural influences on communication.

How to do Qualitative Data Analysis with the QuestionPro Research suite?

QuestionPro is a popular survey and research platform that offers tools for collecting and analyzing qualitative and quantitative data. Follow these general steps for conducting qualitative data analysis using the QuestionPro Research Suite:

  • Collect Qualitative Data: Set up your survey to capture qualitative responses. It might involve open-ended questions, text boxes, or comment sections where participants can provide detailed responses.
  • Export Qualitative Responses: Export the responses once you’ve collected qualitative data through your survey. QuestionPro typically allows you to export survey data in various formats, such as Excel or CSV.
  • Prepare Data for Analysis: Review the exported data and clean it if necessary. Remove irrelevant or duplicate entries to ensure your data is ready for analysis.
  • Code and Categorize Responses: Segment and label data, letting new patterns emerge naturally, then develop categories through axial coding to structure the analysis.
  • Identify Themes: Analyze the coded responses to identify recurring themes, patterns, and insights. Look for similarities and differences in participants’ responses.
  • Generate Reports and Visualizations: Utilize the reporting features of QuestionPro to create visualizations, charts, and graphs that help communicate the themes and findings from your qualitative research.
  • Interpret and Draw Conclusions: Interpret the themes and patterns you’ve identified in the qualitative data. Consider how these findings answer your research questions or provide insights into your study topic.
  • Integrate with Quantitative Data (if applicable): If you’re also conducting quantitative research using QuestionPro, consider integrating your qualitative findings with quantitative results to provide a more comprehensive understanding.

Qualitative data analysis is vital in uncovering various human experiences, views, and stories. If you’re ready to transform your research journey and apply the power of qualitative analysis, now is the moment to do it. Book a demo with QuestionPro today and begin your journey of exploration.

LEARN MORE         FREE TRIAL

MORE LIKE THIS

User Behavior Analytics

User Behavior Analytics: What it is, Importance, Uses & Tools

Sep 26, 2024

data security

Data Security: What it is, Types, Risk & Strategies to Follow

Sep 25, 2024

user behavior

User Behavior: What it is, How to Understand, Track & Uses

Sep 24, 2024

what is analysis of data in research

Mass Personalization is not Personalization! — Tuesday CX Thoughts

Other categories.

  • Academic Research
  • Artificial Intelligence
  • Assessments
  • Brand Awareness
  • Case Studies
  • Communities
  • Consumer Insights
  • Customer effort score
  • Customer Engagement
  • Customer Experience
  • Customer Loyalty
  • Customer Research
  • Customer Satisfaction
  • Employee Benefits
  • Employee Engagement
  • Employee Retention
  • Friday Five
  • General Data Protection Regulation
  • Insights Hub
  • Life@QuestionPro
  • Market Research
  • Mobile diaries
  • Mobile Surveys
  • New Features
  • Online Communities
  • Question Types
  • Questionnaire
  • QuestionPro Products
  • Release Notes
  • Research Tools and Apps
  • Revenue at Risk
  • Survey Templates
  • Training Tips
  • Tuesday CX Thoughts (TCXT)
  • Uncategorized
  • What’s Coming Up
  • Workforce Intelligence


is the process of systematically applying statistical and/or logical techniques to describe and illustrate, condense and recap, and evaluate data. According to Shamoo and Resnik (2003) various analytic procedures “provide a way of drawing inductive inferences from data and distinguishing the signal (the phenomenon of interest) from the noise (statistical fluctuations) present in the data”..

While data analysis in qualitative research can include statistical procedures, many times analysis becomes an ongoing iterative process where data is continuously collected and analyzed almost simultaneously. Indeed, researchers generally analyze for patterns in observations through the entire data collection phase (Savenye, Robinson, 2004). The form of the analysis is determined by the specific qualitative approach taken (field study, ethnography content analysis, oral history, biography, research) and the form of the data (field notes, documents, audiotape, videotape).

An essential component of ensuring data integrity is the accurate and appropriate analysis of research findings. Improper statistical analyses distort scientific findings, mislead casual readers (Shepard, 2002), and may negatively influence the public perception of research. Integrity issues are just as relevant to analysis of non-statistical data as well.

Considerations/issues in data analysis

There are a number of issues that researchers should be cognizant of with respect to data analysis. These include:

when analyzing qualitative data

A tacit assumption of investigators is that they have received training sufficient to demonstrate a high standard of research practice. Unintentional ‘scientific misconduct' is likely the result of poor instruction and follow-up. A number of studies suggest this may be the case more often than believed (Nowak, 1994; Silverman, Manson, 2003). For example, Sica found that adequate training of physicians in medical schools in the proper design, implementation and evaluation of clinical trials is “abysmally small” (Sica, cited in Nowak, 1994). Indeed, a single course in biostatistics is the most that is usually offered (Christopher Williams, cited in Nowak, 1994).

A common practice of investigators is to defer the selection of analytic procedure to a research team ‘statistician’. Ideally, investigators should have substantially more than a basic understanding of the rationale for selecting one method of analysis over another. This can allow investigators to better supervise staff who conduct the data analyses process and make informed decisions


While methods of analysis may differ by scientific discipline, the optimal stage for determining appropriate analytic procedures occurs early in the research process and should not be an afterthought. According to Smeeton and Goda (2003), “Statistical advice should be obtained at the stage of initial planning of an investigation so that, for example, the method of sampling and design of questionnaire are appropriate”.

The chief aim of analysis is to distinguish between an event occurring as either reflecting a true effect versus a false one. Any bias occurring in the collection of the data, or selection of method of analysis, will increase the likelihood of drawing a biased inference. Bias can occur when recruitment of study participants falls below minimum number required to demonstrate statistical power or failure to maintain a sufficient follow-up period needed to demonstrate an effect (Altman, 2001).



When failing to demonstrate statistically different levels between treatment groups, investigators may resort to breaking down the analysis to smaller and smaller subgroups in order to find a difference. Although this practice may not inherently be unethical, these analyses should be proposed before beginning the study even if the intent is exploratory in nature. If it the study is exploratory in nature, the investigator should make this explicit so that readers understand that the research is more of a hunting expedition rather than being primarily theory driven. Although a researcher may not have a theory-based hypothesis for testing relationships between previously untested variables, a theory will have to be developed to explain an unanticipated finding. Indeed, in exploratory science, there are no a priori hypotheses therefore there are no hypothetical tests. Although theories can often drive the processes used in the investigation of qualitative studies, many times patterns of behavior or occurrences derived from analyzed data can result in developing new theoretical frameworks rather than determined (Savenye, Robinson, 2004).

It is conceivable that multiple statistical tests could yield a significant finding by chance alone rather than reflecting a true effect. Integrity is compromised if the investigator only reports tests with significant findings, and neglects to mention a large number of tests failing to reach significance. While access to computer-based statistical packages can facilitate application of increasingly complex analytic procedures, inappropriate uses of these packages can result in abuses as well.



Every field of study has developed its accepted practices for data analysis. Resnik (2000) states that it is prudent for investigators to follow these accepted norms. Resnik further states that the norms are ‘…based on two factors:

(1) the nature of the variables used (i.e., quantitative, comparative, or qualitative),

(2) assumptions about the population from which the data are drawn (i.e., random distribution, independence, sample size, etc.). If one uses unconventional norms, it is crucial to clearly state this is being done, and to show how this new and possibly unaccepted method of analysis is being used, as well as how it differs from other more traditional methods. For example, Schroder, Carey, and Vanable (2003) juxtapose their identification of new and powerful data analytic solutions developed to count data in the area of HIV contraction risk with a discussion of the limitations of commonly applied methods.

If one uses unconventional norms, it is crucial to clearly state this is being done, and to show how this new and possibly unaccepted method of analysis is being used, as well as how it differs from other more traditional methods. For example, Schroder, Carey, and Vanable (2003) juxtapose their identification of new and powerful data analytic solutions developed to count data in the area of HIV contraction risk with a discussion of the limitations of commonly applied methods.



While the conventional practice is to establish a standard of acceptability for statistical significance, with certain disciplines, it may also be appropriate to discuss whether attaining statistical significance has a true practical meaning, i.e., . Jeans (1992) defines ‘clinical significance’ as “the potential for research findings to make a real and important difference to clients or clinical practice, to health status or to any other problem identified as a relevant priority for the discipline”.

Kendall and Grove (1988) define clinical significance in terms of what happens when “… troubled and disordered clients are now, after treatment, not distinguishable from a meaningful and representative non-disturbed reference group”. Thompson and Noferi (2002) suggest that readers of counseling literature should expect authors to report either practical or clinical significance indices, or both, within their research reports. Shepard (2003) questions why some authors fail to point out that the magnitude of observed changes may too small to have any clinical or practical significance, “sometimes, a supposed change may be described in some detail, but the investigator fails to disclose that the trend is not statistically significant ”.

No amount of statistical analysis, regardless of the level of the sophistication, will correct poorly defined objective outcome measurements. Whether done unintentionally or by design, this practice increases the likelihood of clouding the interpretation of findings, thus potentially misleading readers.
The basis for this issue is the urgency of reducing the likelihood of statistical error. Common challenges include the exclusion of , filling in missing data, altering or otherwise changing data, data mining, and developing graphical representations of the data (Shamoo, Resnik, 2003).


At times investigators may enhance the impression of a significant finding by determining how to present (as opposed to data in its raw form), which portion of the data is shown, why, how and to whom (Shamoo, Resnik, 2003). Nowak (1994) notes that even experts do not agree in distinguishing between analyzing and massaging data. Shamoo (1989) recommends that investigators maintain a sufficient and accurate paper trail of how data was manipulated for future review.



The integrity of data analysis can be compromised by the environment or context in which data was collected i.e., face-to face interviews vs. focused group. The occurring within a dyadic relationship (interviewer-interviewee) differs from the group dynamic occurring within a focus group because of the number of participants, and how they react to each other’s responses. Since the data collection process could be influenced by the environment/context, researchers should take this into account when conducting data analysis.

Analyses could also be influenced by the method in which data was recorded. For example, research events could be documented by:

a. recording audio and/or video and transcribing later
b. either a researcher or self-administered survey
c. either or
d. preparing ethnographic field notes from a participant/observer
e. requesting that participants themselves take notes, compile and submit them to researchers.

While each methodology employed has rationale and advantages, issues of objectivity and subjectivity may be raised when data is analyzed.

During content analysis, staff researchers or ‘raters’ may use inconsistent strategies in analyzing text material. Some ‘raters’ may analyze comments as a whole while others may prefer to dissect text material by separating words, phrases, clauses, sentences or groups of sentences. Every effort should be made to reduce or eliminate inconsistencies between “raters” so that data integrity is not compromised.

A major challenge to data integrity could occur with the unmonitored supervision of inductive techniques. Content analysis requires raters to assign topics to text material (comments). The threat to integrity may arise when raters have received inconsistent training, or may have received previous training experience(s). Previous experience may affect how raters perceive the material or even perceive the nature of the analyses to be conducted. Thus one rater could assign topics or codes to material that is significantly different from another rater. Strategies to address this would include clearly stating a list of analyses procedures in the protocol manual, consistent training, and routine monitoring of raters.

Researchers performing analysis on either quantitative or qualitative analyses should be aware of challenges to reliability and validity. For example, in the area of content analysis, Gottschalk (1995) identifies three factors that can affect the reliability of analyzed data:

The potential for compromising data integrity arises when researchers cannot consistently demonstrate stability, reproducibility, or accuracy of data analysis

According Gottschalk, (1995), the validity of a content analysis study refers to the correspondence of the categories (the classification that raters’ assigned to text content) to the conclusions, and the generalizability of results to a theory (did the categories support the study’s conclusion, and was the finding adequately robust to support or be applied to a selected theoretical rationale?).



Upon coding text material for content analysis, raters must classify each code into an appropriate category of a cross-reference matrix. Relying on computer software to determine a frequency or word count can lead to inaccuracies. “One may obtain an accurate count of that word's occurrence and frequency, but not have an accurate accounting of the meaning inherent in each particular usage” (Gottschalk, 1995). Further analyses might be appropriate to discover the dimensionality of the data set or identity new meaningful underlying variables.

Whether statistical or non-statistical methods of analyses are used, researchers should be aware of the potential for compromising data integrity. While statistical analysis is typically performed on quantitative data, there are numerous analytic procedures specifically designed for qualitative material including content, thematic, and ethnographic analysis. Regardless of whether one studies quantitative or qualitative phenomena, researchers use a variety of tools to analyze data in order to test hypotheses, discern patterns of behavior, and ultimately answer research questions. Failure to understand or acknowledge data analysis issues presented can compromise data integrity.

References:

Gottschalk, L. A. (1995). Content analysis of verbal behavior: New findings and clinical applications. Hillside, NJ: Lawrence Erlbaum Associates, Inc

Jeans, M. E. (1992). Clinical significance of research: A growing concern. Canadian Journal of Nursing Research, 24, 1-4.

Lefort, S. (1993). The statistical versus clinical significance debate. Image, 25, 57-62.
Kendall, P. C., & Grove, W. (1988). Normative comparisons in therapy outcome. Behavioral Assessment, 10, 147-158.

Nowak, R. (1994). Problems in clinical trials go far beyond misconduct. Science. 264(5165): 1538-41.
Resnik, D. (2000). Statistics, ethics, and research: an agenda for educations and reform. Accountability in Research. 8: 163-88

Schroder, K.E., Carey, M.P., Venable, P.A. (2003). Methodological challenges in research on sexual risk behavior: I. Item content, scaling, and data analytic options. Ann Behav Med, 26(2): 76-103.

Shamoo, A.E., Resnik, B.R. (2003). Responsible Conduct of Research. Oxford University Press.

Shamoo, A.E. (1989). Principles of Research Data Audit. Gordon and Breach, New York.

Shepard, R.J. (2002). Ethics in exercise science research. Sports Med, 32 (3): 169-183.

Silverman, S., Manson, M. (2003). Research on teaching in physical education doctoral dissertations: a detailed investigation of focus, method, and analysis. Journal of Teaching in Physical Education, 22(3): 280-297.

Smeeton, N., Goda, D. (2003). Conducting and presenting social work research: some basic statistical considerations. Br J Soc Work, 33: 567-573.

Thompson, B., Noferi, G. 2002. Statistical, practical, clinical: How many types of significance should be considered in counseling research? Journal of Counseling & Development, 80(4):64-71.

 

Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, generate accurate citations for free.

  • Knowledge Base

Methodology

Research Methods | Definitions, Types, Examples

Research methods are specific procedures for collecting and analyzing data. Developing your research methods is an integral part of your research design . When planning your methods, there are two key decisions you will make.

First, decide how you will collect data . Your methods depend on what type of data you need to answer your research question :

  • Qualitative vs. quantitative : Will your data take the form of words or numbers?
  • Primary vs. secondary : Will you collect original data yourself, or will you use data that has already been collected by someone else?
  • Descriptive vs. experimental : Will you take measurements of something as it is, or will you perform an experiment?

Second, decide how you will analyze the data .

  • For quantitative data, you can use statistical analysis methods to test relationships between variables.
  • For qualitative data, you can use methods such as thematic analysis to interpret patterns and meanings in the data.

Table of contents

Methods for collecting data, examples of data collection methods, methods for analyzing data, examples of data analysis methods, other interesting articles, frequently asked questions about research methods.

Data is the information that you collect for the purposes of answering your research question . The type of data you need depends on the aims of your research.

Qualitative vs. quantitative data

Your choice of qualitative or quantitative data collection depends on the type of knowledge you want to develop.

For questions about ideas, experiences and meanings, or to study something that can’t be described numerically, collect qualitative data .

If you want to develop a more mechanistic understanding of a topic, or your research involves hypothesis testing , collect quantitative data .

Qualitative to broader populations. .
Quantitative .

You can also take a mixed methods approach , where you use both qualitative and quantitative research methods.

Primary vs. secondary research

Primary research is any original data that you collect yourself for the purposes of answering your research question (e.g. through surveys , observations and experiments ). Secondary research is data that has already been collected by other researchers (e.g. in a government census or previous scientific studies).

If you are exploring a novel research question, you’ll probably need to collect primary data . But if you want to synthesize existing knowledge, analyze historical trends, or identify patterns on a large scale, secondary data might be a better choice.

Primary . methods.
Secondary

Descriptive vs. experimental data

In descriptive research , you collect data about your study subject without intervening. The validity of your research will depend on your sampling method .

In experimental research , you systematically intervene in a process and measure the outcome. The validity of your research will depend on your experimental design .

To conduct an experiment, you need to be able to vary your independent variable , precisely measure your dependent variable, and control for confounding variables . If it’s practically and ethically possible, this method is the best choice for answering questions about cause and effect.

Descriptive . .
Experimental

Prevent plagiarism. Run a free check.

Research methods for collecting data
Research method Primary or secondary? Qualitative or quantitative? When to use
Primary Quantitative To test cause-and-effect relationships.
Primary Quantitative To understand general characteristics of a population.
Interview/focus group Primary Qualitative To gain more in-depth understanding of a topic.
Observation Primary Either To understand how something occurs in its natural setting.
Secondary Either To situate your research in an existing body of work, or to evaluate trends within a research topic.
Either Either To gain an in-depth understanding of a specific group or context, or when you don’t have the resources for a large study.

Your data analysis methods will depend on the type of data you collect and how you prepare it for analysis.

Data can often be analyzed both quantitatively and qualitatively. For example, survey responses could be analyzed qualitatively by studying the meanings of responses or quantitatively by studying the frequencies of responses.

Qualitative analysis methods

Qualitative analysis is used to understand words, ideas, and experiences. You can use it to interpret data that was collected:

  • From open-ended surveys and interviews , literature reviews , case studies , ethnographies , and other sources that use text rather than numbers.
  • Using non-probability sampling methods .

Qualitative analysis tends to be quite flexible and relies on the researcher’s judgement, so you have to reflect carefully on your choices and assumptions and be careful to avoid research bias .

Quantitative analysis methods

Quantitative analysis uses numbers and statistics to understand frequencies, averages and correlations (in descriptive studies) or cause-and-effect relationships (in experiments).

You can use quantitative analysis to interpret data that was collected either:

  • During an experiment .
  • Using probability sampling methods .

Because the data is collected and analyzed in a statistically valid way, the results of quantitative analysis can be easily standardized and shared among researchers.

Research methods for analyzing data
Research method Qualitative or quantitative? When to use
Quantitative To analyze data collected in a statistically valid manner (e.g. from experiments, surveys, and observations).
Meta-analysis Quantitative To statistically analyze the results of a large collection of studies.

Can only be applied to studies that collected data in a statistically valid manner.

Qualitative To analyze data collected from interviews, , or textual sources.

To understand general themes in the data and how they are communicated.

Either To analyze large volumes of textual or visual data collected from surveys, literature reviews, or other sources.

Can be quantitative (i.e. frequencies of words) or qualitative (i.e. meanings of words).

Receive feedback on language, structure, and formatting

Professional editors proofread and edit your paper by focusing on:

  • Academic style
  • Vague sentences
  • Style consistency

See an example

what is analysis of data in research

If you want to know more about statistics , methodology , or research bias , make sure to check out some of our other articles with explanations and examples.

  • Chi square test of independence
  • Statistical power
  • Descriptive statistics
  • Degrees of freedom
  • Pearson correlation
  • Null hypothesis
  • Double-blind study
  • Case-control study
  • Research ethics
  • Data collection
  • Hypothesis testing
  • Structured interviews

Research bias

  • Hawthorne effect
  • Unconscious bias
  • Recall bias
  • Halo effect
  • Self-serving bias
  • Information bias

Quantitative research deals with numbers and statistics, while qualitative research deals with words and meanings.

Quantitative methods allow you to systematically measure variables and test hypotheses . Qualitative methods allow you to explore concepts and experiences in more detail.

In mixed methods research , you use both qualitative and quantitative data collection and analysis methods to answer your research question .

A sample is a subset of individuals from a larger population . Sampling means selecting the group that you will actually collect data from in your research. For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

In statistics, sampling allows you to test a hypothesis about the characteristics of a population.

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts and meanings, use qualitative methods .
  • If you want to analyze a large amount of readily-available data, use secondary data. If you want data specific to your purposes with control over how it is generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.

Methodology refers to the overarching strategy and rationale of your research project . It involves studying the methods used in your field and the theories or principles behind them, in order to develop an approach that matches your objectives.

Methods are the specific tools and procedures you use to collect and analyze data (for example, experiments, surveys , and statistical tests ).

In shorter scientific papers, where the aim is to report the findings of a specific study, you might simply describe what you did in a methods section .

In a longer or more complex research project, such as a thesis or dissertation , you will probably include a methodology section , where you explain your approach to answering the research questions and cite relevant sources to support your choice of methods.

Is this article helpful?

Other students also liked, writing strong research questions | criteria & examples.

  • What Is a Research Design | Types, Guide & Examples
  • Data Collection | Definition, Methods & Examples

More interesting articles

  • Between-Subjects Design | Examples, Pros, & Cons
  • Cluster Sampling | A Simple Step-by-Step Guide with Examples
  • Confounding Variables | Definition, Examples & Controls
  • Construct Validity | Definition, Types, & Examples
  • Content Analysis | Guide, Methods & Examples
  • Control Groups and Treatment Groups | Uses & Examples
  • Control Variables | What Are They & Why Do They Matter?
  • Correlation vs. Causation | Difference, Designs & Examples
  • Correlational Research | When & How to Use
  • Critical Discourse Analysis | Definition, Guide & Examples
  • Cross-Sectional Study | Definition, Uses & Examples
  • Descriptive Research | Definition, Types, Methods & Examples
  • Ethical Considerations in Research | Types & Examples
  • Explanatory and Response Variables | Definitions & Examples
  • Explanatory Research | Definition, Guide, & Examples
  • Exploratory Research | Definition, Guide, & Examples
  • External Validity | Definition, Types, Threats & Examples
  • Extraneous Variables | Examples, Types & Controls
  • Guide to Experimental Design | Overview, Steps, & Examples
  • How Do You Incorporate an Interview into a Dissertation? | Tips
  • How to Do Thematic Analysis | Step-by-Step Guide & Examples
  • How to Write a Literature Review | Guide, Examples, & Templates
  • How to Write a Strong Hypothesis | Steps & Examples
  • Inclusion and Exclusion Criteria | Examples & Definition
  • Independent vs. Dependent Variables | Definition & Examples
  • Inductive Reasoning | Types, Examples, Explanation
  • Inductive vs. Deductive Research Approach | Steps & Examples
  • Internal Validity in Research | Definition, Threats, & Examples
  • Internal vs. External Validity | Understanding Differences & Threats
  • Longitudinal Study | Definition, Approaches & Examples
  • Mediator vs. Moderator Variables | Differences & Examples
  • Mixed Methods Research | Definition, Guide & Examples
  • Multistage Sampling | Introductory Guide & Examples
  • Naturalistic Observation | Definition, Guide & Examples
  • Operationalization | A Guide with Examples, Pros & Cons
  • Population vs. Sample | Definitions, Differences & Examples
  • Primary Research | Definition, Types, & Examples
  • Qualitative vs. Quantitative Research | Differences, Examples & Methods
  • Quasi-Experimental Design | Definition, Types & Examples
  • Questionnaire Design | Methods, Question Types & Examples
  • Random Assignment in Experiments | Introduction & Examples
  • Random vs. Systematic Error | Definition & Examples
  • Reliability vs. Validity in Research | Difference, Types and Examples
  • Reproducibility vs Replicability | Difference & Examples
  • Reproducibility vs. Replicability | Difference & Examples
  • Sampling Methods | Types, Techniques & Examples
  • Semi-Structured Interview | Definition, Guide & Examples
  • Simple Random Sampling | Definition, Steps & Examples
  • Single, Double, & Triple Blind Study | Definition & Examples
  • Stratified Sampling | Definition, Guide & Examples
  • Structured Interview | Definition, Guide & Examples
  • Survey Research | Definition, Examples & Methods
  • Systematic Review | Definition, Example, & Guide
  • Systematic Sampling | A Step-by-Step Guide with Examples
  • Textual Analysis | Guide, 3 Approaches & Examples
  • The 4 Types of Reliability in Research | Definitions & Examples
  • The 4 Types of Validity in Research | Definitions & Examples
  • Transcribing an Interview | 5 Steps & Transcription Software
  • Triangulation in Research | Guide, Types, Examples
  • Types of Interviews in Research | Guide & Examples
  • Types of Research Designs Compared | Guide & Examples
  • Types of Variables in Research & Statistics | Examples
  • Unstructured Interview | Definition, Guide & Examples
  • What Is a Case Study? | Definition, Examples & Methods
  • What Is a Case-Control Study? | Definition & Examples
  • What Is a Cohort Study? | Definition & Examples
  • What Is a Conceptual Framework? | Tips & Examples
  • What Is a Controlled Experiment? | Definitions & Examples
  • What Is a Double-Barreled Question?
  • What Is a Focus Group? | Step-by-Step Guide & Examples
  • What Is a Likert Scale? | Guide & Examples
  • What Is a Prospective Cohort Study? | Definition & Examples
  • What Is a Retrospective Cohort Study? | Definition & Examples
  • What Is Action Research? | Definition & Examples
  • What Is an Observational Study? | Guide & Examples
  • What Is Concurrent Validity? | Definition & Examples
  • What Is Content Validity? | Definition & Examples
  • What Is Convenience Sampling? | Definition & Examples
  • What Is Convergent Validity? | Definition & Examples
  • What Is Criterion Validity? | Definition & Examples
  • What Is Data Cleansing? | Definition, Guide & Examples
  • What Is Deductive Reasoning? | Explanation & Examples
  • What Is Discriminant Validity? | Definition & Example
  • What Is Ecological Validity? | Definition & Examples
  • What Is Ethnography? | Definition, Guide & Examples
  • What Is Face Validity? | Guide, Definition & Examples
  • What Is Non-Probability Sampling? | Types & Examples
  • What Is Participant Observation? | Definition & Examples
  • What Is Peer Review? | Types & Examples
  • What Is Predictive Validity? | Examples & Definition
  • What Is Probability Sampling? | Types & Examples
  • What Is Purposive Sampling? | Definition & Examples
  • What Is Qualitative Observation? | Definition & Examples
  • What Is Qualitative Research? | Methods & Examples
  • What Is Quantitative Observation? | Definition & Examples
  • What Is Quantitative Research? | Definition, Uses & Methods

Get unlimited documents corrected

✔ Free APA citation check included ✔ Unlimited document corrections ✔ Specialized in correcting academic texts

Research-Methodology

Data Analysis

Methodology chapter of your dissertation should include discussions about the methods of data analysis. You have to explain in a brief manner how you are going to analyze the primary data you will collect employing the methods explained in this chapter.

There are differences between qualitative data analysis and quantitative data analysis . In qualitative researches using interviews, focus groups, experiments etc. data analysis is going to involve identifying common patterns within the responses and critically analyzing them in order to achieve research aims and objectives.

Data analysis for quantitative studies, on the other hand, involves critical analysis and interpretation of figures and numbers, and attempts to find rationale behind the emergence of main findings. Comparisons of primary research findings to the findings of the literature review are critically important for both types of studies – qualitative and quantitative.

Data analysis methods in the absence of primary data collection can involve discussing common patterns, as well as, controversies within secondary data directly related to the research area.

Data analysis

John Dudovskiy

  • Privacy Policy

Research Method

Home » Data Interpretation – Process, Methods and Questions

Data Interpretation – Process, Methods and Questions

Table of Contents

Data Interpretation

Data Interpretation

Definition :

Data interpretation refers to the process of making sense of data by analyzing and drawing conclusions from it. It involves examining data in order to identify patterns, relationships, and trends that can help explain the underlying phenomena being studied. Data interpretation can be used to make informed decisions and solve problems across a wide range of fields, including business, science, and social sciences.

Data Interpretation Process

Here are the steps involved in the data interpretation process:

  • Define the research question : The first step in data interpretation is to clearly define the research question. This will help you to focus your analysis and ensure that you are interpreting the data in a way that is relevant to your research objectives.
  • Collect the data: The next step is to collect the data. This can be done through a variety of methods such as surveys, interviews, observation, or secondary data sources.
  • Clean and organize the data : Once the data has been collected, it is important to clean and organize it. This involves checking for errors, inconsistencies, and missing data. Data cleaning can be a time-consuming process, but it is essential to ensure that the data is accurate and reliable.
  • Analyze the data: The next step is to analyze the data. This can involve using statistical software or other tools to calculate summary statistics, create graphs and charts, and identify patterns in the data.
  • Interpret the results: Once the data has been analyzed, it is important to interpret the results. This involves looking for patterns, trends, and relationships in the data. It also involves drawing conclusions based on the results of the analysis.
  • Communicate the findings : The final step is to communicate the findings. This can involve creating reports, presentations, or visualizations that summarize the key findings of the analysis. It is important to communicate the findings in a way that is clear and concise, and that is tailored to the audience’s needs.

Types of Data Interpretation

There are various types of data interpretation techniques used for analyzing and making sense of data. Here are some of the most common types:

Descriptive Interpretation

This type of interpretation involves summarizing and describing the key features of the data. This can involve calculating measures of central tendency (such as mean, median, and mode), measures of dispersion (such as range, variance, and standard deviation), and creating visualizations such as histograms, box plots, and scatterplots.

Inferential Interpretation

This type of interpretation involves making inferences about a larger population based on a sample of the data. This can involve hypothesis testing, where you test a hypothesis about a population parameter using sample data, or confidence interval estimation, where you estimate a range of values for a population parameter based on sample data.

Predictive Interpretation

This type of interpretation involves using data to make predictions about future outcomes. This can involve building predictive models using statistical techniques such as regression analysis, time-series analysis, or machine learning algorithms.

Exploratory Interpretation

This type of interpretation involves exploring the data to identify patterns and relationships that were not previously known. This can involve data mining techniques such as clustering analysis, principal component analysis, or association rule mining.

Causal Interpretation

This type of interpretation involves identifying causal relationships between variables in the data. This can involve experimental designs, such as randomized controlled trials, or observational studies, such as regression analysis or propensity score matching.

Data Interpretation Methods

There are various methods for data interpretation that can be used to analyze and make sense of data. Here are some of the most common methods:

Statistical Analysis

This method involves using statistical techniques to analyze the data. Statistical analysis can involve descriptive statistics (such as measures of central tendency and dispersion), inferential statistics (such as hypothesis testing and confidence interval estimation), and predictive modeling (such as regression analysis and time-series analysis).

Data Visualization

This method involves using visual representations of the data to identify patterns and trends. Data visualization can involve creating charts, graphs, and other visualizations, such as heat maps or scatterplots.

Text Analysis

This method involves analyzing text data, such as survey responses or social media posts, to identify patterns and themes. Text analysis can involve techniques such as sentiment analysis, topic modeling, and natural language processing.

Machine Learning

This method involves using algorithms to identify patterns in the data and make predictions or classifications. Machine learning can involve techniques such as decision trees, neural networks, and random forests.

Qualitative Analysis

This method involves analyzing non-numeric data, such as interviews or focus group discussions, to identify themes and patterns. Qualitative analysis can involve techniques such as content analysis, grounded theory, and narrative analysis.

Geospatial Analysis

This method involves analyzing spatial data, such as maps or GPS coordinates, to identify patterns and relationships. Geospatial analysis can involve techniques such as spatial autocorrelation, hot spot analysis, and clustering.

Applications of Data Interpretation

Data interpretation has a wide range of applications across different fields, including business, healthcare, education, social sciences, and more. Here are some examples of how data interpretation is used in different applications:

  • Business : Data interpretation is widely used in business to inform decision-making, identify market trends, and optimize operations. For example, businesses may analyze sales data to identify the most popular products or customer demographics, or use predictive modeling to forecast demand and adjust pricing accordingly.
  • Healthcare : Data interpretation is critical in healthcare for identifying disease patterns, evaluating treatment effectiveness, and improving patient outcomes. For example, healthcare providers may use electronic health records to analyze patient data and identify risk factors for certain diseases or conditions.
  • Education : Data interpretation is used in education to assess student performance, identify areas for improvement, and evaluate the effectiveness of instructional methods. For example, schools may analyze test scores to identify students who are struggling and provide targeted interventions to improve their performance.
  • Social sciences : Data interpretation is used in social sciences to understand human behavior, attitudes, and perceptions. For example, researchers may analyze survey data to identify patterns in public opinion or use qualitative analysis to understand the experiences of marginalized communities.
  • Sports : Data interpretation is increasingly used in sports to inform strategy and improve performance. For example, coaches may analyze performance data to identify areas for improvement or use predictive modeling to assess the likelihood of injuries or other risks.

When to use Data Interpretation

Data interpretation is used to make sense of complex data and to draw conclusions from it. It is particularly useful when working with large datasets or when trying to identify patterns or trends in the data. Data interpretation can be used in a variety of settings, including scientific research, business analysis, and public policy.

In scientific research, data interpretation is often used to draw conclusions from experiments or studies. Researchers use statistical analysis and data visualization techniques to interpret their data and to identify patterns or relationships between variables. This can help them to understand the underlying mechanisms of their research and to develop new hypotheses.

In business analysis, data interpretation is used to analyze market trends and consumer behavior. Companies can use data interpretation to identify patterns in customer buying habits, to understand market trends, and to develop marketing strategies that target specific customer segments.

In public policy, data interpretation is used to inform decision-making and to evaluate the effectiveness of policies and programs. Governments and other organizations use data interpretation to track the impact of policies and programs over time, to identify areas where improvements are needed, and to develop evidence-based policy recommendations.

In general, data interpretation is useful whenever large amounts of data need to be analyzed and understood in order to make informed decisions.

Data Interpretation Examples

Here are some real-time examples of data interpretation:

  • Social media analytics : Social media platforms generate vast amounts of data every second, and businesses can use this data to analyze customer behavior, track sentiment, and identify trends. Data interpretation in social media analytics involves analyzing data in real-time to identify patterns and trends that can help businesses make informed decisions about marketing strategies and customer engagement.
  • Healthcare analytics: Healthcare organizations use data interpretation to analyze patient data, track outcomes, and identify areas where improvements are needed. Real-time data interpretation can help healthcare providers make quick decisions about patient care, such as identifying patients who are at risk of developing complications or adverse events.
  • Financial analysis: Real-time data interpretation is essential for financial analysis, where traders and analysts need to make quick decisions based on changing market conditions. Financial analysts use data interpretation to track market trends, identify opportunities for investment, and develop trading strategies.
  • Environmental monitoring : Real-time data interpretation is important for environmental monitoring, where data is collected from various sources such as satellites, sensors, and weather stations. Data interpretation helps to identify patterns and trends that can help predict natural disasters, track changes in the environment, and inform decision-making about environmental policies.
  • Traffic management: Real-time data interpretation is used for traffic management, where traffic sensors collect data on traffic flow, congestion, and accidents. Data interpretation helps to identify areas where traffic congestion is high, and helps traffic management authorities make decisions about road maintenance, traffic signal timing, and other strategies to improve traffic flow.

Data Interpretation Questions

Data Interpretation Questions samples:

  • Medical : What is the correlation between a patient’s age and their risk of developing a certain disease?
  • Environmental Science: What is the trend in the concentration of a certain pollutant in a particular body of water over the past 10 years?
  • Finance : What is the correlation between a company’s stock price and its quarterly revenue?
  • Education : What is the trend in graduation rates for a particular high school over the past 5 years?
  • Marketing : What is the correlation between a company’s advertising budget and its sales revenue?
  • Sports : What is the trend in the number of home runs hit by a particular baseball player over the past 3 seasons?
  • Social Science: What is the correlation between a person’s level of education and their income level?

In order to answer these questions, you would need to analyze and interpret the data using statistical methods, graphs, and other visualization tools.

Purpose of Data Interpretation

The purpose of data interpretation is to make sense of complex data by analyzing and drawing insights from it. The process of data interpretation involves identifying patterns and trends, making comparisons, and drawing conclusions based on the data. The ultimate goal of data interpretation is to use the insights gained from the analysis to inform decision-making.

Data interpretation is important because it allows individuals and organizations to:

  • Understand complex data : Data interpretation helps individuals and organizations to make sense of complex data sets that would otherwise be difficult to understand.
  • Identify patterns and trends : Data interpretation helps to identify patterns and trends in data, which can reveal important insights about the underlying processes and relationships.
  • Make informed decisions: Data interpretation provides individuals and organizations with the information they need to make informed decisions based on the insights gained from the data analysis.
  • Evaluate performance : Data interpretation helps individuals and organizations to evaluate their performance over time and to identify areas where improvements can be made.
  • Communicate findings: Data interpretation allows individuals and organizations to communicate their findings to others in a clear and concise manner, which is essential for informing stakeholders and making changes based on the insights gained from the analysis.

Characteristics of Data Interpretation

Here are some characteristics of data interpretation:

  • Contextual : Data interpretation is always contextual, meaning that the interpretation of data is dependent on the context in which it is analyzed. The same data may have different meanings depending on the context in which it is analyzed.
  • Iterative : Data interpretation is an iterative process, meaning that it often involves multiple rounds of analysis and refinement as more data becomes available or as new insights are gained from the analysis.
  • Subjective : Data interpretation is often subjective, as it involves the interpretation of data by individuals who may have different perspectives and biases. It is important to acknowledge and address these biases when interpreting data.
  • Analytical : Data interpretation involves the use of analytical tools and techniques to analyze and draw insights from data. These may include statistical analysis, data visualization, and other data analysis methods.
  • Evidence-based : Data interpretation is evidence-based, meaning that it is based on the data and the insights gained from the analysis. It is important to ensure that the data used in the analysis is accurate, relevant, and reliable.
  • Actionable : Data interpretation is actionable, meaning that it provides insights that can be used to inform decision-making and to drive action. The ultimate goal of data interpretation is to use the insights gained from the analysis to improve performance or to achieve specific goals.

Advantages of Data Interpretation

Data interpretation has several advantages, including:

  • Improved decision-making: Data interpretation provides insights that can be used to inform decision-making. By analyzing data and drawing insights from it, individuals and organizations can make informed decisions based on evidence rather than intuition.
  • Identification of patterns and trends: Data interpretation helps to identify patterns and trends in data, which can reveal important insights about the underlying processes and relationships. This information can be used to improve performance or to achieve specific goals.
  • Evaluation of performance: Data interpretation helps individuals and organizations to evaluate their performance over time and to identify areas where improvements can be made. By analyzing data, organizations can identify strengths and weaknesses and make changes to improve their performance.
  • Communication of findings: Data interpretation allows individuals and organizations to communicate their findings to others in a clear and concise manner, which is essential for informing stakeholders and making changes based on the insights gained from the analysis.
  • Better resource allocation: Data interpretation can help organizations allocate resources more efficiently by identifying areas where resources are needed most. By analyzing data, organizations can identify areas where resources are being underutilized or where additional resources are needed to improve performance.
  • Improved competitiveness : Data interpretation can give organizations a competitive advantage by providing insights that help to improve performance, reduce costs, or identify new opportunities for growth.

Limitations of Data Interpretation

Data interpretation has some limitations, including:

  • Limited by the quality of data: The quality of data used in data interpretation can greatly impact the accuracy of the insights gained from the analysis. Poor quality data can lead to incorrect conclusions and decisions.
  • Subjectivity: Data interpretation can be subjective, as it involves the interpretation of data by individuals who may have different perspectives and biases. This can lead to different interpretations of the same data.
  • Limited by analytical tools: The analytical tools and techniques used in data interpretation can also limit the accuracy of the insights gained from the analysis. Different analytical tools may yield different results, and some tools may not be suitable for certain types of data.
  • Time-consuming: Data interpretation can be a time-consuming process, particularly for large and complex data sets. This can make it difficult to quickly make decisions based on the insights gained from the analysis.
  • Incomplete data: Data interpretation can be limited by incomplete data sets, which may not provide a complete picture of the situation being analyzed. Incomplete data can lead to incorrect conclusions and decisions.
  • Limited by context: Data interpretation is always contextual, meaning that the interpretation of data is dependent on the context in which it is analyzed. The same data may have different meanings depending on the context in which it is analyzed.

Difference between Data Interpretation and Data Analysis

Data interpretation and data analysis are two different but closely related processes in data-driven decision-making.

Data analysis refers to the process of examining and examining data using statistical and computational methods to derive insights and conclusions from it. It involves cleaning, transforming, and modeling the data to uncover patterns, relationships, and trends that can help in understanding the underlying phenomena.

Data interpretation, on the other hand, refers to the process of making sense of the findings from the data analysis by contextualizing them within the larger problem domain. It involves identifying the key takeaways from the data analysis, assessing their relevance and significance to the problem at hand, and communicating the insights in a clear and actionable manner.

In short, data analysis is about uncovering insights from the data, while data interpretation is about making sense of those insights and translating them into actionable recommendations.

About the author

' src=

Muhammad Hassan

Researcher, Academic Writer, Web developer

You may also like

Implications in Research

Implications in Research – Types, Examples and...

Research Approach

Research Approach – Types Methods and Examples

Delimitations

Delimitations in Research – Types, Examples and...

Background of The Study

Background of The Study – Examples and Writing...

Future Research

Future Research – Thesis Guide

Critical Analysis

Critical Analysis – Types, Examples and Writing...

An official website of the United States government

Here's how you know

Official websites use .gov A .gov website belongs to an official government organization in the United States.

Secure .gov websites use HTTPS. A lock ( Lock Locked padlock ) or https:// means you've safely connected to the .gov website. Share sensitive information only on official, secure websites.

design element

  • Search Awards
  • Recent Awards
  • Presidential and Honorary Awards
  • About Awards
  • How to Manage Your Award
  • Grant General Conditions
  • Cooperative Agreement Conditions
  • Special Conditions
  • Federal Demonstration Partnership
  • Policy Office Website

what is analysis of data in research



August 9, 2017
August 9, 2017
1703281
Standard Grant
Sylvia Spengler
[email protected]
�(703)292-7347
IIS
�Div Of Information & Intelligent Systems
CSE
�Direct For Computer & Info Scie & Enginr
September 1, 2017
August 31, 2021�(Estimated)
$408,000.00
$408,000.00
Suciu
4333 BROOKLYN AVE NE
SEATTLE
WA �US �98195-1016
(206)543-4043
WA �US �98195-2350
Info Integration & Informatics
4900
4900
47.070

what is analysis of data in research

Note:   When clicking on a Digital Object Identifier (DOI) number, you will be taken to an external site maintained by the publisher. Some full text articles may not yet be available without a charge during the embargo (administrative interval). Some links on this page may take you to non-federal websites. Their policies may differ from this site.

This Project Outcomes Report for the General Public is displayed verbatim as submitted by the Principal Investigator (PI) for this award. Any opinions, findings, and conclusions or recommendations expressed in this Report are those of the PI and do not necessarily reflect the views of the National Science Foundation; NSF has not approved or endorsed its content.

The main goal of the project was to develop declarative approaches to enable easy and efficient causal analysis by specifying units, treatment, outcome, and covariates, over the integrated data, and declaring any underlying assumptions required in causal analysis using database view definitions.  Causality is usually studied over independent and uniform units.  However, in many situations, the units are connected by relationships; these relationships can be links in a social network, or author-paper relationships, or employee-employer relationships.  In that case the outcomes for one unit may be affected through its relationships by the outcomes of other units.   Figure 1 illustrates a collection of units and their relationships.  This project has researched techniques and methods to allow causal inference over relational data, by assuming a much simpler relational representation, as suggested  in Figure 2.   The project had three major outcomes.

First, we developed an experimental system, called HypDB, for removing bias (such as Simpson?s paradox) in OLAP queries.  A SQL query, can often be biased and lead to incorrect decisions. HypDB detects, explains, and resolves bias in decision-support queries. We gave a simple definition of a biased query, which performs a set of independence tests on the data to detect bias, proposed a novel technique that explains the bias, thus assisting an analyst in understanding what goes on, and developed an automated method for rewriting a biased query into an unbiased query.

Second, we developed Capuchin, a system that applies database repair techniques, in order to remove bias from training data. Capuchin discovers a causal model in the training data, reasons about how to modify that causal model in order to make it conform to a socially accepted notion of fairness, then repairs the training data by modifying it as little as possible in order to attain the new, corrected causal model. We developed the necessary theory based on causality theory and information theory, proposed two algorithms for database repair, and evaluated the system on several publicly available datasets, showing that, when training on the repaired data, the models will make decisions that are significantly more fair, according to several notions of fairness, yet achieve almost the same accuracy as models trained on the raw data.

Finally, developed a system of causal inference in observational relational data. Although causal inference methods for ?observed data? (not from randomized controlled experiments) have been studied in Statistics and Artificial Intelligence, they rely on the critical assumption that the units of study are sampled from a population of homogeneous units; in other words, the data can be represented in a single flat table. However, many real-world data are available in ?relational? form in multiple related tables. Basic notions used in causal analysis, such as ?units?, no longer readily apply.  For example the treatment may be applied to one table whereas the outcome may be observed in a different table.  This makes causal inference in relational data more challenging. For this purpose, we developed a declarative framework, called CaRL (Causal Relational Langauge).  The framework includes a declarative language to represent causal background knowledge and assumptions, a semantics for complex causal queries, and an algorithm for answering causal queries from the given relational data.

Last Modified: 11/18/2021 Modified by: Dan Suciu

Fig 1: A grounded causal DAG

Please report errors in award information by writing to: [email protected] .

Briefing notes for researchers - public involvement in NHS, health and social care research

  • Published: 05 April 2021
  • Version: V 1.0 - April 2021
  • 41 min read

Briefing note one: introduction

This guidance is for researchers new to public involvement in research and just starting to consider how best to involve members of the public in their work. It will also be of interest to researchers with experience of public involvement in research who are looking to update and refresh their knowledge and skills. And it will be helpful to others interested in public involvement in research, including involvement leads. This guidance explains the different ways that members of the public are involved in research. It will help you to plan, resource and support public involvement in research. You can find more involvement resources and guidance on Learning for Involvement .

Briefing note two: what is public involvement in research?

Definitions of involvement, engagement and participation.

Researchers and others use a variety of words to describe ways of interacting with the public. The terms involvement, engagement and participation are sometimes used interchangeably but the NIHR ascribes specific meanings to these terms as follows:

Involvement

NIHR defines public involvement in research as research being carried out ‘with’ or ‘by’ members of the public rather than ‘to’, ‘about’ or ‘for’ them.  It is an active partnership between patients, carers and members of the public with researchers that influences and shapes research. When using the term ‘public’ we include patients, potential patients, carers and people who use health and social care services as well as people from specific communities and from organisations that represent people who use services. Also included are people with lived experience of one or more health conditions, whether they’re current patients or not. Here are examples of how members of the public might get involved in research:

  • as  joint grant holders or co-applicants on a research project
  • identifying  research priorities
  • as members of a project advisory or steering group
  • commenting on and developing patient information leaflets or other research materials
  • undertaking interviews with research participants
  • carrying out research as user and/or carer researchers

Public involvement as defined here is also sometimes referred to as Patient and Public Involvement (PPI). More information about approaches to public involvement can be found in section seven.

Where information and knowledge about research is provided and disseminated. Examples of engagement are:

  • science festivals open to the public with debates and discussions on research open days at a research centre where members of the public are invited to find out about research raising
  • awareness of research through media such as television programmes, newspapers and social media
  • dissemination to research participants, colleagues or members of the public on the findings of a study.

Find out more about engagement through the National Coordinating Centre for Public Engagement.

Participation

  • Where people take part in a research study. Examples of participation are:
  • people being recruited to a clinical trial or other research study(eg to test the efficacy of a new treatment)
  • completing a questionnaire or participating in a focus group as part of a research study

Find out more about how members of the public participate in NIHR research on Be Part of Research. These different activities – involvement, engagement and participation – are often linked and, although they are distinct, they can complement each other.

Briefing note three: why involve members of the public in research?

Public involvement can improve the quality and relevance of research, as well as serving the broader democratic principles of citizenship, accountability and transparency. For these reasons, public involvement is increasingly an expectation if not a requirement of research funders. Research teams do not always clearly articulate their rationale for public involvement. A clear rationale helps researchers be more specific about who they want to involve, and in what ways.

Democratic principles

People who are affected by research have a right to have a say in it. Public involvement in research is an intrinsic part of citizenship, public accountability and transparency. It can also help empower people who use health and social care services, by providing the opportunity to influence research that is relevant to them.

Providing a different perspective

Members of the public might have personal knowledge and experience relevant to your research topic or be able to provide a more general societal perspective.  They might have lived experience of one or more health conditions or of using services. Members of the public will also bring their experiences of being part of specific communities or groups.

Improving the quality of the research

Involving members of the public helps ensure that research focuses on outcomes that are important to the public.  It is also a good way of tailoring your research to the needs of specific communities.  For example, if you are seeking research participants from a particular community, involving members of that community in shaping the study design can help improve recruitment and retention of participants by ensuring that your ways of communicating and your research methods are sensitive to the needs, customs and circumstances of the community.

Making the research more relevant

Public involvement can make research more relevant for example through:

  • identifying a wider set of research topics than would have been generated just by  academics and health or social care professionals suggesting ideas for new research
  • ensuring research is focused on what matters to people - for example by prioritising symptoms that are of importance to patients with a particular condition
  • helping to reshape and clarify the research

Interests of research funders and research organisations

Several funding bodies, as well as research ethics committees, ask grant applicants about their plans for public involvement with an expectation that if they are not involving members of the public in the research then they need to have clear reasons for this and be able to explain them. For example, NIHR has a standard application form used by all research programmes. One of the sections on the form asks applicants to describe how they have involved the public in the design and planning of their study as well as their plans for further involvement throughout the research, including plans for evaluating impact. Applicants are also asked to provide details of the budget they have allocated for public involvement in their research. Responses to these questions will be considered by the reviewers, research panels and boards (which increasingly include members of the public) and will influence funding decisions.

The Research Ethics Service will ask about the plans for public involvement in your research if you apply for ethical approval, and it will be part of their assessment process. They expect the involvement of the public in research, as it can help ensure that research planned is ethical, relevant and acceptable from a public perspective. 

Briefing note four: why members of the public get involved in research

Members of the public get involved in research for a variety of personal and social reasons. For some, these are linked to personal experiences of health or social care services and a desire to bring about change in the quality of care or to improve treatments either for themselves or for others with a similar condition. For others it might be a way to have a ‘voice’ and influence the processes that affect people’s lives. Some people have had difficult experiences and appreciate being able to do something positive with that experience. Others have had very good experiences, and see their involvement as an opportunity to ‘give something back’. Other people get involved to ensure the voice of their community/communities are represented in research. Well planned and resourced involvement in research can also be valuable to those involved by increasing their confidence and knowledge and helping them to develop new skills. healthtalk.org has a series of interesting videos where various members of the public talk about their reasons for getting involved in research, including helping others and medical science and also for personal benefits .

Briefing note five: how to involve members of the public in research

Uk standards for public involvement.

A good place to start when planning how to involve members of the public in research is the UK Standards for Public Involvement . Developed over three years by a UK-wide partnership , the Standards are a description of what good public involvement looks like. They encourage approaches and behaviours that are the hallmark of good public involvement such as flexibility, sharing and learning, and mutual respect . The Standards are for everyone doing health or social care research, and provide guidance and reassurance for practitioners working towards achieving their own best practice. The six Standards are:

  • inclusive opportunities – offer public involvement opportunities that are accessible and that reach people and groups according to research needs
  • working together – work together in a way that values all contributions, and that builds and sustains mutually respectful and productive relationships
  • support and learning – offer and promote support and learning opportunities that build confidence and skills for public involvement in research
  • governance – involve the public in research management, regulation, leadership and decision making.
  • communication – use plain language for well-timed and relevant communications, as part of involvement plans and activities
  • impact – Seek improvement by identifying and sharing the difference that public involvement makes to research

Below we provide more information on each Standard. The Standards were tested by over 40 individuals, groups and organisations during a year-long pilot programme, and you can find out more about these ‘test bed’ projects and how they went about implementing the UK Standards for Public Involvement in the Implementation Stories report .

Inclusive opportunities

Offer public involvement opportunities that are accessible and that reach people and groups according to research needs. Research is to be informed by a diversity of public experience and insight, so that it leads to treatments and services which reflect these needs. The questions below may help you reflect on and decide if you meet the Standard:

  • are people affected by and interested in the research involved from the earliest stages?
  • have barriers to involvement, such as payment for time or accessible locations for meetings been identified and addressed?
  • how is information about opportunities shared, and does it appeal to different communities?
  • are there fair and transparent processes for involving the public in research, and do they reflect equality and diversity duties?
  • is there choice and flexibility in opportunities offered to the public?

The NIHR is committed to actively and openly supporting and promoting equality, diversity and inclusion, and this also applies to how we do public involvement . As such, this Standard is crucial for NIHR, and the research we fund.

The NIHR’s Being Inclusive in Public Involvement in Health Research guidance provides key tips for researchers and practitioners:

  • check your power - try to understand power relationships within your context and your role, and how to  promote inclusion
  • value the people you work with - people will bring different perspectives and opinions
  • use language carefully - avoid jargon, write in plain English and be considerate
  • consider inclusive locations - decide together on the best places for meetings
  • listen and seek agreement
  • get from A to B, perhaps via Z - be flexible
  • collaborate - work with community organisations
  • invest in the workplace
  • commit to a relationship
  • evidence, evaluate, share and reflect
  • act small, think big - support people, researchers and members of the public to develop confidence, learning and skills

Find out more about EDI and accessibility:

  • Tips for researchers involving unpaid carers in health and care research
  • Being Inclusive in Public Involvement in Health Research guidance
  • Different experiences: a framework for considering who might be involved in research guidance
  • government guidelines on writing about ethnicity
  • the East Midlands Academic Health Science Network PPI webpages have a number of top tips leaflets for engaging with various communities
  • NIHR Plain English Summaries guidance

Working together

Work together in a way that values all contributions, and that builds and sustains mutually respectful and productive relationships. Public involvement in research is better when people work together towards a common purpose, and different perspectives are respected. The questions below may help you reflect on and decide if you meet the Standard:

  • has the purpose of public involvement been jointly defined and recorded? 
  • have the practical requirements and arrangements for working together been addressed?
  • have all the potential different ways of working together been explored, and have these plans and activities been developed together?
  • is there a shared understanding of roles, responsibilities and expectations of public involvement?
  • have individuals’ influence, ideas and contributions been recognised and addressed?

Support and learning

We offer and promote support and learning that builds confidence and skills for public involvement in research. We seek to remove practical and social barriers that stop members of the public and research professionals from making the most of public involvement in research. The questions below may help you reflect on and decide if you meet the Standard:

  • is there a range of support to address identified needs?
  • have specific resources been designated to support learning and development opportunities for both the public, researchers, and staff?
  • do the public know where to go for information and support about public involvement?
  • is there a culture of learning- by- doing, building on and sharing that learning for researchers, staff and the public?

Different types of training might be:

  • sharing knowledge and experiences with colleagues and peers ‘on the job’ training
  • attending training sessions or courses

Support might take the form of:

  • support from other team members
  • a mentor with similar experience
  • team meetings or one-to-one meetings with line managers
  • IT support for remote working
  • informal or formal mechanisms of peer support

Find out more about guidance, training and support on Learning for Involvement .

Involve the public in research management, regulation, leadership and decision making. Public involvement in research governance can help research be more transparent and gain public trust. The questions below may help you reflect on and decide if you meet the Standard:

  • are public voices heard, valued and respected in decision making?
  • are public involvement plans in place that are regularly monitored, reviewed and reported on?
  • is there visible and accountable responsibility for public involvement throughout the organisation?
  • are realistic resources (including money, staff, time) allocated for public involvement?
  • is the privacy of personal information protected by collecting and using it in a suitable way?

Communications

Use plain language for well-timed and relevant communications, as part of involvement plans and activities. Communicate with a wider audience about public involvement and research, using a broad range of approaches that are accessible and appealing. The questions below may help you reflect on and decide if you meet the Standard:

  • has a communications plan been developed for involvement activities?
  • are the needs of different people being met through inclusive and flexible communication methods?
  • are processes in place to offer, gather, act on and share feedback with the public?
  • are you sharing your public involvement learning and achievements, good and bad?

Be clear with the people you want to involve. It is important that both you, as a researcher, and the people you involve have a shared and clear understanding of what they are being invited to do. Explain why you are asking people to get involved, and agree the aims of the research. It can be helpful to develop a role description and/or terms of reference so people know what is expected of them, and what they can expect from you.

Seek improvement by identifying and sharing the difference that public involvement makes to research. Understand the changes, benefits and learning gained from the insights and experiences of patients, carers and the public. The questions below may help you reflect on and decide if you meet the Standard:

  • are the public involved in deciding what the assessment of impact should focus on, and the approach to take?
  • is it clear what information you will collect to help assess impact, including who has been involved and how?
  • are there processes in place to help reflect on public involvement?
  • is there a commitment to learn from the public involvement experience and, where possible, to act on this learning?

Resource public involvement in research

To achieve good quality public involvement, as set out by the UK Public Involvement Standards, it is crucial to consider how to resource public involvement both in terms of budget and the additional time required to involve the public in your research. Some research funders, such as NIHR, will actively encourage and expect public involvement to be adequately costed in research proposals. It can be challenging to obtain funding for public involvement later if it has not been built into your research grant application. Costs will differ depending on how you are involving people in your research and who you are involving. You can find in-depth information on budgeting for involvement in the NIHR’s Payment Guidance for researchers and professionals .

Do I need ethical approval for public involvement?

Ethical approval is not needed where people are involved in planning or advising on research, for example as a co-applicant on a research grant, a member of an advisory group or in developing a questionnaire.  For more information, the following resources are recommended: Qualitative research and patient and public involvement in health and social care research: What are the key differences?

Clarify your organisational responsibilities

It is important that you liaise well in advance with the relevant departments within your organisation such as finance and human resources (HR). Explain how you plan to involve members of the public in your research. This will ensure that you are aware and informed of local arrangements for involving members of the public and any issues that finance or HR may raise about the proposed arrangements. If there are any concerns raised by these departments you will have time to address them early on. Examples of issues you might need to consider are:

  • payment and expenses policies
  • methods for paying people (including tax and national insurance deductions)
  • confidentiality agreements –(where these apply,  all members of the research group should be asked to sign , not just the members of the public
  • health and safety
  • honorary contracts
  • Disclosure and Barring Service (DBS) checks

We recommend that you:

  • have a lead for public involvement in your team so there is a point of contact for all involved
  • talk to others within your organisation who have involved people in their research
  • make sure that people involved know that they can stop being involved at any time
  • consider the emotional support you might need to offer people, if the research is sensitive or they find some of the information distressing, and where to access this support
  • if asking your own patients or their  carers to work with you on a study consider if this is appropriate and how this might affect your relationship with them
  • where appropriate, discuss in advance what will happen should people become ill for periods of the research discuss in advance with the people you have involved whether and in what ways they might want to stay involved after the project

Briefing note six: who should I involve and how do I find people to involve?

Who should i involve.

In deciding who best to involve it is important to think about the knowledge and perspectives that you are looking for from members of the public, and what support you are able to give to people who you plan to involve. You might also want to consider involving carers in your research, as they provide a unique and valuable perspective. We have produced some tips to help you involve unpaid carers in your research project . Key points to consider when deciding who might be involved in research,  are set out in the guidance note: Different Experiences: A framework for considering who might be involved in research guidance and include the following:

  • be clear about the purpose of involving people in research and what experiences and knowledge they can provide
  • include a diversity of relevant views and perspectives
  • take a common sense approach to who you involve
  • people can wear several hats, and their contributions may be broader because of this range of experiences
  • the type of lived experience required will vary depending on the focus of the research

For most studies it is not appropriate for people involved in the research also to be participants in the research as that can compromise both the researcher and the person involved. The possible exceptions to this may include some participatory/action research studies where the participants of the research may also be acting as co-researchers, influencing the shape of the study as it progresses.

Involve more than one person

Involving more than one person allows you to include different people at different stages of the research, and provides them more choices in  how they are involved. It will also help you to:

  • increase the breadth of experience and skills brought to the project
  • provide an opportunity for those involved to support and encourage each other
  • make the overall public presence more resilient to individual absences resulting from illness or clashing commitments Include a diversity of relevant views and perspectives

Consider the broad and different views and experiences you will need to include in your research. Try to ensure that you have a variety of perspectives so you get different viewpoints and allow time for those who have the skills you require for the project to network with others. You can read the Being Inclusive in Public Involvement in Health Research guidance for further information, and refer to the ‘inclusive opportunities’ Standard for public involvement .

How do I find people to involve?

Once you have considered who you would like to involve, you then need to think about how to make contact with them. Speak with colleagues and members of the public and ask for their views on how to find the people you want to involve. Allow time to make contact with organisations and individuals as finding people will nearly always take longer than you think.  Rather than inviting people in to talk to you, go out and engage with communities or groups where they already meet, whether that be in physical spaces or online forums. Others have contacted people by:

for specific communities, getting in touch with relevant community groups, community and faith leaders, or other individuals who can help you engage with specific groups of people

  • asking community members or patients about people who might be interested in getting involved
  • advertising in GP surgeries, outpatient departments, local newspapers and on the radio
  • talking to local or national patient support groups and voluntary organisations
  • searching online for relevant organisations
  • using social media such as Facebook or Twitter
  • talking to other health and social care professionals such as community development workers, social workers, health visitors, GP practice managers
  • contacting the Patient Advice and Liaison Service (PALS) Officer based at your local NHS Trust
  • putting details of the opportunity for people to get involved on the People in Research website so interested members of the public can make contact

Spend time developing networks and building relationships. This might involve visiting organisations to hear about what they do and talking to them about your research. Many researchers develop long term relationships with organisations and individuals who continue for many years past the involvement in the first research project. Be aware that some people or organisations might choose not to get involved in your work. This might be for a variety of reasons but it might be because their aims do not match yours, they do not have the time, or because the practical costs of either getting involved or finding somebody to get involved in your work is too great. Some organisations charge when they are asked to find people to get involved in research activities.   Find out more about how to find people to involve:

  • watch this Youtube video from Caroline Barker, from the University Hospital Southampton NHS Foundation Trust, on Reaching new public members in a virtual world
  • People in Research is a national platform to help members of the public find opportunities to get involved in research and for research organisations and researchers to advertise involvement opportunities
  • Vocal creates opportunities for people to find out about, and have a voice in, health research in Greater Manchester
  • Patient has a comprehensive list of national and local support groups and organisations

Briefing note seven: approaches to public involvement in research

We use the terms consultation, collaboration, co-production and user-controlled describe different broad approaches to involving people in research, associated with progressively increasing levels of power and influence for members of the public. In practice, research projects can include a combination of these four and boundaries between them are not clear cut. Moreover, each approach encompasses a range of specific methods for involving people. For example, you might work with one or two service users or carers as collaborators throughout a research project, as well as consulting with a wider group of service users on a specific aspect of the study, while some members of the public might lead on one stage of the research. How you involve people will depend on the nature of your research, as well as the different activities people decide they would like to get involved in. If it is the first time that you have involved people in research consider where public involvement will be most effective in your research. This might be in a relatively modest way at first, perhaps in just one activity or at one stage of the project. Build on the skills and experience you develop in future projects

Consultation

Consultation is when you ask members of the public for their views and use these views to inform your decision making. Consultation can be about any aspect of the research process – from identifying topics for research through to thinking about the implications of research findings. You might, for example, hold one-off meetings to ask people’s views on the importance of a study and areas that it is important to measure within the study. Research funders may consult members of the public by asking for their views on research grant applications. If this is the first time you have involved people in your research, it can be a good starting point. It can also be a way of getting the views of a larger group of people. However, think carefully about what you are asking and what you will do with the information. Be clear about these aims with the people you involve. You might find that people are frustrated at being asked for their views without a commitment from you that you will act on them. There is a danger of ‘consultation fatigue’ for individuals and organisations who have been consulted before and think that their views have been ignored. If you decide to consult people on your research we recommend that you:

  • give them enough time to respond
  • feedback on the actions you have taken as a result of the consultation
  • ask if they would like to hear about the findings of the research

Benefits of consultation:

  • it enables you to find out people’s views
  • it can be useful when exploring sensitive and difficult issues
  • you can get a wide range of views
  • you can involve people in discussion and debate

Challenges of consultation:

  • you might not get the broad views you hope for
  • people might have previous bad experiences of consultation where their views were not listened to
  • as the consultation is framed by your own concerns and questions, you might not get the level of insight from consultees that could emerge in
  • a more two-way process
  • you might require an experienced facilitator

Working more closely with members of the public, returning to ask them for further information, and developing an ongoing relationship with them, will take you towards collaboration.

Collaboration

Collaboration involves an ongoing partnership between you and the members of the public you are working with, where decisions about the research are shared. For example, members of the public might collaborate with the researchers on developing the research grant application, be members of the study advisory group and collaborate with researchers to disseminate the results of a research project. This is a broad approach that can be used in a wide range of different research activities and at many different stages of the research project. Collaboration requires commitment, openness and flexibility and it is important to plan and prepare adequately (see briefing note five on planning and preparation). Benefits of collaboration:

  • helps to ensure research remains focused and relevant
  • skills and perspectives of the public and the researchers can complement one another
  • helps to ensure the research is ethical
  • can help with recruitment and informed consent

Challenges to collaboration:

  • time-consuming and involves additional cost
  • researchers and the public may require training or support
  • researchers need to be flexible and willing to share the control of the research

Co-production

Co-producing a research project is an approach in which researchers, practitioners and the public work together, sharing power and responsibility from the start to the end of the project, including the generation of knowledge. The assumption is that those for whom the  research is most relevant  are best placed to design and deliver it in partnership with the professionals, and have skills, knowledge and experience of equal importance. There is an overlap with the collaboration approach but what defines co-production is the values and principles that underpin it, rather than any specific tools or techniques. Indeed, there is no single formula or method for co-production and such an approach would be counter to the innovation and flexibility that is implicit in co-produced research. Rather, co-production requires that relationships are valued and nurtured, that efforts are made to redress power differentials, and that people are supported and enabled to realise their potential in carrying out their roles and responsibilities in the project. Co-producing research can include partnerships between academia and organisations representing the public as well as individual public members working with organisations, for example universities, which undertake research. The NIHR’s Guidance on co-producing a research project describes key principles and features of co-production. These are summarised below: Key principles:

  • sharing of power – the research is jointly owned and people work together to achieve a joint understanding.
  • including all perspectives and skills – making sure the research team includes all those who can make a contribution.
  • respecting and valuing the knowledge of all those working together on the research – everyone is of equal importance.
  • reciprocity – everybody benefits from working together.
  • building and maintaining relationships – an emphasis on relationships is key to sharing power

Key features:

  • establishing ground rules
  • continuing dialogue
  • joint ownership of key decisions
  • a commitment to relationship building
  • opportunities for personal growth and development
  • flexibility
  • continuous reflection
  • valuing and evaluating the impact of co-producing research

The NIHR’s Guidance on co-producing a research project also presents some challenges that need to be addressed to enable more opportunities for co-producing research. These challenges touch on the topics of power (and power sharing), flexibility in research approaches and practices, the need for more guidance on co-production, and how to assess and evaluate co-produced research. Find out more about co-production:

  • NIHR Guidance on co-producing a research project
  • NIHR Co-Production in Action (Number One)
  • NIHR Co-Production in Action (Number Two)
  • NIHR Co-Production in Action (Number Three)
  • BMJ: ‘Co-production of knowledge: the future’ Collection of articles (2021)
  • the NIHR Research Design Service South East have produced a  podcast series on co-production
  • the Co-Production Collective is a community of patients, carers, researchers, practitioners, students and anyone else who is interested in co-production

User controlled research

User controlled research is research that is actively controlled, directed and managed by service users and their service user organisations. Service users decide on the issues and questions to be looked at, as well as the way the research is designed, planned, written up and disseminated. The service users will run the research advisory or steering group and may also decide to carry out the research. Some service users make no distinction between the terms user controlled and user led, others feel that user led has a different, vaguer meaning. They see user led research as research which is meant to be led and shaped by service users but is not necessarily controlled or undertaken by them. Control in user led research in this case will rest with some other group of non-service users who also have an interest in the research, such as the commissioners of the research, the researchers or people who provide services. Examples of user-controlled research in action have highlighted several key reasons why user-controlled research is important for service users:

  • a commitment to changing or improving the lives of their community of service users
  • frustration with traditional research and services which exclude them
  • frustration with mainstream research in failing to capture their needs or to research things they feel are important

As a researcher, you might get involved in user controlled research in several ways such as:

  • training or supporting a group of service users who are undertaking their own research
  • supporting user controlled research in a specific part of the research
  • a user controlled organisation might commission you to carry out research under their direction

Organising and hosting meetings

Regardless of the approach you take, involving members of the public is likely to entail organising and hosting meetings. How you do this can make a huge difference to how people feel about the research and how much they are able and want to get involved in your work. Holding a meeting is only one of the ways to involve people and you may decide that this is not the best approach for your research. If you do choose meetings, you need to consider whether face-to-face, online, or a combination of the two work best for your research project, and your public contributors.

Planning for meetings: face-to-face

  • explore opportunities for meeting people in their own environment, such as by attending a regular meeting of an organisation or group consider venues that are on neutral ground – venues such as hospitals or local authority departments might be associated with difficult experiences
  • those who are working, have young children or who are carers might need to meet outside office hours
  • make sure that there is parking and public transport nearby
  • it is often better to plan for a mid-morning or early afternoon start to the meeting – this makes it easier for people if they have to travel some distance to the meeting or if they need additional time in the mornings because of their disability or health condition
  • in some situations, you might need to offer overnight accommodation, in which case check if they have any special requirements for an overnight stay
  • make sure meeting places, hotels and facilities are accessible to all those attending, for example if you are inviting a wheelchair user to join your committee, meet in an accessible meeting room with parking nearby and fully accessible facilities
  • where possible, visit the venue in person in advance of the meeting, and ask to be shown around to check its suitability and access to all rooms, dining area, disabled toilets and the stage speaker area (if required)
  • ask people if they have any special dietary requirements and let them know what refreshments you will be providing
  • be mindful of practical matters such as microphones and hearing loops for people with hearing impairments or large print for people with visual impairments
  • think about whether you will need interpreters for people from different ethnic groups or for sign language
  • provide clear information about the meeting, timings and directions for getting to the venue well in advance and in a relevant format
  • ask people if they require information posted to them or if they would like to receive it by email
  • plan and prepare a budget for your meeting
  • consider developing terms of reference and/or ground rules for the meeting so that those attending know why the meeting is being held and the responsibilities of those attending

Planning for meetings: online

The NIHR School for Primary Care Research (SPCR) has developed useful guidance on how to hold a PPI meeting using virtual tools , which details these key tips and tricks:

  • keep it simple - use easy software and tools
  • send documents in advance
  • don’t chair and make minutes simultaneously
  • choose appropriate software - SPCR provide specific tips for Microsoft Teams and Zoom
  • promote the opportunity widely - virtual meetings  give  you the opportunity to involve people not usually involved in research
  • adjust the agenda accordingly - take into consideration more frequent breaks
  • have a backup plan if technology fails
  • provide support before and during the meeting

There is a wide range of guidance on good practice for online meetings for involvement. Here are some of the ones we think might be most useful:

  • here is an overview of the best video conferencing apps for accessibility
  • National Coordinating Centre for Public Engagement Online Engagement: a guide for creating and running virtual meetings and events
  • a blog by University College London on Engagement in a time of social distancing , which also includes further resources and reading
  • Engaging at a distance guidance (.PDF) from Being Human, which has a useful online event checklist
  • there are particular considerations to keep in mind when hosting online meetings which will be accessed by disabled people. Find out more about how to make your virtual meetings and events accessible to the disability community and how to run accessible online meetings for disabled people working and studying from home.

Conducting the meetings (face-to-face and online)

  • brief the Chair and other committee members to ensure that the members of the public are welcomed and included during the meeting. You can find out more about chairing a meeting by reading the  TwoCan Associates guidance for chairs (.PDF)
  • offer a pre-meeting or telephone discussion for members of the public to discuss the agenda and papers
  • for face-to-face meetings, provide name badges in a large clear text font
  • ask people to introduce themselves at the beginning of the meeting
  • agree ground rules for how you will conduct a meeting so everybody has an equal opportunity to contribute
  • members including members of the public agree to these rules of mutual respect
  • make sure that everybody has an equal voice on the group
  • encourage the use of clear language, explain jargon and acronyms
  • ask the Chair to regularly check that people understand the language used and the content of the meeting
  • plan for frequent breaks as people might need to take medication or find sitting for long periods difficult
  • for face-to-face meetings, see if it is possible to have a spare room to allocate as a quiet room for those who might need to take some time out of the meeting
  • consider different ways of conducting meetings, such as time in small group sessions (breakout rooms online) as well meetings in a larger
  • group to allow people the opportunity to contribute in different ways
  • create a mentor or buddy system to support the members of the public you are involving on an ongoing basis

After the meeting (face-to-face and online)

  • share a write up of the event, including any recommendations or outcomes
  • provide feedback on any recommendations or outcomes
  • allow sufficient time between meetings for people to consult with their peers or their organisations if they wish to do so
  • ask for feedback from members of the public involved in the meeting and if they have any suggestions for improvement for future meetings

Briefing note eight: ways that people can be involved in the different stages of the research cycle

Members of the public are getting involved in a whole range of research activities. These include helping to develop the research question, applying for funding and ethical approval, sitting on advisory groups, carrying out the research and disseminating the research findings. This section considers the different ways members of the public can get involved in the stages of the research cycle. When reading through this section, keep in mind the six UK Standards for Public Involvement : inclusive opportunities; working together; support and learning; governance; communications; impact.

Identifying and prioritising research

Involving the public in helping to identify and prioritise research questions is a powerful way of ensuring that your research priorities are aligned with those of people who have a personal stake in the subject, for example if they have the condition you are researching or use relevant services. Depending on the type of funding call you are applying for, topics might have already been decided by the research funder or commissioner. Members of the public might have been involved in the identification and prioritisation of the topics by the research funding organisation. People with lived experience -  and the organisations that represent - researchers and health and care practitioners will all have distinct, though often overlapping research priorities.  By working together you can develop a shared agenda. An active partnership will enable you to learn from each other and agree on the research questions together, and the final topic will be a shared decision between the group. Consider recording how the research questions were developed and the different influences on the questions at the beginning of the project. Researchers and research organisations use a range of different ways to work with the public to identify and prioritise research. These include:

  • discussions with existing reference groups and networks
  • inviting people to an event or holding a workshop or focus group
  • attending meetings held by service user groups
  • peer group interviews
  • surveys and interviews
  • asking support organisations about the feedback they get from people who use services
  • using an independent facilitator (this reduces the risk of researchers influencing the agenda too much)

Sometimes it is difficult for people who are unfamiliar with research to identify research questions. It may help to first discuss problems that people experience living with their condition/s, or using treatments or services, before discussing how these might be turned into research questions.

Find out if others have worked with members of the public to identify research topics

In several areas of research including cancer, arthritis and mental health the research topics and priorities of people with these conditions have been explored and published. Find out if research has been carried out to identify the topics most relevant to patients or service users in your area of work to enable you to identify relevant research topics. The James Lind Alliance , funded by the NIHR, facilitates Priority Setting Partnerships. These partnerships bring patients, carers and health and care practitioners together to identify and prioritise the care and treatment uncertainties which they agree are the most important for research. It’s not too late to involve people if you have already identified a research topic – you can still involve the public by asking them if the topic is relevant to them. Most researchers or research organisations have a range of topics or areas that they would like to research. Members of the public can work with individual researchers, research teams or organisations to decide which topics to focus on first. Listen, respond and talk through with them how their ideas can be included in your research questions.

Commissioning research

Many funding organisations now involve members of the public in commissioning research. For example the NIHR recruits and supports patients, carers and members of the public to give feedback on health and social care research funding applications . This gives a broader perspective to the review process, by considering the issues that are important from a public and patient perspective. Involvement in commissioning can be done in a variety of ways, including:

  • involving members of the public in reviewing research proposals
  • having members of the public on research commissioning panels or boards
  • involvement in the monitoring process of research, once funded
  • user controlled organisations commissioning research

Designing and managing research

Involving members of the public in the design of research helps to ensure that the research is relevant to the needs of people and that the research questions and outcomes are clear. It strengthens the ethical basis of the research, and can help improve the recruitment and retention of volunteer participants in the study. Public involvement in the design stage of the research can:

  • demonstrate to funders and commissioners that the topic is important and relevant to the end users of research and that they have been
  • involved in the design of the research
  • identify aspects of the research that raise ethical considerations and provide a broader set of perspectives  on solutions
  • suggest ways that people can be meaningfully involved in the various stages of the research
  • suggest ways to ensure that diversity and inclusion are addressed in the research, for example in ensuring that volunteers can be recruited
  • from certain ethnic communities
  • ensure that your recruitment process is practical and feasible
  • help you to develop a budget for public involvement in your research and ensure that the time and the support needed for public involvement
  • is built in to the research from the beginning
  • help develop written information in user friendly and plain language, which is appropriately targeted at specific communities

Be clear with the people you are involving about the fact that the funding process can be lengthy and that the research may not get funded.

Funding to support public involvement in your research grant application

It can be challenging to obtain funding to support public involvement prior to your grant application being successful. Speak to your organisation or university to see if they have any funding for public involvement. It’s not too late to involve people if you have already developed your research grant application. You can still involve members of the public by:

  • asking people to review your proposal and give written comments on any potential difficulties in your design
  • taking your proposal to a local public involvement group or panel and ask for their views – your university or NHS Trust might already have a
  • group or panel of people who are willing to do this
  • thinking about how best to involve people at other stages of the research if your application is successful

Managing research

One of the main ways that members of the public get involved in managing research is through membership of a study steering group or management committee. Increasingly members of the public are taking a more active role in research as collaborators or in some cases as the principal investigators in studies. In these circumstances they will often be employed as a member of the research team. Involving members of the public in managing research can help to ensure that:

  • a public and societal perspective is maintained throughout an individual project or a programme of work
  • public involvement in the project is properly budgeted and funded
  • effective support is developed for members of the public involved in the study
  • advice is available on improving the recruitment of participants to the study
  • there is involvement in the selection process of staff and researchers for the study – particularly helpful if they are going to be working with research participants

Find out more about public involvement in managing your research on the NIHR website . 

Undertaking the research

Members of the public can get involved in a variety of roles in carrying out the research such as:

  • gathering and reviewing documentary evidence
  • undertaking library-based research
  • carrying out interviews
  • running focus groups
  • analysing and interpreting the results of research

Gathering and reviewing documentary evidence and undertaking library-based research

Members of the public can help look at different types of evidence and interpret the literature from a public perspective. 

Interviewing and running focus groups

If you are going to involve members of the public as peer interviewers (people who have direct experience of the topic being researched and who carry out interviews with other members of the public) or in running focus groups, we suggest you consider:

  • who the ‘peers’ are for your research project, for example their gender, age, ethnicity or experience of using a particular service
  • training and support required to carry out the role.

Analysing and interpreting the results of research

Involving members of the public in analysing and interpreting research findings can:

  • help to identify themes that researchers might miss
  • help check the validity of the conclusions from a public perspective
  • highlight findings that are more relevant to the public

You might involve the members of the public who have been working with you on the research project to analyse and interpret the research findings. Alternatively you could hold a small event to find out the views of a wider audience.

Disseminating research

Members of the public involved in your research will want to ensure that the findings are widely disseminated so they can influence and change practice for the better. It has been found that involving people at the dissemination stage is more successful if they have also  been involved at the earlier stages of the research as they then benefit from a sense of ownership of the research and an understanding of the context and background. This means they will be more likely to disseminate the results to their networks, to help summarise the research findings in clear user-friendly language and ensure that the information is accessible to a public audience.

To encourage and support public involvement in dissemination:

  • develop progress reports or newsletters to keep people informed throughout the project, reporting both negative and positive results
  • feedback results to all those you consulted and collaborated with as well as participants
  • work with members of the public to develop your dissemination plans – they will often have access to groups and forums that researchers are not aware of
  • involve people in presenting at conferences, speaking to patients, support groups and service providers
  • ask members of the public involved in your research to be co-authors in journal and newsletter articles
  • acknowledge the contribution members of the public made to the research when writing journal articles and reports

Remember to include funding for public involvement in disseminating the findings in your grant application, and consider if you will need to allocate funding for developing and printing summaries and for postage. It’s not too late – if you have reached the stage of disseminating your research, there are still options open to you for involving people:

  • discuss your research findings with members of the public and listen to their views
  • ask for their ideas on how best to report your findings to others through networks, newsletters and different media and formats
  • involve people in working with you to ensure that the information is clear and easy to understand for different audiences
  • reflect on and plan how you will involve people earlier in your next project

Implementing research

Public involvement in your research can influence, support and add strength to the way your research is taken into practice. Public involvement in research often provides a route to effecting change and improvement in aspects of health and care practice that are of particular concern to people. Members of the public involved in research are often passionate to ensure that action happens as a result of the research and might be able to establish influential relationships with key agencies and policy makers.

Evaluating impact

From the beginning of your project think how you are going to monitor and evaluate public involvement and its impact throughout the project. Working with the people you involve, document and write up an evaluation of the public involvement in your research recording short and long term impacts. This will help you for future projects and provide valuable knowledge for other researchers looking to involve members of the public in their work. Help to build the evidence base and let others know about what worked well and what didn’t and the impact of public involvement in your research by:

  • including the information in your research reports
  • publishing information on the impact in journal articles

Find out more about reporting evaluating impact of public involvement in your study:

  • PiiAF – The Public Involvement Impact Assessment Framework and Guidance (Popay, Collins et al 2013)
  • What does it mean to involve consumers successfully in NHS research? A consensus study (Telford et al 2004)
  • PIRICOM study: a systematic review (.PDF) (Brett et al 2010)
  • Service user involvement in nursing, midwifery and health visiting research: A review of evidence and practice (Smith et al 2008)

Briefing note nine: what to do when things go wrong

In this guidance we have provided information to help you think through how best to involve people in your research. However, there will occasionally be times when things go wrong. Problems can often be sorted out by informal discussions but if you think that things are going wrong it is best to act quickly. If left unresolved, problems can get worse and affect a growing number of members of a team or organisation. Depending on what the issue is we suggest you consider some of the following:

  • acknowledge that there is a problem
  • listen to any concerns and openly discuss them with those involved along with any concerns that you might have
  • allow space and time for all involved to reflect - public involvement in research is a learning process
  • refer back to any relevant documents that you have developed such as ground rules for meetings, complaints policy, confidentiality agreements
  • set a timescale for agreed change to happen
  • let people know about any actions/changes/decisions
  • ensure support is available if necessary
  • consider using a skilled external facilitator to help with the reflection process

If you are unable to resolve issues using some of the above strategies or if either you or the member of the public feel a more independent review of the situation is required then a more formal approach should be considered. It is helpful to outline in your planning and preparation the procedure for complaints and resolving differences so that the information is clearly available from the beginning. More formal procedures might be:

  • Patient Advice and Liaison Service (PALS) Officers
  • university complaints procedures
  • local authority complaints procedures

If you are working with people representing a non-statutory organisation, that organisation might have its own processes.

Briefing note ten: where to go for further information

Nihr information.

Learning for Involvement allows you access training and guidance on public involvement People in Research reports live opportunities for public involvement in NHS, public health and social care research, and allows you to submit your own Read the NIHR Payment Guidance for Researchers and Professionals NIHR Research Support Service (RSS) - The RSS provides free and confidential support for researchers to apply for funding and develop and deliver clinical and applied health and care research. Access to support, advice and expertise is available for all researchers in England working across  the remit of the NIHR . NIHR Clinical Research Network (CRN) - The CRN supports patients, the public and health and care organisations across England to participate in high-quality research, thereby advancing knowledge and improving care. The CRN comprises 15 Local Clinical Research Networks (LCRN) and 30 Specialties which coordinate and support the delivery of high-quality research both by geography and therapy area. National leadership and coordination is provided through the Research Delivery Network Coordinating Centre. Contact your LCRN .

How to cite this guidance

NIHR, Briefing notes for researchers, April 2021, [URL], (Accessed on: [DATE])

Acknowledgements

This resource was reviewed by the NIHR in March 2021 for accuracy and currency. The NIHR endorses this resource. The previous version was referenced as: INVOLVE (2012) Briefing notes for researchers: involving the public in NHS, public health and social care research. INVOLVE, Eastleigh.

The authors of the 2012 version of the Briefing Notes for Researchers were Helen Hayes, Sarah Buckland and Maryrose Tarpey, who used to work for the INVOLVE Coordinating Centre. Additional contributors to thank for their involvement in the development of the 2012 version are: Ann Louise Caress, Alison Ford, Lesley Roberts, Carer, Kati Turner, Derek Ward, Tracey Williamson, Sarah Howlett, Lucy Simons, Philippa Yeeles, Gill Wren, Paula Davis, Sandra Paget, Doreen Tembo, Christine Vial.

Numbers, Facts and Trends Shaping Your World

Read our research on:

Full Topic List

Regions & Countries

  • Publications
  • Our Methods
  • Short Reads
  • Tools & Resources

Read Our Research On:

Are you in the American middle class? Find out with our income calculator

About half of U.S. adults (52%) lived in middle-income households in 2022, according to a Pew Research Center analysis of the most recent available government data. Roughly three-in-ten (28%) were in lower-income households and 19% were in upper-income households.

Our calculator below, updated with 2022 data, lets you find out which group you are in, and compares you with:

  • Other adults in your metropolitan area
  • U.S. adults overall
  • U.S. adults similar to you in education, age, race or ethnicity, and marital status

Find more research about the U.S. middle class on our topic page .

Our latest analysis shows that the estimated share of adults who live in middle-income households varies widely across the 254 metropolitan areas we examined, from 42% in San Jose-Sunnyvale-Santa Clara, California, to 66% in Olympia-Lacey-Tumwater, Washington. The share of adults who live in lower-income households ranges from 16% in Bismarck, North Dakota, to 46% in Laredo, Texas. The share living in upper-income households is smallest in Muskegon-Norton Shores, Michigan (8%), and greatest in San Jose-Sunnyvale-Santa Clara, California (41%).

How the income calculator works

The calculator takes your household income and adjusts it for the size of your household. The income is revised upward for households that are below average in size and downward for those of above-average size. This way, each household’s income is made equivalent to the income of a three-person household. (Three is the whole number nearest to the  average size of a U.S. household , which was 2.5 people in 2023.)

Pew Research Center does not store or share any of the information you enter.

We use your size-adjusted household income and the cost of living in your area to determine your income tier. Middle-income households – those with an income that is two-thirds to double the U.S. median household income – had incomes ranging from about $56,600 to $169,800 in 2022. Lower-income households had incomes less than $56,600, and upper-income households had incomes greater than $169,800. (All figures are computed for three-person households, adjusted for the cost of living in a metropolitan area, and expressed in 2022 dollars.)

The following example illustrates how cost-of-living adjustment for a given area was calculated: Jackson, Tennessee, is a relatively inexpensive area, with a  price level in 2022 that was 13.0% less than the national average. The San Francisco-Oakland-Berkeley metropolitan area in California is one of the most expensive, with a price level that was 17.9% higher than the national average. Thus, to step over the national middle-class threshold of $56,600, a household in Jackson needs an income of only about $49,200, or 13.0% less than the national threshold. But a household in the San Francisco area needs an income of about $66,700, or 17.9% more than the U.S. threshold, to be considered middle class.

The income calculator encompasses 254 of 387 metropolitan areas in the United States, as defined by the Office of Management and Budget  . If you live outside of one of these 254 areas, the calculator reports the estimates for your state.

The second part of our calculator asks about your education, age, race or ethnicity, and marital status. This allows you to see how other adults who are similar to you demographically are distributed across lower-, middle- and upper-income tiers in the U.S. overall. It does not recompute your economic tier.

Note: This post and interactive calculator were originally published Dec. 9, 2015, and have been updated to reflect the Center’s new analysis.   Former Senior Researcher Rakesh Kochhar and former Research Analyst Jesse Bennett also contributed to this analysis.

The Center recently published an analysis of the distribution of the  American population across income tiers . In that analysis, the estimates of the overall shares in each income tier are slightly different, because it relies on a separate government data source and includes children as well as adults.

Pew Research Center designed this calculator as a way for users to find out, based on our analysis, where they appear in the distribution of U.S. adults by income tier, as well as how they compare with others who match their demographic profile.

The data underlying the calculator come from the 2022 American Community Survey (ACS). The ACS contains approximately 3 million records, or about 1% of the U.S. population.

In our analysis, “middle-income” Americans are adults whose annual household income is two-thirds to double the national median, after incomes have been adjusted for household size. Lower-income households have incomes less than two-thirds of the median, and upper-income households have incomes more than double the median. American adults refers to those ages 18 and older who reside in a household (as opposed to group quarters).

In 2022, the  national  middle-income range was about $56,600 to $169,800 annually for a household of three. Lower-income households had incomes less than $56,600, and upper-income households had incomes greater than $169,800. (Incomes are calculated in 2022 dollars.) The median adjusted household income used to derive this middle-income range is based on household heads, regardless of their age.

These income ranges vary with the cost of living in metropolitan areas and with household size. A household in a metropolitan area with a higher-than-average cost of living, or one with more than three people, needs more than $56,600 to be included in the middle-income tier. Households in less expensive areas or with fewer than three people need less than $56,600 to be considered middle income. Additional details on the methodology are available in our  earlier analyses .

  • Income & Wages
  • Middle Class

Download Richard Fry's photo

Richard Fry is a senior researcher focusing on economics and education at Pew Research Center .

Income inequality is greater among Chinese Americans than any other Asian origin group in the U.S.

Is college worth it, 7 facts about americans and taxes, methodology: 2023 focus groups of asian americans, 1 in 10: redefining the asian american dream (short film), most popular.

901 E St. NW, Suite 300 Washington, DC 20004 USA (+1) 202-419-4300 | Main (+1) 202-857-8562 | Fax (+1) 202-419-4372 |  Media Inquiries

Research Topics

  • Email Newsletters

ABOUT PEW RESEARCH CENTER  Pew Research Center is a nonpartisan, nonadvocacy fact tank that informs the public about the issues, attitudes and trends shaping the world. It does not take policy positions. The Center conducts public opinion polling, demographic research, computational social science research and other data-driven research. Pew Research Center is a subsidiary of The Pew Charitable Trusts , its primary funder.

© 2024 Pew Research Center

Have questions about buying, selling or renting during COVID-19? Learn more

Zillow Research aims to be the most open, authoritative source for timely and accurate housing data and unbiased insight.

  • More New Listings at Major Climate Risk Now Than Five Years Ago

The Numbers

August 2024 U.S. Typical Home Value (Zillow Home Value Index)

$362,143 (2.5% YoY)

August 2024 U.S. Typical Rent (Zillow Observed Rent Index)

$2,063 (3.4% YOY)

August 2024 Change in New Listings

August 2024 Typical Mortgage Payment

  • August 2024: New Home Sales Fell In August, But Remain Higher Than Expected
  • August 2024 Existing Home Sales Pull Back
  • Renters are Looking for Perks Like Pet Areas and Happy Hours Over Gyms and Pools
  • Fed Cuts In Line With Expectations, Yields Increase On The News
  • Zillow Home Value and Home Sales Forecast (August 2024)
  • Rate drops and more inventory mean opportunities for buyers (August 2024 Market Report)
  • Mortgage Rates Fell This Week After Lackluster Jobs Data
  • Market Heat Index
  • Market One-Pager

Trump's claims of a migrant crime wave are not supported by national data

Donald Trump

WASHINGTON — When Donald Trump speaks at the southern border in Texas on Thursday, you can expect to hear him talk about “migrant crime,” a category he has coined and defined as a terrifying binge of criminal activity committed by undocumented immigrants spreading across the country.

“You know, in New York, what’s happening with crime is it’s through the roof, and it’s called ‘migrant,’” the former president said at a rally in Michigan earlier this month. “They beat up police officers. You’ve seen that they go in, they stab people, hurt people, shoot people. It’s a whole new form, and they have gangs now that are making our gangs look like small potatoes.”

Trump has undoubtedly tapped into the rising anger over crimes allegedly committed by undocumented migrants that have gained national attention — most recently, the killing of college student Laken Riley in Georgia last week, after which an undocumented migrant from Venezuela was arrested and charged with her murder, and the much-reported fight between New York police officers and a group of migrant teens.

According to a recent Pew  poll , 57% of Americans said that a large number of migrants seeking to enter the country leads to more crime. Republicans (85%) overwhelmingly say the migrant surge leads to increased crime in the U.S. A far smaller share of Democrats (31%) say the same. The poll found that 63% of Democrats say it does not have much of an impact.

But despite the former president’s campaign rhetoric, expert analysis and available data from major-city police departments show that despite several horrifying high-profile incidents, there is no evidence of a migrant-driven crime wave in the United States.

That won’t change the way Trump talks about immigrants in his bid to return to the White House, as he argues that President Joe Biden’s immigration policies are making Americans less safe. Trump says voters should hold Biden personally responsible for every crime committed by an undocumented immigrant.

An NBC News review of available 2024 crime data from the cities targeted by Texas’ “Operation Lone Star,” which buses or flies migrants from the border to major cities in the interior — shows overall crime levels dropping in those cities that have received the most migrants.

Overall crime is down year over year in  Philadelphia ,  Chicago , Denver ,  New York  and Los Angeles. Crime has risen in  Washington, D.C ., but local officials do not attribute the spike to migrants.

“This is a public perception problem. It’s always based upon these kinds of flashpoint events where an immigrant commits a crime,” explains Graham Ousey, a professor at the College of William & Mary and the co-author of “Immigration and Crime: Taking Stock.” “There’s no evidence for there being any relationship between somebody’s immigrant status and their involvement in crime.”

Ousey notes the emotional toll these incidents have taken and how they can inform public perception, saying, “They can be really egregious acts of criminality that really draw lots of attention that involve somebody who happens to be an immigrant. And if you have leaders, political leaders who are really pushing that narrative, I think that would have the tendency to sort of push up the myth.”

“At least a couple of recent studies show that undocumented immigrants are also not more likely to be involved in crime,” Ousey says — in part because of caution about their immigration status. “The individual-level studies actually show that they’re less involved than native-born citizens or second-generation immigrants.”

Another misconception often cited by critics is that crime is more prevalent in “sanctuary cities.” But a Department of Justice report found that “there was no evidence that the percentage of unauthorized or authorized immigrant population at the city level impacted shifts in the homicide rates and no evidence that immigration is connected to robbery at the city level.”

Trump’s campaign claims without evidence that those statistics obscure the problem.

“Democrat cities purposefully do not document when crimes are committed by illegal immigrants, because they don’t want American citizens to know the truth about the dangerous impact Joe Biden’s open border is having on their communities,” Karoline Leavitt, Trump campaign press secretary, said in a statement. “Nevertheless, Americans know migrant crime is a serious and growing threat; and the murder, rape, or abuse of one innocent citizen at the hands of an illegal immigrant is one too many.”

Trump has been pushing the argument that immigrants bring crime since launching his first campaign in 2015, often featuring at his rallies the family members of those who were killed by undocumented immigrants who had been drinking and driving. And his arguments are not new — opponents of immigration have long tried to make the case that migrants bring crime.

National crime data, especially pertaining to undocumented immigrants, is notoriously incomplete. The national data comes in piecemeal and can only be evaluated holistically when the annual data is released.

The data is incomplete on how many crimes each year are committed by migrants, primarily because most local police don’t record immigration status when they make arrests. But the studies that have been done on this, most recently by the University of Wisconsin-Madison, show that in Texas, where police do record immigration status, migrants commit fewer crimes per capita.

In December 2020, researchers studying Texas crime statistics found that “contrary to public perception, we observe considerably lower felony arrest rates among undocumented immigrants compared to legal immigrants and native-born U.S. citizens and find no evidence that undocumented criminality has increased in recent years.”

what is analysis of data in research

Olympia Sonnier is a field producer for NBC News. 

what is analysis of data in research

Garrett Haake is NBC News' senior Capitol Hill correspondent. He also covers the Trump campaign.

IMAGES

  1. What is Data Analysis ?

    what is analysis of data in research

  2. What Is Data Analysis In Research Process

    what is analysis of data in research

  3. Data Analysis in research methodology

    what is analysis of data in research

  4. Standard statistical tools in research and data analysis

    what is analysis of data in research

  5. Four Main Types of Data Analysis And Its Application

    what is analysis of data in research

  6. Data Analysis: Definition, Types and Examples

    what is analysis of data in research

VIDEO

  1. #5 How does Data get Analyzed

  2. A very brief Introduction to Data Analysis (part 1)

  3. Data Analysis in Research

  4. What is Data Analysis?

  5. Analysis of Data? Some Examples to Explore

  6. What is Data Analysis? Data Analysis

COMMENTS

  1. Data Analysis in Research: Types & Methods

    Data analysis in qualitative research. Data analysis and qualitative data research work a little differently from the numerical data as the quality data is made up of words, descriptions, images, objects, and sometimes symbols. Getting insight from such complicated information is a complicated process.

  2. Data analysis

    data analysis, the process of systematically collecting, cleaning, transforming, describing, modeling, and interpreting data, generally employing statistical techniques. Data analysis is an important part of both scientific research and business, where demand has grown in recent years for data-driven decision making.

  3. Data Analysis

    Data Analysis. Definition: Data analysis refers to the process of inspecting, cleaning, transforming, and modeling data with the goal of discovering useful information, drawing conclusions, and supporting decision-making. It involves applying various statistical and computational techniques to interpret and derive insights from large datasets.

  4. What is Data Analysis? An Expert Guide With Examples

    Data analysis is a comprehensive method of inspecting, cleansing, transforming, and modeling data to discover useful information, draw conclusions, and support decision-making. It is a multifaceted process involving various techniques and methodologies to interpret data from various sources in different formats, both structured and unstructured.

  5. Introduction to Data Analysis

    Data analysis can be quantitative, qualitative, or mixed methods. Quantitative research typically involves numbers and "close-ended questions and responses" (Creswell & Creswell, 2018, p. 3).Quantitative research tests variables against objective theories, usually measured and collected on instruments and analyzed using statistical procedures (Creswell & Creswell, 2018, p. 4).

  6. What Is Data Analysis in Research? Why It Matters & What Data Analysts

    Data analysis in research is the process of uncovering insights from data sets. Data analysts can use their knowledge of statistical techniques, research theories and methods, and research practices to analyze data. They take data and uncover what it's trying to tell us, whether that's through charts, graphs, or other visual representations

  7. Introduction to Research Statistical Analysis: An Overview of the

    Introduction. Statistical analysis is necessary for any research project seeking to make quantitative conclusions. The following is a primer for research-based statistical analysis. It is intended to be a high-level overview of appropriate statistical testing, while not diving too deep into any specific methodology.

  8. A practical guide to data analysis in general literature reviews

    This article is a practical guide to conducting data analysis in general literature reviews. The general literature review is a synthesis and analysis of published research on a relevant clinical issue, and is a common format for academic theses at the bachelor's and master's levels in nursing, physiotherapy, occupational therapy, public health and other related fields.

  9. What is Data Analysis? (Types, Methods, and Tools)

    December 17, 2023. Data analysis is the process of cleaning, transforming, and interpreting data to uncover insights, patterns, and trends. It plays a crucial role in decision making, problem solving, and driving innovation across various domains. In addition to further exploring the role data analysis plays this blog post will discuss common ...

  10. Data Analysis: Definition, Types and Examples

    Data analysis in research is an illustrative method of applying the right statistical or logical technique so that the raw data makes sense. ... "A lot of this [data analysis] will help humans work smarter and faster because we have data on everything that happens." -Daniel Burrus, business consultant and speaker on business and ...

  11. Creating a Data Analysis Plan: What to Consider When Choosing

    For those interested in conducting qualitative research, previous articles in this Research Primer series have provided information on the design and analysis of such studies. 2, 3 Information in the current article is divided into 3 main sections: an overview of terms and concepts used in data analysis, a review of common methods used to ...

  12. Data Analysis in Qualitative Research

    Data analysis in qualitative research is the process of examining and interpreting non-numerical data to uncover patterns, themes, and insights. It aims to make sense of rich, detailed information gathered through methods like interviews, focus groups, or observations. This analysis moves beyond simple description.

  13. An Overview of Data Analysis and Interpretations in Research

    Research is a scientific field which helps to generate new knowledge and solve the existing problem. So, data analysis is the cru cial part of research which makes the result of the stu dy more ...

  14. Learning to Do Qualitative Data Analysis: A Starting Point

    This question is particularly relevant to researchers new to the field and practice of qualitative research and instructors and mentors who regularly introduce students to qualitative research practices. In this article, we seek to offer what we view as a useful starting point for learning how to do qualitative analysis. We begin by discussing ...

  15. Data Analysis Techniques in Research

    Importance of Data Analysis in Research. The importance of data analysis in research cannot be overstated; it serves as the backbone of any scientific investigation or study. Here are several key reasons why data analysis is crucial in the research process: Data analysis helps ensure that the results obtained are valid and reliable.

  16. Research Guide: Data analysis and reporting findings

    Data analysis is the most crucial part of any research. Data analysis summarizes collected data. It involves the interpretation of data gathered through the use of analytical and logical reasoning to determine patterns, relationships or trends.

  17. Qualitative Data Analysis: What is it, Methods + Examples

    Qualitative data analysis is a systematic process of examining non-numerical data to extract meaning, patterns, and insights. In contrast to quantitative analysis, which focuses on numbers and statistical metrics, the qualitative study focuses on the qualitative aspects of data, such as text, images, audio, and videos.

  18. Considerations/issues in data analysis

    While data analysis in qualitative research can include statistical procedures, many times analysis becomes an ongoing iterative process where data is continuously collected and analyzed almost simultaneously. Indeed, researchers generally analyze for patterns in observations through the entire data collection phase (Savenye, Robinson, 2004). ...

  19. Research Methods

    To analyze data collected in a statistically valid manner (e.g. from experiments, surveys, and observations). Meta-analysis. Quantitative. To statistically analyze the results of a large collection of studies. Can only be applied to studies that collected data in a statistically valid manner. Thematic analysis.

  20. Data Analysis

    Data analysis for quantitative studies, on the other hand, involves critical analysis and interpretation of figures and numbers, and attempts to find rationale behind the emergence of main findings. Comparisons of primary research findings to the findings of the literature review are critically important for both types of studies ...

  21. Data Interpretation

    Data interpretation and data analysis are two different but closely related processes in data-driven decision-making. Data analysis refers to the process of examining and examining data using statistical and computational methods to derive insights and conclusions from it. It involves cleaning, transforming, and modeling the data to uncover ...

  22. NSF Award Search: Award # 1703281

    Standard statistical methods for conducting causal inference on observational data assume a very simple data model: a single table with independent units. This research has the potential to significantly impact application domains where differentiating causality from correlation is essential, e.g., education policy and cancer genomics.

  23. Briefing notes for researchers

    Published and ongoing research and evaluations to help you with data analysis and to improve health in your area. Making a difference stories. From research to impact: stories of how we make a difference. Explore data and publications. ... Research is to be informed by a diversity of public experience and insight, so that it leads to treatments ...

  24. Are you in the U.S. middle class? Try our income calculator

    Former Senior Researcher Rakesh Kochhar and former Research Analyst Jesse Bennett also contributed to this analysis. The Center recently published an analysis of the distribution of the American population across income tiers. In that analysis, the estimates of the overall shares in each income tier are slightly different, because it relies on ...

  25. Home

    Zillow Research aims to be the most open, authoritative source for timely and accurate housing data and unbiased insight. Have questions about buying, selling or renting during COVID-19? Learn more

  26. Trump's claims of a migrant crime wave are not supported by national data

    The data is incomplete on how many crimes each year are committed by migrants, primarily because most local police don't record immigration status when they make arrests.

  27. FBI Releases 2023 Crime in the Nation Statistics

    The FBI's crime statistics estimates, based on reported data for 2023, show that national violent crime decreased an estimated 3.0% in 2023 compared to 2022 estimates: