19+ Experimental Design Examples (Methods + Types)

Ever wondered how scientists discover new medicines, psychologists learn about behavior, or even how marketers figure out what kind of ads you like? Well, they all have something in common: they use a special plan or recipe called an "experimental design."

Imagine you're baking cookies. You can't just throw random amounts of flour, sugar, and chocolate chips into a bowl and hope for the best. You follow a recipe, right? Scientists and researchers do something similar. They follow a "recipe" called an experimental design to make sure their experiments are set up in a way that the answers they find are meaningful and reliable.

Experimental design is the roadmap researchers use to answer questions. It's a set of rules and steps that researchers follow to collect information, or "data," in a way that is fair, accurate, and makes sense.

Long ago, people didn't have detailed game plans for experiments. They often just tried things out and saw what happened. But over time, people got smarter about this. They started creating structured plans—what we now call experimental designs—to get clearer, more trustworthy answers to their questions.

In this article, we'll take you on a journey through the world of experimental designs. We'll talk about the different types, or "flavors," of experimental designs, where they're used, and even give you a peek into how they came to be.

What Is Experimental Design?

Alright, before we dive into the different types of experimental designs, let's get crystal clear on what experimental design actually is.

Imagine you're a detective trying to solve a mystery. You need clues, right? Well, in the world of research, experimental design is like the roadmap that helps you find those clues. It's like the game plan in sports or the blueprint when you're building a house. Just like you wouldn't start building without a good blueprint, researchers won't start their studies without a strong experimental design.

So, why do we need experimental design? Think about baking a cake. If you toss ingredients into a bowl without measuring, you'll end up with a mess instead of a tasty dessert.

Similarly, in research, if you don't have a solid plan, you might get confusing or incorrect results. A good experimental design helps you ask the right questions (think critically), decide what to measure (come up with an idea), and figure out how to measure it (test it). It also helps you consider things that might mess up your results, like outside influences you hadn't thought of.

For example, let's say you want to find out if listening to music helps people focus better. Your experimental design would help you decide things like: Who are you going to test? What kind of music will you use? How will you measure focus? And, importantly, how will you make sure that it's really the music affecting focus and not something else, like the time of day or whether someone had a good breakfast?

In short, experimental design is the master plan that guides researchers through the process of collecting data, so they can answer questions in the most reliable way possible. It's like the GPS for the journey of discovery!

History of Experimental Design

Around 350 BCE, people like Aristotle were trying to figure out how the world works, but they mostly just thought really hard about things. They didn't test their ideas much. So while they were super smart, their methods weren't always the best for finding out the truth.

Fast forward to the Renaissance (14th to 17th centuries), a time of big changes and lots of curiosity. People like Galileo started to experiment by actually doing tests, like rolling balls down inclined planes to study motion. Galileo's work was cool because he combined thinking with doing. He'd have an idea, test it, look at the results, and then think some more. This approach was a lot more reliable than just sitting around and thinking.

Now, let's zoom ahead to the 19th century. This is when people like Francis Galton, an English polymath, started to get really systematic about experimentation. Galton was obsessed with measuring things. Seriously, he even tried to measure how good-looking people were! His work helped create the foundations for a more organized approach to experiments.

Next stop: the early 20th century. Enter Ronald A. Fisher, a brilliant British statistician. Fisher was a game-changer. He came up with ideas that are like the bread and butter of modern experimental design.

Fisher championed the rigorous use of the "control group"—that's a group of people or things that don't get the treatment you're testing, so you can compare them to those who do. He also pioneered "randomization," which means assigning people or things to different groups by chance, like drawing names out of a hat. This makes sure the experiment is fair and the results are trustworthy.

Around the same time, American psychologists like John B. Watson and B.F. Skinner were developing "behaviorism." They focused on studying things that they could directly observe and measure, like actions and reactions.

Skinner even built boxes—called Skinner Boxes—to test how animals like pigeons and rats learn. Their work helped shape how psychologists design experiments today. Watson ran a very controversial study, the Little Albert experiment, which showed how behavior can be shaped through conditioning—in other words, how people learn to respond the way they do.

In the later part of the 20th century and into our time, computers have totally shaken things up. Researchers now use super powerful software to help design their experiments and crunch the numbers.

With computers, they can simulate complex experiments before they even start, which helps them predict what might happen. This is especially helpful in fields like medicine, where getting things right can be a matter of life and death.

Also, did you know that experimental designs aren't just for scientists in labs? They're used by people in all sorts of jobs, like marketing, education, and even video game design! Yes, someone probably ran an experiment to figure out what makes a game super fun to play.

So there you have it—a quick tour through the history of experimental design, from Aristotle's deep thoughts to Fisher's groundbreaking ideas, and all the way to today's computer-powered research. These designs are the recipes that help people from all walks of life find answers to their big questions.

Key Terms in Experimental Design

Before we dig into the different types of experimental designs, let's get comfy with some key terms. Understanding these terms will make it easier for us to explore the various types of experimental designs that researchers use to answer their big questions.

Independent Variable : This is what you change or control in your experiment to see what effect it has. Think of it as the "cause" in a cause-and-effect relationship. For example, if you're studying whether different types of music help people focus, the kind of music is the independent variable.

Dependent Variable : This is what you're measuring to see the effect of your independent variable. In our music and focus experiment, how well people focus is the dependent variable—it's what "depends" on the kind of music played.

Control Group : This is a group of people who don't get the special treatment or change you're testing. They help you see what happens when the independent variable is not applied. If you're testing whether a new medicine works, the control group would take a fake pill, called a placebo, instead of the real medicine.

Experimental Group : This is the group that gets the special treatment or change you're interested in. Going back to our medicine example, this group would get the actual medicine to see if it has any effect.

Randomization : This is like shaking things up in a fair way. You randomly put people into the control or experimental group so that each group is a good mix of different kinds of people. This helps make the results more reliable.

Sample : This is the group of people you're studying. They're a "sample" of a larger group that you're interested in. For instance, if you want to know how teenagers feel about a new video game, you might study a sample of 100 teenagers.

Bias : This is anything that might tilt your experiment one way or another without you realizing it. Like if you're testing a new kind of dog food and you only test it on poodles, that could create a bias because maybe poodles just really like that food and other breeds don't.

Data : This is the information you collect during the experiment. It's like the treasure you find on your journey of discovery!

Replication : This means doing the experiment more than once to make sure your findings hold up. It's like double-checking your answers on a test.

Hypothesis : This is your educated guess about what will happen in the experiment. It's like predicting the end of a movie based on the first half.

Steps of Experimental Design

Alright, let's say you're all fired up and ready to run your own experiment. Cool! But where do you start? Well, designing an experiment is a bit like planning a road trip. There are some key steps you've got to take to make sure you reach your destination. Let's break it down:

  • Ask a Question : Before you hit the road, you've got to know where you're going. Same with experiments. You start with a question you want to answer, like "Does eating breakfast really make you do better in school?"
  • Do Some Homework : Before you pack your bags, you look up the best places to visit, right? In science, this means reading up on what other people have already discovered about your topic.
  • Form a Hypothesis : This is your educated guess about what you think will happen. It's like saying, "I bet this route will get us there faster."
  • Plan the Details : Now you decide what kind of car you're driving (your experimental design), who's coming with you (your sample), and what snacks to bring (your variables).
  • Randomization : Remember, this is like shuffling a deck of cards. You want to mix up who goes into your control and experimental groups to make sure it's a fair test.
  • Run the Experiment : Finally, the rubber hits the road! You carry out your plan, making sure to collect your data carefully.
  • Analyze the Data : Once the trip's over, you look at your photos and decide which ones are keepers. In science, this means looking at your data to see what it tells you.
  • Draw Conclusions : Based on your data, did you find an answer to your question? This is like saying, "Yep, that route was faster," or "Nope, we hit a ton of traffic."
  • Share Your Findings : After a great trip, you want to tell everyone about it, right? Scientists do the same by publishing their results so others can learn from them.
  • Do It Again? : Sometimes one road trip just isn't enough. In the same way, scientists often repeat their experiments to make sure their findings are solid.

So there you have it! Those are the basic steps you need to follow when you're designing an experiment. Each step helps make sure that you're setting up a fair and reliable way to find answers to your big questions.

Let's get into examples of experimental designs.

1) True Experimental Design

In the world of experiments, the True Experimental Design is like the superstar quarterback everyone talks about. Born out of the early 20th-century work of statisticians like Ronald A. Fisher, this design is all about control, precision, and reliability.

Researchers carefully pick an independent variable to manipulate (remember, that's the thing they're changing on purpose) and measure the dependent variable (the effect they're studying). Then comes the magic trick—randomization. By randomly putting participants into either the control or experimental group, scientists make sure their experiment is as fair as possible.

No sneaky biases here!
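
If you're curious what that randomization step looks like in practice, here's a minimal Python sketch. The participant IDs are invented for illustration:

```python
import random

# Hypothetical participant IDs -- in a real study these would come from your sample.
participants = [f"P{i:02d}" for i in range(1, 21)]

random.shuffle(participants)              # mix the order purely by chance
half = len(participants) // 2
control_group = participants[:half]       # gets the placebo / no treatment
experimental_group = participants[half:]  # gets the treatment being tested

print("Control:", control_group)
print("Experimental:", experimental_group)
```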

True Experimental Design Pros

The pros of True Experimental Design are like the perks of a VIP ticket at a concert: you get the best and most trustworthy results. Because everything is controlled and randomized, you can feel pretty confident that the results aren't just a fluke.

True Experimental Design Cons

However, there's a catch. Sometimes, it's really tough to set up these experiments in a real-world situation. Imagine trying to control every single detail of your day, from the food you eat to the air you breathe. Not so easy, right?

True Experimental Design Uses

The fields that get the most out of True Experimental Designs are those that need super reliable results, like medical research.

When scientists were developing COVID-19 vaccines, they used this design to run clinical trials. They had control groups that received a placebo (a harmless substance with no effect) and experimental groups that got the actual vaccine. Then they measured how many people in each group got sick. By comparing the two, they could say, "Yep, this vaccine works!"

So next time you read about a groundbreaking discovery in medicine or technology, chances are a True Experimental Design was the VIP behind the scenes, making sure everything was on point. It's been the go-to for rigorous scientific inquiry for nearly a century, and it's not stepping off the stage anytime soon.

2) Quasi-Experimental Design

So, let's talk about the Quasi-Experimental Design. Think of this one as the cool cousin of True Experimental Design. It wants to be just like its famous relative, but it's a bit more laid-back and flexible. You'll find quasi-experimental designs when it's tricky to set up a full-blown True Experimental Design with all the bells and whistles.

Quasi-experiments still play with an independent variable, just like their stricter cousins. The big difference? They don't use randomization. It's like wanting to divide a bag of jelly beans equally between your friends, but you can't quite do it perfectly.

In real life, it's often not possible or ethical to randomly assign people to different groups, especially when dealing with sensitive topics like education or social issues. And that's where quasi-experiments come in.

Quasi-Experimental Design Pros

Even though they lack full randomization, quasi-experimental designs are like the Swiss Army knives of research: versatile and practical. They're especially popular in fields like education, sociology, and public policy.

For instance, when researchers wanted to figure out if the Head Start program, aimed at giving young kids a "head start" in school, was effective, they used a quasi-experimental design. They couldn't randomly assign kids to go or not go to preschool, but they could compare kids who did with kids who didn't.

Quasi-Experimental Design Cons

Of course, quasi-experiments come with trade-offs. They're easier to set up and often cheaper than true experiments, but the flip side is that they're not as rock-solid in their conclusions. Because the groups aren't randomly assigned, there's always that little voice saying, "Hey, are we missing something here?"

Quasi-Experimental Design Uses

Quasi-Experimental Design gained traction in the mid-20th century. Researchers were grappling with real-world problems that didn't fit neatly into a laboratory setting. Plus, as society became more aware of ethical considerations, the need for flexible designs increased. So, the quasi-experimental approach was like a breath of fresh air for scientists wanting to study complex issues without a laundry list of restrictions.

In short, if True Experimental Design is the superstar quarterback, Quasi-Experimental Design is the versatile player who can adapt and still make significant contributions to the game.

3) Pre-Experimental Design

Now, let's talk about the Pre-Experimental Design. Imagine it as the beginner's skateboard you get before you try out for all the cool tricks. It has wheels, it rolls, but it's not built for the professional skatepark.

Similarly, pre-experimental designs give researchers a starting point. They let you dip your toes in the water of scientific research without diving in head-first.

So, what's the deal with pre-experimental designs?

Pre-Experimental Designs are the basic, no-frills versions of experiments. Researchers still mess around with an independent variable and measure a dependent variable, but they skip over the whole randomization thing and often don't even have a control group.

It's like baking a cake but forgetting the frosting and sprinkles; you'll get some results, but they might not be as complete or reliable as you'd like.

Pre-Experimental Design Pros

Why use such a simple setup? Because sometimes, you just need to get the ball rolling. Pre-experimental designs are great for quick-and-dirty research when you're short on time or resources. They give you a rough idea of what's happening, which you can use to plan more detailed studies later.

A good example of this is early studies on the effects of screen time on kids. Researchers couldn't control every aspect of a child's life, but they could easily ask parents to track how much time their kids spent in front of screens and then look for trends in behavior or school performance.

Pre-Experimental Design Cons

But here's the catch: a pre-experimental design is like the first draft of an essay. It helps you get your ideas down, but you wouldn't want to turn it in for a grade. Because these designs lack the rigorous structure of true or quasi-experimental setups, they can't give you rock-solid conclusions. They're more like clues or signposts pointing you in a certain direction.

Pre-Experimental Design Uses

This type of design became popular in the early stages of various scientific fields. Researchers used them to scratch the surface of a topic, generate some initial data, and then decide if it's worth exploring further. In other words, pre-experimental designs were the stepping stones that led to more complex, thorough investigations.

So, while Pre-Experimental Design may not be the star player on the team, it's like the practice squad that helps everyone get better. It's the starting point that can lead to bigger and better things.

4) Factorial Design

Now, buckle up, because we're moving into the world of Factorial Design, the multi-tasker of the experimental universe.

Imagine juggling not just one, but multiple balls in the air—that's what researchers do in a factorial design.

In Factorial Design, researchers are not satisfied with just studying one independent variable. Nope, they want to study two or more at the same time to see how they interact.

It's like cooking with several spices to see how they blend together to create unique flavors.

Factorial Design became the talk of the town with the rise of computers. Why? Because this design produces a lot of data, and computers are the number crunchers that help make sense of it all. So, thanks to our silicon friends, researchers can study complicated questions like, "How do diet AND exercise together affect weight loss?" instead of looking at just one of those factors.
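
To make that concrete, here's a tiny Python sketch that lays out the conditions of a hypothetical 2x2 factorial study. The diet and exercise levels are made up for illustration:

```python
from itertools import product

# Two hypothetical independent variables, two levels each: a "2x2" factorial.
diets = ["low-carb", "low-fat"]
exercise = ["none", "30 min/day"]

# Every combination of levels becomes one condition of the experiment.
for i, (d, e) in enumerate(product(diets, exercise), start=1):
    print(f"Condition {i}: diet={d}, exercise={e}")
```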

Factorial Design Pros

This design's main selling point is its ability to explore interactions between variables. For instance, maybe a new study drug works really well for young people but not so great for older adults. A factorial design could reveal that age is a crucial factor, something you might miss if you only studied the drug's effectiveness in general. It's like being a detective who looks for clues not just in one room but throughout the entire house.

Factorial Design Cons

However, factorial designs have their own bag of challenges. First off, they can be pretty complicated to set up and run. Imagine coordinating a four-way intersection with lots of cars coming from all directions—you've got to make sure everything runs smoothly, or you'll end up with a traffic jam. Similarly, researchers need to carefully plan how they'll measure and analyze all the different variables.

Factorial Design Uses

Factorial designs are widely used in psychology to untangle the web of factors that influence human behavior. They're also popular in fields like marketing, where companies want to understand how different aspects like price, packaging, and advertising influence a product's success.

And speaking of success, the factorial design has been a hit since statisticians like Ronald A. Fisher (yep, him again!) expanded on it in the early-to-mid 20th century. It offered a more nuanced way of understanding the world, proving that sometimes, to get the full picture, you've got to juggle more than one ball at a time.

So, if True Experimental Design is the quarterback and Quasi-Experimental Design is the versatile player, Factorial Design is the strategist who sees the entire game board and makes moves accordingly.

5) Longitudinal Design

Alright, let's take a step into the world of Longitudinal Design. Picture it as the grand storyteller, the kind who doesn't just tell you about a single event but spins an epic tale that stretches over years or even decades. This design isn't about quick snapshots; it's about capturing the whole movie of someone's life or a long-running process.

You know how you might take a photo every year on your birthday to see how you've changed? Longitudinal Design is kind of like that, but for scientific research.

With Longitudinal Design, instead of measuring something just once, researchers come back again and again, sometimes over many years, to see how things are going. This helps them understand not just what's happening, but why it's happening and how it changes over time.

This design really started to shine in the latter half of the 20th century, when researchers began to realize that some questions can't be answered in a hurry. Think about studies that look at how kids grow up, or research on how a certain medicine affects you over a long period. These aren't things you can rush.

The famous Framingham Heart Study, started in 1948, is a prime example. It's been studying heart health in a small town in Massachusetts for decades, and the findings have shaped what we know about heart disease.

Longitudinal Design Pros

So, what's to love about Longitudinal Design? First off, it's the go-to for studying change over time, whether that's how people age or how a forest recovers from a fire.

Longitudinal Design Cons

But it's not all sunshine and rainbows. Longitudinal studies take a lot of patience and resources. Plus, keeping track of participants over many years can be like herding cats—difficult and full of surprises.

Longitudinal Design Uses

Despite these challenges, longitudinal studies have been key in fields like psychology, sociology, and medicine. They provide the kind of deep, long-term insights that other designs just can't match.

So, if the True Experimental Design is the superstar quarterback, and the Quasi-Experimental Design is the flexible athlete, then the Factorial Design is the strategist, and the Longitudinal Design is the wise elder who has seen it all and has stories to tell.

6) Cross-Sectional Design

Now, let's flip the script and talk about Cross-Sectional Design, the polar opposite of the Longitudinal Design. If Longitudinal is the grand storyteller, think of Cross-Sectional as the snapshot photographer. It captures a single moment in time, like a selfie that you take to remember a fun day. Researchers using this design collect all their data at one point, providing a kind of "snapshot" of whatever they're studying.

In a Cross-Sectional Design, researchers look at multiple groups all at the same time to see how they're different or similar.

This design rose to popularity in the mid-20th century, mainly because it's so quick and efficient. Imagine wanting to know how people of different ages feel about a new video game. Instead of waiting for years to see how opinions change, you could just ask people of all ages what they think right now. That's Cross-Sectional Design for you—fast and straightforward.

You'll find this type of research everywhere from marketing studies to healthcare. For instance, you might have heard about surveys asking people what they think about a new product or political issue. Those are usually cross-sectional studies, aimed at getting a quick read on public opinion.

Cross-Sectional Design Pros

So, what's the big deal with Cross-Sectional Design? Well, it's the go-to when you need answers fast and don't have the time or resources for a more complicated setup.

Cross-Sectional Design Cons

Remember, speed comes with trade-offs. While you get your results quickly, those results are stuck in time. They can't tell you how things change or why they're changing, just what's happening right now.

Cross-Sectional Design Uses

Also, because they're so quick and simple, cross-sectional studies often serve as the first step in research. They give scientists an idea of what's going on so they can decide if it's worth digging deeper. In that way, they're a bit like a movie trailer, giving you a taste of the action to see if you're interested in seeing the whole film.

So, in our lineup of experimental designs, if True Experimental Design is the superstar quarterback and Longitudinal Design is the wise elder, then Cross-Sectional Design is like the speedy running back—fast, agile, but not designed for long, drawn-out plays.

7) Correlational Design

Next on our roster is the Correlational Design, the keen observer of the experimental world. Imagine this design as the person at a party who loves people-watching. They don't interfere or get involved; they just observe and take mental notes about what's going on.

In a correlational study, researchers don't change or control anything; they simply observe and measure how two variables relate to each other.

The correlational design has roots in the early days of psychology and sociology. Pioneers like Sir Francis Galton used it to study how qualities like intelligence or height could be related within families.

This design is all about asking, "Hey, when this thing happens, does that other thing usually happen too?" For example, researchers might study whether students who have more study time get better grades or whether people who exercise more have lower stress levels.

One of the most famous correlational studies you might have heard of is the link between smoking and lung cancer. Back in the mid-20th century, researchers started noticing that people who smoked a lot also seemed to get lung cancer more often. They couldn't say smoking caused cancer—that would require a true experiment—but the strong correlation was a red flag that led to more research and eventually, health warnings.
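
At its core, a correlational study boils down to computing a correlation coefficient. Here's a minimal Python sketch using invented study-time data (it needs Python 3.10+ for `statistics.correlation`):

```python
from statistics import correlation  # available in Python 3.10+

# Hypothetical data: weekly study hours and exam scores for ten students.
study_hours = [2, 4, 5, 6, 7, 8, 9, 10, 11, 12]
exam_scores = [55, 58, 62, 65, 70, 72, 75, 80, 83, 88]

r = correlation(study_hours, exam_scores)  # Pearson's r, between -1 and +1
print(f"r = {r:.2f}")  # strongly positive here -- but correlation isn't causation!
```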

Correlational Design Pros

This design is great at showing that two (or more) things are related. Correlational designs can signal that more detailed research is needed on a topic, and they can reveal patterns or possible causes that we might otherwise have missed.

Correlational Design Cons

But here's where you need to be careful: correlational designs can be tricky. Just because two things are related doesn't mean one causes the other. That's like saying, "Every time I wear my lucky socks, my team wins." Well, it's a fun thought, but those socks aren't really controlling the game.

Correlational Design Uses

Despite this limitation, correlational designs are popular in psychology, economics, and epidemiology, to name a few fields. They're often the first step in exploring a possible relationship between variables. Once a strong correlation is found, researchers may decide to conduct more rigorous experimental studies to examine cause and effect.

So, if the True Experimental Design is the superstar quarterback and the Longitudinal Design is the wise elder, the Factorial Design is the strategist, and the Cross-Sectional Design is the speedster, then the Correlational Design is the clever scout, identifying interesting patterns but leaving the heavy lifting of proving cause and effect to the other types of designs.

8) Meta-Analysis

Last but not least, let's talk about Meta-Analysis, the librarian of experimental designs.

If other designs are all about creating new research, Meta-Analysis is about gathering up everyone else's research, sorting it, and figuring out what it all means when you put it together.

Imagine a jigsaw puzzle where each piece is a different study. Meta-Analysis is the process of fitting all those pieces together to see the big picture.

The concept of Meta-Analysis started to take shape in the late 20th century, when computers became powerful enough to handle massive amounts of data. It was like someone handed researchers a super-powered magnifying glass, letting them examine multiple studies at the same time to find common trends or results.

You might have heard of the Cochrane Reviews in healthcare. These are big collections of meta-analyses that help doctors and policymakers figure out what treatments work best based on all the research that's been done.

For example, if ten different studies show that a certain medicine helps lower blood pressure, a meta-analysis would pull all that information together to give a more accurate answer.
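
One common way to do that pooling is inverse-variance weighting: studies with more precise estimates count for more. Here's a toy Python sketch with made-up numbers (think of each effect as a change in blood pressure, in mmHg):

```python
# Each tuple is (effect estimate, variance) from one hypothetical study.
studies = [(-5.2, 1.4), (-4.8, 2.1), (-6.1, 0.9), (-3.9, 3.0)]

# Fixed-effect inverse-variance pooling: precise studies get more weight.
weights = [1 / var for _, var in studies]
pooled = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled effect: {pooled:.2f} (standard error {pooled_se:.2f})")
```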

Meta-Analysis Pros

The beauty of Meta-Analysis is that it can provide really strong evidence. Instead of relying on one study, you're looking at the whole landscape of research on a topic.

Meta-Analysis Cons

However, it does have some downsides. For one, Meta-Analysis is only as good as the studies it includes. If those studies are flawed, the meta-analysis will be too. It's like baking a cake: if you use bad ingredients, it doesn't matter how good your recipe is—the cake won't turn out well.

Meta-Analysis Uses

Despite these challenges, meta-analyses are highly respected and widely used in many fields like medicine, psychology, and education. They help us make sense of a world that's bursting with information by showing us the big picture drawn from many smaller snapshots.

So, in our all-star lineup, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, the Factorial Design is the strategist, the Cross-Sectional Design is the speedster, and the Correlational Design is the scout, then the Meta-Analysis is like the coach, using insights from everyone else's plays to come up with the best game plan.

9) Non-Experimental Design

Now, let's talk about a player who's a bit of an outsider on this team of experimental designs—the Non-Experimental Design. Think of this design as the commentator or the journalist who covers the game but doesn't actually play.

In a Non-Experimental Design, researchers are like reporters gathering facts, but they don't interfere or change anything. They're simply there to describe and analyze.

Non-Experimental Design Pros

So, what's the deal with Non-Experimental Design? Its strength is in description and exploration. It's really good for studying things as they are in the real world, without changing any conditions.

Non-Experimental Design Cons

Because a non-experimental design doesn't manipulate variables, it can't prove cause and effect. It's like a weather reporter: they can tell you it's raining, but they can't tell you why it's raining.

The downside? Since researchers aren't controlling variables, it's hard to rule out other explanations for what they observe. It's like hearing one side of a story—you get an idea of what happened, but it might not be the complete picture.

Non-Experimental Design Uses

Non-Experimental Design has always been a part of research, especially in fields like anthropology, sociology, and some areas of psychology.

For instance, if you've ever heard of studies that describe how people behave in different cultures or what teens like to do in their free time, that's often Non-Experimental Design at work. These studies aim to capture the essence of a situation, like painting a portrait instead of taking a snapshot.

One well-known example you might have heard about is the Kinsey Reports from the 1940s and 1950s, which described sexual behavior in men and women. Researchers interviewed thousands of people but didn't manipulate any variables like you would in a true experiment. They simply collected data to create a comprehensive picture of the subject matter.

So, in our metaphorical team of research designs, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, and Meta-Analysis is the coach, then Non-Experimental Design is the sports journalist—always present, capturing the game, but not part of the action itself.

10) Repeated Measures Design

Time to meet the Repeated Measures Design, the time traveler of our research team. If this design were a player in a sports game, it would be the one who keeps revisiting past plays to figure out how to improve the next one.

Repeated Measures Design is all about studying the same people or subjects multiple times to see how they change or react under different conditions.

The idea behind Repeated Measures Design isn't new; it's been around since the early days of psychology and medicine. You could say it's a cousin to the Longitudinal Design, but instead of looking at how things naturally change over time, it focuses on how the same group reacts to different things.

Imagine a study looking at how a new energy drink affects people's running speed. Instead of comparing one group that drank the energy drink to another group that didn't, a Repeated Measures Design would have the same group of people run multiple times—once with the energy drink, and once without. This way, you're really zeroing in on the effect of that energy drink, making the results more reliable.
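
Because the same people appear in every condition, the analysis compares each person to themselves. A minimal sketch, with invented run times:

```python
from statistics import mean

# Hypothetical run times (seconds) for the same five runners under both conditions.
with_drink    = [58.2, 61.0, 55.4, 63.1, 59.8]
without_drink = [60.5, 62.3, 56.0, 65.2, 61.1]

# Within-person differences: each runner is compared against themselves.
diffs = [w - wo for w, wo in zip(with_drink, without_drink)]
print(f"Mean change with the drink: {mean(diffs):+.2f} s")  # negative = faster
```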

Repeated Measures Design Pros

The strong point of Repeated Measures Design is that it's super focused. Because it uses the same subjects, you don't have to worry about differences between groups messing up your results.

Repeated Measures Design Cons

But the downside? Well, people can get tired or bored if they're tested too many times, which might affect how they respond.

Repeated Measures Design Uses

A famous example of this design is the "Little Albert" experiment, conducted by John B. Watson and Rosalie Rayner in 1920. In this study, a young boy was exposed to a white rat and other stimuli several times to see how his emotional responses changed. Though the ethical standards of this experiment are often criticized today, it was groundbreaking in understanding conditioned emotional responses.

In our metaphorical lineup of research designs, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, and Non-Experimental Design is the journalist, then Repeated Measures Design is the time traveler—always looping back to fine-tune the game plan.

11) Crossover Design

Next up is Crossover Design, the switch-hitter of the research world. If you're familiar with baseball, you'll know a switch-hitter is someone who can bat both right-handed and left-handed.

In a similar way, Crossover Design allows subjects to experience multiple conditions, flipping them around so that everyone gets a turn in each role.

This design is like the utility player on our team—versatile, flexible, and really good at adapting.

The Crossover Design has its roots in medical research and has been popular since the mid-20th century. It's often used in clinical trials to test the effectiveness of different treatments.

Crossover Design Pros

The neat thing about this design is that it allows each participant to serve as their own control group, which cuts down on the "noise" that comes from individual differences. Imagine you're testing two new kinds of headache medicine. Instead of giving one type to one group and another type to a different group, you'd give both kinds to the same people but at different times.

Crossover Design Cons

There's a catch, though. This design assumes that the first condition leaves no lasting effect by the time you switch to the second one. That might not always be true: if the first treatment has a long-lasting effect, it can muddy the results when you switch to the second treatment.

Crossover Design Uses

A well-known example of Crossover Design is in studies that look at the effects of different types of diets—like low-carb vs. low-fat diets. Researchers might have participants follow a low-carb diet for a few weeks, then switch them to a low-fat diet. By doing this, they can more accurately measure how each diet affects the same group of people.
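
The scheduling trick here is counterbalancing: half the participants get the conditions in one order, half in the reverse order, so order effects cancel out. A quick sketch with invented IDs:

```python
import random

participants = [f"P{i}" for i in range(1, 9)]  # hypothetical IDs
random.shuffle(participants)

# Half follow sequence AB (low-carb then low-fat), half follow BA,
# so any carryover or order effects are balanced across sequences.
half = len(participants) // 2
schedule = {p: ("low-carb", "low-fat") for p in participants[:half]}
schedule.update({p: ("low-fat", "low-carb") for p in participants[half:]})

for p, seq in sorted(schedule.items()):
    print(p, "->", " then ".join(seq))
```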

In our team of experimental designs, if True Experimental Design is the quarterback and Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, and Repeated Measures Design is the time traveler, then Crossover Design is the versatile utility player—always ready to adapt and play multiple roles to get the most accurate results.

12) Cluster Randomized Design

Meet the Cluster Randomized Design, the team captain of group-focused research. In our imaginary lineup of experimental designs, if other designs focus on individual players, then Cluster Randomized Design is looking at how the entire team functions.

This approach is especially common in educational and community-based research, and it's been gaining traction since the late 20th century.

Here's how Cluster Randomized Design works: Instead of assigning individual people to different conditions, researchers assign entire groups, or "clusters." These could be schools, neighborhoods, or even entire towns. This helps you see how the new method works in a real-world setting.

Imagine you want to see if a new anti-bullying program really works. Instead of selecting individual students, you'd introduce the program to a whole school or maybe even several schools, and then compare the results to schools without the program.
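
Mechanically, the randomization works just like before, except the "units" being shuffled are whole groups. A minimal sketch (the school names are invented):

```python
import random

# Hypothetical clusters: whole schools get assigned, not individual students.
schools = ["Oakwood", "Riverside", "Hillcrest", "Lakeview", "Maplewood", "Sunnydale"]
random.shuffle(schools)

half = len(schools) // 2
program_schools = schools[:half]  # receive the anti-bullying program
control_schools = schools[half:]  # carry on as usual

print("Program:", program_schools)
print("Control:", control_schools)
```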

Cluster Randomized Design Pros

Why use Cluster Randomized Design? Well, sometimes it's just not practical to assign conditions at the individual level. For example, you can't really have half a school following a new reading program while the other half sticks with the old one; that would be way too confusing! Cluster Randomization helps get around this problem by treating each "cluster" as its own mini-experiment.

Cluster Randomized Design Cons

There's a downside, too. Because entire groups are assigned to each condition, there's a risk that the groups might be different in some important way that the researchers didn't account for. That's like having one sports team that's full of veterans playing against a team of rookies; the match wouldn't be fair.

Cluster Randomized Design Uses

A famous example is the research conducted to test the effectiveness of different public health interventions, like vaccination programs. Researchers might roll out a vaccination program in one community but not in another, then compare the rates of disease in both.

In our metaphorical research team, if True Experimental Design is the quarterback, Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, Repeated Measures Design is the time traveler, and Crossover Design is the utility player, then Cluster Randomized Design is the team captain—always looking out for the group as a whole.

13) Mixed-Methods Design

Say hello to Mixed-Methods Design, the all-rounder or the "Renaissance player" of our research team.

Mixed-Methods Design uses a blend of both qualitative and quantitative methods to get a more complete picture, just like a Renaissance person who's good at lots of different things. It's like being good at both offense and defense in a sport; you've got all your bases covered!

Mixed-Methods Design is a fairly new kid on the block, becoming more popular in the late 20th and early 21st centuries as researchers began to see the value in using multiple approaches to tackle complex questions. It's the Swiss Army knife in our research toolkit, combining the best parts of other designs to be more versatile.

Here's how it could work: Imagine you're studying the effects of a new educational app on students' math skills. You might use quantitative methods like tests and grades to measure how much the students improve—that's the 'numbers part.'

But you also want to know how the students feel about math now, or why they think they got better or worse. For that, you could conduct interviews or have students fill out journals—that's the 'story part.'

Mixed-Methods Design Pros

So, what's the scoop on Mixed-Methods Design? The strength is its versatility and depth; you're not just getting numbers or stories, you're getting both, which gives a fuller picture.

Mixed-Methods Design Cons

But, it's also more challenging. Imagine trying to play two sports at the same time! You have to be skilled in different research methods and know how to combine them effectively.

Mixed-Methods Design Uses

A high-profile example of Mixed-Methods Design is research on climate change. Scientists use numbers and data to show temperature changes (quantitative), but they also interview people to understand how these changes are affecting communities (qualitative).

In our team of experimental designs, if True Experimental Design is the quarterback, Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, Repeated Measures Design is the time traveler, Crossover Design is the utility player, and Cluster Randomized Design is the team captain, then Mixed-Methods Design is the Renaissance player—skilled in multiple areas and able to bring them all together for a winning strategy.

14) Multivariate Design

Now, let's turn our attention to Multivariate Design, the multitasker of the research world.

If our lineup of research designs were like players on a basketball court, Multivariate Design would be the player dribbling, passing, and shooting all at once. This design doesn't just look at one or two things; it looks at several variables simultaneously to see how they interact and affect each other.

Multivariate Design is like baking a cake with many ingredients. Instead of just looking at how flour affects the cake, you also consider sugar, eggs, and milk all at once. This way, you understand how everything works together to make the cake taste good or bad.

Multivariate Design has been a go-to method in psychology, economics, and social sciences since the latter half of the 20th century. With the advent of computers and advanced statistical software, analyzing multiple variables at once became a lot easier, and Multivariate Design soared in popularity.

Multivariate Design Pros

So, what's the benefit of using Multivariate Design? Its power lies in its complexity. By studying multiple variables at the same time, you can get a really rich, detailed understanding of what's going on.

Multivariate Design Cons

But that complexity can also be a drawback. With so many variables, it can be tough to tell which ones are really making a difference and which ones are just along for the ride.

Multivariate Design Uses

Imagine you're a coach trying to figure out the best strategy to win games. You wouldn't just look at how many points your star player scores; you'd also consider assists, rebounds, turnovers, and maybe even how loud the crowd is. A Multivariate Design would help you understand how all these factors work together to determine whether you win or lose.

A well-known example of Multivariate Design is in market research. Companies often use this approach to figure out how different factors—like price, packaging, and advertising—affect sales. By studying multiple variables at once, they can find the best combination to boost profits.
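
A common way to study several variables at once is multiple regression: one model estimates each factor's effect while accounting for the others. Here's a rough sketch with invented monthly sales data (it assumes NumPy is installed):

```python
import numpy as np

# Hypothetical monthly data: price ($), ad spend ($k), and units sold.
price = np.array([10, 12, 9, 11, 13, 8])
ads   = np.array([5, 6, 4, 7, 8, 3])
sales = np.array([210, 230, 190, 220, 240, 180])

# Fit sales ~ intercept + price + ads in a single model, so we can see
# how the variables act together rather than one at a time.
X = np.column_stack([np.ones_like(price), price, ads])
coef, *_ = np.linalg.lstsq(X, sales, rcond=None)
print(f"intercept={coef[0]:.1f}, price effect={coef[1]:.1f}, ads effect={coef[2]:.1f}")
```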

In our metaphorical research team, if True Experimental Design is the quarterback, Longitudinal Design is the wise elder, Factorial Design is the strategist, Cross-Sectional Design is the speedster, Correlational Design is the scout, Meta-Analysis is the coach, Non-Experimental Design is the journalist, Repeated Measures Design is the time traveler, Crossover Design is the utility player, Cluster Randomized Design is the team captain, and Mixed-Methods Design is the Renaissance player, then Multivariate Design is the multitasker—juggling many variables at once to get a fuller picture of what's happening.

15) Pretest-Posttest Design

Let's introduce Pretest-Posttest Design, the "Before and After" superstar of our research team. You've probably seen those before-and-after pictures in ads for weight loss programs or home renovations, right?

Well, this design is like that, but for science! Pretest-Posttest Design checks out what things are like before the experiment starts and then compares that to what things are like after the experiment ends.

This design is one of the classics, a staple in research for decades across various fields like psychology, education, and healthcare. It's so simple and straightforward that it has stayed popular for a long time.

In Pretest-Posttest Design, you measure your subject's behavior or condition before you introduce any changes—that's your "before" or "pretest." Then you do your experiment, and after it's done, you measure the same thing again—that's your "after" or "posttest."
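
The arithmetic is as simple as the idea: subtract each "before" score from its "after" score. A tiny sketch with invented quiz scores:

```python
from statistics import mean

# Hypothetical multiplication quiz scores (out of 20) for the same six kids.
pretest  = [11, 9, 14, 12, 10, 13]
posttest = [15, 12, 16, 14, 13, 17]

gains = [post - pre for pre, post in zip(pretest, posttest)]
print(f"Average gain: {mean(gains):.1f} points")  # bigger = more improvement
```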

Pretest-Posttest Design Pros

What makes Pretest-Posttest Design special? It's pretty easy to understand and doesn't require fancy statistics.

Pretest-Posttest Design Cons

But there are some pitfalls. For example, what if the kids in the math example below get better at multiplication just because they're older or because they've taken the test before? That would make it hard to tell whether the program is really effective.

Pretest-Posttest Design Uses

Let's say you're a teacher and you want to know if a new math program helps kids get better at multiplication. First, you'd give all the kids a multiplication test—that's your pretest. Then you'd teach them using the new math program. At the end, you'd give them the same test again—that's your posttest. If the kids do better on the second test, you might conclude that the program works.

One famous use of Pretest-Posttest Design is in evaluating the effectiveness of driver's education courses. Researchers will measure people's driving skills before and after the course to see if they've improved.

16) Solomon Four-Group Design

Next up is the Solomon Four-Group Design, the "chess master" of our research team. This design is all about strategy and careful planning. Named after Richard L. Solomon, who introduced it in the 1940s, this method tries to correct some of the weaknesses of simpler designs, like the Pretest-Posttest Design.

Here's how it rolls: The Solomon Four-Group Design uses four different groups to test a hypothesis. Two groups get a pretest, then one of them receives the treatment or intervention, and both get a posttest. The other two groups skip the pretest, and only one of them receives the treatment before they both get a posttest.

Sound complicated? It's like playing 4D chess; you're thinking several moves ahead!
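
If it helps to see the structure laid out, here's a small sketch of the four groups (just a description of the design, not a full analysis):

```python
# The four groups of a Solomon design, as (pretest?, treatment?, posttest?).
groups = {
    "Group 1": (True,  True,  True),   # pretest + treatment + posttest
    "Group 2": (True,  False, True),   # pretest + posttest only
    "Group 3": (False, True,  True),   # treatment + posttest only
    "Group 4": (False, False, True),   # posttest only
}

for name, (pre, treat, post) in groups.items():
    steps = [label for label, on in
             [("pretest", pre), ("treatment", treat), ("posttest", post)] if on]
    print(f"{name}: {' -> '.join(steps)}")
```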

Solomon Four-Group Design Pros

What's the big advantage of the Solomon Four-Group Design? It provides really robust results because it lets you check not only whether the treatment worked, but also whether simply taking the pretest changed how people responded.

Solomon Four-Group Design Cons

The downside? It's a lot of work and requires a lot of participants, making it more time-consuming and costly.

Solomon Four-Group Design Uses

Let's say you want to figure out if a new way of teaching history helps students remember facts better. Two classes take a history quiz (pretest), then one class uses the new teaching method while the other sticks with the old way. Both classes take another quiz afterward (posttest).

Meanwhile, two more classes skip the initial quiz, and then one uses the new method before both take the final quiz. Comparing all four groups will give you a much clearer picture of whether the new teaching method works and whether the pretest itself affects the outcome.

The Solomon Four-Group Design is less commonly used than simpler designs but is highly respected for its ability to control for more variables. It's a favorite in educational and psychological research where you really want to dig deep and figure out what's actually causing changes.

17) Adaptive Designs

Now, let's talk about Adaptive Designs, the chameleons of the experimental world.

Imagine you're a detective, and halfway through solving a case, you find a clue that changes everything. You wouldn't just stick to your old plan; you'd adapt and change your approach, right? That's exactly what Adaptive Designs allow researchers to do.

In an Adaptive Design, researchers can make changes to the study as it's happening, based on early results. In a traditional study, by contrast, once you set your plan, you stick to it from start to finish.

Adaptive Design Pros

This method is particularly useful in fast-paced or high-stakes situations, like developing a new vaccine in the middle of a pandemic. The ability to adapt can save both time and resources, and more importantly, it can save lives by getting effective treatments out faster.

Adaptive Design Cons

But Adaptive Designs aren't without their drawbacks. They can be very complex to plan and carry out, and there's always a risk that the changes made during the study could introduce bias or errors.

Adaptive Design Uses

Adaptive Designs are most often seen in clinical trials, particularly in the medical and pharmaceutical fields.

For instance, if a new drug is showing really promising results, the study might be adjusted to give more participants the new treatment instead of a placebo. Or if one dose level is showing bad side effects, it might be dropped from the study.

The best part is, these changes are pre-planned. Researchers lay out in advance what changes might be made and under what conditions, which helps keep everything scientific and above board.
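
To get a feel for one flavor of adaptation, here's a toy "play-the-winner"-style sketch, where the chance of being assigned to an arm grows as that arm accumulates successes. All the numbers are invented, and real trials use pre-specified statistical rules rather than anything this simple:

```python
import random

# Start both arms with a neutral count so neither dominates at the outset.
wins = {"treatment": 1, "placebo": 1}

def next_assignment():
    total = wins["treatment"] + wins["placebo"]
    p_treatment = wins["treatment"] / total
    return "treatment" if random.random() < p_treatment else "placebo"

# Hypothetical outcome rates: treatment succeeds 70% of the time, placebo 40%.
for _ in range(30):
    arm = next_assignment()
    if random.random() < (0.7 if arm == "treatment" else 0.4):
        wins[arm] += 1  # a success nudges future assignments toward this arm

print(wins)  # the treatment arm tends to accumulate more successes
```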

In terms of applications, besides their heavy usage in medical and pharmaceutical research, Adaptive Designs are also becoming increasingly popular in software testing and market research. In these fields, being able to quickly adjust to early results can give companies a significant advantage.

Adaptive Designs are like the agile startups of the research world—quick to pivot, keen to learn from ongoing results, and focused on rapid, efficient progress. However, they require a great deal of expertise and careful planning to ensure that the adaptability doesn't compromise the integrity of the research.

18) Bayesian Designs

Next, let's dive into Bayesian Designs, the data detectives of the research universe. Named after Thomas Bayes, an 18th-century statistician and minister, this design doesn't just look at what's happening now; it also takes into account what's happened before.

Imagine if you were a detective who not only looked at the evidence in front of you but also used your past cases to make better guesses about your current one. That's the essence of Bayesian Designs.

Bayesian Designs are like detective work in science. As you gather more clues (or data), you update your best guess on what's really happening. This way, your experiment gets smarter as it goes along.

In the world of research, Bayesian Designs are most notably used in areas where you have some prior knowledge that can inform your current study. For example, if earlier research shows that a certain type of medicine usually works well for a specific illness, a Bayesian Design would include that information when studying a new group of patients with the same illness.
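
Here's a bite-sized example of Bayesian updating using a beta-binomial model. The prior and trial numbers are invented for illustration:

```python
# Prior belief: earlier research suggests the medicine helped roughly
# 14 of 20 similar patients, encoded as a Beta(14, 6) prior.
prior_alpha, prior_beta = 14, 6

# New hypothetical data: 8 of 10 new patients improve.
successes, failures = 8, 2

# Bayesian updating for a beta-binomial model is just addition.
post_alpha = prior_alpha + successes
post_beta = prior_beta + failures
posterior_mean = post_alpha / (post_alpha + post_beta)

print(f"Updated estimate of the success rate: {posterior_mean:.2f}")  # ~0.73
```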

Bayesian Design Pros

One of the major advantages of Bayesian Designs is their efficiency. Because they use existing data to inform the current experiment, often fewer resources are needed to reach a reliable conclusion.

Bayesian Design Cons

However, they can be quite complicated to set up and require a deep understanding of both statistics and the subject matter at hand.

Bayesian Design Uses

Bayesian Designs are highly valued in medical research, finance, environmental science, and even in Internet search algorithms. Their ability to continually update and refine hypotheses based on new evidence makes them particularly useful in fields where data is constantly evolving and where quick, informed decisions are crucial.

Here's a real-world example: In the development of personalized medicine, where treatments are tailored to individual patients, Bayesian Designs are invaluable. If a treatment has been effective for patients with similar genetics or symptoms in the past, a Bayesian approach can use that data to predict how well it might work for a new patient.

This type of design is also increasingly popular in machine learning and artificial intelligence. In these fields, Bayesian Designs help algorithms "learn" from past data to make better predictions or decisions in new situations. It's like teaching a computer to be a detective that gets better and better at solving puzzles the more puzzles it sees.

19) Covariate Adaptive Randomization

Now let's turn our attention to Covariate Adaptive Randomization, which you can think of as the "matchmaker" of experimental designs.

Picture a soccer coach trying to create the most balanced teams for a friendly match. They wouldn't just randomly assign players; they'd take into account each player's skills, experience, and other traits.

Covariate Adaptive Randomization is all about creating the most evenly matched groups possible for an experiment.

In traditional randomization, participants are allocated to different groups purely by chance. This is a pretty fair way to do things, but it can sometimes lead to unbalanced groups.

Imagine if all the professional-level players ended up on one soccer team and all the beginners on another; that wouldn't be a very informative match! Covariate Adaptive Randomization fixes this by using important traits or characteristics (called "covariates") to guide the randomization process.
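
Here's a deliberately simplified sketch of the idea. Real trials use formal algorithms such as minimization; this toy version just steers each new participant toward the group where they'd improve the balance:

```python
# Toy covariate-adaptive assignment: each new participant joins whichever
# group currently contains fewer people who share their traits.
groups = {"A": [], "B": []}

def assign(traits):
    def overlap(g):
        # How many trait-matches does this group already contain?
        return sum(len(traits & member) for member in groups[g])
    # Prefer the group with less overlap; break ties by group size.
    target = min(groups, key=lambda g: (overlap(g), len(groups[g])))
    groups[target].append(traits)
    return target

# Hypothetical patients described by age band and weight band.
for patient in [{"older", "heavy"}, {"young", "light"}, {"older", "light"}]:
    print(sorted(patient), "->", assign(patient))
```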

Covariate Adaptive Randomization Pros

The benefits of this design are pretty clear: it aims for balance and fairness, making the final results more trustworthy.

Covariate Adaptive Randomization Cons

But it's not perfect. It can be complex to implement and requires a deep understanding of which characteristics are most important to balance.

Covariate Adaptive Randomization Uses

This design is particularly useful in medical trials. Let's say researchers are testing a new medication for high blood pressure. Participants might have different ages, weights, or pre-existing conditions that could affect the results.

Covariate Adaptive Randomization would make sure that each treatment group has a similar mix of these characteristics, making the results more reliable and easier to interpret.

In practical terms, this design is often seen in clinical trials for new drugs or therapies, but its principles are also applicable in fields like psychology, education, and social sciences.

For instance, in educational research, it might be used to ensure that classrooms being compared have similar distributions of students in terms of academic ability, socioeconomic status, and other factors.

Covariate Adaptive Randomization is like the matchmaker of the group, ensuring that everyone has an equal opportunity to show their true capabilities, thereby making the collective results as reliable as possible.

20) Stepped Wedge Design

Let's now focus on the Stepped Wedge Design, a thoughtful and cautious member of the experimental design family.

Imagine you're trying out a new gardening technique, but you're not sure how well it will work. You decide to apply it to one section of your garden first, watch how it performs, and then gradually extend the technique to other sections. This way, you get to see its effects over time and across different conditions. That's basically how Stepped Wedge Design works.

In a Stepped Wedge Design, all participants or clusters start off in the control group, and then, at different times, they 'step' over to the intervention or treatment group. This creates a wedge-like pattern over time where more and more participants receive the treatment as the study progresses. It's like rolling out a new policy in phases, monitoring its impact at each stage before extending it to more people.
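The wedge shape is easiest to see when you draw the schedule out. This little sketch prints an illustrative rollout for four clusters over five time periods; the timing is made up.

```python
# Sketch of a stepped wedge rollout: 4 clusters, 5 time periods.
# 'C' = control, 'T' = treatment; one more cluster "steps over" each period.
clusters, periods = 4, 5

for cluster in range(clusters):
    # Cluster k crosses to treatment at period k + 1 (illustrative schedule).
    row = ["T" if period > cluster else "C" for period in range(periods)]
    print(f"cluster {cluster + 1}: {' '.join(row)}")
```

Reading the output row by row, every cluster starts in control and ends in treatment, and the diagonal boundary between the C's and T's is the "wedge."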

Stepped Wedge Design Pros

The Stepped Wedge Design offers several advantages. Firstly, it allows for the study of interventions that are expected to do more good than harm, which makes it ethically appealing.

Secondly, it's useful when resources are limited and it's not feasible to roll out a new treatment to everyone at once. Lastly, because everyone eventually receives the treatment, it can be easier to get buy-in from participants or organizations involved in the study.

Stepped Wedge Design Cons

However, this design can be complex to analyze because it has to account for both the time factor and the changing conditions in each 'step' of the wedge. And like any study where participants know they're receiving an intervention, there's the potential for the results to be influenced by the placebo effect or other biases.

Stepped Wedge Design Uses

This design is particularly useful in health and social care research. For instance, if a hospital wants to implement a new hygiene protocol, it might start in one department, assess its impact, and then roll it out to other departments over time. This allows the hospital to adjust and refine the new protocol based on real-world data before it's fully implemented.

In terms of applications, Stepped Wedge Designs are commonly used in public health initiatives, organizational changes in healthcare settings, and social policy trials. They are particularly useful in situations where an intervention is being rolled out gradually and it's important to understand its impacts at each stage.

21) Sequential Design

Next up is Sequential Design, the dynamic and flexible member of our experimental design family.

Imagine you're playing a video game where you can choose different paths. If you take one path and find a treasure chest, you might decide to continue in that direction. If you hit a dead end, you might backtrack and try a different route. Sequential Design operates in a similar fashion, allowing researchers to make decisions at different stages based on what they've learned so far.

In a Sequential Design, the experiment is broken down into smaller parts, or "sequences." After each sequence, researchers pause to look at the data they've collected. Based on those findings, they then decide whether to stop the experiment because they've got enough information, or to continue and perhaps even modify the next sequence.
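Here's a toy simulation of that pause-and-decide loop. The response rate, batch size, and stopping thresholds are all invented; real sequential trials use pre-registered statistical stopping boundaries rather than simple cutoffs like these.

```python
import random

# Toy sequential trial: after each batch ("sequence") of participants,
# pause and apply a simple stop/go rule.
random.seed(42)
true_response_rate = 0.72   # unknown in real life; assumed here to simulate data

responses = []
for sequence in range(1, 6):
    # Collect one batch of 20 simulated participants.
    responses += [random.random() < true_response_rate for _ in range(20)]
    rate = sum(responses) / len(responses)
    print(f"After sequence {sequence}: n={len(responses)}, response rate={rate:.2f}")

    if rate >= 0.70 and len(responses) >= 40:
        print("Stop early: evidence of benefit is strong enough.")
        break
    if rate <= 0.30 and len(responses) >= 40:
        print("Stop early: treatment looks ineffective or harmful.")
        break
else:
    print("Reached the planned maximum sample size.")
```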

Sequential Design Pros

One of the great things about Sequential Design is its efficiency: because you're making data-driven decisions along the way, you only continue the experiment if the data suggests it's worth doing so, which often means reaching conclusions more quickly and with fewer resources.

Sequential Design Cons

However, it requires careful planning and expertise to ensure that these "stop or go" decisions are made correctly and without bias.

Sequential Design Uses

This design is often used in clinical trials involving new medications or treatments. For example, if early results show that a new drug has significant side effects, the trial can be stopped before more people are exposed to it.

On the flip side, if the drug is showing promising results, the trial might be expanded to include more participants or to extend the testing period.

Beyond healthcare and medicine, Sequential Design is also popular in quality control in manufacturing, environmental monitoring, and financial modeling. In these areas, being able to make quick decisions based on incoming data can be a big advantage.

Think of Sequential Design as the nimble athlete of experimental designs, capable of quick pivots and adjustments to reach the finish line in the most effective way possible. But just like an athlete needs a good coach, this design requires expert oversight to make sure it stays on the right track.

22) Field Experiments

Last but certainly not least, let's explore Field Experiments—the adventurers of the experimental design world.

Picture a scientist leaving the controlled environment of a lab to test a theory in the real world, like a biologist studying animals in their natural habitat or a social scientist observing people in a real community. These are Field Experiments, and they're all about getting out there and gathering data in real-world settings.

Field Experiments embrace the messiness of the real world, unlike laboratory experiments, where everything is controlled down to the smallest detail. This makes them both exciting and challenging.

Field Experiment Pros

The results often give us a better understanding of how things work outside the lab, offering a level of real-world relevance that laboratory studies struggle to match.

Field Experiment Cons

On the other hand, the lack of control can make it harder to tell exactly what's causing what, and intervening in people's lives without their knowledge raises ethical concerns. Yet, despite these challenges, Field Experiments remain a valuable tool for researchers who want to understand how theories play out in the real world.

Field Experiment Uses

Let's say a school wants to improve student performance. In a Field Experiment, they might change the school's daily schedule for one semester and keep track of how students perform compared to another school where the schedule remained the same.

Because the study is happening in a real school with real students, the results could be very useful for understanding how the change might work in other schools. But since it's the real world, lots of other factors—like changes in teachers or even the weather—could affect the results.

Field Experiments are widely used in economics, psychology, education, and public policy. For example, you might have heard of the famous "broken windows" theory, popularized in the 1980s, which proposed that small signs of disorder, like broken windows or graffiti, can encourage more serious crime in neighborhoods. Research inspired by this idea had a big impact on how cities think about crime prevention.

From the foundational concepts of control groups and independent variables to sophisticated designs like Covariate Adaptive Randomization and Sequential Design, it's clear that the realm of experimental design is as varied as it is fascinating.

We've seen that each design has its own special talents, ideal for specific situations. Some designs, like the Classic Controlled Experiment, are like reliable old friends you can always count on.

Others, like Sequential Design, are flexible and adaptable, making quick changes based on what they learn. And let's not forget the adventurous Field Experiments, which take us out of the lab and into the real world to discover things we might not see otherwise.

Choosing the right experimental design is like picking the right tool for the job. The method you choose can make a big difference in how reliable your results are and how much people will trust what you've discovered. And as we've learned, there's a design to suit just about every question, every problem, and every curiosity.

So the next time you read about a new discovery in medicine, psychology, or any other field, you'll have a better understanding of the thought and planning that went into figuring things out. Experimental design is more than just a set of rules; it's a structured way to explore the unknown and answer questions that can change the world.


Experimental Design: Types, Examples & Methods


Experimental design refers to how participants are allocated to different groups in an experiment. Types of design include repeated measures, independent groups, and matched pairs designs.

Probably the most common way to design an experiment in psychology is to divide the participants into two groups, the experimental group and the control group, and then introduce a change to the experimental group, not the control group.

The researcher must decide how to allocate the sample to the different experimental groups. For example, if there are 10 participants, will all 10 take part in both conditions (e.g., repeated measures), or will the participants be split in half and take part in only one condition each?

Three types of experimental designs are commonly used:

1. Independent Measures

Independent measures design, also known as between-groups , is an experimental design where different participants are used in each condition of the independent variable.  This means that each condition of the experiment includes a different group of participants.

This should be done by random allocation, ensuring that each participant has an equal chance of being assigned to each group.

Independent measures involve using two separate groups of participants, one in each condition.


  • Pro : Avoids order effects (such as practice or fatigue) as people participate in one condition only. If a person is involved in several conditions, they may become bored, tired, and fed up by the time they come to the second condition, or become wise to the requirements of the experiment!
  • Con : More people are needed than with the repeated measures design (i.e., more time-consuming).
  • Con : Differences between participants in the groups may affect results, for example, variations in age, gender, or social background. These differences are known as participant variables (i.e., a type of extraneous variable).
  • Control : After the participants have been recruited, they should be randomly assigned to their groups. This should ensure the groups are similar, on average (reducing participant variables); see the sketch below.
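As a concrete illustration of that control, here's a minimal sketch of random allocation for the 10-participant example above; the participant labels are hypothetical.

```python
import random

# Minimal random allocation sketch: split a recruited sample into two
# independent groups, giving everyone an equal chance of either condition.
participants = [f"P{i}" for i in range(1, 11)]  # 10 hypothetical participants

random.shuffle(participants)
half = len(participants) // 2
experimental, control = participants[:half], participants[half:]

print("Experimental:", experimental)
print("Control:     ", control)
```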

2. Repeated Measures Design

Repeated Measures design is an experimental design where the same participants take part in each condition of the independent variable. This means that each condition of the experiment includes the same group of participants.

Repeated Measures design is also known as within-groups or within-subjects design .

  • Pro : As the same participants are used in each condition, participant variables (i.e., individual differences) are reduced.
  • Pro : Fewer people are needed as they participate in all conditions (i.e., saves time).
  • Con : There may be order effects. Order effects refer to the order of the conditions affecting the participants’ behavior. Performance in the second condition may be better because the participants know what to do (i.e., practice effect). Or their performance might be worse in the second condition because they are tired (i.e., fatigue effect). This limitation can be controlled using counterbalancing.
  • Control : To combat order effects, the researcher counterbalances the order of the conditions for the participants, alternating the order in which participants perform in the different conditions of the experiment.

Counterbalancing

Suppose we used a repeated measures design in which all of the participants first learned words in “loud noise” and then learned them in “no noise.”

We expect the participants to learn better in “no noise” because of order effects, such as practice. However, a researcher can control for order effects using counterbalancing.

The sample would be split into two groups, with the two conditions labelled A (“loud noise”) and B (“no noise”). For example, group 1 does ‘A’ then ‘B,’ and group 2 does ‘B’ then ‘A.’ This is to eliminate order effects.

Although order effects occur for each participant, they balance each other out in the results because they occur equally in both groups.
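In code, counterbalancing is just a random split of the sample into two condition orders. A minimal sketch, with hypothetical participant labels:

```python
import random

# Counterbalancing sketch: everyone does both conditions, but half the
# sample does "loud noise" (A) first and half does "no noise" (B) first.
participants = [f"P{i}" for i in range(1, 9)]
random.shuffle(participants)

orders = {
    "A then B (loud noise first)": participants[: len(participants) // 2],
    "B then A (no noise first)":   participants[len(participants) // 2 :],
}
for order, group in orders.items():
    print(order, "->", group)
```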


3. Matched Pairs Design

A matched pairs design is an experimental design where pairs of participants are matched in terms of key variables, such as age or socioeconomic status. One member of each matched pair is then randomly assigned to the experimental group and the other to the control group.
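Here's a minimal sketch of the matching-then-randomizing idea, using a single made-up matching variable (a test score); real matching usually considers several variables at once.

```python
import random

# Matched pairs sketch: sort by a matching variable (here, a test score),
# pair adjacent participants, then randomly split each pair between groups.
people = [("P1", 90), ("P2", 88), ("P3", 75), ("P4", 74), ("P5", 60), ("P6", 59)]
people.sort(key=lambda p: p[1], reverse=True)

experimental, control = [], []
for i in range(0, len(people), 2):
    pair = [people[i], people[i + 1]]
    random.shuffle(pair)                 # random assignment within each pair
    experimental.append(pair[0][0])
    control.append(pair[1][0])

print("Experimental:", experimental)
print("Control:     ", control)
```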


  • Pro : Reduces participant variables because the researcher has tried to pair up the participants so that each condition has people with similar abilities and characteristics.
  • Pro : It avoids order effects, so counterbalancing is not necessary.
  • Con : If one participant drops out, you lose two participants’ data.
  • Con : Very time-consuming trying to find closely matched pairs.
  • Con : Impossible to match people exactly unless they are identical twins!
  • Control : Members of each pair should be randomly assigned to conditions. However, this does not solve all of these problems.

Experimental design refers to how participants are allocated to an experiment’s different conditions (or IV levels). There are three types:

1. Independent measures / between-groups : Different participants are used in each condition of the independent variable.

2. Repeated measures /within groups : The same participants take part in each condition of the independent variable.

3. Matched pairs : Each condition uses different participants, but they are matched in terms of important characteristics, e.g., gender, age, intelligence, etc.

Learning Check

Read about each of the experiments below. For each experiment, identify (1) which experimental design was used; and (2) why the researcher might have used that design.

1 . To compare the effectiveness of two different types of therapy for depression, depressed patients were assigned to receive either cognitive therapy or behavior therapy for a 12-week period.

The researchers attempted to ensure that the patients in the two groups had similar severity of depressed symptoms by administering a standardized test of depression to each participant, then pairing them according to the severity of their symptoms.

2 . To assess the difference in reading comprehension between 7 and 9-year-olds, a researcher recruited each group from a local primary school. They were given the same passage of text to read and then asked a series of questions to assess their understanding.

3 . To assess the effectiveness of two different ways of teaching reading, a group of 5-year-olds was recruited from a primary school. Their level of reading ability was assessed, and then they were taught using scheme one for 20 weeks.

At the end of this period, their reading was reassessed, and a reading improvement score was calculated. They were then taught using scheme two for a further 20 weeks, and another reading improvement score for this period was calculated. The reading improvement scores for each child were then compared.

4 . To assess the effect of the organization on recall, a researcher randomly assigned student volunteers to two conditions.

Condition one attempted to recall a list of words that were organized into meaningful categories; condition two attempted to recall the same words, randomly grouped on the page.

Experiment Terminology

Ecological validity

The degree to which an investigation represents real-life experiences.

Experimenter effects

These are the ways that the experimenter can accidentally influence the participant through their appearance or behavior.

Demand characteristics

The clues in an experiment that lead the participants to think they know what the researcher is looking for (e.g., the experimenter’s body language).

Independent variable (IV)

The variable the experimenter manipulates (i.e., changes), which is assumed to have a direct effect on the dependent variable.

Dependent variable (DV)

Variable the experimenter measures. This is the outcome (i.e., the result) of a study.

Extraneous variables (EV)

All variables which are not independent variables but could affect the results (DV) of the experiment. Extraneous variables should be controlled where possible.

Confounding variables

Variable(s) that have affected the results (DV), apart from the IV. A confounding variable could be an extraneous variable that has not been controlled.

Random Allocation

Randomly allocating participants to independent variable conditions means that all participants should have an equal chance of taking part in each condition.

The principle of random allocation is to avoid bias in how the experiment is carried out and limit the effects of participant variables.

Order effects

Changes in participants’ performance due to their repeating the same or similar test more than once. Examples of order effects include:

(i) practice effect: an improvement in performance on a task due to repetition, for example, because of familiarity with the task;

(ii) fatigue effect: a decrease in performance of a task due to repetition, for example, because of boredom or tiredness.


Experimental Research Designs: Types, Examples & Methods


Experimental research is the most familiar type of research design for individuals in the physical sciences and a host of other fields. This is mainly because experimental research is a classical scientific experiment, similar to those performed in high school science classes.

Imagine taking 2 samples of the same plant and exposing one of them to sunlight, while the other is kept away from sunlight. Let the plant exposed to sunlight be called sample A, while the latter is called sample B.

If, after the duration of the research, we find that sample A grows while sample B dies, even though both are regularly watered and given the same treatment, we can conclude that sunlight aids growth in similar plants.

What is Experimental Research?

Experimental research is a scientific approach to research, where one or more independent variables are manipulated and applied to one or more dependent variables to measure their effect on the latter. The effect of the independent variables on the dependent variables is usually observed and recorded over some time, to aid researchers in drawing a reasonable conclusion regarding the relationship between these 2 variable types.

The experimental research method is widely used in physical and social sciences, psychology, and education. It is based on the comparison between two or more groups with a straightforward logic, which may, however, be difficult to execute.

Mostly related to a laboratory test procedure, experimental research designs involve collecting quantitative data and performing statistical analysis on them during research, making this an example of a quantitative research method.

What are The Types of Experimental Research Design?

The types of experimental research design are determined by the way the researcher assigns subjects to different conditions and groups. They are of 3 types, namely: pre-experimental, quasi-experimental, and true experimental research.

Pre-experimental Research Design

In pre-experimental research design, either a group or various dependent groups are observed for the effect of the application of an independent variable which is presumed to cause change. It is the simplest form of experimental research design and uses no control group.

Although very practical, pre-experimental research falls short of several criteria for true experiments. The pre-experimental research design is further divided into three types:

  • One-shot Case Study Research Design

In this type of experimental study, only one dependent group or variable is considered. The study is carried out after some treatment which was presumed to cause change, making it a posttest study.

  • One-group Pretest-posttest Research Design: 

This research design combines both posttest and pretest studies by carrying out a test on a single group before the treatment is administered and again after it is administered.

  • Static-group Comparison: 

In a static-group comparison study, 2 or more groups are placed under observation, where only one of the groups is subjected to some treatment while the other groups are held static. All the groups are post-tested, and the observed differences between the groups are assumed to be a result of the treatment.

Quasi-experimental Research Design

The word “quasi” means partial, half, or pseudo. Quasi-experimental research therefore bears a resemblance to true experimental research, but is not the same. In quasi-experiments, the participants are not randomly assigned, and as such, these designs are used in settings where randomization is difficult or impossible.

This is very common in educational research, where administrators are unwilling to allow the random selection of students for experimental samples.

Some examples of quasi-experimental research design include; the time series, no equivalent control group design, and the counterbalanced design.

True Experimental Research Design

The true experimental research design relies on statistical analysis to approve or disprove a hypothesis. It is the most accurate type of experimental design and may be carried out with or without a pretest on at least 2 randomly assigned subjects.

The true experimental research design must contain a control group, a variable that can be manipulated by the researcher, and the distribution must be random. The classifications of true experimental design include:

  • The posttest-only control group design: In this design, subjects are randomly selected and assigned to the 2 groups (control and experimental), and only the experimental group is treated. After close observation, both groups are post-tested, and a conclusion is drawn from the difference between these groups.
  • The pretest-posttest control group design: For this control group design, subjects are randomly assigned to the 2 groups, both are pretested, but only the experimental group is treated. After close observation, both groups are post-tested to measure the degree of change in each group.
  • Solomon four-group design: This is the combination of the posttest-only and the pretest-posttest control group designs. In this case, the randomly selected subjects are placed into 4 groups.

The first two of these groups are tested using the posttest-only method, while the other two are tested using the pretest-posttest method.
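Here's a small sketch of how the four groups differ; the subject labels are hypothetical, and all four groups would receive a posttest.

```python
import random

# Solomon four-group sketch: randomly place subjects into 4 groups that
# differ in whether they get a pretest and whether they get the treatment.
# All four groups are post-tested.
design = {
    "G1": {"pretest": True,  "treatment": True},
    "G2": {"pretest": True,  "treatment": False},
    "G3": {"pretest": False, "treatment": True},
    "G4": {"pretest": False, "treatment": False},
}

subjects = [f"S{i}" for i in range(1, 13)]
random.shuffle(subjects)
for i, subject in enumerate(subjects):
    group = f"G{i % 4 + 1}"   # cycle through the 4 groups after shuffling
    print(subject, "->", group, design[group])
```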

Examples of Experimental Research

Experimental research examples are different, depending on the type of experimental research design that is being considered. The most basic example of experimental research is laboratory experiments, which may differ in nature depending on the subject of research.

Administering Exams After The End of Semester

During the semester, students in a class are lectured on particular courses and an exam is administered at the end of the semester. In this case, the students are the subjects, or dependent variables, while the lectures are the independent variables applied to them.

Only one group of carefully selected subjects is considered in this research, making it a pre-experimental research design example. We will also notice that tests are only carried out at the end of the semester, and not at the beginning, which makes it easy to conclude that this is a one-shot case study.

Employee Skill Evaluation

Before employing a job seeker, organizations conduct tests that are used to screen out less qualified candidates from the pool of qualified applicants. This way, organizations can determine an employee’s skill set at the point of employment.

In the course of employment, organizations also carry out employee training to improve employee productivity and generally grow the organization. Further evaluation is carried out at the end of each training to test the impact of the training on employee skills, and test for improvement.

Here, the subject is the employee, while the treatment is the training conducted. This is a pretest-posttest control group experimental research example.

Evaluation of Teaching Method

Let us consider an academic institution that wants to evaluate the teaching methods of 2 teachers to determine which is best. Imagine a case where the students assigned to each teacher are carefully selected, perhaps due to personal requests by parents or based on ability.

This is a no equivalent control group design example because the samples are not equal. We can evaluate the effectiveness of each teacher’s teaching method this way and draw a conclusion after a post-test has been carried out.

However, the results may be influenced by factors like the natural aptitude of a student. For example, a very smart student will grasp material more easily than his or her peers irrespective of the method of teaching.

What are the Characteristics of Experimental Research?  

Experimental research contains dependent, independent, and extraneous variables. The dependent variables are the outcomes being measured for change in the subjects of the research.

The independent variables are the experimental treatments applied to the subjects. Extraneous variables, on the other hand, are other factors affecting the experiment that may also contribute to the change.

The setting is where the experiment is carried out. Many experiments are carried out in the laboratory, where control can be exerted on the extraneous variables, thereby eliminating them. 

Other experiments are carried out in a less controllable setting. The choice of setting used in research depends on the nature of the experiment being carried out.

  • Multivariable: Experimental research may include multiple independent variables, e.g., time, skills, test scores, etc.

Why Use Experimental Research Design?  

Experimental research design can be majorly used in physical sciences, social sciences, education, and psychology. It is used to make predictions and draw conclusions on a subject matter. 

Some uses of experimental research design are highlighted below.

  • Medicine: Experimental research is used to develop the proper treatment for diseases. In most cases, rather than directly using patients as the research subjects, researchers take a sample of bacteria from the patient’s body and treat it with the developed antibacterial agent.

The changes observed during this period are recorded and evaluated to determine the treatment’s effectiveness. This process can be carried out using different experimental research methods.

  • Education: Aside from science subjects like Chemistry and Physics, which involve teaching students how to perform experimental research, it can also be used to improve the standard of an academic institution. This includes testing students’ knowledge on different topics, coming up with better teaching methods, and implementing other programs that will aid student learning.
  • Human Behavior: Social scientists mostly use experimental research to test human behaviour. For example, consider 2 people randomly chosen to be the subjects of social interaction research, where one person is placed in a room without human interaction for 1 year.

The other person is placed in a room with a few other people, enjoying human interaction. There will be a difference in their behaviour at the end of the experiment.

  • UI/UX: During the product development phase, one of the major aims of the product team is to create a great user experience with the product. Therefore, before launching the final product design, potential users are brought in to interact with the product.

For example, when finding it difficult to choose how to position a button or feature on the app interface, a random sample of product testers is allowed to test the 2 samples, and the way the button positioning influences user interaction is recorded.

What are the Disadvantages of Experimental Research?  

  • It is highly prone to human error due to its dependency on variable control, which may not be properly implemented. These errors could undermine the validity of the experiment and the research being conducted.
  • Exerting control over extraneous variables may create unrealistic situations. Eliminating real-life variables can result in inaccurate conclusions. It may also tempt researchers to control the variables to suit their personal preferences.
  • It is a time-consuming process. Much time is spent testing subjects and waiting for the effects of the manipulated independent variables to manifest.
  • It is expensive.
  • It is very risky and may have ethical complications that cannot be ignored. This is common in medical research, where failed trials may lead to a patient’s death or a deteriorating health condition.
  • Experimental research results are not descriptive.
  • Response bias can also be introduced by the research subjects.
  • Human responses in experimental research can be difficult to measure.

What are the Data Collection Methods in Experimental Research?  

Data collection methods in experimental research are the different ways in which data can be collected for experimental research. They are used in different cases, depending on the type of research being carried out.

1. Observational Study

This type of study is carried out over a long period. It measures and observes the variables of interest without changing existing conditions.

When researching the effect of social interaction on human behavior, the subjects who are placed in 2 different environments are observed throughout the research. No matter what kind of behavior a subject exhibits during this period, their conditions will not be changed.

This may be a very risky thing to do in medical cases because it may lead to death or worse medical conditions.

2. Simulations

This procedure uses mathematical, physical, or computer models to replicate a real-life process or situation. It is frequently used when the actual situation is too expensive, dangerous, or impractical to replicate in real life.

This method is commonly used in engineering and operational research for learning purposes and sometimes as a tool to estimate possible outcomes of real research. Some common simulation software packages are Simulink, MATLAB, and Simul8.
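For a flavor of what simulation looks like in practice, here's a tiny Monte Carlo sketch in Python (rather than the packages named above); the failure probability and shift length are made-up parameters.

```python
import random

# Tiny Monte Carlo sketch: estimate how often a machine breaks down during a
# shift by simulating many shifts instead of risking real equipment.
random.seed(1)
P_FAIL_PER_HOUR, HOURS, RUNS = 0.02, 8, 100_000

breakdowns = sum(
    any(random.random() < P_FAIL_PER_HOUR for _ in range(HOURS))
    for _ in range(RUNS)
)
print(f"Estimated chance of a breakdown per shift: {breakdowns / RUNS:.3f}")
```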

Not all kinds of experimental research can be carried out using simulation as a data collection tool . It is very impractical for a lot of laboratory-based research that involves chemical processes.

3. Surveys

A survey is a tool used to gather relevant data about the characteristics of a population and is one of the most common data collection tools. A survey consists of a group of questions prepared by the researcher, to be answered by the research subject.

Surveys can be shared with the respondents both physically and electronically. When collecting data through surveys, the kind of data collected depends on the respondent, and researchers have limited control over it.

Formplus is the best tool for collecting experimental data using surveys. It has relevant features that will aid the data collection process and can also be used in other aspects of experimental research.

Differences between Experimental and Non-Experimental Research 

1. In experimental research, the researcher can control and manipulate the environment of the research, including the predictor variable, which can be changed. On the other hand, non-experimental research cannot be controlled or manipulated by the researcher at will.

This is because it takes place in a real-life setting, where extraneous variables cannot be eliminated. Therefore, it is more difficult to draw conclusions from non-experimental studies, even though they are much more flexible and allow for a greater range of study fields.

2. The relationship between cause and effect cannot be established in non-experimental research, while it can be established in experimental research. This is because many extraneous variables also influence the changes in the research subject, making it difficult to point to a particular variable as the cause of a particular change.

3. Independent variables are not introduced, withdrawn, or manipulated in non-experimental designs, but the same may not be said about experimental research.

Conclusion  

Experimental research designs are often considered to be the standard in research designs. This is partly due to the common misconception that research is equivalent to scientific experiments—a component of experimental research design.

In this research design, one or more subjects or dependent variables are randomly assigned to different treatments (i.e., independent variables manipulated by the researcher) and the results are observed and used to draw conclusions. One unique strength of experimental research is its ability to control the effect of extraneous variables.

Experimental research is suitable for research whose goal is to examine cause-effect relationships, e.g. explanatory research. It can be conducted in the laboratory or field settings, depending on the aim of the research that is being carried out. 



Research Design | Step-by-Step Guide with Examples


A research design is a strategy for answering your research question  using empirical data. Creating a research design means making decisions about:

  • Your overall aims and approach
  • The type of research design you’ll use
  • Your sampling methods or criteria for selecting subjects
  • Your data collection methods
  • The procedures you’ll follow to collect data
  • Your data analysis methods

A well-planned research design helps ensure that your methods match your research aims and that you use the right kind of analysis for your data.

Step 1: Consider your aims and approach

Before you can start designing your research, you should already have a clear idea of the research question you want to investigate.

There are many different ways you could go about answering this question. Your research design choices should be driven by your aims and priorities – start by thinking carefully about what you want to achieve.

The first choice you need to make is whether you’ll take a qualitative or quantitative approach.

Qualitative research designs tend to be more flexible and inductive , allowing you to adjust your approach based on what you find throughout the research process.

Quantitative research designs tend to be more fixed and deductive , with variables and hypotheses clearly defined in advance of data collection.

It’s also possible to use a mixed methods design that integrates aspects of both approaches. By combining qualitative and quantitative insights, you can gain a more complete picture of the problem you’re studying and strengthen the credibility of your conclusions.

Practical and ethical considerations when designing research

As well as scientific considerations, you need to think practically when designing your research. If your research involves people or animals, you also need to consider research ethics .

  • How much time do you have to collect data and write up the research?
  • Will you be able to gain access to the data you need (e.g., by travelling to a specific location or contacting specific people)?
  • Do you have the necessary research skills (e.g., statistical analysis or interview techniques)?
  • Will you need ethical approval ?

At each stage of the research design process, make sure that your choices are practically feasible.

Step 2: Choose a type of research design

Within both qualitative and quantitative approaches, there are several types of research design to choose from. Each type provides a framework for the overall shape of your research.

Types of quantitative research designs

Quantitative designs can be split into four main types. Experimental and quasi-experimental designs allow you to test cause-and-effect relationships, while descriptive and correlational designs allow you to measure variables and describe relationships between them.

With descriptive and correlational designs, you can get a clear picture of characteristics, trends, and relationships as they exist in the real world. However, you can’t draw conclusions about cause and effect (because correlation doesn’t imply causation ).

Experiments are the strongest way to test cause-and-effect relationships without the risk of other variables influencing the results. However, their controlled conditions may not always reflect how things work in the real world. They’re often also more difficult and expensive to implement.

Types of qualitative research designs

Qualitative designs are less strictly defined. This approach is about gaining a rich, detailed understanding of a specific context or phenomenon, and you can often be more creative and flexible in designing your research.

Common types of qualitative design include case studies, ethnography, grounded theory, and phenomenology. They often have similar approaches in terms of data collection, but focus on different aspects when analysing the data.

Step 3: Identify your population and sampling method

Your research design should clearly define who or what your research will focus on, and how you’ll go about choosing your participants or subjects.

In research, a population is the entire group that you want to draw conclusions about, while a sample is the smaller group of individuals you’ll actually collect data from.

Defining the population

A population can be made up of anything you want to study – plants, animals, organisations, texts, countries, etc. In the social sciences, it most often refers to a group of people.

For example, will you focus on people from a specific demographic, region, or background? Are you interested in people with a certain job or medical condition, or users of a particular product?

The more precisely you define your population, the easier it will be to gather a representative sample.

Sampling methods

Even with a narrowly defined population, it’s rarely possible to collect data from every individual. Instead, you’ll collect data from a sample.

To select a sample, there are two main approaches: probability sampling and non-probability sampling . The sampling method you use affects how confidently you can generalise your results to the population as a whole.

Probability sampling is the most statistically valid option, but it’s often difficult to achieve unless you’re dealing with a very small and accessible population.

For practical reasons, many studies use non-probability sampling, but it’s important to be aware of the limitations and carefully consider potential biases. You should always make an effort to gather a sample that’s as representative as possible of the population.

Case selection in qualitative research

In some types of qualitative designs, sampling may not be relevant.

For example, in an ethnography or a case study, your aim is to deeply understand a specific context, not to generalise to a population. Instead of sampling, you may simply aim to collect as much data as possible about the context you are studying.

In these types of design, you still have to carefully consider your choice of case or community. You should have a clear rationale for why this particular case is suitable for answering your research question.

For example, you might choose a case study that reveals an unusual or neglected aspect of your research problem, or you might choose several very similar or very different cases in order to compare them.

Step 4: Choose your data collection methods

Data collection methods are ways of directly measuring variables and gathering information. They allow you to gain first-hand knowledge and original insights into your research problem.

You can choose just one data collection method, or use several methods in the same study.

Survey methods

Surveys allow you to collect data about opinions, behaviours, experiences, and characteristics by asking people directly. There are two main survey methods to choose from: questionnaires and interviews.

Observation methods

Observations allow you to collect data unobtrusively, observing characteristics, behaviours, or social interactions without relying on self-reporting.

Observations may be conducted in real time, taking notes as you observe, or you might make audiovisual recordings for later analysis. They can be qualitative or quantitative.

Other methods of data collection

There are many other ways you might collect data depending on your field and topic.

If you’re not sure which methods will work best for your research design, try reading some papers in your field to see what data collection methods they used.

Secondary data

If you don’t have the time or resources to collect data from the population you’re interested in, you can also choose to use secondary data that other researchers already collected – for example, datasets from government surveys or previous studies on your topic.

With this raw data, you can do your own analysis to answer new research questions that weren’t addressed by the original study.

Using secondary data can expand the scope of your research, as you may be able to access much larger and more varied samples than you could collect yourself.

However, it also means you don’t have any control over which variables to measure or how to measure them, so the conclusions you can draw may be limited.

Step 5: Plan your data collection procedures

As well as deciding on your methods, you need to plan exactly how you’ll use these methods to collect data that’s consistent, accurate, and unbiased.

Planning systematic procedures is especially important in quantitative research, where you need to precisely define your variables and ensure your measurements are reliable and valid.

Operationalisation

Some variables, like height or age, are easily measured. But often you’ll be dealing with more abstract concepts, like satisfaction, anxiety, or competence. Operationalisation means turning these fuzzy ideas into measurable indicators.

If you’re using observations , which events or actions will you count?

If you’re using surveys , which questions will you ask and what range of responses will be offered?

You may also choose to use or adapt existing materials designed to measure the concept you’re interested in – for example, questionnaires or inventories whose reliability and validity has already been established.

Reliability and validity

Reliability means your results can be consistently reproduced , while validity means that you’re actually measuring the concept you’re interested in.

For valid and reliable results, your measurement materials should be thoroughly researched and carefully designed. Plan your procedures to make sure you carry out the same steps in the same way for each participant.

If you’re developing a new questionnaire or other instrument to measure a specific concept, running a pilot study allows you to check its validity and reliability in advance.

Sampling procedures

As well as choosing an appropriate sampling method, you need a concrete plan for how you’ll actually contact and recruit your selected sample.

That means making decisions about things like:

  • How many participants do you need for an adequate sample size?
  • What inclusion and exclusion criteria will you use to identify eligible participants?
  • How will you contact your sample – by mail, online, by phone, or in person?

If you’re using a probability sampling method, it’s important that everyone who is randomly selected actually participates in the study. How will you ensure a high response rate?

If you’re using a non-probability method, how will you avoid bias and ensure a representative sample?

Data management

It’s also important to create a data management plan for organising and storing your data.

Will you need to transcribe interviews or perform data entry for observations? You should anonymise and safeguard any sensitive data, and make sure it’s backed up regularly.

Keeping your data well organised will save time when it comes to analysing them. It can also help other researchers validate and add to your findings.

Step 6: Decide on your data analysis strategies

On their own, raw data can’t answer your research question. The last step of designing your research is planning how you’ll analyse the data.

Quantitative data analysis

In quantitative research, you’ll most likely use some form of statistical analysis . With statistics, you can summarise your sample data, make estimates, and test hypotheses.

Using descriptive statistics , you can summarise your sample data in terms of:

  • The distribution of the data (e.g., the frequency of each score on a test)
  • The central tendency of the data (e.g., the mean to describe the average score)
  • The variability of the data (e.g., the standard deviation to describe how spread out the scores are)

The specific calculations you can do depend on the level of measurement of your variables.
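As a quick illustration of those three kinds of summary, here's a minimal sketch using Python's standard library on a small, made-up sample of test scores.

```python
from statistics import mean, stdev
from collections import Counter

# Descriptive statistics sketch for a small sample of (made-up) test scores.
scores = [7, 8, 8, 9, 10, 10, 10, 12, 13, 13]

print("Distribution:    ", dict(sorted(Counter(scores).items())))  # frequencies
print("Central tendency:", mean(scores))                           # the mean
print("Variability:     ", round(stdev(scores), 2))                # spread
```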

Using inferential statistics , you can:

  • Make estimates about the population based on your sample data.
  • Test hypotheses about a relationship between variables.

Regression and correlation tests look for associations between two or more variables, while comparison tests (such as t tests and ANOVAs ) look for differences in the outcomes of different groups.

Your choice of statistical test depends on various aspects of your research design, including the types of variables you’re dealing with and the distribution of your data.
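As a concrete example of a comparison test, here's a sketch of an independent-samples t test on two invented groups of scores; it assumes SciPy is installed (pip install scipy).

```python
# Comparison-test sketch: an independent-samples t test on two (made-up)
# groups of scores.
from scipy import stats

group_a = [12, 14, 11, 15, 13, 16, 14, 12]
group_b = [10, 9, 12, 11, 8, 10, 11, 9]

t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the group difference is unlikely under chance alone.
```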

Qualitative data analysis

In qualitative research, your data will usually be very dense with information and ideas. Instead of summing it up in numbers, you’ll need to comb through the data in detail, interpret its meanings, identify patterns, and extract the parts that are most relevant to your research question.

Two of the most common approaches to doing this are thematic analysis and discourse analysis .

There are many other ways of analysing qualitative data depending on the aims of your research. To get a sense of potential approaches, try reading some qualitative research papers in your field.

Frequently asked questions

A sample is a subset of individuals from a larger population. Sampling means selecting the group that you will actually collect data from in your research.

For example, if you are researching the opinions of students in your university, you could survey a sample of 100 students.

Statistical sampling allows you to test a hypothesis about the characteristics of a population. There are various sampling methods you can use to ensure that your sample is representative of the population as a whole.

Operationalisation means turning abstract conceptual ideas into measurable observations.

For example, the concept of social anxiety isn’t directly observable, but it can be operationally defined in terms of self-rating scores, behavioural avoidance of crowded places, or physical anxiety symptoms in social situations.

Before collecting data , it’s important to consider how you will operationalise the variables that you want to measure.

The research methods you use depend on the type of data you need to answer your research question .

  • If you want to measure something or test a hypothesis , use quantitative methods . If you want to explore ideas, thoughts, and meanings, use qualitative methods .
  • If you want to analyse a large amount of readily available data, use secondary data. If you want data specific to your purposes with control over how they are generated, collect primary data.
  • If you want to establish cause-and-effect relationships between variables , use experimental methods. If you want to understand the characteristics of a research subject, use descriptive methods.


Experimental design: Guide, steps, examples


Experimental research design is a scientific framework that allows you to manipulate one or more variables while controlling the test environment. 

When testing a theory or new product, it can be helpful to have a certain level of control and manipulate variables to discover different outcomes. You can use these experiments to determine cause and effect or study variable associations. 

This guide explores the types of experimental design, the steps in designing an experiment, and the advantages and limitations of experimental design. 


What is experimental research design?

You can determine the relationship between each of the variables by:

  • Manipulating one or more independent variables (i.e., stimuli or treatments)
  • Applying the changes to one or more dependent variables (i.e., test groups or outcomes)

With the ability to analyze the relationship between variables and using measurable data, you can increase the accuracy of the result. 

What is a good experimental design?

A good experimental design requires:

  • Significant planning to ensure control over the testing environment
  • Sound experimental treatments
  • Properly assigning subjects to treatment groups

Without proper planning, unexpected external variables can alter an experiment's outcome. 

To meet your research goals, your experimental design should include these characteristics:

  • Provide unbiased estimates of inputs and associated uncertainties
  • Enable the researcher to detect differences caused by independent variables
  • Include a plan for analysis and reporting of the results
  • Provide easily interpretable results with specific conclusions

What's the difference between experimental and quasi-experimental design?

The major difference between experimental and quasi-experimental design is the random assignment of subjects to groups. 

A true experiment relies on certain controls. Typically, the researcher designs the treatment and randomly assigns subjects to control and treatment groups. 

However, these conditions are unethical or impossible to achieve in some situations.

When it's unethical or impractical to assign participants randomly, that’s when a quasi-experimental design comes in. 

This design allows researchers to conduct a similar experiment by assigning subjects to groups based on non-random criteria. 

Another type of quasi-experimental design might occur when the researcher doesn't have control over the treatment but studies pre-existing groups after they receive different treatments.

When can a researcher conduct experimental research?

Various settings and professions can use experimental research to gather information and observe behavior in controlled settings. 

Basically, a researcher can conduct experimental research any time they want to test a theory with variable and dependent controls. 

Experimental research is an option when the project includes an independent variable and a desire to understand the relationship between cause and effect. 

The importance of experimental research design

Experimental research enables researchers to conduct studies that provide specific, definitive answers to questions and hypotheses. 

Researchers can test independent variables in controlled settings to:

  • Test the effectiveness of a new medication
  • Design better products for consumers
  • Answer questions about human health and behavior

Developing a quality research plan means a researcher can accurately answer vital research questions with minimal error. As a result, definitive conclusions can influence the future of the independent variable. 

Types of experimental research designs

There are three main types of experimental research design. The research type you use will depend on the criteria of your experiment, your research budget, and environmental limitations. 

Pre-experimental research design

A pre-experimental research study is a basic observational study that monitors independent variables’ effects. 

During research, you observe one or more groups after applying a treatment to test whether the treatment causes any change. 

The three subtypes of pre-experimental research design are:

One-shot case study research design

This research method introduces a single test group to a single stimulus to study the results at the end of the application. 

After researchers presume the stimulus or treatment has caused changes, they gather results to determine how it affects the test subjects. 

One-group pretest-posttest design

This method uses a single test group but includes a pretest study as a benchmark. The researcher applies a test before and after the group’s exposure to a specific stimulus. 

Static group comparison design

This method includes two or more groups, enabling the researcher to use one group as a control. They apply a stimulus to one group and leave the other group static. 

A posttest study compares the results among groups. 

True experimental research design

A true experiment is the most common research method. It uses statistical analysis to test a specific hypothesis. 

Under completely experimental conditions, researchers expose participants in two or more randomized groups to different stimuli. 

Random assignment reduces the potential for bias, providing more reliable results. 
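To make random assignment concrete, here is a minimal Python sketch; the participant IDs and group sizes are hypothetical. Shuffling the full roster and splitting it down the middle gives every subject an equal chance of landing in either group.

```python
# Minimal sketch of random assignment; the roster is made up.
import random

participants = [f"P{i:03d}" for i in range(1, 41)]  # 40 hypothetical subjects

random.seed(42)              # fixed seed so the example is reproducible
random.shuffle(participants)

midpoint = len(participants) // 2
control_group = participants[:midpoint]    # receives no stimulus
treatment_group = participants[midpoint:]  # receives the stimulus

print(len(control_group), len(treatment_group))  # -> 20 20
```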

These are the three main sub-groups of true experimental research design:

Posttest-only control group design

This structure requires the researcher to divide participants into two random groups. One group receives no stimuli and acts as a control while the other group experiences stimuli.

Researchers perform a test at the end of the experiment to observe the stimuli exposure results.

Pretest-posttest control group design

This test also requires two groups. It includes a pretest as a benchmark before introducing the stimulus. 

The pretest gives researchers additional comparisons. For instance, if the control group's results also change between tests, it reveals that simply taking the test twice affects the outcome.

Solomon four-group design

This structure divides subjects into four groups, two of which are control groups. Researchers assign the first control group a posttest only and the second control group a pretest and a posttest. 

The two variable groups mirror the control groups, but researchers expose them to stimuli. The ability to differentiate between groups in multiple ways provides researchers with more testing approaches for data-based conclusions. 

Quasi-experimental research design

Although closely related to a true experiment, quasi-experimental research design differs in approach and scope. 

Quasi-experimental research design doesn’t assign participants randomly. Researchers typically divide the groups in this research by pre-existing differences. 

Quasi-experimental research is more common in educational studies, nursing, or other research projects where it's not ethical or practical to use randomized subject groups.

5 steps for designing an experiment

Experimental research requires a clearly defined plan to outline the research parameters and expected goals. 

Here are five key steps in designing a successful experiment:

Step 1: Define variables and their relationship

Your experiment should begin with a question: What are you hoping to learn through your experiment? 

The relationship between variables in your study will determine your answer.

Define the independent variable (the intended stimuli) and the dependent variable (the expected effect of the stimuli). After identifying these groups, consider how you might control them in your experiment. 

Could natural variations affect your research? If so, your experiment should include a pretest and posttest. 

Step 2: Develop a specific, testable hypothesis

With a firm understanding of the system you intend to study, you can write a specific, testable hypothesis. 

What is the expected outcome of your study? 

Develop a prediction about how the independent variable will affect the dependent variable. 

How will the stimuli in your experiment affect your test subjects? 

Your hypothesis should provide a prediction of the answer to your research question. 

Step 3: Design experimental treatments to manipulate your independent variable

Depending on your experiment, your variable may be a fixed stimulus (like a medical treatment) or a variable stimulus (like a period during which an activity occurs). 

Determine which type of stimulus meets your experiment’s needs and how widely or finely to vary your stimuli. 

Step 4: Assign subjects to groups

When you have a clear idea of how to carry out your experiment, you can determine how to assemble test groups for an accurate study. 

When choosing your study groups, consider: 

The size of your experiment

Whether you can select groups randomly

Your target audience for the outcome of the study

You should be able to create groups with an equal number of subjects and include subjects that match your target audience. Remember, you should assign one group as a control and use one or more groups to study the effects of variables. 
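If balance on a key trait matters (say, age bracket), you can go one step further and randomize within each subgroup. The sketch below is one way to do this in Python; the roster and age brackets are invented for illustration.

```python
# Hedged sketch of stratified random assignment over a made-up roster.
import random
from collections import defaultdict

roster = [
    ("P01", "18-29"), ("P02", "18-29"), ("P03", "30-49"), ("P04", "30-49"),
    ("P05", "50+"),   ("P06", "50+"),   ("P07", "18-29"), ("P08", "30-49"),
]

random.seed(7)
strata = defaultdict(list)
for subject_id, bracket in roster:
    strata[bracket].append(subject_id)

control, treatment = [], []
for members in strata.values():
    random.shuffle(members)  # randomize within each age bracket
    for i, subject_id in enumerate(members):
        (control if i % 2 == 0 else treatment).append(subject_id)

print("control:", control)
print("treatment:", treatment)
```

Dealing subjects alternately after shuffling keeps the two groups roughly the same size within every bracket.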

Step 5: Plan how to measure your dependent variable

This step determines how you'll collect data to determine the study's outcome. You should seek reliable and valid measurements that minimize research bias or error. 

You can measure some data with scientific tools, while you’ll need to operationalize other forms to turn them into measurable observations.

Advantages of experimental research

Experimental research is an integral part of our world. It allows researchers to conduct experiments that answer specific questions. 

While researchers use many methods to conduct different experiments, experimental research offers these distinct benefits:

Researchers can determine cause and effect by manipulating variables.

It gives researchers a high level of control.

Researchers can test multiple variables within a single experiment.

All industries and fields of knowledge can use it. 

Researchers can duplicate results to promote the validity of the study.

Researchers can recreate natural settings in the lab, allowing them to gather results quickly.

Researchers can combine it with other research methods.

It provides specific conclusions about the validity of a product, theory, or idea.

Disadvantages (or limitations) of experimental research

Unfortunately, no research type yields ideal conditions or perfect results. 

While experimental research might be the right choice for some studies, certain conditions could render experiments useless or even dangerous. 

Before conducting experimental research, consider these disadvantages and limitations:

Required professional qualification

Only competent professionals with an academic degree and specific training are qualified to conduct rigorous experimental research. This ensures results are unbiased and valid. 

Limited scope

Experimental research may not capture the complexity of some phenomena, such as social interactions or cultural norms. These are difficult to control in a laboratory setting.

Resource-intensive

Experimental research can be expensive, time-consuming, and require significant resources, such as specialized equipment or trained personnel.

Limited generalizability

The controlled nature means the research findings may not fully apply to real-world situations or people outside the experimental setting.

Practical or ethical concerns

Some experiments may involve manipulating variables that could harm participants or violate ethical guidelines. 

Researchers must ensure their experiments do not cause harm or discomfort to participants. 

Sometimes, recruiting a sample of people to randomly assign may be difficult. 

Experimental research design example

Experiments across all industries and research realms provide scientists, developers, and other researchers with definitive answers. These experiments can solve problems, create inventions, and heal illnesses. 

Product design testing is an excellent example of experimental research. 

A company in the product development phase creates multiple prototypes for testing. With a randomized selection, researchers introduce each test group to a different prototype. 

When groups experience different product designs, the company can assess which option most appeals to potential customers. 

Experimental research design provides researchers with a controlled environment to conduct experiments that evaluate cause and effect. 

Using the five steps to develop a research plan ensures you anticipate and eliminate external variables while answering life’s crucial questions.


Experimental Research: Definition, Types, Design, Examples


Experimental research is a cornerstone of scientific inquiry, providing a systematic approach to understanding cause-and-effect relationships and advancing knowledge in various fields. At its core, experimental research involves manipulating variables, observing outcomes, and drawing conclusions based on empirical evidence. By controlling factors that could influence the outcome, researchers can isolate the effects of specific variables and make reliable inferences about their impact. This guide offers a step-by-step exploration of experimental research, covering key elements such as research design, data collection, analysis, and ethical considerations. Whether you're a novice researcher seeking to understand the basics or an experienced scientist looking to refine your experimental techniques, this guide will equip you with the knowledge and tools needed to conduct rigorous and insightful research.

What is Experimental Research?

Experimental research is a systematic approach to scientific inquiry that aims to investigate cause-and-effect relationships by manipulating independent variables and observing their effects on dependent variables. Experimental research primarily aims to test hypotheses, make predictions, and draw conclusions based on empirical evidence.

By controlling extraneous variables and randomizing participant assignment, researchers can isolate the effects of specific variables and establish causal relationships. Experimental research is characterized by its rigorous methodology, emphasis on objectivity, and reliance on empirical data to support conclusions.

Importance of Experimental Research

  • Establishing Cause-and-Effect Relationships : Experimental research allows researchers to establish causal relationships between variables by systematically manipulating independent variables and observing their effects on dependent variables. This provides valuable insights into the underlying mechanisms driving phenomena and informs theory development.
  • Testing Hypotheses and Making Predictions : Experimental research provides a structured framework for testing hypotheses and predicting the relationship between variables. By systematically manipulating variables and controlling for confounding factors, researchers can empirically test the validity of their hypotheses and refine theoretical models.
  • Informing Evidence-Based Practice : Experimental research generates empirical evidence that informs evidence-based practice in various fields, including healthcare, education, and business. Experimental research contributes to improving outcomes and informing decision-making in real-world settings by identifying effective interventions, treatments, and strategies.
  • Driving Innovation and Advancement : Experimental research drives innovation and advancement by uncovering new insights, challenging existing assumptions, and pushing the boundaries of knowledge. Through rigorous experimentation and empirical validation, researchers can develop novel solutions to complex problems and contribute to the advancement of science and technology.
  • Enhancing Research Rigor and Validity : Experimental research upholds high research rigor and validity standards by employing systematic methods, controlling for confounding variables, and ensuring replicability of findings. By adhering to rigorous methodology and ethical principles, experimental research produces reliable and credible evidence that withstands scrutiny and contributes to the cumulative body of knowledge.

Experimental research plays a pivotal role in advancing scientific understanding, informing evidence-based practice, and driving innovation across various disciplines. By systematically testing hypotheses, establishing causal relationships, and generating empirical evidence, experimental research contributes to the collective pursuit of knowledge and the improvement of society.

Understanding Experimental Design

Experimental design serves as the blueprint for your study, outlining how you'll manipulate variables and control factors to draw valid conclusions.

Experimental Design Components

Experimental design comprises several essential elements:

  • Independent Variable (IV) : This is the variable manipulated by the researcher. It's what you change to observe its effect on the dependent variable. For example, in a study testing the impact of different study techniques on exam scores, the independent variable might be the study method (e.g., flashcards, reading, or practice quizzes).
  • Dependent Variable (DV) : The dependent variable is what you measure to assess the effect of the independent variable. It's the outcome variable affected by the manipulation of the independent variable. In our study example, the dependent variable would be the exam scores.
  • Control Variables : These factors could influence the outcome but are kept constant or controlled to isolate the effect of the independent variable. Controlling variables helps ensure that any observed changes in the dependent variable can be attributed to manipulating the independent variable rather than other factors.
  • Experimental Group : This group receives the treatment or intervention being tested. It's exposed to the manipulated independent variable. In contrast, the control group does not receive the treatment and serves as a baseline for comparison.

Types of Experimental Designs

Experimental designs can vary based on the research question, the nature of the variables, and the desired level of control. Here are some common types:

  • Between-Subjects Design : In this design, different groups of participants are exposed to varying levels of the independent variable. Each group represents a different experimental condition, and participants are only exposed to one condition. For instance, in a study comparing the effectiveness of two teaching methods, one group of students would use Method A, while another would use Method B.
  • Within-Subjects Design : Also known as repeated measures design, this approach involves exposing the same group of participants to all levels of the independent variable. Participants serve as their own controls, and the order of conditions is typically counterbalanced to control for order effects. For example, participants might be tested on their reaction times under different lighting conditions, with the order of conditions randomized to eliminate any research bias (see the counterbalancing sketch after this list).
  • Mixed Designs : Mixed designs combine elements of both between-subjects and within-subjects designs. This allows researchers to examine both between-group differences and within-group changes over time. Mixed designs help study complex phenomena that involve multiple variables and temporal dynamics.
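To illustrate the counterbalancing mentioned above, here is a rough Python sketch. The three lighting conditions and the participant count are hypothetical; cycling through every possible ordering spreads order effects evenly across participants.

```python
# Rough counterbalancing sketch for a within-subjects design.
from itertools import cycle, permutations

conditions = ["dim", "normal", "bright"]        # hypothetical conditions
orders = cycle(permutations(conditions))        # 6 possible orderings

participants = [f"P{i}" for i in range(1, 13)]  # 12 hypothetical subjects
schedule = {p: next(orders) for p in participants}

for participant, order in schedule.items():
    print(participant, "->", order)
```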

Factors Influencing Experimental Design Choices

Several factors influence the selection of an appropriate experimental design:

  • Research Question : The nature of your research question will guide your choice of experimental design. Some questions may be better suited to between-subjects designs, while others may require a within-subjects approach.
  • Variables : Consider the number and type of variables involved in your study. A factorial design might be appropriate if you're interested in exploring multiple factors simultaneously. Conversely, if you're focused on investigating the effects of a single variable, a simpler design may suffice.
  • Practical Considerations : Practical constraints such as time, resources, and access to participants can impact your choice of experimental design. Depending on your study's specific requirements, some designs may be more feasible or cost-effective than others.
  • Ethical Considerations : Ethical concerns, such as the potential risks to participants or the need to minimize harm, should also inform your experimental design choices. Ensure that your design adheres to ethical guidelines and safeguards the rights and well-being of participants.

By carefully considering these factors and selecting an appropriate experimental design, you can ensure that your study is well-designed and capable of yielding meaningful insights.

Experimental Research Elements

When conducting experimental research, understanding the key elements is crucial for designing and executing a robust study. Let's explore each of these elements in detail to ensure your experiment is well-planned and executed effectively.

Independent and Dependent Variables

In experimental research, the independent variable (IV) is the factor that the researcher manipulates or controls, while the dependent variable (DV) is the measured outcome or response. The independent variable is what you change in the experiment to observe its effect on the dependent variable.

For example, in a study investigating the effect of different fertilizers on plant growth, the type of fertilizer used would be the independent variable, while the plant growth (height, number of leaves, etc.) would be the dependent variable.

Control Groups and Experimental Groups

Control groups and experimental groups are essential components of experimental design. The control group serves as a baseline for comparison and does not receive the treatment or intervention being studied. Its purpose is to provide a reference point to assess the effects of the independent variable.

In contrast, the experimental group receives the treatment or intervention and is used to measure the impact of the independent variable. For example, in a drug trial, the control group would receive a placebo, while the experimental group would receive the actual medication.

Randomization and Random Sampling

Randomization is the process of randomly assigning participants to different experimental conditions to minimize biases and ensure that each participant has an equal chance of being assigned to any condition. Randomization helps control for extraneous variables and increases the study's internal validity.

Random sampling, on the other hand, involves selecting a representative sample from the population of interest to generalize the findings to the broader population. Random sampling ensures that each member of the population has an equal chance of being included in the sample, reducing the risk of sampling bias.
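The distinction is easy to blur, so here is a small illustrative sketch using a hypothetical population frame of 10,000 IDs: random sampling decides who gets studied, while randomization decides who gets which condition.

```python
# Sampling vs. assignment, using an invented population frame.
import random

random.seed(0)
population = range(10_000)                 # hypothetical sampling frame
sample = random.sample(population, k=200)  # random sampling: who is studied

random.shuffle(sample)                     # randomization: who gets what
control, treatment = sample[:100], sample[100:]
print(len(control), len(treatment))        # -> 100 100
```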

Replication and Reliability

Replication involves repeating the experiment to confirm the results and assess the reliability of the findings. It is essential for ensuring the validity of scientific findings and building confidence in the robustness of the results. A study that can be replicated consistently across different settings and by various researchers is considered more reliable. Researchers should strive to design experiments that are easily replicable and transparently report their methods to facilitate replication by others.

Validity: Internal, External, Construct, and Statistical Conclusion Validity

Validity refers to the degree to which an experiment measures what it intends to measure and the extent to which the results can be generalized to other populations or contexts. There are several types of validity that researchers should consider:

  • Internal Validity : Internal validity refers to the extent to which the study accurately assesses the causal relationship between variables. Internal validity is threatened by factors such as confounding variables, selection bias, and experimenter effects. Researchers can enhance internal validity through careful experimental design and control procedures.
  • External Validity : External validity refers to the extent to which the study's findings can be generalized to other populations or settings. External validity is influenced by factors such as the representativeness of the sample and the ecological validity of the experimental conditions. Researchers should consider the relevance and applicability of their findings to real-world situations.
  • Construct Validity : Construct validity refers to the degree to which the study accurately measures the theoretical constructs of interest. Construct validity is concerned with whether the operational definitions of the variables align with the underlying theoretical concepts. Researchers can establish construct validity through careful measurement selection and validation procedures.
  • Statistical Conclusion Validity : Statistical conclusion validity refers to the accuracy of the statistical analyses and conclusions drawn from the data. It ensures that the statistical tests used are appropriate for the data and that the conclusions drawn are warranted. Researchers should use robust statistical methods and report effect sizes and confidence intervals to enhance statistical conclusion validity.

By addressing these elements of experimental research and ensuring the validity and reliability of your study, you can conduct research that contributes meaningfully to the advancement of knowledge in your field.

How to Conduct Experimental Research?

Embarking on an experimental research journey involves a series of well-defined phases, each crucial for the success of your study. Let's explore the pre-experimental, experimental, and post-experimental phases to ensure you're equipped to conduct rigorous and insightful research.

Pre-Experimental Phase

The pre-experimental phase lays the foundation for your study, setting the stage for what's to come. Here's what you need to do:

  • Formulating Research Questions and Hypotheses : Start by clearly defining your research questions and formulating testable hypotheses. Your research questions should be specific, relevant, and aligned with your research objectives. Hypotheses provide a framework for testing the relationships between variables and making predictions about the outcomes of your study.
  • Reviewing Literature and Establishing Theoretical Framework : Dive into existing literature relevant to your research topic and establish a solid theoretical framework. Literature review helps you understand the current state of knowledge, identify research gaps, and build upon existing theories. A well-defined theoretical framework provides a conceptual basis for your study and guides your research design and analysis.

Experimental Phase

The experimental phase is where the magic happens – it's time to put your hypotheses to the test and gather data. Here's what you need to consider:

  • Participant Recruitment and Sampling Techniques : Carefully recruit participants for your study using appropriate sampling techniques. The sample should be representative of the population you're studying to ensure the generalizability of your findings. Consider factors such as sample size, demographics, and inclusion criteria when recruiting participants.
  • Implementing Experimental Procedures : Once you've recruited participants, it's time to implement your experimental procedures. Clearly outline the experimental protocol, including instructions for participants, procedures for administering treatments or interventions, and measures for controlling extraneous variables. Standardize your procedures to ensure consistency across participants and minimize sources of bias.
  • Data Collection and Measurement : Collect data using reliable and valid measurement instruments. Depending on your research questions and variables of interest, data collection methods may include surveys, observations, physiological measurements, or experimental tasks. Ensure that your data collection procedures are ethical, respectful of participants' rights, and designed to minimize errors and biases.

Post-Experimental Phase

In the post-experimental phase, you make sense of your data, draw conclusions, and communicate your findings to the world. Here's what you need to do:

  • Data Analysis Techniques : Analyze your data using appropriate statistical techniques. Choose methods that are aligned with your research design and hypotheses. Standard statistical analyses include descriptive statistics, inferential statistics (e.g., t-tests, ANOVA), regression analysis, and correlation analysis (a minimal t-test sketch follows at the end of this section). Interpret your findings in the context of your research questions and theoretical framework.
  • Interpreting Results and Drawing Conclusions : Once you've analyzed your data, interpret the results and draw conclusions. Discuss the implications of your findings, including any theoretical, practical, or real-world implications. Consider alternative explanations and limitations of your study and propose avenues for future research. Be transparent about the strengths and weaknesses of your study to enhance the credibility of your conclusions.
  • Reporting Findings : Finally, communicate your findings through research reports, academic papers, or presentations. Follow standard formatting guidelines and adhere to ethical standards for research reporting. Clearly articulate your research objectives, methods, results, and conclusions. Consider your target audience and choose appropriate channels for disseminating your findings to maximize impact and reach.

By meticulously planning and executing each experimental research phase, you can generate valuable insights, advance knowledge in your field, and contribute to scientific progress.
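As a taste of the analysis step referenced above, here is a hedged sketch of an independent-samples t-test on simulated scores, using SciPy. The group means, spreads, and sizes are all invented.

```python
# t-test sketch on simulated control/treatment scores.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(loc=70, scale=10, size=50)    # hypothetical scores
treatment = rng.normal(loc=75, scale=10, size=50)

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Report an effect size alongside the p-value (here, Cohen's d).
pooled_sd = np.sqrt((control.std(ddof=1) ** 2 + treatment.std(ddof=1) ** 2) / 2)
cohens_d = (treatment.mean() - control.mean()) / pooled_sd
print(f"Cohen's d = {cohens_d:.2f}")
```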


Experimental Research Examples

Understanding how experimental research is applied in various contexts can provide valuable insights into its practical significance and effectiveness. Here are some examples illustrating the application of experimental research in different domains:

Market Research

Experimental studies are crucial in market research for testing hypotheses, evaluating marketing strategies, and understanding consumer behavior. For example, a company may conduct an experiment to determine the most effective advertising message for a new product. Participants could be exposed to different versions of an advertisement, each emphasizing different product features or appeals.

By measuring variables such as brand recall, purchase intent, and brand perception, researchers can assess the impact of each advertising message and identify the most persuasive approach.

Software as a Service (SaaS)

In the SaaS industry, experimental research is often used to optimize user interfaces, features, and pricing models to enhance user experience and drive engagement. For instance, a SaaS company may conduct A/B tests to compare two versions of its software interface, each with a different layout or navigation structure.

Researchers can identify design elements that lead to higher user satisfaction and retention by tracking user interactions, conversion rates, and customer feedback. Experimental research also enables SaaS companies to test new product features or pricing strategies before full-scale implementation, minimizing risks and maximizing return on investment.
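For a sense of how such an A/B test might be analyzed, here is a minimal sketch using a two-proportion z-test from statsmodels; the conversion counts are hypothetical.

```python
# A/B test sketch: compare conversion rates of two interface variants.
from statsmodels.stats.proportion import proportions_ztest

conversions = [120, 150]  # variant A, variant B (hypothetical counts)
visitors = [1000, 1000]

z_stat, p_value = proportions_ztest(count=conversions, nobs=visitors)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```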

Business Management

Experimental research is increasingly utilized in business management to inform decision-making, improve organizational processes, and drive innovation. For example, a business may conduct an experiment to evaluate the effectiveness of a new training program on employee productivity. Participants could be randomly assigned to either receive the training or serve as a control group.

By measuring performance metrics such as sales revenue, customer satisfaction, and employee turnover, researchers can assess the training program's impact and determine its return on investment. Experimental research in business management provides empirical evidence to support strategic initiatives and optimize resource allocation.

Healthcare

In healthcare, experimental research is instrumental in testing new treatments, interventions, and healthcare delivery models to improve patient outcomes and quality of care. For instance, a clinical trial may be conducted to evaluate the efficacy of a new drug in treating a specific medical condition. Participants are randomly assigned to either receive the experimental drug or a placebo, and their health outcomes are monitored over time.

By comparing the effectiveness of the treatment and placebo groups, researchers can determine the drug's efficacy, safety profile, and potential side effects. Experimental research in healthcare informs evidence-based practice and drives advancements in medical science and patient care.

These examples illustrate the versatility and applicability of experimental research across diverse domains, demonstrating its value in generating actionable insights, informing decision-making, and driving innovation. Whether in market research or healthcare, experimental research provides a rigorous and systematic approach to testing hypotheses, evaluating interventions, and advancing knowledge.

Experimental Research Challenges

Even with careful planning and execution, experimental research can present various challenges. Understanding these challenges and implementing effective solutions is crucial for ensuring the validity and reliability of your study. Here are some common challenges and strategies for addressing them.

Sample Size and Statistical Power

Challenge : Inadequate sample size can limit your study's generalizability and statistical power, making it difficult to detect meaningful effects. Small sample sizes increase the risk of Type II errors (false negatives) and reduce the reliability of your findings.

Solution : Increase your sample size to improve statistical power and enhance the robustness of your results. Conduct a power analysis before starting your study to determine the minimum sample size required to detect the effects of interest with sufficient power. Consider factors such as effect size, alpha level, and desired power when calculating sample size requirements. Additionally, consider using techniques such as bootstrapping or resampling to augment small sample sizes and improve the stability of your estimates.
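For instance, a minimal power analysis with statsmodels might look like the sketch below. The medium effect size (d = 0.5) is an assumption you would need to justify from prior research.

```python
# Power-analysis sketch: required sample size per group for a t-test.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,  # assumed standardized effect (Cohen's d)
    alpha=0.05,       # significance level
    power=0.80,       # desired statistical power
)
print(f"~{n_per_group:.0f} participants per group")  # about 64
```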


Confounding Variables and Bias

Challenge : Confounding variables are extraneous factors that co-vary with the independent variable and can distort the relationship between the independent and dependent variables. Confounding variables threaten the internal validity of your study and can lead to erroneous conclusions.

Solution : Implement control measures to minimize the influence of confounding variables on your results. Random assignment of participants to experimental conditions helps distribute confounding variables evenly across groups, reducing their impact on the dependent variable. Additionally, consider using matching or blocking techniques to ensure that groups are comparable on relevant variables. Conduct sensitivity analyses to assess the robustness of your findings to potential confounders and explore alternative explanations for your results.

Researcher Effects and Experimenter Bias

Challenge : Researcher effects and experimenter bias occur when the experimenter's expectations or actions inadvertently influence the study's outcomes. This bias can manifest through subtle cues, unintentional behaviors, or unconscious biases, leading to invalid conclusions.

Solution : Implement double-blind procedures whenever possible to mitigate researcher effects and experimenter bias. Double-blind designs conceal information about the experimental conditions from both the participants and the experimenters, minimizing the potential for bias. Standardize experimental procedures and instructions to ensure consistency across conditions and minimize experimenter variability. Additionally, consider using objective outcome measures or automated data collection procedures to reduce the influence of experimenter bias on subjective assessments.

External Validity and Generalizability

Challenge : External validity refers to the extent to which your study's findings can be generalized to other populations, settings, or conditions. Limited external validity restricts the applicability of your results and may hinder their relevance to real-world contexts.

Solution : Enhance external validity by designing studies closely resembling real-world conditions and populations of interest. Consider using diverse samples that represent the target population's demographic, cultural, and ecological variability. Conduct replication studies in different contexts or with different populations to assess the robustness and generalizability of your findings. Additionally, consider conducting meta-analyses or systematic reviews to synthesize evidence from multiple studies and enhance the external validity of your conclusions.

By proactively addressing these challenges and implementing effective solutions, you can strengthen the validity, reliability, and impact of your experimental research. Remember to remain vigilant for potential pitfalls throughout the research process and adapt your strategies as needed to ensure the integrity of your findings.

Advanced Topics in Experimental Research

As you delve deeper into experimental research, you'll encounter advanced topics and methodologies that offer greater complexity and nuance.

Quasi-Experimental Designs

Quasi-experimental designs resemble true experiments but lack random assignment to experimental conditions. They are often used when random assignment is impractical, unethical, or impossible. Quasi-experimental designs allow researchers to investigate cause-and-effect relationships in real-world settings where strict experimental control is challenging. Common examples include:

  • Non-Equivalent Groups Design : This design compares two or more groups that were not created through random assignment. While similar to between-subjects designs, non-equivalent group designs lack the random assignment of participants, increasing the risk of confounding variables.
  • Interrupted Time Series Design : In this design, multiple measurements are taken over time before and after an intervention is introduced. Changes in the dependent variable are assessed over time, allowing researchers to infer the impact of the intervention (a segmented-regression sketch follows below).
  • Regression Discontinuity Design : This design involves assigning participants to different groups based on a cutoff score on a continuous variable. Participants just above and below the cutoff are treated as if they were randomly assigned to different conditions, allowing researchers to estimate causal effects.

Quasi-experimental designs offer valuable insights into real-world phenomena but require careful consideration of potential confounding variables and limitations inherent to non-random assignment.
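As one illustration, the interrupted time series design mentioned above is often analyzed with a segmented regression. Here is a hedged sketch on simulated monthly data with an intervention at month 12: `post` captures the level change and `time_since` the slope change.

```python
# Segmented-regression sketch for an interrupted time series (simulated).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
months = np.arange(24)
post = (months >= 12).astype(int)            # 1 after the intervention
time_since = np.where(post, months - 12, 0)  # months since intervention
outcome = 50 + 0.5 * months + 5 * post + rng.normal(0, 1, 24)

df = pd.DataFrame({"outcome": outcome, "time": months,
                   "post": post, "time_since": time_since})
model = smf.ols("outcome ~ time + post + time_since", data=df).fit()
print(model.params)  # level and slope changes around the intervention
```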

Factorial Designs

Factorial designs involve manipulating two or more independent variables simultaneously to examine their main effects and interactions. By systematically varying multiple factors, factorial designs allow researchers to explore complex relationships between variables and identify how they interact to influence outcomes. Common types of factorial designs include:

  • 2x2 Factorial Design : This design manipulates two independent variables, each with two levels. It allows researchers to examine the main effects of each variable as well as any interaction between them.
  • Mixed Factorial Design : In this design, one independent variable is manipulated between subjects, while another is manipulated within subjects. Mixed factorial designs enable researchers to investigate both between-subjects and within-subjects effects simultaneously.

Factorial designs provide a comprehensive understanding of how multiple factors contribute to outcomes and offer greater statistical efficiency compared to studying variables in isolation.
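To show what a 2x2 factorial analysis can look like in practice, here is a sketch with simulated data: two invented factors (caffeine yes/no, sleep deprivation yes/no) are crossed, and a two-way ANOVA estimates each main effect plus their interaction.

```python
# 2x2 factorial ANOVA sketch on simulated scores.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(5)
rows = []
for caffeine in (0, 1):
    for deprived in (0, 1):
        scores = rng.normal(70 + 5 * caffeine - 8 * deprived, 5, size=20)
        rows += [{"caffeine": caffeine, "deprived": deprived, "score": s}
                 for s in scores]

df = pd.DataFrame(rows)
model = smf.ols("score ~ C(caffeine) * C(deprived)", data=df).fit()
print(anova_lm(model, typ=2))  # main effects and the interaction term
```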

Longitudinal and Cross-Sectional Studies

Longitudinal studies involve collecting data from the same participants over an extended period, allowing researchers to observe changes and trajectories over time. Cross-sectional studies, on the other hand, involve collecting data from different participants at a single point in time, providing a snapshot of the population at that moment. Both longitudinal and cross-sectional studies offer unique advantages and challenges:

  • Longitudinal Studies : Longitudinal designs allow researchers to examine developmental processes, track changes over time, and identify causal relationships. However, longitudinal studies require long-term commitment, are susceptible to attrition and dropout, and may be subject to practice effects and cohort effects.
  • Cross-Sectional Studies : Cross-sectional designs are relatively quick and cost-effective, provide a snapshot of population characteristics, and allow for comparisons across different groups. However, cross-sectional studies cannot assess changes over time or establish causal relationships between variables.

Researchers should carefully consider the research question, objectives, and constraints when choosing between longitudinal and cross-sectional designs.

Meta-Analysis and Systematic Reviews

Meta-analysis and systematic reviews are quantitative methods used to synthesize findings from multiple studies and draw robust conclusions. These methods offer several advantages:

  • Meta-Analysis : Meta-analysis combines the results of multiple studies using statistical techniques to estimate overall effect sizes and assess the consistency of findings across studies. Meta-analysis increases statistical power, enhances generalizability, and provides more precise estimates of effect sizes (see the pooling sketch below).
  • Systematic Reviews : Systematic reviews involve systematically searching, appraising, and synthesizing existing literature on a specific topic. Systematic reviews provide a comprehensive summary of the evidence, identify gaps and inconsistencies in the literature, and inform future research directions.

Meta-analysis and systematic reviews are valuable tools for evidence-based practice, guiding policy decisions, and advancing scientific knowledge by aggregating and synthesizing empirical evidence from diverse sources.
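The core arithmetic of a fixed-effect meta-analysis (referenced above) is just inverse-variance weighting. Here is a minimal worked sketch with made-up study results; real meta-analyses involve many more steps, such as heterogeneity checks.

```python
# Fixed-effect meta-analysis sketch: inverse-variance pooling.
import numpy as np

effects = np.array([0.30, 0.45, 0.20, 0.55])    # hypothetical effect sizes
variances = np.array([0.02, 0.05, 0.01, 0.08])  # their sampling variances

weights = 1 / variances
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))

print(f"pooled effect = {pooled:.3f} +/- {1.96 * pooled_se:.3f} (95% CI)")
```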

By exploring these advanced topics in experimental research, you can expand your methodological toolkit, tackle more complex research questions, and contribute to deeper insights and understanding in your field.

Experimental Research Ethical Considerations

When conducting experimental research, it's imperative to uphold ethical standards and prioritize the well-being and rights of participants. Here are some key ethical considerations to keep in mind throughout the research process:

  • Informed Consent : Obtain informed consent from participants before they participate in your study. Ensure that participants understand the purpose of the study, the procedures involved, any potential risks or benefits, and their right to withdraw from the study at any time without penalty.
  • Protection of Participants' Rights : Respect participants' autonomy, privacy, and confidentiality throughout the research process. Safeguard sensitive information and ensure that participants' identities are protected. Be transparent about how their data will be used and stored.
  • Minimizing Harm and Risks : Take steps to mitigate any potential physical or psychological harm to participants. Conduct a risk assessment before starting your study and implement appropriate measures to reduce risks. Provide support services and resources for participants who may experience distress or adverse effects as a result of their participation.
  • Confidentiality and Data Security : Protect participants' privacy and ensure the security of their data. Use encryption and secure storage methods to prevent unauthorized access to sensitive information. Anonymize data whenever possible to minimize the risk of data breaches or privacy violations.
  • Avoiding Deception : Minimize the use of deception in your research and ensure that any deception is justified by the scientific objectives of the study. If deception is necessary, debrief participants fully at the end of the study and provide them with an opportunity to withdraw their data if they wish.
  • Respecting Diversity and Cultural Sensitivity : Be mindful of participants' diverse backgrounds, cultural norms, and values. Avoid imposing your own cultural biases on participants and ensure that your research is conducted in a culturally sensitive manner. Seek input from diverse stakeholders to ensure your research is inclusive and respectful.
  • Compliance with Ethical Guidelines : Familiarize yourself with relevant ethical guidelines and regulations governing research with human participants, such as those outlined by institutional review boards (IRBs) or ethics committees. Ensure that your research adheres to these guidelines and that any potential ethical concerns are addressed appropriately.
  • Transparency and Openness : Be transparent about your research methods, procedures, and findings. Clearly communicate the purpose of your study, any potential risks or limitations, and how participants' data will be used. Share your research findings openly and responsibly, contributing to the collective body of knowledge in your field.

By prioritizing ethical considerations in your experimental research, you demonstrate integrity, respect, and responsibility as a researcher, fostering trust and credibility in the scientific community.

Conclusion for Experimental Research

Experimental research is a powerful tool for uncovering causal relationships and expanding our understanding of the world around us. By carefully designing experiments, collecting data, and analyzing results, researchers can make meaningful contributions to their fields and address pressing questions.

However, conducting experimental research comes with responsibilities. Ethical considerations are paramount to ensure the well-being and rights of participants, as well as the integrity of the research process. Researchers can build trust and credibility in their work by upholding ethical standards and prioritizing participant safety and autonomy.

Furthermore, as you continue to explore and innovate in experimental research, you must remain open to new ideas and methodologies. Embracing diversity in perspectives and approaches fosters creativity and innovation, leading to breakthrough discoveries and scientific advancements. By promoting collaboration and sharing findings openly, we can collectively push the boundaries of knowledge and tackle some of society's most pressing challenges.




Research Design 101: Everything You Need To Get Started (With Examples)

By: Derek Jansen (MBA) | Reviewers: Eunice Rautenbach (DTech) & Kerryn Warren (PhD) | April 2023

Research design for qualitative and quantitative studies

Navigating the world of research can be daunting, especially if you’re a first-time researcher. One concept you’re bound to run into fairly early in your research journey is that of “ research design ”. Here, we’ll guide you through the basics using practical examples , so that you can approach your research with confidence.

Overview: Research Design 101

  • What is research design?

  • Research design types for quantitative studies
  • Video explainer : quantitative research design
  • Research design types for qualitative studies
  • Video explainer : qualitative research design
  • How to choose a research design
  • Key takeaways

Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final data analysis. A good research design serves as the blueprint for how you, as the researcher, will collect and analyse data while ensuring consistency, reliability and validity throughout your study.

Understanding different types of research designs is essential as it helps ensure that your approach is suitable given your research aims, objectives and questions, as well as the resources you have available to you. Without a clear big-picture view of how you’ll design your research, you run the risk of making misaligned choices in terms of your methodology – especially your sampling, data collection and data analysis decisions.

The problem with defining research design…

One of the reasons students struggle with a clear definition of research design is because the term is used very loosely across the internet, and even within academia.

Some sources claim that the three research design types are qualitative, quantitative and mixed methods, which isn’t quite accurate (these just refer to the type of data that you’ll collect and analyse). Other sources state that research design refers to the sum of all your design choices, suggesting it’s more like a research methodology. Others run off on other less common tangents. No wonder there’s confusion!

In this article, we’ll clear up the confusion. We’ll explain the most common research design types for both qualitative and quantitative research projects, whether that is for a full dissertation or thesis, or a smaller research paper or article.


Research Design: Quantitative Studies

Quantitative research involves collecting and analysing data in a numerical form. Broadly speaking, there are four types of quantitative research designs: descriptive, correlational, experimental, and quasi-experimental.

Descriptive Research Design

As the name suggests, descriptive research design focuses on describing existing conditions, behaviours, or characteristics by systematically gathering information without manipulating any variables. In other words, there is no intervention on the researcher’s part – only data collection.

For example, if you’re studying smartphone addiction among adolescents in your community, you could deploy a survey to a sample of teens asking them to rate their agreement with certain statements that relate to smartphone addiction. The collected data would then provide insight regarding how widespread the issue may be – in other words, it would describe the situation.

The key defining attribute of this type of research design is that it purely describes the situation. In other words, descriptive research design does not explore potential relationships between different variables or the causes that may underlie those relationships. Therefore, descriptive research is useful for generating insight into a research problem by describing its characteristics. By doing so, it can provide valuable insights and is often used as a precursor to other research design types.

Correlational Research Design

Correlational design is a popular choice for researchers aiming to identify and measure the relationship between two or more variables without manipulating them. In other words, this type of research design is useful when you want to know whether a change in one thing tends to be accompanied by a change in another thing.

For example, if you wanted to explore the relationship between exercise frequency and overall health, you could use a correlational design to help you achieve this. In this case, you might gather data on participants’ exercise habits, as well as records of their health indicators like blood pressure, heart rate, or body mass index. Thereafter, you’d use a statistical test to assess whether there’s a relationship between the two variables (exercise frequency and health).

As you can see, correlational research design is useful when you want to explore potential relationships between variables that cannot be manipulated or controlled for ethical, practical, or logistical reasons. It is particularly helpful in terms of developing predictions, and given that it doesn’t involve the manipulation of variables, it can be implemented at a large scale more easily than experimental designs (which we’ll look at next).

That said, it’s important to keep in mind that correlational research design has limitations – most notably that it cannot be used to establish causality. In other words, correlation does not equal causation. To establish causality, you’ll need to move into the realm of experimental design, coming up next…
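To make the exercise-and-health example concrete, here is a hedged sketch computing Pearson’s r on simulated data. Remember: a large r here would still describe association only, not causation.

```python
# Correlational sketch: Pearson's r on simulated exercise/health data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(11)
exercise_hours = rng.uniform(0, 10, size=100)                   # per week
health_score = 50 + 3 * exercise_hours + rng.normal(0, 8, 100)  # simulated

r, p_value = stats.pearsonr(exercise_hours, health_score)
print(f"r = {r:.2f}, p = {p_value:.4f}")
```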


Experimental Research Design

Experimental research design is used to determine if there is a causal relationship between two or more variables. With this type of research design, you, as the researcher, manipulate one variable (the independent variable) while holding other variables constant, and then measure the outcome (the dependent variable). Doing so allows you to observe the effect of the former on the latter and draw conclusions about potential causality.

For example, if you wanted to measure if/how different types of fertiliser affect plant growth, you could set up several groups of plants, with each group receiving a different type of fertiliser, as well as one with no fertiliser at all. You could then measure how much each plant group grew (on average) over time and compare the results from the different groups to see which fertiliser was most effective.
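A plausible analysis for the fertiliser example is a one-way ANOVA across the groups. The sketch below uses simulated growth measurements; the group means and spreads are invented.

```python
# One-way ANOVA sketch for the fertiliser example (simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(21)
fert_a = rng.normal(12.0, 2.0, size=30)   # growth in cm, fertiliser A
fert_b = rng.normal(14.5, 2.0, size=30)   # fertiliser B
no_fert = rng.normal(10.0, 2.0, size=30)  # no fertiliser (control)

f_stat, p_value = stats.f_oneway(fert_a, fert_b, no_fert)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```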

Overall, experimental research design provides researchers with a powerful way to identify and measure causal relationships (and the direction of causality) between variables. However, developing a rigorous experimental design can be challenging as it’s not always easy to control all the variables in a study. This often results in smaller sample sizes, which can reduce the statistical power and generalisability of the results.

Moreover, experimental research design requires random assignment. This means that the researcher needs to assign participants to different groups or conditions in a way that each participant has an equal chance of being assigned to any group (note that this is not the same as random sampling). Doing so helps reduce the potential for bias and confounding variables. This need for random assignment can lead to ethics-related issues. For example, withholding a potentially beneficial medical treatment from a control group may be considered unethical in certain situations.

Quasi-Experimental Research Design

Quasi-experimental research design is used when the research aims involve identifying causal relations , but one cannot (or doesn’t want to) randomly assign participants to different groups (for practical or ethical reasons). Instead, with a quasi-experimental research design, the researcher relies on existing groups or pre-existing conditions to form groups for comparison.

For example, if you were studying the effects of a new teaching method on student achievement in a particular school district, you may be unable to randomly assign students to either group and instead have to choose classes or schools that already use different teaching methods. This way, you still achieve separate groups, without having to assign participants to specific groups yourself.

Naturally, quasi-experimental research designs have limitations when compared to experimental designs. Given that participant assignment is not random, it’s more difficult to confidently establish causality between variables, and, as a researcher, you have less control over other variables that may impact findings.

All that said, quasi-experimental designs can still be valuable in research contexts where random assignment is not possible and can often be undertaken on a much larger scale than experimental research, thus increasing the statistical power of the results. What’s important is that you, as the researcher, understand the limitations of the design and conduct your quasi-experiment as rigorously as possible, paying careful attention to any potential confounding variables.

The four most common quantitative research design types are descriptive, correlational, experimental and quasi-experimental.

Research Design: Qualitative Studies

There are many different research design types when it comes to qualitative studies, but here we’ll narrow our focus to explore the “Big 4”. Specifically, we’ll look at phenomenological design, grounded theory design, ethnographic design, and case study design.

Phenomenological Research Design

Phenomenological design involves exploring the meaning of lived experiences and how they are perceived by individuals. This type of research design seeks to understand people’s perspectives, emotions, and behaviours in specific situations. Here, the aim for researchers is to uncover the essence of human experience without making any assumptions or imposing preconceived ideas on their subjects.

For example, you could adopt a phenomenological design to study why cancer survivors have such varied perceptions of their lives after overcoming their disease. This could be achieved by interviewing survivors and then analysing the data using a qualitative analysis method such as thematic analysis to identify commonalities and differences.

Phenomenological research design typically involves in-depth interviews or open-ended questionnaires to collect rich, detailed data about participants’ subjective experiences. This richness is one of the key strengths of phenomenological research design but, naturally, it also has limitations. These include potential biases in data collection and interpretation, and the lack of generalisability of findings to broader populations.

Grounded Theory Research Design

Grounded theory (also referred to as “GT”) aims to develop theories by continuously and iteratively analysing and comparing data collected from a relatively large number of participants in a study. It takes an inductive (bottom-up) approach, with a focus on letting the data “speak for itself”, without being influenced by preexisting theories or the researcher’s preconceptions.

As an example, let’s assume your research aims involved understanding how people cope with chronic pain from a specific medical condition, with a view to developing a theory around this. In this case, grounded theory design would allow you to explore this concept thoroughly without preconceptions about what coping mechanisms might exist. You may find that some patients prefer cognitive-behavioural therapy (CBT) while others prefer to rely on herbal remedies. Based on multiple, iterative rounds of analysis, you could then develop a theory in this regard, derived directly from the data (as opposed to other preexisting theories and models).

Grounded theory typically involves collecting data through interviews or observations and then analysing it to identify patterns and themes that emerge from the data. These emerging ideas are then validated by collecting more data until a saturation point is reached (i.e., no new information can be squeezed from the data). From that base, a theory can then be developed.

As you can see, grounded theory is ideally suited to studies where the research aims involve theory generation, especially in under-researched areas. Keep in mind though that this type of research design can be quite time-intensive, given the need for multiple rounds of data collection and analysis.


Ethnographic Research Design

Ethnographic design involves observing and studying a culture-sharing group of people in their natural setting to gain insight into their behaviours, beliefs, and values. The focus here is on observing participants in their natural environment (as opposed to a controlled environment). This typically involves the researcher spending an extended period of time with the participants in their environment, carefully observing and taking field notes.

All of this is not to say that ethnographic research design relies purely on observation. On the contrary, this design typically also involves in-depth interviews to explore participants’ views, beliefs, etc. However, unobtrusive observation is a core component of the ethnographic approach.

As an example, an ethnographer may study how different communities celebrate traditional festivals or how individuals from different generations interact with technology differently. This may involve a lengthy period of observation, combined with in-depth interviews to further explore specific areas of interest that emerge as a result of the observations that the researcher has made.

As you can probably imagine, ethnographic research design can provide rich, contextually embedded insights into the socio-cultural dynamics of human behaviour within a natural, uncontrived setting. Naturally, however, it does come with its own set of challenges, including researcher bias (since the researcher can become quite immersed in the group), participant confidentiality and, predictably, ethical complexities. All of these need to be carefully managed if you choose to adopt this type of research design.

Case Study Design

With case study research design, you, as the researcher, investigate a single individual (or a single group of individuals) to gain an in-depth understanding of their experiences, behaviours or outcomes. Unlike other research designs that are aimed at larger sample sizes, case studies offer a deep dive into the specific circumstances surrounding a person, group of people, event or phenomenon, generally within a bounded setting or context.

As an example, a case study design could be used to explore the factors influencing the success of a specific small business. This would involve diving deeply into the organisation to explore and understand what makes it tick – from marketing to HR to finance. In terms of data collection, this could include interviews with staff and management, review of policy documents and financial statements, surveying customers, etc.

While the above example is focused squarely on one organisation, it’s worth noting that case study research designs can have different variations, including single-case, multiple-case and longitudinal designs. As you can see in the example, a single-case design involves intensely examining a single entity to understand its unique characteristics and complexities. Conversely, in a multiple-case design, multiple cases are compared and contrasted to identify patterns and commonalities. Lastly, in a longitudinal case design, a single case or multiple cases are studied over an extended period of time to understand how factors develop over time.

As you can see, a case study research design is particularly useful where a deep and contextualised understanding of a specific phenomenon or issue is desired. However, this strength is also its weakness. In other words, you can’t generalise the findings from a case study to the broader population. So, keep this in mind if you’re considering going the case study route.


How To Choose A Research Design

Having worked through all of these potential research designs, you’d be forgiven for feeling a little overwhelmed and wondering, “ But how do I decide which research design to use? ”. While we could write an entire post covering that alone, here are a few factors to consider that will help you choose a suitable research design for your study.

Data type: The first determining factor is naturally the type of data you plan to collect – i.e., qualitative or quantitative. This may sound obvious, but we have to be clear about this – don’t try to use a quantitative research design on qualitative data (or vice versa)!

Research aim(s) and question(s): As with all methodological decisions, your research aim and research questions will heavily influence your research design. For example, if your research aims involve developing a theory from qualitative data, grounded theory would be a strong option. Similarly, if your research aims involve identifying and measuring relationships between variables, one of the experimental designs would likely be a better option.

Time: It’s essential that you consider any time constraints you have, as this will impact the type of research design you can choose. For example, if you’ve only got a month to complete your project, a lengthy design such as ethnography wouldn’t be a good fit.

Resources: Take into account the resources realistically available to you, as these need to factor into your research design choice. For example, if you require highly specialised lab equipment to execute an experimental design, you need to be sure that you’ll have access to that before you make a decision.

Keep in mind that when it comes to research, it’s important to manage your risks and play as conservatively as possible. If your entire project relies on you achieving a huge sample, having access to niche equipment or holding interviews with very difficult-to-reach participants, you’re creating risks that could kill your project. So, be sure to think through your choices carefully and make sure that you have backup plans for any existential risks. Remember that a relatively simple methodology executed well will typically earn better marks than a highly complex methodology executed poorly.


Recap: Key Takeaways

We’ve covered a lot of ground here. Let’s recap by looking at the key takeaways:

  • Research design refers to the overall plan, structure or strategy that guides a research project, from its conception to the final analysis of data.
  • Research designs for quantitative studies include descriptive, correlational, experimental and quasi-experimental designs.
  • Research designs for qualitative studies include phenomenological, grounded theory, ethnographic and case study designs.
  • When choosing a research design, you need to consider a variety of factors, including the type of data you’ll be working with, your research aims and questions, your time and the resources available to you.

If you need a helping hand with your research design (or any other aspect of your research), check out our private coaching services.


15 Experimental Design Examples

Experimental design involves testing the effect of an independent variable on a dependent variable. It is a central feature of the scientific method.

A simple example of an experimental design is a clinical trial, where research participants are placed into control and treatment groups in order to determine the degree to which an intervention in the treatment group is effective.

There are three categories of experimental design. They are:

  • Pre-Experimental Design: Testing the effects of the independent variable on a single participant or a small group of participants (e.g. a case study).
  • Quasi-Experimental Design: Testing the effects of the independent variable on a group of participants who aren’t randomly assigned to treatment and control groups (e.g. purposive sampling).
  • True Experimental Design: Testing the effects of the independent variable on a group of participants who are randomly assigned to treatment and control groups in order to infer causality (e.g. clinical trials).

A good research student can look at a design’s methodology and correctly categorize it. Below are some typical examples of experimental designs, with their type indicated.

Experimental Design Examples

The following are examples of experimental design (with their type indicated).

1. Action Research in the Classroom

Type: Pre-Experimental Design

A teacher wants to know if a small group activity will help students learn how to conduct a survey. So, they test the activity out on a few of their classes and make careful observations regarding the outcome.

The teacher might observe that the students respond well to the activity and seem to be learning the material quickly.

However, because there was no comparison group of students that learned how to do a survey with a different methodology, the teacher cannot be certain that the activity is actually the best method for teaching that subject.

2. Study on the Impact of an Advertisement

An advertising firm has assigned two of their best staff to develop a quirky ad about eating a brand’s new breakfast product.

The team puts together an unusual skit that involves characters enjoying the breakfast while engaged in silly gestures and zany background music. The ad agency doesn’t want to spend a great deal of money on the ad just yet, so the commercial is shot with a low budget. The firm then shows the ad to a small group of people just to see their reactions.

Afterwards they determine that the ad had a strong impact on viewers so they move forward with a much larger budget.

3. Case Study

A medical doctor has a hunch that an old treatment regimen might be effective in treating a rare illness.

The treatment has never been used in this manner before. So, the doctor applies the treatment to two of their patients with the illness. After several weeks, the results seem to indicate that the treatment is not causing any change in the illness. The doctor concludes that there is no need to continue the treatment or conduct a larger study with a control condition.

4. Fertilizer and Plant Growth Study

An agricultural farmer is exploring different combinations of nutrients on plant growth, so she does a small experiment.

Instead of spending a lot of time and money applying the different mixes to acres of land and waiting several months to see the results, she decides to apply the fertilizer to some small plants in the lab.

After several weeks, it appears that the plants are responding well. They are growing rapidly and producing dense branching. She shows the plants to her colleagues and they all agree that further testing is needed under better controlled conditions.

5. Mood States Study

A team of psychologists is interested in studying how mood affects altruistic behavior. They are undecided, however, on how to put the research participants in a bad mood, so they try out a few pilot studies.

They try one suggestion and make a 3-minute video that shows sad scenes from famous heart-wrenching movies.

They then recruit a few people to watch the clips and measure their mood states afterwards.

The results indicate that people were put in a negative mood, but since there was no control group, the researchers cannot be 100% confident in the clip’s effectiveness.

6. Math Games and Learning Study

Type: Quasi-Experimental Design

Two teachers have developed a set of math games that they think will make learning math more enjoyable for their students. They decide to test out the games on their classes.

So, for two weeks, one teacher has all of her students play the math games. The other teacher uses the standard teaching techniques. At the end of the two weeks, all students take the same math test. The results indicate that students that played the math games did better on the test.

Although the teachers would like to say the games were the cause of the improved performance, they cannot be 100% sure because the study lacked random assignment. There are many other differences between the groups that played the games and those that did not.

Learn More: Random Assignment Examples

7. Economic Impact of Policy

An economic policy institute has decided to test the effectiveness of a new policy on the development of small business. The institute identifies two cities in a developing country for testing.

The two cities are similar in terms of size, economic output, and other characteristics. The city in which the new policy was implemented showed a much higher growth of small businesses than the other city.

Although the two cities were similar in many ways, the researchers must be cautious in their conclusions. Differences between the two cities other than the policy may have affected small business growth.

8. Parenting Styles and Academic Performance

Psychologists want to understand how parenting style affects children’s academic performance.

So, they identify a large group of parents that have one of four parenting styles: authoritarian, authoritative, permissive, or neglectful. The researchers then compare the grades of each group and discover that children raised with the authoritative parenting style had better grades than the other three groups. Although these results may seem convincing, it turns out that parents who use the authoritative parenting style also tend to have higher socioeconomic status and can afford to provide their children with more intellectually enriching activities like summer STEAM camps.

9. Movies and Donations Study

Will the type of movie a person watches affect the likelihood that they donate to a charitable cause? To answer this question, a researcher decides to solicit donations at the exit point of a large theatre.

He chooses to study two types of movies: action-hero and murder mystery. After collecting donations for one month, he tallies the results. Patrons that watched the action-hero movie donated more than those that watched the murder mystery. Can you think of why these results could be due to something other than the movie?

10. Gender and Mindfulness Apps Study

Researchers decide to conduct a study on whether men or women benefit from mindfulness the most. So, they recruit office workers in large corporations at all levels of management.

Then, they divide the research sample up into males and females and ask the participants to use a mindfulness app once each day for at least 15 minutes.

At the end of three weeks, the researchers give all the participants a questionnaire that measures stress and also take swabs from their saliva to measure stress hormones.

The results indicate that women responded much better to the apps than men, showing lower stress levels on both measures.

Unfortunately, it is difficult to conclude that women respond to apps better than men because the researchers could not randomly assign participants to gender. This means that there may be extraneous variables that are causing the results.

11. Eyewitness Testimony Study

Type: True Experimental Design

To study how leading questions affect eyewitnesses’ memories and produce retroactive interference, Loftus and Palmer (1974) conducted a simple experiment consistent with true experimental design.

Research participants all watched the same short video of two cars having an accident. Each participant was then randomly assigned to be asked one of two versions of a question regarding the accident.

Half of the participants were asked the question “How fast were the two cars going when they smashed into each other?” and the other half were asked “How fast were the two cars going when they contacted each other?”

Participants’ estimates were affected by the wording of the question. Participants that responded to the question with the word “smashed” gave much higher estimates than participants that responded to the word “contacted.”
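To illustrate how the two conditions might be compared statistically, here's a sketch using an independent-samples t-test. The speed estimates below are simulated for illustration only and are not Loftus and Palmer's actual data:

```python
# Sketch of how the two question conditions could be compared.
# These speed estimates (mph) are simulated, not the study's real data.
from scipy import stats

smashed   = [41, 39, 44, 40, 42, 38, 43, 45]   # "smashed into" condition
contacted = [31, 33, 30, 34, 32, 29, 35, 33]   # "contacted" condition

t_stat, p_value = stats.ttest_ind(smashed, contacted)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```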

12. Sports Nutrition Bars Study

A company wants to test the effects of their sports nutrition bars. So, they recruited students on a college campus to participate in their study. The students were randomly assigned to either the treatment condition or control condition.

Participants in the treatment condition ate two nutrition bars. Participants in the control condition ate two similar looking bars that tasted nearly identical, but offered no nutritional value.

One hour after consuming the bars, participants ran on a treadmill at a moderate pace for 15 minutes. The researchers recorded their speed, breathing rates, and level of exhaustion.

The results indicated that participants that ate the nutrition bars ran faster, breathed more easily, and reported feeling less exhausted than participants that ate the non-nutritious bar.

13. Clinical Trials

Medical researchers often use true experiments to assess the effectiveness of various treatment regimens. For a simplified example: people from the population are randomly selected to participate in a study on the effects of a medication on heart disease.

Participants are randomly assigned to either receive the medication or nothing at all. Three months later, all participants are contacted and they are given a full battery of heart disease tests.

The results indicate that participants that received the medication had significantly lower levels of heart disease than participants that received no medication.

14. Leadership Training Study

A large corporation wants to improve the leadership skills of its mid-level managers. The HR department has developed two programs, one online and the other in-person in small classes.

HR randomly selects 120 employees to participate and then randomly assigns them to one of three conditions: one-third to the online program, one-third to the in-class version, and one-third to a waiting list.

The training lasts for 6 weeks. Four months later, supervisors of the participants are asked to rate their staff in terms of leadership potential. The supervisors are not told which of their staff participated in the program.

The results indicated that the in-person participants received the highest ratings from their supervisors. The online class participants came in second, followed by those on the waiting list.

15. Reading Comprehension and Lighting Study

Different wavelengths of light may affect cognitive processing. To put this hypothesis to the test, a researcher randomly assigned students on a college campus to read a history chapter in one of three lighting conditions: natural sunlight, artificial yellow light, and standard fluorescent light.

At the end of the chapter all students took the same exam. The researcher then compared the scores on the exam for students in each condition. The results revealed that natural sunlight produced the best test scores, followed by yellow light and fluorescent light.

Therefore, the researcher concludes that natural sunlight improves reading comprehension.

See Also: Experimental Study vs Observational Study

Experimental design is a central feature of scientific research. When a study uses true experimental design, causality can be inferred, which allows researchers to provide evidence that an independent variable affects a dependent variable. This is necessary in just about every field of research, and especially in the medical sciences.


Experimental Design – Types, Methods, Guide

Experimental design is a process of planning and conducting scientific experiments to investigate a hypothesis or research question. It involves carefully designing an experiment that can test the hypothesis, and controlling for other variables that may influence the results.

Experimental design typically includes identifying the variables that will be manipulated or measured, defining the sample or population to be studied, selecting an appropriate method of sampling, choosing a method for data collection and analysis, and determining the appropriate statistical tests to use.

Types of Experimental Design

Here are the different types of experimental design:

Completely Randomized Design

In this design, participants are randomly assigned to one of two or more groups, and each group is exposed to a different treatment or condition.

Randomized Block Design

This design involves dividing participants into blocks based on a specific characteristic, such as age or gender, and then randomly assigning participants within each block to one of two or more treatment groups.
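A minimal sketch of blocked assignment follows, assuming hypothetical age-based blocks; shuffling within each block keeps the treatment groups balanced on the blocking characteristic:

```python
# Sketch: randomized block design — shuffle and assign within each block
# so treatment groups are balanced on the blocking characteristic.
import random

blocks = {
    "age_18_30": ["P01", "P02", "P03", "P04"],
    "age_31_50": ["P05", "P06", "P07", "P08"],
}
assignment = {"treatment": [], "control": []}

for members in blocks.values():
    random.shuffle(members)
    half = len(members) // 2
    assignment["treatment"] += members[:half]
    assignment["control"] += members[half:]

print(assignment)
```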

Factorial Design

In a factorial design, participants are randomly assigned to one of several groups, each of which receives a different combination of two or more independent variables.
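As an illustration, a 2 × 3 factorial design can be enumerated as the set of all factor-level combinations. The factors below (dosage and timing) are hypothetical:

```python
# Sketch: a 2 x 3 factorial design enumerated as condition combinations.
from itertools import product

dosage = ["low", "high"]                  # factor 1 (2 levels)
timing = ["morning", "noon", "evening"]   # factor 2 (3 levels)

conditions = list(product(dosage, timing))
print(conditions)   # 6 cells; participants are randomly assigned to one cell
```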

Repeated Measures Design

In this design, each participant is exposed to all of the different treatments or conditions, either in a random order or in a predetermined order.

Crossover Design

This design involves randomly assigning participants to one of two or more treatment groups, with each group receiving one treatment during the first phase of the study and then switching to a different treatment during the second phase.

Split-plot Design

In a split-plot design, the researcher manipulates one or more variables at different levels, applying one factor to whole plots and another to subplots within each plot. This is useful when one factor is harder to randomize than another, and a randomized block structure is used to control for other variables.

Nested Design

This design involves grouping participants within larger units, such as schools or households, and then randomly assigning these units to different treatment groups.

Laboratory Experiment

Laboratory experiments are conducted under controlled conditions, which allows for greater precision and accuracy. However, because laboratory conditions are not always representative of real-world conditions, the results of these experiments may not be generalizable to the population at large.

Field Experiment

Field experiments are conducted in naturalistic settings and allow for more realistic observations. However, because field experiments are not as controlled as laboratory experiments, they may be subject to more sources of error.

Experimental Design Methods

Experimental design methods refer to the techniques and procedures used to design and conduct experiments in scientific research. Here are some common experimental design methods:

Randomization

This involves randomly assigning participants to different groups or treatments to ensure that any observed differences between groups are due to the treatment and not to other factors.

Control Group

The use of a control group is an important experimental design method that involves having a group of participants that do not receive the treatment or intervention being studied. The control group is used as a baseline to compare the effects of the treatment group.

Blinding

Blinding involves keeping participants, researchers, or both unaware of which treatment group participants are in, in order to reduce the risk of bias in the results.

Counterbalancing

This involves systematically varying the order in which participants receive treatments or interventions in order to control for order effects.
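Here's a small sketch of full counterbalancing, assuming three hypothetical treatments; every possible order is used so that order effects average out across participants:

```python
# Sketch: full counterbalancing — every possible treatment order is used.
from itertools import permutations

treatments = ["A", "B", "C"]
orders = list(permutations(treatments))   # 6 possible orders
print(orders)

# Cycle hypothetical participants through the orders:
participants = [f"P{i}" for i in range(1, 13)]
schedule = {p: orders[i % len(orders)] for i, p in enumerate(participants)}
print(schedule)
```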

Replication

Replication involves conducting the same experiment with different samples or under different conditions to increase the reliability and validity of the results.

Factorial Design

This experimental design method involves manipulating multiple independent variables simultaneously to investigate their combined effects on the dependent variable.

Blocking

This involves dividing participants into subgroups or blocks based on specific characteristics, such as age or gender, in order to reduce the risk of confounding variables.

Data Collection Method

Experimental design data collection methods are techniques and procedures used to collect data in experimental research. Here are some common experimental design data collection methods:

Direct Observation

This method involves observing and recording the behavior or phenomenon of interest in real time. It may involve the use of structured or unstructured observation, and may be conducted in a laboratory or naturalistic setting.

Self-report Measures

Self-report measures involve asking participants to report their thoughts, feelings, or behaviors using questionnaires, surveys, or interviews. These measures may be administered in person or online.

Behavioral Measures

Behavioral measures involve measuring participants’ behavior directly, such as through reaction time tasks or performance tests. These measures may be administered using specialized equipment or software.

Physiological Measures

Physiological measures involve measuring participants’ physiological responses, such as heart rate, blood pressure, or brain activity, using specialized equipment. These measures may be invasive or non-invasive, and may be administered in a laboratory or clinical setting.

Archival Data

Archival data involves using existing records or data, such as medical records, administrative records, or historical documents, as a source of information. These data may be collected from public or private sources.

Computerized Measures

Computerized measures involve using software or computer programs to collect data on participants’ behavior or responses. These measures may include reaction time tasks, cognitive tests, or other types of computer-based assessments.

Video Recording

Video recording involves recording participants’ behavior or interactions using cameras or other recording equipment. This method can be used to capture detailed information about participants’ behavior or to analyze social interactions.

Data Analysis Method

Experimental design data analysis methods refer to the statistical techniques and procedures used to analyze data collected in experimental research. Here are some common experimental design data analysis methods:

Descriptive Statistics

Descriptive statistics are used to summarize and describe the data collected in the study. This includes measures such as mean, median, mode, range, and standard deviation.
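For illustration, these summary measures can be computed with Python's standard library; the scores below are invented:

```python
# Sketch: descriptive statistics with Python's standard library.
import statistics

scores = [72, 85, 78, 90, 66, 81, 85, 74]  # hypothetical test scores

print("mean:", statistics.mean(scores))
print("median:", statistics.median(scores))
print("mode:", statistics.mode(scores))
print("range:", max(scores) - min(scores))
print("std dev:", round(statistics.stdev(scores), 2))
```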

Inferential Statistics

Inferential statistics are used to make inferences or generalizations about a larger population based on the data collected in the study. This includes hypothesis testing and estimation.

Analysis of Variance (ANOVA)

ANOVA is a statistical technique used to compare means across two or more groups in order to determine whether there are significant differences between the groups. There are several types of ANOVA, including one-way ANOVA, two-way ANOVA, and repeated measures ANOVA.
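As a sketch, a two-way ANOVA (two factors plus their interaction) can be run with statsmodels; the small data frame below is invented purely for illustration:

```python
# Sketch of a two-way ANOVA using statsmodels (invented data).
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

df = pd.DataFrame({
    "score":  [78, 82, 69, 74, 88, 91, 80, 83],
    "method": ["games", "games", "standard", "standard"] * 2,
    "cohort": ["A"] * 4 + ["B"] * 4,
})

model = smf.ols("score ~ C(method) * C(cohort)", data=df).fit()
print(anova_lm(model, typ=2))  # main effects and the interaction
```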

Regression Analysis

Regression analysis is used to model the relationship between two or more variables in order to determine the strength and direction of the relationship. There are several types of regression analysis, including linear regression, logistic regression, and multiple regression.
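A minimal sketch of simple linear regression with scipy, using invented data, shows how the strength and direction of a relationship are estimated:

```python
# Sketch: simple linear regression with scipy (invented data).
from scipy import stats

hours_studied = [1, 2, 3, 4, 5, 6, 7, 8]
exam_score    = [52, 55, 61, 64, 70, 72, 79, 83]

result = stats.linregress(hours_studied, exam_score)
print(f"slope = {result.slope:.2f}, intercept = {result.intercept:.2f}")
print(f"r^2 = {result.rvalue**2:.3f}, p = {result.pvalue:.4f}")
```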

Factor Analysis

Factor analysis is used to identify underlying factors or dimensions in a set of variables. This can be used to reduce the complexity of the data and identify patterns in the data.

Structural Equation Modeling (SEM)

SEM is a statistical technique used to model complex relationships between variables. It can be used to test complex theories and models of causality.

Cluster Analysis

Cluster analysis is used to group similar cases or observations together based on similarities or differences in their characteristics.

Time Series Analysis

Time series analysis is used to analyze data collected over time in order to identify trends, patterns, or changes in the data.

Multilevel Modeling

Multilevel modeling is used to analyze data that is nested within multiple levels, such as students nested within schools or employees nested within companies.

Applications of Experimental Design 

Experimental design is a versatile research methodology that can be applied in many fields. Here are some applications of experimental design:

  • Medical Research: Experimental design is commonly used to test new treatments or medications for various medical conditions. This includes clinical trials to evaluate the safety and effectiveness of new drugs or medical devices.
  • Agriculture: Experimental design is used to test new crop varieties, fertilizers, and other agricultural practices. This includes randomized field trials to evaluate the effects of different treatments on crop yield, quality, and pest resistance.
  • Environmental science: Experimental design is used to study the effects of environmental factors, such as pollution or climate change, on ecosystems and wildlife. This includes controlled experiments to study the effects of pollutants on plant growth or animal behavior.
  • Psychology: Experimental design is used to study human behavior and cognitive processes. This includes experiments to test the effects of different interventions, such as therapy or medication, on mental health outcomes.
  • Engineering: Experimental design is used to test new materials, designs, and manufacturing processes in engineering applications. This includes laboratory experiments to test the strength and durability of new materials, or field experiments to test the performance of new technologies.
  • Education: Experimental design is used to evaluate the effectiveness of teaching methods, educational interventions, and programs. This includes randomized controlled trials to compare different teaching methods or evaluate the impact of educational programs on student outcomes.
  • Marketing: Experimental design is used to test the effectiveness of marketing campaigns, pricing strategies, and product designs. This includes experiments to test the impact of different marketing messages or pricing schemes on consumer behavior.

Examples of Experimental Design 

Here are some examples of experimental design in different fields:

  • Example in Medical research: A study that investigates the effectiveness of a new drug treatment for a particular condition. Patients are randomly assigned to either a treatment group or a control group, with the treatment group receiving the new drug and the control group receiving a placebo. The outcomes, such as improvement in symptoms or side effects, are measured and compared between the two groups.
  • Example in Education research: A study that examines the impact of a new teaching method on student learning outcomes. Students are randomly assigned to either a group that receives the new teaching method or a group that receives the traditional teaching method. Student achievement is measured before and after the intervention, and the results are compared between the two groups.
  • Example in Environmental science: A study that tests the effectiveness of a new method for reducing pollution in a river. Two sections of the river are selected, with one section treated with the new method and the other section left untreated. The water quality is measured before and after the intervention, and the results are compared between the two sections.
  • Example in Marketing research: A study that investigates the impact of a new advertising campaign on consumer behavior. Participants are randomly assigned to either a group that is exposed to the new campaign or a group that is not. Their behavior, such as purchasing or product awareness, is measured and compared between the two groups.
  • Example in Social psychology: A study that examines the effect of a new social intervention on reducing prejudice towards a marginalized group. Participants are randomly assigned to either a group that receives the intervention or a control group that does not. Their attitudes and behavior towards the marginalized group are measured before and after the intervention, and the results are compared between the two groups.

When to use Experimental Research Design 

Experimental research design should be used when a researcher wants to establish a cause-and-effect relationship between variables. It is particularly useful when studying the impact of an intervention or treatment on a particular outcome.

Here are some situations where experimental research design may be appropriate:

  • When studying the effects of a new drug or medical treatment: Experimental research design is commonly used in medical research to test the effectiveness and safety of new drugs or medical treatments. By randomly assigning patients to treatment and control groups, researchers can determine whether the treatment is effective in improving health outcomes.
  • When evaluating the effectiveness of an educational intervention: An experimental research design can be used to evaluate the impact of a new teaching method or educational program on student learning outcomes. By randomly assigning students to treatment and control groups, researchers can determine whether the intervention is effective in improving academic performance.
  • When testing the effectiveness of a marketing campaign: An experimental research design can be used to test the effectiveness of different marketing messages or strategies. By randomly assigning participants to treatment and control groups, researchers can determine whether the marketing campaign is effective in changing consumer behavior.
  • When studying the effects of an environmental intervention: Experimental research design can be used to study the impact of environmental interventions, such as pollution reduction programs or conservation efforts. By randomly assigning locations or areas to treatment and control groups, researchers can determine whether the intervention is effective in improving environmental outcomes.
  • When testing the effects of a new technology: An experimental research design can be used to test the effectiveness and safety of new technologies or engineering designs. By randomly assigning participants or locations to treatment and control groups, researchers can determine whether the new technology is effective in achieving its intended purpose.

How to Conduct Experimental Research

Here are the steps to conduct Experimental Research:

  • Identify a Research Question: Start by identifying a research question that you want to answer through the experiment. The question should be clear, specific, and testable.
  • Develop a Hypothesis: Based on your research question, develop a hypothesis that predicts the relationship between the independent and dependent variables. The hypothesis should be clear and testable.
  • Design the Experiment: Determine the type of experimental design you will use, such as a between-subjects design or a within-subjects design. Also, decide on the experimental conditions, such as the number of independent variables, the levels of the independent variable, and the dependent variable to be measured.
  • Select Participants: Select the participants who will take part in the experiment. They should be representative of the population you are interested in studying.
  • Randomly Assign Participants to Groups: If you are using a between-subjects design, randomly assign participants to groups to control for individual differences.
  • Conduct the Experiment: Conduct the experiment by manipulating the independent variable(s) and measuring the dependent variable(s) across the different conditions.
  • Analyze the Data: Analyze the data using appropriate statistical methods to determine if there is a significant effect of the independent variable(s) on the dependent variable(s).
  • Draw Conclusions: Based on the data analysis, draw conclusions about the relationship between the independent and dependent variables. If the results support the hypothesis, it is retained; if they do not, it is rejected.
  • Communicate the Results: Finally, communicate the results of the experiment through a research report or presentation. Include the purpose of the study, the methods used, the results obtained, and the conclusions drawn.

Purpose of Experimental Design 

The purpose of experimental design is to control and manipulate one or more independent variables to determine their effect on a dependent variable. Experimental design allows researchers to systematically investigate causal relationships between variables, and to establish cause-and-effect relationships between the independent and dependent variables. Through experimental design, researchers can test hypotheses and make inferences about the population from which the sample was drawn.

Experimental design provides a structured approach to designing and conducting experiments, ensuring that the results are reliable and valid. By carefully controlling for extraneous variables that may affect the outcome of the study, experimental design allows researchers to isolate the effect of the independent variable(s) on the dependent variable(s), and to minimize the influence of other factors that may confound the results.

Experimental design also allows researchers to generalize their findings to the larger population from which the sample was drawn. By randomly selecting participants and using statistical techniques to analyze the data, researchers can make inferences about the larger population with a high degree of confidence.

Overall, the purpose of experimental design is to provide a rigorous, systematic, and scientific method for testing hypotheses and establishing cause-and-effect relationships between variables. Experimental design is a powerful tool for advancing scientific knowledge and informing evidence-based practice in various fields, including psychology, biology, medicine, engineering, and social sciences.

Advantages of Experimental Design 

Experimental design offers several advantages in research. Here are some of the main advantages:

  • Control over extraneous variables: Experimental design allows researchers to control for extraneous variables that may affect the outcome of the study. By manipulating the independent variable and holding all other variables constant, researchers can isolate the effect of the independent variable on the dependent variable.
  • Establishing causality: Experimental design allows researchers to establish causality by manipulating the independent variable and observing its effect on the dependent variable. This allows researchers to determine whether changes in the independent variable cause changes in the dependent variable.
  • Replication: Experimental design allows researchers to replicate their experiments to ensure that the findings are consistent and reliable. Replication is important for establishing the validity and generalizability of the findings.
  • Random assignment: Experimental design often involves randomly assigning participants to conditions. This helps to ensure that individual differences between participants are evenly distributed across conditions, which increases the internal validity of the study.
  • Precision: Experimental design allows researchers to measure variables with precision, which can increase the accuracy and reliability of the data.
  • Generalizability: If the study is well-designed, experimental design can increase the generalizability of the findings. By controlling for extraneous variables and using random assignment, researchers can increase the likelihood that the findings will apply to other populations and contexts.

Limitations of Experimental Design

Experimental design has some limitations that researchers should be aware of. Here are some of the main limitations:

  • Artificiality: Experimental design often involves creating artificial situations that may not reflect real-world situations. This can limit the external validity of the findings, or the extent to which the findings can be generalized to real-world settings.
  • Ethical concerns: Some experimental designs may raise ethical concerns, particularly if they involve manipulating variables that could cause harm to participants or if they involve deception.
  • Participant bias: Participants in experimental studies may modify their behavior in response to the experiment, which can lead to participant bias.
  • Limited generalizability: The conditions of the experiment may not reflect the complexities of real-world situations. As a result, the findings may not be applicable to all populations and contexts.
  • Cost and time: Experimental design can be expensive and time-consuming, particularly if the experiment requires specialized equipment or if the sample size is large.
  • Researcher bias: Researchers may unintentionally bias the results of the experiment if they have expectations or preferences for certain outcomes.
  • Lack of feasibility: Experimental design may not be feasible in some cases, particularly if the research question involves variables that cannot be manipulated or controlled.


Experimental Research: What it is + Types of designs

Any research conducted under scientifically acceptable conditions uses experimental methods. The success of an experimental study hinges on the researcher confirming that the change in the dependent variable is caused solely by the manipulation of the independent variable. The research should establish a notable cause and effect.

What is Experimental Research?

Experimental research is a study conducted with a scientific approach using two sets of variables. The first set acts as a constant (the control), which you use to measure the differences observed in the second set. Quantitative research methods, for example, are experimental.

If you don’t have enough data to support your decisions, you must first determine the facts. This research gathers the data necessary to help you make better decisions.

You can conduct experimental research in the following situations:

  • Time is a vital factor in establishing a relationship between cause and effect.
  • The behavior between cause and effect is invariable (consistent).
  • You wish to understand the importance of cause and effect.

Experimental Research Design Types

The classic experimental design definition is: “The methods used to collect data in experimental studies.”

There are three primary types of experimental design:

  • Pre-experimental research design
  • True experimental research design
  • Quasi-experimental research design

The way you classify research subjects based on conditions or groups determines the type of research design you should use.

1. Pre-Experimental Design

A group, or various groups, are kept under observation after implementing cause and effect factors. You’ll conduct this research to understand whether further investigation is necessary for these particular groups.

You can break down pre-experimental research further into three types:

  • One-shot Case Study Research Design
  • One-group Pretest-posttest Research Design
  • Static-group Comparison

2. True Experimental Design

It relies on statistical analysis to support or reject a hypothesis, making it the most rigorous form of experimental research. Of the types of experimental design, only true design can establish a cause-effect relationship within a group. In a true experiment, three factors need to be satisfied:

  • There is a Control Group, which won’t be subject to changes, and an Experimental Group, which will experience the changed variables.
  • A variable that can be manipulated by the researcher
  • Random distribution

This experimental research method commonly occurs in the physical sciences.

3. Quasi-Experimental Design

The word “Quasi” indicates similarity. A quasi-experimental design is similar to an experimental one, but it is not the same. The difference between the two is the assignment of a control group. In this research, an independent variable is manipulated, but the participants of a group are not randomly assigned. Quasi-research is used in field settings where random assignment is either irrelevant or not required.

Importance of Experimental Design

Experimental research is a powerful tool for understanding cause-and-effect relationships. It allows us to manipulate variables and observe the effects, which is crucial for understanding how different factors influence the outcome of a study.

But the importance of experimental research goes beyond that. It’s a critical method for many scientific and academic studies. It allows us to test theories, develop new products, and make groundbreaking discoveries.

For example, this research is essential for developing new drugs and medical treatments. Researchers can understand how a new drug works by manipulating dosage and administration variables and identifying potential side effects.

Similarly, experimental research is used in the field of psychology to test theories and understand human behavior. By manipulating variables such as stimuli, researchers can gain insights into how the brain works and identify new treatment options for mental health disorders.

It is also widely used in the field of education. It allows educators to test new teaching methods and identify what works best. By manipulating variables such as class size, teaching style, and curriculum, researchers can understand how students learn and identify new ways to improve educational outcomes.

In addition, experimental research is a powerful tool for businesses and organizations. By manipulating variables such as marketing strategies, product design, and customer service, companies can understand what works best and identify new opportunities for growth.

Advantages of Experimental Research

When talking about this research, we can think of human life. Babies do their own rudimentary experiments (such as putting objects in their mouths) to learn about the world around them, while older children and teens do experiments at school to learn more about science.

Scientists throughout history have used this kind of research to test their hypotheses. For example, Galileo Galilei and Antoine Lavoisier conducted various experiments to discover key concepts in physics and chemistry. The same is true of modern experts, who use this scientific method to see if new drugs are effective, discover treatments for diseases, and create new electronic devices (among others).

It’s vital to test new ideas or theories. Why put time, effort, and funding into something that may not work?

This research allows you to test your idea in a controlled environment before marketing. It also provides the best method to test your theory thanks to the following advantages:


  • Researchers have greater control over variables, which helps them obtain the desired results.
  • The subject or industry does not impact the effectiveness of experimental research. Any industry can implement it for research purposes.
  • The results are specific.
  • After analyzing the results, you can apply your findings to similar ideas or situations.
  • You can identify the cause and effect of a hypothesis. Researchers can further analyze this relationship to determine more in-depth ideas.
  • Experimental research makes an ideal starting point. The data you collect is a foundation for building more ideas and conducting more action research.

Whether you want to know how the public will react to a new product or if a certain food increases the chance of disease, experimental research is the best place to start. Begin your research by finding subjects using QuestionPro Audience and other tools today.



Study/Experimental/Research Design: Much More Than Statistics

Kenneth L. Knight

Brigham Young University, Provo, UT

The purpose of study, experimental, or research design in scientific manuscripts has changed significantly over the years. It has evolved from an explanation of the design of the experiment (ie, data gathering or acquisition) to an explanation of the statistical analysis. This practice makes “Methods” sections hard to read and understand.

Objective:

To clarify the difference between study design and statistical analysis, to show the advantages of a properly written study design on article comprehension, and to encourage authors to correctly describe study designs.

Description:

The role of study design is explored from the introduction of the concept by Fisher through modern-day scientists and the AMA Manual of Style . At one time, when experiments were simpler, the study design and statistical design were identical or very similar. With the complex research that is common today, which often includes manipulating variables to create new variables and the multiple (and different) analyses of a single data set, data collection is very different than statistical design. Thus, both a study design and a statistical design are necessary.

Advantages:

Scientific manuscripts will be much easier to read and comprehend. A proper experimental design serves as a road map to the study methods, helping readers to understand more clearly how the data were obtained and, therefore, assisting them in properly analyzing the results.

Study, experimental, or research design is the backbone of good research. It directs the experiment by orchestrating data collection, defines the statistical analysis of the resultant data, and guides the interpretation of the results. When properly described in the written report of the experiment, it serves as a road map to readers, 1 helping them negotiate the “Methods” section, and, thus, it improves the clarity of communication between authors and readers.

A growing trend is to equate study design with only the statistical analysis of the data. The design statement typically is placed at the end of the “Methods” section as a subsection called “Experimental Design” or as part of a subsection called “Data Analysis.” This placement, however, equates experimental design and statistical analysis, minimizing the effect of experimental design on the planning and reporting of an experiment. This linkage is inappropriate, because some of the elements of the study design that should be described at the beginning of the “Methods” section are instead placed in the “Statistical Analysis” section or, worse, are absent from the manuscript entirely.

Have you ever interrupted your reading of the “Methods” to sketch out the variables in the margins of the paper as you attempt to understand how they all fit together? Or have you jumped back and forth from the early paragraphs of the “Methods” section to the “Statistics” section to try to understand which variables were collected and when? These efforts would be unnecessary if a road map at the beginning of the “Methods” section outlined how the independent variables were related, which dependent variables were measured, and when they were measured. When they were measured is especially important if the variables used in the statistical analysis were a subset of the measured variables or were computed from measured variables (such as change scores).

The purpose of this Communications article is to clarify the purpose and placement of study design elements in an experimental manuscript. Adopting these ideas may improve your science and surely will enhance the communication of that science. These ideas will make experimental manuscripts easier to read and understand and, therefore, will allow them to become part of readers' clinical decision making.

WHAT IS A STUDY (OR EXPERIMENTAL OR RESEARCH) DESIGN?

The terms study design, experimental design, and research design are often thought to be synonymous and are sometimes used interchangeably in a single paper. Avoid doing so. Use the term that is preferred by the style manual of the journal for which you are writing. Study design is the preferred term in the AMA Manual of Style , 2 so I will use it here.

A study design is the architecture of an experimental study 3 and a description of how the study was conducted, 4 including all elements of how the data were obtained. 5 The study design should be the first subsection of the “Methods” section in an experimental manuscript (see the Table ). “Statistical Design” or, preferably, “Statistical Analysis” or “Data Analysis” should be the last subsection of the “Methods” section.

Table. Elements of a “Methods” Section

The “Study Design” subsection describes how the variables and participants interacted. It begins with a general statement of how the study was conducted (eg, crossover trials, parallel, or observational study). 2 The second element, which usually begins with the second sentence, details the number of independent variables or factors, the levels of each variable, and their names. A shorthand way of doing so is with a statement such as “A 2 × 4 × 8 factorial guided data collection.” This tells us that there were 3 independent variables (factors), with 2 levels of the first factor, 4 levels of the second factor, and 8 levels of the third factor. Following is a sentence that names the levels of each factor: for example, “The independent variables were sex (male or female), training program (eg, walking, running, weight lifting, or plyometrics), and time (2, 4, 6, 8, 10, 15, 20, or 30 weeks).” Such an approach clearly outlines for readers how the various procedures fit into the overall structure and, therefore, enhances their understanding of how the data were collected. Thus, the design statement is a road map of the methods.
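To make the shorthand concrete, here is a minimal sketch (ours, not the article's) that enumerates the 64 cells of the 2 × 4 × 8 factorial named in the example design statement:

```python
from itertools import product

# Factors and levels from the example design statement (2 x 4 x 8 factorial).
sex = ["male", "female"]
program = ["walking", "running", "weight lifting", "plyometrics"]
weeks = [2, 4, 6, 8, 10, 15, 20, 30]

cells = list(product(sex, program, weeks))
print(len(cells))   # 64 = 2 * 4 * 8 design cells
print(cells[0])     # ('male', 'walking', 2)
```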

The dependent (or measurement or outcome) variables are then named. Details of how they were measured are not given at this point in the manuscript but are explained later in the “Instruments” and “Procedures” subsections.

Next is a paragraph detailing who the participants were and how they were selected, placed into groups, and assigned to a particular treatment order, if the experiment was a repeated-measures design. And although not a part of the design per se, a statement about obtaining written informed consent from participants and institutional review board approval is usually included in this subsection.

The nuts and bolts of the “Methods” section follow, including such things as equipment, materials, protocols, etc. These are beyond the scope of this commentary, however, and so will not be discussed.

The last subsection of the “Methods” section, and the final element of the study design, is “Data Analysis.” It begins with an explanation of any data manipulation, such as how data were combined or how new variables (eg, ratios or differences between collected variables) were calculated. Next, readers are told of the statistical measures used to analyze the data, such as a mixed 2 × 4 × 8 analysis of variance (ANOVA) with 2 between-groups factors (sex and training program) and 1 within-groups factor (time of measurement). Researchers should state and reference the statistical package and the procedure(s) within that package used to compute the statistics. (Various statistical packages perform analyses slightly differently, so it is important to know both the package and the specific procedure used.) This detail allows readers to judge the appropriateness of the statistical measures and the conclusions drawn from the data.
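As an illustration of naming both the package and the procedure, the sketch below runs a simpler two-way between-subjects factorial ANOVA in Python with statsmodels (our example, with made-up data; the mixed three-factor analysis described above would require a procedure that also supports within-subject factors):

```python
import numpy as np
import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm

# Hypothetical between-subjects data: strength scores by sex and program.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "sex": np.repeat(["male", "female"], 40),
    "program": np.tile(np.repeat(["walking", "running", "lifting", "plyo"], 10), 2),
    "strength": rng.normal(100, 15, size=80),
})

# Two-way factorial ANOVA; the package (statsmodels) and procedure
# (ols + anova_lm) are exactly what a "Data Analysis" subsection should name.
model = ols("strength ~ C(sex) * C(program)", data=df).fit()
print(anova_lm(model, typ=2))
```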

STATISTICAL DESIGN VERSUS STATISTICAL ANALYSIS

Avoid using the term statistical design . Statistical methods are only part of the overall design. The term gives too much emphasis to the statistics, which are important, but only one of many tools used in interpreting data and only part of the study design:

The most important issues in biostatistics are not expressed with statistical procedures. The issues are inherently scientific, rather than purely statistical, and relate to the architectural design of the research, not the numbers with which the data are cited and interpreted. 6

Stated another way, “The justification for the analysis lies not in the data collected but in the manner in which the data were collected.” 3 “Without the solid foundation of a good design, the edifice of statistical analysis is unsafe.” 7 (pp4–5)

The intertwining of study design and statistical analysis may have been caused (unintentionally) by R.A. Fisher, “… a genius who almost single-handedly created the foundations for modern statistical science.” 8 Most research did not involve statistics until Fisher invented the concepts and procedures of ANOVA (in 1921) 9 , 10 and experimental design (in 1935). 11 His books became standard references for scientists in many disciplines. As a result, many ANOVA books were titled Experimental Design (see, for example, Edwards 12 ), and ANOVA courses taught in psychology and education departments included the words experimental design in their course titles.

Before the widespread use of computers to analyze data, designs were much simpler, and often there was little difference between study design and statistical analysis. So combining the 2 elements did not cause serious problems. This is no longer true, however, for 3 reasons: (1) Research studies are becoming more complex, with multiple independent and dependent variables. The procedures sections of these complex studies can be difficult to understand if your only reference point is the statistical analysis and design. (2) Dependent variables are frequently measured at different times. (3) How the data were collected is often not directly correlated with the statistical design.

For example, assume the goal is to determine the strength gain in novice and experienced athletes as a result of 3 strength training programs. Rate of change in strength is not a measurable variable; rather, it is calculated from strength measurements taken at various time intervals during the training. So the study design would be a 2 × 2 × 3 factorial with independent variables of time (pretest or posttest), experience (novice or advanced), and training (isokinetic, isotonic, or isometric) and a dependent variable of strength. The statistical design , however, would be a 2 × 3 factorial with independent variables of experience (novice or advanced) and training (isokinetic, isotonic, or isometric) and a dependent variable of strength gain. Note that data were collected according to a 3-factor design but were analyzed according to a 2-factor design and that the dependent variables were different. So a single design statement, usually a statistical design statement, would not communicate which data were collected or how. Readers would be left to figure out on their own how the data were collected.
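A short sketch of this reduction, with hypothetical data, shows how the 2 × 2 × 3 collection design collapses to the 2 × 3 analysis design once gain scores are computed:

```python
import numpy as np
import pandas as pd

# Hypothetical data collected under the 2 x 2 x 3 study design:
# strength measured pre and post for each athlete.
rng = np.random.default_rng(2)
long = pd.DataFrame({
    "athlete": np.repeat(range(30), 2),
    "experience": np.repeat(np.tile(["novice", "advanced"], 15), 2),
    "training": np.repeat(np.tile(["isokinetic", "isotonic", "isometric"], 10), 2),
    "time": ["pre", "post"] * 30,
    "strength": rng.normal(100, 15, size=60),
})

# Reduce to the 2 x 3 statistical design: one gain score per athlete.
wide = long.pivot_table(index=["athlete", "experience", "training"],
                        columns="time", values="strength").reset_index()
wide["gain"] = wide["post"] - wide["pre"]   # dependent variable of the analysis
```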

MULTIVARIATE RESEARCH AND THE NEED FOR STUDY DESIGNS

With the advent of electronic data gathering and computerized data handling and analysis, research projects have increased in complexity. Many projects involve multiple dependent variables measured at different times, and, therefore, multiple design statements may be needed for both data collection and statistical analysis. Consider, for example, a study of the effects of heat and cold on neural inhibition. The variables Hmax and Mmax are measured 3 times each: before, immediately after, and 30 minutes after a 20-minute treatment with heat or cold. Muscle temperature might be measured each minute before, during, and after the treatment. Although the minute-by-minute data are important for graphing temperature fluctuations during the procedure, only 3 temperatures (time 0, time 20, and time 50) are used for statistical analysis. A single dependent variable, the Hmax:Mmax ratio, is computed to illustrate neural inhibition. Again, a single statistical design statement would tell little about how the data were obtained. And in this example, separate design statements would be needed for the temperature and Hmax:Mmax measurements.
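The sketch below, again with invented numbers, mirrors that workflow: dense temperature data are subset to the three analyzed timepoints, and the analyzed variable is computed rather than measured directly:

```python
import numpy as np
import pandas as pd

# Hypothetical minute-by-minute muscle temperatures for one participant.
rng = np.random.default_rng(3)
temps = pd.Series(rng.normal(35, 1, size=51),
                  index=pd.RangeIndex(0, 51, name="minute"))

# Only three of the 51 collected temperatures enter the statistical analysis.
analysis_temps = temps.loc[[0, 20, 50]]

# The analyzed dependent variable is computed, not measured directly.
h_max, m_max = 6.2, 18.5            # illustrative amplitudes
hm_ratio = h_max / m_max            # Hmax:Mmax ratio for neural inhibition
```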

As stated earlier, drawing conclusions from the data depends more on how the data were measured than on how they were analyzed. 3 , 6 , 7 , 13 So a single study design statement (or multiple such statements) at the beginning of the “Methods” section acts as a road map to the study and, thus, increases scientists' and readers' comprehension of how the experiment was conducted (ie, how the data were collected). Appropriate study design statements also increase the accuracy of conclusions drawn from the study.

CONCLUSIONS

The goal of scientific writing, or any writing, for that matter, is to communicate information. Including 2 design statements or subsections in scientific papers—one to explain how the data were collected and another to explain how they were statistically analyzed—will improve the clarity of communication and bring praise from readers. To summarize:

  • Purge from your thoughts and vocabulary the idea that experimental design and statistical design are synonymous.
  • Study or experimental design plays a much broader role than simply defining and directing the statistical analysis of an experiment.
  • A properly written study design serves as a road map to the “Methods” section of an experiment and, therefore, improves communication with the reader.
  • Study design should include a description of the type of design used, each factor (and each level) involved in the experiment, and the time at which each measurement was made.
  • Clarify when the variables involved in data collection and data analysis are different, such as when data analysis involves only a subset of a collected variable or a resultant variable from the mathematical manipulation of 2 or more collected variables.

Acknowledgments

Thanks to Thomas A. Cappaert, PhD, ATC, CSCS, CSE, for suggesting the link between R.A. Fisher and the melding of the concepts of research design and statistics.

Understanding Nursing Research

Experimental Design

Correlational, or non-experimental, research is research where subjects are not acted upon, but where research questions can be answered merely by observing subjects.

An example of a correlational research question could be, "What is the relationship between parents who make their children wash their hands at home and hand washing at school?" This is a question that I could answer without acting upon the children or their parents.

Quasi-Experimental Research is research where an independent variable is manipulated, but the subjects of a study are not randomly assigned to an action (or a lack of action).

An example of quasi-experimental research would be to ask, "What is the effect of hand-washing posters in school bathrooms?" Researchers might put posters in the same place in all of the bathrooms of a single high school and measure how often students washed their hands. The study is quasi-experimental because the students are not randomly assigned to the intervention; they participate simply because their school is receiving the intervention (posters in the bathroom).

Experimental Research is research that randomly selects subjects to participate in a study that includes some kind of intervention, or action intended to have an effect on the participants.

An example of an experimental design would be to randomly select schools to participate in the hand-washing poster campaign. The schools would then be randomly assigned to either the poster group or the control group, which would receive no posters in its bathrooms. Having a control group allows researchers to compare the group of students who received an intervention to those who did not.

How to tell:

The only way to tell what kind of experimental design an article uses is to read the Methodologies section of the article. This section should describe whether and how participants were selected, and how they were assigned to either a control or an intervention group.

Random Selection means subjects are randomly selected to participate in a study that involves an intervention.

Random Assignment means subjects are randomly assigned to whether they will be in a control group or a group that receives an intervention.

Controlled Trials are trials or studies that include a "control" group. If you were researching whether hand-washing posters were effective in getting students to wash their hands, you would put the posters in all of the bathrooms of one high school and in none of the bathrooms of another high school with a similar demographic makeup. The high school without the posters would be the control group. The control group allows you to see just how effective or ineffective your intervention was when you compare data at the end of your study.

Randomized Controlled Trials (RCTs) are also sometimes called Randomized Clinical Trials. These are studies where the participants are not necessarily randomly selected, but they are randomly sorted into either an intervention group or a control group. So in the example above, the researchers might identify twenty high schools in South Texas that were relatively similar (demographic makeup, household incomes, size, etc.) and randomly decide which schools received hand-washing posters and which did not.
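Random assignment is simple enough to express in a few lines of code. The sketch below (a toy example; the school names and group sizes are invented) shuffles twenty hypothetical schools and splits them into intervention and control groups:

```python
import random

# Hypothetical list of 20 comparable schools (names are illustrative only).
schools = [f"School {i:02d}" for i in range(1, 21)]

random.seed(42)          # fixed seed so the example is reproducible
random.shuffle(schools)  # random assignment, not random selection

# First half receives the hand-washing posters; second half is the control.
intervention, control = schools[:10], schools[10:]
print("Intervention:", intervention)
print("Control:     ", control)
```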

Telling whether an article you're looking at is a Randomized Control Trial (RCT) is relatively simple.

First, check the article's publication information. Sometimes, even before you open an article, the publication record will label it as a randomized controlled trial.

If you can't find it in the publication information, the next step is to read the article's Abstract and Methodologies. In at least one of these sections, the researchers will state whether or not they used a control group in their study and whether or not the control and intervention groups were assigned randomly.

The Methodologies section in particular should clearly explain how the participants were sorted into groups. If the authors state that participants were randomly assigned to groups, then that study is a Randomized Control Trial (RCT). If nothing about randomization is mentioned, it is safe to assume the article is not an RCT.

In an article's Methodologies section, look for an explicit statement that participants were randomly assigned to the intervention or control group.

If you know when you begin your research that you're interested in just Randomized Control Trials (RCTs), you can tell the database to just show you results that include Randomized Control Trials (RCTs).

In CINAHL, you can do that by scrolling down on the homepage and checking the box next to "Randomized Control Trials"


If you keep scrolling, you'll get to a box that says "Publication Type." You can also scroll through those options and select "Randomized Control Trials." 


If you're in PubMed, then enter your search terms and hit "Search." Then, when you're on the results page, click "Randomized Controlled Trial" under "Article types."

If you don't see a "Randomized Controlled Trial" option, click "Customize...," check the box next to "Randomized Controlled Trial," click the blue "show" button, and then click on "Randomized Controlled Trial" to make sure you've selected it.


This is a really helpful way to limit your search results to just the kinds of articles you're interested in, but you should always double check that an article is in fact about a Randomized Control Trial (RCT) by reading the article's Methodologies section thoroughly.


Evolving Improved Sampling Protocols for Dose–Response Modelling Using Genetic Algorithms with a Profile-Likelihood Metric

Original Article · Open Access · Published: 08 May 2024 · Volume 86, article number 70 (2024)

Nicholas N. Lam, Rua Murray & Paul D. Docherty

Practical limitations on the quality and quantity of data can limit the precision of parameter identification in mathematical models. Model-based experimental design approaches have been developed to minimise parameter uncertainty, but the majority of these approaches have relied on first-order approximations of model sensitivity at a local point in parameter space. Practical identifiability approaches such as profile-likelihood have shown potential for quantifying parameter uncertainty beyond linear approximations. This research presents a genetic algorithm approach to optimise sample timing across various parameterisations of a demonstrative PK-PD model, with the goal of aiding experimental design. The optimisation relies on a chosen metric of parameter uncertainty based on the profile-likelihood method. Additionally, the approach considers cases where multiple parameter scenarios may require simultaneous optimisation. The genetic algorithm approach was able to locate near-optimal sampling protocols for a wide range of sample numbers (n = 3–20), and it reduced the parameter variance metric by 33–37% on average. The profile-likelihood metric also correlated well with an existing Monte Carlo-based metric (with a worst-case r > 0.89), while reducing computational cost by an order of magnitude. The combination of the new profile-likelihood metric and the genetic algorithm demonstrates the feasibility of considering the nonlinear nature of models in optimal experimental design at a reasonable computational cost. The outputs of such a process could allow experimenters either to improve parameter certainty given a fixed number of samples, or to reduce sample quantity while retaining the same level of parameter certainty.


1 Introduction

Parameter identification is the process of determining the optimal values of a set of model parameters to fit the model to observed behaviour (Villaverde and Banga 2014). Parameter identifiability analysis is the process of determining how reliably parameters can be estimated from data. When considering finite data, this is often called practical identifiability analysis, while towards the infinite-data limit it becomes structural identifiability analysis (Simpson et al. 2020). Practical limitations in experimentation, such as measurement noise and discrete sampling locations, can restrict the information available for parameter identification, potentially leading to practical identifiability issues (Raue et al. 2009; Hines et al. 2014; Wieland et al. 2021; Lam et al. 2022; Muñoz-Tamayo and Tedeschi 2023). Consequently, a wide distribution of parameters may exhibit similar model behaviour and cannot be distinguished using measured data. In such cases, the optimised parameter values have low certainty, and the information yielded by the model-based analysis is ambiguous. Such issues have given rise to the model-based design of experiments (MBDoE) approach (also known as optimal experimental design), which aims to minimise uncertainty in parameter identification by adjusting experimental design settings (Franceschini and Macchietto 2008).

MBDoE approaches have been developed to address the difficulty of optimising experiments on nonlinear models. They can guide the choice of experimental design elements such as test inputs, experiment duration, and measurement timing, and they have seen extensive research over several decades (Jacquez and Greif 1985; Walter and Pronzato 1990; Franceschini and Macchietto 2008; Galvanin et al. 2013). These approaches have predominantly required a scalar metric to be optimised through the MBDoE process, such as metrics based on properties of the Fisher information matrix (FIM).

The FIM is a first-order linear approximation of model sensitivity at a nominal parameter set, and it is indicative of the local convexity of the objective surface (Lam et al. 2022). The D-optimality criterion, which maximises the determinant of the FIM, has been the most commonly employed metric for MBDoE (Franceschini and Macchietto 2008). Other common criteria are E-optimality and A-optimality, which maximise the smallest eigenvalue and the trace of the FIM, respectively. However, as noted by Krausch et al. (2019) and Raue et al. (2009), using the FIM to approximate the accuracy of parameter estimation is not justified in the presence of nonlinearity in the region proximal to the optimised parameter values. Furthermore, optimising designs around a single parameter set can be an issue if measured data exhibit multiple characteristic behaviours outside of that parameter domain (Franceschini and Macchietto 2008; Lam et al. 2022). For example, in a disease modelling context, a schedule optimised for the parameters of a healthy individual may be detrimental to the parameter identification of a sick individual, or vice versa.

Recent developments in MBDoE have focused on tackling the issue of nonlinearity in model behaviour. These developments have proceeded alongside methods of practical identifiability analysis, since both share the goal of improving parameter estimation. In 2019, Krausch et al. (2019) developed a new metric, the Q-criterion, based on quantiles of Monte Carlo simulations performed on the model rather than on a measure of the FIM. The Q-criterion was shown to capture nonlinearities in a Michaelis–Menten kinetic example when used in conjunction with a MBDoE toolbox. Outside of the traditional MBDoE approach, practical-identifiability-based methods such as the generalised sensitivity functions developed by Thomaseth and Cobelli (1999), a graphical approach by Docherty et al. (2011), and the profile-likelihood (PL) approach popularised by Raue et al. (2010) have been developed to improve experimental design. Of these methods, uptake of the PL approach has been particularly high (Wieland et al. 2021; Lam et al. 2022; Villaverde et al. 2023). Distinct advantages of the PL approach have been its ease of implementation and interpretability, along with computation times roughly an order of magnitude lower than comparable Monte Carlo-based methods (Simpson et al. 2020).

As noted by Lin et al. (2015), the use of genetic algorithms (GAs) to generate optimal sampling schedules has increased in recent decades. Inspired by evolutionary biology, GAs for MBDoE treat a population of candidate sampling schedules as organisms, and these organisms compete in successive generations with the goal of gradual improvement towards a near-optimal solution. The selection process that determines successive generations relies on a metric that acts as a fitness function to rank the optimality of each organism. In one GA selection scheme, known as the elitist variant, the best individuals from the current generation are selected for the next, along with additional individuals created by crossover and/or mutation operations (Lin et al. 2015). However, applications of GAs and other stochastic optimisation methods for MBDoE have predominantly used measures based on the FIM, such as D-optimality (Broudiscou et al. 1996; Heredia-Langner et al. 2004; Chen et al. 2015). As noted earlier, these measures can miss nonlinear model behaviour.

This paper proposes the use of a profile-likelihood-based metric in conjunction with a genetic algorithm to overcome the limitations of both the linear assumptions implicit in FIM-based measures and the computational burden of Monte Carlo simulations. The proposed methodology is used to determine optimal sample placement in a simple dose–response experiment with concomitant models of varying complexity. There is literature that describes the relationship between confidence interval width and sample size (Rothman and Greenland 2018), but using confidence-interval-based metrics in MBDoE is complicated by the need to consider both sample placement and sample size, and current research on these methods is limited. In the pharmacokinetic-pharmacodynamic (PK-PD) context of this modelling, sampling is often limited by both the physical consequences of drawing multiple blood samples and the cost of analyte measurement in a lab setting (Mori and DiStefano 1979; DiStefano 1981; Docherty et al. 2011; Galvanin et al. 2013). Recent research has highlighted that quantifying uncertainty in pharmacological model parameters is challenging due to the range of complexity in prospective models (Sher et al. 2022). The aim of this study is to explore the benefits of optimising sampling schedules in a sparse sampling scenario using a novel method that is rapid and relatively easy to implement.

2 Methods

2.1 Cases Investigated

A toy model with simple pharmacokinetic-pharmacodynamic behaviour was chosen for testing the GA approach. First-order dynamics for a concentration \(C\) are described by

\[ \dot{C} = -kC + \frac{U_N + U_x(t)}{V}, \qquad (1) \]

where \(C\) is an arbitrary concentration, \(\dot{C}\) is its time derivative, \(k\) is a first-order decay rate, \(V\) is the volume of distribution, \(U_N\) is the endogenous production rate, and \(U_x(t)\) is the external bolus, defined as an instantaneous input at time \(t = 60\) min. For the purposes of testing the experimental design protocol, Case 1 considered \(\theta = \{k, U_N\}^T\) as unknown parameters to be identified, while \(V\) was an a priori value and \(\beta = 0\). Case 2 considered \(\theta = \{k, U_N, V\}^T\) as unknown parameters to be identified, with \(\beta = 0\). Case 3 considered a model with Michaelis–Menten mechanics (Michaelis and Menten 1913), in which a saturation parameter \(\beta\) modifies the elimination kinetics of Eq. 1; in this more complex model, \(\theta = \{k, U_N, V, \beta\}^T\) were non-zero and identified.

To consider and explore the local nature of optimality in experimental design, three distinct parameter scenarios were considered for each model case (Table 1). Numerical integration of Eq. 1 was performed for 0–600 min; the magnitude of \(U_x(t)\) was set to 4.0 for 1 min at \(t = 60\) min. Simulations, shown in Fig. 1, were undertaken using Euler's method with a step size of 1 min. For all simulated cases and parameter identifications, it was assumed that \(C_0\) was at equilibrium. The three scenarios were chosen to resemble the progression towards a saturated biological response, where either age or the progression of a disease may inhibit the body's ability to process an input.

Figure 1. Trajectories of model scenarios used in Cases 1 and 2 (left) and Case 3 (right)
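A minimal sketch of this simulation step, assuming the first-order (β = 0) form of Eq. 1 given above and illustrative parameter values rather than the paper's scenarios:

```python
import numpy as np

# Sketch of the Case 1/2 simulation (beta = 0), assuming the first-order
# model dC/dt = -k*C + (U_N + U_x(t)) / V reconstructed above.
# Parameter values here are illustrative, not the paper's scenarios.
k, U_N, V = 0.05, 0.4, 10.0
dt, t_end = 1.0, 600.0                 # Euler step of 1 min over 0-600 min
t = np.arange(0.0, t_end + dt, dt)

U_x = np.zeros_like(t)
U_x[t == 60.0] = 4.0                   # instantaneous bolus at t = 60 min

C = np.empty_like(t)
C[0] = U_N / (k * V)                   # start at equilibrium (dC/dt = 0)
for i in range(len(t) - 1):
    dC = -k * C[i] + (U_N + U_x[i]) / V
    C[i + 1] = C[i] + dt * dC          # forward Euler update
```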

2.2 Practical Identifiability Methods

The Q-criterion (\(Q_{crit}\)) was developed by Krausch et al. (2019) as a measure of average parameter confidence interval width,

\[ Q_{crit} = \frac{1}{N_\theta} \sum_{i=1}^{N_\theta} \left( Q_{\theta_i, 0.9} - Q_{\theta_i, 0.1} \right), \qquad (2) \]

which uses \(Q_{\theta_i, 0.9}\) and \(Q_{\theta_i, 0.1}\), the 90th and 10th quantile bounds of samples generated by Monte Carlo analysis, respectively; \(Q_{crit}\) is summed over each \(i\)th parameter within the parameter vector \(\theta\). Based on Eq. 2, a general metric for capturing parameter identifiability based on confidence intervals is proposed as

\[ CI_{crit} = \frac{1}{N_\theta} \sum_{i=1}^{N_\theta} \frac{\theta_i^+ - \theta_i^-}{\hat{\theta}_i}, \qquad (3) \]

where \(\hat{\theta}_i\) is the best estimate of \(\theta_i\) from parameter identification, and \([\theta_i^-, \theta_i^+]\) is the confidence interval of the parameter obtained from a given method. Following the coefficient of variation, the metric scales the confidence interval width by \(\hat{\theta}_i\) to allow comparison of parameters of differing magnitudes. Minimisation of \(CI_{crit}\) reduces the average normalised parameter uncertainty. Profile-likelihood was selected as the method for forming confidence intervals in this study. Using the profile-likelihood to form confidence intervals has advantages over FIM-based methods: it is invariant under nonlinear transformations and applicable to nonlinear models (Wieland et al. 2021). Additionally, profile-likelihood can identify confidence intervals more efficiently than Monte Carlo simulations, making it computationally advantageous for the methods covered later in Sect. 3.2.
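Expressed as code, the \(CI_{crit}\) calculation of Eq. 3 is a one-line average (a sketch; the function name is ours):

```python
import numpy as np

def ci_crit(theta_hat, lower, upper):
    """Sketch of the CI_crit metric of Eq. 3: mean normalised CI width.
    theta_hat, lower, upper are arrays of best estimates and the
    confidence bounds [theta_i^-, theta_i^+] for each parameter."""
    theta_hat = np.asarray(theta_hat, dtype=float)
    widths = np.asarray(upper, dtype=float) - np.asarray(lower, dtype=float)
    return float(np.mean(widths / theta_hat))
```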

Raue et al. (2009) defined a method for using profile-likelihood to determine confidence intervals for model parameters. Parameters are 'profiled' by fixing a single \(\theta_i\) along a range of values while fitting the non-fixed \(\theta_{j \ne i}\) to data. By assuming zero-mean additive white Gaussian measurement noise, the weighted sum of squared residuals, \(\psi\), can be used as a placeholder for the likelihood. \(\psi\) can be defined as

\[ \psi(\theta) = \sum_{i=1}^{N_s} \frac{\left( C(\theta, t_i) - C_{M,i} \right)^2}{\sigma_{M,i}^2}, \]

where \(N_s\) is the number of datapoints, \(\theta\) is the parameter vector, \(\sigma_{M,i}\) is the standard deviation of the measurement error, and \(C(\theta, t_i)\) and \(C_{M,i}\) are the simulated and measured concentrations at schedule time \(t_i\), respectively. Likelihood-based confidence intervals can then be based on likelihood thresholds defined by the chi-squared distribution, with a confidence region

\[ \left\{ \theta \,:\, \psi(\theta) - \psi(\hat{\theta}) < \chi^2(\alpha, \#dof) \right\}, \]

where the confidence interval is constructed around a nominal parameter set \(\hat{\theta}\) (Raue et al. 2009). In this research, a quantile of \(\alpha = 0.68\) and \(\#dof = 1\), as in Raue et al. (2009), were used to construct point-wise confidence intervals for each parameter \(\theta_i\); a parameter \(\theta_i\) is considered practically identifiable when its confidence interval is finite. Henceforth, when \(CI_{crit}\) from Eq. 3 is calculated using the confidence interval from profile-likelihood, it will be referred to as \(PLB_{crit}\) to show that it was calculated through the profile-likelihood bounds. In contrast, when \(CI_{crit}\) is calculated with comparable Monte Carlo quantile bounds (\(Q_{\theta_i, 0.84}\) and \(Q_{\theta_i, 0.16}\), the 84th and 16th quantiles, to also capture 0.68 of the cluster), it will be referred to as \(QB_{crit}\).
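A sketch of this profiling procedure for a single parameter is given below; it assumes a user-supplied \(\psi(\theta)\) as defined above, and the function names are ours rather than the paper's:

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

def profile_parameter(psi, theta_hat, i, grid, alpha=0.68):
    """Profile theta_i over `grid`, re-fitting the other parameters,
    and return the bounds of the likelihood-based confidence region."""
    psi_hat = psi(theta_hat)
    threshold = chi2.ppf(alpha, df=1)          # pointwise threshold, #dof = 1
    accepted = []
    for value in grid:
        # Fit the non-fixed parameters with theta_i pinned to `value`.
        def psi_fixed(free):
            theta = np.insert(np.asarray(free), i, value)
            return psi(theta)
        free0 = np.delete(np.asarray(theta_hat, dtype=float), i)
        fit = minimize(psi_fixed, free0, method="Nelder-Mead")
        if fit.fun - psi_hat < threshold:      # inside the confidence region
            accepted.append(value)
    return (min(accepted), max(accepted)) if accepted else (np.nan, np.nan)
```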

Both \(PLB_{crit}\) and \(QB_{crit}\) enable quantitative measurement of parameter uncertainty, which allows for direct comparison of different sampling schedules. Confidence intervals have been used to compare model performance and measurement schemes (Simpson et al. 2020 ), and \(PLB_{crit}\) uses these intervals to form a scalar performance measure. The metrics are minimised when confidence intervals or quantile bounds are narrower, which indicates that residual error rapidly increases as one moves away from the optimal parameter solution. Conversely, higher values of the metrics indicate that residual error increases only when moving much further away from the optimal parameters, which is symptomatic of parameter trade-off and practical non-identifiability issues.

2.3 Genetic Algorithm Method

Prior to applying the genetic algorithm, a practical identifiability analysis was performed with profile-likelihood to ensure that parameter bounds would be finite and physically feasible. A relatively simple GA was then implemented to find the optimal sample placement (\(S_{opt}\)). One hundred organisms (\(S_k\)) were iterated upon in each generation, competing to improve the \(PLB_{crit}\) metric.

The procedure used for the genetic algorithm is as follows:

1. Randomly select \(k = \{1, 2, \ldots, 100\}\) initial sampling schedules (organisms) \(S_k\).
2. Determine the \(PLB_{crit}\) value for each organism and parameter scenario \(P\), giving \(PLB_{crit,k}^P\).
3. Across the considered parameter scenarios, save the worst-case (maximum) value for each organism: \(PLB_{crit,k}^{max} = \max_P (PLB_{crit,k}^P)\).
4. Re-order the organisms (\(S_k^*\)) in ascending order of \(PLB_{crit,k}^{max}\), so that the organism with the minimum value is ranked best.
5. Carry the best organisms forward to the next generation through a weighted cloning process.
6. Mutate (add noise to) the sample placement of all but the highest-ranked organism: \(S_{2..100} = S_{2..100}^* + \mathcal{N}(0, 16)\), applied to each of the \(N_s\) sample times.
7. Repeat steps 2–6 to simulate successive 'generations'.

Some practical constraints were imposed on the sampling schedules throughout the genetic algorithm procedure. One data point was always placed at \(t_0 = 0\). A condition of \(\Delta t > 5\) min between samples was imposed to represent the limitations of practical sampling. Additionally, the 5 min following the bolus input at t = 60 min were set as infeasible sampling times, due to the practical consideration of local mixing effects immediately post-bolus (Lam et al. 2021). In cases where a change would violate one of these constraints, the sampling point was shifted to the nearest valid location. MATLAB's 'lsqnonlin' function was used to perform parameter identification through minimisation of \(\psi(\theta)\) given \(S_k\); it was run with 'StepTolerance' = 1e−7 and 'OptimalityTolerance' = 1e−7, with all other settings left at their defaults. Computational time for step 2 was reduced by running the 100 organisms through a parallel for-loop using MATLAB's Parallel Computing Toolbox.

Steps 3–4 implement a simple minimax scheme to account for the performance of multiple parameter scenarios in the optimal experimental design process. Because the true parameter values are not known in a practical setting, the worst-case behaviour across a range of representative scenarios is considered. In contrast to a Bayesian modelling approach, this process does not require setting a prior distribution for the parameters, but it is limited to covering a finite number of feasible scenarios. Step 5 was performed by cloning organisms into the next generation according to their rank in the previous generation's ordering: 14 clones of the best organism, 5 of the second best, 5 of the next, and so on, until the last cloned organism in the new generation was the 52nd from the previous. For the mutation in step 6, normally distributed noise (\(\mu = 0\), \(\sigma = 16\) min) was sequentially added to each sampling time after the initial t = 0 min sample, in every sampling schedule except the best of the generation (\(j = 1\)). If the noise added to a particular sampling time violated the \(\Delta t > 5\) min or post-bolus cooldown conditions, the time was shifted to the nearest valid location.
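The sketch below outlines one such generation in Python (the paper used MATLAB; the cloning weights shown here are illustrative, since the paper's exact cloning function is not reproduced above):

```python
import numpy as np

rng = np.random.default_rng(0)

def next_generation(organisms, scenarios, plb_crit):
    """One elitist GA generation (a sketch; `plb_crit` is assumed to
    implement the profile-likelihood metric described above, and the
    cloning weights are illustrative rather than the paper's exact ones)."""
    # Steps 2-3: worst-case (max over scenarios) PLB_crit per organism.
    scores = [max(plb_crit(s, p) for p in scenarios) for s in organisms]
    # Step 4: rank organisms, best (minimum worst-case metric) first.
    ranked = [organisms[i] for i in np.argsort(scores)]

    # Step 5: weighted cloning -- many copies of the best, fewer of the rest.
    counts = [14, 5, 5] + [2] * 38              # sums to 100 organisms
    clones = [ranked[r].copy() for r, n in enumerate(counts) for _ in range(n)]

    # Step 6: mutate every schedule except the single best (index 0),
    # leaving the fixed t0 = 0 sample untouched.
    for s in clones[1:]:
        s[1:] = np.clip(s[1:] + rng.normal(0.0, 16.0, size=s.size - 1),
                        5.0, 600.0)             # toy stand-in for constraints
    return clones
```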

The genetic algorithm was applied to each of the three cases for all three parameter scenarios, for a minimum number of samples \(N_s\) = 3, 4, or 5 for Cases 1–3, respectively, up to a maximum of \(N_s\) = 20. The algorithm used a minimax design rule, in which the three parameter scenarios were optimised upon the same \(S_k\) set simultaneously, such that the maximum \(PLB_{crit}\) value across the parameter scenarios was minimised in each generation. 150 generations were run for each case to observe the speed of convergence of the sampling schedules. All analyses were undertaken on an Intel Core i7-9700 (3.00 GHz) with 32 GB RAM and MATLAB (Version R2022a, 64-bit).

The confidence intervals of parameter values achievable with the proposed method were compared to those obtained with traditional time-uniform sampling. Such a direct comparison targets the metric of primary value in model-based analysis: positive performance of the proposed algorithm will be evident in a consistent ability to locate sampling schedules that yield parameter values with small confidence intervals. Furthermore, to enable comparison of the individual scenarios (\(P\)), a second analysis was undertaken in which the genetic algorithm was applied separately to each of the three scenarios. 100 generations were run for each individual scenario, and steps 3–4 of the genetic algorithm were simplified to ordering the organisms by their \(PLB_{crit}\) values.

3.1 Genetic Algorithm Results

Across all of the tested sampling quantities, the genetic algorithm was able to locate sampling schedules with lower PL-measures than time-uniform sampling. Figure 2 shows the progression of 100 generations for \(N_s = 12\) in Case 2, parameter scenario 1, with the sampling locations visualised on the simulated C-profile in Fig. 3. From generation six onwards, convergence can clearly be seen, as the same sampling schedule continues to dominate successive generations. For Cases 1 and 2, results of the genetic algorithm were similar across all cases and values of \(N_s\); in data not shown, there was a general trend of slower convergence for larger \(N_s\). To validate the use of the minimax algorithm, the ratio of optimised \(PLB_{crit}\) to the \(PLB_{crit}\) of equi-distant sampling was checked over a grid of parameter values. The optimised sampling outperformed time-uniform sampling within the neighbourhood of the scenarios; an example validation output for \(N_s = 10\) is shown in Fig. 4.

Figure 2. Genetic algorithm output over 100 generations of convergence for Model 1, parameter scenario 1, \(N_s = 12\). The organism with the lowest \(PLB_{crit}(S_1^*)\) is plotted in colour, while the other 99 organisms are shown as grey dots to indicate clustering

Figure 3. The optimal sampling points resulting from the genetic algorithm applied to Model 1, parameter scenario 1, \(N_s = 12\)

Figure 4. Validation of the genetic algorithm optimisation for \(N_s = 10\). The heatmap shows the ratio of optimised \(PLB_{crit}\) to time-uniform \(PLB_{crit}\); the black dots from left to right represent parameter scenarios 1–3, respectively

In the first generation of each Case 1 and Case 2 setting, the modified \(QB_{crit}\) derived from Krausch et al. (2019) was calculated for comparison with \(PLB_{crit}\). Values of the Q-criterion were strongly correlated with the PL-measure: across all simulated parameter scenarios and \(N_s\), correlations in the ranges [0.893, 0.998] and [0.953, 0.996] were found for Cases 1 and 2, respectively. On average, the calculation of \(PLB_{crit}\) was 25 to 36 times faster than \(QB_{crit}\). Due to its stochastic nature, the value of \(QB_{crit}\) fluctuated slightly between runs unless MATLAB's random number generator was given a fixed seed at the start of simulations. On average, when considering a single parameter scenario, each generation of Cases 1 and 2 took 0.38 and 0.77 s, respectively. Implementing the minimax algorithm increased computational time additively, i.e. considering three scenarios tripled the computational time.

3.2 Case 1 and 2 Results

The final results across the full range of \(N_s\) tested for Cases 1 and 2 are shown in Fig. 5. The methods consistently reduced \(PLB_{crit}\) for each of the three parameter scenarios. Simulations for the full Case 1 and 2 trade-off curves took 15 and 21 min per parameter scenario, respectively. Additionally, the minimax approach located sampling schedules with a lower \(PLB_{crit}\) than the time-uniform schedules of all three scenarios, except for \(N_s = 3\) in Case 1. Compared to Case 1, values of \(PLB_{crit}\) were consistently higher for Case 2, and the confidence intervals of the individual parameters \(k\) and \(U_N\) (not pictured) were also wider for each parameter scenario.

Figure 5. Trade-off curves for the 2-parameter Case 1 (left) and 3-parameter Case 2 (right). Dotted lines represent the error under naive time-uniform sampling (uni), while solid lines represent the optimal sampling (opt) following optimisation of \(PLB_{crit}\) via the genetic algorithm

3.3 Case 3 Results

Within the local region of parameters and inputs that were tested, the lower confidence bound of \(\beta\) fell below \(\beta = 0\), indicating physically infeasible parameter values for Michaelis–Menten mechanics. Additionally, the upper bounds of the \(k\) and \(U_N\) parameters tended towards infinity for small \(N_s\) in parameter scenario 3. These practical identifiability issues were identified during the profile-likelihood analysis performed prior to implementing the genetic algorithm; the results of this analysis for Case 3, parameter scenario 1, are shown in Fig. 6. The issues persisted when modifications to the experimental design, such as doubling the bolus or halving the theoretical measurement noise, were attempted. Despite the identifiability issues, the genetic algorithm methods were attempted for parameter scenario 1, and they consistently yielded reductions in the \(PLB_{crit}\) metric. However, the confidence intervals for \(\beta\) remained in infeasible regions, and the variance in \(\beta\) dominated the \(PLB_{crit}\) metric. Thus, the model of Case 3 failed the primary practical identifiability check that the proposed approach requires, rendering the optimisation process somewhat moot. The results of this analysis are presented in the “Appendix”.

Figure 6. Profile-likelihood analysis of Case 3, parameter scenario 1, for \(N_s = 20\). The threshold at \(\psi = 0.99\) indicates the bound of the pointwise confidence interval for each parameter. Note that the scale of \(\beta\) exceeds the physiologically feasible range

4 Discussion

Through optimising \(PLB_{crit}\), the GA was able to locate much better sampling schedules across the wide range of cases and parameter scenarios tested. For Cases 1 and 2, shifting from a uniform to an optimised sampling schedule led to average reductions in \(PLB_{crit}\) of 33.1% and 36.9%, respectively (see Fig. 5). Furthermore, Fig. 5 shows that the minimax optimisation of the three parameter scenarios yielded sampling protocols that still improved identifiability outcomes for all scenarios in all cases (with the single exception of \(N_s = 3\) in Case 1). The \(PLB_{crit}\) metric achieved high correlations (mean correlations > 0.95) with a form of the \(QB_{crit}\) metric previously established by Krausch et al. (2019), while reducing computational time by an order of magnitude.

In addition to providing a clear path towards improving the placement of samples, the methodology allows for clear visualisation of trade-off curves to guide experimental design decisions. The relatively fast computation of the profile-likelihood values enables comparison of optimal curves against the naive time-uniform sampling curve. The trade-off curves plotted in Fig. 5 allow a clear comparison whereby one could either reduce the number of samples required while maintaining the same level of parameter certainty, or improve parameter certainty by optimising a fixed quantity of samples. The increase in parameter variability when moving from Case 1 to Case 2 is in agreement with the concept of parsimony in modelling: the increased complexity of the parameter identification led to a corresponding increase in \(PLB_{crit}\).

Despite the increasing use of PL in practical identifiability analyses, there has been little use of PL for experimental design optimisation. Sharing the statistical principles of Monte Carlo methods, \(PLB_{crit}\) is able to detect nonlinearities; however, rather than relying on the stochastic nature of MC, it provides a deterministic metric that remains consistent between iterations. While relatively simple in its implementation, the GA used for this work was able to quickly locate sampling protocols that made clear improvements to identifiability outcomes. Practical identifiability concerns could also be addressed, as seen from the results of applying profile-likelihood to Case 3. Additionally, several stages of the process are parallelisable: the calculation of each organism's \(PLB_{crit}\) value was parallelised in this work, and further improvements could be made by evaluating the scenarios of different \(N_s\) values in parallel.

The methodology presented in this paper shares downsides common to other methods for optimal design of experiments. The optimisation of sampling was undertaken using a domain of suspected parameter values; knowledge of these local parameters would require either preliminary studies or estimation from some indicative a priori information. This was partially addressed by using the minimax scheme to handle the differences between the three parameter scenarios without significant additional computational cost, but it still assumed some level of a priori knowledge about how the local parameters were distributed. Additionally, the simulated nature of this methodology assumes that the model itself accurately reflects the measured phenomena. If the model is mismatched or biased relative to the true data, the clustering of sampling around some perceived optimum could hinder the unique identification of parameters and mask the issue of model mismatch. However, if the model has previously undergone validation and the likely parameter scenarios are known, then the use of this methodology would be justified (Wieland et al. 2021).

In cases where data are limited in quantity due to the cost of sampling or the invasive nature of collection (e.g., blood sampling), the methods could provide a means of maximising the information available within each sample. Furthermore, applying the methodology to generate the trade-off curves in Fig. 5 could provide guidance and evidence to support decisions regarding changes in sampling procedure. From the modeller's perspective, the \(PLB_{crit}\) metric allows for a reduction in the computational requirements of experimental design, and the minimax approach allows for consideration of multiple parameter behaviours. Additionally, the level of complexity of the GA implemented in this paper has been kept relatively low to demonstrate the ease of implementation of the methods. While it is certainly possible to adjust the GA further to improve convergence speeds, such changes are not necessary to achieve a near-optimal sampling protocol with current computational capabilities.

This research focussed on applying the \(PLB_{crit}\) metric and GA approach to a PK-PD dose–response model. In the future, it would be valuable to test the method on a wider range of modelling contexts. This research has shown that it is not possible to locate an optimal sample timing common to all parameter scenarios. However, the minimax approach could lead to sample timing schedules that provide the best possible parameter confidence across the expected range of characteristics. Nonetheless, the likely parameter domain remains a critical input to this approach. Additionally, this work focussed on testing candidate sampling protocols on a continuous time measure. A more restricted region of sample timing (i.e. allowing only discrete sampling locations on minute-by-minute grid points) could yield improvements in GA convergence speed, while also aligning with the practical limitations of physical data collection.

5 Conclusion

This analysis considered the use of a novel, PL-based experimental design metric for optimising the identifiability of parameters with relatively low computational effort. A genetic algorithm was applied with this metric to optimise sampling protocols across a range of model complexities, parameter scenarios, and sample quantities. The methods demonstrated consistent reductions in parameter variance (~33%) across the parameters and scenarios explored, including an example in which three parameter scenarios had to be optimised simultaneously using a minimax rule. The results showed clear trade-off curves that quantified the extent to which either parameter variance could be reduced or the number of samples could be decreased.

Overall, this analysis showed that it is possible to account for the nonlinear nature of models in MBDoE while maintaining reasonable computation times. The \(PLB_{crit}\) metric adds a new alternative to the existing \(QB_{crit}\) metric. By giving up a small amount of information regarding higher-dimensional parameter constellations, the \(PLB_{crit}\) metric reduces computational time by an order of magnitude, leaving more time available for computationally intensive applications such as protocol optimisation via GA. However, it must be acknowledged that the optimisation of sampling schedules through these methods, while mathematically useful, must be weighed against the practical considerations of those implementing the experiments.

Broudiscou A, Leardi R, Phan-Tan-Luu R (1996) Genetic algorithm as a tool for selection of D-optimal design. Chemom Intell Lab Syst 35(1):105–116. https://doi.org/10.1016/S0169-7439(96)00028-7


Chen R-B, Chang S-P, Wang W, Tung H-C, Wong WK (2015) Minimax optimal designs via particle swarm optimization methods. Stat Comput 25(5):975–988. https://doi.org/10.1007/s11222-014-9466-0


DiStefano JJ 3rd (1981) Optimized blood sampling protocols and sequential design of kinetic experiments. Am J Physiol 240(5):R259-265. https://doi.org/10.1152/ajpregu.1981.240.5.R259

Docherty P, Chase JG, Lotz T, Desaive T (2011) A graphical method for practical and informative identifiability analyses of physiological models: a case study of insulin kinetics and sensitivity. Biomed Eng Online. https://doi.org/10.1186/1475-925X-10-39

Franceschini G, Macchietto S (2008) Model-based design of experiments for parameter precision: State of the art. Chem Eng Sci 63(19):4846–4872. https://doi.org/10.1016/j.ces.2007.11.034

Galvanin F, Ballan CC, Barolo M, Bezzo F (2013) A general model-based design of experiments approach to achieve practical identifiability of pharmacokinetic and pharmacodynamic models. J Pharmacokinet Pharmacodyn 40(4):451–467. https://doi.org/10.1007/s10928-013-9321-5

Heredia-Langner A, Montgomery DC, Carlyle WM, Borror CM (2004) Model-robust optimal designs: a genetic algorithm approach. J Qual Technol 36(3):263–279. https://doi.org/10.1080/00224065.2004.11980273

Hines KE, Middendorf TR, Aldrich RW (2014) Determination of parameter identifiability in nonlinear biophysical models: A Bayesian approach. J Gen Physiol 143(3):401–416. https://doi.org/10.1085/jgp.201311116

Jacquez JA, Greif P (1985) Numerical parameter identifiability and estimability: integrating identifiability, estimability, and optimal sampling design. Math Biosci 77(1):201–227. https://doi.org/10.1016/0025-5564(85)90098-7

Krausch N, Barz T, Sawatzki A, Gruber M, Kamel S, Neubauer P, Cruz Bournazou MN (2019) Monte Carlo simulations for the analysis of non-linear parameter confidence intervals in optimal experimental design. Front Bioeng Biotechnol. https://doi.org/10.3389/fbioe.2019.00122

Lam N, Murray R, Docherty PD, Te Morenga L, Chase JG (2021) The effects of additional local-mixing compartments in the DISST model-based assessment of insulin sensitivity. J Diabet Sci Technol. https://doi.org/10.1177/19322968211021602

Lam NN, Docherty PD, Murray R (2022) Practical identifiability of parametrised models: a review of benefits and limitations of various approaches. Math Comput Simul 199:202–216. https://doi.org/10.1016/j.matcom.2022.03.020

Lin CD, Anderson-Cook CM, Hamada MS, Moore LM, Sitter RR (2015) Using genetic algorithms to design experiments: a review. Qual Reliab Eng Int 31(2):155–167. https://doi.org/10.1002/qre.1591

Michaelis L, Menten ML (1913) Die Kinetik Der Invertinwirkung. Biochem Z 49:333–369


Mori F, DiStefano J (1979) Optimal nonuniform sampling interval and test-input design for identification of physiological systems from very limited data. IEEE Trans Autom Control 24(6):893–900. https://doi.org/10.1109/TAC.1979.1102175

Muñoz-Tamayo R, Tedeschi LO (2023) ASAS-NANP symposium: mathematical modeling in animal nutrition: the power of identifiability analysis for dynamic modeling in animal science: a practitioner approach. J Anim Sci. https://doi.org/10.1093/jas/skad320

Raue A, Becker V, Klingmüller U, Timmer J (2010) Identifiability and observability analysis for experimental design in nonlinear dynamical models. Chaos Interdiscip J Nonlinear Sci 20(4):045105. https://doi.org/10.1063/1.3528102

Raue A, Kreutz C, Maiwald T, Bachmann J, Schilling M, Klingmüller U, Timmer J (2009) Structural and practical identifiability analysis of partially observed dynamical models by exploiting the profile likelihood. Bioinformatics 25(15):1923–1929. https://doi.org/10.1093/bioinformatics/btp358

Rothman KJ, Greenland S (2018) Planning study size based on precision rather than power. Epidemiology 29(5):599–603. https://doi.org/10.1097/ede.0000000000000876

Sher A, Niederer SA, Mirams GR, Kirpichnikova A, Allen R, Pathmanathan P, Gavaghan DJ, van der Graaf PH, Noble D (2022) A quantitative systems pharmacology perspective on the importance of parameter identifiability. Bull Math Biol 84(3):39. https://doi.org/10.1007/s11538-021-00982-5

Simpson MJ, Baker RE, Vittadello ST, Maclaren OJ (2020) Practical parameter identifiability for spatio-temporal models of cell invasion. J R Soc Interface 17(164):20200055. https://doi.org/10.1098/rsif.2020.0055

Thomaseth K, Cobelli C (1999) Generalized sensitivity functions in physiological system identification. Ann Biomed Eng 27(5):607–616. https://doi.org/10.1114/1.207

Villaverde AF, Banga JR (2014) Reverse engineering and identification in systems biology: strategies, perspectives and challenges. J R Soc Interface 11(91):20130505. https://doi.org/10.1098/rsif.2013.0505

Villaverde AF, Raimundez E, Hasenauer J, Banga JR (2023) Assessment of Prediction uncertainty quantification methods in systems biology. IEEE/ACM Trans Comput Biol Bioinform 20(3):1725–1736. https://doi.org/10.1109/tcbb.2022.3213914

Walter E, Pronzato L (1990) Qualitative and quantitative experiment design for phenomenological models—A survey. Automatica 26(2):195–213. https://doi.org/10.1016/0005-1098(90)90116-Y

Wieland F-G, Hauber AL, Rosenblatt M, Tönsing C, Timmer J (2021) On structural and practical identifiability. Curr Opin Syst Biol 25:60–69. https://doi.org/10.1016/j.coisb.2021.03.005

Download references

Funding

Open Access funding enabled and organized by CAUL and its Member Institutions. N. Lam is supported by the University of Canterbury doctoral scholarship.

Author information

Authors and Affiliations

Department of Mechanical Engineering, University of Canterbury, Christchurch, New Zealand

Nicholas N. Lam & Paul D. Docherty

School of Mathematics and Statistics, University of Canterbury, Christchurch, New Zealand

Institute of Technical Medicine, Furtwangen University, Villingen-Schwenningen, Baden-Württemberg, Germany

Paul D. Docherty


Corresponding author

Correspondence to Nicholas N. Lam.

Ethics declarations

Conflict of interest

The authors declare that they have no conflict of interest.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Figure 7

Trade-off curves for the \(PLB_{crit}\) values of optimised and time-uniform sampling locations for Case 3, parameter scenario 1, alongside optimised coefficient of variation (CV) values for individual parameters (Color Figure Online)

Figure 7 shows the results of optimising sampling schedules for Case 3. In all cases, the lower bound of the \(\beta\) parameter was negative, indicating physically infeasible values under Michaelis–Menten mechanics. Additionally, the \(PLB_{crit}\) metric was mainly influenced by the variance in the \(\beta\) bounds. Coefficient of variation values for the individual parameters were calculated from the profile likelihood bounds.
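The excerpt above does not reproduce the CV definition itself. As a minimal sketch of the idea, assuming the CV is taken as the half-width of the profile-likelihood interval relative to the absolute point estimate (an assumption for illustration, not the paper's stated formula), the calculation could look like this:

```python
def cv_from_pl_bounds(theta_hat, lower, upper):
    """Approximate coefficient of variation from profile-likelihood bounds.

    Assumes CV = (interval half-width) / |point estimate|; the paper's
    exact definition is not reproduced in this excerpt.
    """
    half_width = (upper - lower) / 2.0
    return half_width / abs(theta_hat)

# Hypothetical parameter estimate and 95% profile-likelihood bounds
print(cv_from_pl_bounds(theta_hat=1.2, lower=0.8, upper=1.9))  # ~0.458
```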

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .


About this article

Lam, N.N., Murray, R. & Docherty, P.D. Evolving Improved Sampling Protocols for Dose–Response Modelling Using Genetic Algorithms with a Profile-Likelihood Metric. Bull Math Biol 86, 70 (2024). https://doi.org/10.1007/s11538-024-01304-1


Received : 09 January 2024

Accepted : 23 April 2024

Published : 08 May 2024

DOI : https://doi.org/10.1007/s11538-024-01304-1


Keywords: Model-based design of experiments, Practical identifiability, Identifiability, Profile likelihood


Original Research Article

Microwave biosensor for the detection of growth inhibition of human liver cancer cells at different concentrations of chemotherapeutic drug


  • 1 School of Internet of Things Engineering, Institute of Advanced Technology, Jiangnan University, Wuxi, China
  • 2 State Key Laboratory of Biochemical Engineering, Institute of Process Engineering, Chinese Academy of Sciences, Beijing, China
  • 3 Key Laboratory of Biopharmaceutical Preparation and Delivery, Chinese Academy of Sciences, Beijing, China
  • 4 School of Biotechnology, the Key Laboratory of Carbohydrate Chemistry and Biotechnology, Ministry of Education, Jiangnan University, Wuxi, China

Cytotoxicity assays are crucial for assessing the efficacy of drugs in killing cancer cells and determining their potential therapeutic value. Measuring the effect of drug concentration, a key factor influencing cytotoxicity, is therefore of great importance. This paper proposes, for the first time, an end-point cytotoxicity assay that uses a microwave sensor to detect the number of live cells. In contrast to optical methods such as fluorescent labeling, this research uses a resonator-type microwave biosensor to evaluate the effects of drug concentration on cytotoxicity by monitoring changes in electrical parameters caused by varying cell densities. Initially, the feasibility of treating cells with ultrapure water for cell counting by a microwave biosensor is confirmed. Subsequently, inhibition curves generated by the CCK-8 method and by the new microwave biosensor at various drug concentrations were compared and found to be congruent. This agreement supports the potential of microwave-based methods to quantify the inhibition of cell growth by drug concentration.

1 Introduction

Cytotoxicity assays are pivotal in evaluating cellular damage induced by drugs, playing a critical role in the drug development process and safety evaluation ( Parboosing et al., 2017 ; Zhang and Wan, 2022 ). These assays facilitate the determination of a drug’s safety profile, therapeutic window, and potential side effects, thus informing drug design, judicious usage, and toxicity risk assessment. They are instrumental in detecting adverse effects and providing essential data for the secure administration of drugs ( Niles et al., 2008 ; Vaucher et al., 2010 ).

Understanding the relationship between drug concentration, an independent factor influencing cytotoxicity (Chan et al., 2002; Radko et al., 2013), and the resulting cytotoxic effect is essential to optimize therapeutic efficacy and minimize adverse effects. A clear picture of the dose-response relationship allows researchers to strike a delicate balance between drug efficacy and safety, thereby ensuring judicious drug use that curtails potential risks. Cancer cells differ from normal cells in many ways, one of which is that they grow and divide very rapidly. In response to this characteristic, a number of cytotoxic drugs have been developed that target rapidly proliferating cancer cells and inhibit their growth and proliferation by interfering with DNA synthesis or cell division (McQuade et al., 2017; Dongsar et al., 2023). However, cytotoxic drugs do not completely discriminate between cancer cells and normal cells (Tofzikovskaya et al., 2015). Mitomycin-C is an example of a cell cycle-specific chemotherapeutic agent widely used in oncology and cytotoxicity research (Tomasz, 1995; Zhang et al., 2019; Park et al., 2021), predominantly acting on the G2 and M phases to impede DNA synthesis and cell division, thereby arresting cancer cell proliferation. Despite its efficacy against various cancer cell types, mitomycin-C's potential toxicity to normal cells necessitates rigorous dose regulation and vigilant monitoring for adverse effects. Consequently, assessing the drug concentration-inhibition relationship is an indispensable component of cytotoxicity studies.

To accurately evaluate drug impacts on cell viability, two common techniques are employed: real-time cellular analysis (RTCA) and CCK-8 assays (Cai et al., 2019). RTCA offers real-time, non-invasive monitoring of cellular dynamics, but has a limited application scope and higher equipment costs (Yan et al., 2018). Conversely, the CCK-8 assay, a standard in cytotoxicity testing, enables straightforward colorimetric measurements and is versatile across various cell lines (Wang et al., 2015; Liu et al., 2018; Wang et al., 2018). However, the CCK-8 method requires an incubation period, typically 1–4 h, for the reaction to proceed sufficiently.

While established cytotoxicity assays such as CCK-8 and real-time cellular assays are well developed, ongoing research is exploring novel assays tailored to diverse cellular contexts and specific experimental requirements. Some studies require detailed analysis of cytotoxic mechanisms, whereas others prioritize the rapidity and precision of the assay's readouts. Moreover, optical and electrical measurements can often complement each other in the field of biosensing (He et al., 2023). Microwave biosensors, a new type of biosensor, are highly sensitive and correspondingly fast (their response time is usually only a few seconds to a few minutes), allowing real-time results to be obtained in a short period (Narang et al., 2018; Gao et al., 2021). Their compactness and lightweight design make them well suited to portable devices, and their seamless integration with electronic circuits, coupled with appropriate algorithms, can lead to intelligent data-processing products. To date, no attempt has been made to detect drug cytotoxicity using microwave resonance sensors. If microwave biosensors can be used for cytotoxicity detection, they can complement existing methods in terms of advantages and disadvantages, which would further broaden the application areas of microwave detection. Microwave biosensors for measuring cytotoxicity offer the advantages of eliminating the need for cell staining, rapid detection, low cost, easy integration with matching circuits, and small sample size. Microwave sensors based on resonant elements are very sensitive to the dielectric constant and loss angle tangent of the surrounding medium (Muñoz-Enano et al., 2020) and have been widely used in biosensing. Researchers have demonstrated promising applications in bacteria detection (Narang et al., 2018; Jain et al., 2020; Jain et al., 2021), blood glucose detection (Yilmaz et al., 2019; Kandwal et al., 2020; Kazemi et al., 2023) and many other areas. Since the key to the end-point method of evaluating drug cytotoxicity is determining the number of surviving cells at the end of the experiment (Adan et al., 2016), and differences in the dielectric properties of cell solutions at different concentrations have already been demonstrated (Chen et al., 2014), cytotoxicity testing with microwave sensors is feasible in principle.

In this paper, we have designed and fabricated a microwave biosensor based on the integrated passive device (IPD) fabrication technology. IPD integrates different passive components (inductors, capacitors, resistors) in a single subcomponent, which is characterized by a small linewidth, precise substrate control, a high degree of integration and fewer parasitic effects ( Yu et al., 2019 ; Chu et al., 2020 ). Moreover, IPDs demonstrate enhanced stability compared to capacitive or resistive sensors ( Yu et al., 2021 ). The consolidation of multiple passive components onto a single chip allows IPDs to conserve space, diminish energy consumption, bolster system reliability and accuracy of measurements, and ease the transition to productization. Employing this biosensor, we assessed the impact of concentration on cytotoxicity using HepG2 cells as the model and Mitomycin-c as the chemotherapeutic agent. We determined OD450 values via the CCK-8 assay, which is the biological gold standard ( Zhou et al., 2018 ), as a control group for parallel experiments and verified the feasibility of cytotoxicity experiments using microwave sensors by mapping and comparing the curves of the two groups. In addition, we treated the cells with ultrapure water instead of phosphate buffered saline (PBS) in this experiment to verify the feasibility of this treatment in the microwave biosensor cell number measurement experiments.

2 Materials and methods

2.1 Sensor design and analysis

The proposed biosensor is a microwave IPD resonator consisting of a spiral inductor and an interdigital capacitor, in which changes in the electrical parameters of the surrounding medium, mainly the dielectric constant and the loss angle tangent, cause changes in the resonant frequency or the amplitude of the resonance peak. When designing microwave resonators, the relevant parameters and performance are usually adjusted through the capacitance section (Zhu and Abbosh, 2016; Xu and Zhu, 2017). The spiral inductor of the proposed sensor was pre-designed by our group (Wang et al., 2023), so this work focuses on the design, optimization and simulation of the interdigital capacitor. By adjusting the capacitance structure, the frequency sensitivity and amplitude sensitivity of the resonator can be tuned. For the interdigital capacitive structure, three schemes were considered, adding 1, 2 or 3 turns of equally spaced copper strip line around a cross-shaped centre, as shown in Figures 1A-I, B-I and C-I. The reflection coefficient (S11) of the three resonators and the variation of the resonance-peak amplitude in environments with different loss angle tangents were simulated in Advanced Design System 2020 (ADS). Eq. (1) expresses the complex permittivity of a sample in terms of its real and imaginary parts:

\(\varepsilon^{*} = \varepsilon' - j\varepsilon''\)  (1)


Figure 1. Simulation results of the sensor. (A) Interdigital capacitor with 1 turn: (A-I) structure, (A-II) S11, (A-III) resonance-peak amplitude at different loss angle tangents. (B) Interdigital capacitor with 2 turns: (B-I) structure, (B-II) S11, (B-III) resonance-peak amplitude at different loss angle tangents. (C) Interdigital capacitor with 3 turns: (C-I) structure, (C-II) S11, (C-III) resonance-peak amplitude at different loss angle tangents.

The loss angle tangent is calculated from Eq. (2):

\(\tan\delta = \varepsilon''/\varepsilon'\)  (2)

Samples with varying cell concentrations can be characterized by different values of the loss angle tangent. A change in the loss angle tangent indicates a change in the complex dielectric constant, which in turn affects the S11 of the microwave resonator. As Figures 1A-II, B-II and C-II show, as the number of turns increases, the resonant frequency, the bandwidth and the Q value all decrease. A high Q represents high energy-storage capacity and frequency selectivity. In terms of sensitivity to the loss angle tangent, the 2-turn structure shows the best performance, as illustrated in Figures 1A-III, B-III and C-III. Since the resonance amplitude, rather than the resonance frequency, is usually the preferred metric for this type of detection (Jain et al., 2021), and considering factors such as the size of the detection area, the interdigital capacitor with 2 turns was finally selected for the biosensor.

Figure 2A delineates the capacitive section's architecture and precise dimensions. Encircling the device is a spiral inductor integrated with air-bridge structures, while at its nucleus lies an interdigital capacitor composed of strip wires coiled around a cruciform framework. The strip lines have a uniform width and spacing of 20 μm. Figure 2B shows the longitudinal layer structure of the sensor: from top to bottom, a 4.5/0.5 μm Cu/Au top layer; a 1.8 μm copper interconnect layer containing the air-bridge structures, which were introduced in the spiral inductor to increase the mutual inductance and decrease signal transmission loss; a 4.5/0.5 μm Cu/Au bottom layer; a 0.2 μm thick nitride dielectric layer with a relative dielectric constant of 7.5 and a loss angle tangent of 0.0036; and a 200 μm thick GaAs substrate layer with a relative dielectric constant of 12.85 and a loss angle tangent of 0.0028. In fabrication, a seed metal (Ti/Au) is sputtered at thicknesses of 20/80 nm to strengthen metallic adhesion. In the electroplating process, gold and copper are tightly bonded, exhibit excellent corrosion resistance and do not readily diffuse into solution, so they do not interfere with the cytotoxicity analysis of cells. The electric field of the resonator was simulated in High Frequency Structure Simulator 19.1 (HFSS); the horizontal E-field distribution is shown in Figure 2C, where the field strength reaches 10⁶ V/m in the core sensitive region. The highest electric field strengths reported in the literature so far are around 10⁵ V/m (Zarifi et al., 2017; Kumar et al., 2020); the device's notably higher field strength suggests enhanced penetration and sensitivity. Considering the size of the measured solution droplet, the longitudinal field distribution was also simulated; the results in Figure 2D show that the sensitive region still achieves a field strength of 10⁵ V/m at a height of 50 μm. High electric field strength in both the horizontal and vertical directions reveals the good penetration capability of the device, which allows larger droplet volumes to be used when applying samples, effectively minimizing random sampling errors. Figure 2E shows the equivalent circuit of the device: the capacitance of the oxide layer between the substrate and the metal is denoted C_ox, the resistance and capacitance between the substrate and ground are denoted R_sub and C_sub, the parasitic resistance of the inductor is denoted R_L, and the parasitic conductance of the capacitor is denoted G. Through equivalent-circuit transformation, the whole device can be regarded as an LC resonator. The complex dielectric properties of the cell solution can be modeled using the Debye equation. The relationship between the measured microwave parameters of the cell solution and the complex dielectric constant can be expressed by Eq. (3) (Withayachumnankul et al., 2013):

\[\begin{pmatrix} \Delta f_{0} \\ \Delta S_{11} \end{pmatrix} = \begin{pmatrix} m_{11} & m_{12} \\ m_{21} & m_{22} \end{pmatrix} \begin{pmatrix} \Delta\varepsilon' \\ \Delta\varepsilon'' \end{pmatrix}\]  (3)

where \(\Delta\varepsilon' = \varepsilon_{s}' - \varepsilon_{r}'\), \(\Delta\varepsilon'' = \varepsilon_{s}'' - \varepsilon_{r}''\), \(\Delta f_{0} = f_{s} - f_{r}\) and \(\Delta S_{11} = S_{11,s} - S_{11,r}\) are the differences between the sample (subscript s) and reference (subscript r) values, and \(m_{11}\), \(m_{12}\), \(m_{21}\) and \(m_{22}\) are the parameters to be determined. In this experiment, a change in cell concentration changes the loss angle tangent, and hence \(\Delta S_{11}\).
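To illustrate how Eq. (3) might be used in practice, the sketch below inverts the 2×2 calibration to recover the permittivity shifts from measured resonance shifts. The matrix entries and measured values are hypothetical placeholders, not values from this work; in practice the m-parameters would be fitted from reference liquids of known permittivity.

```python
import numpy as np

# Hypothetical calibration matrix for Eq. (3): row 1 maps the permittivity
# shifts to the resonant-frequency shift [Hz], row 2 to the S11 shift [dB].
M = np.array([[-1.2e6, 3.4e5],
              [-0.15,  0.42]])

def permittivity_shift(delta_f0, delta_s11):
    """Invert the linear map of Eq. (3) to recover (d_eps', d_eps'')."""
    return np.linalg.solve(M, np.array([delta_f0, delta_s11]))

# Hypothetical measured shifts relative to the reference liquid
d_eps_real, d_eps_imag = permittivity_shift(delta_f0=-2.0e5, delta_s11=-0.45)
# Eq. (2) then gives the implied change in the loss angle tangent
print(d_eps_real, d_eps_imag)
```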


Figure 2 . Device structure analysis and electric field simulation. (A) Overall device structure and dimensions of interdigital capacitance. (B) The hierarchical structure of the device. (C) Surface electric field distribution of devices. (D) Vertical electric field distribution. (E) Equivalent circuit diagram.

2.2 Preparation of biological sample

The HepG2 cell line, purchased from the cell bank of the Chinese Academy of Sciences (Shanghai, China), was used as the experimental cell line. It is a human hepatocellular carcinoma line commonly used in studies of the molecular mechanisms, drug screening and treatment of liver cancer (Elkady et al., 2022). The entire experimental procedure is illustrated in Figure 3. After completing cell resuscitation, we first performed a pre-experiment using the fabricated microwave biosensor for cell number measurement. We inoculated cells into rows A, B, D and E of a 96-well plate, with a concentration gradient from 100 to 200,000 cells per well in two-fold increments.

www.frontiersin.org

Figure 3 . Cell culture and handling, addition of drugs and pre-preparation for measurements.

We then added Dulbecco's modified eagle medium (DMEM) and incubated the plate for 24 h in a CO₂ incubator at 37 °C to allow the cells to adhere to the bottom. After removing the plate from the incubator, we pipetted the DMEM from rows A and B and washed the wells with PBS. In row A, trypsin treatment was used to dissociate the cells from the bottom, and 100 μL of PBS was then injected into each well; in row B, the same trypsin treatment was used, and 100 μL of ultrapure water was then injected into each well. Rows D and E served as backup groups. The PBS, trypsin solution and DMEM used in the experiments were purchased from Sangon Biotech (Shanghai, China). The cell culture incubator was purchased from Thermo Fisher Scientific (United States). Normally, cells in cell number experiments are treated in PBS. In this study, ultrapure water was used to treat group B based on several key considerations. To maintain an isotonic state with the cytosol, the ionic concentration of PBS buffer is similar to that of the cytosol. As a result, the cells and the PBS may exhibit similar electrical characteristics, which narrows the differences in electrical parameters caused by cell concentration. Conversely, the contrast in ionic concentration between ultrapure water and the cell solution is likely to amplify the changes in the solution's electrical parameters due to cell quantity. In addition, cell water uptake and exudation of cell fluid produce a more uniform ionic distribution in the solution, mitigating random errors linked to small sample volumes. After completing the pre-experimental validation, we seeded 50,000 cells per well on a new 96-well plate, inoculating rows C, D and E as three parallel groups; after 24 h of CO₂ thermostatic incubation for cell adhesion, drug administration commenced. Mitomycin-C was used as the cell growth inhibitor in this experiment. It was initially dissolved in dimethyl sulfoxide and prepared as a stock solution, which was then serially diluted with DMEM to create a two-fold concentration gradient ranging from 1.7 μmol/L to 40 μmol/L. Subsequently, 200 μL of DMEM containing each concentration of mitomycin-C was added to the wells of the 96-well plate and incubated at 37 °C in a CO₂ incubator for 48 h. The above steps were repeated to prepare the same three rows of cells on a second 96-well plate, with one plate used for OD450 optical measurements and the other for microwave measurements. For the microwave measurement groups, the DMEM was removed with a pipette, the wells were washed with PBS, and 100 μL of ultrapure water was injected into each well. After a period of resting, 1.5 μL of solution was pipetted and dropped onto the sensor for detection. Since the proposed microwave biosensor performs cytotoxicity detection mainly by detecting the concentration of ions contained in the cells, it cannot distinguish between live and dead cells. It is therefore important to ensure that dead cells are removed as completely as possible before measurement. Additionally, ions from the drug can also affect the measurements, so the drug must be washed away completely. The mitomycin-C and dimethyl sulfoxide used were purchased from MedChemExpress (Shanghai, China).

2.3 Experimental environment

The experimental apparatus was positioned on an anti-static mat and comprised a vector network analyzer (VNA, Ceyear 3656B), the IPD device, coaxial cables, samples and a pipette, as depicted in Figure 4A. At the heart of the IPD device lies the microwave resonator, shown microstructurally in Figure 4C. The resonator's two ports are connected by bonding wires to the corresponding input and output matching lines on the printed circuit board. Figure 4B schematically illustrates the assembled sensor. Its base is an aluminum block with screw holes for fixing; the chip is first fixed on top of the aluminum block with screws and then connected to the coaxial cable of the VNA through the SMA connectors fixed on both sides. This assembly keeps the chip horizontally stable, mitigating positional errors, and the coaxial cable itself is taped to the table to reduce measurement disruptions from movement. During measurement, 1.5 μL of solution was added to the central sensing area with a pipette. To ensure uniform distribution and mitigate the risk of sample settling, each sample drawn from a 96-well plate is agitated using a larger pipette tip. In later sample drops it was found that when the droplet volume was 2 μL or larger, the droplets tended to spread irregularly over the surface of the device because the surface tension of the droplet was broken, leading to measurement failure. After each measurement, the liquid was absorbed with absorbent paper and the device was cleaned several times with ultrapure water until S11 returned to its initial values, ensuring that the next experiment was not affected. Since temperature and humidity affect the performance of semiconductor devices, we controlled and recorded both: the measurements were carried out at an ambient temperature of 20–21 °C and a humidity of 47–48% RH.

www.frontiersin.org

Figure 4 . Measuring platforms and fabricated sensor. (A) The measurement environment. (B) Device structure and assembly schematic. (C) Microscope image of the proposed sensor.

3 Results and discussion

3.1 Pre-experimental results of cell number measurements

Figure 5 shows the overall results of the cell number measurement experiment. In the pre-experiment, pictures of cells at concentrations from 6.4×10⁴/mL to 2×10⁶/mL were taken under the microscope, as shown in Figures 5A–F; they showed healthy growth and a clear concentration gradient. Cells at concentrations from 1×10⁴/mL to 3.2×10⁴/mL did not show a marked difference from 6.4×10⁴/mL because of the limited cell numbers. Figure 5G illustrates the cellular morphology in PBS, where a transition from wall-adherent irregular shapes to more defined round or ovoid forms is observed, with cells existing predominantly as single entities or aggregated clusters. Under these conditions, a dynamic equilibrium of ion and water-molecule exchange is established between the intracellular and extracellular environments, resulting in comparable ion concentrations. Figure 5H shows the cells after 5 min of exposure to ultrapure water. Because the osmotic pressure of pure water is lower than that of the cells, water enters the cells, causing them to swell or even lyse. Cells lose their original morphological characteristics in pure water and become flattened, deformed or ruptured, which can lead to spillage of cell contents into the ultrapure water. The S11 measurements near the resonance peak for ten cell quantities after treatment with ultrapure water are shown in Figure 5I; the peak value of S11 decreases as cell concentration increases. These measurements were plotted as points in Origin. When the number of cells is too low (below 6.4×10⁴/mL), the microwave amplitude measurements are similar to one another and to those of ultrapure water, deviating from the other groups. This may be because cytosol exchange with external components is inadequate at low cell numbers, and the aspirated 1.5 μL of solution may not contain cell-membrane components. After taking the mean of multiple measurements, we performed a linear fit to the means of the last six data points, as shown in Figure 5J. The error bars are based on the mean values, and the relationship between the resonance-peak amplitude and cell concentration can be characterized by y = 2.58549×10⁻⁷x − 25.70623, with R² = 0.99874, showing a good linear relationship. The corresponding limits of detection and quantification (LOD and LOQ) of the proposed device were calculated from Eqs (4) and (5) (Qiang et al., 2017) as 1.41×10⁵/mL and 4.23×10⁵/mL, respectively:

\(\mathrm{LOD} = 3.3\,\mathrm{SD}/m\)  (4)

\(\mathrm{LOQ} = 10\,\mathrm{SD}/m\)  (5)

where SD is the standard deviation of the frequency response and m is the slope of the regression line. That is, the lowest amount of analyte in a sample that can be detected, but not necessarily quantitated as an exact value, is 1.41×10⁵/mL, and the lowest amount that can be quantitatively determined is 4.23×10⁵/mL. This experiment demonstrated a linear relationship between the resonance-peak amplitude of the microwave resonator and the number of cells. Specifically, when the cell concentration exceeds a certain threshold (6.4×10⁴/mL), solutions of adherent cells treated with ultrapure water show this relationship. This confirms that using the microwave sensor for end-point cytotoxicity measurement is feasible.
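As a sketch of the LOD/LOQ arithmetic, assuming Eqs (4) and (5) take the conventional LOD = 3.3·SD/m and LOQ = 10·SD/m forms cited from Qiang et al. (2017), the calculation below approximately reproduces the reported limits; the SD value is not given in this excerpt and is back-calculated here for illustration.

```python
# Reported calibration: amplitude [dB] vs cell concentration [cells/mL],
# y = 2.58549e-7 * x - 25.70623 (R^2 = 0.99874)
slope = 2.58549e-7  # m in Eqs (4) and (5)

def lod_loq(sd, m):
    """Limits of detection and quantification, assuming the conventional
    LOD = 3.3*SD/m and LOQ = 10*SD/m forms of Eqs (4) and (5)."""
    return 3.3 * sd / m, 10.0 * sd / m

# SD back-calculated (hypothetical) so the output approximately matches
# the reported 1.41e5 and 4.23e5 cells/mL
lod, loq = lod_loq(sd=1.1e-2, m=slope)
print(f"LOD = {lod:.3g} cells/mL, LOQ = {loq:.3g} cells/mL")
```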


Figure 5. Results of the pre-experiment on cell number measurement. Cells cultured in DMEM at concentrations of (A) 6.4×10⁴/mL, (B) 1.25×10⁵/mL, (C) 2.5×10⁵/mL, (D) 5×10⁵/mL, (E) 1×10⁶/mL and (F) 2×10⁶/mL. (G) Cells in PBS and (H) cells in ultrapure water. (I) S11 at different concentrations in ultrapure water. (J) Linear fit of resonance-peak amplitude against cell concentration.

3.2 Measurements from drug inhibition experiments

The results of the drug-concentration cytotoxicity assay are presented in Figure 6. Figure 6A shows cell growth after 48 h of culture at drug concentrations ranging from 1.7 μM to 12.65 μM. The number of live cells clearly decreased as the drug concentration increased, and the inhibition of cell growth by the drug can be assessed from the number of surviving cells. The inhibitory capacity of mitomycin-C reaches its maximum at a drug concentration of approximately 9.5 μM; higher concentrations have inhibitory effects similar to those at 9.5 μM. The cells were subjected to OD450 measurements, and the resulting curves are depicted in Figure 6B. In addition, the relationship between cell concentration and OD450 value was measured for HepG2; the results, shown in Figure 6C, are similar to the microwave measurement of cell number in Figure 5J. Since OD450 values have a good linear relationship with cell concentration, OD450 measurements can be equated to cell concentration. Microwave resonance-peak amplitude measurements were performed after ultrapure-water treatment. A set of near-mean measurements was selected, and their S11 curves are plotted in Figure 6D. The amplitude of the resonance peak decreases by approximately 0.45 dB as the drug concentration increases from 1.70 μM to 12.65 μM. The relationship between amplitude and drug concentration is plotted in Figure 6E, after an equal number of measurements was taken in the three parallel groups and the mean value selected. The microwave resonance-amplitude measurements show results similar to the OD450 measurements, and at higher drug concentrations the resonance amplitude tends to a stable value. It is therefore feasible to use the resonance-amplitude curve as an assessment index of drug toxicity. Various in vitro cytotoxicity assays are currently available, including chromium release, bioluminescence, impedance and flow cytometry (Kiesgen et al., 2021), most of which are based on chemical methods such as fluorescent labelling, optical densitometry and radioactivity determination. These methods have their own characteristics, scopes of application and limitations; microwave sensing introduces a new possibility for cytotoxicity determination, and a comparison is presented in Table 1. The methods fall into two main categories, optical and electrical, covering a wide range of cellular measurements. In terms of device size, microwave biosensors have the advantage of being small, and in terms of the cell concentrations that can be processed, microwave methods are on a similar scale to flow cytometry. Microwave methods are also characterized by a tiny sample volume (0.8–2 μL), in addition to inheriting the advantage of electrical methods that cells need not be stained.
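A brief sketch of the comparison logic follows. Normalizing both readouts to percent inhibition relative to an untreated control is a common convention and an assumption here, not the paper's stated procedure; all values are hypothetical.

```python
import numpy as np

def percent_inhibition(signal, untreated, blank):
    """Normalize a viability readout (OD450 or resonance-amplitude change)
    to percent growth inhibition relative to an untreated control."""
    return 100.0 * (1.0 - (signal - blank) / (untreated - blank))

conc = np.array([1.7, 3.2, 5.0, 9.5, 12.65])       # drug concentration [uM]
od450 = np.array([1.65, 1.20, 0.80, 0.42, 0.40])   # hypothetical CCK-8 readout
amp = np.array([-25.25, -25.35, -25.48, -25.68, -25.70])  # hypothetical S11 peak [dB]

inhib_cck8 = percent_inhibition(od450, untreated=1.80, blank=0.10)
# For the microwave readout, the amplitude span is rescaled so the
# shallowest dip (fewest live cells) maps to full inhibition
inhib_mw = percent_inhibition(amp - amp.min(),
                              untreated=amp.max() - amp.min(), blank=0.0)

# Agreement between the two curves can then be quantified, e.g. by their
# Pearson correlation coefficient
print(np.corrcoef(inhib_cck8, inhib_mw)[0, 1])
```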


Figure 6. Results of the cytotoxicity assay for different drug concentrations. (A) Microscopic images of cells cultured at different drug concentrations for 48 h and washed with PBS buffer to remove dead cells. (B) OD450 detection of live cells after 48 h of exposure to different concentrations of mitomycin-C. (C) Measured linear relationship between HepG2 concentration and OD450 value. (D) S11 measurements after exposure to drug concentrations of 1.7–40.0 μM. (E) Microwave amplitude detection of live cells in aqueous solution after 48 h of exposure to different concentrations of mitomycin-C.


Table 1. Summary of existing cytotoxicity assays.

3.3 Experimental principles

The measurement mechanism of this experiment is divided into two main parts. The first part is cellular water uptake and subsequent rupture as shown in Figure 7A . When a cell is placed in ultrapure water, the concentration of the solution inside the cell is relatively high, while the concentration of the solution in ultrapure water is extremely low. Osmotic forces drive water molecules from the exterior into the cell, causing a volumetric expansion of the cell, a phenomenon termed cellular water absorption. However, if the cell absorbs more water molecules than it can hold, the increased internal pressure may cause the cell membrane to rupture. This typically happens when the cell membrane’s elastic limit is surpassed. After mechanical shaking, the broken cell membrane and various ions within the cell are dispersed relatively uniformly in solution. Differences in the number of cells can lead to differences in the final total ion concentration of the solution, as the cells are treated with equal amounts of ultrapure water. It should be noted that the number of cells should not be too high, otherwise they may not all rupture completely after absorbing water.


Figure 7 . Experimental principles. (A) Cell rupture in ultrapure water. (B) Measurement of sample in biosensor.

The second part is the principle of sample detection by the microwave biosensor, as shown in Figure 7B. The cytosol contains a variety of ions, with sodium, potassium and chloride ions making up a large proportion. The effect of ion concentration on dielectric properties has been studied extensively; for example, an increase in the concentration of sodium chloride leads to a decrease in the loss angle tangent (Wang et al., 2013; Dandan et al., 2015). When the concentrations of sodium chloride and potassium chloride solutions are below a certain value, the dielectric properties of the solutions are similar to those of pure water, and only above a certain value do the dielectric properties show a clear trend (Eldamak et al., 2020). The loss angle tangent describes the ability of a material to absorb electromagnetic waves and is related to the energy loss in the material. As the concentration of a solution increases, so does the number of solute molecules or ions. At lower concentrations, the ions in the solution have a weaker ability to absorb electromagnetic waves, resulting in a larger loss angle tangent. However, as the concentration increases, the polarization effect of the ions in the solution increases, making the solution less able to absorb electromagnetic waves and decreasing the loss angle tangent. Changes in the loss angle tangent affect the degree of microwave attenuation in the solution and the resonance peak of the resonator. When the solution is dropped onto the capacitive area of the microwave resonator, the medium surrounding the capacitive area changes, and the microwave biosensor detects this change sensitively and rapidly. The VNA sends a microwave signal over a set frequency range and measures the amplitude and phase of the reflected and transmitted signals. By varying the frequency and recording the corresponding signal response, data on the S11 can be obtained. Further, the VNA can be connected to a computer to efficiently detect changes in the analyzed parameters using the corresponding software.
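The data-reduction step described above, sweeping S11 over frequency and recording the resonance dip, can be sketched as follows; a synthetic Lorentzian-shaped trace stands in for real VNA data.

```python
import numpy as np

def resonance_peak(freq_hz, s11_db):
    """Return the resonance frequency and peak amplitude of an S11 trace,
    i.e. the location and depth of its deepest dip."""
    i = int(np.argmin(s11_db))
    return freq_hz[i], s11_db[i]

# Synthetic S11 trace standing in for a VNA frequency sweep
f = np.linspace(1e9, 3e9, 2001)
s11 = -3.0 - 22.0 / (1.0 + ((f - 2.1e9) / 2e7) ** 2)  # Lorentzian-shaped dip

f0, amp = resonance_peak(f, s11)
print(f"f0 = {f0 / 1e9:.3f} GHz, peak amplitude = {amp:.2f} dB")
```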

4 Conclusion

In this work, a microwave resonant sensor based on an integrated passive device is presented. The device can be used for cell number detection and further, for the assessment of the degree of cell growth inhibition by drug concentration. The sensor’s capability to detect cytotoxicity was validated against the biological gold standard, the CCK-8 assay. Unlike the usual PBS treatment of cells, ultrapure water was used to treat the cells in this experiment, offering an innovative approach for cell sensing via microwave technology. This novel method provides rapid, precise, and miniaturized cytotoxicity assessments, suitable for various applications. Future enhancements should concentrate on minimizing random detection errors through appropriate peripheral matching circuits and improving sensor sensitivity via structural design modifications. The improvement of the device structure relies mainly on the optimization of the interdigital capacitance. The matching of microwave biosensors with electronic circuits and the introduction of algorithms can result in a miniaturized smart device.

Data availability statement

The original contributions presented in the study are included in the article/Supplementary material; further inquiries can be directed to the corresponding authors.

Ethics statement

Ethical approval was not required for the studies on humans in accordance with the local legislation and institutional requirements because only commercially available established cell lines were used. Ethical approval was not required for the studies on animals in accordance with the local legislation and institutional requirements because only commercially available established cell lines were used.

Author contributions

J-MZ: Writing–original draft, Conceptualization, Data curation, Formal Analysis, Investigation, Methodology, Software, Validation, Visualization. Y-KW: Writing–review and editing, Methodology, Software. B-WS: Writing–review and editing, Methodology. Y-XW: Writing–original draft, Validation. Y-FJ: Writing–review and editing, Supervision. G-LY: Writing–review and editing, Supervision. X-DG: Writing–review and editing, Supervision. TQ: Writing–review and editing, Supervision, Conceptualization, Funding acquisition, Project administration, Resources, Visualization.

Funding

The author(s) declare that financial support was received for the research, authorship, and/or publication of this article. This research is supported by the National Natural Science Foundation of China (Grant No. 61801146), a project funded by the China Postdoctoral Science Foundation (Grant No. 2021M691284), the Postgraduate Research and Practice Innovation Program of Jiangsu Province (Grant No. SJCX23_1226), and the Open Project of the Key Laboratory of Nanodevices and Applications, Chinese Academy of Sciences (Grant No. 22ZS07).

Acknowledgments

The authors acknowledge helpful conversations regarding the interpretation of these data with Prof. Xiaoman Zhou (School of Biotechnology, Jiangnan University, Wuxi, China). The sample of Figure 3 is designed by macrovector/Freepik.

Conflict of interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Publisher’s note

All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.

Adan, A., Kiraz, Y., and Baran, Y. (2016). Cell proliferation and cytotoxicity assays. Curr. Pharm. Biotechnol. 17, 1213–1221. doi:10.2174/1389201017666160808160513


Cai, L., Qin, X. J., Xu, Z. H., Song, Y. Y., Jiang, H. J., Wu, Y., et al. (2019). Comparison of cytotoxicity evaluation of anticancer drugs between real-time cell analysis and CCK-8 method. ACS Omega 4, 12036–12042. doi:10.1021/acsomega.9b01142

Chan, W. L., Zheng, Y. T., Huang, H., and Tam, S. C. (2002). Relationship between trichosanthin cytotoxicity and its intracellular concentration. Toxicology 177, 245–251. doi:10.1016/s0300-483x(02)00226-3

Chen, Y. F., Wu, H. W., Hong, Y. H., and Lee, H. Y. (2014). 40 GHz RF biosensor based on microwave coplanar waveguide transmission line for cancer cells (HepG2) dielectric characterization. Biosens. Bioelectron. 61, 417–421. doi:10.1016/j.bios.2014.05.060

Chu, H. N., Jiang, M. J., and Ma, T. G. (2020). On-chip dual-band millimeter-wave power divider using GaAs-based IPD process. IEEE Microw. Wirel. Compon. Lett. 30, 173–176. doi:10.1109/lmwc.2019.2961803


Dandan, F., Yong, X., Zhaojie, L., Yuming, W., Wenge, Y., and Changhu, X. (2015). Dielectric properties of myofibrillar protein dispersions from Alaska Pollock (Theragra chalcogramma) as a function of concentration, temperature, and NaCl concentration. J. Food Eng. 166, 342–348. doi:10.1016/j.jfoodeng.2015.06.038

Dongsar, T. T., Dongsar, T. S., Gupta, N., Almalki, W. H., Sahebkar, A., and Kesharwani, P. (2023). Emerging potential of 5-Fluorouracil-loaded chitosan nanoparticles in cancer therapy. J. Drug Deliv. Sci. Technol. 82, 104371. doi:10.1016/j.jddst.2023.104371

Eldamak, A. R., Thorson, S., and Fear, E. C. (2020). Study of the dielectric properties of artificial sweat mixtures at microwave Frequencies. Biosens.-Basel 10, 62. doi:10.3390/bios10060062

Elkady, H., Elwan, A., El-Mahdy, H. A., Doghish, A. S., Ismail, A., Taghour, M. S., et al. (2022). New benzoxazole derivatives as potential VEGFR-2 inhibitors and apoptosis inducers: design, synthesis, anti-proliferative evaluation, flowcytometric analysis, and in silico studies. J. Enzyme Inhib. Med. Chem. 37, 403–416. doi:10.1080/14756366.2021.2015343

Gao, M. J., Qiang, T., Ma, Y. C., Liang, J. E., and Jiang, Y. F. (2021). RFID-based microwave biosensor for non-contact detection of glucose solution. Biosens.-Basel 11, 480. doi:10.3390/bios11120480

He, Y., Chen, K. Y., Wang, T. T., Jia, M., Bai, L. H., Wang, X., et al. (2023). MiRNA-155 biosensors based on AlGaN/GaN heterojunction field effect transistors with an Au-SH-RNA probe gate. IEEE Trans. Electron Devices 70, 1860–1864. doi:10.1109/ted.2023.3245569

Jain, M. C., Nadaraja, A. V., Mohammadi, S., Vizcaino, B. M., and Zarifi, M. H. (2021). Passive microwave biosensor for real-time monitoring of subsurface bacterial growth. IEEE Trans. Biomed. Circuits Syst. 15, 122–132. doi:10.1109/TBCAS.2021.3055227

Jain, M. C., Nadaraja, A. V., Vizcaino, B. M., Roberts, D. J., and Zarifi, M. H. (2020). Differential microwave resonator sensor reveals glucose-dependent growth profile of E. coli on solid agar. IEEE Microw. Wirel. Compon. Lett. 30, 531–534. doi:10.1109/lmwc.2020.2980756

Kandwal, A., Igbe, T., Li, J., Liu, Y., Li, S., Liu, L. W. Y., et al. (2020). Highly sensitive closed loop enclosed split ring biosensor with high field confinement for aqueous and blood-glucose measurements. Sci. Rep. 10, 4081. doi:10.1038/s41598-020-60806-9

Kanemaru, H., Mizukami, Y., Kaneko, A., Kajihara, I., and Fukushima, S. (2022). A protocol for quantifying lymphocyte-mediated cytotoxicity using an impedance-based real-time cell analyzer. Star. Protoc. 3, 101128. doi:10.1016/j.xpro.2022.101128

Kiesgen, S., Messinger, J. C., Chintala, N. K., Tano, Z., and Adusumilli, P. S. (2021). Comparative analysis of assays to measure CAR T-cell-mediated cytotoxicity. Nat. Protoc. 16, 1331–1342. doi:10.1038/s41596-020-00467-0

Kim, J., Phan, M. T. T., Kweon, S., Yu, H., Park, J., Kim, K. H., et al. (2020). A flow cytometry-based whole blood natural killer cell cytotoxicity assay using overnight cytokine activation. Front. Immunol. 11, 1851. doi:10.3389/fimmu.2020.01851

Koukoulias, K., Papayanni, P. G., Jones, J., Kuvalekar, M., Watanabe, A., Velazquez, Y., et al. (2023). Assessment of the cytolytic potential of a multivirus-targeted T cell therapy using a vital dye-based, flow cytometric assay. Front. Immunol. 14, 1299512. doi:10.3389/fimmu.2023.1299512

Kumar, A., Wang, C., Meng, F. Y., Zhou, Z. L., Zhao, M., Yan, G. F., et al. (2020). High-sensitivity, quantified, linear and mediator-free resonator-based microwave biosensor for glucose detection. Sensors 20, 4024. doi:10.3390/s20144024

Lai, F. F., Shen, Z. W., Wen, H., Chen, J. L., Zhang, X., Lin, P., et al. (2017). A morphological identification cell cytotoxicity assay using cytoplasm-localized fluorescent probe (CLFP) to distinguish living and dead cells. Biochem. Biophys. Res. Commun. 482, 257–263. doi:10.1016/j.bbrc.2016.09.169

Liu, Z. J., Li, G., Long, C., Xu, J., Cen, J. R., and Yang, X. B. (2018). The antioxidant activity and genotoxicity of isogarcinol. Food Chem. 253, 5–12. doi:10.1016/j.foodchem.2018.01.074

McQuade, R. M., Stojanovska, V., Bornstein, J. C., and Nurgali, K. (2017). Colorectal cancer chemotherapy: the evolution of treatment and new approaches. Curr. Med. Chem. 24, 1537–1557. doi:10.2174/0929867324666170111152436

Muñoz-Enano, J., Vélez, P., Gil, M., and Martín, F. (2020). Planar microwave resonant sensors: a review and recent developments. Appl. Sci.-Basel 10, 2615. doi:10.3390/app10072615

Narang, R., Mohammadi, S., Ashani, M. M., Sadabadi, H., Hejazi, H., Zarifi, M. H., et al. (2018). Sensitive, real-time and non-intrusive detection of concentration and growth of pathogenic bacteria using microfluidic-microwave ring resonator biosensor. Sci. Rep. 8, 15807. doi:10.1038/s41598-018-34001-w

Kazemi, N., Abdolrazzaghi, M., and Light, P. E. (2023). In-human testing of a non-invasive continuous low-energy microwave glucose sensor with advanced machine learning capabilities. Biosens. Bioelectron. 241, 115668. doi:10.1016/j.bios.2023.115668

Niles, A. L., Moravec, R. A., and Riss, T. L. (2008). Update on in vitro cytotoxicity assays for drug development. Expert Opin. Drug Discov. 3, 655–669. doi:10.1517/17460441.3.6.655

Parboosing, R., Mzobe, G., Chonco, L., and Moodley, I. (2017). Cell-based assays for assessing toxicity: a basic guide. Med. Chem. 13, 13–21. doi:10.2174/1573406412666160229150803

Park, A., Hardin, J. S., Bora, N. S., and Morshedi, R. G. (2021). Effects of lidocaine on mitomycin C cytotoxicity. Ophthalmolo Glaucoma 4, 330–335. doi:10.1016/j.ogla.2020.10.011

Qiang, T., Wang, C., and Kim, N. Y. (2017). Quantitative detection of glucose level based on radiofrequency patch biosensor combined with volume-fixed structures. Biosens. Bioelectron. 98, 357–363. doi:10.1016/j.bios.2017.06.057

Radko, L., Minta, M., and Stypula-Trebas, S. (2013). Influence of fluoroquinolones on viability of Balb/c 3T3 and HepG2 cells. Bull. Vet. Inst. Pulawy 57, 599–606. doi:10.2478/bvip-2013-0102

Tofzikovskaya, Z., Casey, A., Howe, O., O’Connor, C., and McNamara, M. (2015). In vitro evaluation of the cytotoxicity of a folate-modified β-cyclodextrin as a new anti-cancer drug delivery system. J. Incl. Phenom. Macrocycl. Chem. 81, 85–94. doi:10.1007/s10847-014-0436-0

Tomasz, M. (1995). Mitomycin-C - small, fast and deadly (but very selective). Chem. Biol. 2, 575–579. doi:10.1016/1074-5521(95)90120-5

Vaucher, R. A., Teixeira, M. L., and Brandelli, A. (2010). Investigation of the cytotoxicity of antimicrobial peptide P40 on eukaryotic cells. Curr. Microbiol. 60, 1–5. doi:10.1007/s00284-009-9490-z

Wang, F., Jia, G. Z., Liu, L., Liu, F. H., and Liang, W. H. (2013). Temperature dependent dielectric of aqueous NaCl solution at microwave frequency. Acta Phys. Sin. 62, 048701. doi:10.7498/aps.62.048701

Wang, X. Y., Zhang, H. Y., Bai, M., Ning, T., Ge, S. H., Deng, T., et al. (2018). Exosomes serve as nanoparticles to deliver anti-miR-214 to reverse chemoresistance to cisplatin in gastric cancer. Mol. Ther. 26, 774–783. doi:10.1016/j.ymthe.2018.01.001

Wang, Y. J., Zhou, S. M., Xu, G., and Gao, Y. Q. (2015). Interference of phenylethanoid glycosides from cistanche tubulosa with the MTT assay. Molecules 20, 8060–8071. doi:10.3390/molecules20058060

Wang, Y. X., Fu, S. F., Xu, M. X., Tang, P., Liang, J. G., Jiang, Y. F., et al. (2023). Integrated passive sensing chip for highly sensitive and reusable detection of differential-charged nanoplastics concentration. ACS Sens. 8, 3862–3872. doi:10.1021/acssensors.3c01406

Withayachumnankul, W., Jaruwongrungsee, K., Tuantranont, A., Fumeaux, C., and Abbott, D. (2013). Metamaterial-based microfluidic sensor for dielectric characterization. Sens. Actuators, A 189, 233–237. doi:10.1016/j.sna.2012.10.027

Xu, J., and Zhu, Y. (2017). Tunable bandpass filter using a switched tunable diplexer technique. IEEE Trans. Ind. Electron. 64, 3118–3126. doi:10.1109/tie.2016.2638402

Yan, G. J., Du, Q., Wei, X. C., Miozzi, J., Kang, C., Wang, J. N., et al. (2018). Application of real-time cell electronic analysis system in modern pharmaceutical evaluation and analysis. Molecules 23, 3280. doi:10.3390/molecules23123280

Yang, J., Liao, L. W., Wang, J., Zhu, X. G., Xu, A., and Wu, Z. K. (2016). Size-dependent cytotoxicity of thiolated silver nanoparticles rapidly probed by using differential pulse voltammetry. Chemelectrochem 3, 1197–1200. doi:10.1002/celc.201600211

Yilmaz, T., Foster, R., and Hao, Y. (2019). Radio-frequency and microwave techniques for non-invasive measurement of blood glucose levels. Diagn. (Basel) 9, 6. doi:10.3390/diagnostics9010006

Yu, H., Wang, C., Meng, F. Y., Xiao, J., Liang, J. G., Kim, H., et al. (2021). Microwave humidity sensor based on carbon dots-decorated MOF-derived porous Co₃O₄ for breath monitoring and finger moisture detection. Carbon 183, 578–589. doi:10.1016/j.carbon.2021.07.031

Yu, H., Wang, C., Qiang, T., and Meng, F. Y. (2019). High performance miniaturized compact diplexer based on optimized integrated passive device fabrication technology. Solid-State Electron. 160, 107628. doi:10.1016/j.sse.2019.107628

Zarifi, M. H., Shariaty, P., Hashisho, Z., and Daneshmand, M. (2017). A non-contact microwave sensor for monitoring the interaction of zeolite 13X with CO₂ and CH₄ in gaseous streams. Sens. Actuators, B 238, 1240–1247. doi:10.1016/j.snb.2016.09.047

Zhang, H. K., and Wan, L. Q. (2022). Cell chirality as a novel measure for cytotoxicity. Adv. Biol. 6, e2101088. doi:10.1002/adbi.202101088

Zhang, Y. Y., Zhu, S. P., Xu, X., and Zuo, L. (2019). In vitro study of combined application of bevacizumab and 5-fluorouracil or bevacizumab and mitomycin C to inhibit scar formation in glaucoma filtration surgery. J. Ophthalmol. 2019, 1–10. doi:10.1155/2019/7419571

Zhou, Y., Ren, H. Z., Dai, B., Li, J., Shang, L. C., Huang, J. F., et al. (2018). Hepatocellular carcinoma-derived exosomal miRNA-21 contributes to tumor progression by converting hepatocyte stellate cells to cancer-associated fibroblasts. J. Exp. Clin. Cancer Res. 37, 324. doi:10.1186/s13046-018-0965-2

Zhu, H., and Abbosh, A. M. (2016). Tunable balanced bandpass filter with wide tuning range of center frequency and bandwidth using compact coupled-line resonator. IEEE Microw. Wirel. Compon. Lett. 26, 7–9. doi:10.1109/lmwc.2015.2505647

Keywords: cytotoxicity assay, microwave sensors, live cells, drug concentrations, growth inhibition

Citation: Zhao J-M, Wang Y-K, Shi B-W, Wang Y-X, Jiang Y-F, Yang G-L, Gao X-D and Qiang T (2024) Microwave biosensor for the detection of growth inhibition of human liver cancer cells at different concentrations of chemotherapeutic drug. Front. Bioeng. Biotechnol. 12:1398189. doi: 10.3389/fbioe.2024.1398189

Received: 09 March 2024; Accepted: 23 April 2024; Published: 13 May 2024.


Copyright © 2024 Zhao, Wang, Shi, Wang, Jiang, Yang, Gao and Qiang. This is an open-access article distributed under the terms of the Creative Commons Attribution License (CC BY). The use, distribution or reproduction in other forums is permitted, provided the original author(s) and the copyright owner(s) are credited and that the original publication in this journal is cited, in accordance with accepted academic practice. No use, distribution or reproduction is permitted which does not comply with these terms.

*Correspondence: Xiao-Dong Gao, [email protected] ; Tian Qiang, [email protected]

This article is part of the Research Topic

Insights in Biosensors and Biomolecular Electronics 2024: Novel Developments, Current Challenges, and Future Perspectives

Apple cider vinegar for weight management in Lebanese adolescents and young adults with overweight and obesity: a randomised, double-blind, placebo-controlled study

  • Rony Abou-Khalil 1 (http://orcid.org/0000-0002-0214-242X),
  • Jeanne Andary 2 and
  • Elissar El-Hayek 1
  • 1 Department of Biology, Holy Spirit University of Kaslik, Jounieh, Lebanon
  • 2 Nutrition and Food Science Department, American University of Science and Technology, Beirut, Lebanon
  • Correspondence to Dr Rony Abou-Khalil, Department of Biology, Holy Spirit University of Kaslik, Jounieh, Lebanon; ronyaboukhalil{at}usek.edu.lb

Background and aims Obesity and overweight have become significant health concerns worldwide, leading to an increased interest in finding natural remedies for weight reduction. One such remedy that has gained popularity is apple cider vinegar (ACV).

Objective To investigate the effects of ACV consumption on weight, blood glucose, triglyceride and cholesterol levels in a sample of the Lebanese population.

Materials and methods 120 overweight and obese individuals were recruited. Participants were randomly assigned to either an intervention group receiving 5, 10 or 15 mL of ACV or a control group receiving a placebo (group 4) over a 12-week period. Measurements of anthropometric parameters, fasting blood glucose, triglyceride and cholesterol levels were taken at weeks 0, 4, 8 and 12.
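The allocation described above can be illustrated with a simple randomisation sketch; the equal group sizes of 30 and the group labels below are assumptions for illustration, since the abstract reports only random assignment to the four arms.

```python
import random

participants = list(range(1, 121))  # the 120 recruited individuals
groups = ["ACV 5 mL", "ACV 10 mL", "ACV 15 mL", "placebo (group 4)"]

random.seed(42)          # fixed seed for a reproducible example
random.shuffle(participants)

# Assumed equal allocation of 30 participants per arm
allocation = {g: sorted(participants[i * 30:(i + 1) * 30])
              for i, g in enumerate(groups)}
for group, ids in allocation.items():
    print(group, ids[:5], "...")
```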

Results Our findings showed that daily consumption of the three doses of ACV for a duration of between 4 and 12 weeks is associated with significant reductions in anthropometric variables (weight, body mass index, waist/hip circumferences and body fat ratio), blood glucose, triglyceride and cholesterol levels. No significant risk factors were observed during the 12 weeks of ACV intake.

Conclusion Consumption of ACV in people with overweight and obesity led to an improvement in the anthropometric and metabolic parameters. ACV could be a promising antiobesity supplement that does not produce any side effects.

  • Weight management
  • Lipid lowering

Data availability statement

All data relevant to the study are included in the article or uploaded as supplementary information.

This is an open access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited, appropriate credit is given, any changes made indicated, and the use is non-commercial. See:  http://creativecommons.org/licenses/by-nc/4.0/ .

https://doi.org/10.1136/bmjnph-2023-000823


WHAT IS ALREADY KNOWN ON THIS TOPIC

Recently, there has been increasing interest in alternative remedies to support weight management, and one such remedy that has gained popularity is apple cider vinegar (ACV).

A few small-scale studies conducted on humans have shown promising results, with ACV consumption leading to weight loss, reduced body fat and decreased waist circumference.

WHAT THIS STUDY ADDS

No study has been conducted to investigate the potential antiobesity effect of ACV in the Lebanese population. By conducting research in this demographic, the study provides region-specific data and offers a more comprehensive understanding of the impact of ACV on weight loss and metabolic health.

HOW THIS STUDY MIGHT AFFECT RESEARCH, PRACTICE OR POLICY

The results might contribute to evidence-based recommendations for the use of ACV as a dietary intervention in the management of obesity.

The study could stimulate further research in the field, prompting scientists to explore the underlying mechanisms and conduct similar studies in other populations.

Introduction

Obesity is a growing global health concern characterised by excessive body fat accumulation, often resulting from a combination of genetic, environmental and lifestyle factors. 1 It is associated with an increased risk of numerous chronic illnesses such as type 2 diabetes, cardiovascular diseases, several common cancers and osteoarthritis. 1–3

According to the WHO, more than 1.9 billion adults were overweight worldwide in 2016, of whom more than 650 million were obese. 4 Worldwide obesity has nearly tripled since 1975. 4 The World Obesity Federation’s 2023 Atlas predicts that by 2035 more than half of the world’s population will be overweight or obese. 5

According to the 2022 Global Nutrition Report, Lebanon has made limited progress towards meeting its diet-related non-communicable diseases target. A total of 39.9% of adult (aged ≥18 years) women and 30.5% of adult men are living with obesity. Lebanon's obesity prevalence is higher than the regional average of 10.3% for women and 7.5% for men. 6 In Lebanon, obesity was rated the most important health problem by 27.6% of respondents, ranking fifth after cancer, cardiovascular disease, smoking and HIV/AIDS. 7

In recent years, there has been increasing interest in alternative remedies to support weight management, and one such remedy that has gained popularity is apple cider vinegar (ACV), which is a type of vinegar made by fermenting apple juice. ACV contains vitamins, minerals, amino acids and polyphenols such as flavonoids, which are believed to contribute to its potential health benefits. 8 9

It has been used for centuries as a traditional remedy for various ailments and has recently gained attention for its potential role in weight management.

In hypercaloric-fed rats, the daily consumption of ACV showed a lower rise in blood sugar and lipid profile. 10 In addition, ACV seems to decrease oxidative stress and reduces the risk of obesity in male rats with high-fat consumption. 11

A few small-scale studies conducted on humans have shown promising results, with ACV consumption leading to weight loss, reduced body fat and decreased waist circumference. 12 13 In fact, it has been suggested that ACV, by slowing down gastric emptying, might promote satiety and reduce appetite. 14–16 Furthermore, ACV intake seems to ameliorate the glycaemic and lipid profiles in healthy adults 17 and might have a positive impact on insulin sensitivity, potentially reducing the risk of type 2 diabetes. 8 10 18

Unfortunately, the sample sizes and durations of these studies were limited, necessitating larger and longer-term studies for more robust conclusions.

This work aims to study the efficacy and safety of ACV in reducing weight and ameliorating the lipid and glycaemic profiles in a sample of overweight and obese adolescents and young adults of the Lebanese population. To the best of our knowledge, no study has been conducted to investigate the potential antiobesity effect of ACV in the Lebanese population.

Materials and methods

Participants

A total of 120 overweight and obese adolescents and young adults (46 men and 74 women) were enrolled in the study and assigned to either placebo group or experimental groups (receiving increasing doses of ACV).

The subjects were evaluated for eligibility according to the following inclusion criteria: age between 12 and 25 years, BMI between 27 and 34 kg/m², no chronic diseases, no intake of medications and no intake of ACV over the 8 weeks prior to the beginning of the study. Subjects who met the inclusion criteria were selected by a convenience sampling technique. Those who experienced heartburn due to vinegar were excluded.

Demographic and clinical data and eating habits were collected from all participants by a self-administered questionnaire.

Study design

This study was a double-blind, randomised clinical trial conducted for 12 weeks.

Subjects were divided randomly into four groups: three treatment groups and a placebo group. A simple randomisation method was employed using the randomisation allocation software. Groups 1, 2 and 3 consumed 5, 10 and 15 mL, respectively, of ACV (containing 5% of acetic acid) diluted in 250 mL of water daily, in the morning on an empty stomach, for 12 weeks. The control group received a placebo consisting of water with similar taste and appearance. In order to mimic the taste of vinegar, the placebo group’s beverage (250 mL of water) contained lactic acid (250 mg/100 mL). Identical-looking ACV and placebo bottles were used and participants were instructed to consume their assigned solution without knowing its identity. The subject’s group assignment was withheld from the researchers performing the experiment.
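To make the allocation concrete, here is a minimal sketch of one way a balanced random allocation into the four groups could be reproduced in Python; the function, seed and participant labels are hypothetical illustrations, not the study's actual randomisation allocation software.

    import random

    def randomise(participants, seed=42):
        # Shuffle, then deal participants round-robin into the four arms
        # described above (groups 1-3: 5, 10, 15 mL ACV; group 4: placebo).
        arms = {"5 mL ACV": [], "10 mL ACV": [], "15 mL ACV": [], "placebo": []}
        labels = list(arms)
        shuffled = participants[:]
        random.Random(seed).shuffle(shuffled)  # fixed seed keeps the allocation reproducible
        for i, participant in enumerate(shuffled):
            arms[labels[i % len(labels)]].append(participant)
        return arms

    allocation = randomise([f"P{i:03d}" for i in range(1, 121)])  # 120 participants, 30 per arm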

Subjects consumed their normal diets throughout the study. The contents of daily meals and snacks were recorded in a diet diary. The physical activity of the subjects was also recorded. Daily individual phone messages were sent to all participants to remind them to take the ACV or the placebo. A mailing group was also created. Confidentiality was maintained throughout the procedure.

At weeks 0, 4, 8 and 12, anthropometric measurements were taken for all participants, and the level of glucose, triglycerides and total cholesterol was assessed by collecting 5 mL of fasting blood from each subject.

Anthropometric measurements

Body weight was measured in kg, to the nearest 0.01 kg, by a standardised and calibrated digital scale. Height was measured in cm, to the nearest 0.1 cm, by a stadiometer. Anthropometric measurements were taken for all participants, by a team of trained field researchers, after a 10–12 hour fast and while wearing only undergarments.

Body mass indices (BMIs) were calculated using the following equation: BMI = weight (kg) / [height (m)]²
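As a quick worked example of that formula (the weight and height values here are hypothetical):

    def bmi(weight_kg, height_m):
        # BMI = weight in kilograms divided by the square of height in metres
        return weight_kg / height_m ** 2

    print(round(bmi(88.0, 1.70), 1))  # prints 30.4, within the study's 27-34 kg/m² range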

The waist circumference measurement was taken between the lowest rib margin and the iliac crest while the subject was in a standing position (to the nearest 0.1 cm). Hip circumference was measured at the widest point of the hip (to the nearest 0.1 cm).

The body fat ratio (BFR) was measured by the bioelectrical impedance analysis method (OMRON Fat Loss Monitor, Model No HBF-306C; Japan). Anthropometric variables are shown in table 1 .

Table 1: Baseline demographic, anthropometric and biochemical variables of the three apple cider vinegar groups (groups 1, 2 and 3) and the placebo group (group 4)

Blood biochemical analysis

Serum glucose was measured by the glucose oxidase method. 19 Triglyceride levels were determined using a serum triglyceride determination kit (TR0100, Sigma-Aldrich). Cholesterol levels were determined using a cholesterol quantitation kit (MAK043, Sigma-Aldrich). Biochemical variables are shown in table 1 .

Statistical methods and data analysis

Data are presented as mean±SD. Statistical analyses were performed using Statistical Package for the Social Sciences (SPSS) software (version 23.0). Significant differences between groups were determined by using an independent t-test. Statistical significance was set at p<0.05.
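For illustration, a minimal sketch of the kind of between-group comparison described here, using SciPy's independent-samples t-test; the glucose values below are made up for the example and are not the study's data.

    from scipy import stats

    # Hypothetical week-12 fasting glucose values (mg/dL), not the study's data.
    acv_group = [88, 91, 85, 90, 87, 92, 86, 89]
    placebo_group = [97, 101, 95, 99, 100, 96, 98, 102]

    t_stat, p_value = stats.ttest_ind(acv_group, placebo_group)
    print(p_value < 0.05)  # True would indicate significance at the study's p<0.05 threshold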

Ethical approval

The study protocol was reviewed and approved by the research ethics committee (REC) of the Higher Centre for Research (HCR) at The Holy Spirit University of Kaslik (USEK), Lebanon (approval number HCR/EC 2023-005). The participants were informed of the study objectives and signed a written informed consent before enrolment. The study was conducted in accordance with the International Conference on Harmonisation E6 Guideline for Good Clinical Practice and the ethical principles of the Declaration of Helsinki.

Results

Sociodemographic, nutritional and other baseline characteristics of the participants

A total of 120 individuals (46 men and 74 women) with BMIs between 27 and 34 kg/m² were enrolled in the study. The mean age of the subjects was 17.8±5.7 years in the placebo group and 17.6±5.4 years in the experimental groups.

The majority of participants, approximately 98.3%, were non-vegetarian, and 89% of them reported a high eating frequency, with more than four meals per day. Eighty-seven per cent had no family history of obesity and 98% had no history of childhood obesity. The majority reported not having a regular exercise routine and experiencing negative emotions or anxiety. All participants were non-smokers and non-drinkers. A small percentage (6.7%) were following a therapeutic diet.

Effects of ACV intake on anthropometric variables

The addition of 5 mL, 10 mL or 15 mL of ACV to the diet resulted in significant decreases in body weight and BMI at weeks 4, 8 and 12 of ACV intake, compared with baseline (week 0) (p<0.05). The decrease in body weight and BMI appeared to be dose-dependent, with the group receiving 15 mL of ACV showing the greatest reduction ( table 2 ).

Table 2: Anthropometric variables of the participants at weeks 0, 4, 8 and 12

The impact of ACV on body weight and BMI seems to be time-dependent as well. Reductions were more pronounced as the study progressed, with the most significant changes occurring at week 12.

The circumferences of the waist and hip, along with the body fat ratio (BFR), decreased significantly in the three treatment groups at weeks 8 and 12 compared with week 0 (p<0.05). No significant effect was observed at week 4 compared with baseline (p>0.05). The effect of ACV on these parameters seems to be time-dependent, with the most prominent effect observed at week 12 compared with weeks 4 and 8. However, it does not seem to be dose-dependent, as the three doses of ACV showed a similar level of efficacy in reducing waist/hip circumferences and the BFR at weeks 8 and 12, compared with baseline ( table 2 ).

The placebo group did not experience any significant changes in the anthropometric variables throughout the study (p>0.05). This highlights that the observed improvements in body weight, BMI, waist and hip circumferences and Body Fat Ratio were likely attributed to the consumption of ACV.

Effects of ACV on blood biochemical parameters

The consumption of ACV also led to a time- and dose-dependent decrease in serum glucose, serum triglyceride and serum cholesterol levels ( table 3 ).

Table 3: Biochemical variables of the participants at weeks 0, 4, 8 and 12

Serum glucose levels decreased significantly with all three doses of ACV at weeks 4, 8 and 12 compared with week 0 (p<0.05) ( table 3 ). Triglyceride and total cholesterol levels decreased significantly at weeks 8 and 12, compared with week 0 (p<0.05). A dose of 15 mL of ACV for a duration of 12 weeks seems to be the most effective in reducing these three blood biochemical parameters.

There were no changes in glucose, triglyceride and cholesterol levels in the placebo group at weeks 4, 8 and 12 compared with week 0 ( table 3 ).

These data suggest that continued intake of 15 mL of ACV for more than 8 weeks is effective in reducing blood fasting sugar, triglyceride and total cholesterol levels in overweight/obese people.

Adverse reactions of ACV

No apparent adverse or harmful effects were reported by the participants during the 12 weeks of ACV intake.

Discussion

During the past two decades of the last century, childhood and adolescent obesity dramatically increased healthcare costs. 20 21 Diet and exercise are the basic elements of weight loss. Many complementary therapies have been promoted to treat obesity, but few are truly beneficial.

The present study is the first to investigate the antiobesity effectiveness of ACV, the fermented juice from crushed apples, in the Lebanese population.

A total of 120 overweight and obese adolescents and young adults (46 men and 74 women) with BMIs between 27 and 34 kg/m² were enrolled. Participants were randomised to receive either a daily dose of ACV (5, 10 or 15 mL) or a placebo for a duration of 12 weeks.

Some previous studies have suggested that taking ACV before or with meals might help to reduce postprandial blood sugar levels, 22 23 but in our study, participants took ACV in the morning on an empty stomach. The choice of ACV intake timing was motivated by the aim to study the impact of apple cider vinegar without the confounding variables introduced by simultaneous food intake. In addition, taking ACV before meals could better reduce appetite and increase satiety.

Our findings reveal that the consumption of ACV in people with overweight and obesity led to an improvement in the anthropometric and metabolic parameters.

It is important to note that the diet diary and physical activity did not differ among the three treatment groups and the placebo throughout the whole study, suggesting that the decrease in anthropometric and biochemical parameters was caused by ACV intake.

Studies conducted on animal models often attribute these effects to various mechanisms, including increased energy expenditure, improved insulin sensitivity, and appetite and satiety regulation.

While vinegar is composed of various ingredients, its primary component is acetic acid (AcOH). It has been shown that 15 min after oral ingestion of 100 mL of vinegar containing 0.75 g of acetic acid, serum acetate levels increase from 120 µmol/L at baseline to 350 µmol/L 24 ; this fast rise in circulatory acetate is due to its rapid absorption in the upper digestive tract. 24 25

Biological action of acetate may be mediated by binding to the G-protein coupled receptors (GPRs), including GPR43 and GPR41. 25 These receptors are expressed in various insulin-sensitive tissues, such as adipose tissue, 26 skeletal muscle, liver, 27 and pancreatic beta cells, 28 but also in the small intestine and colon. 29 30

Yamashita and colleagues have revealed that oral administration of AcOH to type 2 diabetic Otsuka Long-Evans Tokushima Fatty rats improves glucose tolerance and reduces lipid accumulation in the adipose tissue and liver. This improvement in obesity-linked type 2 diabetes is due to the capacity of AcOH to inhibit the activity of carbohydrate-responsive element-binding protein, a transcription factor involved in regulating the expression of lipogenic genes such as fatty acid synthase and acetyl-CoA carboxylase. 26 31 Sakakibara and colleagues have reported that AcOH, besides inhibiting lipogenesis, reduces the expression of genes involved in gluconeogenesis, such as glucose-6-phosphatase. 32 The effect of AcOH on lipogenesis and gluconeogenesis is in part mediated by the activation of 5'-AMP-activated protein kinase in the liver. 32 This enzyme seems to be an important pharmacological target for the treatment of metabolic disorders such as obesity, type 2 diabetes and hyperlipidaemia. 32 33

5'-AMP-activated protein kinase is also known to stimulate fatty acid oxidation, thereby increasing energy expenditure. 32 33 These data suggest that the effect of ACV on weight and fat loss may be partly due to the ability of AcOH to inhibit lipogenesis and gluconeogenesis and activate fat oxidation.

Animal studies suggest that besides increasing energy expenditure, acetate may also reduce energy intake by regulating appetite and satiety. In mice, an intraperitoneal injection of acetate significantly reduced food intake by activating vagal afferent neurons. 32–34 It is important to note that animal studies on the effect of acetate on vagal activation are contradictory. This might be due to the site of administration of acetate and the use of different animal models.

In addition, in vitro and in vivo animal model studies suggest that acetate increases the secretion of gut-derived satiety hormones, such as GLP-1 and PYY, by enteroendocrine cells located in the gut. 25 32–35

Human studies related to the effect of vinegar on body weight are limited.

In accordance with our study, a randomised clinical trial conducted by Khezri and his colleagues has shown that daily consumption of 30 mL of ACV for 12 weeks significantly reduced body weight, BMI, hip circumference, Visceral Adiposity Index and appetite score in obese subjects subjected to a restricted calorie diet, compared with the control group (restricted calorie diet without ACV). Furthermore, Khezri and his colleagues showed that plasma triglyceride and total cholesterol levels significantly decreased, and high density lipoprotein cholesterol concentration significantly increased, in the ACV group in comparison with the control group. 13 32–34

Similarly, Kondo and his colleagues showed that daily consumption of 15 or 30 mL of ACV for 12 weeks reduced body weight, BMI and serum triglyceride in a sample of the Japanese population. 12 13 32–34

In contrast, Park et al reported that daily consumption of 200 mL of pomegranate vinegar for 8 weeks significantly reduced total fat mass in overweight or obese subjects compared with the control group without significantly affecting body weight and BMI. 36 This contradictory result could be explained by the difference in the percentage of acetate and other potentially bioactive compounds (such as flavonoids and other phenolic compounds) in different vinegar types.

In Lebanon, the percentage of the population with a BMI of 30 kg/m² or more is approximately 32%. The results of the present study showed that in obese Lebanese subjects who had BMIs ranging from 27 to 34 kg/m², daily oral intake of ACV for 12 weeks reduced body weight by 6–8 kg and BMI by 2.7–3.0 points.

It would be interesting to investigate in future studies the effect of neutralised acetic acid on anthropometric and metabolic parameters, knowing that acidic substances, including acetic acid, could contribute to enamel erosion over time. In addition to promoting oral health, neutralising the acidity of ACV could improve its taste, making it more palatable. Furthermore, studying the effects of ACV on weight loss in young Lebanese individuals provides valuable insights, but further research is needed for a comprehensive understanding of how the effect of ACV might vary across different age groups, particularly in older populations and menopausal women.

The findings of this study indicate that ACV consumption for 12 weeks led to significant reduction in anthropometric variables and improvements in blood glucose, triglyceride and cholesterol levels in overweight/obese adolescents/adults. These results suggest that ACV might have potential benefits in improving metabolic parameters related to obesity and metabolic disorders in obese individuals. The results may contribute to evidence-based recommendations for the use of ACV as a dietary intervention in the management of obesity. The study duration of 12 weeks limits the ability to observe long-term effects. Additionally, a larger sample size would enhance the generalisability of the results.

Ethics statements

Patient consent for publication

Consent obtained from parent(s)/guardian(s)

Ethics approval

This study involves human participants and was approved by the research ethics committee of the Higher Center for Research (HCR) at The Holy Spirit University of Kaslik (USEK), Lebanon. The number/ID of the approval is HCR/EC 2023-005. Participants gave informed consent to participate in the study before taking part.



Contributors RA-K: conceptualisation, methodology, data curation, supervision, guarantor, project administration, visualisation, writing–original draft. EE-H: conceptualisation, methodology, data curation, visualisation, writing–review and editing. JA: investigation, validation, writing–review and editing.

Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

Competing interests None declared.

Provenance and peer review Not commissioned; externally peer reviewed.


Research Article

An experimental investigation of Lean Six Sigma philosophies in a high-mix low-volume manufacturing environment

Amanda Normand and T. H. Bradley (these authors contributed equally to this work)

Affiliation: Department of Systems Engineering, Walter Scott, Jr. College of Engineering, Colorado State University, Fort Collins, Colorado, United States of America

* E-mail: [email protected]

  • Published: May 17, 2024
  • https://doi.org/10.1371/journal.pone.0299498

This article experimentally examines methods for implementing the philosophies of Lean Six Sigma (LSS) in a High-Mix Low-Volume (HMLV) manufacturing environment. HMLV environments present unique challenges to LSS paradigms because of the need for extraordinary operational flexibility and customer responsiveness. The subject HMLV manufacturer for this experimentation produces (among 8,500 others) an example component for which 3 machines work independently to perform the necessary manufacturing operations. The experiment that is the subject of this research seeks to adapt LSS philosophies to develop treatments to improve the performance of the manufacturing of this component. These LSS-inspired treatments included 1) using cellular manufacturing methods, with the 3 machines operating as a single work cell to manufacture the component, and 2) using a single multipurpose machine to perform all operations required to manufacture the component. The results of this experiment demonstrate that the cellular manufacturing method was the most effective at reducing costs, standardizing operations at a process level, and increasing throughput. The single machine processing method improved production rates and on-time delivery relative to the baseline, but greatly increased lead time, thereby increasing total cost per part. These results highlight the importance of critically assessing the application of LSS within HMLV environments compared to the Low-Mix High-Volume (LMHV) environments where LSS is traditionally successful. HMLV manufacturers and researchers can use these findings to identify the most effective methods for their specific needs and to design interventions that will improve system-level manufacturing performance in high-mix environments.

Citation: Normand A, Bradley TH (2024) An experimental investigation of Lean Six Sigma philosophies in a high-mix low-volume manufacturing environment. PLoS ONE 19(5): e0299498. https://doi.org/10.1371/journal.pone.0299498

Editor: Agbotiname Lucky Imoize, University of Lagos Faculty of Engineering, NIGERIA

Received: July 6, 2023; Accepted: February 11, 2024; Published: May 17, 2024

Copyright: © 2024 Normand, Bradley. This is an open access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: Relevant data are within the manuscript and supporting files. Dataset is also publicly available on Dryad (forthcoming 2024): https://doi.org/10.5061/dryad.8pk0p2nvv Current access is provided as URL.

Funding: The authors received no specific funding for this work.

Competing interests: I have read the journal’s policy and the authors of this manuscript have the following competing interests: Amanda Normand was employed by the manufacturer while conducting this research. Thomas Bradley declares no competing interests. This does not alter our adherence to PLOS ONE policies on sharing data and materials.

1. Introduction

The manufacturing landscape is rapidly evolving, and much of the manufacturing industry has adopted high-mix strategies to compete globally. High-mix low-volume (HMLV) manufacturers are those that produce a large variety of products and components in relatively small quantities [ 1 ]. The HMLV environment embraces high variability in processes, demand rates, and product complexity because this variability allows for customization as a competitive strategy [ 2 ]. HMLV manufacturing as a category has been growing rapidly since the 1970s [ 3 ] despite global competition for low-cost production [ 4 ]. HMLV manufacturing focuses on customer-driven product customization and specialization that high-volume manufacturing cannot easily adapt to [ 5 ].

Lean and Six Sigma (LSS) are typically conceptualized and executed together as a combination of existing industrial paradigms, and both have been widely applied to conventional LMHV manufacturing. Lean is a method of improving manufacturing processes through the removal of waste from the system. Six Sigma is a means of statistically controlling processes [ 6 ]. Six Sigma asserts that quality values, like feature tolerances, tend to fall on a normal distribution when the process is "in control"; when the process requires correction, the distribution of measurements becomes skewed. This insight allows manufacturers to focus on process corrections rather than constantly adjusting processes, which can be expensive and unnecessary. LSS is also a departure from traditional measurements, such as defects per million, in that it provides in-process quality controls and allows for corrections to keep the process in control [ 7 ].
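To give a concrete sense of this statistical-control idea, here is a minimal sketch of the process-capability index Cpk, which the discussion later refers to; the tolerance limits and measurements below are hypothetical.

    from statistics import mean, stdev

    def cpk(measurements, lsl, usl):
        # Cpk = min(USL - mean, mean - LSL) / (3 * sigma): the distance from the
        # process mean to the nearest specification limit, in units of 3 sigma.
        mu, sigma = mean(measurements), stdev(measurements)
        return min(usl - mu, mu - lsl) / (3 * sigma)

    # Hypothetical shaft diameters (mm) against a 9.95-10.05 mm tolerance band.
    print(round(cpk([10.01, 9.99, 10.02, 10.00, 9.98, 10.01], 9.95, 10.05), 2))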

1.1 Literature review

The HMLV manufacturing environment presents unique challenges in the application of LSS industrial paradigms, which have traditionally been applied to great benefit in Low-Mix High-Volume (LMHV) manufacturing [ 8 ]. As summarized in Table 1 , there are differences between HMLV and LMHV manufacturing that challenge the direct applicability of LSS in HMLV manufacturing environments.

[Table 1. Differences between HMLV and LMHV manufacturing. https://doi.org/10.1371/journal.pone.0299498.t001]

In addition to the practical examples of this disconnect presented in Table 1 , there are many philosophical aspects of LSS that require a re-envisioning in the context of HMLV. For example, although the Lean concept of "waste" is fundamental, the types of waste in HMLV manufacturing are different from those in LMHV environments. Definitionally, "waste" includes transportation, inventory, motion, waiting, overproduction, over-processing, defects, and skills [ 6 ]. Lean Manufacturing focuses on reducing these types of waste in production processes [ 10 ]. In HMLV manufacturing, overproduction waste is particularly relevant: although inventory can be wasteful, inventory is also effective when used as a buffer against the varying demand patterns that are amplified by customization efforts [ 11 ]. The customization of products creates complexity in scheduling and load leveling for HMLV manufacturers. These production variations influence motion waste and transportation waste, and complicate the layout of a work area that can accommodate the diverse value streams. This inhibits a smooth production flow and creates areas where operational "bottlenecks" occur. In LMHV manufacturing, these bottlenecks typically have easily identifiable and predictable inputs and outputs. In HMLV manufacturing, the diversity of activities and lack of repetition make identification of bottlenecks more difficult.

In HMLV environments, waiting waste may also manifest differently than in LMHV environments. In LMHV, waiting waste is typically due to long production runs. In HMLV, the large degree of product variation means that changeovers are instead a primary source of waiting waste. Product variation can contribute to increased motion and transportation waste because of the difficulty and complexity of defining a work area layout that enables multiple converging value streams. Value streams can be defined as a map of how product flows through a production system [ 10 ]. Value Stream Mapping is typically used to identify areas of production where inefficiencies and waste occur, in order to improve flow and eliminate these wastes [ 12 ]. Considering this complexity and the flexibility that HMLV environments must maintain, we can also understand that the data-driven approach inherent to LSS must be adapted to smaller batches and greater diversity in order to be successful in waste reduction. When employing LSS philosophies and techniques, adaptations to the specific environment and context of the process to be improved are necessary [ 13 ]. This same re-envisioning of the precepts of LSS is applicable to many of the challenges (as in Table 1 ) of applying LSS to HMLV.

Based on this understanding of the applicability and value of LSS in HMLV environments, we can identify that there is a need to measure the efficacy of these types of adaptations of LSS in HMLV practice. This research therefore presents a set of adaptations of LSS philosophies, metrics, and interventions to meet the needs of a HMLV manufacturer located in Wisconsin, USA. We present an experimental evaluation and assessment of these interventions and discuss the implications of these findings to the more general question of the applicability of LSS to the HMLV manufacturing environment. Conclusions focus on the definition of specific LSS philosophies that can be used to inspire improvements in HMLV manufacturing.

2. Methods

This section presents the methods by which we define, measure and test a set of LSS-inspired interventions in a HMLV manufacturing environment. Following LSS philosophies, the Define, Measure, Analyze, Improve, Control (DMAIC) process was used for each experimental intervention. This is further detailed for each intervention in S1 Appendix .

2.1. HMLV manufacturing site

A set of experiments was conducted in practice within an operating HMLV manufacturing environment to assess how the philosophies of LSS could be effectively applied and evaluated. The location of these experiments was an operating HMLV manufacturing plant in Wisconsin, USA. This plant is classifiable as a HMLV environment in that it manufactures 8,500 part numbers annually with an average batch quantity of 40 components.

Within this HMLV environment, there are several “work centers” that are part of the many converging value streams. Each work center is used as needed based on the production demand that is both volatile and continuously changing. This aspect of HMLV manufacturing allows for experimentation without significant production disruption if conducted while a work center has lower volume production flow.

2.2. Adaptation of LSS to HMLV

LSS philosophies include many potential benefits for HMLV manufacturers that could help resolve some of the major detractors from competitive advantage. These philosophies provide a framework that aims to reduce waste and enhance process flow through reduced process variation and defects. This aligns closely with the objectives of HMLV manufacturers that seek to improve resource utilization, minimize costs, and smooth production flow while maintaining the flexibility required to provide highly customized products. It is hypothesized that improving flow in this production environment will reduce the overall cost to manufacture components.

These experiments specifically targeted process-level production streamlining instead of product-level interventions. By focusing on process-level improvements, these experiments aimed to maximize standardization within the HMLV environment, leading to enhanced waste reduction outcomes.

The specific HMLV manufacturer where this study was conducted had multiple CNC turning centers. The number of operators available was less than the number of turning centers available. In LMHV manufacturing, long runs (large quantities) of components enable operators to run multiple machines, where they focus on loading and unloading materials to keep production continuous. Tooling change-overs between components are infrequent and, depending on volume, may be unnecessary [ 14 ]. To do this, highly specialized and automated equipment is used that is typically dedicated to a specific product or product line. This dedicated equipment is a large capital investment that seeks to improve productivity but reduces flexibility.

The need for flexibility in HMLV manufacturing can have a direct negative impact on equipment uptime because it adds to operational complexity. To improve equipment uptime, specialized equipment and techniques are used. However, with the large variety of components, this becomes impractical from a capital investment standpoint. The potential answer to this is to focus on less specialized operations that can be used for multiple components by reducing them to their basic functions and creating efficiency at that level.

2.3. Baseline manufacturing operations

At a process level, we can understand that many unrelated components have similar processing steps, requiring coordination and consideration in the HMLV environment. As illustrated in Fig 1 , mapping these process steps for multiple components shows crossing and overlapping paths for product flow [ 15 ].

[Fig 1. Process-step mapping for multiple components, showing crossing and overlapping product-flow paths. https://doi.org/10.1371/journal.pone.0299498.g001]

A single part number (Component A from Fig 1 ) was chosen for experimentation. As shown in Fig 2 , the baseline operations to complete this component included turning, grinding, and hobbing.

[Fig 2. Baseline operations for Component A: turning, grinding and hobbing. https://doi.org/10.1371/journal.pone.0299498.g002]

2.4. Experimental interventions

For this experiment, highly specialized manufacturing processes refined for the specific component were compared with more basic manufacturing techniques refined at the process level. Consideration was also given to volume and frequency of manufacturing: this component has relatively higher volume and frequency than other components, and so provided the best opportunity for experimentation without creating unnecessary production. Before the experiment, the component was manufactured in multiple operations that were used in many value streams. The scheduling of multiple value streams created WIP between operations. This WIP, as part of the overall cost to manufacture, was addressed in the experiment using both single machine manufacturing and cellular manufacturing.

The production operations were examined for opportunities for standardization that allowed for maintained operational flexibility. The opportunity was identified at a process level where multiple similar components could be grouped by their shared processing methods.

Two different interventions were chosen based on the LSS philosophy of waste reduction:

2.4.1. Intervention 1: Cellular manufacturing.

A work cell was created that included the processes required to manufacture the component(s). As shown in Fig 3 , this work cell allowed for the continuous flow of components without wait time between processing operations. This processing method also allowed functions that would be considered “waste” in LSS to become internal operations. For example, setup time for each machine, part changeovers, and inspections between operations for quality control can now be considered internal to the cellular manufacturing system. Under this intervention, the operators loaded the raw materials for the first operation (the lathe) and then moved them to the remaining operations (grinder and hob) for processing. The cell produced completed components with the 3 separate machines.

[Fig 3. Work cell used in the cellular manufacturing intervention. https://doi.org/10.1371/journal.pone.0299498.g003]

2.4.2. Intervention 2: Single machine processing.

The second intervention involved the use of a single machine to process the component(s) completely. For this, a lathe with live tooling was chosen. This lathe was capable of turning the components, including a turning operation capable of the same surface finish as grinding, and cutting the spline teeth with a single tooth cutter rather than a hob. As shown in Fig 4 , the operator only needed to load the raw material and then unload the completed component(s).

[Fig 4. Single machine processing intervention. https://doi.org/10.1371/journal.pone.0299498.g004]

2.5. Metrics

To effectively assess the impact of the application of the principles of LSS, a set of metrics and specific measurement methods, including the variables ( Table 2 ), were defined.

[Table 2. Variables and measurement methods. https://doi.org/10.1371/journal.pone.0299498.t002]

  • Work In Process (WIP) = I_W × OH_EX
  • On Time Delivery (OTD) = quantity of batches completed on time / total batches
  • Performance (H_C) = T_C / Q_B
  • Effectiveness (E) = (((Q_B − Q_S) × T_P) / T_O) / 100
  • Uptime (H_U) = (T_C × Q_B) / T_J
  • Lead Time (LT) = average hours per batch
  • Parts Per Hour (PPH) = Q_B / T_J
  • Total Cost (C_P) = (T_C + (M_S / Q_B)) × (OW_WC + OH_WC)
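To show how these definitions fit together, here is a small sketch that evaluates several of the metrics exactly as written above; the input values are hypothetical and chosen only to exercise the formulas, and the argument names simply mirror the symbols from Table 2 as rendered here.

    def lss_metrics(T_C, Q_B, Q_S, T_P, T_O, T_J, M_S, OW_WC, OH_WC):
        # Direct transcription of the formulas listed above.
        return {
            "Performance (H_C)": T_C / Q_B,
            "Effectiveness (E)": (((Q_B - Q_S) * T_P) / T_O) / 100,
            "Uptime (H_U)": (T_C * Q_B) / T_J,
            "Parts Per Hour (PPH)": Q_B / T_J,
            "Total Cost (C_P)": (T_C + (M_S / Q_B)) * (OW_WC + OH_WC),
        }

    # Hypothetical batch of 40 parts with 2 scrapped, not data from the study.
    example = lss_metrics(T_C=20.0, Q_B=40, Q_S=2, T_P=18.0, T_O=110.0,
                          T_J=110.0, M_S=4.0, OW_WC=35.0, OH_WC=65.0)
    for name, value in example.items():
        print(f"{name}: {value:.2f}")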

Each of these variables and metrics was measured and calculated for the duration of the experimental interventions. The baseline period and each of the interventions lasted 3 months each, totaling 9 months. Operators were instructed to run the components as normal, with each machine part of the larger mixed value stream, for the baseline period. For the cellular intervention, operators were instructed to use the work cell as a single entity where single piece flow was achieved for the batch of components. For the single machine processing intervention, the operator was instructed to run the entire batch on a single machine.

For both the baseline and the experimental periods, production demand was typical, and all machines were expected to perform normal production operations, including other components, during these periods.

3. Results

This section presents the results of the baseline and two interventions. A summary of the results allows for direct comparison of each of the manufacturing setups along multiple dimensions of LSS performance.

In HMLV environments, traditional parametric statistical tools cannot be defensibly applied due to their inherent assumptions of large sample sizes and stable data distributions [ 16 ]. In HMLV environments, data are sparse and non-stationary (see S2 Appendix for details), rendering these assumptions invalid. All data in this study are therefore presented as values integrated over the duration of these experiments. This provides a more accurate and comprehensive understanding of HMLV performance in practice, including the variability and non-stationarity of the system under test. This approach recognizes the dynamic nature of HMLV environments and provides analytical methods to suit.

3.1. Baseline manufacturing system performance

For the 3 months prior to implementation of any of the interventions, the performance of the baseline manufacturing system was determined by measuring each of the metrics defined above. The baseline included 3 separate work centers (Lathe, Grinder, Hob) operating independently of each other. In Fig 5 , a Value Stream representation of this baseline is presented. This baseline batch manufacturing process involved the three manufacturing processes (Lathe, Grinder, Hob), with staging and setup before each. During the baseline measurement period, Part A was manufactured 58 times (1 complete batch), and Fig 5 presents the summation of those parts' results.

[Fig 5. Value Stream representation of the baseline manufacturing process. https://doi.org/10.1371/journal.pone.0299498.g005]

With the work centers acting as separate entities and as part of a larger system of mixed value streams, WIP is present at each machine waiting to be processed. Batch processing dictates that a batch of components is complete before any single component is considered complete and logged into stock. As shown in Fig 5 , this means that a new job or work packet will be in queue for an average of 67.32 hours before it begins to process in the first operation (Lathe). The critical path for this processing method, for a batch to be completed, was 253.61 hours. The uptime for this method is measured at 14.76% of the total processing time, with wait time in the multiple staging events providing the most significant portion of downtime. The entire set of metrics of performance is summarized in Table 3 .

[Table 3. Measured variables for the baseline and both interventions. https://doi.org/10.1371/journal.pone.0299498.t003]

3.2. Manufacturing system performance under cellular manufacturing intervention

Intervention #1 used the same machines as the baseline, but now configured as a cellular workflow where the queue for work existed in front of the first operation and components then flowed through the work cell to the second and third operations, as represented in the Value Stream Map shown in Fig 6 .

[Fig 6. Value Stream Map of the cellular manufacturing intervention. https://doi.org/10.1371/journal.pone.0299498.g006]

With the work centers acting as a cellular entity, WIP is reduced to a single location ahead of the cell. The time to set up the next operation was also internalized, completed during the previous operation. The reduction in wait time and internalizing of setup time for the grinder and hob operations reduced the total processing time. As shown in Fig 6 , the total wait time in staging was reduced to 67.32 hours and the setup time, although not reduced overall, contributed only 4.75 hours of actual down time for the cell. The critical path for this processing method, for the batch to be completed, was 109.51 hours. The uptime for this method is measured at 34.19% of the total processing time, with wait time and down time reduced. The entire set of metrics of performance is summarized in Table 3 .

3.3. Manufacturing system performance under single machine processing intervention

Intervention #2 used a single machine to manufacture the component in a single setup. Doing so required several changes in processing methods so the component could be machined to the same specifications as with multiple machines. For instance, plunge grinding was not an option inside the lathe without additional machine modification and a reduction in machine and tool life due to the abrasives used; turning to the required surface finish therefore increased the processing time. Hob operations were performed using a single tooth cutter in live tooling.

As shown in Fig 7 , the wait time in staging was more than the wait time in the cellular processing method but less than the total wait time for the baseline processing method. The reduction in total wait time in staging, compared to the baseline, increased the uptime compared to the baseline. However, the processing time for the lathe to perform the 3 required operations was increased. The critical path for this processing method was 210.13 hours to complete the component.

[Fig 7. Value Stream Map of the single machine processing intervention. https://doi.org/10.1371/journal.pone.0299498.g007]

The entire set of metrics of performance are summarized in Table 3 .

3.4. Results summary and synthesis

The measured variables for the baseline and both experimental methods can be seen in Table 3 . As shown, the space required for single machine processing was less than that required for the cellular processing method. When compared to the baseline, the cellular processing method required less space because of the reduction in staging space needed and the ability to overlap operator work envelopes between machines.

Using these variables, the LSS-informed metrics of performance were calculated as presented in Table 4 .

[Table 4. LSS-informed metrics of performance for the baseline and both interventions. https://doi.org/10.1371/journal.pone.0299498.t004]

This experiment provides evidence of the benefits available from both single machine and cellular processing interventions in HMLV manufacturing. Both interventions provided improvements in cost, parts per hour, and WIP compared to the baseline. Cellular processing provided a larger benefit in each category in addition to improved overall lead time and quality. For single machine processing, the added cycle time to perform all of the operations negatively impacted the performance of Intervention 2 by these metrics compared to cellular processing.

During the experimental period, there were also extraneous factors that affect the replicability and applicability of these measurements. The main impact came from global supply chain challenges due to the COVID-19 pandemic, which disrupted material availability and changed lead times, resulting in some work packets being completed well in advance of their due date while others were rushed through once materials were available. The results presented here are asserted to be replicable and applicable; other experiments performed in 2020 are not presented here due to confounding with the COVID-19 pandemic.

4. Discussion

4.1. The application of Lean Six Sigma to high-mix low-volume manufacturing.

If we consider the results of these experiments in the context of conventional LSS philosophies, the baseline manufacturing configuration might be considered wasteful and inefficient. The baseline is measured as having significant added costs due to WIP ($7,624.37, compared with $1,231.33 for cellular and $4,646.81 for the Single WC) between operations. This resulted in a relatively long wait time and a large overall time required to produce the components. Both the Cellular Manufacturing method and the Single Machine method would be considered promising LSS interventions in a LMHV manufacturing environment, relative to the baseline, because of their potential to reduce WIP, reduce setup time (M_S), and thereby reduce waste.

The results of these experiments in a HMLV manufacturing environment instead illustrate that these tradeoffs are more complicated than might be conventionally considered. For the Single Machine Manufacturing method, the component was manufactured with significantly reduced process intervention (an operator wasn't moving components), but in this HMLV application, the results of this experiment show that there is a reduction in product quality (Cpk) that overwhelms the benefits from reduced operator intervention. In this HMLV application, the types of parts that must be manufactured with this Single Machine are so numerous that the multi-step manufacturing process is difficult to control. The complexity of machine setup (M_S), of inter-machining-step quality control, and of labor meant that the Single Machine manufacturing method produced lower quality parts (Q_S) that had to be reworked to meet specifications. The Single Machine processing method also reduced the cost of WIP, but decreased process effectiveness (E) due to quality problems that were the result of the increased complexity of machine setup.

On the other hand, for the Cellular Manufacturing method, the experimental evidence indicates that internalizing non-value-added activities (i.e., "waste") into value-added ones resulted in decreased production time (T_C × Q_S + M_S) and fewer quality errors (Q_S). In this HMLV environment, manufacturing quality was improved because of the frequent human interventions and in-process quality checks. The Cellular Manufacturing method also significantly reduced the cost of WIP because of its increased throughput and reduced wait time.

These findings illustrate that although the philosophies and concepts of LSS are fundamental to improving productivity, the unique demands of the HMLV environment mean that many of the conventional LSS metrics and concepts that have been successfully applied to LMHV manufacturing must be re-validated in application to HMLV manufacturing.

4.2. Implications for the applicability of single machine processing

Single Machine Processing is often presented in the literature as an ideal case in which to realize LMHV manufacturing quality because it allows for higher accuracy between features by removing the need to control interactions between machines [ 17 ]. Instead, as highlighted in Tables 3 and 4, in this HMLV experiment the Single Machine intervention had the lowest process quality level (E = 18.65% compared to 47.68% and 47.74% respectively). The Single Work Center method had measurably lower process control (i.e., lower Cpk) than the other methods.

In the HMLV manufacturing environment, these quality problems were largely the result of the increased complexity of the machine setup and of fewer opportunities to measure and adjust the machine during processing. The resulting quality issues negated the Single Work Center's improvement in parts per hour compared to the other processing methods. These results point to the importance of very strong quality controls for the Single Work Center method. If stronger quality controls during setup and during processing had been realized, the Single Work Center method might have been able to realize an increase in production rate (0.83 PPH, compared to 0.19 for the baseline and 0.36 for the cellular intervention). In this experiment, and in this HMLV application, the highly specialized equipment that would be required to accommodate the high number of different components, together with the high volatility associated with very small volumes, made this approach cost-prohibitive.

We also observed that the skill level required of the operator for the Single Machine method is higher than for the other manufacturing systems studied here. The operator needed to be capable of setting up more than one type of machine operation, and needed to do so in a machine that was more complex than those used in the baseline and the cellular interventions. In the cellular intervention, there was also an increase in the required skill level because each of the operators had to be capable of at least two different machine operations. The requirement of highly skilled labor is a typical constraint in HMLV manufacturing. Although it increases workforce flexibility in terms of labor skillsets, it may reduce flexibility in terms of change management [ 18 ].

4.3. Implications for the applicability of cellular manufacturing

Cellular Manufacturing in a HMLV environment requires the development of groups of processes that can be executed together in a manufacturing cell [ 15 ]. Cellular manufacturing is also more difficult to set up in HMLV environments where the required equipment is not portable and is not reconfigurable at the same rate that the product changes. In HMLV manufacturing, production demand patterns frequently change, leading to processing methods that are poorly compatible with existing work cells.

Instead, in HMLV manufacturing, manufacturing cells must be constructed to serve commonly applied sets of operations, which would apply to a wide variety of products and product families. Using the workflow mapping technique illustrated in Fig 1 , we developed an understanding of commonalities in the product which allow for common processing methods. If all the components were mapped in this environment, stronger trends would be apparent that would potentially allow for additional manufacturing cells to be created to achieve the same successes.

Although the Cellular Manufacturing work cell required more space in the manufacturing facility compared to the single machine processing, there was a significant advantage in the cost of WIP in the work cell (approximately 84% less than the baseline and 74% less than the single machine intervention). This was the result of faster processing with setups internal to cycle times, and single piece flow through the work center. These findings are consistent with the benefits that others have achieved with cellular processing methods [ 19 ].

The Cellular Manufacturing method measurably improved the workflow and reduced quality errors compared to the baseline and the Single Machine method. The setup of the second and third machining operations could be completed internally to the cycle time of the previous operation, which, in addition to one-piece flow for components, reduced the critical path. This method provided an additional benefit by requiring only 1 machine operator to run all 3 machines. Quality also improved (0.06 scrap rate) compared to the baseline (0.07 scrap rate) because of the operator's ability to intervene in all machining operations as necessary to improve and optimize operations in sequence. The single piece flow also decreased the wait time between operations because batches were completed through all operations using single piece flow. WIP existed only at the beginning of the value stream, where it was waiting to be processed in the work cell. This reduction in wait time is consistent with removal of waste as defined philosophically by LSS [ 20 ].

5. Conclusions

HMLV manufacturing is an important component of the US manufacturing sector, but the philosophies and practicalities of applying LSS manufacturing paradigms to HMLV environments are less developed.

Through a set of on-site in-practice experiments with the application of two LSS-inspired interventions (Cellular Manufacturing, and Single Machine Manufacturing) to the baseline production processes at an example US manufacturer, this study has been able to quantify these interventions' costs and benefits in the HMLV environment. The experimental results of this study provide evidence that the cellular processing method resulted in the most benefits to the manufacturing environment. The cellular method resulted in less inventory value in WIP ($1,231.33 compared to $7,624.37 for the baseline and $4,646.81 for the Single WC), stronger On Time Delivery (OTD = 85.71% compared to 33.33% for both the baseline and the Single WC), and the lowest total cost per part (C_P = $64.81 compared to $66.69 for the baseline and $76.00 for the Single WC). Together these results illustrate that the cellular manufacturing method proved to be the most effective in reducing costs and improving flow, while the single machine processing method was ineffective in this HMLV manufacturing environment without further quality control measures. These results and discussion provide insights for HMLV manufacturers looking to optimize their operations through standardization at a process level that allows them to maintain operational flexibility while reducing component costs.

Supporting information

S1 Appendix. Appendix A: DMAIC for experimental approaches.

https://doi.org/10.1371/journal.pone.0299498.s001

S2 Appendix. Appendix B: Non-stationarity of production statistics under HMLV manufacturing.

https://doi.org/10.1371/journal.pone.0299498.s002

S3 Appendix. Appendix C: Variables and dataset.

https://doi.org/10.1371/journal.pone.0299498.s003

S1 Dataset.

https://doi.org/10.1371/journal.pone.0299498.s004

Acknowledgments

This research was supported by a team of engineers and CNC programmers, including Shane Sullivan, Tim Dewitz, Jeff Smith, Garret Yohnk, Carl Krumenauer, Shawn Mykytiuk, Ryan Elliott, Josh Adrian, and Taylor Thompson.

References

2. Lane G. Made-to-Order Lean: Excelling in a High-Mix, Low-Volume Environment. CRC Press; 2020.

3. Managing High-Mix, Low-Volume Assembly. ASSEMBLY [Internet]. [cited 2023 May 1]. https://www.assemblymag.com/articles/83764-managing-high-mix-low-volume-assembly

9. Ooramvley A, Ooramvley K. Standardization in a high mix low volume company [Thesis]. Jönköping University; 2020. https://www.diva-portal.org/smash/record.jsf?pid=diva2%3A1442082&dswid=9417

14. Rheaume J. High-Mix, Low-Volume Lean Manufacturing Implementation and Lot Size Optimization at an Aerospace OEM [Thesis]. MIT; 1995. https://dspace.mit.edu/bitstream/handle/1721.1/82699/53343030-MIT.pdf

