Evaluating Research in Academic Journals: A Practical Guide to Realistic Evaluation

  • Author: Maria Tcherni-Buzzeo, University of New Haven
  • November 2018
  • Edition: 7th edition
  • Publisher: Routledge (Taylor & Francis)
  • ISBN: 978-0815365662


Evaluating Research – Process, Examples and Methods

Evaluating Research

Definition:

Evaluating Research refers to the process of assessing the quality, credibility, and relevance of a research study or project. It involves examining the methods, data, and results of the research to determine its validity, reliability, and usefulness. Evaluating research can be done by both experts and non-experts in the field, and involves critical thinking, analysis, and interpretation of the research findings.

Research Evaluation Process

The process of evaluating research typically involves the following steps:

Identify the Research Question

The first step in evaluating research is to identify the research question or problem that the study is addressing. This will help you to determine whether the study is relevant to your needs.

Assess the Study Design

The study design refers to the methodology used to conduct the research. You should assess whether the study design is appropriate for the research question and whether it is likely to produce reliable and valid results.

Evaluate the Sample

The sample refers to the group of participants or subjects who are included in the study. You should evaluate whether the sample size is adequate and whether the participants are representative of the population under study.

Review the Data Collection Methods

You should review the data collection methods used in the study to ensure that they are valid and reliable. This includes assessing both the measures used to collect the data and the procedures followed in collecting it.

Examine the Statistical Analysis

Statistical analysis refers to the methods used to analyze the data. You should examine whether the statistical analysis is appropriate for the research question and whether it is likely to produce valid and reliable results.

Assess the Conclusions

You should evaluate whether the data support the conclusions drawn from the study and whether they are relevant to the research question.

Consider the Limitations

Finally, you should consider the limitations of the study, including any potential biases or confounding factors that may have influenced the results.

Evaluating Research Methods

Common methods for evaluating research include the following:

  • Peer review: Peer review is a process where experts in the field review a study before it is published. This helps ensure that the study is accurate, valid, and relevant to the field.
  • Critical appraisal: Critical appraisal involves systematically evaluating a study based on specific criteria. This helps assess the quality of the study and the reliability of the findings.
  • Replication: Replication involves repeating a study to test the validity and reliability of the findings. This can help identify any errors or biases in the original study.
  • Meta-analysis: Meta-analysis is a statistical method that combines the results of multiple studies to provide a more comprehensive understanding of a particular topic. This can help identify patterns or inconsistencies across studies.
  • Consultation with experts: Consulting with experts in the field can provide valuable insights into the quality and relevance of a study. Experts can also help identify potential limitations or biases in the study.
  • Review of funding sources: Examining the funding sources of a study can help identify any potential conflicts of interest or biases that may have influenced the study design or interpretation of results.

Example of Evaluating Research

Below is a sample research evaluation for students:

Title of the Study: The Effects of Social Media Use on Mental Health among College Students

Sample Size: 500 college students

Sampling Technique: Convenience sampling

  • Sample Size: The sample size of 500 college students is a moderate sample size, which could be considered representative of the college student population. However, it would be more representative if the sample size were larger or if a random sampling technique had been used.
  • Sampling Technique: Convenience sampling is a non-probability sampling technique, which means that the sample may not be representative of the population. This technique may introduce bias into the study, since the participants are self-selected and may not be representative of the entire college student population. Therefore, the results of this study may not be generalizable to other populations.
  • Participant Characteristics: The study does not provide any information about the demographic characteristics of the participants, such as age, gender, race, or socioeconomic status. This information is important because social media use and mental health may vary among different demographic groups.
  • Data Collection Method: The study used a self-administered survey to collect data. Self-administered surveys may be subject to response bias and may not accurately reflect participants’ actual behaviors and experiences.
  • Data Analysis: The study used descriptive statistics and regression analysis to analyze the data. Descriptive statistics provide a summary of the data, while regression analysis is used to examine the relationship between two or more variables. However, the study did not provide information about the statistical significance of the results or the effect sizes.

Overall, while the study provides some insights into the relationship between social media use and mental health among college students, the use of a convenience sampling technique and the lack of information about participant characteristics limit the generalizability of the findings. In addition, the use of self-administered surveys may introduce bias into the study, and the lack of information about the statistical significance of the results limits the interpretation of the findings.
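
To make the last two criticisms concrete, here is a minimal sketch in Python, using invented data and hypothetical variable names, of the statistics a reader would expect such a study to report: the regression slope with its p-value and r-squared, plus a standardized effect size such as Cohen's d. It illustrates the missing reporting only; it is not an analysis of the study's actual data.

```python
# Illustrative sketch only: invented data standing in for the hypothetical study.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
hours_social_media = rng.normal(3.0, 1.5, 500).clip(min=0)             # hypothetical predictor
anxiety_score = 40 + 2.0 * hours_social_media + rng.normal(0, 8, 500)  # hypothetical outcome

# Regression analysis: report the slope, its p-value, and r-squared (a variance-explained effect size).
reg = stats.linregress(hours_social_media, anxiety_score)
print(f"slope = {reg.slope:.2f}, p = {reg.pvalue:.4g}, r^2 = {reg.rvalue ** 2:.3f}")

# Standardized effect size (Cohen's d) comparing heavy and light users.
median_use = np.median(hours_social_media)
heavy = anxiety_score[hours_social_media >= median_use]
light = anxiety_score[hours_social_media < median_use]
pooled_sd = np.sqrt((heavy.var(ddof=1) + light.var(ddof=1)) / 2)
print(f"Cohen's d = {(heavy.mean() - light.mean()) / pooled_sd:.2f}")
```

Reporting numbers like these lets a reader judge both whether an association is statistically significant and whether it is large enough to matter.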

Note: The example above is only a sample for students. Do not copy it directly into your assignment; conduct your own research and evaluation for academic work.

Applications of Evaluating Research

Here are some of the applications of evaluating research:

  • Identifying reliable sources: By evaluating research, researchers, students, and other professionals can identify the most reliable sources of information to use in their work. They can determine the quality of research studies, including the methodology, sample size, data analysis, and conclusions.
  • Validating findings: Evaluating research can help to validate findings from previous studies. By examining the methodology and results of a study, researchers can determine if the findings are reliable and if they can be used to inform future research.
  • Identifying knowledge gaps: Evaluating research can also help to identify gaps in current knowledge. By examining the existing literature on a topic, researchers can determine areas where more research is needed, and they can design studies to address these gaps.
  • Improving research quality: Evaluating research can help to improve the quality of future research. By examining the strengths and weaknesses of previous studies, researchers can design better studies and avoid common pitfalls.
  • Informing policy and decision-making: Evaluating research is crucial in informing policy and decision-making in many fields. By examining the evidence base for a particular issue, policymakers can make informed decisions that are supported by the best available evidence.
  • Enhancing education: Evaluating research is essential in enhancing education. Educators can use research findings to improve teaching methods, curriculum development, and student outcomes.

Purpose of Evaluating Research

Here are some of the key purposes of evaluating research:

  • Determine the reliability and validity of research findings: By evaluating research, researchers can determine the quality of the study design, data collection, and analysis. They can determine whether the findings are reliable, valid, and generalizable to other populations.
  • Identify the strengths and weaknesses of research studies: Evaluating research helps to identify the strengths and weaknesses of research studies, including potential biases, confounding factors, and limitations. This information can help researchers to design better studies in the future.
  • Inform evidence-based decision-making: Evaluating research is crucial in informing evidence-based decision-making in many fields, including healthcare, education, and public policy. Policymakers, educators, and clinicians rely on research evidence to make informed decisions.
  • Identify research gaps: By evaluating research, researchers can identify gaps in the existing literature and design studies to address these gaps. This process can help to advance knowledge and improve the quality of research in a particular field.
  • Ensure research ethics and integrity: Evaluating research helps to ensure that research studies are conducted ethically and with integrity. Researchers must adhere to ethical guidelines to protect the welfare and rights of study participants and to maintain the trust of the public.

Characteristics to Evaluate in Research

When evaluating research, consider the following characteristics:

  • Research question/hypothesis: A good research question or hypothesis should be clear, concise, and well-defined. It should address a significant problem or issue in the field and be grounded in relevant theory or prior research.
  • Study design: The research design should be appropriate for answering the research question and be clearly described in the study. The study design should also minimize bias and confounding variables.
  • Sampling: The sample should be representative of the population of interest and the sampling method should be appropriate for the research question and study design.
  • Data collection: The data collection methods should be reliable and valid, and the data should be accurately recorded and analyzed.
  • Results: The results should be presented clearly and accurately, and the statistical analysis should be appropriate for the research question and study design.
  • Interpretation of results: The interpretation of the results should be based on the data and not influenced by personal biases or preconceptions.
  • Generalizability: The study findings should be generalizable to the population of interest and relevant to other settings or contexts.
  • Contribution to the field: The study should make a significant contribution to the field and advance our understanding of the research question or issue.

Advantages of Evaluating Research

Evaluating research has several advantages, including:

  • Ensuring accuracy and validity: By evaluating research, we can ensure that the research is accurate, valid, and reliable. This ensures that the findings are trustworthy and can be used to inform decision-making.
  • Identifying gaps in knowledge: Evaluating research can help identify gaps in knowledge and areas where further research is needed. This can guide future research and help build a stronger evidence base.
  • Promoting critical thinking: Evaluating research requires critical thinking skills, which can be applied in other areas of life. By evaluating research, individuals can develop their critical thinking skills and become more discerning consumers of information.
  • Improving the quality of research: Evaluating research can help improve the quality of research by identifying areas where improvements can be made. This can lead to more rigorous research methods and better-quality research.
  • Informing decision-making: By evaluating research, we can make informed decisions based on the evidence. This is particularly important in fields such as medicine and public health, where decisions can have significant consequences.
  • Advancing the field: Evaluating research can help advance the field by identifying new research questions and areas of inquiry. This can lead to the development of new theories and the refinement of existing ones.

Limitations of Evaluating Research

Limitations of Evaluating Research are as follows:

  • Time-consuming: Evaluating research can be time-consuming, particularly if the study is complex or requires specialized knowledge. This can be a barrier for individuals who are not experts in the field or who have limited time.
  • Subjectivity: Evaluating research can be subjective, as different individuals may have different interpretations of the same study. This can lead to inconsistencies in the evaluation process and make it difficult to compare studies.
  • Limited generalizability: The findings of a study may not be generalizable to other populations or contexts. This limits the usefulness of the study and may make it difficult to apply the findings to other settings.
  • Publication bias: Research that does not find significant results may be less likely to be published, which can create a bias in the published literature. This can limit the amount of information available for evaluation.
  • Lack of transparency: Some studies may not provide enough detail about their methods or results, making it difficult to evaluate their quality or validity.
  • Funding bias: Research funded by particular organizations or industries may be biased towards the interests of the funder. This can influence the study design, methods, and interpretation of results.

About the author


Muhammad Hassan

Researcher, Academic Writer, Web developer



Evaluating Research Articles



Imagine for a moment that you are trying to answer a clinical (PICO) question regarding one of your patients/clients. Do you know how to determine if a research study is of high quality? Can you tell if it is applicable to your question? In evidence based practice, there are many things to look for in an article that will reveal its quality and relevance. This guide is a collection of resources and activities that will help you learn how to evaluate articles efficiently and accurately.

Is health research new to you? Or perhaps you're a little out of practice with reading it? The following questions will help illuminate an article's strengths or shortcomings. Ask them of yourself as you are reading an article:

  • Is the article peer reviewed?
  • Are there any conflicts of interest based on the author's affiliation or the funding source of the research?
  • Are the research questions or objectives clearly defined?
  • Is the study a systematic review or meta-analysis?
  • Is the study design appropriate for the research question?
  • Is the sample size justified (for example, with a power analysis; a short sketch appears after this checklist)? Do the authors explain how it is representative of the wider population?
  • Do the researchers describe the setting of data collection?
  • Does the paper clearly describe the measurements used?
  • Did the researchers use appropriate statistical measures?
  • Are the research questions or objectives answered?
  • Did the researchers account for confounding factors?
  • Have the researchers only drawn conclusions about the groups represented in the research?
  • Have the authors declared any conflicts of interest?

If the answers to these questions about an article you are reading are mostly YES, then it's likely that the article is of decent quality. If the answers are mostly NO, then it may be a good idea to move on to another article. If the YESes and NOs are roughly even, you'll have to decide for yourself whether the article is of good enough quality for you. Some factors, like a poor literature review, are not as important as the researchers neglecting to describe the measurements they used. As you read more research, you'll find it easier to distinguish studies that are well done from those that are not.
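
On the sample-size question in the checklist above: one common justification is an a priori power analysis. The following is a minimal sketch in Python using statsmodels, with assumed values for effect size, significance level, and power; it illustrates the kind of calculation a reader might expect a paper to report, not a prescription.

```python
# Minimal power-analysis sketch (assumed values, for illustration only):
# how many participants per group are needed to detect a medium-sized effect?
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,  # assumed medium effect (Cohen's d)
    alpha=0.05,       # significance level
    power=0.8,        # desired statistical power
)
print(f"Required participants per group: {n_per_group:.0f}")  # roughly 64
```

If a paper reports a sample size without any such justification, that alone is not fatal, but it is worth noting when you weigh the evidence.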


Determining if a research study has used appropriate statistical measures is one of the most critical and difficult steps in evaluating an article. The following links are great, quick resources for helping to better understand how to use statistics in health research.


  • How to read a paper: Statistics for the non-statistician. II: “Significant” relations and their pitfalls. This article continues the checklist of questions that will help you to appraise the statistical validity of a paper. Citation: Greenhalgh T. How to read a paper: Statistics for the non-statistician. II: “Significant” relations and their pitfalls. BMJ 1997;315:422. (On the PMC PDF, you need to scroll past the first article to get to this one.)
  • A consumer's guide to subgroup analysis. The extent to which a clinician should believe and act on the results of subgroup analyses of data from randomized trials or meta-analyses is controversial. Guidelines are provided in this paper for making these decisions.

Statistical Versus Clinical Significance

When appraising studies, it's important to consider both the clinical and statistical significance of the research. This video offers a quick explanation of why.

If you have a little more time, this video explores statistical and clinical significance in more detail, including examples of how to calculate an effect size.
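
As a rough illustration of the distinction, the sketch below (Python, with invented data and assumed group means) shows how a very large sample can make a clinically trivial difference "statistically significant"; the effect size is what reveals how small the difference really is.

```python
# Invented data: a tiny difference between groups, measured in a very large sample.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
control = rng.normal(loc=120.0, scale=15.0, size=20_000)   # e.g. systolic blood pressure
treated = rng.normal(loc=119.5, scale=15.0, size=20_000)   # assumed 0.5-unit lower mean

t_stat, p_value = stats.ttest_ind(control, treated)
pooled_sd = np.sqrt((control.var(ddof=1) + treated.var(ddof=1)) / 2)
cohens_d = (control.mean() - treated.mean()) / pooled_sd

print(f"p = {p_value:.4f}")           # likely well below 0.05: "statistically significant"
print(f"Cohen's d = {cohens_d:.2f}")  # around 0.03: clinically negligible
```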

  • Statistical vs. Clinical Significance Transcript: transcript document for the Statistical vs. Clinical Significance video.
  • Effect Size Transcript: transcript document for the Effect Size video.
  • P Values, Statistical Significance & Clinical Significance: this handout also explains clinical and statistical significance.
  • Absolute versus relative risk – making sense of media stories: understanding the difference between relative and absolute risk is essential to understanding statistical tests commonly found in research articles. (A short worked example follows this list.)
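
To illustrate the last item above, here is a short worked example in Python with made-up event rates; the numbers are assumptions chosen only to show how a relative risk can sound impressive while the absolute difference stays small.

```python
# Absolute vs. relative risk with invented event rates (illustration only).
risk_control = 0.04   # assumed: 4% of untreated patients have the event
risk_treated = 0.02   # assumed: 2% of treated patients have the event

relative_risk = risk_treated / risk_control              # 0.50 -> "halves the risk"
absolute_risk_reduction = risk_control - risk_treated    # 0.02 -> 2 percentage points
number_needed_to_treat = 1 / absolute_risk_reduction     # 50 people treated to prevent one event

print(f"Relative risk: {relative_risk:.2f}")
print(f"Absolute risk reduction: {absolute_risk_reduction:.1%}")
print(f"Number needed to treat: {number_needed_to_treat:.0f}")
```

A headline claiming the risk was "cut in half" and one reporting "a 2 percentage-point reduction" can describe exactly the same result.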

Critical appraisal is the process of systematically evaluating research using established and transparent methods. In critical appraisal, health professionals use validated checklists/worksheets as tools to guide their assessment of the research. It is a more advanced way of evaluating research than the more basic method explained above. To learn more about critical appraisal or to access critical appraisal tools, visit the websites below.


  • URL: https://libguides.massgeneral.org/evaluatingarticles


Research Evaluation

Carlo Ghezzi, Politecnico di Milano, Milano, Italy

First Online: 23 June 2020

This chapter is about research evaluation. Evaluation is quintessential to research. It is traditionally performed through qualitative expert judgement. The chapter presents the main evaluation activities in which researchers can be engaged. It also introduces the current efforts towards devising quantitative research evaluation based on bibliometric indicators and critically discusses their limitations, along with their possible (limited and careful) use.





About this chapter

Ghezzi, C. (2020). Research Evaluation. In: Being a Researcher. Springer, Cham. https://doi.org/10.1007/978-3-030-45157-8_5 (published 23 June 2020; print ISBN 978-3-030-45156-1, online ISBN 978-3-030-45157-8).



Succeeding in postgraduate study


1 Important points to consider when critically evaluating published research papers

Simple review articles (also referred to as ‘narrative’ or ‘selective’ reviews), systematic reviews and meta-analyses provide rapid overviews and ‘snapshots’ of progress made within a field, summarising a given topic or research area. They can serve as useful guides, or as current and comprehensive ‘sources’ of information, and can act as a point of reference to relevant primary research studies within a given scientific area. Narrative or systematic reviews are often used as a first step towards a more detailed investigation of a topic or a specific enquiry (a hypothesis or research question), or to establish critical awareness of a rapidly-moving field (you will be required to demonstrate this as part of an assignment, an essay or a dissertation at postgraduate level).

The majority of primary ‘empirical’ research papers essentially follow the same structure (abbreviated here as IMRAD). There is a section on Introduction, followed by the Methods, then the Results, which includes figures and tables showing data described in the paper, and a Discussion. The paper typically ends with a Conclusion, and References and Acknowledgements sections.

The Title of the paper provides a concise first impression. The Abstract follows the basic structure of the extended article. It provides an ‘accessible’ and concise summary of the aims, methods, results and conclusions. The Introduction provides useful background information and context, and typically outlines the aims and objectives of the study. The Abstract can serve as a useful summary of the paper, presenting the purpose, scope and major findings. However, simply reading the abstract alone is not a substitute for critically reading the whole article. To really get a good understanding and to be able to critically evaluate a research study, it is necessary to read on.

While most research papers follow the above format, variations do exist. For example, the results and discussion sections may be combined. In some journals the materials and methods may follow the discussion, and in two of the most widely read journals, Science and Nature, the format does vary from the above due to restrictions on the length of articles. In addition, there may be supporting documents that accompany a paper, including supplementary materials such as supporting data, tables, figures, videos and so on. There may also be commentaries or editorials associated with a topical research paper, which provide an overview or critique of the study being presented.

Box 1 Key questions to ask when appraising a research paper

  • Is the study’s research question relevant?
  • Does the study add anything new to current knowledge and understanding?
  • Does the study test a stated hypothesis?
  • Is the design of the study appropriate to the research question?
  • Do the study methods address key potential sources of bias?
  • Were suitable ‘controls’ included in the study?
  • Were the statistical analyses appropriate and applied correctly?
  • Is there a clear statement of findings?
  • Does the data support the authors’ conclusions?
  • Are there any conflicts of interest or ethical concerns?

There are various strategies used in reading a scientific research paper, and one of these is to start with the title and the abstract, then look at the figures and tables, and move on to the introduction, before turning to the results and discussion, and finally, interrogating the methods.

Another strategy (outlined below) is to begin with the abstract and then the discussion, take a look at the methods, and then the results section (including any relevant tables and figures), before moving on to look more closely at the discussion and, finally, the conclusion. You should choose a strategy that works best for you. However, asking the ‘right’ questions is a central feature of critical appraisal, as with any enquiry, so where should you begin? Here are some critical questions to consider when evaluating a research paper.

Look at the Abstract and then the Discussion : Are these accessible and of general relevance or are they detailed, with far-reaching conclusions? Is it clear why the study was undertaken? Why are the conclusions important? Does the study add anything new to current knowledge and understanding? The reasons why a particular study design or statistical method were chosen should also be clear from reading a research paper. What is the research question being asked? Does the study test a stated hypothesis? Is the design of the study appropriate to the research question? Have the authors considered the limitations of their study and have they discussed these in context?

Take a look at the Methods : Were there any practical difficulties that could have compromised the study or its implementation? Were these considered in the protocol? Were there any missing values and, if so, was the number of missing values too large to permit meaningful analysis? Was the number of samples (cases or participants) too small to establish meaningful significance? Do the study methods address key potential sources of bias? Were suitable ‘controls’ included in the study? If controls are missing or not appropriate to the study design, we cannot be confident that the results really show what is happening in an experiment. Were the statistical analyses appropriate and applied correctly? Do the authors point out the limitations of methods or tests used? Were the methods referenced and described in sufficient detail for others to repeat or extend the study?

Take a look at the Results section and relevant tables and figures : Is there a clear statement of findings? Were the results expected? Do they make sense? What data supports them? Do the tables and figures clearly describe the data (highlighting trends etc.)? Try to distinguish between what the data show and what the authors say they show (i.e. their interpretation).

Moving on to look in greater depth at the Discussion and Conclusion : Are the results discussed in relation to similar (previous) studies? Do the authors indulge in excessive speculation? Are limitations of the study adequately addressed? Were the objectives of the study met and the hypothesis supported or refuted (and is a clear explanation provided)? Does the data support the authors’ conclusions? Maybe there is only one experiment to support a point. More often, several different experiments or approaches combine to support a particular conclusion. A rule of thumb here is that if multiple approaches and multiple lines of evidence from different directions are presented, and all point to the same conclusion, then the conclusions are more credible. But do question all assumptions. Identify any implicit or hidden assumptions that the authors may have used when interpreting their data. Be wary of data that is mixed up with interpretation and speculation! Remember, just because it is published, does not mean that it is right.

Other points you should consider when evaluating a research paper : Are there any financial, ethical or other conflicts of interest associated with the study, its authors and sponsors? Are there ethical concerns with the study itself? Looking at the references, consider whether the authors have preferentially cited their own previous publications (i.e. needlessly), and whether the list of references is recent (ensuring that the analysis is up to date). Finally, from a practical perspective, you should move beyond the text of a research paper: talk to your peers about it, and consult available commentaries, online links to references and other external sources to help clarify any aspects you don’t understand.

The above can be taken as a general guide to help you begin to critically evaluate a scientific research paper, but only in the broadest sense. Do bear in mind that the way that research evidence is critiqued will also differ slightly according to the type of study being appraised, whether observational or experimental, and each study will have additional aspects that would need to be evaluated separately. For criteria recommended for the evaluation of qualitative research papers, see the article by Mildred Blaxter (1996), available online. Details are in the References.

Activity 1 Critical appraisal of a scientific research paper

A critical appraisal checklist, which you can download via the link below, can act as a useful tool to help you to interrogate research papers. The checklist is divided into four sections, broadly covering:

  • some general aspects
  • research design and methodology
  • the results
  • discussion, conclusion and references.

Science perspective – critical appraisal checklist

  • Identify and obtain a research article based on a topic of your own choosing, using a search engine such as Google Scholar or PubMed (for example).
  • The selection criteria for your target paper are as follows: the article must be an open access primary research paper (not a review) containing empirical data, published in the last 2–3 years, and preferably no more than 5–6 pages in length.
  • Critically evaluate the research paper using the checklist provided, making notes on the key points and your overall impression.

Critical appraisal checklists are useful tools to help assess the quality of a study. Assessment of various factors, including the importance of the research question, the design and methodology of a study, the validity of the results and their usefulness (application or relevance), the legitimacy of the conclusions, and any potential conflicts of interest, are an important part of the critical appraisal process. Limitations and further improvements can then be considered.


12.7 Evaluation: Effectiveness of Research Paper

Learning Outcomes

By the end of this section, you will be able to:

  • Identify common formats and design features for different kinds of texts.
  • Implement style and language consistent with argumentative research writing while maintaining your own voice.
  • Determine how genre conventions for structure, paragraphing, tone, and mechanics vary.

When drafting, you follow your strongest research interests and try to answer the question on which you have settled. However, sometimes what began as a paper about one thing becomes a paper about something else. Your peer review partner will have helped you identify any such issues and given you some insight regarding revision. Another strategy is to compare and contrast your draft with a grading rubric similar to the one your instructor will use. It is a good idea to consult this rubric frequently throughout the drafting process.

The rubric below describes five levels of performance, from strongest to weakest, across three areas: Critical Language Awareness, Clarity and Coherence, and Rhetorical Choices.

The text always adheres to the “Editing Focus” of this chapter: integrating sources and quotations appropriately as discussed in Section 12.6. The text also shows ample evidence of the writer’s intent to consciously meet or challenge conventional expectations in rhetorically effective ways. The writer’s position or claim on a debatable issue is stated clearly in the thesis and expertly supported with credible researched evidence. Ideas are clearly presented in well-developed paragraphs with clear topic sentences and relate directly to the thesis. Headings and subheadings clarify organization, and appropriate transitions link ideas. The writer maintains an objective voice in a paper that reflects an admirable balance of source information, analysis, synthesis, and original thought. Quotations function appropriately as support and are thoughtfully edited to reveal their main points. The writer fully addresses counterclaims and is consistently aware of the audience in terms of language use and background information presented.

The text usually adheres to the “Editing Focus” of this chapter: integrating sources and quotations appropriately as discussed in Section 12.6. The text also shows some evidence of the writer’s intent to consciously meet or challenge conventional expectations in rhetorically effective ways. The writer’s position or claim on a debatable issue is stated clearly in the thesis and supported with credible researched evidence. Ideas are clearly presented in well-developed paragraphs with topic sentences and usually relate directly to the thesis. Some headings and subheadings clarify organization, and sufficient transitions link ideas. The writer maintains an objective voice in a paper that reflects a balance of source information, analysis, synthesis, and original thought. Quotations usually function as support, and most are edited to reveal their main points. The writer usually addresses counterclaims and is aware of the audience in terms of language use and background information presented.

The text generally adheres to the “Editing Focus” of this chapter: integrating sources and quotations appropriately as discussed in Section 12.6. The text also shows limited evidence of the writer’s intent to consciously meet or challenge conventional expectations in rhetorically effective ways. The writer’s position or claim on a debatable issue is stated in the thesis and generally supported with some credible researched evidence. Ideas are presented in moderately developed paragraphs. Most, if not all, have topic sentences and relate to the thesis. Some headings and subheadings may clarify organization, but their use may be inconsistent, inappropriate, or insufficient. More transitions would improve coherence. The writer generally maintains an objective voice in a paper that reflects some balance of source information, analysis, synthesis, and original thought, although imbalance may well be present. Quotations generally function as support, but some are not edited to reveal their main points. The writer may attempt to address counterclaims but may be inconsistent in awareness of the audience in terms of language use and background information presented.

The text occasionally adheres to the “Editing Focus” of this chapter: integrating sources and quotations appropriately as discussed in Section 12.6. The text also shows emerging evidence of the writer’s intent to consciously meet or challenge conventional expectations in rhetorically effective ways. The writer’s position or claim on a debatable issue is not clearly stated in the thesis, nor is it sufficiently supported with credible researched evidence. Some ideas are presented in paragraphs, but they are unrelated to the thesis. Some headings and subheadings may clarify organization, while others may not; transitions are either inappropriate or insufficient to link ideas. The writer sometimes maintains an objective voice in a paper that lacks a balance of source information, analysis, synthesis, and original thought. Quotations usually do not function as support, often replacing the writer’s ideas or are not edited to reveal their main points. Counterclaims are addressed haphazardly or ignored. The writer shows inconsistency in awareness of the audience in terms of language use and background information presented.

The text does not adhere to the “Editing Focus” of this chapter: integrating sources and quotations appropriately as discussed in Section 12.6. The text also shows little to no evidence of the writer’s intent to consciously meet or challenge conventional expectations in rhetorically effective ways. The writer’s position or claim on a debatable issue is neither clearly stated in the thesis nor sufficiently supported with credible researched evidence. Some ideas are presented in paragraphs. Few, if any, have topic sentences, and they barely relate to the thesis. Headings and subheadings are either missing or unhelpful as organizational tools. Transitions generally are missing or inappropriate. The writer does not maintain an objective voice in a paper that reflects little to no balance of source information, analysis, synthesis, and original thought. Quotations may function as support, but most are not edited to reveal their main points. The writer may attempt to address counterclaims and may be inconsistent in awareness of the audience in terms of language use and background information presented.


Want to cite, share, or modify this book? This book uses the Creative Commons Attribution License and you must attribute OpenStax.

Access for free at https://openstax.org/books/writing-guide/pages/1-unit-introduction
  • Authors: Michelle Bachelor Robinson, Maria Jerskey, featuring Toby Fulwiler
  • Publisher/website: OpenStax
  • Book title: Writing Guide with Handbook
  • Publication date: Dec 21, 2021
  • Location: Houston, Texas
  • Book URL: https://openstax.org/books/writing-guide/pages/1-unit-introduction
  • Section URL: https://openstax.org/books/writing-guide/pages/12-7-evaluation-effectiveness-of-research-paper

© Dec 19, 2023 OpenStax. Textbook content produced by OpenStax is licensed under a Creative Commons Attribution License . The OpenStax name, OpenStax logo, OpenStax book covers, OpenStax CNX name, and OpenStax CNX logo are not subject to the Creative Commons license and may not be reproduced without the prior and express written consent of Rice University.


Research Paper: A step-by-step guide: 7. Evaluating Sources



Evaluation Criteria

It's very important to evaluate the materials you find to make sure they are appropriate for a research paper.  It's not enough that the information is relevant; it must also be credible.  You will want to find more than enough resources, so that you can pick and choose the best for your paper.   Here are some helpful criteria you can apply to the information you find:

Currency:

  • When was the information published?
  • Is the source out-of-date for the topic? 
  • Are there new discoveries or important events since the publication date?

Relevancy:

  • How is the information related to your argument? 
  • Is the information too advanced or too simple? 
  • Is the audience focus appropriate for a research paper? 
  • Are there better sources elsewhere?

Authority:

  • Who is the author? 
  • What is the author's credential in the related field? 
  • Is the publisher well-known in the field? 
  • Did the information go through the peer-review process or some kind of fact-checking?

Accuracy:

  • Can the information be verified? 
  • Are sources cited? 
  • Is the information factual or opinion based?
  • Is the information biased? 
  • Is the information free of grammatical or spelling errors?
  • What is the motive of providing the information: to inform? to sell? to persuade? to entertain?
  • Does the author or publisher make their intentions clear? Who is the intended audience?

Evaluating Web Sources

Most web pages are not fact-checked, so it's especially important to evaluate information you find on the web. Many articles on websites are fine sources of information, but many others are distorted or fabricated. Check out our media evaluation guide for tips on evaluating what you see on social media, news sites, blogs, and so on.

This three-part video series, in which university students, historians, and pro fact-checkers go head-to-head in checking out online information, is also helpful.

  • URL: https://butte.libguides.com/ResearchPaper


RCE 672: Research and Program Evaluation: APA Sample Paper


APA Sample Paper from the Purdue OWL

  • The Purdue OWL has an APA Sample Paper available on its website.
  • URL: https://libguides.thomasu.edu/RCE672

Writing a Research Paper: Evaluate Sources

How Will This Help Me?

Evaluating your sources will help you:

  • Determine the credibility of information
  • Rule out questionable information
  • Check for bias in your sources

In general, a website is hosted in a domain that tells you what type of site it is.

  • .com = commercial
  • .net = network provider
  • .org = organization
  • .edu = education
  • .mil = military
  • .gov = U.S. government

Commercial sites want to persuade you to buy something, and organizations may want to persuade you to see an issue from a particular viewpoint. 

Useful information can be found on all kinds of sites, but you must consider carefully whether the source is useful for your purpose and for your audience.

Content Farms

Content farms are websites that exist to host ads. They post about popular web searches to try to drive traffic to their sites. They are rarely good sources for research.

  • Web’s “Content Farms” Grow Audiences For Ads: this article by Zoe Chace at National Public Radio describes how "how-to" sites try to drive more traffic to their pages so that visitors see the ads they host.

Fact Checking

Fact checking can help you verify the reliability of a source. The following sites may not have all the answers, but they can help you look into the sources for statements made in U.S. politics.

  • FactCheck.org This site monitors the accuracy of statements made in speeches, debates, interviews, and more and links to sources so readers can see the information for themselves. The site is a project of the Annenberg Public Policy Center of the University of Pennsylvania.
  • PolitiFact This resource evaluates the accuracy of statements made by elected officials, lobbyists, and special interest groups and provides sources for their evaluations. PolitiFact is currently run by the nonprofit Poynter Institute for Media Studies.

Evaluate Sources With the Big 5 Criteria

The Big 5 Criteria can help you evaluate your sources for credibility:

  • Currency: Check the publication date and determine whether it is sufficiently current for your topic.
  • Coverage (relevance): Consider whether the source is relevant to your research and whether it covers the topic adequately for your needs.
  • Authority: Discover the credentials of the authors of the source and determine their level of expertise and knowledge about the subject.
  • Accuracy: Consider whether the source presents accurate information and whether you can verify that information. 
  • Objectivity (purpose): Think about the author's purpose in creating the source and consider how that affects its usefulness to your research. 

Evaluate Sources With the CRAAP Test

Another way to evaluate your sources is the CRAAP Test, which means evaluating the following qualities of your sources: Currency, Relevance, Authority, Accuracy, and Purpose.

This video (2:17) from Western Libraries explains the CRAAP Test. 

Video transcript

Evaluating Sources (Western Libraries) CC BY-NC-ND 3.0

Evaluate Websites

Evaluating websites follows the same process as for other sources, but finding the information you need to make an assessment can be more challenging with websites. The following guidelines can help you decide if a website is a good choice for a source for your paper. 

  • Currency . A useful site is updated regularly and lets visitors know when content was published on the site. Can you tell when the site was last updated? Can you see when the content you need was added? Does the site show signs of not being maintained (broken links, out-of-date information, etc.)?
  • Relevance . Think about the target audience for the site. Is it appropriate for you or your paper's audience?
  • Authority . Look for an About Us link or something similar to learn about the site's creator. The more you know about the credentials and mission of a site's creators, as well as their sources of information, the better idea you will have about the site's quality. 
  • Accuracy. Does the site present references or links to the sources of information it presents? Can you locate these sources so that you can read and interpret the information yourself?
  • Purpose. Consider the reason why the site was created. Can you detect any bias? Does the site use emotional language? Is the site trying to persuade you about something? 

Identify Political Perspective

News outlets, think tanks, organizations, and individual authors can present information from a particular political perspective. Consider this fact to help determine whether sources are useful for your paper. 


Check a news outlet's website, usually under About Us or Contact Us, for information about their reporters and authors. For example, USA Today has the USA Today Reporter Index, and the LA Times has an Editorial & Newsroom Contacts page. Reading a profile or bio for a reporter or looking at other articles by the author may tell you whether that person favors a particular viewpoint.

If a particular organization is mentioned in an article, learn more about the organization to identify potential biases. Think tanks and other associations usually exist for a reason. Searching news articles about the organization can help you determine their political leaning. 

Bias is not always bad, but you must be aware of it. Knowing the perspective of a source helps contextualize the information presented. 

  • URL: https://guides.lib.k-state.edu/writingresearchpaper


How to Evaluate a Research Paper

This article was co-authored by Matthew Snipp, PhD, the Burnet C. and Mildred Finley Wohlford Professor of Humanities and Sciences in the Department of Sociology at Stanford University and Director of the Institute for Research in the Social Sciences' Secure Data Center. There are 14 references cited in this article, which can be found at the bottom of the page.

While writing a research paper can be tough, it can be even tougher to evaluate the strength of one. Whether you’re giving feedback to a fellow student or learning how to grade, you can judge a paper’s content and formatting to determine its quality. By looking at the key attributes of successful research papers in the humanities and sciences, you can evaluate a given work against high standards.

Examining a Humanities Research Paper

Step 1: Look for the thesis statement on page 1 of the paper.

  • A strong thesis statement might be: “Single-sex education helps girls develop more self-confidence than co-education. Using scholarly research, I will illustrate how young girls develop greater self-confidence in single-sex schools, why their self-confidence becomes more developed in this setting, and what this means for the future of education in America.”
  • A poor thesis statement might be: “Single-sex education is a schooling choice that separates the sexes.” This is a description of single-sex education rather than an opinion that can be supported with research.

Step 2: Judge if the thesis is debatable.

  • In the example outlined above, one could theoretically create a valid argument for single-sex education negatively impacting the self-esteem of girls. This depth makes the topic worthy of examination.

Step 3: Assess whether the thesis is original.

  • Ask yourself if the thesis feels obvious. If it does, it is probably not a strong choice.

Step 4: Find at least 3 points supporting the thesis statement.

  • The paragraphs should each start with a topic sentence so you feel introduced to the research at hand.
  • A typical 5-paragraph essay will have 3 supporting points of 1 paragraph each. Most good research papers are longer than 5 paragraphs, though, and may have multiple paragraphs about just one point of many that support the thesis.
  • The more points and correlating research that support the thesis statement, the better.

Step 5: Identify research quotations that reinforce the points.

  • A paper with little supporting research to bolster its points likely falls short of adequately illustrating its thesis.
  • Quotations longer than 4 lines should be set off in block format for readability.

Step 6: Identify context and analysis for each research quotation.

  • For example, suppose the thesis statement is that cats are smarter than dogs. A good supporting point might be that cats are better hunters than dogs. The author could introduce a source well in support of this by saying, “In animal expert Linda Smith’s book Cats are King, Smith describes cats’ superior hunting abilities. On page 8 she says, ‘Cats are the most developed hunters in the civilized world.’ Because hunting requires incredible mental focus and skill, this statement supports my view that cats are smarter than dogs.”
  • Any quotations that are used to summarize a text are likely not pulling their weight in the research paper. All quotations should serve as direct supports.

Step 7: Find an acknowledgement of potential objections.

  • Doing this is a mark of a strong research paper and helps convince the reader that the author’s thesis is good and valid.

Step 8 Look for a conclusion that discusses the larger implications of the thesis.

  • A good research paper shows that the thesis is important beyond just the narrow context of its question.

Evaluating a Sciences Research Paper

Step 1 Look for an abstract of 300 words or less.

  • An effective abstract should briefly summarize the study's results and include a short interpretation of them, which will be explored further later in the paper.
  • The author should note any overall trends or major revelations discovered in their research.

Step 2 Identify an introduction that provides a guide for the reader.

  • The author of a successful scientific research paper should note any boundaries that limit their research.
  • For example: If the author conducted a study of pregnant women, but only women over 35 responded to a call for participants, the author should mention it. An element like this could affect the conclusions the author draws from their research.

Step 3 Look for a methodology section that describes the author’s approach.

  • An effective methodology section should be written in the past tense, as the author has already decided and executed their research.

Step 4 Read for a results section that confirms or rejects the original theory.

  • A good results section shouldn’t include raw data from the research; it should explain the findings of the research.

Step 5 Look for a discussion section that illuminates new insights.

  • A strong discussion section could present new solutions to the original problem or suggest further areas of study given the results.
  • A good discussion moves beyond merely restating the findings and offers the author's own analysis of what they mean.

Step 6 Read for a conclusion that demonstrates the importance of the findings.

  • The author might think about the potential real-world consequences of ignoring the research results, for example.

Checking the Formatting of a Research Paper

Step 1 Check for the author’s name, course, instructor name, and date.

Step 2 Check that the title is concise and engaging.

  • For example: “Growing up Stronger: Why Girls’ Schools Create More Confident Women” is more interesting and concise than “Single-Sex Schools are Better than Co-Ed Schools at Developing Self Confidence for Girls.”

Step 3 Verify that the font is standard and readable.

Step 4 Confirm that sources are cited consistently in an appropriate style.

  • The author should clearly identify any ideas that are not their own and reference other works that make up the research landscape for context.
  • Common style guides for research papers include the APA style guide, the Chicago Manual of Style, the MLA Handbook, and the Turabian citation guide.




Evaluation Research: Definition, Methods and Examples


Content Index

  • What is evaluation research?
  • Why do evaluation research?
  • Quantitative methods
  • Qualitative methods
  • Process evaluation research question examples
  • Outcome evaluation research question examples
What is evaluation research?

Evaluation research, also known as program evaluation, refers to a research purpose rather than a specific method. It is the systematic assessment of the worth or merit of the time, money, effort, and resources spent in order to achieve a goal.

Evaluation research is closely related to, but slightly different from, more conventional social research. It uses many of the same methods as traditional social research, but because it takes place within an organizational context, it requires team skills, interpersonal skills, management skills, and political savvy that conventional social research rarely demands to the same degree. Evaluation research also requires the researcher to keep the interests of the stakeholders in mind.

Evaluation research is a type of applied research, so it is intended to have some real-world effect. Many methods, such as surveys and experiments, can be used to conduct it. Evaluation research is a rigorous, systematic process that involves collecting and analyzing data about organizations, processes, projects, services, and/or resources and reporting the results. It enhances knowledge and decision-making and leads to practical applications.


Why do evaluation research?

The common goal of most evaluations is to extract meaningful information from the audience and provide valuable insights to evaluators such as sponsors, donors, client groups, administrators, staff, and other relevant constituencies. Most often, feedback is perceived as valuable if it helps in decision-making. However, evaluation research does not always create an impact that can be applied elsewhere; sometimes it fails to influence short-term decisions. Equally, research that initially seems to have no influence can have a delayed impact when the situation becomes more favorable. In spite of this, there is general agreement that the major goal of evaluation research should be to improve decision-making through the systematic use of measurable feedback.

Below are some of the benefits of evaluation research:

  • Gain insights about a project or program and its operations

Evaluation research lets you understand what works and what doesn't, where you were, where you are, and where you are headed. You can identify areas of improvement as well as strengths, so you can figure out what you need to focus on and whether there are any threats to your business. You can also find out whether there are hidden sectors in the market that are still untapped.

  • Improve practice

It is essential to gauge your past performance and understand what went wrong in order to deliver better services to your customers. Unless there is two-way communication, there is no way to improve what you have to offer. Evaluation research gives your employees and customers an opportunity to express how they feel and whether there is anything they would like to change. It also lets you modify or adapt a practice in ways that increase the chances of success.

  • Assess the effects

After evaluating the efforts, you can see how well you are meeting objectives and targets. Evaluations let you measure whether the intended benefits are really reaching the targeted audience and, if so, how effectively.

  • Build capacity

Evaluations help you analyze demand patterns and predict whether you will need more funds, upgraded skills, or more efficient operations. They let you find the gaps in the production-to-delivery chain and possible ways to fill them.

Methods of evaluation research

All market research methods involve collecting and analyzing data, judging the validity of the information, and deriving relevant inferences from it. Evaluation research comprises planning, conducting, and analyzing studies, which includes the use of data collection techniques and the application of statistical methods.

Some popular evaluation methods are input measurement, output or performance measurement, impact or outcomes assessment, quality assessment, process evaluation, benchmarking, standards, cost analysis, organizational effectiveness, program evaluation methods, and LIS-centered methods. There are also a few types of evaluation that do not always result in a meaningful assessment, such as descriptive studies, formative evaluations, and implementation analyses. Evaluation research is concerned chiefly with the information-processing and feedback functions of evaluation.

These methods can be broadly classified as quantitative and qualitative methods.

Quantitative methods are used to measure anything tangible and produce answers to questions such as:

  • Who was involved?
  • What were the outcomes?
  • What was the price?

The best way to collect quantitative data is through surveys , questionnaires , and polls . You can also create pre-tests and post-tests, review existing documents and databases or gather clinical data.

Surveys are used to gather the opinions, feedback, or ideas of your employees or customers and consist of various question types. They can be conducted face-to-face, by telephone, by mail, or online. Online surveys do not require human intervention and are far more efficient and practical. You can view the results on a dashboard and dig deeper using filter criteria based on factors such as age, gender, and location. You can also build survey logic such as branching, quotas, chained surveys, and looping into the questions to reduce the time needed both to create and to respond to the survey, and you can generate reports that apply statistical formulae and present data in a form that can be readily absorbed in meetings.


Quantitative data measure the depth and breadth of an initiative: for instance, the number of people who participated in a non-profit event or the number of people who enrolled in a new course at a university. Quantitative data collected before and after a program can show its results and impact.
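To make this concrete, the following Python sketch (the column names and figures are invented for illustration; any spreadsheet or survey tool could produce the same tabulation) shows how pre- and post-program participation counts might be compared:

import pandas as pd

# Hypothetical enrollment records collected before and after the program.
records = pd.DataFrame({
    "period": ["before", "before", "after", "after"],
    "site": ["north", "south", "north", "south"],
    "enrolled": [120, 95, 180, 140],
})

# Total enrollment in each period and the percentage change.
totals = records.groupby("period")["enrolled"].sum()
change = (totals["after"] - totals["before"]) / totals["before"] * 100
print(totals)
print(f"Enrollment change after the program: {change:.1f}%")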

The accuracy of quantitative data used for evaluation research depends on how well the sample represents the population, how easily the data can be analyzed, and how consistent they are. Quantitative methods can fail if the questions are not framed correctly or are not distributed to the right audience. Quantitative data also do not provide an understanding of context and may not be apt for complex issues.


Qualitative research methods are used where quantitative methods cannot solve the research problem, i.e., to measure intangible values. They answer questions such as:

  • What is the value added?
  • How satisfied are you with our service?
  • How likely are you to recommend us to your friends?
  • What will improve your experience?


Qualitative data are collected through observation, interviews, case studies, and focus groups. The steps of a qualitative study involve examining, comparing and contrasting, and understanding patterns. Analysts draw conclusions by identifying themes, clustering similar data, and finally reducing them to points that make sense.
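As a toy illustration of the clustering and reduction steps, the short Python sketch below groups coded interview excerpts (all invented here) under themes and reduces them to counts; real qualitative analysis would of course involve far richer coding:

from collections import Counter

# Hypothetical (theme, excerpt) pairs produced by coding interview notes.
coded_excerpts = [
    ("access", "clinic hours conflict with work"),
    ("trust", "worried about confidentiality"),
    ("access", "no transportation to the site"),
    ("knowledge", "did not know the service was free"),
    ("trust", "prefers talking to peer counselors"),
    ("access", "long waiting times"),
]

# Cluster by theme and reduce to counts.
theme_counts = Counter(theme for theme, _ in coded_excerpts)
for theme, count in theme_counts.most_common():
    print(f"{theme}: {count} excerpt(s)")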

Observations may help explain behaviors as well as the social context that is generally not discovered by quantitative methods. Behavior and body language can be observed by watching a participant or by recording audio or video. Structured interviews can be conducted with people alone or in a group under controlled conditions, or participants may be asked open-ended qualitative research questions. Qualitative research methods are also used to understand a person's perceptions and motivations.


The strength of this method is that group discussion can generate ideas and stimulate memories, with topics cascading as the discussion unfolds. The accuracy of qualitative data depends on how well the contextual data explain complex issues and complement quantitative data. Qualitative methods help answer the "why" and "how" after the "what" has been answered. Their limitations for evaluation research are that the data are subjective, time-consuming and costly to gather, and difficult to analyze and interpret.


Survey software can be used for both kinds of evaluation research. You can adapt the sample questions below and send a survey in minutes. Using a dedicated tool simplifies the process, from creating the survey and importing contacts to distributing it and generating the reports that support the research.

Examples of evaluation research

Evaluation research questions lay the foundation of a successful evaluation. They define the topics that will be evaluated. Keeping evaluation questions ready not only saves time and money, but also makes it easier to decide what data to collect, how to analyze it, and how to report it.

Evaluation research questions should be developed and agreed on in the planning stage; however, ready-made research templates can also be used.

Process evaluation research question examples:

  • How often do you use our product in a day?
  • Were approvals taken from all stakeholders?
  • Can you report the issue from the system?
  • Can you submit the feedback from the system?
  • Was each task done as per the standard operating procedure?
  • What were the barriers to the implementation of each task?
  • Were any improvement areas discovered?

Outcome evaluation research question examples:

  • How satisfied are you with our product?
  • Did the program produce intended outcomes?
  • What were the unintended outcomes?
  • Has the program increased the knowledge of participants?
  • Were the participants of the program employable before the course started?
  • Do participants of the program have the skills to find a job after the course ended?
  • Is the knowledge of participants better compared to those who did not participate in the program?



National Research Council (US) Panel on the Evaluation of AIDS Interventions; Coyle SL, Boruch RF, Turner CF, editors. Evaluating AIDS Prevention Programs: Expanded Edition. Washington (DC): National Academies Press (US); 1991.


1 Design and Implementation of Evaluation Research

Evaluation has its roots in the social, behavioral, and statistical sciences, and it relies on their principles and methodologies of research, including experimental design, measurement, statistical tests, and direct observation. What distinguishes evaluation research from other social science is that its subjects are ongoing social action programs that are intended to produce individual or collective change. This setting usually engenders a great need for cooperation between those who conduct the program and those who evaluate it. This need for cooperation can be particularly acute in the case of AIDS prevention programs because those programs have been developed rapidly to meet the urgent demands of a changing and deadly epidemic.

Although the characteristics of AIDS intervention programs place some unique demands on evaluation, the techniques for conducting good program evaluation do not need to be invented. Two decades of evaluation research have provided a basic conceptual framework for undertaking such efforts (see, e.g., Campbell and Stanley [1966] and Cook and Campbell [1979] for discussions of outcome evaluation; see Weiss [1972] and Rossi and Freeman [1982] for process and outcome evaluations); in addition, similar programs, such as the antismoking campaigns, have been subject to evaluation, and they offer examples of the problems that have been encountered.

In this chapter the panel provides an overview of the terminology, types, designs, and management of research evaluation. The following chapter provides an overview of program objectives and the selection and measurement of appropriate outcome variables for judging the effectiveness of AIDS intervention programs. These issues are discussed in detail in the subsequent, program-specific Chapters 3-5.

Types of Evaluation

The term evaluation implies a variety of different things to different people. The recent report of the Committee on AIDS Research and the Behavioral, Social, and Statistical Sciences defines the area through a series of questions (Turner, Miller, and Moses, 1989:317-318):

Evaluation is a systematic process that produces a trustworthy account of what was attempted and why; through the examination of results—the outcomes of intervention programs—it answers the questions, "What was done?" "To whom, and how?" and "What outcomes were observed?" Well-designed evaluation permits us to draw inferences from the data and addresses the difficult question: "What do the outcomes mean?"

These questions differ in the degree of difficulty of answering them. An evaluation that tries to determine the outcomes of an intervention and what those outcomes mean is a more complicated endeavor than an evaluation that assesses the process by which the intervention was delivered. Both kinds of evaluation are necessary because they are intimately connected: to establish a project's success, an evaluator must first ask whether the project was implemented as planned and then whether its objective was achieved. Questions about a project's implementation usually fall under the rubric of process evaluation . If the investigation involves rapid feedback to the project staff or sponsors, particularly at the earliest stages of program implementation, the work is called formative evaluation . Questions about effects or effectiveness are often variously called summative evaluation, impact assessment, or outcome evaluation, the term the panel uses.

Formative evaluation is a special type of early evaluation that occurs during and after a program has been designed but before it is broadly implemented. Formative evaluation is used to understand the need for the intervention and to make tentative decisions about how to implement or improve it. During formative evaluation, information is collected and then fed back to program designers and administrators to enhance program development and maximize the success of the intervention. For example, formative evaluation may be carried out through a pilot project before a program is implemented at several sites. A pilot study of a community-based organization (CBO), for example, might be used to gather data on problems involving access to and recruitment of targeted populations and the utilization and implementation of services; the findings of such a study would then be used to modify (if needed) the planned program.

Another example of formative evaluation is the use of a "story board" design of a TV message that has yet to be produced. A story board is a series of text and sketches of camera shots that are to be produced in a commercial. To evaluate the effectiveness of the message and forecast some of the consequences of actually broadcasting it to the general public, an advertising agency convenes small groups of people to react to and comment on the proposed design.

Once an intervention has been implemented, the next stage of evaluation is process evaluation, which addresses two broad questions: "What was done?" and "To whom, and how?" Ordinarily, process evaluation is carried out at some point in the life of a project to determine how and how well the delivery goals of the program are being met. When intervention programs continue over a long period of time (as is the case for some of the major AIDS prevention programs), measurements at several times are warranted to ensure that the components of the intervention continue to be delivered by the right people, to the right people, in the right manner, and at the right time. Process evaluation can also play a role in improving interventions by providing the information necessary to change delivery strategies or program objectives in a changing epidemic.

Research designs for process evaluation include direct observation of projects, surveys of service providers and clients, and the monitoring of administrative records. The panel notes that the Centers for Disease Control (CDC) is already collecting some administrative records on its counseling and testing program and community-based projects. The panel believes that this type of evaluation should be a continuing and expanded component of intervention projects to guarantee the maintenance of the projects' integrity and responsiveness to their constituencies.

The purpose of outcome evaluation is to identify consequences and to establish that consequences are, indeed, attributable to a project. This type of evaluation answers the questions, "What outcomes were observed?" and, perhaps more importantly, "What do the outcomes mean?" Like process evaluation, outcome evaluation can also be conducted at intervals during an ongoing program, and the panel believes that such periodic evaluation should be done to monitor goal achievement.

The panel believes that these stages of evaluation (i.e., formative, process, and outcome) are essential to learning how AIDS prevention programs contribute to containing the epidemic. After a body of findings has been accumulated from such evaluations, it may be fruitful to launch another stage of evaluation: cost-effectiveness analysis (see Weinstein et al., 1989). Like outcome evaluation, cost-effectiveness analysis also measures program effectiveness, but it extends the analysis by adding a measure of program cost. The panel believes that consideration of cost-effectiveness analysis should be postponed until more experience is gained with formative, process, and outcome evaluation of the CDC AIDS prevention programs.

Evaluation Research Design

Process and outcome evaluations require different types of research designs, as discussed below. Formative evaluations, which are intended to both assess implementation and forecast effects, use a mix of these designs.

Process Evaluation Designs

To conduct process evaluations on how well services are delivered, data need to be gathered on the content of interventions and on their delivery systems. Suggested methodologies include direct observation, surveys, and record keeping.

Direct observation designs include case studies, in which participant-observers unobtrusively and systematically record encounters within a program setting, and nonparticipant observation, in which long, open-ended (or "focused") interviews are conducted with program participants. For example, "professional customers" at counseling and testing sites can act as project clients to monitor activities unobtrusively; alternatively, nonparticipant observers can interview both staff and clients. Surveys—either censuses (of the whole population of interest) or samples—elicit information through interviews or questionnaires completed by project participants or potential users of a project. For example, surveys within community-based projects can collect basic statistical information on project objectives, what services are provided, to whom, when, how often, for how long, and in what context.

Record keeping consists of administrative or other reporting systems that monitor use of services. Standardized reporting ensures consistency in the scope and depth of data collected. To use the media campaign as an example, the panel suggests using standardized data on the use of the AIDS hotline to monitor public attentiveness to the advertisements broadcast by the media campaign.

These designs are simple to understand, but they require expertise to implement. For example, observational studies must be conducted by people who are well trained in how to carry out on-site tasks sensitively and to record their findings uniformly. Observers can either complete narrative accounts of what occurred in a service setting or they can complete some sort of data inventory to ensure that multiple aspects of service delivery are covered. These types of studies are time consuming and benefit from corroboration among several observers. The use of surveys in research is well-understood, although they, too, require expertise to be well implemented. As the program chapters reflect, survey data collection must be carefully designed to reduce problems of validity and reliability and, if samples are used, to design an appropriate sampling scheme. Record keeping or service inventories are probably the easiest research designs to implement, although preparing standardized internal forms requires attention to detail about salient aspects of service delivery.
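As a rough sketch of what a standardized internal reporting form might capture, the following Python fragment defines a hypothetical service-delivery record; the field names are assumptions chosen for illustration, not a prescribed format:

from dataclasses import dataclass
from datetime import date

@dataclass
class ServiceDeliveryRecord:
    site_id: str          # project site that delivered the service
    service_type: str     # e.g., "counseling", "testing", "outreach"
    delivered_on: date    # date the service was delivered
    provider_role: str    # staff role of the person delivering it
    clients_served: int   # number of clients who received the service
    referral_source: str  # how clients reached the project

# One hypothetical entry; consistent fields keep the scope and depth of
# data collection uniform across sites.
record = ServiceDeliveryRecord("site-01", "counseling", date(1991, 3, 15),
                               "health educator", 12, "hotline")
print(record)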

Outcome Evaluation Designs

Research designs for outcome evaluations are meant to assess principal and relative effects. Ideally, to assess the effect of an intervention on program participants, one would like to know what would have happened to the same participants in the absence of the program. Because it is not possible to make this comparison directly, inference strategies that rely on proxies have to be used. Scientists use three general approaches to construct proxies for use in the comparisons required to evaluate the effects of interventions: (1) nonexperimental methods, (2) quasi-experiments, and (3) randomized experiments. The first two are discussed below, and randomized experiments are discussed in the subsequent section.

Nonexperimental and Quasi-Experimental Designs

The most common form of nonexperimental design is a before-and-after study. In this design, pre-intervention measurements are compared with equivalent measurements made after the intervention to detect change in the outcome variables that the intervention was designed to influence.
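In practice, a before-and-after analysis often reduces to a paired comparison of the same participants' pre- and post-intervention scores. The Python sketch below uses invented scores and a paired t-test; as the next paragraph cautions, even a statistically significant change cannot, by itself, be attributed to the intervention:

from scipy import stats

# Hypothetical outcome scores for the same eight participants, measured
# before and after the intervention.
pre_scores = [12, 15, 9, 14, 10, 11, 13, 8]
post_scores = [14, 18, 11, 15, 13, 12, 16, 10]

# Paired t-test on the within-person change.
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")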

Although the panel finds that before-and-after studies frequently provide helpful insights, the panel believes that these studies do not provide sufficiently reliable information to be the cornerstone for evaluation research on the effectiveness of AIDS prevention programs. The panel's conclusion follows from the fact that the postintervention changes cannot usually be attributed unambiguously to the intervention. Plausible competing explanations for differences between pre- and postintervention measurements will often be numerous, including not only the possible effects of other AIDS intervention programs, news stories, and local events, but also the effects that may result from the maturation of the participants and the educational or sensitizing effects of repeated measurements, among others.

Quasi-experimental and matched control designs provide a separate comparison group. In these designs, the control group may be selected by matching nonparticipants to participants in the treatment group on the basis of selected characteristics. It is difficult to ensure the comparability of the two groups even when they are matched on many characteristics because other relevant factors may have been overlooked or mismatched or they may be difficult to measure (e.g., the motivation to change behavior). In some situations, it may simply be impossible to measure all of the characteristics of the units (e.g., communities) that may affect outcomes, much less demonstrate their comparability.

Matched control designs require extraordinarily comprehensive scientific knowledge about the phenomenon under investigation in order for evaluators to be confident that all of the relevant determinants of outcomes have been properly accounted for in the matching. Three types of information or knowledge are required: (1) knowledge of intervening variables that also affect the outcome of the intervention and, consequently, need adjustment to make the groups comparable; (2) measurements on all intervening variables for all subjects; and (3) knowledge of how to make the adjustments properly, which in turn requires an understanding of the functional relationship between the intervening variables and the outcome variables. Satisfying each of these information requirements is likely to be more difficult than answering the primary evaluation question, "Does this intervention produce beneficial effects?"

Given the size and the national importance of AIDS intervention programs and given the state of current knowledge about behavior change in general and AIDS prevention, in particular, the panel believes that it would be unwise to rely on matching and adjustment strategies as the primary design for evaluating AIDS intervention programs. With differently constituted groups, inferences about results are hostage to uncertainty about the extent to which the observed outcome actually results from the intervention and is not an artifact of intergroup differences that may not have been removed by matching or adjustment.

Randomized Experiments

A remedy to the inferential uncertainties that afflict nonexperimental designs is provided by randomized experiments . In such experiments, one singly constituted group is established for study. A subset of the group is then randomly chosen to receive the intervention, with the other subset becoming the control. The two groups are not identical, but they are comparable. Because they are two random samples drawn from the same population, they are not systematically different in any respect, which is important for all variables—both known and unknown—that can influence the outcome. Dividing a singly constituted group into two random and therefore comparable subgroups cuts through the tangle of causation and establishes a basis for the valid comparison of respondents who do and do not receive the intervention. Randomized experiments provide for clear causal inference by solving the problem of group comparability, and may be used to answer the evaluation questions "Does the intervention work?" and "What works better?"

Which question is answered depends on whether the controls receive an intervention or not. When the object is to estimate whether a given intervention has any effects, individuals are randomly assigned to the project or to a zero-treatment control group. The control group may be put on a waiting list or simply not get the treatment. This design addresses the question, "Does it work?"

When the object is to compare variations on a project—e.g., individual counseling sessions versus group counseling—then individuals are randomly assigned to these two regimens, and there is no zero-treatment control group. This design addresses the question, "What works better?" In either case, the control groups must be followed up as rigorously as the experimental groups.
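A minimal Python sketch of how such an assignment might be generated (the participant identifiers, group sizes, and seed are all hypothetical):

import random

# Forty hypothetical participants drawn from one singly constituted group.
participants = [f"P{i:03d}" for i in range(1, 41)]

rng = random.Random(42)     # fixed seed so the assignment is reproducible
shuffled = participants[:]
rng.shuffle(shuffled)

# Split the shuffled list into two comparable arms of equal size.
half = len(shuffled) // 2
treatment_group = sorted(shuffled[:half])
control_group = sorted(shuffled[half:])

print("Treatment arm:", treatment_group[:5], "...")
print("Control arm:  ", control_group[:5], "...")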

A randomized experiment requires that individuals, organizations, or other treatment units be randomly assigned to one of two or more treatments or program variations. Random assignment ensures that the estimated differences between the groups so constituted are statistically unbiased; that is, that any differences in effects measured between them are a result of treatment. The absence of statistical bias in groups constituted in this fashion stems from the fact that random assignment ensures that there are no systematic differences between them, differences that can and usually do affect groups composed in ways that are not random. The panel believes this approach is far superior for outcome evaluations of AIDS interventions to the nonrandom and quasi-experimental approaches. Therefore,

To improve interventions that are already broadly implemented, the panel recommends the use of randomized field experiments of alternative or enhanced interventions.

Under certain conditions, the panel also endorses randomized field experiments with a nontreatment control group to evaluate new interventions. In the context of a deadly epidemic, ethics dictate that treatment not be withheld simply for the purpose of conducting an experiment. Nevertheless, there may be times when a randomized field test of a new treatment with a no-treatment control group is worthwhile. One such time is during the design phase of a major or national intervention.

Before a new intervention is broadly implemented, the panel recommends that it be pilot tested in a randomized field experiment.

The panel considered the use of experiments with delayed rather than no treatment. A delayed-treatment control group strategy might be pursued when resources are too scarce for an intervention to be widely distributed at one time. For example, a project site that is waiting to receive funding for an intervention would be designated as the control group. If it is possible to randomize which projects in the queue receive the intervention, an evaluator could measure and compare outcomes after the experimental group had received the new treatment but before the control group received it. The panel believes that such a design can be applied only in limited circumstances, such as when groups would have access to related services in their communities and that conducting the study was likely to lead to greater access or better services. For example, a study cited in Chapter 4 used a randomized delayed-treatment experiment to measure the effects of a community-based risk reduction program. However, such a strategy may be impractical for several reasons, including:

  • sites waiting for funding for an intervention might seek resources from another source;
  • it might be difficult to enlist the nonfunded site and its clients to participate in the study;
  • there could be an appearance of favoritism toward projects whose funding was not delayed.

Although randomized experiments have many benefits, the approach is not without pitfalls. In the planning stages of evaluation, it is necessary to contemplate certain hazards, such as the Hawthorne effect and differential project dropout rates. Precautions must be taken either to prevent these problems or to measure their effects. Fortunately, there is some evidence suggesting that the Hawthorne effect is usually not very large (Rossi and Freeman, 1982:175-176).

Attrition is potentially more damaging to an evaluation, and it must be limited if the experimental design is to be preserved. If sample attrition is not limited in an experimental design, it becomes necessary to account for the potentially biasing impact of the loss of subjects in the treatment and control conditions of the experiment. The statistical adjustments required to make inferences about treatment effectiveness in such circumstances can introduce uncertainties that are as worrisome as those afflicting nonexperimental and quasi-experimental designs. Thus, the panel's recommendation of the selective use of randomized design carries an implicit caveat: To realize the theoretical advantages offered by randomized experimental designs, substantial efforts will be required to ensure that the designs are not compromised by flawed execution.

Another pitfall to randomization is its appearance of unfairness or unattractiveness to participants and the controversial legal and ethical issues it sometimes raises. Often, what is being criticized is the control of project assignment of participants rather than the use of randomization itself. In deciding whether random assignment is appropriate, it is important to consider the specific context of the evaluation and how participants would be assigned to projects in the absence of randomization. The Federal Judicial Center (1981) offers five threshold conditions for the use of random assignment.

  • Does present practice or policy need improvement?
  • Is there significant uncertainty about the value of the proposed regimen?
  • Are there acceptable alternatives to randomized experiments?
  • Will the results of the experiment be used to improve practice or policy?
  • Is there a reasonable protection against risk for vulnerable groups (i.e., individuals within the justice system)?

The parent committee has argued that these threshold conditions apply in the case of AIDS prevention programs (see Turner, Miller, and Moses, 1989:331-333).

Although randomization may be desirable from an evaluation and ethical standpoint, and acceptable from a legal standpoint, it may be difficult to implement from a practical or political standpoint. Again, the panel emphasizes that questions about the practical or political feasibility of the use of randomization may in fact refer to the control of program allocation rather than to the issues of randomization itself. In fact, when resources are scarce, it is often more ethical and politically palatable to randomize allocation rather than to allocate on grounds that may appear biased.

It is usually easier to defend the use of randomization when the choice has to do with assignment to groups receiving alternative services than when the choice involves assignment to groups receiving no treatment. For example, in comparing a testing and counseling intervention that offered a special "skills training" session in addition to its regular services with a counseling and testing intervention that offered no additional component, random assignment of participants to one group rather than another may be acceptable to program staff and participants because the relative values of the alternative interventions are unknown.

The more difficult issue is the introduction of new interventions that are perceived to be needed and effective in a situation in which there are no services. An argument that is sometimes offered against the use of randomization in this instance is that interventions should be assigned on the basis of need (perhaps as measured by rates of HIV incidence or of high-risk behaviors). But this argument presumes that the intervention will have a positive effect—which is unknown before evaluation—and that relative need can be established, which is a difficult task in itself.

The panel recognizes that community and political opposition to randomization to zero treatments may be strong and that enlisting participation in such experiments may be difficult. This opposition and reluctance could seriously jeopardize the production of reliable results if it is translated into noncompliance with a research design. The feasibility of randomized experiments for AIDS prevention programs has already been demonstrated, however (see the review of selected experiments in Turner, Miller, and Moses, 1989:327-329). The substantial effort involved in mounting randomized field experiments is repaid by the fact that they can provide unbiased evidence of the effects of a program.

Unit of Assignment.

The unit of assignment of an experiment may be an individual person, a clinic (i.e., the clientele of the clinic), or another organizational unit (e.g., the community or city). The treatment unit is selected at the earliest stage of design. Variations of units are illustrated in the following four examples of intervention programs.

(1) Two different pamphlets (A and B) on the same subject (e.g., testing) are distributed in an alternating sequence to individuals calling an AIDS hotline. The outcome to be measured is whether the recipient returns a card asking for more information.

(2) Two instruction curricula (A and B) about AIDS and HIV infections are prepared for use in high school driver education classes. The outcome to be measured is a score on a knowledge test.

(3) Of all clinics for sexually transmitted diseases (STDs) in a large metropolitan area, some are randomly chosen to introduce a change in the fee schedule. The outcome to be measured is the change in patient load.

(4) A coordinated set of community-wide interventions—involving community leaders, social service agencies, the media, community associations and other groups—is implemented in one area of a city. Outcomes are knowledge as assessed by testing at drug treatment centers and STD clinics and condom sales in the community's retail outlets.

In example (1), the treatment unit is an individual person who receives pamphlet A or pamphlet B. If either "treatment" is applied again, it would be applied to a person. In example (2), the high school class is the treatment unit; everyone in a given class experiences either curriculum A or curriculum B. If either treatment is applied again, it would be applied to a class. The treatment unit is the clinic in example (3), and in example (4), the treatment unit is a community.

The consistency of the effects of a particular intervention across repetitions justly carries a heavy weight in appraising the intervention. It is important to remember that repetitions of a treatment or intervention are the number of treatment units to which the intervention is applied. This is a salient principle in the design and execution of intervention programs as well as in the assessment of their results.

The adequacy of the proposed sample size (number of treatment units) has to be considered in advance. Adequacy depends mainly on two factors:

  • How much variation occurs from unit to unit among units receiving a common treatment? If that variation is large, then the number of units needs to be large.
  • What is the minimum size of a possible treatment difference that, if present, would be practically important? That is, how small a treatment difference is it essential to detect if it is present? The smaller this quantity, the larger the number of units that are necessary.

Many formal methods for considering and choosing sample size exist (see, e.g., Cohen, 1988). Practical circumstances occasionally allow choosing between designs that involve units at different levels; thus, a classroom might be the unit if the treatment is applied in one way, but an entire school might be the unit if the treatment is applied in another. When both approaches are feasible, the use of a power analysis for each approach may lead to a reasoned choice.
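One such formal method, sketched below in Python with statsmodels, solves for the number of units per arm needed in an individual-level, two-group comparison of means; the effect size, power, and significance level are assumptions chosen only for the example, and cluster-level designs would require different calculations:

from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.3,   # smallest standardized difference worth detecting
    power=0.8,         # chance of detecting that difference if it exists
    alpha=0.05,        # two-sided significance level
)
print(f"Units needed per group: about {n_per_group:.0f}")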

Choice of Methods

There is some controversy about the advantages of randomized experiments in comparison with other evaluative approaches. It is the panel's belief that when a (well executed) randomized study is feasible, it is superior to alternative kinds of studies in the strength and clarity of whatever conclusions emerge, primarily because the experimental approach avoids selection biases. Other evaluation approaches are sometimes unavoidable, but ordinarily the accumulation of valid information will go more slowly and less securely than in randomized approaches.

Experiments in medical research shed light on the advantages of carefully conducted randomized experiments. The Salk vaccine trials are a successful example of a large, randomized study. In a double-blind test of the polio vaccine, children in various communities were randomly assigned to two treatments, either the vaccine or a placebo. By this method, the effectiveness of the Salk vaccine was demonstrated in one summer of research (Meier, 1957).

A sufficient accumulation of relevant, observational information, especially when collected in studies using different procedures and sample populations, may also clearly demonstrate the effectiveness of a treatment or intervention. The process of accumulating such information can be a long one, however. When a (well-executed) randomized study is feasible, it can provide evidence that is subject to less uncertainty in its interpretation, and it can often do so in a more timely fashion. In the midst of an epidemic, the panel believes it proper that randomized experiments be one of the primary strategies for evaluating the effectiveness of AIDS prevention efforts. In making this recommendation, however, the panel also wishes to emphasize that the advantages of the randomized experimental design can be squandered by poor execution (e.g., by compromised assignment of subjects, significant subject attrition rates, etc.). To achieve the advantages of the experimental design, care must be taken to ensure that the integrity of the design is not compromised by poor execution.

In proposing that randomized experiments be one of the primary strategies for evaluating the effectiveness of AIDS prevention programs, the panel also recognizes that there are situations in which randomization will be impossible or, for other reasons, cannot be used. In its next report the panel will describe at length appropriate nonexperimental strategies to be considered in situations in which an experiment is not a practical or desirable alternative.

The Management of Evaluation

Conscientious evaluation requires a considerable investment of funds, time, and personnel. Because the panel recognizes that resources are not unlimited, it suggests that they be concentrated on the evaluation of a subset of projects to maximize the return on investment and to enhance the likelihood of high-quality results.

Project Selection

Deciding which programs or sites to evaluate is by no means a trivial matter. Selection should be carefully weighed so that projects that are not replicable or that have little chance for success are not subjected to rigorous evaluations.

The panel recommends that any intensive evaluation of an intervention be conducted on a subset of projects selected according to explicit criteria. These criteria should include the replicability of the project, the feasibility of evaluation, and the project's potential effectiveness for prevention of HIV transmission.

If a project is replicable, it means that the particular circumstances of service delivery in that project can be duplicated. In other words, for CBOs and counseling and testing projects, the content and setting of an intervention can be duplicated across sites. Feasibility of evaluation means that, as a practical matter, the research can be done: that is, the research design is adequate to control for rival hypotheses, it is not excessively costly, and the project is acceptable to the community and the sponsor. Potential effectiveness for HIV prevention means that the intervention is at least based on a reasonable theory (or mix of theories) about behavioral change (e.g., social learning theory [Bandura, 1977], the health belief model [Janz and Becker, 1984], etc.), if it has not already been found to be effective in related circumstances.

In addition, since it is important to ensure that the results of evaluations will be broadly applicable,

The panel recommends that evaluation be conducted and replicated across major types of subgroups, programs, and settings. Attention should be paid to geographic areas with low and high AIDS prevalence, as well as to subpopulations at low and high risk for AIDS.

Research Administration

The sponsoring agency interested in evaluating an AIDS intervention should consider the mechanisms through which the research will be carried out as well as the desirability of both independent oversight and agency in-house conduct and monitoring of the research. The appropriate entities and mechanisms for conducting evaluations depend to some extent on the kinds of data being gathered and the evaluation questions being asked.

Oversight and monitoring are important to keep projects fully informed about the other evaluations relevant to their own and to render assistance when needed. Oversight and monitoring are also important because evaluation is often a sensitive issue for project and evaluation staff alike. The panel is aware that evaluation may appear threatening to practitioners and researchers because of the possibility that evaluation research will show that their projects are not as effective as they believe them to be. These needs and vulnerabilities should be taken into account as evaluation research management is developed.

Conducting the Research

To conduct some aspects of a project's evaluation, it may be appropriate to involve project administrators, especially when the data will be used to evaluate delivery systems (e.g., to determine when and which services are being delivered). To evaluate outcomes, the services of an outside evaluator or evaluation team are almost always required because few practitioners have the necessary professional experience or the time and resources necessary to do evaluation. The outside evaluator must have relevant expertise in evaluation research methodology and must also be sensitive to the fears, hopes, and constraints of project administrators.

Several evaluation management schemes are possible. For example, a prospective AIDS prevention project group (the contractor) can bid on a contract for project funding that includes an intensive evaluation component. The actual evaluation can be conducted either by the contractor alone or by the contractor working in concert with an outside independent collaborator. This mechanism has the advantage of involving project practitioners in the work of evaluation as well as building separate but mutually informing communities of experts around the country. Alternatively, a contract can be let with a single evaluator or evaluation team that will collaborate with the subset of sites that is chosen for evaluation. This variation would be managerially less burdensome than awarding separate contracts, but it would require greater dependence on the expertise of a single investigator or investigative team. ( Appendix A discusses contracting options in greater depth.) Both of these approaches accord with the parent committee's recommendation that collaboration between practitioners and evaluation researchers be ensured. Finally, in the more traditional evaluation approach, independent principal investigators or investigative teams may respond to a request for proposal (RFP) issued to evaluate individual projects. Such investigators are frequently university-based or are members of a professional research organization, and they bring to the task a variety of research experiences and perspectives.

Independent Oversight

The panel believes that coordination and oversight of multisite evaluations is critical because of the variability in investigators' expertise and in the results of the projects being evaluated. Oversight can provide quality control for individual investigators and can be used to review and integrate findings across sites for developing policy. The independence of an oversight body is crucial to ensure that project evaluations do not succumb to the pressures for positive findings of effectiveness.

When evaluation is to be conducted by a number of different evaluation teams, the panel recommends establishing an independent scientific committee to oversee project selection and research efforts, corroborate the impartiality and validity of results, conduct cross-site analyses, and prepare reports on the progress of the evaluations.

The composition of such an independent oversight committee will depend on the research design of a given program. For example, the committee ought to include statisticians and other specialists in randomized field tests when that approach is being taken. Specialists in survey research and case studies should be recruited if either of those approaches is to be used. Appendix B offers a model for an independent oversight group that has been successfully implemented in other settings—a project review team, or advisory board.

Agency In-House Team

As the parent committee noted in its report, evaluations of AIDS interventions require skills that may be in short supply for agencies invested in delivering services (Turner, Miller, and Moses, 1989:349). Although this situation can be partly alleviated by recruiting professional outside evaluators and retaining an independent oversight group, the panel believes that an in-house team of professionals within the sponsoring agency is also critical. The in-house experts will interact with the outside evaluators and provide input into the selection of projects, outcome objectives, and appropriate research designs; they will also monitor the progress and costs of evaluation. These functions require not just bureaucratic oversight but appropriate scientific expertise.

This is not intended to preclude the direct involvement of CDC staff in conducting evaluations. However, given the great amount of work to be done, it is likely a considerable portion will have to be contracted out. The quality and usefulness of the evaluations done under contract can be greatly enhanced by ensuring that there are an adequate number of CDC staff trained in evaluation research methods to monitor these contracts.

The panel recommends that CDC recruit and retain behavioral, social, and statistical scientists trained in evaluation methodology to facilitate the implementation of the evaluation research recommended in this report.

Interagency Collaboration

The panel believes that the federal agencies that sponsor the design of basic research, intervention programs, and evaluation strategies would profit from greater interagency collaboration. The evaluation of AIDS intervention programs would benefit from a coherent program of studies that should provide models of efficacious and effective interventions to prevent further HIV transmission, the spread of other STDs, and unwanted pregnancies (especially among adolescents). A marriage could then be made of basic and applied science, from which the best evaluation is born. Exploring the possibility of interagency collaboration and CDC's role in such collaboration is beyond the scope of this panel's task, but it is an important issue that we suggest be addressed in the future.

Costs of Evaluation

In view of the dearth of current evaluation efforts, the panel believes that vigorous evaluation research must be undertaken over the next few years to build up a body of knowledge about what interventions can and cannot do. Dedicating no resources to evaluation will virtually guarantee that high-quality evaluations will be infrequent and the data needed for policy decisions will be sparse or absent. Yet, evaluating every project is not feasible simply because there are not enough resources and, in many cases, evaluating every project is not necessary for good science or good policy.

The panel believes that evaluating only some of a program's sites or projects, selected under the criteria noted in Chapter 4 , is a sensible strategy. Although we recommend that intensive evaluation be conducted on only a subset of carefully chosen projects, we believe that high-quality evaluation will require a significant investment of time, planning, personnel, and financial support. The panel's aim is to be realistic—not discouraging—when it notes that the costs of program evaluation should not be underestimated. Many of the research strategies proposed in this report require investments that are perhaps greater than has been previously contemplated. This is particularly the case for outcome evaluations, which are ordinarily more difficult and expensive to conduct than formative or process evaluations. And those costs will be additive with each type of evaluation that is conducted.

Panel members have found that the cost of an outcome evaluation sometimes equals or even exceeds the cost of actual program delivery. For example, it was reported to the panel that randomized studies used to evaluate recent manpower training projects cost as much as the projects themselves (see Cottingham and Rodriguez, 1987). In another case, the principal investigator of an ongoing AIDS prevention project told the panel that the cost of randomized experimentation was approximately three times higher than the cost of delivering the intervention (albeit the study was quite small, involving only 104 participants) (Kelly et al., 1989). Fortunately, only a fraction of a program's projects or sites need to be intensively evaluated to produce high-quality information, and not all will require randomized studies.

Because of the variability in kinds of evaluation that will be done as well as in the costs involved, there is no set standard or rule for judging what fraction of a total program budget should be invested in evaluation. Based upon very limited data 10 and assuming that only a small sample of projects would be evaluated, the panel suspects that program managers might reasonably anticipate spending 8 to 12 percent of their intervention budgets to conduct high-quality evaluations (i.e., formative, process, and outcome evaluations). 11 Larger investments seem politically infeasible and unwise in view of the need to put resources into program delivery. Smaller investments in evaluation risk studying an inadequate sample of program types and may also invite compromises in research quality.
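To make that budgeting guidance concrete, the short sketch below applies the 8 to 12 percent range to a hypothetical intervention budget; the dollar figure and the helper function are illustrative assumptions, not figures from the panel's report.

```python
# Rough illustration of the 8-12 percent evaluation set-aside discussed above.
# The intervention budget is a hypothetical figure, not data from the report.

def evaluation_budget_range(intervention_budget, low=0.08, high=0.12):
    """Return the (low, high) evaluation spending range implied by the guidance."""
    return intervention_budget * low, intervention_budget * high

budget = 5_000_000  # hypothetical annual intervention budget, in dollars
lo, hi = evaluation_budget_range(budget)
print(f"Evaluation set-aside for a ${budget:,.0f} program: ${lo:,.0f} to ${hi:,.0f}")
```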

The nature of the HIV/AIDS epidemic mandates an unwavering commitment to prevention programs, and the prevention activities require a similar commitment to the evaluation of those programs. The magnitude of what can be learned from doing good evaluations will more than balance the magnitude of the costs required to perform them. Moreover, it should be realized that the costs of shoddy research can be substantial, both in their direct expense and in the lost opportunities to identify effective strategies for AIDS prevention. Once the investment has been made, however, and a reservoir of findings and practical experience has accumulated, subsequent evaluations should be easier and less costly to conduct.

  • Bandura, A. (1977) Self-efficacy: Toward a unifying theory of behavioral change . Psychological Review 84:191-215. [ PubMed : 847061 ]
  • Campbell, D. T., and Stanley, J. C. (1966) Experimental and Quasi-Experimental Design and Analysis . Boston: Houghton-Mifflin.
  • Centers for Disease Control (CDC) (1988) Sourcebook presented at the National Conference on the Prevention of HIV Infection and AIDS Among Racial and Ethnic Minorities in the United States (August).
  • Cohen, J. (1988) Statistical Power Analysis for the Behavioral Sciences . 2nd ed. Hillsdale, NJ.: L. Erlbaum Associates.
  • Cook, T., and Campbell, D. T. (1979) Quasi-Experimentation: Design and Analysis for Field Settings . Boston: Houghton-Mifflin.
  • Federal Judicial Center (1981) Experimentation in the Law . Washington, D.C.: Federal Judicial Center.
  • Janz, N. K., and Becker, M. H. (1984) The health belief model: A decade later . Health Education Quarterly 11 (1):1-47. [ PubMed : 6392204 ]
  • Kelly, J. A., St. Lawrence, J. S., Hood, H. V., and Brasfield, T. L. (1989) Behavioral intervention to reduce AIDS risk activities . Journal of Consulting and Clinical Psychology 57:60-67. [ PubMed : 2925974 ]
  • Meier, P. (1957) Safety testing of poliomyelitis vaccine . Science 125(3257): 1067-1071. [ PubMed : 13432758 ]
  • Roethlisberger, F. J. and Dickson, W. J. (1939) Management and the Worker . Cambridge, Mass.: Harvard University Press.
  • Rossi, P. H., and Freeman, H. E. (1982) Evaluation: A Systematic Approach . 2nd ed. Beverly Hills, Cal.: Sage Publications.
  • Turner, C. F., editor; , Miller, H. G., editor; , and Moses, L. E., editor. , eds. (1989) AIDS, Sexual Behavior, and Intravenous Drug Use . Report of the NRC Committee on AIDS Research and the Behavioral, Social, and Statistical Sciences. Washington, D.C.: National Academy Press. [ PubMed : 25032322 ]
  • Weinstein, M. C., Graham, J. D., Siegel, J. E., and Fineberg, H. V. (1989) Cost-effectiveness analysis of AIDS prevention programs: Concepts, complications, and illustrations . In C.F. Turner, editor; , H. G. Miller, editor; , and L. E. Moses, editor. , eds., AIDS, Sexual Behavior, and Intravenous Drug Use . Report of the NRC Committee on AIDS Research and the Behavioral, Social, and Statistical Sciences. Washington, D.C.: National Academy Press. [ PubMed : 25032322 ]
  • Weiss, C. H. (1972) Evaluation Research . Englewood Cliffs, N.J.: Prentice-Hall, Inc.

On occasion, nonparticipants observe behavior during or after an intervention. Chapter 3 introduces this option in the context of formative evaluation.

The use of professional customers can raise serious concerns in the eyes of project administrators at counseling and testing sites. The panel believes that site administrators should receive advance notification that professional customers may visit their sites for testing and counseling services and provide their consent before this method of data collection is used.

Parts of this section are adapted from Turner, Miller, and Moses (1989:324-326).

This weakness has been noted by CDC in a sourcebook provided to its HIV intervention project grantees (CDC, 1988:F-14).

The significance tests applied to experimental outcomes calculate the probability that any observed difference between the sample estimates might result from random variation between the groups.
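The idea in this note can be illustrated with a small calculation. The sketch below runs a two-proportion z-test on invented counts from a hypothetical randomized study (30 of 100 intervention participants versus 45 of 100 controls reporting a risk behavior); it is a generic statistical illustration, not a method prescribed by the panel, and it uses only the Python standard library.

```python
# Hedged illustration of the significance-test idea: a two-proportion z-test
# on invented counts from a hypothetical randomized study.
from math import sqrt
from statistics import NormalDist

def two_proportion_z_test(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p-value) for the difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Probability of observing a difference at least this large by chance alone.
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))
    return z, p_value

# Hypothetical outcome: 30/100 intervention participants vs. 45/100 controls report the behavior.
z, p = two_proportion_z_test(30, 100, 45, 100)
print(f"z = {z:.2f}, two-sided p = {p:.3f}")
```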

Research participants' knowledge that they were being observed had a positive effect on their responses in a series of famous studies made at General Electric's Hawthorne Works in Chicago (Roethlisberger and Dickson, 1939); the phenomenon is referred to as the Hawthorne effect.

Participants who self-select into a program are likely to be different from non-random comparison groups in terms of interests, motivations, values, abilities, and other attributes that can bias the outcomes.

A double-blind test is one in which neither the person receiving the treatment nor the person administering it knows which treatment, or whether any treatment at all, is being given.

As discussed under ''Agency In-House Team,'' the outside evaluator might be one of CDC's personnel. However, given the large amount of research to be done, it is likely that non-CDC evaluators will also need to be used.

See, for example, Chapter 3, which presents cost estimates for evaluations of media campaigns. Similar estimates are not readily available for other program types.

For example, the U. K. Health Education Authority (that country's primary agency for AIDS education and prevention programs) allocates 10 percent of its AIDS budget for research and evaluation of its AIDS programs (D. McVey, Health Education Authority, personal communication, June 1990). This allocation covers both process and outcome evaluation.

  • Cite this page: National Research Council (US) Panel on the Evaluation of AIDS Interventions; Coyle SL, Boruch RF, Turner CF, editors. Evaluating AIDS Prevention Programs: Expanded Edition. Washington (DC): National Academies Press (US); 1991. Chapter 1, Design and Implementation of Evaluation Research.


Evaluation Research: An Overview

  • Published in Library Trends, 6 September 2006
  • DOI: 10.1353/lib.2006.0050


  • Evaluation Research Design: Examples, Methods & Types

busayo.longe

As you engage in tasks, you will need to take intermittent breaks to determine how much progress has been made and if any changes need to be effected along the way. This is very similar to what organizations do when they carry out  evaluation research.  

The evaluation research methodology has become one of the most important approaches for organizations as they strive to create products, services, and processes that speak to the needs of target users. In this article, we will show you how your organization can conduct successful evaluation research using Formplus .

What is Evaluation Research?

Also known as program evaluation, evaluation research is a common research design that entails carrying out a structured assessment of the value of resources committed to a project or specific goal. It often adopts social research methods to gather and analyze useful information about organizational processes and products.  

As a type of applied research , evaluation research is typically associated with real-life scenarios within organizational contexts. This means that the researcher will need to leverage common workplace skills, including interpersonal skills and teamwork, to arrive at objective research findings that will be useful to stakeholders. 

Characteristics of Evaluation Research

  • Research Environment: Evaluation research is conducted in the real world; that is, within the context of an organization. 
  • Research Focus: Evaluation research is primarily concerned with measuring the outcomes of a process rather than the process itself. 
  • Research Outcome: Evaluation research is employed for strategic decision making in organizations. 
  • Research Goal: The goal of program evaluation is to determine whether a process has yielded the desired result(s). 
  • This type of research protects the interests of stakeholders in the organization. 
  • It often represents a middle-ground between pure and applied research. 
  • Evaluation research is both detailed and continuous. It pays attention to performative processes rather than descriptions. 
  • Research Process: This research design utilizes qualitative and quantitative research methods to gather relevant data about a product or action-based strategy. These methods include observation, tests, and surveys.

Types of Evaluation Research

The Encyclopedia of Evaluation (Mathison, 2004) treats forty-two different evaluation approaches and models ranging from “appreciative inquiry” to “connoisseurship” to “transformative evaluation”. Common types of evaluation research include the following: 

  • Formative Evaluation

Formative evaluation or baseline survey is a type of evaluation research that involves assessing the needs of the users or target market before embarking on a project.  Formative evaluation is the starting point of evaluation research because it sets the tone of the organization’s project and provides useful insights for other types of evaluation.  

  • Mid-term Evaluation

Mid-term evaluation entails assessing how far a project has come and determining if it is in line with the set goals and objectives. Mid-term reviews allow the organization to determine whether a change or modification of the implementation strategy is necessary, and they also serve as a means of tracking the project's progress. 

  • Summative Evaluation

This type of evaluation is also known as end-term evaluation or project-completion evaluation, and it is conducted immediately after the completion of a project. Here, the researcher examines the value and outputs of the program within the context of the projected results. 

Summative evaluation allows the organization to measure the degree of success of a project. Such results can be shared with stakeholders, target markets, and prospective investors. 

  • Outcome Evaluation

Outcome evaluation is primarily target-audience oriented because it measures the effects of the project, program, or product on the users. This type of evaluation views the outcomes of the project through the lens of the target audience and it often measures changes such as knowledge-improvement, skill acquisition, and increased job efficiency. 

  • Appreciative Enquiry

Appreciative inquiry is a type of evaluation research that pays attention to result-producing approaches. It is predicated on the belief that an organization will grow in whatever direction its stakeholders pay primary attention to: if all the attention is focused on problems, the organization becomes adept at identifying problems rather than at reproducing its successes. 

In carrying out appreciative inquiry, the researcher identifies the factors directly responsible for the positive results realized in the course of a project, analyses the reasons for these results, and intensifies the utilization of these factors. 

Evaluation Research Methodology 

There are four major evaluation research methods, namely output measurement, input measurement, impact assessment, and service quality assessment.

  • Output/Performance Measurement

Output measurement is a method employed in evaluative research that shows the results of an activity undertaken by an organization. In other words, performance measurement pays attention to the results achieved by the resources invested in a specific activity or organizational process. 

More than investing resources in a project, organizations must be able to track the extent to which these resources have yielded results, and this is where performance measurement comes in. Output measurement allows organizations to pay attention to the effectiveness and impact of a process rather than just the process itself. 

Other key indicators of performance measurement include user-satisfaction, organizational capacity, market penetration, and facility utilization. In carrying out performance measurement, organizations must identify the parameters that are relevant to the process in question, their industry, and the target markets. 
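As an illustration of how such indicators can be computed, the sketch below derives reach, cost per person reached, and facility utilization from a small, entirely hypothetical project record; the field names and figures are assumptions made for the example, not part of any specific tool.

```python
# Minimal sketch of output/performance measurement on an invented project record.
# All field names and figures are hypothetical.
project = {
    "budget_spent": 120_000,      # dollars invested in the activity
    "people_reached": 4_800,      # users who received the service
    "target_population": 20_000,  # size of the intended audience
    "sessions_delivered": 300,
    "session_capacity": 400,
}

reach = project["people_reached"] / project["target_population"]
cost_per_person = project["budget_spent"] / project["people_reached"]
utilization = project["sessions_delivered"] / project["session_capacity"]

print(f"Reach (market penetration): {reach:.0%}")
print(f"Cost per person reached:    ${cost_per_person:,.2f}")
print(f"Facility utilization:       {utilization:.0%}")
```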

5 Performance Evaluation Research Questions Examples

  • What is the cost-effectiveness of this project?
  • What is the overall reach of this project?
  • How would you rate the market penetration of this project?
  • How accessible is the project? 
  • Is this project time-efficient? 


  • Input Measurement

In evaluation research, input measurement entails assessing the amount of resources committed to a project or goal in an organization. This is one of the most common indicators in evaluation research because it allows organizations to track their investments. 

The most common indicator of input measurement is the budget, which allows organizations to evaluate and limit expenditure for a project. It is also important to measure non-monetary investments such as human capital (the number of persons needed for successful project execution) and production capital. 

5 Input Evaluation Research Questions Examples

  • What is the budget for this project?
  • What is the timeline of this process?
  • How many employees have been assigned to this project? 
  • Do we need to purchase new machinery for this project? 
  • How many third-parties are collaborators in this project? 


  • Impact/Outcomes Assessment

In impact assessment, the evaluation researcher focuses on how the product or project affects target markets, both directly and indirectly. Outcomes assessment is somewhat challenging because many times, it is difficult to measure the real-time value and benefits of a project for the users. 

In assessing the impact of a process, the evaluation researcher must pay attention to the improvement recorded by the users as a result of the process or project in question. Hence, it makes sense to focus on cognitive and affective changes, expectation-satisfaction, and similar accomplishments of the users. 
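One simple way to quantify the improvement described above is a pre/post comparison: measure the same indicator (for example, a knowledge score) before and after the project, then report the average change and the share of users who improved. The scores below are invented for illustration.

```python
# Hedged sketch of a pre/post impact comparison on invented knowledge scores (0-100).
pre_scores  = [55, 60, 42, 70, 65, 50, 58, 63]
post_scores = [68, 66, 55, 72, 80, 61, 57, 75]

changes = [post - pre for pre, post in zip(pre_scores, post_scores)]
mean_change = sum(changes) / len(changes)
improved = sum(1 for c in changes if c > 0) / len(changes)

print(f"Average change: {mean_change:+.1f} points")
print(f"Share of users who improved: {improved:.0%}")
```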

5 Impact Evaluation Research Questions Examples

  • How has this project affected you? 
  • Has this process affected you positively or negatively?
  • What role did this project play in improving your earning power? 
  • On a scale of 1-10, how excited are you about this project?
  • How has this project improved your mental health? 


  • Service Quality

Service quality is the evaluation research method that accounts for any differences between the expectations of the target markets and their impression of the undertaken project. Hence, it pays attention to the overall service quality assessment carried out by the users. 

It is not uncommon for organizations to build the expectations of target markets as they embark on specific projects. Service quality evaluation allows these organizations to track the extent to which the actual product or service delivery fulfils the expectations. 
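A common way to quantify this comparison of expectations and impressions is a gap score, in the spirit of SERVQUAL-style instruments: for each service dimension, subtract the average expectation rating from the average perception rating. The dimensions and 1-10 ratings below are invented for illustration.

```python
# Hedged sketch of an expectation-vs-perception gap score per service dimension.
# Dimension names and 1-10 ratings are invented.
from statistics import mean

expectations = {"reliability": [9, 8, 9], "responsiveness": [8, 9, 8], "empathy": [7, 8, 7]}
perceptions  = {"reliability": [7, 8, 8], "responsiveness": [9, 9, 8], "empathy": [6, 7, 6]}

for dimension in expectations:
    gap = mean(perceptions[dimension]) - mean(expectations[dimension])
    verdict = "meets/exceeds expectations" if gap >= 0 else "falls short of expectations"
    print(f"{dimension:15s} gap = {gap:+.2f}  ({verdict})")
```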

5 Service Quality Evaluation Questions

  • On a scale of 1-10, how satisfied are you with the product?
  • How helpful was our customer service representative?
  • How satisfied are you with the quality of service?
  • How long did it take to resolve the issue at hand?
  • How likely are you to recommend us to your network?


Uses of Evaluation Research 

  • Evaluation research is used by organizations to measure the effectiveness of activities and identify areas needing improvement. Findings from evaluation research are key to project and product advancements and are very influential in helping organizations realize their goals efficiently.     
  • The findings arrived at from evaluation research serve as evidence of the impact of the project embarked on by an organization. This information can be presented to stakeholders, customers, and can also help your organization secure investments for future projects. 
  • Evaluation research helps organizations to justify their use of limited resources and choose the best alternatives. 
  •  It is also useful in pragmatic goal setting and realization. 
  • Evaluation research provides detailed insights into projects embarked on by an organization. Essentially, it allows all stakeholders to understand multiple dimensions of a process, and to determine strengths and weaknesses. 
  • Evaluation research also plays a major role in helping organizations to improve their overall practice and service delivery. This research design allows organizations to weigh existing processes through feedback provided by stakeholders, and this informs better decision making. 
  • Evaluation research is also instrumental to sustainable capacity building. It helps you to analyze demand patterns and determine whether your organization requires more funds, upskilling or improved operations.

Data Collection Techniques Used in Evaluation Research

In gathering useful data for evaluation research, the researcher often combines quantitative and qualitative research methods . Qualitative research methods allow the researcher to gather information relating to intangible values such as market satisfaction and perception. 

On the other hand, quantitative methods are used by the evaluation researcher to assess numerical patterns, that is, quantifiable data. These methods help you measure impact and results, although they may not capture the context of the process. 

Quantitative Methods for Evaluation Research

  • Surveys

A survey is a quantitative method that allows you to gather information about a project from a specific group of people. Surveys are largely context-based and limited to target groups who are asked a set of structured questions in line with the predetermined context.

Surveys usually consist of close-ended questions that allow the evaluative researcher to gain insight into several  variables including market coverage and customer preferences. Surveys can be carried out physically using paper forms or online through data-gathering platforms like Formplus . 

  • Questionnaires

A questionnaire is a common quantitative research instrument deployed in evaluation research. Typically, it is an aggregation of different types of questions or prompts which help the researcher to obtain valuable information from respondents. 

  • Polls

A poll is a common method of opinion-sampling that allows you to weigh the perception of the public about issues that affect them. The best way to achieve accuracy in polling is to conduct the poll online using platforms like Formplus. 

Polls are often structured as Likert questions and the options provided always account for neutrality or indecision. Conducting a poll allows the evaluation researcher to understand the extent to which the product or service satisfies the needs of the users. 
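As a minimal illustration of summarizing a Likert-style poll with a neutral midpoint, the sketch below tallies invented responses on a 1-5 agreement scale and reports the distribution, the mean score, and the share of favorable (4-5) answers.

```python
# Minimal sketch for summarizing a 1-5 Likert poll; the responses are invented.
from collections import Counter
from statistics import mean

LABELS = {1: "Strongly disagree", 2: "Disagree", 3: "Neutral", 4: "Agree", 5: "Strongly agree"}
responses = [5, 4, 4, 3, 2, 5, 4, 3, 4, 5, 1, 4, 3, 4, 5]  # hypothetical poll data

counts = Counter(responses)
favorable = sum(1 for r in responses if r >= 4) / len(responses)

for score in sorted(LABELS):
    print(f"{LABELS[score]:18s} {counts.get(score, 0):3d}")
print(f"Mean score: {mean(responses):.2f}  |  Favorable (4-5): {favorable:.0%}")
```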

Qualitative Methods for Evaluation Research

  • One-on-One Interview

An interview is a structured conversation between two participants, usually the researcher and a user or member of the target market. One-on-one interviews can be conducted in person, via telephone, or through video conferencing apps like Zoom and Google Meet. 

  • Focus Groups

A focus group is a research method that involves interacting with a limited number of persons within your target market, who can provide insights on market perceptions and new products. 

  • Qualitative Observation

Qualitative observation is a research method that allows the evaluation researcher to gather useful information from the target audience through a variety of subjective approaches. This method is more in-depth than quantitative observation because it deals with a smaller sample size and utilizes inductive analysis. 

  • Case Studies

A case study is a research method that helps the researcher to gain a better understanding of a subject or process. Case studies involve in-depth research into a given subject, to understand its functionalities and successes. 

How to Use the Formplus Online Form Builder for an Evaluation Survey 

  • Sign into Formplus

In the Formplus builder, you can easily create your evaluation survey by dragging and dropping preferred fields into your form. To access the Formplus builder, you will need to create an account on Formplus. 

Once you do this, sign in to your account and click on “Create Form ” to begin. 


  • Edit Form Title

Click on the field provided to input your form title, for example, “Evaluation Research Survey”.


Click on the edit button to edit the form.

Add Fields: Drag and drop preferred form fields into your form in the Formplus builder inputs column. There are several field input options for surveys in the Formplus builder. 


Edit fields

Click on “Save”

Preview form.

  • Form Customization

With the form customization options in the form builder, you can easily change the outlook of your form and make it more unique and personalized. Formplus allows you to change your form theme, add background images, and even change the font according to your needs. 


  • Multiple Sharing Options

Formplus offers multiple form-sharing options that enable you to easily share your evaluation survey with respondents. You can use the direct social media sharing buttons to share your form link on your organization’s social media pages. 

You can send out your survey form as email invitations to your research subjects too. If you wish, you can share your form’s QR code or embed it on your organization’s website for easy access. 

Conclusion  

Conducting evaluation research allows organizations to determine the effectiveness of their activities at different phases. This type of research can be carried out using qualitative and quantitative data collection methods including focus groups, observation, telephone and one-on-one interviews, and surveys. 

Online surveys created and administered via data collection platforms like Formplus make it easier for you to gather and process information during evaluation research. With Formplus multiple form sharing options, it is even easier for you to gather useful data from target markets.


Experimental Evaluation of Compressive Properties of Early-Age Mortar and Concrete Hollow-Block Masonry Prisms within Construction Stages


1. Introduction

2. Compressive Properties of Masonry
3. Experimental Investigation of Compressive Properties of Early-Age Mortar Cubes
3.1. Test Setup for Evaluating Compressive Strength of Mortar Cube
3.2. Results and Discussions
4. Experimental Investigation of Compressive Properties of Early-Age Masonry Prisms
4.1. Test Setup for Compressive Tests of Masonry Prisms
4.2. Results and Discussions
5. Conclusions

  • The variation in the results of early-age mortar cubes and masonry prism testing was considerable. Therefore, the number of tests should be greater than the minimum number of tests suggested in masonry codes. For example, CSA-A179 recommends a minimum of six samples for fully cured mortar cube compressive testing; however, this study suggested an average of 20 tests for each t to minimize COV (less than 20–25%). Moreover, the threshold of 50% difference from the average of the data, suggested in masonry codes, was not enough for the analysis of early-age masonry data. Therefore, outlier analysis was necessary, as it reduced the COV of some groups of data by over 80%.
  • Based on regression analysis, the σ-ε behavior of mortar cubes for all t groups, as well as the regression models for σ_mc and E_mc against t, can be predicted with R² greater than 0.95, even for t groups that were not addressed in the experimental study. The σ-ε plot of t ≤ 18 h samples does not have any peak point; this study therefore suggested a threshold of ε = 0.03 as the failure criterion for such samples, by adjusting the maximum usable ε for masonry elements in the ultimate limit state design method presented in masonry design codes. Moreover, the PE model was modified based on the output of this research: only the parabolic part of the PE model should be used for samples with t ≤ 18 h, whereas the original PE model, including both the parabolic and linear parts (corresponding to the ascending and descending parts of the σ-ε plot, respectively), can be applied for the analysis of samples older than 18 h.
  • E_mc and σ_mc increased logarithmically as t increased (a minimal curve-fitting sketch of this trend follows this list), and the developed regression models did not intersect the origin because mortar cubes tested at ages less than 20.8 h had not yet undergone their primary hydration phase and there was no cohesion in the mortar itself. In this case, the failure mode was more akin to a soil failure than to the shear-compression failure (conical shear pattern) associated with fully cured cementitious materials. For example, 24 h mortar obtains only ~5% of the σ_mc of fully cured mortar. However, the failure mode of older mortar cubes (t > 20.8 h) resembled the failure mode of fully cured mortar or concrete (conical shear failure) because of the formation of calcium silicate hydrate and calcium hydroxide.
  • Like mortar cubes, the failure mode of masonry prisms depends on the hydration phase of the mortar. The failure mode of masonry prisms for t ≤ 20.8 h, before the primary hydration phase has started, is determined by the failure of the mortar, which is deformation and detachment. However, the failure mode of prisms after the start of the hydration phase (t > 20.8 h) depends on the failure mode of the blocks. Based on the results, concrete blocks follow the failure modes presented in ASTM-C1314, including conical break, cone and shear, cone and split, tension break, semi-conical break, shear break, and face shell separation.
  • From a performance perspective, there was a practical limit to the compressive loads that could be resisted by the early-age masonry without detracting from the appearance of the masonry or excessively smooshing the mortar joint, a limit which is, of course, qualitative. For example, 18 h samples obtained only ~13% of their full compressive strength. However, from a life-safety perspective, the compressive failure load for masonry prisms was independent of t.
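A minimal curve-fitting sketch of the logarithmic trend reported in the conclusions is shown below. It fits y = a·ln(t) + b by least squares with NumPy; the strength values are invented placeholders, not data from the paper.

```python
# Hedged sketch: fit a logarithmic model y = a*ln(t) + b, as in the trend the
# authors report for sigma_mc and E_mc versus curing age t. Data are invented.
import numpy as np

t_hours = np.array([24, 48, 72, 168, 672], dtype=float)  # curing ages (h)
strength = np.array([0.9, 2.1, 3.0, 4.6, 6.8])           # hypothetical sigma_mc values (MPa)

a, b = np.polyfit(np.log(t_hours), strength, deg=1)      # least-squares fit in ln(t)
predicted = a * np.log(t_hours) + b
ss_res = np.sum((strength - predicted) ** 2)
ss_tot = np.sum((strength - strength.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot

print(f"sigma_mc(t) ≈ {a:.2f}·ln(t) + {b:.2f}   (R² = {r_squared:.3f})")
```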


  • Khan, K.; Amin, M.N. Influence of fineness of volcanic ash and its blends with quarry dust and slag on compressive strength of mortar under different curing temperatures. Constr. Build. Mater. 2017 , 154 , 514–528. [ Google Scholar ] [ CrossRef ]
  • Luso, E.; Lourenço, P.B. Bond strength characterization of commercially available grouts for masonry. Constr. Build. Mater. 2017 , 144 , 317–326. [ Google Scholar ] [ CrossRef ]
  • AMI-20 ; Building Structure Cost Comparison Report: Multi-Residential Structures. Atlantic Masonry Institute: Dartmouth, NS, Canada, 2020. Available online: https://www.canadamasonrydesigncentre.com/research/building-structure-cost-comparison-study-in-atlantic-canada-multi-residential-structures/ (accessed on 30 July 2024).
  • Abasi, A.; Hassanli, R.; Vincent, T.; Manalo, A. Influence of prism geometry on the compressive strength of concrete masonry. Constr. Build. Mater. 2020 , 264 , 120182. [ Google Scholar ] [ CrossRef ]
  • Hassanli, R.; ElGawady, M.A.; Mills, J.E. Effect of dimensions on the compressive strength of concrete masonry prisms. Adv. Civ. Eng. Mater. 2015 , 4 , 175–201. [ Google Scholar ] [ CrossRef ]
  • Drougkas, A.; Roca, P.; Molins, C. Compressive strength and elasticity of pure lime mortar masonry. Mater. Struct. 2016 , 49 , 983–999. Available online: https://link.springer.com/article/10.1617/s11527-015-0553-2 (accessed on 30 July 2024). [ CrossRef ]
  • Abasi, A.; Hassanli, R.; Vincent, T.; Manalo, A. Dependency of the compressive strength of concrete masonry on prism’s size. In Proceedings of the 14th Canadian Masonry Symposium, Quebec, QC, Canada, 17–20 May 2021. [ Google Scholar ]
  • Łątka, D. Prediction of Mortar Compressive Strength Based on Modern Minor-Destructive Tests. Materials 2023 , 16 , 2402. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Bolhassani, M.; Hamid, A.A.; Lau, A.C.W.; Moon, F. Simplified micro modeling of partially grouted masonry assemblages. Constr. Build. Mater. 2015 , 83 , 159–173. [ Google Scholar ] [ CrossRef ]
  • ACI-530 ; Building Code Requirements and Specification for Masonry Structures and Companion Commentaries. American Concrete Institute: Farmington Hills, MI, USA, 2013.
  • TMS-402/602 ; Building Code Requirements and Specifications for Masonry Structures. The Masonry Society: Longmont, CO, USA, 2016.
  • CSA-S304 ; Design of Masonry Structures. Canadian Standards Association: Mississauga, ON, Canada, 2014.
  • Abasi, A.; Sadhu, A.; Dunphy, K.; Banting, B. Evaluation of tensile properties of early-age concrete-block masonry assemblages. Constr. Build. Mater. 2023 , 369 , 130542. [ Google Scholar ] [ CrossRef ]
  • Dunphy, K.; Sadhu, A.; Banting, B. Experimental and numerical investigation of tensile properties of early-age masonry. Mater. Struct. 2021 , 54 , 1–18. Available online: https://link.springer.com/article/10.1617/s11527-021-01635-8 (accessed on 30 July 2024). [ CrossRef ]
  • MCAA-12 ; Standard Practice for Bracing Masonry Walls Under Construction. Mason Contractors Association of America: Algonquin, IL, USA, 2012. Available online: https://masoncontractors.org/2013/01/09/standard-practice-for-bracing-masonry-walls-under-construction-now-available/ (accessed on 30 July 2024).
  • CSA-A371 ; Masonry Construction for Buildings. Canadian Standards Association: Mississauga, ON, Canada, 2014.
  • Jin, Z.; Asce, S.M.; Gambatese, J.; Asce, M. Exploring the potential of technological innovations for temporary structures: A survey study. J. Constr. Eng. Manag. 2020 , 146 , 04020049. Available online: https://ascelibrary.org/doi/abs/10.1061/%28ASCE%29CO.1943-7862.0001828 (accessed on 30 July 2024). [ CrossRef ]
  • IMI. Internal Bracing Design Guide for Masonry Walls Under Construction ; International Masonry Institute: Bowie, MD, USA, 2013. [ Google Scholar ]
  • NBC-20 ; National Building Code of Canada. National Research Council Canada: Ottawa, ON, Canada, 2020.
  • ASTM-C270 ; Standard Specification for Mortar for Unit Masonry. American Society for Testing and Materials: Philadelphia, PA, USA, 2019. Available online: https://www.astm.org/c0270-19ae01.html (accessed on 30 July 2024).
  • CSA-A179 ; Mortar and Grout for Unit Masonry. Canadian Standards Association: Mississauga, ON, Canada, 2014.
  • ASTM-C109 ; Standard Test Method for Compressive Strength of Hydraulic Cement Mortars (Using 2-in. or [50-mm] Cube Specimens). American Society for Testing and Materials: Philadelphia, PA, USA, 2013. Available online: https://webstore.ansi.org/Standards/ASTM/astmc109c109m13?gclid=CjwKCAjwo_KXBhAaEiwA2RZ8hNnl2xAlPHVlwy3SaCEpPiEfMsmPQizlTCLPdQqasRNqzkquZtrw2xoCM2cQAvD_BwE (accessed on 30 July 2024).
  • Kandymov, N.; Mohd Hashim, N.F.; Ismail, S.; Durdyev, S. Derivation of Empirical Relationships to Predict Cambodian Masonry Strength. Materials 2022 , 15 , 5030. [ Google Scholar ] [ CrossRef ] [ PubMed ]
  • Nalon, G.H.; Ribeiro, J.C.L.; Pedroti, L.G.; da Silva, R.M.; de Araújo, E.N.D.; Santos, R.F.; de Lima, G.E.S. Review of recent progress on the compressive behavior of masonry prisms. Constr. Build. Mater. 2022 , 320 , 126181. [ Google Scholar ] [ CrossRef ]
  • Wu, J.; Bai, G.; Kang, K.; Liu, Y.; Wang, P.; Fu, S. Compressive Strength and Block-Mortar Bond of a New-Type of Insulation Hollow Block. IOP Conf. Ser. Mater. Sci. Eng. 2018 , 381 , 012024. Available online: https://iopscience.iop.org/article/10.1088/1757-899X/381/1/012024 (accessed on 30 July 2024). [ CrossRef ]
  • Martins, R.O.G.; Nalon, G.H.; Alvarenga, R.d.C.S.S.; Pedroti, L.G.; Ribeiro, J.C.L. Influence of blocks and grout on compressive strength and stiffness of concrete masonry prisms. Constr. Build. Mater. 2018 , 182 , 233–241. [ Google Scholar ] [ CrossRef ]
  • Ravula, M.B.; Subramaniam, K.V.L. Experimental investigation of compressive failure in masonry brick assemblages made with soft brick. Mater. Struct. 2017 , 50 , 1–11. Available online: https://link.springer.com/article/10.1617/s11527-016-0926-1 (accessed on 30 July 2024). [ CrossRef ]
  • Obaidat, A.T.; El Ezz, A.A.; Galal, K. Compression behavior of confined concrete masonry boundary elements. Eng. Struct. 2017 , 132 , 562–575. [ Google Scholar ] [ CrossRef ]
  • Thamboo, J.A.; Dhanasekar, M. Correlation between the performance of solid masonry prisms and wallettes under compression. J. Build. Eng. 2019 , 22 , 429–438. [ Google Scholar ] [ CrossRef ]
  • Sarhat, S.R.; Sherwood, E.G. The prediction of compressive strength of ungrouted hollow concrete block masonry. Constr. Build. Mater. 2014 , 58 , 111–121. [ Google Scholar ] [ CrossRef ]
  • Caldeira, F.E.; Nalon, G.H.; de Oliveira DS Pedroti, L.G.; Ribeiro, J.C.L.; Ferreira, F.A.; de Carvalho, J.M.F. Influence of joint thickness and strength of mortars on the compressive behavior of prisms made of normal and high-strength concrete blocks. Constr. Build. Mater. 2020 , 234 , 117419. [ Google Scholar ] [ CrossRef ]
  • Fonseca, F.S.; Fortes, E.S.; Parsekian, G.A.; Camacho, J.S. Compressive strength of high-strength concrete masonry grouted prisms. Constr. Build. Mater. 2019 , 202 , 861–876. [ Google Scholar ] [ CrossRef ]
  • Henrique Nalon, G.; Santos, C.F.R.; Pedroti, L.G.; Ribeiro, J.C.L.; Veríssimo, G.d.S.; Ferreira, F.A. Strength and failure mechanisms of masonry prisms under compression, flexure and shear: Components’ mechanical properties as design constraints. J. Build. Eng. 2020 , 28 , 101038. [ Google Scholar ] [ CrossRef ]
  • AbdelRahman, B.; Galal, K. Influence of pre-wetting, non-shrink grout, and scaling on the compressive strength of grouted concrete masonry prisms. Constr. Build. Mater. 2020 , 241 , 117985. [ Google Scholar ] [ CrossRef ]
  • Ouyang, J.; Wu, F.; Lü, W.; Huang, H.; Zhou, X. Prediction of compressive stress-strain curves of grouted masonry. Constr. Build. Mater. 2019 , 229 , 116826. [ Google Scholar ] [ CrossRef ]
  • Wan, D.; Liu, R.; Gao, T.; Jing, D.; Lu, F. Effect of Alkanolamines on the Early-Age Strength and Drying Shrinkage of Internal Curing of Mortars. Appl. Sci. 2022 , 12 , 9536. [ Google Scholar ] [ CrossRef ]
  • Zahra, T.; Thamboo, J.; Asad, M. Compressive strength and deformation characteristics of concrete block masonry made with different mortars, blocks and mortar beddings types. J. Build. Eng. 2021 , 38 , 102213. [ Google Scholar ] [ CrossRef ]
  • Zhou, Q.; Wang, F.; Zhu, F.; Yang, X. Stress-strain model for hollow concrete block masonry under uniaxial compression. Mater. Struct. 2017 , 50 , 1–12. [ Google Scholar ] [ CrossRef ]
  • Liu, B.; Drougkas, A.; Sarhosis, V. A material characterisation framework for assessing brickwork masonry arch bridges: From material level to component level testing. Constr. Build. Mater. 2023 , 397 , 132347. [ Google Scholar ] [ CrossRef ]
  • Thamboo, J. Performance of masonry columns confined with composites under axial compression: A state-of-the-art review. Constr. Build. Mater. 2021 , 274 , 121791. [ Google Scholar ] [ CrossRef ]
  • Kaushik, H.B.; Rai, D.C.; Jain, S.K. Stress-Strain Characteristics of Clay Brick Masonry under Uniaxial Compression. J. Mater. Civ. Eng. 2007 , 19 , 728–739. Available online: https://ascelibrary.org/doi/abs/10.1061/%28ASCE%290899-1561%282007%2919%3A9%28728%29 (accessed on 30 July 2024). [ CrossRef ]
  • Mohamad, G.; Fonseca, F.S.; Vermeltfoort, A.T.; Martens, D.R.W.; Lourenço, P.B. Strength, behavior, and failure mode of hollow concrete masonry constructed with mortars of different strengths. Constr. Build. Mater. 2017 , 134 , 489–496. [ Google Scholar ] [ CrossRef ]
  • Lakshani, M.M.T.; Jayathilaka, T.K.G.A.; Thamboo, J.A. Experimental investigation of the unconfined compressive strength characteristics of masonry mortars. J. Build. Eng. 2020 , 32 , 101558. [ Google Scholar ] [ CrossRef ]
  • Drysdale, R.; Hamid, A. Masonry Structures: Behavior and Design ; Canada Masonry Design Center (CMDC): Mississauga, ON, Canada, 2005. [ Google Scholar ]
  • Drougkas, A.; Verstrynge, E.; Hayen, R.; Van Balen, K. The confinement of mortar in masonry under compression: Experimental data and micro-mechanical analysis. Int. J. Solids Struct. 2019 , 162 , 105–120. [ Google Scholar ] [ CrossRef ]
  • CSA-A3004 ; Test Methods and Standard Practices for Cementitious Materials for Use in Concrete and Masonry. Canadian Standards Association: Mississauga, ON, Canada, 2013.
  • Tronci, E.M.; De Angelis, M.; Betti, R.; Altomare, V. Vibration-based structural health monitoring of a RC-masonry tower equipped with non-conventional TMD. Eng. Struct. 2020 , 224 , 111212. [ Google Scholar ] [ CrossRef ]
  • Sarr, C.A.T.; Chataigner, S.; Gaillet, L.; Godin, N. Nondestructive evaluation of FRP-reinforced structures bonded joints using acousto-ultrasonic: Towards diagnostic of damage state. Constr. Build. Mater. 2021 , 313 , 125499. [ Google Scholar ] [ CrossRef ]
  • ASTM-E111 ; Standard Test Method for Young’s Modulus, Tangent Modulus, and Chord Modulus. American Society for Testing and Materials: Philadelphia, PA, USA, 2017.
  • Nazar, M.E.; Sinha, S.N. Loading-unloading curves of interlocking grouted stabilised sand-flyash brick masonry. Mater. Struct. 2007 , 40 , 667–678. Available online: https://link.springer.com/article/10.1617/s11527-006-9177-x (accessed on 30 July 2024). [ CrossRef ]
  • Cavaleri, L.; Failla, A.; La Mendola, L.; Papia, M. Experimental and analytical response of masonry elements under eccentric vertical loads. Eng. Struct. 2005 , 27 , 1175–1184. [ Google Scholar ] [ CrossRef ]
  • Priestley, M.J.N.; Elder, D.M. Stress-strain curves for unconfined and confined concrete masonry. ACI J. Proc. 1983 , 80 , 192–201. [ Google Scholar ]
  • Dejaeghere, I.; Sonebi, M.; De Schutter, G. Influence of nano-clay on rheology, fresh properties, heat of hydration and strength of cement-based mortars. Constr. Build. Mater. 2019 , 222 , 73–85. [ Google Scholar ] [ CrossRef ]
  • Zagaroli, A.; Kubica, J.; Galman, I.; Falkjar, K. Study on the Mechanical Properties of Two General-Purpose Cement–Lime Mortars Prepared Based on Air Lime. Materials 2024 , 17 , 1001. [ Google Scholar ] [ CrossRef ]
  • ASTM-C1716 ; Standard Specification for Compression Testing Machine Requirements for Concrete Masonry Units, Related Units, and Prisms. American Society for Testing and Materials: Philadelphia, PA, USA, 2021.
  • ASTM-C1314 ; Standard Test Method for Compressive Strength of Masonry Prisms. American Society for Testing and Materials: Philadelphia, PA, USA, 2018.
  • CSA-A165 ; CSA Standards on Concrete Masonry Units. Canadian Standards Association: Mississauga, ON, Canada, 2014.


t (h)   ε Corresponding to the Inflection Points
3       0.005
4       0.007
6       0.014
13      0.009
18      0.010
24      0.015
48      0.014
72      0.017
168     0.012
672     0.0126
t (h)  N    E (kPa)     σ (kPa)     ε      a       b      c       d      R² (Cubic)  R² (Linear)
3      16   1.96 × 10   4.96        0.030  −0.911  1.58   -       -      0.63        -
4      13   3.16 × 10   6.89        0.030  −0.995  1.36   -       -      0.61        -
6      15   1.46 × 10   3.22 × 10   0.030  −0.684  1.67   -       -      0.75        -
13     11   1.21 × 10   2.08 × 10   0.030  −0.896  1.92   -       -      0.92        -
18     15   4.66 × 10   6.45 × 10   0.029  −1.58   2.48   -       -      0.64        -
24     11   5.45 × 10   9.48 × 10   0.024  −1.13   2.10   −0.064  1.03   0.62        0.53
48     12   3.29 × 10   5.18 × 10   0.021  −0.847  1.85   −0.314  1.32   0.72        0.57
72     14   5.56 × 10   9.19 × 10   0.017  −0.628  1.66   −0.296  1.33   0.61        0.49
168    10   1.45 × 10   1.56 × 10   0.014  −0.880  1.90   −0.457  1.47   0.66        0.73
672    11   2.14 × 10   2.40 × 10   0.012  −0.506  1.53   −0.483  1.51   0.84        0.71
t (h) (COV%)   N    σ* (kPa) (COV%)     δ** (mm) (COV%)     ε (COV%)        σ at δ = 1.17 mm (kPa)   σ at δ = 3 mm (kPa)
6 (6.6%)       10   1.16 × 10 (8.7%)    1.15 × 10 (19.3%)   -               3.81 × 10                1.27 × 10
18 (4.6%)      5    1.10 × 10 (8.1%)    1.07 × 10 (15.3%)   -               1.60 × 10                4.10 × 10
168 (0.5%)     10   1.15 × 10 (13.2%)   1.36 (29.4%)        0.003 (29.4%)   1.15 × 10                -
The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Abasi, A.; Banting, B.; Sadhu, A. Experimental Evaluation of Compressive Properties of Early-Age Mortar and Concrete Hollow-Block Masonry Prisms within Construction Stages. Materials 2024, 17, 3970. https://doi.org/10.3390/ma17163970


  17. Design and Implementation of Evaluation Research

    Evaluation has its roots in the social, behavioral, and statistical sciences, and it relies on their principles and methodologies of research, including experimental design, measurement, statistical tests, and direct observation. What distinguishes evaluation research from other social science is that its subjects are ongoing social action programs that are intended to produce individual or ...

  18. [PDF] Evaluation Research: An Overview

    Evaluation Research: An Overview. R. Powell. Published in Library Trends 6 September 2006. Education, Sociology. TLDR. It is concluded that evaluation research should be a rigorous, systematic process that involves collecting data about organizations, processes, programs, services, and/or resources that enhance knowledge and decision making and ...

  19. Evaluation: Sage Journals

    The journal Evaluation launched in 1995, publishes fully refereed papers and aims to advance the theory, methodology and practice of evaluation. We favour articles that bridge theory and practice whether through generalizable and exemplary cases or … | View full journal description. This journal is a member of the Committee on Publication ...

  20. Evaluation Research Design: Examples, Methods & Types

    Surveys can be carried out physically using paper forms or online through data-gathering platforms like Formplus. ... Qualitative Methods for Evaluation Research. One-on-One Interview; An interview is a structured conversation involving two participants; usually the researcher and the user or a member of the target market. One-on-One interviews ...

  21. Finding your way: the difference between research and evaluation

    A broadly accepted way of thinking about how evaluation and research are different comes from Michael Scriven, an evaluation expert and professor. He defines evaluation this way in his Evaluation Thesaurus: "Evaluation determines the merit, worth, or value of things.". He goes on to explain that "Social science research, by contrast, does ...

  22. Materials

    Feature papers represent the most advanced research with significant potential for high impact in the field. A Feature Paper should be a substantial original Article that involves several techniques or approaches, provides an outlook for future research directions and describes possible research applications. ... "Experimental Evaluation of ...

  23. Early findings from the NHS Type 2 Diabetes Path to Remission Programme

    Findings from the NHS T2DR programme show that remission of type 2 diabetes is possible outside of research settings, through at-scale service delivery. However, the rate of remission achieved is lower and the ascertainment of data is more limited with implementation in the real world than in randomised controlled trial settings.

  24. Cisco Security Products and Solutions

    "From securing stadiums, broadcasts, and fans to protecting the largest live sporting event in America, the right tools and the right team are key in making sure things run smoothly, avoiding disruptions to the game, and safeguarding the data and devices that make mission-critical gameday operations possible."