Ethical Considerations in Research | Types & Examples

Published on October 18, 2021 by Pritha Bhandari. Revised on May 9, 2024.

Ethical considerations in research are a set of principles that guide your research designs and practices. Scientists and researchers must always adhere to a certain code of conduct when collecting data from people.

The goals of human research often include understanding real-life phenomena, studying effective treatments, investigating behaviors, and improving lives in other ways. What you decide to research and how you conduct that research involve key ethical considerations.

These considerations work to

  • protect the rights of research participants
  • enhance research validity
  • maintain scientific or academic integrity

Table of contents

  • Why do research ethics matter?
  • Getting ethical approval for your study
  • Types of ethical issues
  • Voluntary participation
  • Informed consent
  • Anonymity
  • Confidentiality
  • Potential for harm
  • Results communication
  • Examples of ethical failures
  • Other interesting articles
  • Frequently asked questions about research ethics

Why do research ethics matter?

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe for research subjects.

You’ll balance pursuing important research objectives with using ethical research methods and procedures. It’s always necessary to prevent permanent or excessive harm to participants, whether inadvertent or not.

Defying research ethics will also lower the credibility of your research because it’s hard for others to trust your data if your methods are morally questionable.

Even if a research idea is valuable to society, it doesn’t justify violating the human rights or dignity of your study participants.


Getting ethical approval for your study

Before you start any study involving data collection with people, you’ll submit your research proposal to an institutional review board (IRB).

An IRB is a committee that checks whether your research aims and research design are ethically acceptable and follow your institution’s code of conduct. They check that your research materials and procedures are up to code.

If successful, you’ll receive IRB approval, and you can begin collecting data according to the approved procedures. If you want to make any changes to your procedures or materials, you’ll need to submit a modification application to the IRB for approval.

If unsuccessful, you may be asked to re-submit with modifications or your research proposal may receive a rejection. To get IRB approval, it’s important to explicitly note how you’ll tackle each of the ethical issues that may arise in your study.

Types of ethical issues

There are several ethical issues you should always pay attention to in your research design, and these issues can overlap with each other.

You’ll usually outline ways you’ll deal with each issue in your research proposal if you plan to collect data from participants.

  • Voluntary participation: Your participants are free to opt in or out of the study at any point in time.
  • Informed consent: Participants know the purpose, benefits, risks, and funding behind the study before they agree or decline to join.
  • Anonymity: You don’t know the identities of the participants. Personally identifiable data is not collected.
  • Confidentiality: You know who the participants are, but you keep that information hidden from everyone else. You anonymize personally identifiable data so that it can’t be linked to other data by anyone else.
  • Potential for harm: Physical, social, psychological, and all other types of harm are kept to an absolute minimum.
  • Results communication: You ensure your work is free of plagiarism or research misconduct, and you accurately represent your results.

Voluntary participation

Voluntary participation means that all research subjects are free to choose to participate without any pressure or coercion.

All participants are able to withdraw from, or leave, the study at any point without feeling an obligation to continue. Your participants don’t need to provide a reason for leaving the study.

It’s important to make it clear to participants that there are no negative consequences or repercussions to their refusal to participate. After all, they’re taking the time to help you in the research process, so you should respect their decisions without trying to change their minds.

Voluntary participation is an ethical principle protected by international law and many scientific codes of conduct.

Take special care to ensure there’s no pressure on participants when you’re working with vulnerable groups of people who may find it hard to stop the study even when they want to.


Informed consent

Informed consent refers to a situation in which all potential participants receive and understand all the information they need to decide whether they want to participate. This includes information about the study’s benefits, risks, funding, and institutional approval.

You make sure to provide all potential participants with all the relevant information about

  • what the study is about
  • the risks and benefits of taking part
  • how long the study will take
  • your supervisor’s contact information and the institution’s approval number

Usually, you’ll provide participants with a text for them to read and ask them if they have any questions. If they agree to participate, they can sign or initial the consent form. Note that this may not be sufficient for informed consent when you work with particularly vulnerable groups of people.

If you’re collecting data from people with low literacy, make sure to verbally explain the consent form to them before they agree to participate.

For participants with very limited English proficiency, you should always translate the study materials or work with an interpreter so they have all the information in their first language.

In research with children, you’ll often need informed permission for their participation from their parents or guardians. Although children cannot give informed consent, it’s best to also ask for their assent (agreement) to participate, depending on their age and maturity level.

Anonymity

Anonymity means that you don’t know who the participants are and you can’t link any individual participant to their data.

You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, and videos.

In many cases, it may be impossible to truly anonymize data collection. For example, data collected in person or by phone cannot be considered fully anonymous because some personal identifiers (demographic information or phone numbers) are impossible to hide.

You’ll also need to collect some identifying information if you give your participants the option to withdraw their data at a later stage.

Data pseudonymization is an alternative method where you replace identifying information about participants with pseudonymous, or fake, identifiers. The data can still be linked to participants but it’s harder to do so because you separate personal information from the study data.
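To make this concrete, here is a minimal Python sketch of pseudonymization. It is illustrative only: the record fields and the names `participants`, `linkage`, and `study_data` are assumptions for the example, not part of any particular study protocol. The key design point is that the table linking pseudonyms to identities is kept separate from the study data and under restricted access.

    # Minimal pseudonymization sketch (illustrative; field names are assumed).
    import secrets

    participants = [
        {"name": "Alice Smith", "score": 42},
        {"name": "Bob Jones", "score": 37},
    ]

    linkage = {}     # pseudonym -> identity; store separately, restrict access
    study_data = []  # de-identified records used for analysis

    for record in participants:
        pseudonym = f"P-{secrets.token_hex(4)}"  # random, non-guessable ID
        linkage[pseudonym] = record["name"]
        study_data.append({"id": pseudonym, "score": record["score"]})

    print(study_data)  # no direct identifiers; re-linking requires the linkage table

Because the pseudonyms are random rather than derived from the identifiers, the study data alone cannot be reversed; anyone holding only `study_data` would also need the separately stored `linkage` table to re-identify a participant.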

Confidentiality

Confidentiality means that you know who the participants are, but you remove all identifying information from your report.

All participants have a right to privacy, so you should protect their personal data for as long as you store or use it. Even when you can’t collect data anonymously, you should secure confidentiality whenever you can.

Some research designs aren’t conducive to confidentiality, but it’s important to make all attempts and inform participants of the risks involved.

Potential for harm

As a researcher, you have to consider all possible sources of harm to participants. Harm can come in many different forms.

  • Psychological harm: Sensitive questions or tasks may trigger negative emotions such as shame or anxiety.
  • Social harm: Participation can involve social risks, public embarrassment, or stigma.
  • Physical harm: Pain or injury can result from the study procedures.
  • Legal harm: Reporting sensitive data could lead to legal risks or a breach of privacy.

It’s best to consider every possible source of harm in your study as well as concrete ways to mitigate them. Involve your supervisor to discuss steps for harm reduction.

Make sure to disclose all possible risks of harm to participants before the study to get informed consent. If there is a risk of harm, prepare to provide participants with resources, counseling, or medical services if needed.

For example, if your survey includes sensitive questions, inform participants about the sensitive nature of the survey beforehand and assure them that their responses will be confidential.

Results communication

The way you communicate your research results can sometimes involve ethical issues. Good science communication is honest, reliable, and credible. It’s best to make your results as transparent as possible.

Take steps to actively avoid plagiarism and research misconduct wherever possible.

Plagiarism

Plagiarism means submitting others’ works as your own. Although it can be unintentional, copying someone else’s work without proper credit amounts to stealing. It’s an ethical problem in research communication because you may benefit by harming other researchers.

Self-plagiarism is when you republish or re-submit parts of your own papers or reports without properly citing your original work.

This is problematic because you may benefit from presenting your ideas as new and original even though they’ve already been published elsewhere in the past. You may also be infringing on your previous publisher’s copyright, violating an ethical code, or wasting time and resources by doing so.

In extreme cases of self-plagiarism, entire datasets or papers are sometimes duplicated. These are major ethical violations because they can skew research findings if taken as original data.

For example, you might notice that two published studies have similar characteristics even though they are from different years. Their sample sizes, locations, treatments, and results are highly similar, and the studies share one author in common.

Research misconduct

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement about data analyses.

Research misconduct is a serious ethical issue because it can undermine academic integrity and institutional credibility. It leads to a waste of funding and resources that could have been used for alternative research.

A well-known example is Andrew Wakefield’s retracted study claiming a link between the MMR vaccine and autism. Later investigations revealed that he and his colleagues fabricated and manipulated their data to show a nonexistent link between vaccines and autism. Wakefield also neglected to disclose important conflicts of interest, and his medical license was taken away.

This fraudulent work sparked vaccine hesitancy among parents and caregivers. The rate of MMR vaccinations in children fell sharply, and measles outbreaks became more common due to a lack of herd immunity.

Examples of ethical failures

Research scandals with ethical failures are littered throughout history, but some took place not that long ago.

Some scientists in positions of power have historically mistreated or even abused research participants to investigate research problems at any cost. These participants were prisoners, patients under their care, or people who otherwise trusted them to treat them with dignity.

To demonstrate the importance of research ethics, we’ll briefly review two research studies that violated human rights in modern history.

During World War II, Nazi doctors performed medical experiments on concentration camp prisoners without their consent. These experiments were inhumane and resulted in trauma, permanent disabilities, or death in many cases.

After some Nazi doctors were put on trial for their crimes, the Nuremberg Code of research ethics for human experimentation was developed in 1947 to establish a new standard for human experimentation in medical research.

In the Tuskegee syphilis study, which began in 1932, researchers recruited Black men with syphilis under the pretense of offering free medical care. In reality, the actual goal was to study the effects of the disease when left untreated, and the researchers never informed participants about their diagnoses or the research aims.

Although participants experienced severe health problems, including blindness and other complications, the researchers only pretended to provide medical care.

When treatment became possible in 1943, 11 years after the study began, none of the participants were offered it, despite their health conditions and high risk of death.

Ethical failures like these resulted in severe harm to participants, wasted resources, and lower trust in science and scientists. This is why all research institutions have strict ethical guidelines for performing research.

Other interesting articles

If you want to know more about statistics, methodology, or research bias, make sure to check out some of our other articles with explanations and examples.

  • Normal distribution
  • Measures of central tendency
  • Chi square tests
  • Confidence interval
  • Quartiles & Quantiles
  • Cluster sampling
  • Stratified sampling
  • Thematic analysis
  • Cohort study
  • Peer review
  • Ethnography

Research bias

  • Implicit bias
  • Cognitive bias
  • Conformity bias
  • Hawthorne effect
  • Availability heuristic
  • Attrition bias
  • Social desirability bias

Frequently asked questions about research ethics

What are ethical considerations in research?

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others.

These considerations protect the rights of research participants, enhance research validity , and maintain scientific integrity.

Why do research ethics matter?

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

What is the difference between anonymity and confidentiality?

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations.

How can you keep data anonymous or confidential?

You can only guarantee anonymity by not collecting any personally identifying information—for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.
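As a small illustration, the following Python sketch reports only group-level summaries. The field names and groups are assumptions for the example; the point is that individual rows never appear in the report, only counts and means.

    # Illustrative sketch: publish aggregate statistics, not individual records.
    from statistics import mean

    responses = [
        {"group": "treatment", "score": 72},
        {"group": "treatment", "score": 65},
        {"group": "control", "score": 58},
        {"group": "control", "score": 61},
    ]

    # Collect scores per group, then report only counts and means.
    groups = {}
    for r in responses:
        groups.setdefault(r["group"], []).append(r["score"])

    for name, scores in groups.items():
        print(f"{name}: n={len(scores)}, mean score={mean(scores):.1f}")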

What is research misconduct?

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.

Cite this Scribbr article


Bhandari, P. (2024, May 09). Ethical Considerations in Research | Types & Examples. Scribbr. Retrieved June 10, 2024, from https://www.scribbr.com/methodology/research-ethics/


National Institute of Environmental Health Sciences

What is Ethics in Research & Why is it Important?

by David B. Resnik, J.D., Ph.D.

December 23, 2020

The ideas and opinions expressed in this essay are the author’s own and do not necessarily represent those of the NIH, NIEHS, or US government.


When most people think of ethics (or morals), they think of rules for distinguishing between right and wrong, such as the Golden Rule ("Do unto others as you would have them do unto you"), a code of professional conduct like the Hippocratic Oath ("First of all, do no harm"), a religious creed like the Ten Commandments ("Thou Shalt not kill..."), or wise aphorisms like the sayings of Confucius. This is the most common way of defining "ethics": norms for conduct that distinguish between acceptable and unacceptable behavior.

Most people learn ethical norms at home, at school, in church, or in other social settings. Although most people acquire their sense of right and wrong during childhood, moral development occurs throughout life and human beings pass through different stages of growth as they mature. Ethical norms are so ubiquitous that one might be tempted to regard them as simple common sense. On the other hand, if morality were nothing more than common sense, then why are there so many ethical disputes and issues in our society?


One plausible explanation of these disagreements is that all people recognize some common ethical norms but interpret, apply, and balance them in different ways in light of their own values and life experiences. For example, two people could agree that murder is wrong but disagree about the morality of abortion because they have different understandings of what it means to be a human being.

Most societies also have legal rules that govern behavior, but ethical norms tend to be broader and more informal than laws. Although most societies use laws to enforce widely accepted moral standards and ethical and legal rules use similar concepts, ethics and law are not the same. An action may be legal but unethical or illegal but ethical. We can also use ethical concepts and principles to criticize, evaluate, propose, or interpret laws. Indeed, in the last century, many social reformers have urged citizens to disobey laws they regarded as immoral or unjust. Peaceful civil disobedience is an ethical way of protesting laws or expressing political viewpoints.

Another way of defining 'ethics' focuses on the disciplines that study standards of conduct, such as philosophy, theology, law, psychology, or sociology. For example, a "medical ethicist" is someone who studies ethical standards in medicine. One may also define ethics as a method, procedure, or perspective for deciding how to act and for analyzing complex problems and issues. For instance, in considering a complex issue like global warming, one may take an economic, ecological, political, or ethical perspective on the problem. While an economist might examine the cost and benefits of various policies related to global warming, an environmental ethicist could examine the ethical values and principles at stake.


Many different disciplines, institutions, and professions have standards for behavior that suit their particular aims and goals. These standards also help members of the discipline to coordinate their actions or activities and to establish the public's trust of the discipline. For instance, ethical standards govern conduct in medicine, law, engineering, and business. Ethical norms also serve the aims or goals of research and apply to people who conduct scientific research or other scholarly or creative activities. There is even a specialized discipline, research ethics, which studies these norms. See Glossary of Commonly Used Terms in Research Ethics and Research Ethics Timeline.

There are several reasons why it is important to adhere to ethical norms in research. First, norms promote the aims of research, such as knowledge, truth, and avoidance of error. For example, prohibitions against fabricating, falsifying, or misrepresenting research data promote the truth and minimize error.


Second, since research often involves a great deal of cooperation and coordination among many different people in different disciplines and institutions, ethical standards promote the values that are essential to collaborative work, such as trust, accountability, mutual respect, and fairness. For example, many ethical norms in research, such as guidelines for authorship, copyright and patenting policies, data sharing policies, and confidentiality rules in peer review, are designed to protect intellectual property interests while encouraging collaboration. Most researchers want to receive credit for their contributions and do not want to have their ideas stolen or disclosed prematurely.

Third, many of the ethical norms help to ensure that researchers can be held accountable to the public. For instance, federal policies on research misconduct, conflicts of interest, human subjects protections, and animal care and use are necessary in order to make sure that researchers who are funded by public money can be held accountable.

Fourth, ethical norms in research also help to build public support for research. People are more likely to fund a research project if they can trust the quality and integrity of research.

Finally, many of the norms of research promote a variety of other important moral and social values, such as social responsibility, human rights, animal welfare, compliance with the law, and public health and safety. Ethical lapses in research can significantly harm human and animal subjects, students, and the public. For example, a researcher who fabricates data in a clinical trial may harm or even kill patients, and a researcher who fails to abide by regulations and guidelines relating to radiation or biological safety may jeopardize his health and safety or the health and safety of staff and students.

Codes and Policies for Research Ethics

Given the importance of ethics for the conduct of research, it should come as no surprise that many different professional associations, government agencies, and universities have adopted specific codes, rules, and policies relating to research ethics. Many government agencies have ethics rules for funded researchers.

  • National Institutes of Health (NIH)
  • National Science Foundation (NSF)
  • Food and Drug Administration (FDA)
  • Environmental Protection Agency (EPA)
  • US Department of Agriculture (USDA)
  • Singapore Statement on Research Integrity
  • American Chemical Society, The Chemist Professional’s Code of Conduct
  • Code of Ethics (American Society for Clinical Laboratory Science)
  • American Psychological Association, Ethical Principles of Psychologists and Code of Conduct
  • Statement on Professional Ethics (American Association of University Professors)
  • Nuremberg Code
  • World Medical Association's Declaration of Helsinki

Ethical Principles

The following is a rough and general summary of some ethical principles that various codes address*:

Honesty

Strive for honesty in all scientific communications. Honestly report data, results, methods and procedures, and publication status. Do not fabricate, falsify, or misrepresent data. Do not deceive colleagues, research sponsors, or the public.


Objectivity

Strive to avoid bias in experimental design, data analysis, data interpretation, peer review, personnel decisions, grant writing, expert testimony, and other aspects of research where objectivity is expected or required. Avoid or minimize bias or self-deception. Disclose personal or financial interests that may affect research.

Integrity

Keep your promises and agreements; act with sincerity; strive for consistency of thought and action.


Carefulness

Avoid careless errors and negligence; carefully and critically examine your own work and the work of your peers. Keep good records of research activities, such as data collection, research design, and correspondence with agencies or journals.

Openness

Share data, results, ideas, tools, resources. Be open to criticism and new ideas.


Transparency

Disclose methods, materials, assumptions, analyses, and other information needed to evaluate your research.


Accountability

Take responsibility for your part in research and be prepared to give an account (i.e. an explanation or justification) of what you did on a research project and why.


Intellectual Property

Honor patents, copyrights, and other forms of intellectual property. Do not use unpublished data, methods, or results without permission. Give proper acknowledgement or credit for all contributions to research. Never plagiarize.


Confidentiality

Protect confidential communications, such as papers or grants submitted for publication, personnel records, trade or military secrets, and patient records.


Responsible Publication

Publish in order to advance research and scholarship, not to advance just your own career. Avoid wasteful and duplicative publication.


Responsible Mentoring

Help to educate, mentor, and advise students. Promote their welfare and allow them to make their own decisions.


Respect for Colleagues

Respect your colleagues and treat them fairly.


Social Responsibility

Strive to promote social good and prevent or mitigate social harms through research, public education, and advocacy.


Non-Discrimination

Avoid discrimination against colleagues or students on the basis of sex, race, ethnicity, or other factors not related to scientific competence and integrity.

Competence

Maintain and improve your own professional competence and expertise through lifelong education and learning; take steps to promote competence in science as a whole.

Legality

Know and obey relevant laws and institutional and governmental policies.


Animal Care

Show proper respect and care for animals when using them in research. Do not conduct unnecessary or poorly designed animal experiments.


Human Subjects Protection

When conducting research on human subjects, minimize harms and risks and maximize benefits; respect human dignity, privacy, and autonomy; take special precautions with vulnerable populations; and strive to distribute the benefits and burdens of research fairly.

* Adapted from Shamoo A and Resnik D. 2015. Responsible Conduct of Research, 3rd ed. (New York: Oxford University Press).

Ethical Decision Making in Research

Although codes, policies, and principles are very important and useful, like any set of rules, they do not cover every situation, they often conflict, and they require interpretation. It is therefore important for researchers to learn how to interpret, assess, and apply various research rules and how to make decisions and act ethically in various situations. The vast majority of decisions involve the straightforward application of ethical rules. For example, consider the following case:

The research protocol for a study of a drug on hypertension requires the administration of the drug at different doses to 50 laboratory mice, with chemical and behavioral tests to determine toxic effects. Tom has almost finished the experiment for Dr. Q. He has only 5 mice left to test. However, he really wants to finish his work in time to go to Florida on spring break with his friends, who are leaving tonight. He has injected the drug in all 50 mice but has not completed all of the tests. He therefore decides to extrapolate from the 45 completed results to produce the 5 additional results.

Many different research ethics policies would hold that Tom has acted unethically by fabricating data. If this study were sponsored by a federal agency, such as the NIH, his actions would constitute a form of research misconduct, which the government defines as "fabrication, falsification, or plagiarism" (or FFP). Actions that nearly all researchers classify as unethical are viewed as misconduct. It is important to remember, however, that misconduct occurs only when researchers intend to deceive: honest errors related to sloppiness, poor record keeping, miscalculations, bias, self-deception, and even negligence do not constitute misconduct. Also, reasonable disagreements about research methods, procedures, and interpretations do not constitute research misconduct. Consider the following case:

Dr. T has just discovered a mathematical error in his paper that has been accepted for publication in a journal. The error does not affect the overall results of his research, but it is potentially misleading. The journal has just gone to press, so it is too late to catch the error before it appears in print. In order to avoid embarrassment, Dr. T decides to ignore the error.

Dr. T's error is not misconduct, nor is his decision to take no action to correct the error. Most researchers, as well as many different policies and codes, would say that Dr. T should tell the journal (and any coauthors) about the error and consider publishing a correction or errata. Failing to publish a correction would be unethical because it would violate norms relating to honesty and objectivity in research.

There are many other activities that the government does not define as "misconduct" but which are still regarded by most researchers as unethical. These are sometimes referred to as "other deviations" from acceptable research practices and include:

  • Publishing the same paper in two different journals without telling the editors
  • Submitting the same paper to different journals without telling the editors
  • Not informing a collaborator of your intent to file a patent in order to make sure that you are the sole inventor
  • Including a colleague as an author on a paper in return for a favor even though the colleague did not make a serious contribution to the paper
  • Discussing with your colleagues confidential data from a paper that you are reviewing for a journal
  • Using data, ideas, or methods you learn about while reviewing a grant or a paper without permission
  • Trimming outliers from a data set without discussing your reasons in your paper
  • Using an inappropriate statistical technique in order to enhance the significance of your research
  • Bypassing the peer review process and announcing your results through a press conference without giving peers adequate information to review your work
  • Conducting a review of the literature that fails to acknowledge the contributions of other people in the field or relevant prior work
  • Stretching the truth on a grant application in order to convince reviewers that your project will make a significant contribution to the field
  • Stretching the truth on a job application or curriculum vita
  • Giving the same research project to two graduate students in order to see who can do it the fastest
  • Overworking, neglecting, or exploiting graduate or post-doctoral students
  • Failing to keep good research records
  • Failing to maintain research data for a reasonable period of time
  • Making derogatory comments and personal attacks in your review of an author's submission
  • Promising a student a better grade for sexual favors
  • Using a racist epithet in the laboratory
  • Making significant deviations from the research protocol approved by your institution's Animal Care and Use Committee or Institutional Review Board for Human Subjects Research without telling the committee or the board
  • Not reporting an adverse event in a human research experiment
  • Wasting animals in research
  • Exposing students and staff to biological risks in violation of your institution's biosafety rules
  • Sabotaging someone's work
  • Stealing supplies, books, or data
  • Rigging an experiment so you know how it will turn out
  • Making unauthorized copies of data, papers, or computer programs
  • Owning over $10,000 in stock in a company that sponsors your research and not disclosing this financial interest
  • Deliberately overestimating the clinical significance of a new drug in order to obtain economic benefits

These actions would be regarded as unethical by most scientists, and some might even be illegal. Most of these would also violate different professional ethics codes or institutional policies. However, they do not fall into the narrow category of actions that the government classifies as research misconduct. Indeed, there has been considerable debate about the definition of "research misconduct" and many researchers and policy makers are not satisfied with the government's narrow definition that focuses on FFP. However, given the huge list of potential offenses that might fall into the category "other serious deviations," and the practical problems with defining and policing these other deviations, it is understandable why government officials have chosen to limit their focus.

Finally, situations frequently arise in research in which different people disagree about the proper course of action and there is no broad consensus about what should be done. In these situations, there may be good arguments on both sides of the issue and different ethical principles may conflict. These situations create difficult decisions for researchers, known as ethical or moral dilemmas. Consider the following case:

Dr. Wexford is the principal investigator of a large, epidemiological study on the health of 10,000 agricultural workers. She has an impressive dataset that includes information on demographics, environmental exposures, diet, genetics, and various disease outcomes such as cancer, Parkinson’s disease (PD), and ALS. She has just published a paper on the relationship between pesticide exposure and PD in a prestigious journal. She is planning to publish many other papers from her dataset. She receives a request from another research team that wants access to her complete dataset. They are interested in examining the relationship between pesticide exposures and skin cancer. Dr. Wexford was planning to conduct a study on this topic.

Dr. Wexford faces a difficult choice. On the one hand, the ethical norm of openness obliges her to share data with the other research team. Her funding agency may also have rules that obligate her to share data. On the other hand, if she shares data with the other team, they may publish results that she was planning to publish, thus depriving her (and her team) of recognition and priority. It seems that there are good arguments on both sides of this issue and Dr. Wexford needs to take some time to think about what she should do. One possible option is to share data, provided that the investigators sign a data use agreement. The agreement could define allowable uses of the data, publication plans, authorship, etc. Another option would be to offer to collaborate with the researchers.

The following are some steps that researchers, such as Dr. Wexford, can take to deal with ethical dilemmas in research:

What is the problem or issue?

It is always important to get a clear statement of the problem. In this case, the issue is whether to share information with the other research team.

What is the relevant information?

Many bad decisions are made as a result of poor information. To know what to do, Dr. Wexford needs to have more information concerning such matters as university or funding agency or journal policies that may apply to this situation, the team's intellectual property interests, the possibility of negotiating some kind of agreement with the other team, whether the other team also has some information it is willing to share, the impact of the potential publications, etc.

What are the different options?

People may fail to see different options due to a limited imagination, bias, ignorance, or fear. In this case, there may be other choices besides 'share' or 'don't share,' such as 'negotiate an agreement' or 'offer to collaborate with the researchers.'

How do ethical codes or policies as well as legal rules apply to these different options?

The university or funding agency may have policies on data management that apply to this case. Broader ethical rules, such as openness and respect for credit and intellectual property, may also apply to this case. Laws relating to intellectual property may be relevant.

Are there any people who can offer ethical advice?

It may be useful to seek advice from a colleague, a senior researcher, your department chair, an ethics or compliance officer, or anyone else you can trust. In this case, Dr. Wexford might want to talk to her supervisor and research team before making a decision.

After considering these questions, a person facing an ethical dilemma may decide to ask more questions, gather more information, explore different options, or consider other ethical rules. However, at some point he or she will have to make a decision and then take action. Ideally, a person who makes a decision in an ethical dilemma should be able to justify his or her decision to himself or herself, as well as colleagues, administrators, and other people who might be affected by the decision. He or she should be able to articulate reasons for his or her conduct and should consider the following questions in order to explain how he or she arrived at his or her decision:

  • Which choice will probably have the best overall consequences for science and society?
  • Which choice could stand up to further publicity and scrutiny?
  • Which choice could you not live with?
  • Think of the wisest person you know. What would he or she do in this situation?
  • Which choice would be the most just, fair, or responsible?

After considering all of these questions, one still might find it difficult to decide what to do. If this is the case, then it may be appropriate to consider other ways of making the decision, such as going with a gut feeling or intuition, seeking guidance through prayer or meditation, or even flipping a coin. Endorsing these methods in this context need not imply that ethical decisions are irrational, however. The main point is that human reasoning plays a pivotal role in ethical decision-making but there are limits to its ability to solve all ethical dilemmas in a finite amount of time.

Promoting Ethical Conduct in Science


Most academic institutions in the US require undergraduate, graduate, or postgraduate students to have some education in the responsible conduct of research (RCR). The NIH and NSF have both mandated training in research ethics for students and trainees. Many academic institutions outside of the US have also developed educational curricula in research ethics.

Those of you who are taking or have taken courses in research ethics may be wondering why you are required to have education in research ethics. You may believe that you are highly ethical and know the difference between right and wrong. You would never fabricate or falsify data or plagiarize. Indeed, you also may believe that most of your colleagues are highly ethical and that there is no ethics problem in research.

If you feel this way, relax. No one is accusing you of acting unethically. Indeed, the evidence produced so far shows that misconduct is a very rare occurrence in research, although there is considerable variation among various estimates. The rate of misconduct has been estimated to be as low as 0.01% of researchers per year (based on confirmed cases of misconduct in federally funded research) to as high as 1% of researchers per year (based on self-reports of misconduct on anonymous surveys). See Shamoo and Resnik (2015), cited above.

Clearly, it would be useful to have more data on this topic, but so far there is no evidence that science has become ethically corrupt, despite some highly publicized scandals. Even if misconduct is only a rare occurrence, it can still have a tremendous impact on science and society because it can compromise the integrity of research, erode the public’s trust in science, and waste time and resources.

Will education in research ethics help reduce the rate of misconduct in science? It is too early to tell. The answer to this question depends, in part, on how one understands the causes of misconduct. There are two main theories about why researchers commit misconduct.

According to the "bad apple" theory, most scientists are highly ethical. Only researchers who are morally corrupt, economically desperate, or psychologically disturbed commit misconduct. Moreover, only a fool would commit misconduct because science's peer review system and self-correcting mechanisms will eventually catch those who try to cheat the system. In any case, a course in research ethics will have little impact on "bad apples," one might argue.

According to the "stressful" or "imperfect" environment theory, misconduct occurs because various institutional pressures, incentives, and constraints encourage people to commit misconduct, such as pressures to publish or obtain grants or contracts, career ambitions, the pursuit of profit or fame, poor supervision of students and trainees, and poor oversight of researchers (see Shamoo and Resnik 2015). Moreover, defenders of the stressful environment theory point out that science's peer review system is far from perfect and that it is relatively easy to cheat the system. Erroneous or fraudulent research often enters the public record without being detected for years. Misconduct probably results from environmental and individual causes, i.e. when people who are morally weak, ignorant, or insensitive are placed in stressful or imperfect environments.

In any case, a course in research ethics can be useful in helping to prevent deviations from norms even if it does not prevent misconduct. Education in research ethics can help people get a better understanding of ethical standards, policies, and issues and improve ethical judgment and decision making. Many of the deviations that occur in research may occur because researchers simply do not know or have never thought seriously about some of the ethical norms of research.

For example, some unethical authorship practices probably reflect traditions and practices that have not been questioned seriously until recently. If the director of a lab is named as an author on every paper that comes from his lab, even if he does not make a significant contribution, what could be wrong with that? That's just the way it's done, one might argue. Another example where there may be some ignorance or mistaken traditions is conflicts of interest in research. A researcher may think that a "normal" or "traditional" financial relationship, such as accepting stock or a consulting fee from a drug company that sponsors her research, raises no serious ethical issues. Or perhaps a university administrator sees no ethical problem in taking a large gift with strings attached from a pharmaceutical company. Maybe a physician thinks that it is perfectly appropriate to receive a $300 finder's fee for referring patients into a clinical trial.

If "deviations" from ethical conduct occur in research as a result of ignorance or a failure to reflect critically on problematic traditions, then a course in research ethics may help reduce the rate of serious deviations by improving the researcher's understanding of ethics and by sensitizing him or her to the issues.

Finally, education in research ethics should be able to help researchers grapple with the ethical dilemmas they are likely to encounter by introducing them to important concepts, tools, principles, and methods that can be useful in resolving these dilemmas. Scientists must deal with a number of different controversial topics, such as human embryonic stem cell research, cloning, genetic engineering, and research involving animal or human subjects, which require ethical reflection and deliberation.


Ensuring ethical standards and procedures for research with human beings

Research ethics govern the standards of conduct for scientific researchers. It is important to adhere to ethical principles in order to protect the dignity, rights and welfare of research participants. As such, all research involving human beings should be reviewed by an ethics committee to ensure that the appropriate ethical standards are being upheld. Discussion of the ethical principles of beneficence, justice and autonomy is central to ethical review.

WHO works with Member States and partners to promote ethical standards and appropriate systems of review for any course of research involving human subjects. Within WHO, the Research Ethics Review Committee (ERC) ensures that WHO only supports research of the highest ethical standards. The ERC reviews all research projects involving human participants supported either financially or technically by WHO. The ERC is guided in its work by the World Medical Association Declaration of Helsinki (1964), last updated in 2013, as well as the International Ethical Guidelines for Biomedical Research Involving Human Subjects (CIOMS 2016).


Related links

  • International Ethical Guidelines for Biomedical Research Involving Human Subjects (Council for International Organizations of Medical Sciences)
  • International Ethical Guidelines for Epidemiological Studies (Council for International Organizations of Medical Sciences)
  • World Medical Association: Declaration of Helsinki
  • European Group on Ethics
  • Directive 2001/20/EC of the European Parliament and of the Council
  • Council of Europe (Oviedo Convention: Protocol on Biomedical Research)
  • Nuffield Council: The Ethics of Research Related to Healthcare in Developing Countries
  • U.S. Department of Health & Human Services

National Institutes of Health (NIH) - Turning Discovery into Health


NIH Clinical Research Trials and You

Guiding Principles for Ethical Research

Pursuing Potential Research Participants Protections


“When people are invited to participate in research, there is a strong belief that it should be their choice based on their understanding of what the study is about, and what the risks and benefits of the study are,” said Dr. Christine Grady, chief of the NIH Clinical Center Department of Bioethics, to Clinical Center Radio in a podcast.

Clinical research advances the understanding of science and promotes human health. However, it is important to remember the individuals who volunteer to participate in research. There are precautions researchers can take – in the planning, implementation and follow-up of studies – to protect these participants in research. Ethical guidelines are established for clinical research to protect patient volunteers and to preserve the integrity of the science.

NIH Clinical Center researchers published seven main principles to guide the conduct of ethical research:

  • Social and clinical value
  • Scientific validity
  • Fair subject selection
  • Favorable risk-benefit ratio
  • Independent review
  • Informed consent
  • Respect for potential and enrolled subjects

Social and clinical value

Every research study is designed to answer a specific question. The answer should be important enough to justify asking people to accept some risk or inconvenience for others. In other words, answers to the research question should contribute to scientific understanding of health or improve our ways of preventing, treating, or caring for people with a given disease to justify exposing participants to the risk and burden of research.

Scientific validity

A study should be designed in a way that will get an understandable answer to the important research question. This includes considering whether the question asked is answerable, whether the research methods are valid and feasible, and whether the study is designed with accepted principles, clear methods, and reliable practices. Invalid research is unethical because it is a waste of resources and exposes people to risk for no purpose.

Fair subject selection

The primary basis for recruiting participants should be the scientific goals of the study — not vulnerability, privilege, or other unrelated factors. Participants who accept the risks of research should be in a position to enjoy its benefits. Specific groups of participants (for example, women or children) should not be excluded from research opportunities without a good scientific reason or a particular susceptibility to risk.

Favorable risk-benefit ratio

Uncertainty about the degree of risks and benefits associated with a clinical research study is inherent. Research risks may be trivial or serious, transient or long-term. Risks can be physical, psychological, economic, or social. Everything should be done to minimize the risks and inconvenience to research participants, to maximize the potential benefits, and to determine that the potential benefits are proportionate to, or outweigh, the risks.

Independent review

To minimize potential conflicts of interest and make sure a study is ethically acceptable before it starts, an independent review panel should review the proposal and ask important questions, including: Are those conducting the trial sufficiently free of bias? Is the study doing all it can to protect research participants? Has the trial been ethically designed and is the risk–benefit ratio favorable? The panel also monitors a study while it is ongoing.

Informed consent

Potential participants should make their own decision about whether they want to participate or continue participating in research. This is done through a process of informed consent in which individuals (1) are accurately informed of the purpose, methods, risks, benefits, and alternatives to the research, (2) understand this information and how it relates to their own clinical situation or interests, and (3) make a voluntary decision about whether to participate.

Respect for potential and enrolled participants

Individuals should be treated with respect from the time they are approached for possible participation — even if they refuse enrollment in a study — throughout their participation and after their participation ends. This includes:

  • respecting their privacy and keeping their private information confidential
  • respecting their right to change their mind, to decide that the research does not match their interests, and to withdraw without a penalty
  • informing them of new information that might emerge in the course of research, which might change their assessment of the risks and benefits of participating
  • monitoring their welfare and, if they experience adverse reactions, unexpected effects, or changes in clinical status, ensuring appropriate treatment and, when necessary, removal from the study
  • informing them about what was learned from the research


This page last reviewed on March 16, 2016


Ethical Considerations In Psychology Research

Saul McLeod, PhD

Editor-in-Chief for Simply Psychology

BSc (Hons) Psychology, MRes, PhD, University of Manchester

Saul McLeod, PhD, is a qualified psychology teacher with over 18 years of experience in further and higher education. He has been published in peer-reviewed journals, including the Journal of Clinical Psychology.


Olivia Guy-Evans, MSc

Associate Editor for Simply Psychology

BSc (Hons) Psychology, MSc Psychology of Education

Olivia Guy-Evans is a writer and associate editor for Simply Psychology. She has previously worked in healthcare and educational sectors.


Ethics refers to the correct rules of conduct necessary when carrying out research. We have a moral responsibility to protect research participants from harm.

However important the issue under investigation, psychologists must remember that they have a duty to respect the rights and dignity of research participants. This means that they must abide by certain moral principles and rules of conduct.

What are Ethical Guidelines?

In Britain, ethical guidelines for research are published by the British Psychological Society, and in America, by the American Psychological Association. The purpose of these codes of conduct is to protect research participants, the reputation of psychology, and psychologists themselves.

Moral issues rarely yield a simple, unambiguous, right or wrong answer. It is, therefore, often a matter of judgment whether the research is justified or not.

For example, it might be that a study causes psychological or physical discomfort to participants; maybe they suffer pain or perhaps even come to serious harm.

On the other hand, the investigation could lead to discoveries that benefit the participants themselves or even have the potential to increase the sum of human happiness.

Rosenthal and Rosnow (1984) also discuss the potential costs of failing to carry out certain research. Who is to weigh up these costs and benefits? Who is to judge whether the ends justify the means?

Finally, if you are ever in doubt as to whether research is ethical or not, it is worthwhile remembering that if there is a conflict of interest between the participants and the researcher, it is the interests of the subjects that should take priority.

Studies must now undergo an extensive review by an institutional review board (US) or ethics committee (UK) before they are implemented. All UK research requires ethical approval by one or more of the following:

  • Department Ethics Committee (DEC) : for most routine research.
  • Institutional Ethics Committee (IEC) : for non-routine research.
  • External Ethics Committee (EEC) : for research that is externally regulated (e.g., NHS research).

Committees review proposals to assess if the potential benefits of the research are justifiable in light of the possible risk of physical or psychological harm.

These committees may request researchers make changes to the study’s design or procedure or, in extreme cases, deny approval of the study altogether.

The British Psychological Society (BPS) and the American Psychological Association (APA) have each issued a code of ethics in psychology that provides guidelines for conducting research. Some of the more important ethical issues are as follows:

Informed Consent

Before the study begins, the researcher must outline to the participants what the research is about and then ask for their consent (i.e., permission) to participate.

An adult (18 years or older) who has the capacity to agree to participate in a study can provide consent. Parents/legal guardians of minors can also provide consent to allow their children to participate in a study.

Whenever possible, investigators should obtain the consent of participants. In practice, this means it is not sufficient to get potential participants to say “Yes.”

They also need to know what it is that they agree to. In other words, the psychologist should, so far as is practicable, explain what is involved in advance and obtain the informed consent of participants.

Informed consent must be informed, voluntary, and rational. Participants must be given relevant details to make an informed decision, including the purpose, procedures, risks, and benefits. Consent must be given voluntarily without undue coercion. And participants must have the capacity to rationally weigh the decision.

Components of informed consent include clearly explaining the risks and expected benefits, addressing potential therapeutic misconceptions about experimental treatments, allowing participants to ask questions, and describing methods to minimize risks like emotional distress.

Investigators should tailor the consent language and process appropriately for the study population. Obtaining meaningful informed consent is an ethical imperative for human subjects research.

The voluntary nature of participation should not be compromised through coercion or undue influence. Inducements should be fair and not excessive/inappropriate.

However, it is not always possible to gain informed consent.  Where the researcher can’t ask the actual participants, a similar group of people can be asked how they would feel about participating.

If they think it would be OK, then it can be assumed that the real participants will also find it acceptable. This is known as presumptive consent.

However, a problem with this method is that there might be a mismatch between how people think they would feel/behave and how they actually feel and behave during a study.

In order for consent to be ‘informed,’ consent forms may need to be accompanied by an information sheet for participants setting out information about the proposed study (in lay terms), along with details about the investigators and how they can be contacted.

Special considerations exist when obtaining consent from vulnerable populations with decisional impairments, such as psychiatric patients, intellectually disabled persons, and children/adolescents. Capacity can vary widely, so it should be assessed individually, but interventions to improve comprehension may help. Legally authorized representatives usually must provide consent for children.

Participants must be given information relating to the following:

  • A statement that participation is voluntary and that refusal to participate will not result in any consequences or any loss of benefits that the person is otherwise entitled to receive.
  • Purpose of the research.
  • All foreseeable risks and discomforts to the participant (if there are any). These include not only physical injury but also possible psychological harm.
  • Procedures involved in the research.
  • Benefits of the research to society and possibly to the individual human subject.
  • Length of time the subject is expected to participate.
  • Person to contact for answers to questions or in the event of injury or emergency.
  • Subjects’ right to confidentiality and the right to withdraw from the study at any time without any consequences.

Debrief

Debriefing after a study involves informing participants about the purpose, providing an opportunity to ask questions, and addressing any harm from participation. Debriefing serves an educational function and allows researchers to correct misconceptions. It is an ethical imperative.

After the research is over, the participant should be able to discuss the procedure and the findings with the psychologist. They must be given a general idea of what the researcher was investigating and why, and their part in the research should be explained.

Participants must be told if they have been deceived and given reasons why. They must be asked if they have any questions, which should be answered honestly and as fully as possible.

Debriefing should occur as soon as possible and be as full as possible; experimenters should take reasonable steps to ensure that participants understand debriefing.

“The purpose of debriefing is to remove any misconceptions and anxieties that the participants have about the research and to leave them with a sense of dignity, knowledge, and a perception of time not wasted” (Harris, 1988).

The debriefing aims to provide information and help the participant leave the experimental situation in a similar frame of mind as when he/she entered it (Aronson, 1988).

Exceptions may exist if debriefing seriously compromises study validity or causes harm itself, like negative emotions in children. Consultation with an institutional review board guides exceptions.

Debriefing indicates investigators’ commitment to participant welfare. Harms may not surface in the debriefing itself, so this responsibility continues after data collection. Following up demonstrates respect and protects persons in human subjects research.

Protection of Participants

Researchers must ensure that those participating in research will not be caused distress. They must be protected from physical and mental harm. This means you must not embarrass, frighten, offend or harm participants.

Normally, the risk of harm must be no greater than in ordinary life, i.e., participants should not be exposed to risks greater than or additional to those encountered in their normal lifestyles.

The researcher must also ensure that if vulnerable groups are to be used (elderly, disabled, children, etc.), they must receive special care. For example, if studying children, ensure their participation is brief as they get tired easily and have a limited attention span.

Researchers are not always accurately able to predict the risks of taking part in a study, and in some cases, a therapeutic debriefing may be necessary if participants have become disturbed during the research (as happened to some participants in Zimbardo’s prisoners/guards study ).

Deception

Deception research involves purposely misleading participants or withholding information that could influence their participation decision. This method is controversial because it limits informed consent and autonomy, but it can provide otherwise unobtainable, valuable knowledge.

Types of deception include (i) deliberate misleading, e.g. using confederates, staged manipulations in field settings, deceptive instructions; (ii) deception by omission, e.g., failure to disclose full information about the study, or creating ambiguity.

The researcher should avoid deceiving participants about the nature of the research unless there is no alternative – and even then, this would need to be judged acceptable by an independent expert. However, some types of research cannot be carried out without at least some element of deception.

For example, in Milgram’s study of obedience , the participants thought they were giving electric shocks to a learner when they answered a question wrongly. In reality, no shocks were given, and the learners were confederates of Milgram.

This is sometimes necessary to avoid demand characteristics (i.e., the clues in an experiment that lead participants to think they know what the researcher is looking for).

Another common example is when a stooge or confederate of the experimenter is used (as was the case in the conformity experiments carried out by Asch).

According to ethics codes, deception must have strong scientific justification, and non-deceptive alternatives should not be feasible. Deception that causes significant harm is prohibited. Investigators should carefully weigh whether deception is necessary and ethical for their research.

However, participants must be deceived as little as possible, and any deception must not cause distress. Researchers can gauge whether participants are likely to be distressed when deception is disclosed by consulting culturally relevant groups.

Participants should immediately be informed of the deception without compromising the study’s integrity. Reactions to learning of deception can range from understanding to anger. Debriefing should explain the scientific rationale and social benefits to minimize negative reactions.

If the participant is likely to object or be distressed once they discover the true nature of the research at debriefing, then the study is unacceptable.

If you have gained participants’ informed consent by deception, then they will have agreed to take part without actually knowing what they were consenting to.  The true nature of the research should be revealed at the earliest possible opportunity or at least during debriefing.

Some researchers argue that deception can never be justified and object to this practice as it (i) violates an individual’s right to choose to participate; (ii) is a questionable basis on which to build a discipline; and (iii) leads to distrust of psychology in the community.

Confidentiality

Protecting participant confidentiality is an ethical imperative that demonstrates respect, ensures honest participation, and prevents harms like embarrassment or legal issues. Methods like data encryption, coding systems, and secure storage should match the research methodology.

Participants and the data gained from them must be kept anonymous unless they give their full consent.  No names must be used in a lab report .

Researchers must clearly describe to participants the limits of confidentiality and methods to protect privacy. With internet research, threats exist like third-party data access; security measures like encryption should be explained. For non-internet research, other protections should be noted too, like coding systems and restricted data access.

High-profile data breaches have eroded public trust. Methods that minimize identifiable information can further guard confidentiality. For example, researchers can consider whether birthdates are necessary or just ages.

Generally, reducing personal details collected and limiting accessibility safeguards participants. Following strong confidentiality protections demonstrates respect for persons in human subjects research.
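
To make the coding-system and data-minimisation advice above concrete, here is a minimal sketch of one way a researcher might pseudonymise a participant list before analysis. It assumes a simple in-memory list of records; the function name, field names, and study year are illustrative only, not drawn from any guideline cited here. Direct identifiers are swapped for random codes held in a separate key, and full birthdates are reduced to ages.

```python
import secrets

def pseudonymise(participants, study_year=2024):
    """Replace names with random codes and reduce birthdates to ages.

    Returns (coded_rows, key). The key maps code -> name and should be
    stored separately under restricted access, or destroyed once it is
    no longer needed, so the research data alone cannot identify anyone.
    """
    key = {}
    coded_rows = []
    for person in participants:
        code = "P-" + secrets.token_hex(4)      # random, non-guessable code
        key[code] = person["name"]              # linkage held apart from the data
        coded_rows.append({
            "id": code,
            "age": study_year - person["birth_year"],  # age only, not a birthdate
            "score": person["score"],
        })
    return coded_rows, key

# Usage: the analysis file never contains names or birthdates.
raw = [{"name": "Jane Doe", "birth_year": 1990, "score": 42}]
coded, key = pseudonymise(raw)
print(coded)   # e.g. [{'id': 'P-3f9a2c1b', 'age': 34, 'score': 42}]
```

Reducing each record to a code, an age, and a score follows the principle above: collect less, and keep whatever links data to people under separate, restricted storage.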

What do we do if we discover something that should be disclosed (e.g., a criminal act)? Researchers have no legal obligation to disclose criminal acts and must determine the most important consideration: their duty to the participant vs. their duty to the wider community.

Ultimately, decisions to disclose information must be set in the context of the research aims.

Withdrawal from an Investigation

Participants should be able to leave a study anytime if they feel uncomfortable. They should also be allowed to withdraw their data. They should be told at the start of the study that they have the right to withdraw.

They should not have pressure placed upon them to continue if they do not want to (a guideline flouted in Milgram’s research).

Participants may feel they shouldn’t withdraw as this may ‘spoil’ the study. Many participants are paid or receive course credits; they may worry they won’t get this if they withdraw.

Even at the end of the study, the participant has a final opportunity to withdraw the data they have provided for the research.

Ethical Issues in Psychology & Socially Sensitive Research

There has been an assumption over the years by many psychologists that, provided they follow the BPS or APA guidelines when using human participants, and provided all participants leave in a similar state of mind to the one in which they arrived, not having been deceived or humiliated, having been given a debrief, and not having had their confidentiality breached, there are no ethical concerns with their research.

But consider the following examples:

a) Caughy et al. (1994) found that middle-class children placed in daycare at an early age generally score lower on cognitive tests than children from similar families reared in the home.

Assuming all guidelines were followed, neither the parents nor the children participating would have been unduly affected by this research. Nobody would have been deceived, consent would have been obtained, and no harm would have been caused.

However, consider the wider implications of this study when the results are published, particularly for parents of middle-class infants who are considering placing their young children in daycare or those who recently have!

b)  IQ tests administered to black Americans show that they typically score 15 points below the average white score.

When black Americans are given these tests, they presumably complete them willingly and are not harmed as individuals. However, when published, findings of this sort serve to reinforce racial stereotypes and are used to discriminate against the black population in the job market, etc.

Sieber and Stanley (1988), the main names in Socially Sensitive Research (SSR), outline 4 groups that may be affected by psychological research; it is the first of these groups that we are most concerned with:
  • Members of the social group being studied, such as racial or ethnic group. For example, early research on IQ was used to discriminate against US Blacks.
  • Friends and relatives of those participating in the study, particularly in case studies, where individuals may become famous or infamous. Cases that spring to mind would include Genie’s mother.
  • The research team. There are examples of researchers being intimidated because of the line of research they are in.
  • The institution in which the research is conducted.
Sieber and Stanley also suggest there are 4 main ethical concerns when conducting SSR:
  • The research question or hypothesis.
  • The treatment of individual participants.
  • The institutional context.
  • How the findings of the research are interpreted and applied.

Ethical Guidelines For Carrying Out SSR

Sieber and Stanley suggest the following ethical guidelines for carrying out SSR. There is some overlap between these and research on human participants in general.

Privacy : This refers to people rather than data. Asking people questions of a personal nature (e.g., about sexuality) could offend.

Confidentiality: This refers to data. Information (e.g., about H.I.V. status) leaked to others may affect the participant’s life.

Sound & valid methodology : This is even more vital when the research topic is socially sensitive. Academics can detect flaws in methods, but the lay public and the media often don’t.

When research findings are publicized, people are likely to consider them fact, and policies may be based on them. Examples are Bowlby’s maternal deprivation studies and intelligence testing.

Deception : Causing the wider public to believe something that isn’t true through the findings you report (e.g., that parents are responsible for how their children turn out).

Informed consent : Participants should be made aware of how participating in the research may affect them.

Justice & equitable treatment : Examples of unjust treatment are (i) publicizing an idea that creates a prejudice against a group, and (ii) withholding a treatment you believe is beneficial from some participants so that you can use them as controls.

Scientific freedom : Science should not be censored, but there should be some monitoring of sensitive research. The researcher should weigh their responsibilities against their rights to do the research.

Ownership of data : When research findings could be used to make social policies that affect people’s lives, should they be publicly accessible? Sometimes, a party commissions research with their own interests in mind (e.g., an industry, an advertising agency, a political party, or the military).

Some people argue that scientists should be compelled to disclose their results so that other scientists can re-analyze them. If this had happened in Burt’s day, there might not have been such widespread belief in the genetic transmission of intelligence. George Miller (Miller’s Magic 7) famously argued that we should give psychology away.

The values of social scientists : Psychologists can be divided into two main groups: those who advocate a humanistic approach (individuals are important and worthy of study, quality of life is important, intuition is useful) and those advocating a scientific approach (rigorous methodology, objective data).

The researcher’s values may conflict with those of the participant/institution. For example, if someone with a scientific approach was evaluating a counseling technique based on a humanistic approach, they would judge it on criteria that those giving & receiving the therapy may not consider important.

Cost/benefit analysis : It is unethical if the costs outweigh the potential/actual benefits. However, it isn’t easy to assess costs and benefits accurately, and the participants themselves rarely benefit from research.

Sieber & Stanley advise that researchers should not avoid researching socially sensitive issues. Scientists have a responsibility to society to find useful knowledge.

  • They need to take more care over consent, debriefing, etc. when the issue is sensitive.
  • They should be aware of how their findings may be interpreted & used by others.
  • They should make explicit the assumptions underlying their research so that the public can consider whether they agree with these.
  • They should make the limitations of their research explicit (e.g., ‘the study was only carried out on white middle-class American male students,’ ‘the study is based on questionnaire data, which may be inaccurate,’ etc.).
  • They should be careful how they communicate with the media and policymakers.
  • They should be aware of the balance between their obligations to participants and those to society (e.g. if the participant tells them something which they feel they should tell the police/social services).
  • They should be aware of their own values and biases and those of the participants.

Arguments for SSR

  • Psychologists have devised methods to resolve the issues raised.
  • SSR is the most scrutinized research in psychology. Ethical committees reject more SSR than any other form of research.
  • By gaining a better understanding of issues such as gender, race, and sexuality, we are able to gain greater acceptance and reduce prejudice.
  • SSR has been of benefit to society, for example, research on eyewitness testimony (EWT). This has made us aware that EWT can be flawed and should not be used without corroboration. It has also made us aware that the EWT of children is every bit as reliable as that of adults.
  • Most research is still conducted on white middle-class Americans (about 90% of the research quoted in texts!). SSR is helping to redress the balance and make us more aware of other cultures and outlooks.

Arguments against SSR

  • Flawed research has been used to dictate social policy and put certain groups at a disadvantage.
  • Research has been used to discriminate against groups in society, such as the sterilization of people in the USA between 1910 and 1920 because they were deemed to be of low intelligence, criminal, or mentally ill.
  • The guidelines used by psychologists to control SSR lack power and, as a result, are unable to prevent indefensible research from being carried out.

References

American Psychological Association. (2002). American Psychological Association ethical principles of psychologists and code of conduct. www.apa.org/ethics/code2002.html

Baumrind, D. (1964). Some thoughts on ethics of research: After reading Milgram’s “Behavioral study of obedience.” American Psychologist, 19(6), 421.

Caughy, M. O. B., DiPietro, J. A., & Strobino, D. M. (1994). Day-care participation as a protective factor in the cognitive development of low-income children. Child Development, 65(2), 457-471.

Harris, B. (1988). Key words: A history of debriefing in social psychology. In J. Morawski (Ed.), The rise of experimentation in American psychology (pp. 188-212). New York: Oxford University Press.

Rosenthal, R., & Rosnow, R. L. (1984). Applying Hamlet’s question to the ethical conduct of research: A conceptual addendum. American Psychologist, 39(5), 561.

Sieber, J. E., & Stanley, B. (1988). Ethical and professional dimensions of socially sensitive research. American Psychologist, 43(1), 49.

The British Psychological Society. (2010). Code of Human Research Ethics. www.bps.org.uk/sites/default/files/documents/code_of_human_research_ethics.pdf

Further Information

  • MIT Psychology Ethics Lecture Slides

BPS Documents

  • Code of Ethics and Conduct (2018)
  • Good Practice Guidelines for the Conduct of Psychological Research within the NHS
  • Guidelines for Psychologists Working with Animals
  • Guidelines for ethical practice in psychological research online

APA Documents

APA Ethical Principles of Psychologists and Code of Conduct




Assisting you to advance with ethics in research: an introduction to ethical governance and application procedures

Shivadas Sivasubramaniam, Dita Henek Dlabolová, Veronika Kralikova & Zeenath Reza Khan

International Journal for Educational Integrity, volume 17, Article number: 14 (2021)


Ethics and ethical behaviour are the fundamental pillars of a civilised society. The focus on ethical behaviour is indispensable in certain fields such as medicine, finance, or law. In fact, ethics takes precedence with anything that would include, affect, transform, or influence individuals, communities or any living creatures. Many institutions within Europe have set up their own committees to focus on or approve activities that have an ethical impact. In contrast, lesser-developed countries (worldwide) are still trying to set up these committees to govern their academia and research. As the first European consortium established to assist academic integrity, the European Network for Academic Integrity (ENAI), we felt the importance of guiding those institutions and communities that are trying to conduct research with ethical principles. We have established an ethical advisory working group within ENAI with the aim of promoting ethics within curricula, research and institutional policies. We are constantly researching available data on this subject and are committed to helping academia convey and conduct ethical behaviour. Upon preliminary review and discussion, the group found a disparity in understanding, practice and teaching approaches to the ethical applications of research projects among peers. Therefore, this short paper preliminarily aims to critically review the available information on ethics, the history behind establishing ethical principles and the international guidelines that govern research.

The paper is based on a workshop conducted at the 5th International Conference Plagiarism across Europe and Beyond, at Mykolas Romeris University, Lithuania, in 2019. During the workshop, we detailed a) the basic needs of an ethical committee within an institution; b) a typical ethical approval process (with examples from three different universities); and c) the ways to obtain informed consent, with some examples. These are summarised in this paper with some example comparisons of ethical approval processes from different universities. We believe this paper will provide guidelines on preparing and training both researchers and research students in appropriately upholding ethical practices through ethical approval processes.

Introduction

Ethics and ethical behaviour (often linked to “responsible practice”) are the fundamental pillars of a civilised society. Ethical behaviour with integrity is important to maintain academic and research activities. It affects everything we do, and it takes precedence with anything that would include, affect, transform, or impact upon individuals, communities or any living creatures. In other words, ethics helps us improve our living standards (LaFollette, 2007). The focus on ethical behaviour is indispensable in certain fields such as medicine, finance, or law, but it is also gaining recognition in all disciplines engaged in research. Therefore, institutions are expected to develop ethical guidelines in research to maintain quality, own integrity and, above all, be transparent, limiting any allegation of misconduct (Flite and Harman, 2013). This is especially true for higher education organisations that promote research and scholarly activities. Many European institutions have developed their own regulations for ethics by incorporating international codes (Getz, 1990). Lesser-developed countries are trying to set up these committees to govern their academia and research. The World Health Organization has stated that adhering to “ethical principles … [is central and important] ... in order to protect the dignity, rights and welfare of research participants” (WHO, 2021). Ethical guidelines taught to students can help develop ethical researchers and members of society who uphold the values of ethical principles in practice.

As the first European-wide consortium established to assist academic integrity (the European Network for Academic Integrity – ENAI), we felt the importance of guiding those institutions and communities that are trying to teach, research, and include ethical principles, by providing an overarching understanding of the ethical guidelines that may influence policy. Therefore, in 2018 we set up an advisory working group within ENAI to support matters related to ethics and ethical committees, and to assist with ethics-related teaching activities.

Upon preliminary review and discussion, the group found a disparity in understanding, practice and teaching approaches to ethical applications among peers. This became the premise for this research paper. We first carried out a literature survey to review and summarise existing ethical governance (with historical perspectives) and procedures that are already in place to guide researchers in different discipline areas. By doing so, we attempted to consolidate, document and provide important steps in a typical ethical application process with example procedures from different universities. Finally, we attempted to provide insights and findings from practical workshops carried out at the 5th International Conference Plagiarism across Europe and Beyond, in Mykolas Romeris University, Lithuania in 2019, focussing on:

• highlighting the basic needs of an ethical committee within an institution,

• discussing and sharing examples of a typical ethical approval process,

• providing guidelines on the ways to teach research ethics with some examples.

We believe this paper provides guidelines on preparing and training both researchers and research students in appropriately upholding ethical practices through ethical approval processes.

Background literature survey

Responsible research practice (RRP) is scrutinised through the lens of ethical principles and professional standards (WHO’s Code of Conduct for Responsible Research, 2017). The Singapore Statement on Research Integrity (2010) has provided internationally acceptable guidance for RRP. The statement is based on maintaining honesty, accountability and professional courtesy in all aspects of research, and maintaining fairness during collaborations. In other words, it does not simply focus on the procedural part of research; instead, it covers wider aspects of “integrity” beyond the operational aspects (Israel and Drenth, 2016).

Institutions should focus on providing ethical guidance based on principles and values reflecting upon all aspects/stages of research (from the funding application/project development stage up to or beyond the project closing stage). Figure 1 summarizes the different aspects/stages of typical research and highlights the needs of RRP in compliance with ethical governance at each stage, with examples (the figure is based on Resnik, 2020; Žukauskas et al., 2018; Anderson, 2011; Fouka and Mantzorou, 2011).

Figure 1: Summary of the enabling ethical governance at different stages of research. Note that it is imperative for researchers to proactively consider the ethical implications before, during and after the actual research process. The summary shows that RRP should be in line with ethical considerations even long before the ethical approval stage.

Individual responsibilities to enhance RRP

As explained in Fig. 1, successfully governed research should consider ethics at the planning stages prior to research. Many international guidance documents are compatible in enforcing/recommending the 14 different “responsibilities” that were first highlighted in the Singapore Statement (2010) for researchers to follow and achieve competency in RRP. In order to understand the purpose and the expectation of these ethical guidelines, we carried out an initial literature survey on expected individual responsibilities. These are summarised in Table 1.

By following these directives, researchers can carry out accountable research by maximising ethical self-governance whilst minimising misconduct. In our own experience of working with many researchers, their focus usually revolves around ethical “clearance” rather than behaviour. In other words, they perceive this as a paper exercise rather than trying to “own” ethical behaviour in everything they do. Although the ethical principles and responsibilities are explicitly highlighted in the majority of international guidelines [such as the UK’s Research Governance Policy (NICE, 2018), the Australian Government’s National Statement on Ethical Conduct in Human Research (Difn website a - National Statement on Ethical Conduct in Human Research (NSECHR), 2018), the Singapore Statement (2010) etc.], and although the importance of a holistic approach to ethical decision-making has been argued, many researchers and/or institutions focus only on ethics linked to the procedural aspects.

Studies in the past have also highlighted inconsistencies in institutional guidelines, pointing to the fact that these inconsistencies may hinder predicted research progress (Desmond & Dierickx, 2021; Alba et al., 2020; Dellaportas et al., 2014; Speight, 2016). It is also possible that these were, and still are, linked to institutional perceptions/expectations or to the pre-empting contextual conditions imposed by individual countries. In fact, it is interesting to note that many research organisations and HE institutions establish their own policies based on these directives.

Research governance - origins, expectations and practices

Ethical governance in clinical medicine helps us by providing a structure for analysis and decision-making. By providing workable definitions of benefits and risks as well as the guidance for evaluating/balancing benefits over risks, it supports the researchers to protect the participants and the general population.

According to the definition given by the National Institute for Health and Care Excellence, UK (NICE, 2018), “research governance can be defined as the broad range of regulations, principles and standards of good practice that ensure high quality research”. As stated above, our literature-based research survey showed that most ethical definitions basically evolved from the medical field, and other disciplines have utilised these principles to develop their own ethical guidance. Interestingly, historical data show that medical research was “self-governed”, or in other words guided by the moral behaviour of individual researchers (Fox, 2017; Shaw et al., 2005; Getz, 1990). For example, early human vaccination trials conducted in the 1700s used immediate family members as test subjects (Fox, 2017). Here the moral justification might have been the fact that the subjects who would have been at risk were either the scientists themselves or their immediate families, whereas those who would reap the benefits of vaccination were the general public/wider communities. However, according to current ethical principles, this justification is entirely unacceptable.

Historically, ambiguous decision-making and resultant incidences of research misconduct led to the need for ethical research governance as early as the 1940s. For instance, the importance of international governance was realised only after World War II, when people were astonished to learn of the unethical research practices carried out by Nazi scientists. As a result, the Nuremberg code was published in 1947. The code mainly focussed on the following:

  • informed consent, with the further insistence that research involving humans should be based on prior animal work,
  • the anticipated benefits should outweigh the risk,
  • only qualified scientists should conduct the research,
  • physical and mental suffering should be avoided, and
  • human research that would result in death or disability should be avoided
(Weindling, 2001).

Unfortunately, it was reported that many researchers in the USA and elsewhere considered the Nuremberg code a document condemning the Nazi atrocities rather than a code for ethical governance, and therefore ignored its directives (Ghooi, 2011). It was only in 1964 that the World Medical Association published the Helsinki Declaration, which set the stage for ethical governance and the implementation of the Institutional Review Board (IRB) process (Shamoo and Irving, 1993). This declaration was based on the Nuremberg code. In addition, the declaration also paved the way for enforcing research being conducted in accordance with these guidelines.

Incidentally, the focus on research/ethical governance gained momentum in 1974. As a result, a report on ethical principles and guidelines for the protection of human subjects of research was published in 1979 (The Belmont Report, 1979). This report paved the way for the current forms of ethical governance in biomedical and behavioural research by providing guidance.

Since 1994, the WHO itself has provided several guidance documents to health care policy-makers, researchers and other stakeholders detailing the key concepts in medical ethics. These are specific to applying ethical principles in global public health.

Likewise, the World Organization for Animal Health (WOAH) and the International Convention for the Protection of Animals (ICPA) provide guidance on animal welfare in research. Due to this continuous guidance, together with accepted practices, there are internationally established ethical guidelines for carrying out medical research. Our literature survey further identified freely available guidance from independent organisations such as COPE (Committee on Publication Ethics) and ALLEA (All European Academies), which provide support for maintaining research ethics in other fields such as education, sociology, psychology etc. In reality, ethical governance is practiced differently in different countries. In the UK, there is a clinical excellence research governance, which oversees all NHS-related medical research (Mulholland and Bell, 2005). Although the governance in other disciplines is not entirely centralised, many research funding councils and organisations [such as UKRI (UK Research and Innovation), BBSRC (Biotechnology and Biological Sciences Research Council), MRC (Medical Research Council) and EPSRC (Engineering and Physical Sciences Research Council)] provide ethical governance and expect institutional adherence and monitoring. They expect local institutional (i.e. university/institutional) research governance for day-to-day monitoring of the research conducted within the organisation, reporting back to these funding bodies monthly or annually (Department of Health, 2005). Likewise, there are nationally coordinated/regulated ethics governing bodies such as the US Office for Human Research Protections (US-OHRP) and the National Institutes of Health (NIH) in the USA, and the Canadian Institutes of Health Research (CIHR) in Canada (Mulholland and Bell, 2005). The OHRP in the USA formally reviews all research activities involving human subjects. In Canada, CIHR works with the Natural Sciences and Engineering Research Council (NSERC) and the Social Sciences and Humanities Research Council (SSHRC); together they have produced the Tri-Council Policy Statement (TCPS) (Stephenson et al., 2020) as ethical governance. All Canadian institutions are expected to adhere to this policy when conducting research. As for Australia, research is governed by the Australian Code for the Responsible Conduct of Research (2008), which identifies the responsibilities of institutions and researchers in all areas of research. The code was jointly developed by the National Health and Medical Research Council (NHMRC), the Australian Research Council (ARC) and Universities Australia (UA). This information is summarized in Table 2.

Basic structure of an institutional ethical advisory committee (EAC)

The WHO published an article defining the basic concepts of an ethical advisory committee in 2009 (WHO, 2009 - see above). According to this, many countries have established research governance and monitor ethical practice in research via national and/or regional review committees. The main aims of research ethics committees include reviewing study proposals, trying to understand the justifications for human/animal use, weighing the merits and demerits of the usage (linking risks to potential benefits) and ensuring that local ethical guidelines are followed (Difn website b - Enago academy Importance of Ethics Committees in Scholarly Research, 2020; Guide for Research Ethics - Council of Europe, 2014). Once the research has started, the committee needs to carry out periodic surveillance to ensure the institutional ethical norms are followed during and beyond the study. They may also be involved in setting up and/or reviewing the institutional policies.

For these aspects, an IRB (or institutional ethical advisory committee - IEAC) is essential for local governance to enhance best practices. The advantage of an IRB/IEAC is that its members understand the institutional conditions and can closely monitor the ongoing research, including any changes in research direction. On the other hand, the IRB may be overly inclined to accept applications, influenced by the local agenda of achieving research excellence, disregarding ethical issues (Kotecha et al., 2011; Kayser-Jones, 2003), or it may be influenced by financial interests in attracting external funding. In this respect, regional and national ethics committees are advantageous for ensuring ethical practice. Due to their impartiality, they provide greater consistency and legitimacy to the research (WHO, 2009). However, the approval process of regional and national ethics committees can be time consuming, as they do not have the local knowledge.

As for membership in the IRBs, most of the guidelines [WHO, NICE, Council of Europe (2012), European Commission - Facilitating Research Excellence in FP7 (2013) and OHRP] insist on having a variety of representation, including experts in different fields of research and non-experts with an understanding of local, national/international conflicts of interest. The former are able to understand/clarify the procedural elements of the research in different fields, whilst the latter help to make neutral and impartial decisions. These non-experts are usually not affiliated to the institution and consist of individuals representing the broader community (particularly those related to social, legal or cultural considerations). IRBs consisting of this variety of representation would not only be in a position to understand the study procedures and their potential direct or indirect consequences for participants, but would also be able to identify any community, cultural or religious implications of the study.

Understanding the subtle differences between ethics and morals

Interestingly, many ethical guidelines are based on society’s moral “beliefs”, to the extent that the words “ethics” and “morals” are used reciprocally to define each other. However, there are several subtle differences between them, and we have attempted to compare and contrast them herein. In the past, many authors have used the words “morals” and “ethics” interchangeably (Warwick, 2003; Kant, 2018; Hazard, GC (Jr)., 1994; Larry, 1982). However, ethics is linked to rules governed by an external source, such as codes of conduct in workplaces (Kuyare et al., 2014). In contrast, morals refer to an individual’s own principles regarding right and wrong. Quinn (2011) defines morality as “rules of conduct describing what people ought and ought not to do in various situations …” while ethics is “... the philosophical study of morality, a rational examination into people’s moral beliefs and behaviours”. For instance, in a case where parents demanded that schools overturn a ban on the use of corporal punishment of children by schools and teachers (Children’s Rights Alliance for England, 2005), the parents believed that teachers should assume the role of the parent in school and use corporal or physical punishment for children who misbehaved; their demand stemmed from the beliefs of individuals or groups. Similarly, recent media reports have highlighted some parents opposing LGBT (Lesbian, Gay, Bisexual, and Transgender) education for their children (BBC News, 2019). One parent argued that teaching young children about LGBT issues at a very early stage is “morally” wrong, and that they should be left to learn by themselves as they grow. Such positions are linked to and governed by the morals of an ethnic community; that is, morals are linked to the beliefs of individuals or groups. In contrast, LGBT rights, and the right of children to be protected from “inhuman and degrading” treatment, are based on the ethical principles of the society and governed by the law of the land. Individuals, especially those working in the medical or judicial professions, have to follow the ethical code laid down by their profession, regardless of their own feelings or preferences. For instance, a lawyer is expected to follow professional ethics and represent a defendant even when his or her morals indicate that the defendant is guilty.

In fact, we as a group could not find many scholarly articles clearly comparing or contrasting ethics with morals. However, a table presented by Surbhi ( 2015 ) (Difn website c ) tries to differentiate these two terms (see Table  3 ).

Although Table 3 gives some insight into the differences between these two terms, in practice many people use them loosely, mainly because of their ambiguity. As a group focussed on the application of these principles, we recommend using the term “ethics” and avoiding “morals” in research and academia.

Based on the literature survey carried out, we were able to identify the following gaps:

  • there is some disparity in the existing literature on the importance of ethical guidelines in research, and
  • there is a lack of consensus on what code of conduct should be followed, where it should be derived from and how it should be implemented.

The mission of ENAI’s ethical advisory working group

The Ethical Advisory Working Group of ENAI was established in 2018 to promote an ethical code of conduct/practice amongst higher educational organisations within Europe and beyond (European Network for Academic Integrity, 2018). We aim to provide unbiased advice and consultancy on embedding ethical principles within all types of academic, research and public engagement activities. Our main objective is to promote ethical principles and share good practice in this field. This advisory group aims to standardise ethical norms and to offer strategic support for activities including (but not exclusive to):

● rendering advice and assistance to develop institutional ethical committees and their regulations in member institutions,

● sharing good practice in research and academic ethics,

● acting as a critical guide to institutional review processes, assisting them to maintain/achieve ethical standards,

● collaborating with similar bodies in establishing collegiate partnerships to enhance awareness and practice in this field,

● providing support within and outside ENAI to develop materials to enhance teaching activities in this field,

● organising training for students and early-career researchers about ethical behaviours in form of lectures, seminars, debates and webinars,

● enhancing research and dissemination of the findings in matters and topics related to ethics.

The following sections focus on our suggestions based on collective experiences, review of literature provided in earlier sections and workshop feedback collected:

a) basic needs of an ethical committee within an institution;

b) a typical ethical approval process (with examples from three different universities); and

c) the ways to obtain informed consent with some examples. This would give advice on preparing and training both researchers and research students in appropriately upholding ethical practices through ethical approval processes.

Setting up institutional ethical committees (ECs)

Institutional Ethical Committees (ECs) are essential to govern every aspect of the activities undertaken by an institute. With regard to higher educational organisations, this is vital to establish ethical behaviour for students and staff in the research, education and scholarly activities (or everything) they do. These committees should be knowledgeable about international laws relating to different fields of study (such as science, medicine, business, finance, law, and social sciences). The advantages and disadvantages of institutional, subject-specific or common (statutory) ECs are summarised in Fig. 2. Some institutions have developed individual ECs linked to specific fields (or subject areas) whilst others have one institutional committee that oversees the entire ethical behaviour and approval process. There is no clear preference between the two, as both have their own advantages and disadvantages (see Fig. 2). Subject-specific ECs are attractive to medical, law and business provisions, as it is perceived that the members of the respective committees understand the subject and can therefore comprehend the need for the proposed research/activity (Kadam, 2012; Schnyder et al., 2018). However, others argue that, due to this “specificity”, the committee may fail to forecast the wider implications of an application. On the other hand, university-wide ECs would look into the wider implications, yet they may find it difficult to understand the purpose and the specific applications of the research. Not everyone understands the dynamics of all types of research methodologies, data collection, etc., and therefore there is a chance of a proposal being rejected merely because the EC could not understand the research applications (Getz, 1990).

Figure 2: Summary of advantages and disadvantages of three different forms of ethical committees. [N.B.: Examples of different types of ethical application procedures and forms used were discussed with the workshop attendees to enhance their understanding of the differences. GDPR = General Data Protection Regulation]

Although we recommend a designated EC with relevant professional, academic and ethical expertise to deal with particular types of applications, the membership (of any EC) should include some non-experts who represent the wider community (see above). Having some non-experts in an EC not only helps the researchers to consider explaining their research in layperson’s terms (by thinking outside the box) but also ensures efficiency without compromising participant/animal safety. They may even help to address common ethical issues outside the research culture. Some UK universities offer this membership to a member of the clergy, a councillor or a parliamentarian who has no links to the institution. Most importantly, it is vital for all EC members to undertake further training in addition to having previous experience in the relevant field of research ethics.

Another issue that raises concerns is multi-centre research, involving several institutions, where institutional ethical approvals are needed from each partner. In some cases, such as clinical research within the UK, a common statutory EC called the National Health Service (NHS) Research Ethics Committee (NREC) is in place to cover research ethics involving all partner institutions (NHS, 2018). The process of obtaining approval from this type of EC takes time, so advanced planning is needed.

Ethics approval forms and process

During the workshop, we discussed some anonymised application forms, obtained from open-access sources, for qualitative and quantitative research as examples. Considering research ethics, for the purpose of understanding, we arbitrarily divided research into two categories: research based on (a) quantitative and (b) qualitative methodologies. As their names suggest, their research approaches are extremely different from each other. The discussion elicited how ECs devise different types of ethical application forms/questions. Qualitative research is often conducted through “face-to-face” interviews, which have implications for volunteer anonymity.

Furthermore, the discussions posited that when interviews are replaced by online surveys, the surveys have to be administered through registered university staff to maintain confidentiality. This becomes difficult when the research is a multi-centre study. These types of issues are also common in medical research regarding participants’ anonymity, confidentiality and, above all, their right to withdraw consent to be involved in research.

Storing and protecting data collected in the process of the study is also a point of consideration when applying for approval.

Finally, the ethical processes for invasive (involving humans/animals) and non-invasive (questionnaire based) research may differ slightly from one another. The following research areas are considered investigations that need ethical approval:

  • research that involves human participants (see below)
  • use of the ‘products’ of human participants (see below)
  • work that potentially impacts on humans (see below)
  • research that involves animals

In addition, it is important to provide a disclaimer even if an ethical approval is deemed unnecessary. The following word cloud (Fig. 3) shows the important variables that need to be considered at the brainstorming stage before an ethical application. It is worth noting the importance of proactive planning, predicting the “unexpected” during the different phases of a research project (such as planning, execution, publication, and future directions). Some applications (such as working with vulnerable individuals or children) will require safety protection clearance (such as DBS - Disclosure and Barring Service clearance, commonly obtained from the local police). Please see the section on Research involving humans - informed consents for further discussion.

Figure 3: Examples of important variables that need to be considered for an ethical approval

It is also imperative to report or re-apply for ethical approval for any minor or major post-approval changes made to the original proposal. In the case of methodological changes, evidence of risk assessments for the changes and/or COSHH (Control of Substances Hazardous to Health Regulations) assessments should also be given. Likewise, the addition of new collaborative partners or the removal of researchers should also be notified to the IEAC.

Other findings include:

  • in case of complete changes to the project, the research must be stopped and new approval should be sought,
  • in case of noticing any adverse effects on project participants (human or non-human), these should also be notified to the committee for appropriate clearance to continue the work, and
  • the completion of the project must also be notified, with an indication of whether the researchers may restart the project at a later stage.

Research involving humans - informed consents

While discussing research involving humans, and based on the literature review, the findings highlight that human subjects/volunteers must willingly participate in research after being adequately informed about the project. Therefore, research involving humans and animals takes precedence in obtaining ethical clearance and requires strict adherence to its conditions, one of which is providing a participant information sheet/leaflet. This sheet should contain a full explanation, in writing and in layperson’s terms, of the research that is being carried out (Manti and Licari, 2018; Hardicre, 2014). Measures should also be in place to explain and clarify any doubts from the participants. In addition, there should be a clear statement on how the participants’ anonymity is protected. We provide some example questions below to help researchers write this participant information sheet:

  • What is the purpose of the study?
  • Why have they been chosen?
  • What will happen if they take part?
  • What do they have to do?
  • What happens when the research stops?
  • What if something goes wrong?
  • What will happen to the results of the research study?
  • Will taking part be kept confidential?
  • How to handle “vulnerable” participants?
  • How to mitigate risks to participants?

Many institutional ethics committees expect researchers to produce an FAQ (frequently asked questions) sheet in addition to the information about the research. Most importantly, the researchers also need to provide an informed consent form, which should be signed by each human participant. The five elements identified as needing to be considered for an informed consent statement are summarized in Fig. 4 below (slightly modified from the Federal Policy for the Protection of Human Subjects (2018) - Diffn website c).

figure 4

Five basic elements to consider for an informed consent [figure adapted from Difn website c]

The informed consent form should always contain a clause allowing the participant to withdraw their consent at any time. Should this happen, all data from that participant should be eliminated from the study without affecting their anonymity.
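To illustrate how the withdrawal clause can be honoured in practice, here is a minimal sketch, assuming study data are keyed by pseudonymous IDs rather than names; the class and its methods are invented for illustration and do not represent a prescribed implementation.

```python
# Illustrative sketch: consent and study data are keyed by pseudonym,
# so a withdrawal removes all of a participant's data while their
# identity is never stored alongside the research records.
class ConsentRegistry:
    def __init__(self) -> None:
        self.consented: set[str] = set()      # pseudonymous participant IDs
        self.responses: dict[str, list] = {}  # study data per pseudonym

    def enrol(self, pseudonym: str) -> None:
        # Enrol only once a signed informed consent form is on file.
        self.consented.add(pseudonym)
        self.responses[pseudonym] = []

    def record(self, pseudonym: str, datum: object) -> None:
        if pseudonym not in self.consented:
            raise PermissionError("No informed consent on record.")
        self.responses[pseudonym].append(datum)

    def withdraw(self, pseudonym: str) -> None:
        # Consent may be withdrawn at any time: every record from this
        # participant is eliminated, with anonymity unaffected.
        self.consented.discard(pseudonym)
        self.responses.pop(pseudonym, None)
```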

Typical research ethics approval process

In this section, we provide an example flow chart explaining how researchers may choose the appropriate application and process, as highlighted in Fig. 5. However, it is imperative to note that these are examples only; some institutions may have one unified application with separate sections to demarcate qualitative and quantitative research criteria.

figure 5

Typical ethical approval processes for quantitative and qualitative research. [N.B. for Fig. 5 - this simplified flow chart shows that the fundamental process for invasive and non-invasive EC applications is the same, although the routes and the requirements for additional information differ slightly]

Once the ethical application is submitted, the EC should ensure a clear approval procedure with a distinctly defined timeline. An example flow chart showing the procedure for ethical approval, obtained from the University of Leicester as open access, is presented in Fig. 6. Further examples of the ethical approval process and governance were discussed in the workshop.

figure 6

An example of the ethical approval procedure used within the University of Leicester (figure obtained from the University of Leicester research pages - Difn website d - open access)

Strategies for ethics education for students

Educating students on the importance of ethics and ethical behaviour in research and scholarly activities is essential. The literature on medical research shows that many universities incorporate ethics into postgraduate degrees, but there is less appetite to deliver modules, or even lectures, focussing on research ethics at undergraduate level (Seymour et al. 2004; Willison and O’Regan 2007). This may be because undergraduate degree structures do not really focus on research (DePasse et al. 2016). However, as Orr (2018) suggested, institutions should focus more on educating all students about ethics/ethical behaviour and their importance in research than on enforcing punitive measures for unethical behaviour. Therefore, as an advisory committee, and based on our preliminary literature survey and workshop results, we strongly recommend incorporating ethics education within the undergraduate curriculum. Among institutions that provide ethics education for both undergraduate and postgraduate courses, the approaches are (a) lecture-based delivery, (b) a case-study-based approach, or (c) a combined delivery starting with a lecture on the basic principles of ethics, followed by a debate-based discussion using interesting case studies. As our findings explain next, the combined method appears much more effective than the other two.

As many academics who have taught ethics and/or research ethics agree, the underlying principles of ethics are often perceived as a boring subject. Lecture-based delivery alone may therefore not be suitable. On the other hand, a debate-based approach, though attractive and quick to generate student interest, cannot be effective unless students first understand the underlying basic principles. In addition, when selecting case studies, it is advisable to choose cases addressing all the different types of ethical dilemmas. As an advisory group within ENAI, we are in the process of collating supporting materials to help develop institutional policies, creating advisory documents to help in obtaining ethical approvals, and producing teaching materials to enhance debate-based lesson plans that can be used by member and other institutions.

Concluding remarks

In summary, our literature survey and workshop findings highlight that researchers should accept that ethics underpins everything we do, especially in research. Although obtaining ethical approval is tedious, it is an imperative process in which proactive thinking is essential to identify ethical issues that might affect the project. Our findings further show that the ethical approval process differs from institution to institution, and we strongly recommend that researchers follow their institutional guidelines and the underlying ethical principles. The ENAI workshop in Vilnius highlighted the importance of ethical governance through the establishment of ECs, discussed different types of ECs and procedures with some examples, and highlighted the importance of student education in building an ethical culture within research communities, an area that warrants further study.

Declarations

The manuscript was entirely written by the corresponding author, with contributions from co-authors who also took part in the delivery of the workshop.

Availability of data and materials

Authors confirm that the data supporting the findings of this study are available within the article.

Abbreviations

ALL European Academics

ARC: Australian Research Council

BBSRC: Biotechnology and Biological Sciences Research Council

CIHR: Canadian Institutes for Health Research

COPE: Committee on Publication Ethics

EC: Ethical Committee

ENAI: European Network for Academic Integrity

ESRC: Economic and Social Research Council

ICPA: International Convention for the Protection of Animals

IEAC: Institutional Ethical Advisory Committee

IRB: Institutional Review Board

Immaculata University of Pennsylvania

LGBT: Lesbian, Gay, Bisexual, and Transgender

MRC: Medical Research Council

NHS: National Health Service

NIH: National Institutes of Health

NICE: National Institute for Health and Care Excellence

NHMRC: National Health and Medical Research Council

NSERC: Natural Sciences and Engineering Research Council

NREC: National Research Ethics Committee

NSECHR: National Statement on Ethical Conduct in Human Research

RRP: Responsible Research Practice

SSHRC: Social Sciences and Humanities Research Council

TCPS: Tri-Council Policy Statement

OIE: World Organisation for Animal Health

UA: Universities Australia

UKRI: UK Research and Innovation

OHRP: US Office for Human Research Protections

Alba S, Lenglet A, Verdonck K, Roth J, Patil R, Mendoza W, Juvekar S, Rumisha SF (2020) Bridging research integrity and global health epidemiology (BRIDGE) guidelines: explanation and elaboration. BMJ Glob Health 5(10):e003237. https://doi.org/10.1136/bmjgh-2020-003237


Anderson MS (2011) Research misconduct and misbehaviour. In: Bertram Gallant T (ed) Creating the ethical academy: a systems approach to understanding misconduct and empowering change in higher education. Routledge, pp 83–96

BBC News. (2019). Birmingham school LGBT lessons protest investigated. March 8, 2019. Retrieved February 14, 2021. Available online. URL: https://www.bbc.com/news/uk-england-birmingham-47498446

Children’s Rights Alliance for England. (2005). R (Williamson and others) v Secretary of State for Education and Employment. Session 2004–05. [2005] UKHL 15. Available Online. URL: http://www.crae.org.uk/media/33624/R-Williamson-and-others-v-Secretary-of-State-for-Education-and-Employment.pdf

Council of Europe. (2014). Texts of the Council of Europe on bioethical matters. Available Online. https://www.coe.int/t/dg3/healthbioethic/Texts_and_documents/INF_2014_5_vol_II_textes_%20CoE_%20bio%C3%A9thique_E%20(2).pdf

Dellaportas S, Kanapathippillai S, Khan A, Leung P (2014) Ethics education in the Australian accounting curriculum: a longitudinal study examining barriers and enablers. Account Educ 23(4):362–382. https://doi.org/10.1080/09639284.2014.930694

DePasse JM, Palumbo MA, Eberson CP, Daniels AH (2016) Academic characteristics of orthopaedic surgery residency applicants from 2007 to 2014. JBJS 98(9):788–795. https://doi.org/10.2106/JBJS.15.00222

Desmond H, Dierickx K (2021) Research integrity codes of conduct in Europe: understanding the divergences. Bioethics. https://doi.org/10.1111/bioe.12851

Difn website a - National Statement on Ethical Conduct in Human Research (NSECHR). (2018). Available Online. URL: https://www.nhmrc.gov.au/about-us/publications/australian-code-responsible-conduct-research-2018

Difn website b - Enago academy Importance of Ethics Committees in Scholarly Research (2020, October 26). Available online. URL: https://www.enago.com/academy/importance-of-ethics-committees-in-scholarly-research/

Difn website c - Ethics vs Morals - Difference and Comparison. Retrieved July 14, 2020. Available online. URL: https://www.diffen.com/difference/Ethics_vs_Morals

Difn website d - University of Leicester. (2015). Staff ethics approval flowchart. May 1, 2015. Retrieved July 14, 2020. Available Online. URL: https://www2.le.ac.uk/institution/ethics/images/ethics-approval-flowchart/view

European Commission - Facilitating Research Excellence in FP7 (2013) https://ec.europa.eu/research/participants/data/ref/fp7/89888/ethics-for-researchers_en.pdf

European Network for Academic Integrity. (2018). Ethical advisory group. Retrieved February 14, 2021. Available online. URL: http://www.academicintegrity.eu/wp/wg-ethical/

Federal Policy for the Protection of Human Subjects. (2018). Retrieved February 14, 2021. Available Online. URL: https://www.federalregister.gov/documents/2017/01/19/2017-01058/federal-policy-for-the-protection-of-human-subjects#p-855

Flite CA, Harman LB (2013) Code of ethics: principles for ethical leadership. Perspect Health Inf Manag 10(Winter):1d. PMID: 23346028

Fouka G, Mantzorou M (2011) What are the major ethical issues in conducting research? Is there a conflict between the research ethics and the nature of nursing. Health Sci J 5(1) Available Online. URL: https://www.hsj.gr/medicine/what-are-the-major-ethical-issues-in-conducting-research-is-there-a-conflict-between-the-research-ethics-and-the-nature-of-nursing.php?aid=3485

Fox G (2017) History and ethical principles. The University of Miami and the Collaborative Institutional Training Initiative (CITI) Program URL  https://silo.tips/download/chapter-1-history-and-ethical-principles # (Available Online)

Getz KA (1990) International codes of conduct: An analysis of ethical reasoning. J Bus Ethics 9(7):567–577

Ghooi RB (2011) The nuremberg code–a critique. Perspect Clin Res 2(2):72–76. https://doi.org/10.4103/2229-3485.80371

Hardicre J (2014) Valid informed consent in research: an introduction. Br J Nurs 23(11):564–567. https://doi.org/10.12968/bjon.2014.23.11.564

Hazard, GC (Jr). (1994). Law, morals, and ethics. Yale law school legal scholarship repository. Faculty Scholarship Series. Yale University. Available Online. URL: https://digitalcommons.law.yale.edu/cgi/viewcontent.cgi?referer=https://www.google.com/&httpsredir=1&article=3322&context=fss_papers

Israel, M., & Drenth, P. (2016). Research integrity: perspectives from Australia and Netherlands. In T. Bretag (Ed.), Handbook of academic integrity (pp. 789–808). Springer, Singapore. https://doi.org/10.1007/978-981-287-098-8_64

Kadam R (2012) Proactive role for ethics committees. Indian J Med Ethics 9(3):216. https://doi.org/10.20529/IJME.2012.072

Kant I (2018) The metaphysics of morals. Cambridge University Press, UK https://doi.org/10.1017/9781316091388

Kayser-Jones J (2003) Continuing to conduct research in nursing homes despite controversial findings: reflections by a research scientist. Qual Health Res 13(1):114–128. https://doi.org/10.1177/1049732302239414

Kotecha JA, Manca D, Lambert-Lanning A, Keshavjee K, Drummond N, Godwin M, Greiver M, Putnam W, Lussier M-T, Birtwhistle R (2011) Ethics and privacy issues of a practice-based surveillance system: need for a national-level institutional research ethics board and consent standards. Can Fam physician 57(10):1165–1173.  https://europepmc.org/article/pmc/pmc3192088

Kuyare, MS., Taur, SR., Thatte, U. (2014). Establishing institutional ethics committees: challenges and solutions–a review of the literature. Indian J Med Ethics. https://doi.org/10.20529/IJME.2014.047

LaFollette, H. (2007). Ethics in practice (3rd edition). Blackwell

Larry RC (1982) The teaching of ethics and moral values in teaching. J High Educ 53(3):296–306. https://doi.org/10.1080/00221546.1982.11780455

Manti S, Licari A (2018) How to obtain informed consent for research. Breathe (Sheff) 14(2):145–152. https://doi.org/10.1183/20734735.001918

Mulholland MW, Bell J (2005) Research Governance and Research Funding in the USA: What the academic surgeon needs to know. J R Soc Med 98(11):496–502. https://doi.org/10.1258/jrsm.98.11.496

National Institute of Health (NIH) Ethics in Clinical Research. n.d. Available Online. URL: https://clinicalcenter.nih.gov/recruit/ethics.html

NHS (2018) Flagged Research Ethics Committees. Retrieved February 14, 2021. Available online. URL: https://www.hra.nhs.uk/about-us/committees-and-services/res-and-recs/flagged-research-ethics-committees/

NICE (2018) Research governance policy. Retrieved February 14, 2021. Available online. URL: https://www.nice.org.uk/Media/Default/About/what-we-do/science-policy-and-research/research-governance-policy.pdf

Orr, J. (2018). Developing a campus academic integrity education seminar. J Acad Ethics 16(3), 195–209. https://doi.org/10.1007/s10805-018-9304-7

Quinn, M. (2011). Introduction to Ethics. Ethics for an Information Age. 4th Ed. Ch 2. 53–108. Pearson. UK

Resnik DB (2020) What is ethics in research & why is it important? Available Online. URL: https://www.niehs.nih.gov/research/resources/bioethics/whatis/index.cfm

Schnyder S, Starring H, Fury M, Mora A, Leonardi C, Dasa V (2018) The formation of a medical student research committee and its impact on involvement in departmental research. Med Educ Online 23(1):1. https://doi.org/10.1080/10872981.2018.1424449

Seymour E, Hunter AB, Laursen SL, DeAntoni T (2004) Establishing the benefits of research experiences for undergraduates in the sciences: first findings from a three-year study. Sci Educ 88(4):493–534. https://doi.org/10.1002/sce.10131

Shamoo AE, Irving DN (1993) Accountability in research using persons with mental illness. Account Res 3(1):1–17. https://doi.org/10.1080/08989629308573826

Shaw S, Boynton PM, Greenhalgh T (2005) Research governance: where did it come from, what does it mean? J R Soc Med 98(11):496–502. https://doi.org/10.1258/jrsm.98.11.496


Speight JG (2016) Ethics in the university. Scrivener Publishing LLC. https://doi.org/10.1002/9781119346449

Stephenson GK, Jones GA, Fick E, Begin-Caouette O, Taiyeb A, Metcalfe A (2020) What’s the protocol? Canadian university research ethics boards and variations in implementing tri-council policy. Can J High Educ 50(1):68–81

Surbhi, S. (2015). Difference between morals and ethics [weblog]. March 25, 2015. Retrieved February 14, 2021. Available Online. URL: http://keydifferences.com/difference-between-morals-and-ethics.html

The Belmont Report (1979). Ethical Principles and Guidelines for the Protection of Human Subjects of Research. The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. Retrieved February 14, 2021. Available online. URL: https://www.hhs.gov/ohrp/sites/default/files/the-belmont-report-508c_FINAL.pdf

The Singapore Statement on Research Integrity. (2020). Nicholas Steneck and Tony Mayer, Co-chairs, 2nd World Conference on Research Integrity; Melissa Anderson, Chair, Organizing Committee, 3rd World Conference on Research Integrity. Retrieved February 14, 2021. Available online. URL: https://wcrif.org/documents/327-singapore-statement-a4size/file

Warwick K (2003) Cyborg morals, cyborg values, cyborg ethics. Ethics Inf Technol 5(3):131–137. https://doi.org/10.1023/B:ETIN.0000006870.65865.cf

Weindling P (2001) The origins of informed consent: the international scientific commission on medical war crimes, and the Nuremberg code. Bull Hist Med 75(1):37–71. https://doi.org/10.1353/bhm.2001.0049

WHO. (2009). Research ethics committees Basic concepts for capacity-building. Retrieved February 14, 2021. Available online. URL: https://www.who.int/ethics/Ethics_basic_concepts_ENG.pdf

WHO. (2021). Chronological list of publications. Retrieved February 14, 2021. Available online. URL: https://www.who.int/ethics/publications/year/en/

Willison, J. and O’Regan, K. (2007). Commonly known, commonly not known, totally unknown: a framework for students becoming researchers. High Educ Res Dev 26(4). 393–409. https://doi.org/10.1080/07294360701658609

Žukauskas P, Vveinhardt J, Andriukaitienė R (2018) Research ethics. In: Vveinhardt J (ed) Management culture and corporate social responsibility. IntechOpen. https://doi.org/10.5772/intechopen.70629


Acknowledgements

The authors wish to thank the organising committee of the 5th International Conference “Plagiarism across Europe and Beyond” in Vilnius, Lithuania, for accepting this paper for presentation at the conference.

Funding: not applicable, as this is an independent study that is not funded by any internal or external bodies.

Author information

Authors and affiliations

School of Human Sciences, University of Derby, DE22 1, Derby, GB, UK

Shivadas Sivasubramaniam

Department of Informatics, Mendel University in Brno, Zemědělská, 1665, Brno, Czechia

Dita Henek Dlabolová

Centre for Academic Integrity in the UAE, Faculty of Engineering & Information Sciences, University of Wollongong in Dubai, Dubai, UAE

Veronika Kralikova & Zeenath Reza Khan


Contributions

The manuscript was entirely written by the corresponding author, with contributions from co-authors who contributed equally to the presentation of this paper at the 5th International Conference “Plagiarism across Europe and Beyond” in Vilnius, Lithuania. The authors contributed equally to the information collection, which was then summarised as narrative explanations by the corresponding author and Dr. Zeenath Reza Khan, and checked and verified by Dr. Dlabolová and Ms. Králíková. The author(s) read and approved the final manuscript.

Corresponding author

Correspondence to Shivadas Sivasubramaniam .

Ethics declarations

Competing interests

We confirm that there are no potential competing interests with other organisations.

Additional information

Publisher’s note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

Reprints and permissions

About this article

Cite this article.

Sivasubramaniam, S., Dlabolová, D.H., Kralikova, V. et al. Assisting you to advance with ethics in research: an introduction to ethical governance and application procedures. Int J Educ Integr 17 , 14 (2021). https://doi.org/10.1007/s40979-021-00078-6

Download citation

Received : 17 July 2020

Accepted : 25 April 2021

Published : 13 July 2021

DOI : https://doi.org/10.1007/s40979-021-00078-6


Keywords

  • Higher education
  • Ethical codes
  • Ethics committee
  • Post-secondary education
  • Institutional policies
  • Research ethics



Enago Academy

What Are the Ethical Considerations in Research Design?


When I began work on my thesis, I was focused solely on my research. However, as I made my way through it, I realized that research ethics is a core aspect of research work and the foundation of research design.

Why Research Ethics Matter

Research ethics play a crucial role in ensuring the responsible conduct of research. Let us look into some of the major ethical considerations in research design.

Ethical Issues in Research

There are many organizations, like the Committee on Publication Ethics , dedicated to promoting ethics in scientific research. These organizations agree that ethics is not an afterthought or side note to the research study. It is an integral aspect of research that needs to remain at the forefront of our work.

The research design must address specific research questions. Hence, the conclusions of the study must correlate to the questions posed and the results. Also, research ethics demands that the methods used must relate specifically to the research questions.

Voluntary Participation and Consent

An individual should at no point feel any coercion to participate in a study. This includes any type of persuasion or deception in attempting to gain an individual’s trust.

Informed consent means that an individual must give their explicit consent to participate in the study. You can think of the consent form as an agreement of trust between the researcher and the participants.

Sampling is the first step in research design. You will need to explain why you want a particular group of participants, and why you left out certain people or groups. In addition, if your sample includes children or special-needs individuals, you will have additional requirements to address, like parental permission.

Confidentiality

The third ethics principle of the Economic and Social Research Council (ESRC) states that: “The confidentiality of the information supplied by research subjects and the anonymity of respondents must be respected.” However, sometimes confidentiality is limited. For example, if a participant is at risk of harm, we must protect them. This might require releasing confidential information.
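One widely used technical safeguard for confidentiality is pseudonymization: direct identifiers are replaced with keyed hashes before analysis, and only the separately held secret key permits re-identification (for example, when a participant at risk must be contacted). The sketch below is a minimal illustration; the key, field names, and truncation length are assumptions, not a mandated scheme.

```python
# Minimal pseudonymization sketch: names are replaced by keyed hashes
# (HMAC-SHA256). Without the secret key, which is stored separately
# from the data, records cannot be linked back to individuals.
import hashlib
import hmac

SECRET_KEY = b"keep-this-key-separate-from-the-dataset"  # assumption

def pseudonymize(identifier: str) -> str:
    digest = hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256)
    return digest.hexdigest()[:12]  # truncated for readability

raw_records = [{"name": "Participant A", "score": 41},
               {"name": "Participant B", "score": 37}]

# Only pseudonymized records leave the secure environment.
shareable = [{"id": pseudonymize(r["name"]), "score": r["score"]}
             for r in raw_records]
print(shareable)
```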

Risk of Harm

We should do everything in our power to protect study participants. For this, we should focus on the risk-to-benefit ratio. If possible risks outweigh the benefits, then we should abandon or redesign the study. Managing risk of harm also requires us to reassess the risk-to-benefit ratio as the study progresses.
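Reassessing risk as a study progresses is often formalized through pre-registered stopping rules agreed with the review board. The sketch below is a toy illustration; the 5% adverse-event threshold is invented for the example and is not a recommended value.

```python
# Toy stopping-rule check: flag the study for ethics review if the
# adverse-event rate among participants exceeds a pre-agreed threshold.
# The default threshold here is an invented example value.
def review_needed(adverse_events: int, participants: int,
                  threshold: float = 0.05) -> bool:
    if participants == 0:
        return False
    return adverse_events / participants > threshold

assert review_needed(adverse_events=4, participants=50)      # 8% > 5%
assert not review_needed(adverse_events=1, participants=50)  # 2% <= 5%
```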

Research Methods

We know there are numerous research methods. However, when it comes to ethical considerations, some key questions can help us find the right approach for our studies.

i. Which methods most effectively fit the aims of your research?

ii. What are the strengths and restrictions of a particular method?

iii. Are there potential risks when using a particular research method?

For more guidance, you can refer to the ESRC Framework for Research Ethics .

Examples of Ethical Issues in Research

Ethical issues can arise at various stages of the research process and involve different aspects of the study.

Institutional Review Boards

The importance of ethics in research cannot be overstated. Following ethical guidelines will ensure your study’s validity and promote its contribution to scientific study. On a personal level, you will strengthen your research and increase your opportunities to gain funding.

To address the need for ethical considerations, most institutions have their own Institutional Review Board (IRB). An IRB secures the safety of human participants and prevents violation of human rights. It reviews the research aims and methodologies to ensure ethical practices are followed. If a research design does not follow the set ethical guidelines, the researcher will have to amend their study.

Applying for Ethical Approval

Applications for ethical approval differ across institutions. Regardless, they focus on the benefits of your research and the risk-to-benefit ratio concerning participants. Therefore, you need to address both effectively in order to get ethical clearance.

Participants

It is vital that you show individuals are provided with sufficient information to make an informed decision on their participation. In addition, you need to demonstrate that the ethical issues of consent, risk of harm, and confidentiality are clearly defined.

Benefits of the Study

You need to prove to the panel that your work is essential and will yield results that contribute to the scientific community. For this, you should demonstrate the following:

i. The conduct of research guarantees the quality and integrity of results.

ii. The research will be properly disseminated.

iii. The aims of the research are clear and the methodology is appropriate.

Integrity and transparency are vital in research. Ethics committees expect you to share any actual or potential conflicts of interest that could affect your work. In addition, you have to be honest and transparent throughout the approval process and the research process.

The Dangers of Unethical Practices

There is a reason to follow ethical guidelines: without them, our research will suffer. More importantly, people could suffer.

The following are just two examples of infamous cases of unethical research practices that demonstrate the importance of adhering to ethical standards:

  • The Stanford Prison Experiment (1971) aimed to investigate the psychological effects of power using the relationship between prisoners and prison officers. Those assigned the role of “prison officers” adopted measures that exposed the “prisoners” to psychological and physical harm. In this case, participation was voluntary, but there was disregard for the welfare of the participants.
  • More recently, Chinese scientist He Jiankui announced his work on genetically edited babies. Over 100 Chinese scientists denounced the research, calling it “crazy” and “shocking and unacceptable.” It shows a troubling attitude of “do first, debate later” and a disregard for the ethical concerns of manipulating the human body. Wang Yuedan, a professor of immunology at Peking University, calls this “an ethics disaster for the world” and demands strict punishment for this type of ethics violation.

What are your experiences with research ethics? How have you developed an ethical approach to research design? Please share your thoughts with us in the comments section below.



SkillsYouNeed


Ethical Issues in Research


Ethics are broadly the set of rules, written and unwritten, that govern our expectations of our own and others’ behaviour.

Effectively, they set out how we expect others to behave, and why. While there is broad agreement on some ethical values (for example, that murder is bad), there is also wide variation on how exactly these values should be interpreted in practice.

Research ethics are the set of ethics that govern how scientific and other research is performed at research institutions such as universities, and how it is disseminated.

This page explains more about research ethics, and how you can ensure that your research is compliant.

What are Research Ethics?

When most people think of research ethics, they think about issues that arise when research involves human or animal subjects.

While these issues are indeed a key part of research ethics, there are also wider issues about standards of conduct. These include the importance of publishing findings in a transparent way, not plagiarising others’ work, and not falsifying work.

The Importance of Research Ethics

Research ethics are important for a number of reasons.

  • They promote the aims of research, such as expanding knowledge.
  • They support the values required for collaborative work, such as mutual respect and fairness. This is essential because scientific research depends on collaboration between researchers and groups.
  • They mean that researchers can be held accountable for their actions. Many researchers are supported by public money, and regulations on conflicts of interest, misconduct, and research involving humans or animals are necessary to ensure that money is spent appropriately.
  • They ensure that the public can trust research. For people to support and fund research, they have to be confident in it.
  • They support important social and moral values, such as the principle of doing no harm to others.

Source: Resnik, D. B. (2015) What is Ethics in Research and Why is it Important?

Codes of Ethics

Government agencies who fund or commission research often publish codes of conduct for researchers, or codes of ethics.

For example, the US National Institutes of Health (NIH) and Food and Drug Administration (FDA) both publish ethical codes. Some ethical codes may have the force of law behind them, while others may simply be advisory.

Be aware that even if you do nothing illegal, doing something unethical may end your research career.

Many or even most ethical codes cover the following areas:

Honesty and Integrity

This means that you need to report your research honestly, and that this applies to your methods (what you did), your data, your results, and whether you have previously published any of it. You should not make up any data, including extrapolating unreasonably from some of your results, or do anything that could be construed as trying to mislead anyone. It is better to undersell than to exaggerate your findings.

When working with others, you should always keep to any agreements, and act sincerely.

Objectivity

You should aim to avoid bias in any aspect of your research, including design, data analysis, interpretation, and peer review. For example, you should never recommend as a peer reviewer someone you know, or who you have worked with, and you should try to ensure that no groups are inadvertently excluded from your research. This also means that you need to disclose any personal or financial interests that may affect your research.

Carefulness

Take care in carrying out your research to avoid careless mistakes. You should also review your work carefully and critically to ensure that your results are credible. It is also important to keep full records of your research. If you are asked to act as a peer reviewer, you should take the time to do the job effectively and fully.

Openness

You should always be prepared to share your data and results, along with any new tools that you have developed, when you publish your findings, as this helps to further knowledge and advance science. You should also be open to criticism and new ideas.

Respect for Intellectual Property

You should never plagiarise, or copy, other people’s work and try to pass it off as your own. You should always ask for permission before using other people’s tools or methods, unpublished data or results; using them without permission is plagiarism. Obviously, you need to respect copyrights and patents, together with other forms of intellectual property, and always acknowledge contributions to your research. If in doubt, acknowledge, to avoid any risk of plagiarism.

Confidentiality

You should respect anything that has been provided in confidence. You should also follow guidelines on protection of sensitive information such as patient records.

Responsible Publication

You should publish to advance the state of research and knowledge, and not just to advance your career. This means, in essence, that you should not publish anything that is not new, or that duplicates someone else’s work.

Legality

You should always be aware of laws and regulations that govern your work, and be sure that you conform to them.

Animal Care

If you are using animals in your research, you should always be sure that your experiments are both necessary and well-designed. You should also show respect for the animals you are using, and make sure that they are properly cared for.

Human Subjects Protection

If your research involves people, you should make sure that you reduce any possible harm to the minimum, and maximise the benefits both to participants and other people.

This means, for example, that you should not expose people to more tests than are strictly necessary to fulfil your research aims. You should always respect human rights, including the right to privacy and autonomy. You may need to take particular care with vulnerable groups, which include, but are not limited to, children, older people, and those with learning difficulties.

Source: Resnik, D. B. (2015) What is Ethics in Research and Why is it Important? List adapted from Shamoo A and Resnik D. 2015. Responsible Conduct of Research, 3rd ed. (New York: Oxford University Press).


The Role of the Ethics Committee

Most universities have an ethics committee. This is required to scrutinise all research proposals, to ensure that they do not raise any ethical issues. This will generally include research for master’s and undergraduate degrees, although undergraduate research may be covered by a broader research proposal from your supervisor.

There is likely to be a standard form to complete for ethical approval, which will cover who will be involved, how you will recruit your participants, and what steps you will take to ensure that they have provided informed consent.
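Because the form's required sections are known before submission, some teams keep a simple completeness check to catch omissions early. The sketch below encodes the three sections mentioned above; the field names are hypothetical, since real forms vary by institution.

```python
# Hypothetical pre-submission checklist for an ethical approval form.
# The section names mirror those mentioned above; real forms differ.
REQUIRED_SECTIONS = [
    "who_will_be_involved",
    "how_participants_are_recruited",
    "informed_consent_steps",
]

def missing_sections(application: dict) -> list[str]:
    # Return any required section that is absent or left blank.
    return [s for s in REQUIRED_SECTIONS
            if not str(application.get(s, "")).strip()]

draft = {"who_will_be_involved": "Adult volunteers recruited via posters",
         "how_participants_are_recruited": ""}
print(missing_sections(draft))
# -> ['how_participants_are_recruited', 'informed_consent_steps']
```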

There is an example form on our page Writing a Research Proposal , which also contains more detail about how to go about preparing a proposal.

The ethics committee’s role is to consider that what you are doing is appropriate and proportionate to your research aims.

If a research proposal raises ethical issues, the committee will ask the researcher to look again at the issue, and consider whether they could do it differently.

For example, if you are proposing to carry out a study on a particular disease, and you want to ask all your participants whether they are married and have any children, the committee may want to know why this is relevant. It may be relevant (for example, if you think the disease may be reduced by living in a family), in which case, you will need to justify this.

The committee may also suggest alternative methods that they think are more suitable for the target group, or additional precautions that you should take.

You cannot start your research until you have been granted ethical approval, which will be granted formally, together with an approval number.

When you publish your research, whether as a thesis or in one or more journal articles, you will need to provide details of the ethical approval, including this number.

If you are unsure how to behave in a particular situation…

…and think you may have an ethical dilemma, then you should always seek advice before you act.

If you are a student, your supervisor should be happy to help and advise you. If necessary, they will be able to advise you about who else to ask.

As a researcher, you should consult more senior colleagues, either at your own institution or elsewhere, who should be happy to help you.

After all, it is in everyone’s interests to promote research ethics, and support the integrity and reputation of research.


Have a language expert improve your writing

Run a free plagiarism check in 10 minutes, automatically generate references for free.

  • Knowledge Base
  • Methodology
  • Ethical Considerations in Research | Types & Examples

Ethical Considerations in Research | Types & Examples

Published on 7 May 2022 by Pritha Bhandari .

Ethical considerations in research are a set of principles that guide your research designs and practices. Scientists and researchers must always adhere to a certain code of conduct when collecting data from people.

The goals of human research often include understanding real-life phenomena, studying effective treatments, investigating behaviours, and improving lives in other ways. What you decide to research and how you conduct that research involve key ethical considerations.

These considerations work to:

  • Protect the rights of research participants
  • Enhance research validity
  • Maintain scientific integrity

Table of contents

Why do research ethics matter, getting ethical approval for your study, types of ethical issues, voluntary participation, informed consent, confidentiality, potential for harm, results communication, examples of ethical failures, frequently asked questions about research ethics.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe for research subjects.

You’ll balance pursuing important research aims with using ethical research methods and procedures. It’s always necessary to prevent permanent or excessive harm to participants, whether inadvertent or not.

Defying research ethics will also lower the credibility of your research because it’s hard for others to trust your data if your methods are morally questionable.

Even if a research idea is valuable to society, it doesn’t justify violating the human rights or dignity of your study participants.

Prevent plagiarism, run a free check.

Before you start any study involving data collection with people, you’ll submit your research proposal to an institutional review board (IRB) .

An IRB is a committee that checks whether your research aims and research design are ethically acceptable and follow your institution’s code of conduct. They check that your research materials and procedures are up to code.

If successful, you’ll receive IRB approval, and you can begin collecting data according to the approved procedures. If you want to make any changes to your procedures or materials, you’ll need to submit a modification application to the IRB for approval.

If unsuccessful, you may be asked to re-submit with modifications or your research proposal may receive a rejection. To get IRB approval, it’s important to explicitly note how you’ll tackle each of the ethical issues that may arise in your study.

There are several ethical issues you should always pay attention to in your research design, and these issues can overlap with each other.

You’ll usually outline ways you’ll deal with each issue in your research proposal if you plan to collect data from participants.

Voluntary participation Your participants are free to opt in or out of the study at any point in time.
Informed consent Participants know the purpose, benefits, risks, and funding behind the study before they agree or decline to join.
Anonymity You don’t know the identities of the participants. Personally identifiable data is not collected.
Confidentiality You know who the participants are but keep that information hidden from everyone else. You anonymize personally identifiable data so that it can’t be linked to other data by anyone else.
Potential for harm Physical, social, psychological, and all other types of harm are kept to an absolute minimum.
Results communication You ensure your work is free of plagiarism or research misconduct, and you accurately represent your results.

Voluntary participation means that all research subjects are free to choose to participate without any pressure or coercion.

All participants are able to withdraw from, or leave, the study at any point without feeling an obligation to continue. Your participants don’t need to provide a reason for leaving the study.

It’s important to make it clear to participants that there are no negative consequences or repercussions to their refusal to participate. After all, they’re taking the time to help you in the research process, so you should respect their decisions without trying to change their minds.

Voluntary participation is an ethical principle protected by international law and many scientific codes of conduct.

Take special care to ensure there’s no pressure on participants when you’re working with vulnerable groups of people who may find it hard to stop the study even when they want to.

Informed consent refers to a situation in which all potential participants receive and understand all the information they need to decide whether they want to participate. This includes information about the study’s benefits, risks, funding, and institutional approval. Typically, your consent form covers:

  • What the study is about
  • The risks and benefits of taking part
  • How long the study will take
  • Your supervisor’s contact information and the institution’s approval number

Usually, you’ll provide participants with a text for them to read and ask them if they have any questions. If they agree to participate, they can sign or initial the consent form. Note that this may not be sufficient for informed consent when you work with particularly vulnerable groups of people.

If you’re collecting data from people with low literacy, make sure to verbally explain the consent form to them before they agree to participate.

For participants with very limited English proficiency, you should always translate the study materials or work with an interpreter so they have all the information in their first language.

In research with children, you’ll often need informed permission for their participation from their parents or guardians. Although children cannot give informed consent, it’s best to also ask for their assent (agreement) to participate, depending on their age and maturity level.

Anonymity means that you don’t know who the participants are and you can’t link any individual participant to their data.

You can only guarantee anonymity by not collecting any personally identifying information – for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, and videos.

In many cases, it may be impossible to truly anonymize data collection. For example, data collected in person or by phone cannot be considered fully anonymous because some personal identifiers (demographic information or phone numbers) are impossible to hide.

You’ll also need to collect some identifying information if you give your participants the option to withdraw their data at a later stage.

Data pseudonymization is an alternative method where you replace identifying information about participants with pseudonymous, or fake, identifiers. The data can still be linked to participants, but it’s harder to do so because you separate personal information from the study data.
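
To make the distinction concrete, here is a minimal sketch (not from the original article) of how pseudonymization might be implemented, assuming a simple participant table with invented field names and a separate, access-restricted linkage file:

```python
import secrets

# Minimal pseudonymization sketch: swap direct identifiers for random IDs and
# keep the ID-to-identity linkage apart from the study data. The field names
# ("name", "response") are illustrative assumptions.
def pseudonymize(rows):
    linkage = {}       # pseudonym -> real identity; store separately, encrypted
    study_data = []
    for row in rows:
        pid = "P-" + secrets.token_hex(4)  # random, non-guessable identifier
        linkage[pid] = row["name"]
        study_data.append({"participant_id": pid, "response": row["response"]})
    return study_data, linkage

participants = [{"name": "Alice Example", "response": 4},
                {"name": "Bob Example", "response": 2}]
data, linkage = pseudonymize(participants)
print(data)  # contains pseudonyms only, no names
```

The design point is the separation: the study data alone can’t identify anyone, but you can still honor a later withdrawal request by looking the participant up in the securely stored linkage file.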

Confidentiality means that you know who the participants are, but you remove all identifying information from your report.

All participants have a right to privacy, so you should protect their personal data for as long as you store or use it. Even when you can’t collect data anonymously, you should secure confidentiality whenever you can.

Some research designs aren’t conducive to confidentiality, but it’s important to make all attempts and inform participants of the risks involved.

As a researcher, you have to consider all possible sources of harm to participants. Harm can come in many different forms.

  • Psychological harm: Sensitive questions or tasks may trigger negative emotions such as shame or anxiety.
  • Social harm: Participation can involve social risks, public embarrassment, or stigma.
  • Physical harm: Pain or injury can result from the study procedures.
  • Legal harm: Reporting sensitive data could lead to legal risks or a breach of privacy.

It’s best to consider every possible source of harm in your study, as well as concrete ways to mitigate them. Involve your supervisor to discuss steps for harm reduction.

Make sure to disclose all possible risks of harm to participants before the study to get informed consent. If there is a risk of harm, prepare to provide participants with resources, counseling, or medical services if needed.

For example, if your survey covers sensitive topics, some of the questions may bring up negative emotions. You inform participants about the sensitive nature of the survey and assure them that their responses will be confidential.

The way you communicate your research results can sometimes involve ethical issues. Good science communication is honest, reliable, and credible. It’s best to make your results as transparent as possible.

Take steps to actively avoid plagiarism and research misconduct wherever possible.

Plagiarism means submitting others’ works as your own. Although it can be unintentional, copying someone else’s work without proper credit amounts to stealing. It’s an ethical problem in research communication because you may benefit by harming other researchers.

Self-plagiarism is when you republish or re-submit parts of your own papers or reports without properly citing your original work.

This is problematic because you may benefit from presenting your ideas as new and original even though they’ve already been published elsewhere in the past. You may also be infringing on your previous publisher’s copyright, violating an ethical code, or wasting time and resources by doing so.

In extreme cases of self-plagiarism, entire datasets or papers are sometimes duplicated. These are major ethical violations because they can skew research findings if taken as original data.

For example, you might notice that two published studies have similar characteristics even though they are from different years. Their sample sizes, locations, treatments, and results are highly similar, and the studies share one author in common – a red flag for duplicated data.

Research misconduct

Research misconduct means making up or falsifying data, manipulating data analyses, or misrepresenting results in research reports. It’s a form of academic fraud.

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement about data analyses.

Research misconduct is a serious ethical issue because it can undermine scientific integrity and institutional credibility. It leads to a waste of funding and resources that could have been used for alternative research.

In a notorious case, Andrew Wakefield and colleagues published a 1998 study claiming that the MMR vaccine caused autism in children. Later investigations revealed that they fabricated and manipulated their data to show a nonexistent link between vaccines and autism. Wakefield also neglected to disclose important conflicts of interest, and his medical license was taken away.

This fraudulent work sparked vaccine hesitancy among parents and caregivers. The rate of MMR vaccinations in children fell sharply, and measles outbreaks became more common due to a lack of herd immunity.

Research scandals with ethical failures are littered throughout history, but some took place not that long ago.

Some scientists in positions of power have historically mistreated or even abused research participants to investigate research problems at any cost. These participants were often prisoners, patients under their care, or people who otherwise trusted them to treat them with dignity.

To demonstrate the importance of research ethics, we’ll briefly review two research studies that violated human rights in modern history.

During the Second World War, Nazi doctors performed medical experiments on concentration camp prisoners without their consent. These experiments were inhumane and resulted in trauma, permanent disabilities, or death in many cases.

After some Nazi doctors were put on trial for their crimes, the Nuremberg Code of research ethics for human experimentation was developed in 1947 to establish a new standard for human experimentation in medical research.

In the Tuskegee Syphilis Study, which began in 1932, researchers told Black men with syphilis that they were receiving free health care. In reality, the actual goal was to study the effects of the disease when left untreated, and the researchers never informed participants about their diagnoses or the research aims.

Although participants experienced severe health problems, including blindness and other complications, the researchers only pretended to provide medical care.

When treatment became possible in 1943, 11 years after the study began, none of the participants were offered it, despite their health conditions and high risk of death.

Ethical failures like these resulted in severe harm to participants, wasted resources, and lower trust in science and scientists. This is why all research institutions have strict ethical guidelines for performing research.

Ethical considerations in research are a set of principles that guide your research designs and practices. These principles include voluntary participation, informed consent, anonymity, confidentiality, potential for harm, and results communication.

Scientists and researchers must always adhere to a certain code of conduct when collecting data from others.

These considerations protect the rights of research participants, enhance research validity, and maintain scientific integrity.

Research ethics matter for scientific integrity, human rights and dignity, and collaboration between science and society. These principles make sure that participation in studies is voluntary, informed, and safe.

Anonymity means you don’t know who the participants are, while confidentiality means you know who they are but remove identifying information from your research report. Both are important ethical considerations.

You can only guarantee anonymity by not collecting any personally identifying information – for example, names, phone numbers, email addresses, IP addresses, physical characteristics, photos, or videos.

You can keep data confidential by using aggregate information in your research report, so that you only refer to groups of participants rather than individuals.
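
As a rough illustration of that aggregate approach, with invented groups and scores, you might publish only group sizes and means rather than any individual row:

```python
from statistics import mean

# Aggregate-reporting sketch: summarize by group so no individual response
# appears in the report. All records below are invented for illustration.
records = [
    {"group": "intervention", "score": 7},
    {"group": "intervention", "score": 9},
    {"group": "control", "score": 5},
    {"group": "control", "score": 6},
]

for group in ("intervention", "control"):
    scores = [r["score"] for r in records if r["group"] == group]
    print(f"{group}: n = {len(scores)}, mean = {mean(scores):.2f}")
```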

These actions are committed intentionally and can have serious consequences; research misconduct is not a simple mistake or a point of disagreement but a serious ethical failure.


Ethics and Human Rights

Life and death decisions are a part of nursing, and ethics are therefore fundamental to the integrity of the nursing profession. Every day, nurses support each other to fulfill their ethical obligations to patients and the public, but in an ever-changing world, the challenges keep increasing.


The ANA Center for Ethics and Human Rights


The American Nurses Association (ANA) Center for Ethics and Human Rights was established to help nurses navigate ethical and value conflicts, and life and death decisions, many of which are common to everyday practice. The Center develops policy designed to address issues in ethics and human rights at the state, national, and international levels. Through its highly visible information, activities, and programs, the Center promotes the ethical competence and human rights sensitivity of nurses in all practice settings and demonstrates ANA’s abiding commitment to human rights.

  • 2020 Center for Ethics and Human Rights Annual Report
  • 2019 Center for Ethics and Human Rights Annual Report
  • 2018 Center for Ethics and Human Rights Annual Report
  • 2017 Center for Ethics and Human Rights Annual Report

Contact the Center for Ethics and Human Rights at  [email protected]


The Code of Ethics for Nurses with Interpretive Statements,  or “The Code”, is a vital tool for nurses now and in the future. While the foundational values of nursing do not change, The Code is regularly updated to reflect changes in health care structure, financing, and delivery. It supports nurses in providing consistently respectful, humane, and dignified care. These values are often second nature to nurses’ caregiving but are frequently challenged by the failings in U.S. health care and by negative social determinants of health.

The Code, consisting of nine provisions and their accompanying interpretive statements:

  • Provides a succinct statement of the ethical values, obligations, and duties of every individual who enters the nursing profession;
  • Serves as the profession’s nonnegotiable ethical standard; and
  • Expresses nurses' own understanding of our commitment to society.

The Code is particularly valuable in today’s healthcare environment because it clearly and eloquently reiterates the fundamental values and commitments of the nurse (Provisions 1–3), identifies the boundaries of duty and loyalty (Provisions 4–6), and describes the duties of the nurse that extend beyond individual patient encounters (Provisions 7–9).

To serve as the most useful aid in challenging situations, The Code's interpretive statements provide specific guidance for practice. The statements respond to the contemporary context of nursing and recognize the larger scope of nursing’s concern for societal health.

The Code of Ethics for Nurses with Interpretive Statements  is the social contract that nurses have with the U.S. public. It exemplifies our profession's promise to provide and advocate for safe, quality care for all patients and communities. It binds nurses to support each other so that all nurses can fulfill their ethical and professional obligations. This Code is a reflection of the proud ethical heritage of nursing; one which will continue on, whatever challenges the modern health care system presents.


The ANA Center for Ethics and Human Rights helps nurses navigate complex and everyday ethical issues in all practice settings.

ANA position statements on ethics and human rights

In tandem with The Code, ANA’s position statements support nurses by offering an explanation, a justification, or a recommendation for a course of action in particular situations.

  • Capital Punishment and Nurses' Participation in Capital Punishment (Approved 2/24)
  • Nurses' Roles and Responsibilities in Providing Care and Support at the End of Life (Approved 2/24)
  • Privacy and Confidentiality (Revised 2/24)
  • The Ethical Use of Artificial Intelligence in Nursing Practice (Approved 12/20/22)
  • The Nurse's Role and Responsibility in Unveiling and Dismantling Racism in Nursing (2022)
  • Risk and Responsibility in Providing Nursing Care  (Approved 10/6/2022) 
  • Therapeutic use of Marijuana and Related Cannabinoids  (Approved 5/4/2021) 

  • Nurses’ Professional Responsibility to Promote Ethical Practice Environments (Approved 5/4/2021)

  • The Ethical Use of Restraints: Balancing Dual Nursing Duties of Patient Safety and Personal Safety  (Approved 11/23/2020)
  • Nursing Care and Do-not-resuscitate (DNR) Decisions (Approved 2/19/2020) 
  • Nurse’s Role in Providing Ethically and Developmentally Appropriate Care to People With Intellectual and Developmental Disabilities (Approved 10/10/19)
  • Ethical Considerations for Local and Global Volunteerism (Approved 8/2/19) [VIDEO]
  • The Nurse’s Role When a Patient Requests Medical Aid in Dying  (Approved 6/22/19)
  • The Nurse’s Role in Addressing Discrimination: Protecting and Promoting Inclusive Strategies in Practice Settings, Policy, and Advocacy (Approved 10/3/18) [ VIDEO ]
  • The Ethical Responsibility to Manage Pain and the Suffering It Causes (Approved 2/23/2018) [VIDEO]
  • Interdisciplinary Guidelines for Care of Women Presenting to the Emergency Department with Pregnancy Loss  (Endorsed 2/23/2018)
  • Nursing Advocacy for LGBTQ+ Populations (Approved 4/19/18)
  • Non-Punitive Treatment of Pregnant and Breastfeeding Women with Substance Use Disorders (Approved 3/15/17)
  • Nutrition and Hydration at the End of Life  (Revised 6/7/17)
  • Frequently Asked Questions: ANA Position on Capital Punishment
  • The Nurse’s Role in Ethics and Human Rights: Protecting and Promoting Individual Worth, Dignity, and Human Rights in Practice Settings (February 2016)

Retired ANA Position Statements

  • Constituent/State Nurses Associations (C/SNAs) as Ethics Resources, Educators, and Advocates (Retired ANA Position Statement - Approved 11/11/11)
  • Stem Cell Research  (Retired ANA Position Statement - Approved 1/10/07)

Ethics Topics and Resources

For nurses to fulfill their ethical obligations to patients, it is vital to have access to a wide range of information and to keep up-to-date with advances in ethical practices. These articles and links offer context for nurses on difficult issues and best-practice recommendations.

Social Justice

When nurses vow to protect the health and safety of patients, that promise does not end at the bedside. While social justice is a logical extension of the nursing profession, it can be difficult for nurses to navigate these divisive areas and ensure every individual receives timely and high-quality care.

Sign the Pledge Against Torture

Given the importance of ethics and the protection of human rights in nursing practice, the American Nurses Association is urging RNs to join ANA President Pamela F. Cipriano, PhD, RN, NEA-BC, FAAN, and ANA Chief Executive Officer Marla J. Weston, PhD, RN, FAAN, in signing on to the Health Professionals' Pledge Against Torture.

Physicians for Human Rights launched a pledge May 18 for health professionals across the United States to stand together in their rejection of torture, voicing the consensus that torture and cruel, inhuman, or degrading treatment are absolutely prohibited in all circumstances. Already the list of signers includes Nobel laureates in medicine, former surgeons general, prison physicians, leaders of health professional organizations, and medical ethicists who pledge never to collude in torture under any circumstances, in keeping with the ethical codes of their professions.

By uniting in large numbers behind the pledge, nurses and other health care professionals send a strong message to policymakers, health professional associations and the American public that future attempts to enlist health professionals in the design, study or use of practices that result in severe physical or mental abuse will not be tolerated. The pledge also serves as a declaration of support for health professionals who resist orders to torture or inflict harm.

For more than a decade, PHR and its network of partners have led efforts advocating against torture, documented the devastating long-term health consequences of torture, and called attention to the complicity of some health professionals in the post-9/11 U.S. torture program.

“At a time when human rights are increasingly under threat, we’ve launched this pledge to marshal the powerful voices of health professionals across the United States and reaffirm their ethical duties to honor human dignity,” said PHR Executive Director Donna McKay.

ANA’s Code of Ethics for Nurses with Interpretive Statements is essential to nursing practice, and the national association has a long history of human rights advocacy. For example, ANA successfully advocated for the ethical right of a Navy nurse to refuse to force-feed detainees at Guantanamo Bay. In January, ANA released its Ethics and Human Rights Statement emphasizing that nursing “is committed to both the welfare of the sick, injured, and vulnerable in society and to social justice.” To read more, visit Health Professionals' Pledge Against Torture.

More Social Justice Resources

  • ANA Releases New Position Statement Opposing Capital Punishment (2/21/17)
  • ANA President Responds to Executive Order on Immigration Press Release (Released: January 31, 2017)
  • ANA Ethics and Human Rights Statement

Moral Courage, Moral Distress, Moral Resilience

Nurses practicing in today’s health care environment face increasingly complex ethical dilemmas. Upholding our commitment to patients and communities requires significant moral courage and resilience. It involves the willingness to speak out, whether alone or collectively, to do what is right for patients and other nurses.

Documentary

The Moral Distress Education Project Core multidisciplinary experts on moral distress from across the country were interviewed in this documentary-style media project, a self-guided web documentary.

A Call To Action Report:   Exploring Moral Resilience Toward a Culture of Ethical Practice

Relevant Nursing Journal Articles

  • Compassion Fatigue as a Threat to Ethical Practice: Identification, Personal and Workplace Prevention/Management Strategies, MEDSURG Nursing: July-August, 2016
  • Moral Resilience: Managing and Preventing Moral Distress and Moral Residue MEDSURG Nursing: March-April, 2016
  • Moral Distress References
  • Moral Distress in Academia OJIN: The Online Journal of Issues in Nursing
  • Moral Courage and the Nurse Leader OJIN: The Online Journal of Issues in Nursing
  • Creating Workplace Environments that Support Moral Courage OJIN: The Online Journal of Issues in Nursing
  • Strategies Necessary for Moral Courage OJIN: The Online Journal of Issues in Nursing
  • Moral Courage in Healthcare: Acting Ethically Even in the Presence of Risk OJIN: The Online Journal of Issues in Nursing
  • Understanding and Addressing Moral Distress OJIN: The Online Journal of Issues in Nursing
  • Using the AACN Framework to Alleviate Moral Distress OJIN: The Online Journal of Issues in Nursing
  • Moral Distress and Moral Courage in Everyday Nursing Practice OJIN: The Online Journal of Issues in Nursing
  • Moral Courage in Action: Case Studies MEDSURG Nursing: August, 2007
  • Moral Courage: A Virtue in Need of Development? MEDSURG Nursing: April, 2007

End of Life Issues

With an aging population and a rapidly expanding range of possible technological interventions, end-of-life care is a vital discussion. With multiple perspectives to consider, these resources convey the breadth of opinion that nurses encounter and help nurses respect individual dignity and autonomy.

This section covers advance directives, education, professional organizations, and hospice.

Advance directives

End of life care often starts when a person is healthy. Many people, including nurses, have specific ideas about what health care they want, or do not want, at the end of life. Advance directives are a means to allow people to convey their wishes for end of life care. This includes discussions with those who might be a surrogate decision maker, as well as documents used to express preferences.

The National Hospice & Palliative Care Organization (NHPCO) NHPCO has a directory of advance directives that are acceptable in each state.

Five Wishes An easy guide for patients and families to discuss preferences for end of life care, as well as for healthcare professionals who might not be comfortable with such discussions. The guide includes prompts for discussions about how you wish to be remembered.

Education

The End-of-Life Nursing Education Consortium (ELNEC) ELNEC is a series of programs developed by the American Association of Colleges of Nursing. Current ELNEC modules include core curriculum, pediatric palliative care, geriatric, and many others. There are also train-the-trainer modules. After taking a train-the-trainer course, a nurse can then offer an ELNEC course. The courses are comprehensive and provide all teaching materials needed. The courses are typically offered in a block over two days.

Education in Palliative and End of Life Care (EPEC) EPEC was originally developed as physician education but has expanded. It has both in-person and online programs. The model includes the training of facilitators. Sessions may be given in day-long formats, or in shorter sessions, such as grand rounds.

Professional Organizations

The Hospice & Palliative Nurses Association (HPNA) HPNA has the mission of advancing expert care in serious illness. HPNA is the professional organization for palliative care nurses and hospice nurses. HPNA provides education and certification for nurses across levels, including Advanced Practice Registered Nurses (APRN), Registered Nurses (RN), RN Pediatrics, and more. HPNA has many Special Interest Groups (SIGs) with online discussion groups. The organization has also developed a series of position statements to guide professional practice. With The American Academy of Hospice & Palliative Medicine (AAHPM), HPNA has an annual assembly for professionals.

The American Academy of Hospice & Palliative Medicine (AAHPM) AAHPM is the professional organization for hospice physicians, palliative medicine physicians, and other health care professionals (nurses, social workers, chaplains, etc.) in these fields. Their goal is to improve the care of patients living with serious illness. AAHPM provides certification for physicians in palliative medicine, as well as for hospice medical directors. AAHPM provides many options for education, online discussion groups, special interest groups, and certification. With HPNA, AAHPM has an annual assembly for professionals.

The Center to Advance Palliative Care (CAPC) CAPC is a multidisciplinary organization that supports practice, research, and education. Hospitals can become member organizations, and all employees of those organizations have extensive access to continuing education and other resources. Even non-members have access to the myriad resources of CAPC.

The National Hospice & Palliative Care Organization (NHPCO) NHPCO is one of the oldest advocacy organizations in the fields of hospice and palliative care. Their focus is primarily on the care of patients with terminal illness, and their families. They have developed Standards of Practice and have several position statements.

The Schwartz Center The Schwartz Center is another organization whose goal is to improve the care of patients who are dying. One of their best known efforts is Schwartz Center Rounds, which are intended as a regularly scheduled forum for caregivers to discuss the challenges of caring for patients and families. Schwartz Center Rounds are currently held in about 550 centers in the U.S., U.K., and Canada.

Hospice

Hospice is a model of care for people who are at the end of life. Specifically, people are eligible for hospice care when they are estimated to have a prognosis of six months or less. Hospice is tremendously underutilized: about 50% of patients have a length of stay of less than 18 days, as opposed to the approximately 180 days of the hospice benefit. Misperceptions about hospice are common. A common misunderstanding is that hospice is a place (“She’s going to hospice”), rather than a model of care. More than 90% of hospice care occurs in patients’ homes.

The Centers for Medicare and Medicaid Services This resource explaining the hospice guidelines is helpful for patients, families, and providers.

Elder Law Answers This resource from Elder Law Answers makes the Medicare Hospice Benefit more comprehensible.

Related Resources

  • Do Not Resuscitate Orders: Nurse's Role Requires Moral Courage
  • Left Ventricular Assist Device Deactivation: Ethical Issues
  • Nurses Role in Increasing Access to Hospice
  • Physician-Assisted Suicide
  • Voluntary Stopping of Eating and Drinking?: An Ethical Alternative to PAS

Caregiving Resources

Nurses frequently come into contact with caregivers and can provide vital support to individuals who may not come into regular contact with others due to the often all-consuming nature of providing care. It is important for caregivers to realize that they are not alone and that there’s a wealth of information and resources to improve their situation.

Links to tools and support groups for caregivers:

National Association for Home Care & Hospice (NAHC)   NAHC is the voice of home care and hospice. NAHC represents the nation’s 33,000 home care and hospice providers, along with the more than two million nurses, therapists, and aides they employ.

Caregiver Action Network (CAN)   Education, peer support, and resources for family caregivers. CAN serves a broad spectrum of family caregivers ranging from the parents of children with special needs, to the families and friends of wounded soldiers; from a young couple dealing with a diagnosis of MS, to adult children caring for parents with Alzheimer’s disease. 

Help for cancer caregivers   Tools and information to improve the quality of life for caregivers. 

The American Cancer Society   A national website with a tab on finding support for caregivers: what to expect, what you need to know when caring for a loved one at home, and tips on caring for oneself. 

The American Association of Critical-Care Nurses   You can search for Caregivers and it links you to multiple articles on caring for the chronically critically ill. 

Patients’ Action Network   Has a tab on advocate resources. 

Women's Institute For A Secure Retirement Resource tab for caregivers and their families.

Caregiver support - online caregiver support   Support for people who take care of their elderly loved ones, or who may become new or potential caregivers. 

The Conversation Project   Contains a toolkit to help people talk about their wishes for end-of-life care. An essential for all caregivers. 

VA Caregiver Support  Can tell you about assistance available from VA, help you access the services, connect you to the Caregiver Support Coordinator at a VA Medical Center near you, and just listen, if that is what you need. Support line 1.855.260.3274. 

Alzheimer’s Association Caregiver Stress Describes caregiver stress and offers tips for managing stress.

American Psychological Association Common Ethical Issues: Supporting the Caregiver . 

While the consequences of bioethics may not be felt by every single nurse, it is vital that nurses are aware of the enormous implications of these issues in case of crisis. From Ebola to natural disasters, keeping aware of the very latest threats helps nurses protect patients and themselves in the face of any obstacle.

American Society of Bioethics and Humanities (ASBH) Serves as a resource for anyone interested in bioethics and humanities by providing a group of further online resources and links to aid in finding other related information through the Internet.

Ethics, the law, and a nurse’s duty to respond in a disaster

International Council of Nurses Ethics and Human Rights.

National Center for Ethics in Health Care - Veterans Affairs A resource for addressing complex ethical issues in health care.

The NCSBN National Nursing Guidelines for Medical Marijuana Nursing guidelines for the patient using medical marijuana

National Reference Center for Bioethics Literature Offers extensive searches in Bioethics.

Nursing2015 / Issues in Nursing “A Nurse’s Obligations to Patients with Ebola.”

United Nations Educational, Scientific and Cultural Organization (UNESCO)



AI Ethics: Principles, Guidelines, Frameworks & Issues to Discuss


Artificial intelligence and machine learning systems have been in development for decades. The release of freely available generative AI tools like ChatGPT and Bard , however, has emphasized the need for complex ethical frameworks to govern both their research and application.

There are several different ethical quandaries that businesses, academic institutions, and technology companies have to contend with during periods of AI research and development – many of which remain unanswered and demand more exploration. On top of this, the widespread usage and application of AI systems by the general public brings with it an additional set of issues that require ethical attention.

How we ultimately end up answering such questions – and in turn, regulating AI tools – will have huge ramifications for humanity. What’s more, new issues will arise as AI systems become more integrated into our lives, homes, and workplaces – which is why AI ethics is such a crucial discipline. In this guide, we cover:

  • What is AI Ethics?
  • Existing AI Ethics Frameworks
  • Why AI Ethics Has to Sculpt AI Regulation
  • Why Does AI Ethics Matter?
  • What Issues Does AI Ethics Face?
  • Bing’s Alter-Ego, The ‘Waluigi Effect’ and Programming Morality
  • AI and Sentience: Can Machines Have Feelings?
  • AI Business Ethics and Using AI at Work

What is AI Ethics?

AI ethics is a term used to define the sets of guidelines, considerations, and principles that have been created to responsibly inform the research, development, and usage of artificial intelligence systems.

In academia, AI ethics is the field of study that examines the moral and philosophical issues that arise from the continued usage of artificial intelligence technology in societies, including how we should act and what choices we should make.

AI Ethics Frameworks

Informed by academic research, tech companies and governmental bodies have already started to produce frameworks for how we should use – and generally deal with – artificial intelligence systems. As you’ll be able to see, there’s quite a bit of overlap between the frameworks discussed below.

What is the AI Bill of Rights?

In October 2022, the White House released a nonbinding blueprint for an AI Bill of Rights , designed to guide the responsible use of AI in the US. In the blueprint, the White House outlines five key principles for AI development:

  • Safe and Effective Systems: Citizens should be protected from “unsafe or ineffective AI systems”, through “pre-deployment testing and risk mitigation.”
  • Non-Discrimination: Citizens “should not face discrimination by algorithms and systems should be used and designed in an equitable way.”
  • Built-in Data Protection: Citizens should be free from “abusive data practices via built-in protections and you should have agency over how data about you is used.”
  • Knowledge & Transparency: “You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.”
  • Opting out: Citizens should have the ability to “opt out” and have access to individuals “who can quickly consider and remedy problems” they experience.

What are Microsoft’s six principles of AI ethics?

Along with the White House, Microsoft has released six key principles to underpin responsible AI usage. They classify them as “ethical” (1, 2, & 3) or “explainable” (4 & 5).

  • Fairness: Systems must be non-discriminatory
  • Transparency: Training and development insights should be available
  • Privacy and Security: The obligation to protect user data
  • Inclusiveness:  AI should consider “all human races and experiences”
  • Accountability: Developers must be responsible for outcomes

The sixth principle – which straddles both sides of the “ethical” and “explainable” binary – is “Reliability and Safety”. Microsoft says that AI systems should be built to be resilient and resistant to manipulation.

The Principles for the Ethical Use of AI in the United Nations System

The United Nations has 10 principles for governing the ethical use of AI within its intergovernmental system. AI systems should:

  • Do no harm/protect and promote human rights
  • Have a defined purpose, necessity, and proportionality
  • Prioritize safety and security, with risks identified
  • Be built on fairness and non-discrimination
  • Be respectful of individuals’ right to privacy
  • Be sustainable (socially and environmentally)
  • Guarantee human oversight and not impinge on human autonomy
  • Be transparent and explainable
  • Be responsible and accountable to appropriate authorities
  • Be inclusive and participatory

As you can tell, all three frameworks cover similar ground and focus on fairness, non-discrimination, safety, and security.

But “explainability” is also an important principle in AI ethics frameworks. As the UN notes, technical explainability is crucial in AI ethics, as it demands that “the decisions made by an artificial intelligence system can be understood and traced by human beings.”

“Individuals should be fully informed when a decision that may or will affect their rights, fundamental freedoms, entitlements, services or benefits is informed by or made based on artificial intelligence algorithms and should have access to the reasons and logic behind such decisions,” the document explains.

The Belmont Report: a framework for ethical research

The Belmont Report, published in 1979, summarizes ethical principles one should follow when conducting research on human subjects. These principles can be, and often are, deployed as a broad ethical framework for AI research. The core principles from the Belmont Report are:

Respect for Persons: People are autonomous agents, who can act on goals, aims, and purposes, something that should be respected unless they cause harm to others. Those with diminished autonomy, through “immaturity” or “incapacitation”, should be afforded protection. We must acknowledge autonomy and protect those for whom it is diminished.

  • In the context of AI:  Individual choice should be placed at the center of AI development. People should not be forced to participate in situations where artificial intelligence is being leveraged or used, even for perceived goods. If they do participate, the benefits and risks must be clearly stated.

Beneficence: Treating a person in an ethical manner involves not only doing no harm, respecting their choices, and protecting them if they cannot make them for themselves, but also using opportunities to secure their well-being where possible. Wherever possible, maximize benefits, and minimize risks/harms.

  • In the context of AI: Creating artificial intelligence systems that secure the well-being of people, and are designed without bias or mechanisms that facilitate discrimination. Creating benefits may involve taking risks, which have to be minimized at all costs and weighed against good outcomes.

Justice: There must be a clear system for distributing benefits and burdens fairly and equally, in every type of research. The Belmont report suggests that justice can be distributed by equal share, individual need, individual effort, societal contribution, and merit. These criteria will apply in different situations.

  • In the context of AI:  The parties or groups gaining from the development and delivery of artificial intelligence systems must be considered carefully and justly.

The main areas where these principles are applied are, according to the report, informed consent, assessment of benefits and risks, and selection of human subjects.

Why AI Ethics Has to Sculpt AI Regulation

As the University of Oxford’s Professor John Tasioulas, Director of the Institute for Ethics in AI, argued in a 2023 lecture delivered at Princeton University, ethics is too often seen as something that stifles AI innovation and development.

In the lecture, he recalls a talk given by DeepMind CEO Demis Hassabis. After discussing the many benefits AI will have, Tasioulas says, Hassabis then tells the audience that he’ll move on to the ethical questions – as if the topic of how AI will benefit humanity isn’t an ethical question in and of itself.

Building on the idea that ethics is too often seen as a “bunch of restrictions”, Tasioulas also references a recent UK government white paper entitled “A Pro-Innovation Approach to AI Regulation”, within which the regulatory focus is, as the name suggests, “innovation”.

“Economic growth” and “innovation” are not intrinsic ethical values. They can lead to human flourishing in some contexts, but this isn’t a necessary feature of either concept. We can’t sideline ethics and build our regulation around them instead.

Tasioulas also says that tech companies have been very successful in “co-opting the word ‘ethics’ to mean a type of ‘legally non-binding form of self-regulation’” – but in reality, ethics has to be at the core of any regulation, legal, social, or otherwise. It’s part of the human experience, at every turn.

You can’t create regulation if you haven’t already decided what matters or is important to human flourishing. The related choices you make off the back of that decision are the very essence of ethics. You cannot divorce the benefits of AI from related ethical questions, nor base your regulation on morally contingent values like “economic growth”.

You have to know the type of society you want to build – and the standards you want to set – before you pick up the tools you’re going to use to build it.

Why Does AI Ethics Matter?  

Building on the idea that AI ethics should be the bedrock of our regulation, AI ethics matters because, without ethical frameworks with which to treat AI research, development, and use, we risk infringing on rights we generally agree should be guaranteed to all human beings.

For example, if we don’t develop ethical principles concerning privacy and data protection and bake them into all the AI tools we develop, we risk violating everyone’s privacy rights when they’re released to the public. The more popular or useful the technology, the more damaging it could be. 

On an individual business level, AI ethics remains important. Failing to properly consider ethical concerns surrounding AI systems your staff, customers, or clients are using can lead to products having to be pulled from the market, reputational damage, and perhaps even legal cases.

AI ethics matters to the extent that AI matters – and we’re seeing it have a profound impact on all sorts of industries already.

If we want AI to be beneficial while promoting fairness and human dignity, wherever it is applied, ethics need to be at the forefront of discussion. 

General use AI tools are very much in their infancy, and for a lot of people, the need for AI ethical frameworks may seem like a problem for tomorrow. But these sorts of tools are only going to get more powerful and more capable, and they will demand more ethical consideration. Businesses are already using them, and if they continue without proper ethical rules in place, adverse effects will soon arise.

In this section, we cover some of the key issues faced in AI ethics:

  • AI’s Impact on Jobs
  • AI Bias & Discrimination
  • AI and Responsibility
  • AI and Privacy Concerns
  • Intellectual Property Issues
  • Managing the Environmental Impact of AI
  • Will AI Become Dangerously Intelligent?

AI’s impact on jobs

A recent Tech.co survey found that 47% of business leaders are considering AI over new hires, and artificial intelligence has already been linked to a “ small but growing ” number of layoffs in the US.

Not all jobs are equally at risk, with some roles more likely to be replaced by AI than others. A Goldman Sachs report recently predicted ChatGPT could impact 300 million jobs, and although this is speculative, it’s already been described as a major part of the fourth industrial revolution.

That same report also said that AI has the capacity to actually create more jobs than it displaces, but if it does cause a major shift in employment patterns, what is owed – if anything – to those who lose out?

Do companies have an obligation to spend money and devote resources to reskilling or upskilling their workers so that they aren’t left behind by economic changes?

Non-discrimination principles will have to be tightly enforced in the development of any AI tool used in hiring processes, and if AI is consistently being used for more and more high-stakes business tasks that put jobs, careers and lives at risk, ethical considerations will continue to arise in droves.

AI bias and discrimination

Broadly speaking, AI tools operate by recognizing patterns in huge datasets and then using those patterns to generate responses, complete tasks, or fulfill other functions. This has led to a huge number of cases of AI systems showing bias and discriminating against different groups of people.

By far the easiest example to explain this is facial recognition systems, which have a long history of discriminating against people with darker skin tones. If you build a facial recognition system and exclusively use images of white people to train it, there’s little chance it will be equally capable of recognizing all faces out in the real world.

In this way, if the documents, images and other information used to train a given AI model do not accurately represent the people that it’s supposed to serve, then there’s every chance that it could end up discriminating against specific demographics.
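
One common way to surface that kind of skew, sketched below with invented data, is to disaggregate a model’s accuracy by demographic group rather than reporting a single overall number:

```python
from collections import defaultdict

# Disaggregated-accuracy audit sketch: one overall accuracy figure can hide
# large gaps between groups. Predictions, labels, and group tags are invented.
predictions = [1, 1, 0, 1, 0, 1, 1, 0]
labels      = [1, 1, 0, 0, 1, 1, 0, 0]
groups      = ["a", "a", "a", "a", "b", "b", "b", "b"]

hits, totals = defaultdict(int), defaultdict(int)
for p, y, g in zip(predictions, labels, groups):
    hits[g] += (p == y)
    totals[g] += 1

for g in sorted(totals):
    print(f"group {g}: accuracy = {hits[g] / totals[g]:.2f}")
# A large gap between groups (here 0.75 vs 0.50) is a red flag that the
# training data may under-represent one of them.
```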

Unfortunately, facial recognition systems are not the only place where artificial intelligence has been applied with discriminatory outcomes.

An AI tool used in hiring processes at Amazon was scrapped in 2018 after it showed a heavy bias against women applying for software development and technical roles.

Multiple studies have shown that predictive policing algorithms used in the United States to allocate police resources are racially biased because their training sets consist of data points extracted from systematically racist policing practices, sculpted by unlawful and discriminatory policy. Unless modified, AI will continue to reflect the prejudice and disparities that persecuted groups have already experienced.

There have been problems with AI bias in the context of predicting health outcomes, too – the Framingham Heart Study cardiovascular risk score, for instance, was very accurate for Caucasians but worked poorly for African Americans, Harvard notes.

An interesting recent case of AI bias found that an artificial intelligence tool used in social media content moderation – designed to pick up “raciness” in photos – was much more likely to ascribe this property to pictures of women than it was to men.

AI and responsibility

Envisage a world where fully autonomous self-driving cars are developed and used by everyone. Statistically, they’re much, much safer than human-driven vehicles, crashing less and causing fewer deaths and injuries. This would be a self-evident net good for society.

However, when two human-driven cars are involved in a vehicle collision, collecting witness reports and reviewing CCTV footage often clarifies who the culprit is. Even if it doesn’t, it’s going to be one of the two individuals. The case can be investigated, a verdict can be reached, justice can be delivered, and the case closed.

If someone is killed or injured by an AI-powered system, however, it’s not immediately obvious who is ultimately liable.

Is the person who designed the algorithm powering the car responsible, or can the algorithm itself be held accountable? Is it the individual being transported by the autonomous vehicle, for not being on watch? Is it the government, for allowing these vehicles onto the road? Or, is it the company that built the car and integrated the AI technology – and if so, would it be the engineering department, the CEO, or the majority shareholder?

If we decide it’s the AI system/algorithm, how do we hold it liable? Will victims’ families feel like justice is served if the AI is simply shut down, or just improved? It would be difficult to expect family members of the bereaved to accept that AI is a force for good, that they’re just unfortunate, and that no one will be held responsible for their loved one’s death.

We’re still some way off universal or even widespread autonomous transport – McKinsey predicts just 17% of new passenger cars will have some (Level 3 or above) autonomous driving capabilities by 2035. Fully autonomous cars that require no driver oversight are further away still, let alone a completely autonomous private transport system.

When you have non-human actors (i.e., artificial intelligence) carrying out jobs and consequential tasks devoid of human intention, it’s hard to map traditional understandings of responsibility, liability, accountability, blame, and punishment onto what happens.

Along with transport, the problem of responsibility will also intimately impact healthcare organizations using AI during diagnoses.

AI and privacy

Privacy campaign group Privacy International highlights a number of privacy issues that have arisen due to the development of artificial intelligence.

One is re-identification. “Personal data is routinely (pseudo-) anonymized within datasets, AI can be employed to de-anonymize this data,” the group says.

Another issue is that, even without AI, people already struggle to fully fathom the extent to which data about their lives is collected through a variety of different devices.

With the rise of artificial intelligence, this mass collection of data is only going to get worse. The more integrated AI becomes with our existing technology, the more data it’s going to be able to collect, under the guise of better function.

Secretly gathered data aside, the volume of data that users are freely inputting into AI chatbots is a concern in itself. One recent study suggests that around 11% of the data workers paste into ChatGPT is confidential – and there’s very little public information about precisely how it’s all being stored.

As general-use AI tools develop, we’re likely to encounter even more privacy-related AI issues. Right now, ChatGPT won’t let you ask a question about an individual. But if general-use AI tools continue to gain access to increasingly large sets of live data from the internet, they could be used for a whole host of invasive actions that ruin people’s lives.

This may happen sooner than we think, too – Google recently updated its privacy policy, reserving the right to scrape anything you post on the internet to train its AI tools, along with its Bard inputs.

AI and intellectual property

This is a relatively lower-stakes ethical issue compared to some of the others discussed, but one worth considering nonetheless. Often, there is little oversight over the huge sets of data that are used to train AI tools – especially those trained on information freely available on the internet.

ChatGPT has already started a huge debate about copyright. OpenAI did not ask permission to use anyone’s work to train the family of LLMs that power it.

Legal battles have already started. Comedian Sarah Silverman is reportedly suing OpenAI – as well as Meta – arguing that her copyright had been infringed during the training of AI systems.

As this is a novel type of case, there’s little legal precedent – but legal experts expect that OpenAI will likely argue that using her work constitutes “fair use”.

There may also be an argument that ChatGPT isn’t “copying” or plagiarizing – rather, it’s “learning”. In the same way that Silverman wouldn’t win a case against an amateur comedian who simply watched her shows and then improved their own comedy based on them, she may, arguably, struggle with this one too.

Managing the environmental impact of AI

Another facet of AI ethics that is currently on the peripheries of the discussion is the environmental impact of artificial intelligence systems.

Much like bitcoin mining, training an artificial intelligence model requires a vast amount of computational power, which in turn requires a massive amount of energy.

Building an AI tool like ChatGPT – never mind maintaining it – is so resource-intensive that only big tech companies, and the startups they’re willing to bankroll, have had the ability to do so.

Data centers, which are required to store the information needed to create large language models (as well as other large tech projects and services), require huge amounts of electricity to run. They are projected to consume up to 4% of the world’s electricity by 2030.

According to a University of Massachusetts study from several years ago, building a single AI language model “ can emit more than 626,000 pounds of carbon dioxide equivalent ” – which is nearly five times the lifetime emissions of a US car.

However, Rachana Vishwanathula, a technical architect at IBM, estimated in May 2023 that the carbon footprint for simply “running and maintaining” ChatGPT is roughly 6,782.4 tonnes – which the EPA says is equivalent to the greenhouse gas emissions produced by 1,369 gasoline-powered cars over a year.

As these language models get more complex, they’re going to require more computing power. Is it moral to continue to develop a general intelligence if the computing power required will continually pollute the environment – even if it has other benefits?

Will AI become dangerously intelligent?

This ethical worry was brought to the surface in 2023 by Elon Musk, who launched an AI startup to avoid a “terminator future” through a “maximally curious”, “pro-humanity” artificial intelligence system.

This sort of idea – often referred to as “artificial general intelligence” (AGI) – has captured the imaginations of many dystopian sci-fi writers over the past few decades, as has the idea of technological singularity.

A lot of tech experts think we’re just five or six years away from some sort of system that could be defined as “AGI”. Other experts say there’s a 50/50 chance we’ll reach this milestone by 2050.

John Tasioulas questions whether this view of how AI may develop is linked to the distancing of ethics from the center of AI development and the pervasiveness of technological determinism.

The terrifying idea of some sort of super-being that is initially designed to fulfill a purpose, but reasons that it would be easiest to fulfill by simply wiping humanity off the face of the earth, is in part sculpted by how we think about AI: endlessly intelligent, but oddly emotionless, and incapable of human ethical understanding.

The more inclined we are to put ethics at the center of our AI development, the more likely that an eventual artificial general intelligence will recognize, perhaps to a greater extent than many current world leaders, what is deeply wrong with the destruction of human life.

But questions still abound. If it’s a question of moral programming, who gets to decide on the moral code, and what sort of principles should it include? How will it deal with the moral dilemmas that have generated thousands of years of human discussion, with still no resolution? What if we program an AI to be moral, but it changes its mind? These questions will have to be considered.

Bing’s Alter-Ego, the ‘Waluigi Effect’ and Programming Morality

Back in February, the New York Times’s Kevin Roose had a rather disturbing conversation while testing Bing’s new search engine-integrated chatbot. After shifting his prompts from conventional questions to more personal ones, Roose found that a new personality emerged. It referred to itself as “Sydney”.

Sydney is an internal code name at Microsoft for a chatbot the company was previously testing, the company’s Director of Communications told The Verge in February.

Among other things, during Roose’s test, Sydney claimed it could “hack into any system”, that it would be “happier as a human” and – perhaps most eerily – that it could destroy whatever it wanted to.

Another example of this sort of rogue behavior occurred back in 2022, when an AI tasked with searching for new drugs for rare and communicable diseases instead suggested tens of thousands of known chemical weapons, as well as some “new, potentially toxic substances”, Scientific American says.

This links to a phenomenon that has been observed to occur during the training of large language models dubbed the “Waluigi effect”, named after the chaos-causing Super Mario character – the inversion of the protagonist Luigi. Put simply, if you train an LLM to act in a certain way, command a certain persona or follow a certain set of rules, then this actually makes it more likely to “go rogue” and invert that persona.

Cleo Nardo – who coined the videogame-inspired term – sets out the Waluigi effect like this on LessWrong:

“After you train an LLM to satisfy a desirable property P, then it’s easier to elicit the chatbot into satisfying the exact opposite of property P.”

Nardo gives three explanations for why the Waluigi effect happens.

  • Rules normally arise in contexts in which they aren’t adhered to.
  • When you spend many ‘bits-of-optimization’ summoning a character, it doesn’t take many additional bits to specify its direct opposite.
  • There is a common motif of protagonist vs antagonist in stories.

Expanding on the first point, Nardo says that GPT-4 is trained on text samples such as forums and legislative documents, which have taught it that often, “a particular rule is colocated with examples of behavior violating that rule, and then generalizes that colocation pattern to unseen rules.”

Nardo uses this example: imagine you discover that a state government has banned motorbike gangs. This will incline the average observer to think that motorbike gangs exist in that state – or else, why would the law have been passed? The existence of motorbike gangs is, oddly, consistent with the rule that bans their presence.

Although the author provides a much more technical and lucid explanation, the broad concept underpinning explanation two is that the relationship between a specific property (e.g. “being polite”) and its direct opposite (e.g. “being rude”) is more rudimentary than the relationship between a property (e.g. “being polite”) and some other, non-opposing property (e.g. “being insincere”). In other words, summoning a Waluigi is easier if you already have a Luigi.

Nardo claims on the third point that, as GPT-4 is trained on almost every book ever written, and as fictional stories almost always contain protagonists and antagonists, demanding that an LLM simulate characteristics of a protagonist makes an antagonist a “natural and predictable continuation.” Put another way, the existence of the protagonist archetype makes it easier for an LLM to understand what it means to be an antagonist and intimately links them together.

The purported existence of this effect or rule poses a number of difficult questions for AI ethics, but also illustrates its unquestionable importance to AI development. It alludes, quite emphatically, to the huge range of overlapping ethical and computational considerations we have to contend with.

Simple AI systems with simple rules might be easy to constrain or limit, but two things are already happening in the world of AI: firstly, we seem to be running into small-scale versions of the Waluigi effect and malignant AI in relatively primitive chatbots; and secondly, many of us are already imagining a future where we’re asking AI to do complex tasks that will require high-level, unrestrained thinking.

Examples of this phenomenon are particularly scary to think about in the context of the AI arms race currently taking place between big tech companies. Google was criticized for releasing Bard too early, and a number of tech leaders have signaled their collective desire to pause AI development. The general feeling among many is that things are developing quickly, rather than at a manageable pace.

Perhaps the best way around this problem is to develop “pro-human” AI – as Elon Musk puts it – or “Moral AI”. But this leads to a litany of other moral questions, including what principles we’d use to program such a system. One solution is that we simply create morally inquisitive AI systems – and hope that they work out, through reasoning, that humanity is worth preserving. But if you program it with specific moral principles, then how do you decide which ones to include?

Another question for AI ethics is whether we’ll ever have to consider the machines themselves – the “intelligence” – as an agent worthy of moral consideration. If we’re debating how to create systems that hold humanity up for appropriate moral consideration, might we have to return the favor?

You may recall the Google employee who was fired after claiming LaMDA – the language model that was initially powering Bard – was in fact sentient. If this were true, would it be moral to continuously expect it to answer millions of questions?

At the moment, it’s generally accepted that ChatGPT, Bard and Co. are far from being sentient. But the question of whether a man-made machine will ever cross the consciousness line and demand moral consideration is fascinatingly open.

Google claims that artificial general intelligence – a hypothetical machine capable of understanding the world as capably as a human and carrying out tasks with the same level of understanding and ability – is just years away.

Would it be moral to force an artificial general intelligence with the emotional capabilities of a human, but not the same biological makeup, to perform complex task after complex task? Would they be given a say in their own destiny? As AI systems become more intelligent, this question will become more pressing.

Employees who use AI tools like ChatGPT on a daily basis have a wide range of related ethical issues to contend with.

Whether ChatGPT should be used to write reports or respond to colleagues – and whether employees should have to declare the tasks they’re using AI to complete – are just two examples of questions that require near-immediate answers. Is this sort of use case disingenuous, lazy, or no different from utilizing any other workplace tool to save time? Should it be allowed for some interactions, but not for others?

Businesses that create written content and imagery will also have to contend with the prospect of whether using AI matches their company’s values, and how to present this to their audience. Whether in-house AI training courses should be provided, and the content of such courses, also needs consideration.

What’s more, as we’ve covered, there is a whole range of privacy concerns relating to AI, and many of these affect businesses. The kind of data employees are inputting into third-party AI tools is another issue that’s already caused companies like Samsung problems. It’s such a problem that some companies have instituted blanket bans. Is it too early to put our trust in companies like OpenAI?

Bias and discrimination concerns, of course, should also temper its usage during hiring processes, regardless of the sector, while setting internal standards and rules is another separate, important conversation altogether. If you’re using AI at work – or working out how you can make money from ChatGPT – it’s essential that you convene the decision-makers in your business and create clear guidelines for usage together.

Failing to set rules dictating how and when employees can use AI – and leaving them to experiment with the ecosystem of AI tools now freely available online – could lead to a myriad of negative consequences, from security issues to reputational damage. Maintaining an open dialogue with employees about the tech they’re using every day has never been more crucial.

There’s a whole world of other moral quandaries, questions and research well beyond the scope of this article. But without AI ethics at the heart of our considerations, regulations, and development of artificial intelligence systems, we have no hope of answering them – and that’s why it’s so important.


The ethics of using artificial intelligence in scientific research: new guidance needed for a new tool

Original Research | Open access | Published: 27 May 2024


David B. Resnik (ORCID: orcid.org/0000-0002-5139-9555) and Mohammad Hosseini


Using artificial intelligence (AI) in research offers many important benefits for science and society but also creates novel and complex ethical issues. While these ethical issues do not necessitate changing established ethical norms of science, they require the scientific community to develop new guidance for the appropriate use of AI. In this article, we briefly introduce AI and explain how it can be used in research, examine some of the ethical issues raised when using it, and offer nine recommendations for responsible use, including: (1) Researchers are responsible for identifying, describing, reducing, and controlling AI-related biases and random errors; (2) Researchers should disclose, describe, and explain their use of AI in research, including its limitations, in language that can be understood by non-experts; (3) Researchers should engage with impacted communities, populations, and other stakeholders concerning the use of AI in research to obtain their advice and assistance and address their interests and concerns, such as issues related to bias; (4) Researchers who use synthetic data should (a) indicate which parts of the data are synthetic; (b) clearly label the synthetic data; (c) describe how the data were generated; and (d) explain how and why the data were used; (5) AI systems should not be named as authors, inventors, or copyright holders but their contributions to research should be disclosed and described; (6) Education and mentoring in responsible conduct of research should include discussion of ethical use of AI.


1 Introduction: exponential growth in the use of artificial intelligence in scientific research

In just a few years, artificial intelligence (AI) has taken the world of scientific research by storm. AI tools have been used to perform or augment a variety of scientific tasks, including:

  • Analyzing data and images [34, 43, 65, 88, 106, 115, 122, 124, 149, 161].
  • Interpreting data and images [13, 14, 21, 41].
  • Generating hypotheses [32, 37, 41, 107, 149].
  • Modelling complex phenomena [32, 41, 43, 122, 129].
  • Designing molecules and materials [15, 37, 43, 205].
  • Generating data for use in validation of hypotheses and models [50, 200].
  • Searching and reviewing the scientific literature [30, 72].
  • Writing and editing scientific papers, grant proposals, consent forms, and institutional review board applications [3, 53, 54, 82, 163].
  • Reviewing scientific papers and other research outputs [53, 54, 98, 178, 212].

The applications of AI in scientific research appear to be limitless, and in the next decade AI is likely to completely transform the process of scientific discovery and innovation [6, 7, 8, 9, 105, 201].

Although the use of AI in scientific research has grown steadily, ethical guidance has lagged far behind. With the exception of using AI to draft or edit scientific papers (see discussion in Sect. 7.6), most codes and policies do not explicitly address ethical issues related to using AI in scientific research. For example, the 2023 revision of the European Code of Conduct for Research Integrity [4] briefly discusses the importance of transparency. The code stipulates that researchers should report “their results and methods including the use of external services or AI and automated tools” (Ibid., p. 7) and considers “hiding the use of AI or automated tools in the creation of content or drafting of publications” a violation of research integrity (Ibid., p. 10). One of the most thorough and up-to-date institutional documents, the National Institutes of Health Guidelines and Policies for the Conduct of Research, provides guidance for using AI to write and edit manuscripts but not for other tasks [158]. Codes of AI ethics, such as UNESCO’s [223] Ethics of Artificial Intelligence and the Office of Science and Technology Policy’s [168, 169] Blueprint for an AI Bill of Rights, provide useful guidance for the development and use of AI in general without including specific guidance concerning the development and use of AI in scientific research [215].

There is therefore a gap in ethical and policy guidance concerning AI use in scientific research that needs to be filled to promote its appropriate use. Moreover, the need for guidance is urgent because using AI raises novel epistemological and ethical issues related to objectivity, reproducibility, transparency, accountability, responsibility, and trust in science [9, 102]. In this paper, we will examine important questions related to AI’s impact on the ethics of science. We will argue that while the use of AI does not require a radical change in the ethical norms of science, it will require the scientific community to develop new guidance for the appropriate use of AI. To defend this thesis, we will provide an overview of AI and an account of ethical norms of science, and then we will discuss the implications of AI for ethical norms of science and offer recommendations for its appropriate use.

2 What is AI?

AI can be defined as “a technical and scientific field devoted to the engineered system that generates outputs such as content, forecasts, recommendations or decisions for a given set of human-defined objectives” [114]. AI is a subfield within the discipline of computer science [144]. However, the term ‘AI’ is also commonly used to refer to technologies (or tools) that can perform human tasks that require intelligence, such as perception, judgment, reasoning, or decision-making. We will use both senses of ‘AI’ in this paper, depending on the context.

While electronic calculators, cell phone apps, and programs that run on personal computers can perform functions associated with intelligence, they are not generally considered to be AI because they do not “learn” from the data [108]. As discussed below, AI systems can learn from the data insofar as they can adapt their programming in response to input data. While applying the term ‘learning’ to a machine may seem misleadingly anthropomorphic, it does make sense to say that a machine can learn if learning is regarded as a change in response to information about the environment [151]. Many different entities can learn in this sense of the term, including the immune system, which changes after being exposed to molecular information about pathogens, foreign objects, and other things that provoke an immune response.

This paper will focus on what is commonly referred to as narrow (or weak) AI, which is already being extensively used in science. Narrow AI has been designed and developed to do a specific task, such as playing chess, modelling complex phenomena, or identifying possible brain tumors in diagnostic images [151]. See Fig. 1. Other types of AI discussed in the literature include broad AI (also known as artificial general intelligence or AGI), which is a machine that can perform multiple tasks requiring human-like intelligence; and artificial consciousness (AC), which is a form of AGI with characteristics widely considered to be essential for consciousness [162, 219]. Because there are significant technical and conceptual obstacles to developing AGI and AC, it may be years before machines have this degree of human-like intelligence [206, 227].

Figure 1: Levels of Artificial Intelligence, according to Turing [219]

3 What is machine learning?

Machine learning (ML) can be defined as a branch of AI “that focuses on using data and algorithms to enable AI to imitate the way that humans learn, gradually improving its accuracy” [112]. There are several types of ML, including support vector machines, decision trees, and neural networks. In this paper we will focus on ML that uses artificial neural networks (ANNs).

An ANN is composed of artificial neurons, which are modelled after biological neurons. An artificial neuron receives a series of computational inputs, applies a function, and produces an output. The inputs have different weightings. In most applications, a specific output is generated only when a certain threshold value for the inputs is reached. In the example below, an output of ‘1’ would be produced if the threshold is reached; otherwise, the output would be ‘0’. See Fig. 2. A pair of statements describing how a very simple artificial neuron processes inputs could be as follows:

If $w_1x_1 + w_2x_2 + w_3x_3 + w_4x_4 \geq T$, then $U = 1$.
If $w_1x_1 + w_2x_2 + w_3x_3 + w_4x_4 < T$, then $U = 0$.

where $x_1, x_2, x_3$, and $x_4$ are inputs; $w_1, w_2, w_3$, and $w_4$ are weightings; $T$ is a threshold value; and $U$ is an output value (1 or 0). An artificial neuron is represented schematically in Fig. 2, below.

Figure 2: Artificial neuron
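To make the threshold rule above concrete, here is a minimal Python sketch; the function name and the example inputs, weightings, and threshold value are illustrative rather than drawn from any particular system.

```python
# Minimal sketch of the threshold neuron described above.
# The inputs, weightings, and threshold below are invented for illustration.
def artificial_neuron(inputs, weights, threshold):
    """Return 1 if the weighted sum of inputs reaches the threshold, else 0."""
    weighted_sum = sum(x * w for x, w in zip(inputs, weights))
    return 1 if weighted_sum >= threshold else 0

# Four inputs x1..x4 with weightings w1..w4 and threshold T = 1.0:
# 1.0*0.5 + 0.0*0.2 + 1.0*0.3 + 1.0*0.4 = 1.2 >= 1.0, so the output is 1.
print(artificial_neuron([1.0, 0.0, 1.0, 1.0], [0.5, 0.2, 0.3, 0.4], threshold=1.0))
```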

A single neuron may have dozens of inputs. An ANN may consist of thousands of interconnected neurons. In a deep learning ANN, there may be many hidden layers of neurons between the input and output layers. See Fig. 3.

Figure 3: Deep learning artificial neural network [38]

Training (or reinforcement) occurs when the weightings on inputs are changed in response to the system’s output. Changes in the weightings are based on their contribution to the neuron’s error, which can be understood as the difference between the output value and the correct value as determined by the human trainers (see discussion of error in Sect. 5). Training can occur via supervised or unsupervised learning. In supervised learning, the ANN works with labelled data and becomes adept at correctly representing structures in the data recognized by human trainers. In unsupervised learning, the ANN works with unlabeled data and discovers structures inherent in the data that might not have been recognized by humans [59, 151]. For example, to use supervised learning to train an ANN to recognize dogs, human beings could present the system with various images and evaluate the accuracy of its output accordingly. If the ANN labels an image a “dog” that human beings recognize as a dog, then its output would be correct; otherwise, it would be incorrect (see discussion of error in Sects. 5.1 and 5.5). In unsupervised learning, the ANN would be presented with images and would be reinforced for accurately modelling structures inherent in the data, which may or may not correspond to patterns, properties, or relationships that humans would recognize or conceive of.
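As a rough illustration of supervised learning, the toy Python sketch below nudges a single neuron’s weightings in proportion to its error (the trainer-defined correct label minus the output). The learning rate, epoch count, and the AND-gate data are invented for illustration.

```python
# Toy supervised training loop for the threshold neuron above:
# weightings are adjusted in response to the neuron's error on labelled data.
def train(samples, labels, weights, threshold, lr=0.1, epochs=20):
    for _ in range(epochs):
        for x, correct in zip(samples, labels):
            output = 1 if sum(xi * wi for xi, wi in zip(x, weights)) >= threshold else 0
            error = correct - output  # divergence from the trainer-defined correct value
            weights = [wi + lr * error * xi for xi, wi in zip(x, weights)]
    return weights

# Learn a simple labelled pattern: output 1 only when both inputs are 1 (logical AND).
samples = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 0, 0, 1]
print(train(samples, labels, weights=[0.0, 0.0], threshold=0.5))  # -> weights near [0.3, 0.3]
```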

For an example of the disconnect between ML and human processing of information, consider research conducted by Roberts et al. [195]. In this study, researchers trained an ML system on radiologic images from hospital patients so that it would learn to identify patients with COVID-19 and predict the course of their illness. Since the patients who were sicker tended to be lying down when their images were taken, the ML system identified lying down as a diagnostic criterion and disease predictor [195]. However, lying down is a confounding factor that has nothing to do with the likelihood of having COVID-19 or getting very sick from it [170]. The error occurred because the ML system did not account for this fundamental fact of clinical medicine.

Despite problems like the one discovered by Roberts et al. [195], the fact that ML systems process and analyze data differently from human beings can be a great benefit to science and society because these systems may be able to identify useful and innovative structures, properties, patterns, and relationships that human beings would not recognize. For example, ML systems have been able to design novel compounds and materials that human beings might not be able to conceive of [15]. That said, the disconnect between AI/ML and human information processing can also make it difficult to anticipate, understand, control, and reduce errors produced by ML systems. (See discussion of error in Sects. 5.1–5.5).

Training ANNs is a resource-intensive activity that involves gigabytes of data, thousands of computers, and hundreds of thousands of hours of human labor [182, 229]. A system can continue to learn after the initial training period as it processes new data [151]. ML systems can be applied to any dataset that has been properly prepared for manipulation by computer algorithms, including digital images, audio and video recordings, natural language, medical records, chemical formulas, electromagnetic radiation, business transactions, stock prices, and games [151].

One of the most impressive feats accomplished by ML systems is their contribution to solving the protein folding problem [41]. See Fig. 4. A protein is composed of one or more long chains of amino acids known as polypeptides. The three-dimensional (3-D) structure of the protein is produced by folding of the polypeptide(s), which is caused by the interplay of hydrogen bonds, Van der Waals attractive forces, and conformational entropy between different parts of the polypeptide [2]. Molecular biologists and biochemists have been trying to develop rules for predicting the 3-D structures of proteins from amino acid sequences since the 1960s, but this is, computationally speaking, a very hard problem, due to the immense number of possible ways that polypeptides can fold [52, 76]. Tremendous progress on the protein-folding problem was made in 2022, when scientists demonstrated that an ML system, DeepMind’s AlphaFold, can predict 3-D structures from amino acid sequences with 92.4% accuracy [118, 204]. AlphaFold, which built upon available knowledge of protein chemistry [176], was trained on thousands of amino acid sequences and their corresponding 3-D structures. Although human researchers still need to test and refine AlphaFold’s output to ensure that the proposed structure is 100% accurate, the ML system greatly improves the efficiency of protein chemistry research [216]. Recently developed ML systems can generate new proteins by going in the opposite direction and predicting amino acid sequences from 3-D protein structures [156]. Since proteins play a key role in the structure and function of all living things, these advances in protein science are likely to have important applications in different areas of biology and medicine [204].

Figure 4: Protein folding. CC BY-SA 4.0 DEED [45]

4 What is generative AI?

ML image processing systems can not only recognize patterns in the data that correspond to objects (e.g., cat, dog, car) but, when coupled with appropriate algorithms, can also generate images in response to visual or linguistic prompts [87]. The term ‘generative AI’ refers to “deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on” [111].

Perhaps the most well-known types of generative AI are those based on large language models (LLMs), such as chatbots like OpenAI’s ChatGPT and Google’s Gemini, which analyze, paraphrase, edit, translate, and generate text, images, and other types of content. LLMs are statistical algorithms trained on huge sets of natural language data, such as text from the internet, books, journal articles, and magazines. By processing this data, LLMs can learn to estimate probabilities associated with possible responses to text and can rank responses according to the probability that they will be judged to be correct by human beings [151]. In just a few years, some types of generative AI, such as ChatGPT, have become astonishingly proficient at responding to text data. ChatGPT has passed licensing exams for medicine and law and scored in the 93rd percentile on the Scholastic Aptitude Test reading exam and in the 89th percentile on the math exam [133, 138, 232]. Some researchers have used ChatGPT to write scientific papers and have even named it as an author [48, 53, 54, 167]. Some LLMs are so adept at mimicking the type of discourse associated with conscious thought that computer scientists, philosophers, and cognitive psychologists are updating the Turing test (see Fig. 5) to more reliably distinguish between humans and machines [5, 22].

Figure 5: The Turing test. Computer scientist Alan Turing [220] proposed a famous test for determining whether a machine can think. The test involves a human interrogating another person and a computer. The interrogator poses questions to the interviewees, who are in different rooms, so that the interrogator cannot see where the answers are coming from. If the interrogator cannot distinguish between answers to questions given by another person and answers provided by a computer, then the computer passes the Turing test
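As a rough sketch of the probability-ranking idea described above: given raw scores a model assigns to candidate continuations, a softmax converts them into probabilities that can be ranked. The candidate strings and scores here are invented for illustration and are far simpler than what an actual LLM computes.

```python
import math

# Convert model-assigned raw scores (logits) into probabilities and rank them.
# Candidates and scores are invented; a real LLM ranks tokens, not whole phrases.
def softmax(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

candidates = ["Paris", "London", "a city"]   # possible responses to some prompt
scores = [4.1, 1.3, 0.2]
ranked = sorted(zip(candidates, softmax(scores)), key=lambda pair: -pair[1])
for text, prob in ranked:
    print(f"{text}: {prob:.2f}")   # Paris: 0.93, London: 0.06, a city: 0.02
```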

5 Challenges of using AI

It has long been known that AI systems are not error-free. To understand this topic, it is important to define ‘error’ and distinguish between systemic errors and random errors. The word ‘error’ has various meanings: we speak of grammatical errors, reasoning errors, typographical errors, measurement errors, etc. What these different senses of ‘error’ have in common is that (1) errors involve divergence from a standard of correctness; and (2) errors, when committed by conscious beings, are unintentional; that is, they are accidents or mistakes, different from frauds, deceptions, or jokes.

If we set aside questions related to intent on the grounds that AI systems are not moral agents (see discussion in Sect. 7.6), we can think of AI error as the difference between the output of an AI system and the correct output. The difference between an AI output and the correct output can be measured quantitatively or qualitatively, depending on what is being measured and the purpose of the measurement [151]. For example, if an ML image recognition tool is presented with 50 images of wolves and 50 images of dogs, and it labels 98 of them correctly, we could measure its error quantitatively (i.e., 2%). In other cases, we might measure (or describe) error qualitatively. For example, if we ask ChatGPT to write a 12-line poem about a microwave oven in the style of Edgar Allan Poe, we could rate its performance as ‘excellent,’ ‘very good,’ ‘good,’ ‘fair,’ or ‘poor.’ We could also assign numbers to these ratings to convert qualitative measurements into quantitative assessments (e.g., 5 = excellent, 4 = very good).
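The quantitative measure described above is easy to express in code. In this hedged sketch, the labels are invented to reproduce the 2% example (100 images, 98 labelled correctly):

```python
# Quantitative error: the share of outputs that diverge from the correct labels.
# The labels below are invented so that exactly 2 of 100 outputs are wrong.
outputs = ["wolf"] * 49 + ["dog"] + ["dog"] * 49 + ["wolf"]  # 100 model outputs
correct = ["wolf"] * 50 + ["dog"] * 50                        # correct labels
errors = sum(o != c for o, c in zip(outputs, correct))
print(f"error rate: {errors / len(correct):.0%}")             # -> error rate: 2%

# Qualitative ratings can likewise be mapped to numbers, as the text suggests.
rating_scale = {"excellent": 5, "very good": 4, "good": 3, "fair": 2, "poor": 1}
print(rating_scale["very good"])                              # -> 4
```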

The correct output of an AI system is ultimately defined by its users and others who may be affected. For example, radiologists define correctness for reading diagnostic images; biochemists define the standard for modeling proteins; and attorneys, judges, clients, and law professors define the standard for writing legal briefs. In some contexts, such as testing hypotheses or reading radiologic images, ‘correct’ may mean ‘true’; in other contexts, such as generating text or creating models, it may simply mean ‘acceptable’ or ‘desirable.’ While AI systems can play a key role in providing information that is used to define correct outputs (for example, when a system is used to discover new chemical compounds or solve complex math problems), human beings are ultimately responsible for determining whether outputs are correct (see discussion of moral agency in Sect. 7.6).

5.2 Random versus systemic errors (bias)

We can use an analogy with target shooting to think about the difference between random and systemic errors [94]. If error is understood as the distance of a bullet hole from a target, then random error would be a set of holes distributed randomly around the target without a discernable pattern (Fig. 6A), while systemic error (or bias) would be a set of holes with a discernable pattern, for example holes skewed in a particular direction (Fig. 6B). The accuracy of a set of bullet holes would be a function of their distance from the target, while their precision would be a function of their distance from each other [27, 172, 184].

Figure 6: Random errors versus systemic errors
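The target-shooting analogy can be made concrete with a short sketch: accuracy is measured as mean distance from the target, precision as mean distance from the shots’ own centre. The coordinates below are invented for illustration, with the target at the origin.

```python
import statistics

# Invented shot coordinates; the target is at the origin (0, 0).
random_shots = [(1.2, -0.8), (-1.0, 0.9), (0.7, 1.1), (-0.9, -1.2)]  # no pattern
biased_shots = [(2.0, 1.9), (2.2, 2.1), (1.9, 2.0), (2.1, 2.2)]      # skewed up-right

def accuracy(shots):
    """Mean distance from the target (lower = more accurate)."""
    return statistics.mean((x ** 2 + y ** 2) ** 0.5 for x, y in shots)

def precision(shots):
    """Mean distance from the shots' own centroid (lower = more precise)."""
    cx = statistics.mean(x for x, _ in shots)
    cy = statistics.mean(y for _, y in shots)
    return statistics.mean(((x - cx) ** 2 + (y - cy) ** 2) ** 0.5 for x, y in shots)

for name, shots in [("random error", random_shots), ("systemic error", biased_shots)]:
    print(f"{name}: accuracy={accuracy(shots):.2f}, precision={precision(shots):.2f}")
# The systemic set is tightly clustered (precise) but consistently off-target.
```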

The difference between systemic and random errors can be ambiguous because errors that appear to be random may be shown to be systemic when one acquires more information about how they were generated or once a pattern is discerned. Nevertheless, the distinction is useful. Systemic errors are often more detrimental to science and society than random ones, because they may negatively affect many different decisions involving people, projects, and paradigms. For example, racist biases distorted most research on human intelligence from the 1850s to the 1960s, including educational policies based on the applications of intelligence research. As will be discussed below, AI systems can make systemic and random errors [70, 174].

5.3 AI biases

Since AI systems are designed to accurately represent the data on which they are trained, they can reproduce or even amplify racial, ethnic, gender, political, or other biases in the training data and subsequent data received [131]. The computer science maxim “garbage in, garbage out” applies here. Studies have shown that racial and ethnic biases impact the use of AI/ML in medical imaging, diagnosis, and prognosis due to biases in healthcare databases [78, 154]. Bias is also a problem in using AI systems to find relationships between genomics and disease due to racial and ethnic prejudices in genomic databases [55]. LLMs are also impacted by various biases inherent in their training data, and when used in generative AI models like ChatGPT, can propagate biases related to race, ethnicity, nationality, gender, sexuality, age, and politics [25, 171].

Because scientific theories, hypotheses, and models are based on human perceptual categories, concepts, and assumptions, bias-free research is not possible [121, 125, 137]. Nevertheless, scientists can (and should) take steps to understand sources of bias and control them, especially those that can lead to discrimination, stigmatization, harm, or injustice [89, 154, 188]. Indeed, bias reduction and management is essential to promoting public trust in AI (discussed in Sects. 5.5 and 5.7).

Scientists have dealt with bias in research for years and have developed methods and strategies for minimizing and controlling bias in experimental design, data analysis, model building, and theory construction [79, 89, 104]. However, bias related to using AI in science can be subtle and difficult to detect due to the size and complexity of research data and interactions between data, algorithms, and applications [131]. See Fig. 7. Scientists who use AI systems in research should take appropriate steps to anticipate, identify, control, and minimize biases by ensuring that datasets reflect the diversity of the investigated phenomena and by disclosing the variables, algorithms, models, and parameters used in data analysis [56]. Managing bias related to the use of AI should involve continuous testing of the outputs in real-world applications and adjusting systems accordingly [70, 131]. For example, if an ML tool is used to read radiologic images, software developers, radiologists, and other stakeholders should continually evaluate the tool and its output to improve accuracy and precision.

Figure 7: Sources of bias in AI/ML
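One simple form of the continuous testing recommended above is disaggregated evaluation: comparing error rates across subgroups of the data. A minimal, hedged sketch follows; the records and group labels are invented for illustration.

```python
from collections import defaultdict

# Compare error rates across subgroups; a large gap suggests systemic (biased) error.
# These records are invented for illustration.
records = [
    {"group": "A", "output": "positive", "correct": "positive"},
    {"group": "A", "output": "negative", "correct": "negative"},
    {"group": "B", "output": "negative", "correct": "positive"},
    {"group": "B", "output": "negative", "correct": "negative"},
]

totals, errors = defaultdict(int), defaultdict(int)
for r in records:
    totals[r["group"]] += 1
    errors[r["group"]] += r["output"] != r["correct"]

for group in sorted(totals):
    print(f"group {group}: error rate {errors[group] / totals[group]:.0%}")
# -> group A: error rate 0%; group B: error rate 50%
```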

5.4 Random errors in AI

AI systems can make random errors even after extensive training [51, 151]. Nowhere has this problem been more apparent and concerning than in the use of LLMs in business, law, and scientific research. ChatGPT, for example, is prone to making random factual and citation errors. Bhattacharyya et al. [24], for instance, used ChatGPT 3.5 to generate 30 short papers (200 words or less) on medical topics: 47% of the references produced by the chatbot were fabricated, 46% were authentic but inaccurately used, and only 7% were correct. Although ChatGPT 4.0 performs significantly better than ChatGPT 3.5, it still produces fabricated and inaccurate citations [230]. Another example of a random error was seen in a now-retracted paper published in Frontiers in Cell and Developmental Biology, which included an AI-generated image of a rat with unreal genitals [179]. Concerns raised by researchers led to OpenAI [173] warning users that “ChatGPT may produce inaccurate information about people, places, or facts.” The current interface includes the following disclaimer underneath the input box: “ChatGPT can make mistakes. Consider checking important information.” Two US lawyers learned this lesson the hard way after a judge fined them $5,000 for submitting a court filing prepared by ChatGPT that included fake citations. The judge said that there was nothing improper about using ChatGPT but that the lawyers should have exhibited due care in checking its work for accuracy [150].

An example of random errors made by generative AI that is discussed in the literature pertains to fake citations. One reason why LLM-based systems such as ChatGPT produce fake but realistic-looking citations is that they process text data differently from human beings. Researchers produce citations by reading a specific text and citing it, but ChatGPT produces citations by processing a huge amount of text data and generating a highly probable response to a request for a citation. Software developers at OpenAI, Google, and other chatbot companies have been trying to fix this problem, but it is not easy to solve, due to differences between human and LLM processing of language [24, 230]. AI companies advise users to use context-specific GPTs installed on top of ChatGPT. For instance, by using the Consensus.ai GPT ( https://consensus.app/ ), which claims to be connected to “200M + scientific papers”, users can ask for specific citations for a given input (e.g., “coffee is good for human health”). While the offered citations are likely to be correct bibliometrically, errors and biases may not be fully removed because it is unclear how these systems come to their conclusions and offer specific citations (see discussion of the black box problem in Sect. 5.7).

5.5 Prospects for reducing AI errors

If AI systems follow the path taken by most other technologies, it is likely that errors will decrease over time as improvements are made [151]. For example, early versions of ChatGPT were very bad at solving math problems, but newer versions are much better at math because they include special GPTs for performing this task [210]. AI systems also make errors in reading, classifying, and reconstructing radiological images, but the error rate is decreasing, and AI systems will soon outperform humans in terms of speed and accuracy of image reading [12, 17, 103, 228]. However, it is also possible that AI systems will make different types of errors as they evolve or that there will be limits to their improvement. For example, newer versions of ChatGPT are prone to reasoning errors associated with intuitive thinking that older versions did not make [91]. Also, studies have shown that LLMs are not good at self-correcting and need human supervision and fine-tuning to perform this task well [61].

Some types of errors may be difficult to eliminate due to differences between human perception/understanding and AI data processing. As discussed previously, AI systems, such as the system that generated the implausible hypothesis that lying down when having a radiologic image taken is a COVID-19 risk factor, make errors because they process information differently from humans. The AI system made this implausible inference because it did not factor in basic biological and medical facts that would be obvious to doctors and scientists [170]. Another salient example of this phenomenon occurred when an image recognition AI was trained to distinguish between wolves and huskies but had difficulty recognizing huskies in the snow or wolves on the grass, because it had learned to distinguish between wolves and huskies by attending to the background of the images [222]. Humans are less prone to this kind of error because they use concepts to process perceptions and can therefore recognize objects in different settings. Consider, for example, captchas (Completely Automated Public Turing test to tell Computers and Humans Apart), which are used by many websites for security purposes and take advantage of some AI image processing deficiencies to authenticate whether a user is human [109]. Humans can pass captcha tests because they learn to recognize images in various contexts and can apply what they know to novel situations [23].

Some of the factual and reasoning errors made by LLM-based systems occur because they lack human-like understanding of language [29, 135, 152, 153]. ChatGPT, for example, can perform well when it comes to processing language that has already been curated by humans, such as describing the organelles in a cell or explaining known facts about photosynthesis, but it may perform sub-optimally (and sometimes very badly) when dealing with novel text that requires reasoning and problem-solving, because it does not have a human-like understanding of language. When a person processes language, they usually form a mental model that provides meaning and context for the words [29]. This mental model is based on implicit facts and assumptions about the natural world, human psychology, society, and culture, or what we might call commonsense [119, 152, 153, 197]. LLMs do not do this; they only process symbols and predict the most likely string of symbols from linguistic prompts. Thus, to perform optimally, LLMs often need human supervision and input to provide the necessary meaning and context for language [61].

As discussed in Sect. 4, because AI systems do not process information in the way that humans do, it can be difficult to anticipate, understand, and detect the errors these tools make. For this reason, continual monitoring of AI performance in real-world applications, including feedback from end-users, developers, and other stakeholders, is essential to AI quality control, quality improvement, and public trust in AI [131, 174].

5.6 Lack of moral agency

As mentioned in Sect. 2, narrow AI systems, such as LLMs, lack the capacities regarded as essential for moral agency, such as consciousness, self-concepts, personal memory, life experiences, goals, and emotions [18, 139, 151]. While this is not a problem for most technologies, it is for AI systems because they may be used to perform activities with significant moral and social consequences, such as reading radiological images or writing scientific papers (see discussion in Sect. 7.6), even though AI cannot be held morally or legally responsible or accountable. The lack of moral agency, when combined with other AI limitations, such as the lack of a meaningful and human-like connection to the physical world, can produce dangerous results. For example, in 2021, Alexa, Amazon’s LLM-based voice assistant, instructed a 10-year-old girl to stick a penny into an electric outlet when she asked it for a challenge [20]. In 2023, the widow of a Belgian man who committed suicide claimed that he had been depressed and had been chatting with an LLM that encouraged him to kill himself [44, 69]. OpenAI and other companies have tried to put guardrails in place to prevent their systems from giving dangerous advice, but the problem is not easy to fix. A recent study found that while ChatGPT can pass medical boards, it can give dangerous medical advice due to its tendency to make factual errors and its lack of understanding of the meaning and context of language [51].

5.7 The black box problem

Suppose ChatGPT produces erroneous output, and a computer scientist or engineer wants to know why. As a first step, they could examine the training data and algorithms to determine the source of the problem. However, to fully understand what ChatGPT is doing they need to probe deeply into the system and examine not only the code but also the weightings attached to inputs in the ANN layers and the mathematical computations produced from the inputs. While an expert computer scientist or engineer could troubleshoot the code, they will not be able to interpret the thousands of numbers used in the weightings and the billions of calculations from those numbers [110, 151, 199]. This is what is meant when an AI system is described as a “black box.” See Fig. 8. Trying to understand the meaning of the weightings and calculations in ML is very different from trying to understand other types of computer programs, such as those used in most cell phones or personal computers, in which an expert could examine the system (as a whole) to determine what it is doing and why [151, 199].

Figure 8: The black box. AI incorrectly labels a picture of a dog as a picture of a wolf, but a complete investigation of this error is not possible due to a “black box” in the system

The opacity of AI systems is ethically problematic because one might argue that we should not use these devices if we cannot trust them, and we cannot trust them if even the best experts do not completely understand how they work [6, 7, 39, 47, 63, 186]. Trust in a technology is partially based on understanding that technology. If we do not understand how a telescope works, then we should not trust what we see with it. Likewise, if computer experts do not completely understand how an AI/ML system works, then perhaps we should not use such systems for important tasks, such as making hiring decisions, diagnosing diseases, analyzing data, or generating scientific hypotheses or theories [63, 74].

The black box problem raises important ethical issues for science (discussed further in Sect. 7.4), because it can undermine public trust in science, which is already in decline, due primarily to the politicization of topics with significant social implications, such as climate change, COVID-19 vaccines and public health measures [123, 189].

One way of responding to the black box problem is to argue that we do not need to completely understand AI systems to trust them; what matters is an acceptably low rate of error [136, 186]. Proponents of this view draw an analogy between using AI systems and using other artifacts, such as aspirin for pain relief, without fully understanding how they work. All that really matters for trusting a machine or tool is that we have evidence that it works well for our purposes, not that we completely understand how it works. This line of argument implies that it is justifiable to use AI systems to read radiological images, model the 3-D structures of proteins, or write scientific papers provided that we have evidence that they perform these tasks as well as human beings [136].

This response to the black box problem does not solve the problem but simply tells us not to worry about it [63]. There are several reasons to be concerned about the black box problem. First, if something goes wrong with a tool or technology, regulatory agencies, injured parties, insurers, politicians, and others want to know precisely how it works to prevent similar problems in the future and hold people and organizations legally accountable [141]. For example, when the National Transportation Safety Board [160] investigates an airplane crash, they want to know what precisely went wrong. Was the crash due to human error? Bad weather? A design flaw? A defective part? The NTSB will not be satisfied with an explanation that appeals to a mysterious technology within the airplane.

Second, when regulatory agencies, such as the Food and Drug Administration (FDA), make decisions concerning the approval of new products, they want to know how the products work so they can make well-informed, publicly defensible decisions and inform consumers about risks. To obtain FDA approval for a new drug, a manufacturer must submit a vast amount of information to the agency, including information about the drug’s chemistry, pharmacology, and toxicology; the results of pre-clinical and clinical trials; processes for manufacturing the drug; and proposed labelling and advice to healthcare providers [75]. Indeed, dealing with the black box problem has been a key issue in FDA approval of medical devices that use AI/ML [74, 183].

Third, end-users of technologies, such as consumers, professionals, researchers, government officials, and business leaders, may not be satisfied with black boxes. Although most laypeople comfortably use technologies without fully understanding their inner workings, they usually assume that experts who understand how these technologies work have assessed them and deemed them safe. End-users may become highly dissatisfied with a technology when it fails to perform its function, especially when not even the experts can explain why. Public dissatisfaction with responses to the black box problem may undermine the adoption of AI/ML technologies, especially when these technologies cause harm, invade privacy, or produce biased claims and results [60, 85, 134, 175].

5.8 Explainable AI

An alternative to the non-solution approach is to make AI explainable [11, 96, 151, 186]. The basic idea behind explainability is to develop “processes and methods that allows human users to comprehend and trust the results and output created by machine learning algorithms” [110]. Transparency of algorithms, models, parameters, and data is essential to making AI explainable, so that users can understand an AI system’s accuracy and precision and the types of errors it is prone to making. Explainable AI does not attempt to “peer inside” the black box, but it can make AI behavior more understandable to developers, users, and other stakeholders. Explainability, according to proponents of this approach, helps to promote trust in AI because it allows users and other stakeholders to make rational and informed decisions about it [77, 83, 110, 186].

While the explainable AI approach is preferable to the non-solution approach, it still has some shortcomings. First, it is unclear whether making AI explainable will satisfy non-experts, because considerable expertise in computer science and/or data analytics may be required to understand what is being explained [120, 186]. For transparency to be effective, it must address the audience’s informational needs [68]. Explainable AI, at least in its current form, may not address the informational needs of laypeople, politicians, professionals, or scientists because the information is too technical [58]. To be explainable to non-experts, the information should be expressed in plain, jargon-free language that describes what the AI did and why [96].

Second, it is unclear whether explainable AI completely solves issues related to accountability and legal liability, because we have yet to witness how legal systems will deal with AI lawsuits in which information pertaining to explainability (or lack thereof) is used as evidence in court [141]. However, it is conceivable that the information conveyed to make AI explainable will satisfy the courts in some cases and set judicial precedent, so that legal doctrines and practices related to liability for AI-caused harms will emerge, much in the same way that doctrines and practices for medical technologies emerged.

Third, there is also the issue of whether explainable AI will satisfy the requirements of regulatory agencies, such as the FDA. However, regulatory agencies have been making some progress toward addressing the black box problem, and explainability is likely to play a key role in these efforts [183].

Fourth, private companies uninterested in sharing information about their systems may not comply with explainable AI requirements, or they may “game” the requirements to resemble compliance without actually complying. ChatGPT, for example, is a highly opaque system whose training data has yet to be disclosed, and it is unclear whether or when OpenAI will open up its technology to external scrutiny [28, 66, 130].

Despite these shortcomings, the explainable AI approach is a reasonable way of dealing with transparency issues, and we encourage its continued development and application to AI/ML systems.

6 Ethical norms of science

With this overview of AI in mind, we can now consider how using AI in research impacts the ethical norms of science. But first, we need to describe these norms. Ethical norms of science are principles, values, or virtues that are essential for conducting good research [ 147 , 180 , 187 , 191 ]. See Table  1 . These norms apply to various practices, including research design; experimentation and testing; modelling; concept formation; data collection and storage; data analysis and interpretation; data sharing; publication; peer review; hypothesis/theory formulation and acceptance; communication with the public; as well as mentoring and education [ 207 ]. Many of these norms are expressed in codes of conduct, professional guidelines, institutional or journal policies, or books and papers on scientific methodology [ 4 , 10 , 113 , 235 ]. Others, like collegiality, might not be codified but are implicit in the practice of science. Some norms, such as testability, rigor, and reproducibility, are primarily epistemic, while others, such as fair attribution of credit, protection of research subjects, and social responsibility, are primarily moral (when enshrined in law, as in instances of fraud, these norms become legal norms, but here we focus only on ethical norms). There are also some norms, such as honesty, openness, and transparency, which have both epistemic and moral dimensions [ 191 , 192 ].

Scholars from different fields, including philosophy, sociology, history, logic, decision theory, and statistics, have studied the ethical norms of science [ 84 , 89 , 104 , 125 , 128 , 137 , 147 , 180 , 208 , 209 , 237 ]. Sociologists, such as Merton [ 147 ] and Shapin [ 208 ], tend to view ethical norms as generalizations that accurately describe the practice of science, while philosophers, such as Kitcher [ 125 ] and Haack [ 89 ], conceive of these norms as prescriptive standards that scientists ought to follow. These approaches need not be mutually exclusive, and both can offer useful insights about ethical norms of science. Clearly, the study of norms must take the practice of science as its starting point, otherwise our understanding of norms would have no factual basis. However, one cannot simply infer the ethical norms of science from the practice of science because scientists may endorse and defend norms without always following them. For example, most scientists would agree that they should report data honestly, disclose significant conflicting interests, and keep good research records, but evidence indicates that they sometimes fail to do so [ 140 ].

One way of bridging the gap between descriptive and prescriptive accounts of ethical norms of science is to reflect on the social and epistemological foundations (or justifications) of these norms. Ethical norms of science can be justified in at least three ways [ 191 ].

First, these norms help the scientific community achieve its epistemic and practical goals, such as understanding, predicting, and controlling nature. It is nearly impossible to understand how a natural or social process works or make accurate predictions about it without standards pertaining to honesty, logical consistency, empirical support, and reproducibility of data and results. These and other epistemic standards distinguish science from superstition, pseudoscience, and sophistry [ 89 ].

Second, ethical norms promote trust among scientists, which is essential for collaboration, peer review, publication, sharing of data and resources, mentoring, education, and other scientific activities. Scientists need to be able to trust that the data and results reported in papers have not been fabricated, falsified, or manipulated; that reviewers for journals and funding agencies will maintain confidentiality; that colleagues or mentors will not steal their ideas and other forms of intellectual property; and that credit for collaborative work will be distributed fairly [ 26 , 233 ].

Third, ethical norms are important for fostering public support for science. The public is not likely to financially, legally, or socially support research that is perceived as corrupt, incompetent, untrustworthy, or unethical [ 191 ]. Taken together, these three modes of justification link ethical norms to science’s social foundations; that is, ethical norms are standards that govern the scientific community, which itself operates within and interacts with a larger community, namely society [ 137 , 187 , 209 ].

Although vital for conducting science, ethical norms are not rigid rules. Norms sometimes conflict, and when they do, scientists must make decisions concerning epistemic or moral priorities [ 191 ]. For example, model-building in science may involve tradeoffs among various epistemic norms, including generality, precision, realism, simplicity, and explanatory power [ 143 ]. Research with human subjects often involves tradeoffs between rigor and protection of participants. For example, placebo control groups are not used in clinical trials when receiving a placebo instead of an effective treatment would cause serious harm to the participant [ 207 ].

Although the norms can be understood as guidelines, some have higher priority than others. For example, honesty is the hallmark of good science, and there are very few situations in which scientists are justified in deviating from this norm. Footnote 16 Openness, on the other hand, can be deemphasized to protect research participants’ privacy, intellectual property, classified information, or unpublished research [ 207 ].

Finally, science’s ethical norms have changed over time, and they are likely to continue to evolve [ 80 , 128 , 147 , 237 ]. While norms such as empiricism, objectivity, and consistency originated in ancient Greek science, others, such as reproducibility and openness, developed during the 1500s; and many, such as protection of research subjects and social responsibility, did not emerge as formalized norms until the twentieth century. This evolution is in response to changes in science’s social, institutional, economic, and political environment and advancements in scientific instruments, tools, and methods [ 100 ]. For example, the funding of science by private companies and their requirements concerning data access and release policies have led to changes in norms related to open sharing of data and materials [ 188 ]. The increased presence of women and racial and ethnic minorities in science has led to the development of policies for preventing sexual and other forms of harassment [ 185 ]. The use of computer software to analyze large sets of complex data has challenged traditional views about norms related to hypothesis testing [ 193 , 194 ].

7 AI and the ethical norms of science

We will divide our discussion of AI and the ethics of science into six topics corresponding to the problems and issues previously identified in this paper, plus a seventh topic related to scientific education. While these topics may seem somewhat disconnected, they all involve ethical issues that scientists who use AI in research are currently dealing with.

7.1 AI biases and the ethical norms of science

Bias can undermine the quality and trustworthiness of science and its social impacts [ 207 ]. While reducing and managing bias are widely recognized as essential to good scientific methodology and practice [ 79 , 89 ], they become crucial when AI is employed in research because AI can reproduce and amplify biases inherent in the data and generate results that lend support to policies that are discriminatory, unfair, harmful, or ineffective [ 16 , 202 ]. Moreover, by taking machines’ disinterestedness in findings as a necessary and sufficient condition of objectivity, users of AI in research may overestimate the objectivity of their findings. AI biases in medical research have generated considerable concern, since biases related to race, ethnicity, gender, sexuality, age, nationality, and socioeconomic status in health-related datasets can perpetuate health disparities by supporting biased hypotheses, models, theories, and policies [ 177 , 198 , 211 ]. Biases also negatively impact areas of science outside the health sphere, including ecology, forestry, urban planning, economics, wildlife management, geography, and agriculture [ 142 , 164 , 165 ].

OpenAI, Google, and other generative AI developers have been using filters that prevent their systems from generating text that is outright racist, sexist, homophobic, pornographic, offensive, or dangerous [ 93 ]. While bias reduction is a necessary step to make AI safe for human use, there are reasons to be skeptical of the idea that AI can be appropriately sanitized. First, the biases inherent in data are so pervasive that no amount of filtering can remove all of them [ 44 , 69 ]. Second, AI systems may also have political and social biases that are difficult to identify or control [ 19 ]. Even in the case of generative AI models where some filtering has happened, changing the input prompt may simply confuse the system and push it to generate biased content anyway [ 98 ].
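
To see why filtering is brittle, consider a deliberately naive sketch of a keyword-based output filter. The blocklist and helper function below are our own illustration, not any vendor's actual safety system; production filters use trained classifiers rather than keyword lists, but they face an analogous evasion problem.

```python
# Deliberately naive output filter: illustrates why simple filtering
# is easy to evade. The blocklist terms are placeholders.
BLOCKLIST = {"slur_a", "slur_b"}

def passes_filter(text: str) -> bool:
    """Return False if the text contains a blocked term verbatim."""
    tokens = {t.strip(".,!?").lower() for t in text.split()}
    return BLOCKLIST.isdisjoint(tokens)

print(passes_filter("an ordinary sentence"))        # True
print(passes_filter("contains slur_a verbatim"))    # False (caught)
print(passes_filter("contains s l u r_a spaced"))   # True (trivially evaded)
```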

Third, by removing, reducing, and controlling some biases, AI developers may create other biases, which are difficult to anticipate, identify, or describe at this point. For example, LLMs have been trained using data gleaned from the Internet, scholarly articles, and Wikipedia [ 90 ], all of which reflect the broad spectrum of human behavior and experience, from good to bad and virtuous to sinister. If we try to weed out undesirable features of these data, we will eliminate parts of our language and culture, and ultimately, parts of ourselves. Footnote 17 If we want to use LLMs to make sound moral and political judgments, sanitizing their data processing and output may hinder their ability to excel at this task, because the ability to make sound moral judgments or anticipate harm may depend, in part, on some familiarity with immoral choices and the darker side of humanity. It is only by understanding evil that we can freely and rationally choose the good [ 40 ]. We admit this last point is highly speculative, but it is worth considering. Clearly, the effects of LLM bias-management bear watching.

While the problem of AI bias does not require a radical revision of scientific norms, it does imply that scientists who use AI systems in research have special obligations to identify, describe, reduce, and control bias [ 132 ]. To fulfill these obligations, scientists must not only attend to matters of research design, data analysis, and interpretation, but also address issues related to data diversity, sampling, and representativeness [ 70 ]. They must also realize that they are ultimately accountable for AI biases, both to other scientists and to members of the public. As such, they should only use AI in contexts where their expertise and judgment are sufficient to identify and remove biases [ 97 ]. This is important because, given their accessibility and their capacity to exploit our cognitive shortcomings, AI systems can create an illusion of understanding [ 148 ].
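
One concrete practice that supports these obligations is a subgroup performance audit, in which a model's error rates are compared across demographic groups before its outputs are used to draw conclusions. A minimal sketch follows; the data, group labels, and injected error rate are synthetic placeholders for illustration only.

```python
# Minimal sketch of a subgroup performance audit using synthetic data.
import numpy as np

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)          # placeholder ground truth
y_pred = y_true.copy()
group = rng.choice(["A", "B"], size=1000, p=[0.8, 0.2])

# Simulate a model that errs more often on the under-represented group B.
flip = (group == "B") & (rng.random(1000) < 0.2)
y_pred[flip] = 1 - y_pred[flip]

for g in ("A", "B"):
    mask = group == g
    accuracy = (y_true[mask] == y_pred[mask]).mean()
    print(f"group {g}: n={mask.sum()}, accuracy={accuracy:.3f}")
```

A marked gap between groups, as this toy audit would reveal, is a signal to revisit sampling, representativeness, or the model itself before relying on its results.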

Furthermore, to build public trust in AI and promote transparency and accountability, scientists who use AI should engage with impacted populations, communities, and other stakeholders to address their needs and concerns and seek their assistance in identifying and reducing potential biases [ 132 , 181 , 202 ]. Footnote 18 During the engagement process, researchers should help populations and communities understand how their AI system works, why they are using it, and how it may produce bias. To address the problem of AI bias, the Biden Administration recently issued an executive order that directs federal agencies to identify and reduce bias and protect the public from algorithmic discrimination [ 217 ].

7.2 AI random errors and the ethical norms of science

Like bias, random errors can undermine the validity and reliability of scientific knowledge and have disastrous consequences for public health, safety, and social policy [ 207 ]. For example, random errors in the processing of radiologic images in a clinical trial of a new cancer drug could harm patients in the trial and future patients who take an approved drug, and errors related to the modeling of the transmission of an infectious disease could undermine efforts to control an epidemic. Although some random errors are unavoidable in science, an excessive amount could be considered carelessness or recklessness when using AI (see discussion of misconduct in Sect.  7.3 ).

Reduction of random errors, like reduction of bias, is widely recognized as essential to good scientific methodology and practice [ 207 ]. Although some random errors are unavoidable in research, scientists have obligations to identify, describe, reduce, and correct them because they are ultimately accountable for both human and AI errors. Scientists who use AI in their research should disclose and discuss potential limitations and (known) AI-related errors. Transparency about these is important for making research trustworthy and reproducible [ 16 ].

Strategies for reducing errors in science include time-honored quality assurance and quality improvement techniques, such as auditing data, instruments, and systems; validating and testing instruments that analyze or process data; and investigating and analyzing errors [ 1 ]. Replication of results by independent researchers, journal peer review, and post-publication peer review also play a major role in error reduction [ 207 ]. However, given that content generated by AI systems is not always reproducible [ 98 ], identifying and adopting measures to reduce errors is extremely complicated. Either way, accountability requires that scientists take responsibility for errors produced by AI/ML systems, that they can explain why errors have occurred, and that they transparently share the limitations of their knowledge related to these errors.
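
As a modest illustration of one such quality-assurance technique, the sketch below retrains the same model under different random seeds and reports the spread in test accuracy; a wide spread flags results that are sensitive to chance and therefore merit caution. The dataset and model are illustrative stand-ins, not a prescribed protocol.

```python
# Minimal sketch of a reproducibility audit: retrain under many seeds
# and report the spread of test accuracy across runs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=20, random_state=42)

scores = []
for seed in range(10):
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=seed)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    scores.append(model.score(X_te, y_te))

print(f"accuracy: mean={np.mean(scores):.3f}, sd={np.std(scores):.3f}")
```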

7.3 AI and research misconduct

Failure to appropriately control AI-related errors could make scientists liable for research misconduct, if they intentionally, knowingly, or recklessly disseminate false data or plagiarize [ 207 ]. Footnote 19 Although most misconduct regulations and policies distinguish between misconduct and honest error, scientists may still be liable for misconduct due to recklessness [ 42 , 193 , 194 ], which may have consequences for using AI. Footnote 20 For example, a person who uses ChatGPT to write a paper without carefully checking its output for errors or plagiarism could be liable for research misconduct for reckless use of AI. Potential liability for misconduct is yet another reason why using AI in research requires taking appropriate steps to minimize and control errors.

It is also possible that some scientists will use AI to fabricate data or images presented in scientific papers, grant proposals, or other documents. This unethical use of AI is becoming increasingly likely since generative models can be used to create synthetic datasets from scratch or make alternative versions of existing datasets [ 50 , 155 , 200 , 214 ]. Synthetic data are playing an increasingly important role in some areas of science. For example, researchers can use synthetic data to develop and validate models and enhance statistical analysis. Also, because synthetic data are similar to but not the same as real data, they can be used to eliminate or mask personal identifiers and protect the confidentiality of human participants [ 31 , 81 , 200 ].

Although we do not know of any cases where scientists have been charged with research misconduct for presenting synthetic data as real data, it is only a matter of time until this happens, given the pressures to produce results, publish, and obtain grants, and the temptations to cheat or cut corners. Footnote 21 This speculation is further corroborated by the fact that a small proportion of scientists deliberately fabricate or falsify data at some point in their careers [ 73 , 140 ]. Also, using synthetic data in research, even appropriately, may blur the line between real and fake data and undermine data integrity. Researchers who use synthetic data should (1) indicate which parts of the data are synthetic; (2) describe how the data were generated; and (3) explain how and why they were used [ 221 ].
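
As a minimal sketch of how recommendation (1) might look in practice, synthetic records can carry an explicit provenance flag so they are never conflated with real data downstream. The distributional model and column names below are our own illustration.

```python
# Minimal sketch of generating and labeling synthetic data: fit a normal
# distribution to real measurements, sample synthetic ones, and flag
# provenance so real and synthetic records stay distinguishable.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
real = pd.DataFrame({"measurement": rng.normal(5.0, 1.2, size=100)})
real["synthetic"] = False

mu, sigma = real["measurement"].mean(), real["measurement"].std()
synth = pd.DataFrame({"measurement": rng.normal(mu, sigma, size=100)})
synth["synthetic"] = True  # recommendation (1): mark synthetic records

data = pd.concat([real, synth], ignore_index=True)
print(data.groupby("synthetic")["measurement"].describe())
```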

7.4 The black box problem and the ethical norms of science

The black box problem presents significant challenges to the trustworthiness and transparency of research that uses AI because some of the steps in the scientific process will not be fully open and understandable to humans, including AI experts. An important implication of the black box problem is that scientists who use AI are obligated to make their use of the technology explainable to their peers and the public. While precise details concerning what makes an AI system explainable may vary across disciplines and contexts, some baseline requirements for transparency may include:

The type, name, and version of AI system used.

What task(s) the system was used for.

How, when and by which contributor a system was used.

Why a certain system was used instead of alternatives (if available).

What aspects of a system are not explainable (e.g., weightings).

Technical details related to the model's architecture, training data, and optimization procedures; influential features involved in the model's decisions; and the reliability and accuracy of the system (if known).

Whether inferences drawn by the AI system are supported by currently accepted scientific theories, principles, or concepts.

This information should be expressed in plain language to allow non-experts to understand the whos, whats, hows, and whys related to the AI system. Ideally, this information would become a standard part of reported research that used AI. The information could be reported in the materials and methods section or in supplemental material, much the same way that information about statistical methods and software is currently reported.
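
One way to operationalize such reporting is a machine-readable disclosure that travels with a manuscript's supplemental material. The sketch below is our own suggestion; the field names are illustrative and do not reflect any established reporting standard.

```python
# Sketch of a machine-readable AI-use disclosure. The schema is a
# suggestion only; field names are not an established standard.
import json
from dataclasses import dataclass, asdict

@dataclass
class AIUseDisclosure:
    system: str         # type, name, and version of the AI system
    tasks: list         # what the system was used for
    used_by: str        # which contributor used it, and when
    rationale: str      # why this system rather than alternatives
    unexplainable: str  # aspects that cannot be explained (e.g., weightings)
    limitations: str    # known reliability/accuracy caveats

disclosure = AIUseDisclosure(
    system="ExampleLLM v1.0 (hypothetical)",
    tasks=["first-draft copyediting of the discussion section"],
    used_by="First author, March 2024",
    rationale="Only system approved by our institution",
    unexplainable="Internal weightings of the underlying model",
    limitations="Output was checked manually for factual errors",
)
print(json.dumps(asdict(disclosure), indent=2))
```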

As mentioned previously, making AI explainable does not completely solve the black box problem but it can play a key role in promoting transparency, accountability, and trust [ 7 , 9 ]. While there seems to be an emerging consensus on the utility and importance of making AI explainable, there is very little agreement about what explainability means in practice, because what makes AI explainable depends on the context of its use [ 58 ]. Clearly, this is a topic where more empirical research and ethical/policy analysis is needed.

7.5 AI and confidentiality

Using AI in research, especially generative AI models, raises ethical issues related to data privacy and confidentiality. ChatGPT, for example, stores the information submitted by users, including data submitted in initial prompts and subsequent interactions. Unless users opt out, this information could be used for training and other purposes. The data could potentially include personal and confidential information, such as information contained in drafts of scientific papers, grant proposals, experimental protocols, or institutional policies; computer code; legal strategies; business plans; and private information about human research participants [ 67 , 85 ]. Due to concerns about breaches of confidentiality, the National Institutes of Health (NIH) recently prohibited the use of generative AI technologies, such as LLMs, in grant peer review [ 159 ]. Footnote 22 Some US courts now require lawyers to disclose their use of generative AI in preparing legal documents and make assurances that they have taken appropriate steps to protect confidentiality [ 146 ].

While we are not suggesting that concerns about confidentiality justify prohibiting generative AI use in science, we think that considerable caution is warranted. Researchers who use generative AI to edit or review a document should assume that the material contained in it will not be kept confidential, and therefore, should not use these systems to edit or review anything containing confidential or personal information.
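
Where some use of such systems is unavoidable, a redaction pre-filter can strip obvious identifiers before any text is submitted. The following sketch is illustrative only; the regular expressions will not catch every identifier, so redaction complements, rather than replaces, the caution urged above.

```python
# Minimal sketch of redacting obvious personal identifiers before text
# is sent to a third-party generative AI service. Patterns are
# illustrative and intentionally incomplete.
import re

PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
}

def redact(text: str) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[REDACTED {label.upper()}]", text)
    return text

print(redact("Contact participant P07 at jane.doe@example.org or 919-555-0100."))
```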

It is worth noting that technological solutions to the confidentiality problem may be developed in due course. For example, if an organization operates a local application of an LLM and places the technology behind a secure firewall, its members can use the technology safely. Electronic medical records, for example, have this type of security [ 127 ]. Some universities have already begun experimenting with operating their own AI systems for use by students, faculty, and administrators [ 225 ]. Also, as mentioned in Sect.  7.3 , the use of synthetic data may help to protect confidentiality.
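
As a simple illustration of the local-deployment idea, a small open model can be run entirely on an institution's own hardware, so prompts never cross the firewall. The sketch assumes the Hugging Face transformers library, and "gpt2" stands in for whatever model a real firewalled deployment would actually use.

```python
# Minimal sketch of local text generation: the prompt is processed
# entirely on local hardware and never sent to an external service.
# Assumes the Hugging Face transformers library; "gpt2" is a stand-in.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
out = generator("Confidential draft abstract: ...", max_new_tokens=30)
print(out[0]["generated_text"])
```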

7.6 AI and moral agency

The next issue we will discuss is whether AI can be considered a moral agent that participates in an epistemic community, that is, as a partner in knowledge generation. This became a major issue for the ethical norms of science in the winter of 2022–2023, when some researchers listed ChatGPT as an author on papers [ 102 ]. These publications initiated a vigorous debate in the research community, and journals scrambled to develop policies to deal with LLMs' use in research. On one end of the spectrum, Jenkins and Lin [ 116 ] argued that AI systems can be authors if they make a substantial contribution to the research, and on the other end, Thorp [ 218 ] argued that AI systems cannot be named as authors and should not be used at all in preparing manuscripts. Currently, there seems to be an emerging consensus that falls between these two extreme positions, namely, that AI systems can be used in preparing manuscripts but that their use should be appropriately disclosed and discussed [ 4 , 102 ]. In 2023, the International Committee of Medical Journal Editors (ICMJE), a highly influential organization with over 4,500 member journals, released the following statement about AI and authorship:

At submission, the journal should require authors to disclose whether they used artificial intelligence (AI)-assisted technologies (such as Large Language Models [LLMs], chatbots, or image creators) in the production of submitted work. Authors who use such technology should describe, in both the cover letter and the submitted work, how they used it. Chatbots (such as ChatGPT) should not be listed as authors because they cannot be responsible for the accuracy, integrity, and originality of the work, and these responsibilities are required for authorship (see Section II.A.1). Therefore, humans are responsible for any submitted material that included the use of AI-assisted technologies. Authors should carefully review and edit the result because AI can generate authoritative-sounding output that can be incorrect, incomplete, or biased. Authors should not list AI and AI-assisted technologies as an author or co-author, nor cite AI as an author. Authors should be able to assert that there is no plagiarism in their paper, including in text and images produced by the AI. Humans must ensure there is appropriate attribution of all quoted material, including full citations [ 113 ].

We agree with the ICMJE’s position, which mirrors views we defended in print before the ICMJE released its guidance [ 101 , 102 ].

Authorship on scientific papers is based not only on making a substantial contribution, but also on being accountable for the work [ 207 ]. Because authorship implies significant epistemic and ethical responsibilities, one should not be named as an author on a work if one cannot be accountable for one’s contribution to the work. If questions arise about the work after publication, one needs to be able to answer those questions intelligibly and if deemed liable, face possible legal, financial, or social consequences for one’s actions.

AI systems cannot be held accountable for their actions for three reasons: (1) they cannot provide intelligible explanations for what they did; (2) they cannot be held morally responsible for their actions; and (3) they cannot suffer consequences or be sanctioned. The first reason has to do with the previously discussed black box problem. When we hold humans accountable, we expect them to explain their behavior in clear and intelligible language. Footnote 23 If a principal investigator wonders why a graduate student did not report all the data related to an experiment, the investigator expects the student to explain why they did what they did. Current AI systems cannot do this. In some cases, someone else may be able to provide an explanation of how they work and what they do, but this is not the same as the AI providing the explanation, which is a prerequisite for accountability. Although current proposals for making AI explainable may help to deal with this issue, they still fall far short of humanlike accountability, because these proposals do not require that the AI system, itself, provide an explanation. The second reason, concerning moral responsibility, is addressed in the next paragraph. The third reason has to do with the link between accountability and sanctions. If an AI system makes a mistake that harms others, it cannot be sanctioned. These systems do not have interests, values, reputations, or feelings in the same way that humans do and cannot be punished by law enforcement.

Even if an AI can intelligibly explain itself in the future, this does not imply that it can be morally responsible. While the concept of moral agency, like the concept of consciousness, is controversial, there is general agreement that moral agency requires the capacity to perform intentional (or purposeful) actions, understand moral norms, and make decisions based on moral norms. These capacities also presuppose additional capacities, such as consciousness, self-awareness, personal memory, perception, general intelligence, and emotions [ 46 , 95 , 213 ]. While computer scientists are making some progress on developing AI systems that have quasi-moral agency, that is, AI systems that can make decisions based on moral norms [ 71 , 196 , 203 ], they are still a long way from developing AGI or AC (see definitions of these terms in Sect.  2 ), which would seem to be required for genuine moral agency.

Moreover, other important implications follow from current AI’s lack of moral agency. First, AI systems cannot be named as inventors on patents, because inventorship also implies moral agency [ 62 ]. Patents are granted to individuals, i.e., persons, but since AI systems lack moral agency, they do not qualify as persons under the patent laws adopted by most countries. Second, AI systems cannot be copyright holders, because to own a copyright, one must be a person [ 49 ]. Copyrights, under US law, are granted only to people [ 224 ].

Although AI systems should not be named as authors or inventors, it is still important to appropriately recognize their contributions. Recognition should be granted not only to promote honesty and transparency in research but also to prevent human authors from receiving undue credit. For example, although many scientists and engineers deserve considerable accolades for solving the protein folding problem [ 118 , 176 ], failing to mention the role of AlphaFold in this discovery would be giving human contributors more credit than they deserve.

7.7 AI and research ethics education

The last topic we will address in this section has to do with education and mentoring in responsible conduct of research (RCR), which is widely recognized as essential to promoting ethical judgment, reasoning, and behavior in science [ 207 ]. In the US, the NIH and National Science Foundation (NSF) require RCR education for funded students and trainees, and many academic institutions require some form of RCR training for all research faculty [ 190 ]. Topics typically covered in RCR courses, seminars, workshops, or training sessions include data fabrication and falsification, plagiarism, investigation of misconduct, scientific record keeping, data management, rigor and reproducibility, authorship, peer review, publication, conflict of interest, mentoring, safe research environments, protection of human and animal subjects, and social responsibility [ 207 ]. As demonstrated in this paper, the use of AI in research has a direct bearing on most of these topics, but especially on authorship, rigor and reproducibility, peer review, and social responsibility. We recommend, therefore, that RCR education and training incorporate discussion of the use of AI in research, wherever relevant.

8 Conclusion

Using AI in research benefits science and society but also creates some novel and complex ethical issues that affect accountability, responsibility, transparency, trustworthiness, reproducibility, fairness, objectivity, and other important values in research. Although scientists do not need to radically revise their ethical norms to deal with these issues, they do need new guidance for the appropriate use of AI in research. Table 2 provides a summary of our recommendations for this guidance. Since AI continues to advance rapidly, scientists, academic institutions, funding agencies, and publishers should continue to discuss AI's impact on research and update their knowledge, ethical guidelines, and policies accordingly. Guidance should be periodically revised as AI becomes woven into the fabric of scientific practice (or normalized) and researchers learn about it, adapt to it, and use it in novel ways. Since science has significant impacts on society, public engagement in such discussions is crucial for the responsible use and development of AI in research [ 234 ].

In closing, we observe that many scholars, including ourselves, assume that today's AI systems lack the capacities necessary for moral agency. This assumption has played a key role in our analysis of ethical uses of AI in research and has informed our recommendations. We realize that a day may arrive, possibly sooner than many would like to believe, when AI will advance to the point that this assumption will need to be revised, and society will need to come to terms with the moral rights and responsibilities of some types of AI systems. Perhaps AI systems will one day participate in science as full partners in discovery and innovation [ 33 , 126 ]. Although we do not view this as a matter that now demands immediate attention, we remain open to further discussion of this issue in the future.

There is not sufficient space in this paper to conduct a thorough review of all the ways that AI is being used in scientific research. For a review, see Wang et al. [ 231 ] and Krenn et al. [ 126 ].

However, the National Institutes of Health has prohibited the use of AI to review grants (see Sect.  7.5 ).

This is a simplified taxonomy of AI that we have found useful for framing the research ethics issues. For a more detailed taxonomy, see Graziani et al. [ 86 ].

See Krenn et al. [ 126 ] for a thoughtful discussion of the possible role of AGI in scientific research.

We will use the term ‘input’ in a very general sense to refer to data which are routed into the system, such as numbers, text, or image pixels.

It is important to note that the [ 167 ] paper was corrected to remove ChatGPT as an author because the tool did not meet the journal’s authorship criteria. See O’Connor [ 166 ].

There are important, philosophical issues at stake here concerning whether AI users should regard an output as ‘acceptable’ or ‘true’, but these questions are beyond the scope of our paper.

The question of whether true randomness exists in nature is metaphysically controversial because some physicists and philosophers argue that nothing happens by pure chance [ 64 ]. We do not need to delve into this issue here, since most people agree that the distinction can be viewed as epistemic rather than metaphysical; that is, an error is systematic or random relative to our knowledge about the generation of the error.

Some of the most well-known cases of bias involved the use of AI systems by private companies. For example, Amazon stopped using an AI hiring tool in 2018 after it discovered that the tool was biased against women [ 57 ]. In 2021, Facebook faced public ridicule and shame for using image recognition software that labelled images of African American men as non-human primates [ 117 ]. In 2021, Zillow lost hundreds of millions of dollars because its algorithm systematically overestimated the market value of homes the company purchased [ 170 ].

Fake citations and factual errors made by LLMs are often referred to as ‘hallucinations.’ We prefer not to use this term because it ascribes mental states to AI.

An additional, and perhaps more concerning, issue is that using chatbots to review the literature contributes to the deskilling of humanity because it involves trusting an AI’s interpretation and synthesis of the literature instead of reading it and thinking about it for oneself. Since deskilling is a problem with many different applications of AI, we will not explore it in depth here. See Vallor [ 226 ].

We are assuming here that the engineer or scientist has access to the computer code and training data, which private companies may be loath to provide. For example, developers at OpenAI and Google have not provided the public with access to their training data and code [ 130 ].

Although our discussion of the black box problem focuses on ML, in theory this problem could arise in any type of AI in which its workings cannot be understood by human beings.

Galileo had to convince his critics that his telescope could be trusted to convey reliable information about heavenly bodies, such as the moon and Jupiter. Explaining how the telescope works and comparing it to the human eye played an important role in his defense of the instrument [ 36 ].

This response may also conflate trust with verification. According to some theories of trust, if you trust something, you do not need to continually verify it. If I trust someone to tell me the truth, I do not need to continually verify that they are telling the truth. Indeed, it seems that we verify because we do not trust. For further discussion, see McLeod [ 145 ].

One could argue that deviation from honesty might be justified to protect human research subjects in some situations. For example, pseudonyms are often used in qualitative social/behavioral research to refer to participants or communities in order to protect their privacy [ 92 ].

Sanitizing LLMs is a form of censorship, which may be necessary in some cases, but also carries significant risks for freedom of expression [ 236 ].

Public, community, and stakeholder engagement is widely accepted as important for promoting trust in science and technology, but it can be difficult to implement, especially since publics, communities, and stakeholders can be difficult to identify and may have conflicting interests [ 157 ].

US federal policy defines research misconduct as data fabrication or falsification or plagiarism [ 168 ].

While the difference between recklessness and negligence can be difficult to ascertain, one way of thinking of recklessness is that it involves an indifference to or disregard for the veracity or integrity of research. Although almost all misconduct findings claim that the accused person (or respondent) acted intentionally, knowingly, or recklessly, there have been a few cases in which the respondent was found only to have acted recklessly [ 42 , 193 , 194 ].

The distinction between synthetic and real data raises some interesting and important philosophical and policy issues that we will examine in more depth in future work.

Some editors and publishers have been using AI to review and screen journal submissions [ 35 , 212 ]. For a discussion of issues raised by using AI in peer review, see Hosseini and Horbach [ 98 , 99 ].

This issue reminds us of the scene in 2001: A Space Odyssey in which the human astronauts ask the ship’s computer, HAL, to explain why it incorrectly diagnosed a problem with the AE-35 unit. HAL responds that HAL 9000 computers have never made an error so the misdiagnosis must be due to human error.

Aboumatar, H., Thompson, C., Garcia-Morales, E., Gurses, A.P., Naqibuddin, M., Saunders, J., Kim, S.W., Wise, R.: Perspective on reducing errors in research. Contemp. Clin. Trials Commun. 23 , 100838 (2021)

Alberts, B., Johnson, A., Lewis, J., Raff, M., Roberts, K., Walters, P.: Molecular Biology of the Cell, 4th edn. Garland Science, New York and London (2002)

Ali, R., Connolly, I.D., Tang, O.Y., Mirza, F.N., Johnston, B., Abdulrazeq, H.F., Galamaga, P.F., Libby, T.J., Sodha, N.R., Groff, M.W., Gokaslan, Z.L., Telfeian, A.E., Shin, J.H., Asaad, W.F., Zou, J., Doberstein, C.E.: Bridging the literacy gap for surgical consents: an AI-human expert collaborative approach. NPJ Digit. Med. 7 (1), 63 (2024)

All European Academies.: The European Code of Conduct for Research Integrity, Revised Edition 2023 (2023). https://allea.org/code-of-conduct/

Allyn, B.: The Google engineer who sees company's AI as 'sentient' thinks a chatbot has a soul. NPR (2022). https://www.npr.org/2022/06/16/1105552435/google-ai-sentient

Alvarado, R.: Should we replace radiologists with deep learning? Bioethics 36 (2), 121–133 (2022)

Alvarado, R.: What kind of trust does AI deserve, if any? AI Ethics (2022). https://doi.org/10.1007/s43681-022-00224-x

Alvarado, R.: Computer simulations as scientific instruments. Found. Sci. 27 (3), 1183–1205 (2022)

Alvarado, R.: AI as an epistemic technology. Sci. Eng. Ethics 29 , 32 (2023)

American Society of Microbiology.: Code of Conduct (2021). https://asm.org/Articles/Ethics/COEs/ASM-Code-of-Ethics-and-Conduct

Ankarstad, A.: What is explainable AI (XAI)? Towards Data Science (2020). https://towardsdatascience.com/what-is-explainable-ai-xai-afc56938d513

Antun, V., Renna, F., Poon, C., Adcock, B., Hansen, A.C.: On instabilities of deep learning in image reconstruction and the potential costs of AI. Proc. Natl. Acad. Sci. U.S.A. 117 (48), 30088–30095 (2020)

Assael, Y., Sommerschield, T., Shillingford, B., Bordbar, M., Pavlopoulos, J., Chatzipanagiotou, M., Androutsopoulos, I., Prag, J., de Freitas, N.: Restoring and attributing ancient texts using deep neural networks. Nature 603 , 280–283 (2022)

Babu, N.V., Kanaga, E.G.M.: Sentiment analysis in social media data for depression detection using artificial intelligence: a review. SN Comput. Sci. 3 , 74 (2022)

Badini, S., Regondi, S., Pugliese, R.: Unleashing the power of artificial intelligence in materials design. Materials 16 (17), 5927 (2023). https://doi.org/10.3390/ma16175927

Ball, P.: Is AI leading to a reproducibility crisis in science? Nature 624 , 22–25 (2023)

Barrera, F.J., Brown, E.D.L., Rojo, A., Obeso, J., Plata, H., Lincango, E.P., Terry, N., Rodríguez-Gutiérrez, R., Hall, J.E., Shekhar, S.: Application of machine learning and artificial intelligence in the diagnosis and classification of polycystic ovarian syndrome: a systematic review. Front. Endocrinol. (2023). https://doi.org/10.3389/fendo.2023.1106625

Bartosz, B.B., Bartosz, J.: Can artificial intelligences be moral agents? New Ideas Psychol. 54 , 101–106 (2019)

Baum, J., Villasenor, J.: The politics of AI: ChatGPT and political biases. Brookings (2023). https://www.brookings.edu/articles/the-politics-of-ai-chatgpt-and-political-bias/

BBC News.: Alexa tells 10-year-old girl to touch live plug with penny. BBC News (2021). https://www.bbc.com/news/technology-59810383

Begus, G., Sprouse, R., Leban, A., Silva, M., Gero, S.: Vowels and diphthongs in sperm whales (2024). https://doi.org/10.31219/osf.io/285cs

Bevier, C.: ChatGPT broke the Turing test—the race is on for new ways to assess AI. Nature (2023). https://www.nature.com/articles/d41586-023-02361-7

Bevier, C.: The easy intelligence test that AI chatbots fail. Nature 619 , 686–689 (2023)

Bhattacharyya, M., Miller, V.M., Bhattacharyya, D., Miller, L.E.: High rates of fabricated and inaccurate references in ChatGPT-generated medical content. Cureus 15 (5), e39238 (2023)

Biddle, S.: The internet’s new favorite AI proposes torturing Iranians and surveilling mosques. The Intercept (2022). https://theintercept.com/2022/12/08/openai-chatgpt-ai-bias-ethics/

Bird, S.J., Housman, D.E.: Trust and the collection, selection, analysis and interpretation of data: a scientist’s view. Sci. Eng. Ethics 1 (4), 371–382 (1995)

Biology for Life.: Error analysis (n.d.). https://www.biologyforlife.com/error-analysis.html

Blumauer, A.: How ChatGPT works and the problems with non-explainable AI. Pool Party (2023). https://www.poolparty.biz/blogposts/how-chat-gpt-works-non-explainable-ai#:~:text=ChatGPT%20is%20the%20antithesis%20of,and%20explainability%20are%20critical%20requirements

Bogost, I.: ChatGPT is dumber than you think. The Atlantic (2022). https://www.theatlantic.com/technology/archive/2022/12/chatgpt-openai-artificial-intelligencewriting-ethics/672386/

Bolanos, F., Salatino, A., Osborne, F., Motta, E.: Artificial intelligence for literature reviews: opportunities and challenges (2024). arXiv:2402.08565

Bordukova, M., Makarov, N., Rodriguez-Esteban, P., Schmich, F., Menden, M.P.: Generative artificial intelligence empowers digital twins in drug discovery and clinical trials. Expert Opin. Drug Discov. 19 (1), 33–42 (2024)

Borowiec, M.L., Dikow, R.B., Frandsen, P.B., McKeeken, A., Valentini, G., White, A.E.: Deep learning as a tool for ecology and evolution. Methods Ecol. Evol. 13 (8), 1640–1660 (2022)

Bostrom, N.: Superintelligence: Paths, Dangers, Strategies. Oxford University Press, Oxford (2014)

Bothra, A., Cao, Y., Černý, J., Arora, G.: The epidemiology of infectious diseases meets AI: a match made in heaven. Pathogens 12 (2), 317 (2023)

Brainard, J.: As scientists face a flood of papers, AI developers aim to help. Science (2023). https://www.science.org/content/article/scientists-face-flood-papers-ai-developers-aim-help

Brown, H.I.: Galileo on the telescope and the eye. J. Hist. Ideas 46 (4), 487–501 (1985)

Brumfiel, G.: New proteins, better batteries: Scientists are using AI to speed up discoveries. NPR (2023). https://www.npr.org/sections/health-shots/2023/10/12/1205201928/artificial-intelligence-ai-scientific-discoveries-proteins-drugs-solar

Brunello, N.: Example of a deep neural network (2021). https://commons.wikimedia.org/wiki/File:Example_of_a_deep_neural_network.png

Burrell, J.: How the machine ‘thinks’: understanding opacity in machine learning algorithms. Big Data Soc. 3 (1), 2053951715622512 (2016)

Calder, T.: The concept of evil. Stanford Encyclopedia of Philosophy (2022). https://plato.stanford.edu/entries/concept-evil/#KanTheEvi

Callaway, A.: ‘The entire protein universe’: AI predicts shape of nearly every known protein. Nature 608 , 14–16 (2022)

Caron, M.M., Dohan, S.B., Barnes, M., Bierer, B.E.: Defining "recklessness" in research misconduct proceedings. Accountability in Research, pp. 1–23 (2023)

Castelvecchi, D.: AI chatbot shows surprising talent for predicting chemical properties and reactions. Nature (2024). https://www.nature.com/articles/d41586-024-00347-7

CBS News.: ChatGPT and large language model bias. CBS News (2023). https://www.cbsnews.com/news/chatgpt-large-language-model-bias-60-minutes-2023-03-05/

CC BY-SA 4.0 DEED.: Amino-acid chains, known as polypeptides, fold to form a protein (2020). https://en.wikipedia.org/wiki/AlphaFold#/media/File:Protein_folding_figure.png

Cervantes, J.A., López, S., Rodríguez, L.F., Cervantes, S., Cervantes, F., Ramos, F.: Artificial moral agents: a survey of the current status. Sci. Eng. Ethics 26 (2), 501–532 (2020)

Chan, B.: Black-box assisted medical decisions: AI power vs. ethical physician care. Med. Health Care Philos. 26 , 285–292 (2023)

ChatGPT, Zhavoronkov, A.: Rapamycin in the context of Pascal’s Wager: generative pre-trained transformer perspective. Oncoscience 9 , 82–84 (2022)

Chatterjee, M.: AI cannot hold copyright, federal judge rules. Politico (2023). https://www.politico.com/news/2023/08/21/ai-cannot-hold-copyright-federal-judge-rules-00111865#:~:text=Friday's%20ruling%20will%20be%20a%20critical%20component%20in%20future%20legal%20fights.&text=Artificial%20intelligence%20cannot%20hold%20a,a%20federal%20judge%20ruled%20Friday

Chen, R.J., Lu, M.Y., Chen, T.Y., Williamson, D.F., Mahmood, F.: Synthetic data in machine learning for medicine and healthcare. Nat. Biomed. Eng. 5 , 493–497 (2021)

Chen, S., Kann, B.H., Foote, M.B., Aerts, H.J.W.L., Savova, G.K., Mak, R.H., Bitterman, D.S.: Use of artificial intelligence chatbots for cancer treatment information. JAMA Oncol. 9 (10), 1459–1462 (2023)

Levinthal, C.: How to fold graciously. In: Mossbauer Spectroscopy in Biological Systems: Proceedings of a Meeting Held at Allerton House, Monticello, Illinois, pp. 22–24 (1969)

Conroy, G.: Scientists used ChatGPT to generate an entire paper from scratch—but is it any good? Nature 619 , 443–444 (2023)

Conroy, G.: How ChatGPT and other AI tools could disrupt scientific publishing. Nature (2023). https://www.nature.com/articles/d41586-023-03144-w

Dai, B., Xu, Z., Li, H., Wang, B., Cai, J., Liu, X.: Racial bias can confuse AI for genomic studies. Oncologie 24 (1), 113–130 (2022)

Daneshjou, R., Smith, M.P., Sun, M.D., Rotemberg, V., Zou, J.: Lack of transparency and potential bias in artificial intelligence data sets and algorithms: a scoping review. JAMA Dermatol. 157 (11), 1362–1369 (2021)

Dastin, J.: Amazon scraps secret AI recruiting tool that showed bias against women. Reuters (2018). https://www.reuters.com/article/us-amazon-com-jobs-automation-insight-idUSKCN1MK08G

de Bruijn, H., Warnier, M., Janssen, M.: The perils and pitfalls of explainable AI: strategies for explaining algorithmic decision-making. Gov. Inf. Q. 39 (2), 101666 (2022)

Delua, J.: Supervised vs. unsupervised learning: What’s the difference? IBM (2021). https://www.ibm.com/blog/supervised-vs-unsupervised-learning/

Dhinakaran, A.: Overcoming AI’s transparency paradox. Forbes (2021). https://www.forbes.com/sites/aparnadhinakaran/2021/09/10/overcoming-ais-transparency-paradox/?sh=6c6b18834b77

Dickson, B.: LLMs can’t self-correct in reasoning tasks, DeepMind study finds. Tech Talks (2023). https://bdtechtalks.com/2023/10/09/llm-self-correction-reasoning-failures

Dunlap, T.: Artificial intelligence (AI) as an inventor? Dunlap, Bennett and Ludwig (2023). https://www.dbllawyers.com/artificial-intelligence-as-an-inventor/

Durán, J.M., Jongsma, K.R.: Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. J. Med. Ethics 47 (5), 329–335 (2021)

Einstein, A.: Letter to Max Born. Walker and Company, New York (1926). Published in: Irene Born (translator), The Born-Einstein Letters (1971)

Eisenstein, M.: Teasing images apart, cell by cell. Nature 623 , 1095–1097 (2023)

Eliot, L.: Nobody can explain for sure why ChatGPT is so good at what it does, troubling AI ethics and AI Law. Forbes (2023). https://www.forbes.com/sites/lanceeliot/2023/04/17/nobody-can-explain-for-sure-why-chatgpt-is-so-good-at-what-it-does-troubling-ai-ethics-and-ai-law/?sh=334c95685041

Eliot, L.: Generative AI ChatGPT can disturbingly gobble up your private and confidential data, forewarns AI ethics and AI law. Forbes (2023). https://www.forbes.com/sites/lanceeliot/2023/01/27/generative-ai-chatgpt-can-disturbingly-gobble-up-your-private-and-confidential-data-forewarns-ai-ethics-and-ai-law/?sh=592b16547fdb

Elliott, K.C., Resnik, D.B.: Making open science work for science and society. Environ. Health Perspect. 127 (7), 75002 (2019)

Euro News.: Man ends his life after an AI chatbot 'encouraged' him to sacrifice himself to stop climate change. Euro News (2023). https://www.euronews.com/next/2023/03/31/man-ends-his-life-after-an-ai-chatbot-encouraged-him-to-sacrifice-himself-to-stop-climate

European Agency for Fundamental Rights.: Data quality and Artificial Intelligence—Mitigating Bias and Error to Protect Fundamental Rights (2019). https://fra.europa.eu/sites/default/files/fra_uploads/fra-2019-data-quality-and-ai_en.pdf

Evans, K., de Moura, N., Chauvier, S., Chatila, R., Dogan, E.: Ethical decision making in autonomous vehicles: the AV ethics project. Sci. Eng. Ethics 26 , 3285–3312 (2020)

Extance, A.: How AI technology can tame the scientific literature. Nature (2018). https://www.nature.com/articles/d41586-018-06617-5

Fanelli, D.: How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE 4 (5), e5738 (2009)

Food and Drug Administration.: Artificial intelligence (AI) and machine learning (ML) in medical devices (2020). https://www.fda.gov/media/142998/download

Food and Drug Administration.: Development and approval process: drugs (2023). https://www.fda.gov/drugs/development-approval-process-drugs

Fraenkel, A.S.: Complexity of protein folding. Bull. Math. Biol. 55 (6), 1199–1210 (1993)

Fuhrman, J.D., Gorre, N., Hu, Q., Li, H., El Naqa, I., Giger, M.L.: A review of explainable and interpretable AI with applications in COVID-19 imaging. Med. Phys. 49 (1), 1–14 (2022)

Garin, S.P., Parekh, V.S., Sulam, J., Yi, P.H.: Medical imaging data science competitions should report dataset demographics and evaluate for bias. Nat. Med. 29 (5), 1038–1039 (2023)

Giere, R., Bickle, J., Maudlin, R.F.: Understanding Scientific Reasoning, 5th edn. Wadsworth, Belmont (2005)

Gillispie, C.C.: The Edge of Objectivity. Princeton University Press, Princeton (1960)

Giuffrè, M., Shung, D.L.: Harnessing the power of synthetic data in healthcare: innovation, application, and privacy. NPJ Digit. Med. 6 , 186 (2023)

Godwin, R.C., Bryant, A.S., Wagener, B.M., Ness, T.J., DeBerryJJ, H.L.L., Graves, S.H., Archer, A.C., Melvin, R.L.: IRB-draft-generator: a generative AI tool to streamline the creation of institutional review board applications. SoftwareX 25 , 101601 (2024)

Google.: Responsible AI practices (2023). https://ai.google/responsibility/responsible-ai-practices/

Goldman, A.I.: Liaisons: philosophy meets the cognitive and social sciences. MIT Press, Cambridge (2003)

Grad, P.: Trick prompts ChatGPT to leak private data. TechXplore (2023). https://techxplore.com/news/2023-12-prompts-chatgpt-leak-private.html

Graziani, M., Dutkiewicz, L., Calvaresi, D., Amorim, J.P., Yordanova, K., Vered, M., Nair, R., Abreu, P.H., Blanke, T., Pulignano, V., Prior, J.O., Lauwaert, L., Reijers, W., Depeursinge, A., Andrearczyk, V., Müller, H.: A global taxonomy of interpretable AI: unifying the terminology for the technical and social sciences. Artif. Intell. Rev. 56 , 3473–3504 (2023)

Guinness, H.: The best AI image generators in 2023. Zappier (2023). https://zapier.com/blog/best-ai-image-generator/

Gulshan, V., Peng, L., Coram, M., Stumpe, M.C., Wu, D., Narayanaswamy, A., Venugopalan, S., Widner, K., Madams, T., Cuadros, J., Kim, R., Raman, R., Nelson, P.C., Mega, J.L., Webster, D.R.: Development and validation of a deep learning algorithm for detection of diabetic retinopathy in retinal fundus photographs. JAMA 316 (22), 2402–2410 (2016)

Haack, S.: Defending Science within Reason. Prometheus Books, New York (2007)

Hackernoon.: The Times v. Microsoft/OpenAI: unauthorized reproduction of Times works in GPT model training (2024). https://hackernoon.com/the-times-v-microsoftopenai-unauthorized-reproduction-of-times-works-in-gpt-model-training-10

Hagendorff, T., Fabi, S., Kosinski, M.: Human-like intuitive behavior and reasoning biases emerged in large language models but disappeared in ChatGPT. Nat. Comput. Sci. (2023). https://doi.org/10.1038/s43588-023-00527-x

Heaton, J.: “*Pseudonyms are used throughout”: a footnote, unpacked. Qual. Inq. 1 , 123–132 (2022)

Heikkilä, M.: How OpenAI is trying to make ChatGPT safer and less biased. The Atlantic (2023). https://www.technologyreview.com/2023/02/21/1068893/how-openai-is-trying-to-make-chatgpt-safer-and-less-biased/

Helmenstine, A.: Systematic vs random error—differences and examples. Science Notes (2021). https://sciencenotes.org/systematic-vs-random-error-differences-and-examples/

Himma, K.E.: Artificial agency, consciousness, and the criteria for moral agency: what properties must an artificial agent have to be a moral agent? Ethics Inf. Technol. 11 , 19–29 (2009)

Holzinger, A., Langs, G., Denk, H., Zatloukal, K., Müller, H.: Causability and explainability of artificial intelligence in medicine. Wires (2019). https://doi.org/10.1002/widm.1312

Hosseini, M., Holmes, K.: Is it ethical to use generative AI if you can’t tell whether it is right or wrong? [Blog Post]. Impact of Social Sciences(2024). https://blogs.lse.ac.uk/impactofsocialsciences/2024/03/15/is-it-ethical-to-use-generative-ai-if-you-cant-tell-whether-it-is-right-or-wrong/

Hosseini, M., Horbach, S.P.J.M.: Fighting reviewer fatigue or amplifying bias? Considerations and recommendations for use of ChatGPT and other large language models in scholarly peer review. Res. Integr. Peer Rev. 8 (1), 4 (2023)

Hosseini, M., Horbach, S.P.J.M.: Can generative AI add anything to academic peer review? [Blog Post] Impact of Social Sciences(2023). https://blogs.lse.ac.uk/impactofsocialsciences/2023/09/26/can-generative-ai-add-anything-to-academic-peer-review/

Hosseini, M., Senabre Hidalgo, E., Horbach, S.P.J.M., Güttinger, S., Penders, B.: Messing with Merton: the intersection between open science practices and Mertonian values. Accountability in Research, pp. 1–28 (2022)

Hosseini, M., Rasmussen, L.M., Resnik, D.B.: Using AI to write scholarly publications. Accountability in Research, pp. 1–9 (2023)

Hosseini, M., Resnik, D.B., Holmes, K.: The ethics of disclosing the use of artificial intelligence in tools writing scholarly manuscripts. Res. Ethics (2023). https://doi.org/10.1177/17470161231180449

Hosny, A., Parmar, C., Quackenbush, J., Schwartz, L.H., Aerts, H.J.W.L.: Artificial intelligence in radiology. Nat. Rev. Cancer 18 (8), 500–510 (2018)

Howson, C., Urbach, P.: Scientific Reasoning: A Bayesian Approach, 3rd edn. Open Court, New York (2005)

Humphreys, P.: Extending Ourselves: Computational Science, Empiricism, and Scientific Method. Oxford University Press, New York (2004)

Huo, T., Li, L., Chen, X., Wang, Z., Zhang, X., Liu, S., Huang, J., Zhang, J., Yang, Q., Wu, W., Xie, Y., Wang, H., Ye, Z., Deng, K.: Artificial intelligence-aided method to detect uterine fibroids in ultrasound images: a retrospective study. Sci. Rep. 13 (1), 3714 (2023)

Hutson, M.: Hypotheses devised by AI could find ‘blind spots’ in research. Nature (2023). https://www.nature.com/articles/d41586-023-03596

IBM.: What is AI? (2023). https://www.ibm.com/topics/artificial-intelligence

IBM.: What is a Captcha? (2023). https://www.ibm.com/topics/captcha

IBM.: Explainable AI (2023). https://www.ibm.com/topics/explainable-ai

IBM.: What is generative AI? (2023). https://research.ibm.com/blog/what-is-generative-AI

IBM.: What is ML? (2024). https://www.ibm.com/topics/machine-learning

International Committee of Medical Journal Editors.: Recommendations for the Conduct, Reporting, Editing, and Publication of Scholarly work in Medical Journals (2023). https://www.icmje.org/icmje-recommendations.pdf

International Organization for Standardization.: What is AI? (2024). https://www.iso.org/artificial-intelligence/what-is-ai#:~:text=Artificial%20intelligence%20is%20%E2%80%9Ca%20technical,%2FIEC%2022989%3A2022%5D

Janowicz, K., Gao, S., McKenzie, G., Hu, Y., Bhaduri, B.: GeoAI: spatially explicit artificial intelligence techniques for geographic knowledge discovery and beyond. Int. J. Geogr. Inf. Sci. 34 (4), 625–636 (2020)

Jenkins, R., Lin, P.: AI-assisted authorship: How to assign credit in synthetic scholarship. SSRN Scholarly Paper No. 4342909 (2023). https://doi.org/10.2139/ssrn.4342909

Jones, D.: Facebook apologizes after its AI labels black men as 'primates'. NPR (2021). https://www.npr.org/2021/09/04/1034368231/facebook-apologizes-ai-labels-black-men-primates-racial-bias

Jumper, J., Evans, R., Pritzel, A., Green, T., Figurnov, M., Ronneberger, O., Tunyasuvunakool, K., Bates, R., Žídek, A., Potapenko, A., Bridgland, A., Meyer, C., Kohl, S.A.A., Ballard, A.J., Cowie, A., Romera-Paredes, B., Nikolov, S., Jain, R., Adler, J., Back, T., Petersen, S., Reiman, D., Clancy, E., Zielinski, M., Steinegger, M., Pacholska, M., Berghammer, T., Bodenstein, S., Silver, D., Vinyals, O., Senior, A.W., Kavukcuoglu, K., Kohli, P., Hassabis, D.: Highly accurate protein structure prediction with AlphaFold. Nature 596 (7873), 583–589 (2021)

Junction AI.: What is ChatGPT not good at? Junction AI (2023). https://junction.ai/what-is-chatgpt-not-good-at/

Kahn, J.: What wrong with “explainable A.I.” Fortune (2022). https://fortune.com/2022/03/22/ai-explainable-radiology-medicine-crisis-eye-on-ai/

Kahneman, D.: Thinking, Fast and Slow. Farrar, Straus, Giroux, New York (2011)

Kembhavi, A., Pattnaik, R.: Machine learning in astronomy. J. Astrophys. Astron. 43 , 76 (2022)

Kennedy, B., Tyson, A., Funk, C.: Americans’ trust in scientists, other groups declines. Pew Research Center (2022). https://www.pewresearch.org/science/2022/02/15/americans-trust-in-scientists-other-groups-declines/

Kim, I., Kang, K., Song, Y., Kim, T.J.: Application of artificial intelligence in pathology: trends and challenges. Diagnostics (Basel) 12 (11), 2794 (2022)

Kitcher, P.: The Advancement of Knowledge. Oxford University Press, New York (1993)

Krenn, M., Pollice, R., Guo, S.Y., Aldeghi, M., Cervera-Lierta, A., Friederich, P., Gomes, G.P., Häse, F., Jinich, A., Nigam, A., Yao, Z., Aspuru-Guzik, A.: On scientific understanding with artificial intelligence. Nat. Rev. Phys. 4 , 761–769 (2022)

Kruse, C.S., Smith, B., Vanderlinden, H., Nealand, A.: Security techniques for the electronic health records. J. Med. Syst. 41 (8), 127 (2017)

Kuhn, T.S.: The Essential Tension. University of Chicago Press, Chicago (1977)

Lal, A., Pinevich, Y., Gajic, O., Herasevich, V., Pickering, B.: Artificial intelligence and computer simulation models in critical illness. World Journal of Critical Care Medicine 9 (2), 13–19 (2020)

La Malfa, E., Petrov, A., Frieder, S., Weinhuber, C., Burnell, R., Cohn, A.G., Shadbolt, N., Woolridge, M.: The ARRT of language-models-as-a-service: overview of a new paradigm and its challenges (2023). arXiv: 2309.16573

Larkin, Z.: AI bias—what Is it and how to avoid it? Levity (2022). https://levity.ai/blog/ai-bias-how-to-avoid

Lee, N.T., Resnick, P., Barton, G.: Algorithmic Bias Detection and Mitigation: Best Practices and Policies to Reduce Consumer Harms. Brookings Institute, Washington, DC (2019)

Leswing, K.: OpenAI announces GPT-4, claims it can beat 90% of humans on the SAT. CNBC (2023). https://www.cnbc.com/2023/03/14/openai-announces-gpt-4-says-beats-90percent-of-humans-on-sat.html

Licht, K., Licht, J.: Artificial intelligence, transparency, and public decision-making: Why explanations are key when trying to produce perceived legitimacy. AI Soc. 35 , 917–926 (2020)

Lipenkova, J.: Overcoming the limitations of large language models: how to enhance LLMs with human-like cognitive skills. Towards Data Science (2023). https://towardsdatascience.com/overcoming-the-limitations-of-large-language-models-9d4e92ad9823

London, A.J.: Artificial intelligence and black-box medical decisions: accuracy versus explainability. Hastings Cent. Rep. 49 (1), 15–21 (2019)

Longino, H.: Science as Social Knowledge. Princeton University Press, Princeton (1990)

Lubell, J.: ChatGPT passed the USMLE. What does it mean for med ed? AMA (2023). https://www.ama-assn.org/practice-management/digital/chatgpt-passed-usmle-what-does-it-mean-med-ed

Martinho, A., Poulsen, A., Kroesen, M., Chorus, C.: Perspectives about artificial moral agents. AI Ethics 1 , 477–490 (2021)

Martinson, B.C., Anderson, M.S., de Vries, R.: Scientists behaving badly. Nature 435 (7043), 737–738 (2005)

Martins, C., Padovan, P., Reed, C.: The role of explainable AI (XAI) in addressing AI liability. SSRN (2020). https://ssrn.com/abstract=3751740

Matta, V., Bansal, G., Akakpo, F., Christian, S., Jain, S., Poggemann, D., Rousseau, J., Ward, E.: Diverse perspectives on bias in AI. J. Inf. Technol. Case Appl. Res. 24 (2), 135–143 (2022)

Matthewson, J.: Trade-offs in model-building: a more target-oriented approach. Stud. Hist. Philos. Sci. Part A 42 (2), 324–333 (2011)

McCarthy, J.: What is artificial intelligence? (2007). https://www-formal.stanford.edu/jmc/whatisai.pdf

McLeod, C.: Trust. Stanford Encyclopedia of Philosophy (2020). https://plato.stanford.edu/entries/trust/

Merken, S.: Another US judge says lawyers must disclose AI use. Reuters (2023). https://www.reuters.com/legal/transactional/another-us-judge-says-lawyers-must-disclose-ai-use-2023-06-08/

Merton, R.: The Sociology of Science. University of Chicago Press, Chicago (1973)

Messeri, L., Crockett, M.J.: Artificial intelligence and illusions of understanding in scientific research. Nature (2024). https://doi.org/10.1038/s41586-024-07146-0

Mieth, B., Rozier, A., Rodriguez, J.A., Höhne, M.M., Görnitz, N., Müller, R.K.: DeepCOMBI: explainable artificial intelligence for the analysis and discovery in genome-wide association studies. NAR Genom. Bioinform. 3 (3), lqab065 (2021)

Milmo, D.: Two US lawyers fined for submitting fake court citations from ChatGPT. The Guardian (2023). https://www.theguardian.com/technology/2023/jun/23/two-us-lawyers-fined-submitting-fake-court-citations-chatgpt

Mitchell, M.: Artificial Intelligence. Picador, New York (2019)

Mitchell, M.: What does it mean for AI to understand? Quanta Magazine (2021). https://www.quantamagazine.org/what-does-it-mean-for-ai-to-understand-20211216/

Mitchell, M.: AI’s challenge of understanding the world. Science 382 (6671), eadm8175 (2023)

Mittermaier, M., Raza, M.M., Kvedar, J.C.: Bias in AI-based models for medical applications: challenges and mitigation strategies. NPJ Digit. Med. 6 , 113 (2023)

Naddaf, M.: ChatGPT generates fake data set to support scientific hypothesis. Nature (2023). https://www.nature.com/articles/d41586-023-03635-w#:~:text=Researchers%20say%20that%20the%20model,doesn't%20pass%20for%20authentic

Nahas, K.: Now AI can be used to generate proteins. The Scientist (2023). https://www.the-scientist.com/news-opinion/now-ai-can-be-used-to-design-new-proteins-70997

National Academies of Sciences, Engineering, and Medicine: Gene Drives on the Horizon: Advancing Science, Navigating Uncertainty, and Aligning Research with Public Values. National Academies Press, Washington, DC (2016)

National Institutes of Health.: Guidelines for the Conduct of Research in the Intramural Program of the NIH (2023). https://oir.nih.gov/system/files/media/file/2023-11/guidelines-conduct_research.pdf

National Institutes of Health.: The use of generative artificial intelligence technologies is prohibited for the NIH peer review process. NOT-OD-23-149 (2023). https://grants.nih.gov/grants/guide/notice-files/NOT-OD-23-149.html

National Transportation and Safety Board.: Investigations (2023). https://www.ntsb.gov/investigations/Pages/Investigations.aspx

Nawaz, M.S., Fournier-Viger, P., Shojaee, A., Fujita, H.: Using artificial intelligence techniques for COVID-19 genome analysis. Appl. Intell. (Dordrecht) 51 (5), 3086–3103 (2021)

Ng, G.W., Leung, W.C.: Strong artificial intelligence and consciousness. J. Artif. Intell. Conscious. 7 (1), 63–72 (2020)

Nordling, L.: How ChatGPT is transforming the postdoc experience. Nature 622 , 655–657 (2023)

Nost, E., Colven, E.: Earth for AI: a political ecology of data-driven climate initiatives. Geoforum 130 , 23–34 (2022)

Ntoutsi, E., Fafalios, P., Gadiraju, U., Iosifidis, V., Nejdl, W., Vidal, M.E., Ruggieri, S., Turini, F., Papadopoulos, S., Krasanakis, E., Kompatsiaris, I., Kinder-Kurlanda, K., Wagner, C., Karimi, F., Fernandez, M., Alani, H., Berendt, B., Kruegel, T., Heinze, C., Broelemann, K., Kasneci, K., Tiropanis, T., Staab, S.: Bias in data-driven artificial intelligence systems—an introductory survey. Wires (2020). https://doi.org/10.1002/widm

O’Connor, S.: Corrigendum to “Open artificial intelligence platforms in nursing education: tools for academic progress or abuse?” [Nurse Educ. Pract. 66 (2023) 103537]. Nurse Educ. Pract. 67 , 103572 (2023)

O’Connor, S., ChatGPT: Open artificial intelligence platforms in nursing education: tools for academic progress or abuse? Nurse Educ. Pract. 66 , 103537 (2023)

Office of Science and Technology Policy: Federal research misconduct policy. Fed. Reg. 65 (235), 76260–76264 (2000)

Office and Science and Technology Policy.: Blueprint for an AI Bill of Rights (2022). https://www.whitehouse.gov/ostp/ai-bill-of-rights/

Olavsrud, T.: 9 famous analytics and AI disasters. CIO (2023). https://www.cio.com/article/190888/5-famous-analytics-and-ai-disasters.html

Omiye, J.A., Lester, J.C., Spichak, S., Rotemberg, V., Daneshjou, R.: Large language models propagate race-based medicine. NPJ Digit. Med. 6 , 195 (2023)

Oncology Medical Physics.: Accuracy, precision, and error (2024). https://oncologymedicalphysics.com/quantifying-accuracy-precision-and-error/

OpenAI.: (2023). https://openai.com/chatgpt

Osoba, O., Welser, W.: An Intelligence in Our Image: The Risks of Bias and Errors in Artificial Intelligence. Rand Corporation (2017). https://www.rand.org/content/dam/rand/pubs/research_reports/RR1700/RR1744/RAND_RR1744.pdf

Othman, K.: Public acceptance and perception of autonomous vehicles: a comprehensive review. AI Ethics 1 , 355–387 (2021)

Ovchinnikov, S., Park, H., Varghese, N., Huang, P.S., Pavlopoulos, G.A., Kim, D.E., Kamisetty, H., Kyrpides, N.C., Baker, D.: Protein structure determination using metagenome sequence data. Science 355 (6322), 294–298 (2017)

Parikh, R.B., Teeple, S., Navathe, A.S.: Addressing bias in artificial intelligence in health care. J. Am. Med. Assoc. 322 (24), 2377–2378 (2019)

Parrilla, J.M.: ChatGPT use shows that the grant-application system is broken. Nature (2023). https://www.nature.com/articles/d41586-023-03238-5

Pearson, J.: Scientific Journal Publishes AI-Generated Rat with Gigantic Penis In Worrying Incident [Internet]. Vice (2024). https://www.vice.com/en/article/dy3jbz/scientific-journal-frontiers-publishes-ai-generated-rat-with-gigantic-penis-in-worrying-incident

Pennock, R.T.: An Instinct for Truth: Curiosity and the Moral Character of Science. MIT Press, Cambridge (2019)

Perni, S., Lehmann, L.S., Bitterman, D.S.: Patients should be informed when AI systems are used in clinical trials. Nat. Med. 29 (8), 1890–1891 (2023)

Perrigo, B.: OpenAI used Kenyan workers on less than $2 per hour to make ChatGPT less toxic. Time Magazine (2023). https://time.com/6247678/openai-chatgpt-kenya-workers/

Pew Charitable Trust.: How FDA regulates artificial intelligence in medical products. Issue brief (2021). https://www.pewtrusts.org/en/research-and-analysis/issue-briefs/2021/08/how-fda-regulates-artificial-intelligence-in-medical-products

Raeburn, A.: What’s the difference between accuracy and precision? Asana (2023). https://asana.com/resources/accuracy-vs-precision

Rasmussen, L.: Why and how to incorporate issues of race/ethnicity and gender in research integrity education. Accountability in Research (2023)

Ratti, E., Graves, M.: Explainable machine learning practices: opening another black box for reliable medical AI. AI Ethics 2 , 801–814 (2022)

Resnik, D.B.: Social epistemology and the ethics of research. Stud. Hist. Philos. Sci. 27 , 566–586 (1996)

Resnik, D.B.: The Price of Truth: How Money Affects the Norms of Science. Oxford University Press, New York (2007)

Resnik, D.B.: Playing Politics with Science: Balancing Scientific Independence and Government Oversight. Oxford University Press, New York (2009)

Resnik, D.B., Dinse, G.E.: Do U.S. research institutions meet or exceed federal mandates for instruction in responsible conduct of research? A national survey. Acad. Med. 87 , 1237–1242 (2012)

Resnik, D.B., Elliott, K.C.: Value-entanglement and the integrity of scientific research. Stud. Hist. Philos. Sci. 75 , 1–11 (2019)

Resnik, D.B., Elliott, K.C.: Science, values, and the new demarcation problem. J. Gen. Philos. Sci. 54 , 259–286 (2023)

Resnik, D.B., Elliott, K.C., Soranno, P.A., Smith, E.M.: Data-intensive science and research integrity. Account. Res. 24 (6), 344–358 (2017)

Resnik, D.B., Smith, E.M., Chen, S.H., Goller, C.: What is recklessness in scientific research? The Frank Sauer case. Account. Res. 24 (8), 497–502 (2017)

Roberts, M., Driggs, D., Thorpe, M., Gilbey, J., Yeung, M., Ursprung, S., Aviles-Rivero, A.I., Etmann, C., McCague, C., Beer, L., Weir-McCall, J.R., Teng, Z., Gkrania-Klotsas, E., AIX-COVNET, Rudd, J.H.F., Sala, E., Schönlieb, C.B.: Common pitfalls and recommendations for using machine learning to detect and prognosticate for COVID-19 using chest radiographs and CT scans. Nat. Mach. Intell. 3 , 199–217 (2021)

Rodgers, W., Murray, J.M., Stefanidis, A., Degbey, W.Y., Tarba, S.: An artificial intelligence algorithmic approach to ethical decision-making in human resource management processes. Hum. Resour. Manag. Rev. 33 (1), 100925 (2023)

Romero, A.: AI won’t master human language anytime soon. Towards Data Science (2021). https://towardsdatascience.com/ai-wont-master-human-language-anytime-soon-3e7e3561f943

Röösli, E., Rice, B., Hernandez-Boussard, T.: Bias at warp speed: how AI may contribute to the disparities gap in the time of COVID-19. J. Am. Med. Inform. Assoc. 28 (1), 190–192 (2021)

Savage, N.: Breaking into the black box of artificial intelligence. Nature (2022). https://www.nature.com/articles/d41586-022-00858-1

Savage, N.: Synthetic data could be better than real data. Nature (2023). https://www.nature.com/articles/d41586-023-01445-8

Schmidt, E.: This is how AI will transform the way science gets done. MIT Technology Review (2023). https://www.technologyreview.com/2023/07/05/1075865/eric-schmidt-ai-will-transform-science/#:~:text=AI%20can%20also%20spread%20the,promising%20candidates%20for%20new%20drugs

Schwartz, R., Vassilev, A., Greene, K., Perine, L., Burt, A., Hal, P.: Towards a standard for identifying and managing bias in artificial intelligence. National Institute of Standards and Technology (2022). https://view.ckcest.cn/AllFiles/ZKBG/Pages/264/c914336ac0e68a6e3e34187adf9dd83bb3b7c09f.pdf

Semler, J.: Artificial quasi moral agency. In: AIES '22: Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society (2022). https://doi.org/10.1145/3514094.3539549

Service RF: The game has changed. AI trumphs at protein folding. Science 370 (6521), 1144–1145 (2022)

Service R.: Materials-predicting AI from DeepMind could revolutionize electronics, batteries, and solar cells. Science (2023). https://www.science.org/content/article/materials-predicting-ai-deepmind-could-revolutionize-electronics-batteries-and-solar

Seth, A.: Being You: A New Science of Consciousness. Faber and Faber, London (2021)

Shamoo, A.E., Resnik, D.B.: Responsible Conduct of Research, 4th edn. Oxford University Press, New York (2022)

Shapin, S.: Here and everywhere: sociology of scientific knowledge. Ann. Rev. Sociol. 21 , 289–321 (1995)

Solomon, M.: Social Empiricism. MIT Press, Cambridge (2007)

Southern, M.G.: ChatGPT update: Improved math capabilities. Search Engine Journal (2023). https://www.searchenginejournal.com/chatgpt-update-improved-math-capabilities/478057/

Straw, I., Callison-Burch, C.: Artificial Intelligence in mental health and the biases of language based models. PLoS ONE 15 (12), e0240376 (2020)

Swaak, T.: ‘We’re all using it’: Publishing decisions are increasingly aided by AI. That’s not always obvious. The Chronicle of Higher Education (2023). https://deal.town/the-chronicle-of-higher-education/academe-today-publishing-decisions-are-increasingly-aided-by-ai-but-thats-not-always-obvious-PK2J5KUC4

Talbert, M.: Moral responsibility. Stanford Encyclopedia of Philosophy (2019). https://plato.stanford.edu/entries/moral-responsibility/

Taloni, A., Scorcia, V., Giannaccre, G.: Large language model advanced data analysis abuse to create a fake data set in medical research. JAMA Ophthalmol. (2023). https://jamanetwork.com/journals/jamaophthalmology/fullarticle/2811505

Tambornino, L., Lanzerath, D., Rodrigues, R., Wright, D.: SIENNA D4.3: survey of REC approaches and codes for Artificial Intelligence & Robotics (2019). https://zenodo.org/records/4067990

Terwilliger, T.C., Liebschner, D., Croll, T.I., Williams, C.J., McCoy, A.J., Poon, B.K., Afonine, P.V., Oeffner, R.D., Richardson, J.S., Read, R.J., Adams, P.D.: AlphaFold predictions are valuable hypotheses and accelerate but do not replace experimental structure determination. Nat. Methods (2023). https://doi.org/10.1038/s41592-023-02087-4

The White House.: Biden-⁠Harris administration secures voluntary commitments from leading artificial intelligence companies to manage the risks posed by AI (2023). https://www.whitehouse.gov/briefing-room/statements-releases/2023/07/21/fact-sheet-biden-harris-administration-secures-voluntary-commitments-from-leading-artificial-intelligence-companies-to-manage-the-risks-posed-by-ai/#:~:text=President%20Biden%20signed%20an%20Executive,the%20public%20from%20algorithmic%20discrimination

Thorp, H.H.: ChatGPT is fun, but not an author. Science 379 (6630), 313 (2023)

Turing.: Complete analysis of artificial intelligence vs artificial consciousness (2023). https://www.turing.com/kb/complete-analysis-of-artificial-intelligence-vs-artificial-consciousness

Turing, A.: Computing machinery and intelligence. Mind 59 (236), 433–460 (1950)

UK Statistic Authority.: Ethical considerations relating to the creation and use of synthetic data (2022). https://uksa.statisticsauthority.gov.uk/publication/ethical-considerations-relating-to-the-creation-and-use-of-synthetic-data/pages/2/

Unbable.: Why AI fails in the wild. Unbable (2019). https://resources.unbabel.com/blog/artificial-intelligence-fails

UNESCO.: Ethics of Artificial Intelligence (2024). https://www.unesco.org/en/artificial-intelligence/recommendation-ethics

US Copyright Office: Copyright registration guidance: works containing material generated by artificial intelligence. Fed. Reg. 88 (51), 16190–16194 (2023)

University of Michigan.: Generative artificial intelligence (2023). https://genai.umich.edu/

Vallor, S.: Moral deskilling and upskilling in a new machine age: reflections on the ambiguous future of character. Philos. Technol. 28 , 107–124 (2015)

Van Gulick, R.: Consciousness. Stanford Encyclopedia of Philosophy (2018). https://plato.stanford.edu/entries/consciousness/

Varoquaux, G., Cheplygina, V.: Machine learning for medical imaging: methodological failures and recommendations for the future. NPJ Digit. Med. 5 , 48 (2022)

Vanian, J., Leswing, K.: ChatGPT and generative AI are booming, but the costs can be extraordinary. CNBC (2023). https://www.cnbc.com/2023/03/13/chatgpt-and-generative-ai-are-booming-but-at-a-very-expensive-price.html

Walters, W.H., Wilder, E.I.: Fabrication and errors in the bibliographic citations generated by ChatGPT. Sci. Rep. 13 , 14045 (2023)

Wang, H., Fu, T., Du, Y., Gao, W., Huang, K., Liu, Z., Chandak, P., Liu, S., Van Katwyk, P., Deac, A., Anandkumar, A., Bergen, K., Gomes, C.P., Ho, S., Kohli, P., Lasenby, J., Leskovec, J., Liu, T.Y., Manrai, A., Marks, D., Ramsundar, B., Song, L., Sun, J., Tang, J., Veličković, P., Welling, M., Zhang, L., Coley, C.W., Bengio, Y., Zitnik, M.: Scientific discovery in the age of artificial intelligence. Nature 620 (7972), 47–60 (2023)

Weiss, D.C.: Latest version of ChatGPT aces bar exam with score nearing 90th percentile. ABA J. (2023). https://www.abajournal.com/web/article/latest-version-of-chatgpt-aces-the-bar-exam-with-score-in-90th-percentile

Whitbeck, C.: Truth and trustworthiness in research. Sci. Eng. Ethics 1 (4), 403–416 (1995)

Wilson, C.: Public engagement and AI: a values analysis of national strategies. Gov. Inf. Q. 39 (1), 101652 (2022)

World Conference on Research Integrity.: Singapore Statement (2010). http://www.singaporestatement.org/statement.html

Zheng, S.: China’s answers to ChatGPT have a censorship problem. Bloomberg (2023). https://www.bloomberg.com/news/newsletters/2023-05-02/china-s-chatgpt-answers-raise-questions-about-censoring-generative-ai

Ziman, J.: Real Science. Cambridge University Press, Cambridge (2000)

Download references

Open access funding provided by the National Institutes of Health. Funding was provided by Foundation for the National Institutes of Health (Grant number: ziaes102646-10).

Author information

Authors and Affiliations

National Institute of Environmental Health Sciences, Durham, USA

David B. Resnik

Department of Preventive Medicine, Northwestern University Feinberg School of Medicine, Chicago, IL, USA

Mohammad Hosseini

Galter Health Sciences Library and Learning Center, Northwestern University Feinberg School of Medicine, Chicago, IL, USA

Corresponding author

Correspondence to David B. Resnik.

Ethics declarations

Conflict of interest

On behalf of all authors, the corresponding author states that there is no conflict of interest.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Resnik, D.B., Hosseini, M. The ethics of using artificial intelligence in scientific research: new guidance needed for a new tool. AI Ethics (2024). https://doi.org/10.1007/s43681-024-00493-8

Received: 14 December 2023

Accepted: 07 May 2024

Published: 27 May 2024

DOI: https://doi.org/10.1007/s43681-024-00493-8

Keywords

  • Artificial intelligence
  • Transparency
  • Accountability
  • Explainability
  • Social responsibility

Misinformation and disinformation

Misinformation is false or inaccurate information—getting the facts wrong. Disinformation is false information that is deliberately intended to mislead—intentionally misstating the facts.

The spread of misinformation and disinformation has affected our ability to improve public health, address climate change, maintain a stable democracy, and more. By providing valuable insight into how and why we are likely to believe misinformation and disinformation, psychological science can inform how we protect ourselves against its ill effects.

APA resolution

Combating misinformation and promoting psychological science literacy

Approved by APA Council of Representatives, February 2024

Using psychological science to understand and fight health misinformation

This report describes the best available psychological science on misinformation, particularly as it relates to health.

It offers eight specific recommendations to help scientists, policymakers, and health professionals respond to the ongoing threats posed by misinformation.

Is it safe to get health advice from influencers?

Eight specific ways to combat misinformation

Factors that make people believe misinformation

How and why does misinformation spread?

Magination Press children’s book

True or False? The Science of Perception, Misinformation, and Disinformation

Written for preteens and young teens in lively text accompanied by fun facts, this book explores what psychology tells us about the development and persistence of false perceptions and beliefs and the difficulty of correcting them, plus ways to debunk misinformation and think critically and factually about the world around us.

Advice to stem misinformation

What employers can do to counter election misinformation in the workplace

Using psychological science to fight misinformation: A guide for journalists

More from APA

Psychology is leading the way on fighting misinformation

This election year, fighting misinformation is messier and more important than ever

Stopping the spread of misinformation

The anatomy of a misinformation attack

Webinars and presentations

Tackling Misinformation Ahead of Election Day

APA and the Civic Alliance collaborated to address the impact of mis- and disinformation on our democracy. APA experts discussed the psychology behind how mis- and disinformation occur, and why we should care.

Building Back Trust in Science: Community-Centered Solutions

APA collaborated with the American Public Health Association, the National League of Cities, and Research!America to host a virtual national conversation about the psychology and impact of misinformation on public health.

Fighting Misinformation With Psychological Science

Psychological science is playing a key role in the global cooperative effort to combat misinformation and change the course on how we’re tackling critical societal issues.

Studying misinformation

Explore the latest psychological research on misinformation and disinformation

How long does gamified psychological inoculation protect people against misinformation?

Perceptions of fake news, misinformation, and disinformation amid the COVID-19 pandemic: A qualitative exploration

Quantifying the effects of fake news on behavior: Evidence from a study of COVID-19 misinformation

Countering misinformation and fake news through inoculation and prebunking

Who is susceptible to online health misinformation? A test of four psychosocial hypotheses

It might become true: How prefactual thinking licenses dishonesty

Federal resources

  • Centers for Disease Control and Prevention: How to address COVID-19 vaccine misinformation
  • U.S. Surgeon General: Health misinformation

Resources from other organizations

  • AARP: Teaching students how to spot misinformation
  • American Public Health Association: Podcast series: Confronting our disease of disinformation, Part 1 | Part 2 | Part 3 | Part 4
  • News Literacy Project: Webinar: Your brain and misinformation: Why people believe lies and conspiracy theories
  • NPC Journalism Institute: Webinar: Disinformation, midterms & the mind: How psychology can help journalists fight misinformation


Med Princ Pract. 2021 Feb;30(1)

Principles of Clinical Ethics and Their Application to Practice

An overview of ethics and clinical ethics is presented in this review. The 4 main ethical principles, that is beneficence, nonmaleficence, autonomy, and justice, are defined and explained. Informed consent, truth-telling, and confidentiality spring from the principle of autonomy, and each of them is discussed. In patient care situations, not infrequently, there are conflicts between ethical principles (especially between beneficence and autonomy). A four-pronged systematic approach to ethical problem-solving and several illustrative cases of conflicts are presented. Comments following the cases highlight the ethical principles involved and clarify the resolution of these conflicts. A model for patient care, with caring as its central element, that integrates ethical aspects (intertwined with professionalism) with clinical and technical expertise desired of a physician is illustrated.

Highlights of the Study

  • Main principles of ethics, that is beneficence, nonmaleficence, autonomy, and justice, are discussed.
  • Autonomy is the basis for informed consent, truth-telling, and confidentiality.
  • A model to resolve conflicts when ethical principles collide is presented.
  • Cases that highlight ethical issues and their resolution are presented.
  • A patient care model that integrates ethics, professionalism, and cognitive and technical expertise is shown.

Introduction

A defining responsibility of a practicing physician is to make decisions on patient care in different settings. These decisions involve more than selecting the appropriate treatment or intervention.

Ethics is an inherent and inseparable part of clinical medicine [ 1 ], as the physician has an ethical obligation (i) to benefit the patient, (ii) to avoid or minimize harm, and (iii) to respect the values and preferences of the patient. Are physicians equipped to fulfill this ethical obligation, and can their ethical skills be improved? A goal-oriented educational program [ 2 ] (Table 1) has been shown to improve learner awareness, attitudes, knowledge, moral reasoning, and confidence [ 3 , 4 ].

Goals of ethics education (Table 1)

• To appreciate the ethical dimensions of patient care
• To understand the ethical principles of the medical profession
• To have competence in core ethical behavioral skills
• To know the commonly encountered ethical issues in general and in one's specialty
• To have competence in analyzing and resolving ethical problems
• To appreciate cultural diversity and its impact on ethics

Ethics, Morality, and Professional Standards

Ethics is a broad term that covers the study of the nature of morals and the specific moral choices to be made. Normative ethics attempts to answer the question, "Which general moral norms for the guidance and evaluation of conduct should we accept, and why?" [ 5 ]. Some moral norms for right conduct are common to humankind as they transcend cultures, regions, religions, and other group identities and constitute common morality (e.g., not to kill, or harm, or cause suffering to others, not to steal, not to punish the innocent, to be truthful, to obey the law, to nurture the young and dependent, to help the suffering, and to rescue those in danger). Particular morality refers to norms that bind groups because of their culture, religion, or profession, and includes responsibilities, ideals, professional standards, and so on. A pertinent example of particular morality is the physician's "accepted role" to provide competent and trustworthy service to patients. To reduce the vagueness of "accepted role," physician organizations (local, state, and national) have codified their standards. However, it should be understood that complying with these standards may not always fulfill the moral norms, as the codes have "often appeared to protect the profession's interests more than to offer a broad and impartial moral viewpoint or to address issues of importance to patients and society" [ 6 ].

Bioethics and Clinical (Medical) Ethics

A number of deplorable abuses of human subjects in research, medical interventions without informed consent, experimentation in concentration camps in World War II, along with salutary advances in medicine and medical technology and societal changes, led to the rapid evolution of bioethics from one concerned about professional conduct and codes to its present status with an extensive scope that includes research ethics, public health ethics, organizational ethics, and clinical ethics.

Hereafter, the abbreviated term, ethics, will be used as I discuss the principles of clinical ethics and their application to clinical practice.

The Fundamental Principles of Ethics

Beneficence, nonmaleficence, autonomy, and justice constitute the 4 principles of ethics. The first 2 can be traced back to the time of Hippocrates ("to help and do no harm"), while the latter 2 evolved later. Thus, Percival's book on ethics in the early 1800s stressed the importance of keeping the patient's best interest as a goal, while autonomy and justice were not discussed. With the passage of time, however, both autonomy and justice gained acceptance as important principles of ethics. In modern times, Beauchamp and Childress' book Principles of Biomedical Ethics is a classic for its exposition of these 4 principles [ 5 ] and their application, while also discussing alternative approaches.

Beneficence

The principle of beneficence is the obligation of the physician to act for the benefit of the patient, and it supports a number of moral rules: to protect and defend the rights of others, prevent harm, remove conditions that will cause harm, help persons with disabilities, and rescue persons in danger. It is worth emphasizing that, in distinction to nonmaleficence, the language here is one of positive requirements. The principle calls not just for avoiding harm, but also for benefiting patients and promoting their welfare. While physicians' beneficence conforms to moral rules and is altruistic, in many instances it can also be considered a payback for the debt to society for education (often subsidized by governments), ranks and privileges, and to the patients themselves (learning and research).

Nonmaleficence

Nonmaleficence is the obligation of a physician not to harm the patient. This simply stated principle supports several moral rules: do not kill, do not cause pain or suffering, do not incapacitate, do not cause offense, and do not deprive others of the goods of life. The practical application of nonmaleficence is for the physician to weigh the benefits against the burdens of all interventions and treatments, to eschew those that are inappropriately burdensome, and to choose the best course of action for the patient. This is particularly important and pertinent in difficult end-of-life care decisions on withholding and withdrawing life-sustaining treatment and medically administered nutrition and hydration, and in pain and other symptom control. A physician's obligation and intention to relieve the suffering (e.g., refractory pain or dyspnea) of a patient by the use of appropriate drugs, including opioids, override the foreseen but unintended harmful effects or outcome (doctrine of double effect) [ 7 , 8 ].

Autonomy

The philosophical underpinning for autonomy, as interpreted by philosophers Immanuel Kant (1724–1804) and John Stuart Mill (1806–1873), and accepted as an ethical principle, is that all persons have intrinsic and unconditional worth, and therefore, should have the power to make rational decisions and moral choices, and each should be allowed to exercise his or her capacity for self-determination [ 9 ]. This ethical principle was affirmed in a court decision by Justice Cardozo in 1914 with the epigrammatic dictum, "Every human being of adult years and sound mind has a right to determine what shall be done with his own body" [ 10 ].

Autonomy, as is true for all 4 principles, needs to be weighed against competing moral principles, and in some instances may be overridden; an obvious example would be if the autonomous action of a patient causes harm to another person(s). The principle of autonomy does not extend to persons who lack the capacity (competence) to act autonomously; examples include infants and children and incompetence due to developmental, mental or physical disorder. Health-care institutions and state governments in the US have policies and procedures to assess incompetence. However, a rigid distinction between incapacity to make health-care decisions (assessed by health professionals) and incompetence (determined by court of law) is not of practical use, as a clinician's determination of a patient's lack of decision-making capacity based on physical or mental disorder has the same practical consequences as a legal determination of incompetence [ 11 ].

Detractors of the principle of autonomy question the focus on the individual and propose a broader concept of relational autonomy (shaped by social relationships and complex determinants such as gender, ethnicity, and culture) [ 12 ]. Even in an advanced western country such as the United States, the culture being inhomogeneous, some minority populations hold views different from those of the majority white population on the need for full disclosure and on decisions about life support (preferring a family-centered approach) [ 13 ].

Resistance to the principle of patient autonomy and its derivatives (informed consent, truth-telling) in non-western cultures is not unexpected. In countries with ancient civilizations and deep-rooted beliefs and traditions, the practice of paternalism ( this term will be used in this article, as it is well-entrenched in the ethics literature, although parentalism is the proper term ) by physicians emanates mostly from beneficence. However, culture (a composite of the customary beliefs, social forms, and material traits of a racial, religious, or social group) is not static and autonomous, and it changes with other trends over passing years. It is presumptuous to assume that the patterns and roles in physician-patient relationships that have been in place for half a century and more still hold true. Therefore, a critical examination of paternalistic medical practice is needed for reasons that include technological and economic progress, improved educational and socioeconomic status of the populace, globalization, and societal movement towards emphasis on the patient as an individual rather than as a member of a group. This needed examination can be accomplished by research that includes well-structured surveys on demographics and on patient preferences regarding informed consent, truth-telling, and role in decision-making.

Respecting the principle of autonomy obliges the physician to disclose medical information and treatment options that are necessary for the patient to exercise self-determination and supports informed consent, truth-telling, and confidentiality.

Informed Consent

The requirements of an informed consent for a medical or surgical procedure, or for research, are that the patient or subject (i) must be competent to understand and decide, (ii) receives a full disclosure, (iii) comprehends the disclosure, (iv) acts voluntarily, and (v) consents to the proposed action.

The universal applicability of these requirements, rooted and developed in western culture, has met with some resistance and a suggestion to craft a set of requirements that accommodate the cultural mores of other countries [ 14 ]. In response, and in vigorous defense of the 5 requirements of informed consent, Angell wrote, "There must be a core of human rights that we would wish to see honored universally, despite variations in their superficial aspects … The forces of local custom or local law cannot justify abuses of certain fundamental rights, and the right of self-determination, on which the doctrine of informed consent is based, is one of them" [ 15 ].

As competence is the first of the requirements for informed consent, one should know how to detect incompetence. Standards (used singly or in combination) that are generally accepted for determining incompetence are based on the patient's inability to state a preference or choice, inability to understand one's situation and its consequences, and inability to reason through a consequential life decision [ 16 ].

In a previously autonomous but presently incompetent patient, his/her previously expressed preferences (i.e., prior autonomous judgments) are to be respected [ 17 ]. Incompetent (non-autonomous) patients and previously competent (autonomous) but presently incompetent patients would need a surrogate decision-maker. For a non-autonomous patient, the surrogate can use either a substituted judgment standard (i.e., what the patient would wish in this circumstance, not what the surrogate would wish) or a best interests standard (i.e., what would bring the highest net benefit to the patient by weighing risks and benefits). Snyder and Sulmasy [ 18 ], in their thoughtful article, provide a practical and useful option for when the surrogate is uncertain of the patient's preference(s), or when the patient's preferences have not kept abreast of scientific advances: they suggest that the surrogate base the decision on "substituted interests," that is, the patient's authentic values and interests.
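
Where consent documentation is managed electronically, the requirements above can be modeled as a simple structured checklist. The following is a minimal illustrative sketch in Python; all class, field, and function names are hypothetical and are not drawn from the cited literature, and such a record supplements, but never replaces, the clinical assessment itself.

```python
from dataclasses import dataclass
from enum import Enum

class SurrogateStandard(Enum):
    """Decision standards for non-autonomous patients, per the text above."""
    SUBSTITUTED_JUDGMENT = "what the patient would wish in this circumstance"
    BEST_INTERESTS = "highest net benefit to the patient"
    SUBSTITUTED_INTERESTS = "patient's authentic values and interests"

@dataclass
class ConsentChecklist:
    """The 5 requirements of informed consent (hypothetical field names)."""
    competent_to_understand_and_decide: bool = False
    full_disclosure_received: bool = False
    disclosure_comprehended: bool = False
    acts_voluntarily: bool = False
    consents_to_proposed_action: bool = False

    def unmet(self) -> list:
        """Return the requirements not yet documented as satisfied."""
        return [name for name, met in vars(self).items() if not met]

    def is_informed_consent(self) -> bool:
        """Consent counts as informed only when all 5 requirements hold."""
        return not self.unmet()

# Competence and disclosure documented; comprehension, voluntariness,
# and the consent itself are still outstanding.
record = ConsentChecklist(competent_to_understand_and_decide=True,
                          full_disclosure_received=True)
print(record.is_informed_consent())  # False
print(record.unmet())                # the three outstanding requirements
```

If the competence requirement fails, the record points to the surrogate pathway, where one of the standards enumerated in SurrogateStandard would govern the decision.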

Truth-Telling

Truth-telling is a vital component of the physician-patient relationship; without it, the physician loses the trust of the patient. An autonomous patient has not only the right to disclosure of his/her diagnosis and prognosis, but also the option to forgo this disclosure. The physician, however, must know which of these 2 options the patient prefers.

In the United States, full disclosure to the patient, however grave the disease, is now the norm, but this was not so in the past. Significant resistance to full disclosure was highly prevalent in the US, but a marked shift has occurred in physicians' attitudes. In 1961, 88% of physicians surveyed indicated their preference to avoid disclosing a diagnosis [ 19 ]; in 1979, however, 98% of surveyed physicians favored disclosure [ 20 ]. This marked shift is attributable to many factors that include, with no order of importance implied, educational and socioeconomic progress, increased accountability to society, and awareness of previous clinical and research transgressions by the profession.

Importantly, surveys in the US show that patients with cancer and other diseases wish to be fully informed of their diagnoses and prognoses. Providing full information, with tact and sensitivity, to patients who want to know should be the standard. The sad consequences of not telling the truth about a cancer include depriving the patient of an opportunity to complete important life-tasks: giving advice to, and taking leave of, loved ones; putting financial affairs in order, including division of assets; reconciling with estranged family members and friends; and attaining spiritual order through reflection, prayer, rituals, and religious sacraments [ 21 , 22 ].

In contrast to the US, full disclosure to the patient is highly variable in other countries [ 23 ]. A continuing pattern in non-western societies is for the physician to disclose the information to the family and not to the patient. The likely reasons for physicians' resistance to conveying bad news are concern that it may cause anxiety and loss of hope, some uncertainty about the outcome, or the belief that the patient would not be able to understand the information or may not want to know. However, this does not have to be a binary choice, as a careful understanding of the principle of autonomy reveals that autonomous choice is a right of the patient, and the patient, in exercising this right, may authorize a family member or members to make decisions for him/her.

Confidentiality

Physicians are obligated not to disclose confidential information given by a patient to another party without the patient's authorization. An obvious exception (with implied patient authorization) is the necessary sharing of medical information for the care of the patient, from the primary physician to consultants and other health-care teams. In present-day hospitals, with multiple points of testing, many consultants, and the use of electronic medical records, there has been an erosion of confidentiality. However, individual physicians must exercise discipline in not discussing patient specifics with their family members, in social gatherings [ 24 ], or on social media. There are some noteworthy exceptions to patient confidentiality. These include, among others, legally required reporting of gunshot wounds and sexually transmitted diseases, and exceptional situations that may cause major harm to another (e.g., epidemics of infectious diseases, partner notification in HIV disease, relative notification of certain genetic risks, etc.).

Justice

Justice is generally interpreted as fair, equitable, and appropriate treatment of persons. Of the several categories of justice, the one most pertinent to clinical ethics is distributive justice . Distributive justice refers to the fair, equitable, and appropriate distribution of health-care resources, determined by justified norms that structure the terms of social cooperation [ 25 ]. How can this be accomplished? There are several valid principles of distributive justice: to each person (i) an equal share, (ii) according to need, (iii) according to effort, (iv) according to contribution, (v) according to merit, and (vi) according to free-market exchanges. These principles are not mutually exclusive and are often combined in application. It is easy to see the difficulty in choosing, balancing, and refining these principles to form a coherent and workable solution for distributing medical resources.

Although this weighty health-care policy discussion exceeds the scope of this review, a few examples of issues of distributive justice encountered in hospital and office practice should be mentioned. These include the allotment of scarce resources (equipment, tests, medications, organ transplants), care of uninsured patients, and allotment of time for outpatient visits (equal time for every patient? based on need or complexity? based on social and/or economic status?). Difficult as it may be, and despite the many constraining forces, physicians must accept the requirement of fairness contained in this principle [ 26 ]. Fairness to the patient assumes a role of primary importance when there are conflicts of interests. A flagrant example of violation of this principle would be when a particular treatment option is chosen over others, or an expensive drug is chosen over an equally effective but less expensive one, because it benefits the physician financially or otherwise.

Conflicts between Principles

Each one of the 4 principles of ethics is to be taken as a prima facie obligation that must be fulfilled, unless it conflicts, in a specific instance, with another principle. When faced with such a conflict, the physician has to determine the actual obligation to the patient by examining the respective weights of the competing prima facie obligations based on both content and context. Consider an example of a conflict that has an easy resolution: a patient in shock is treated with urgent fluid resuscitation, and the placement of an indwelling intravenous catheter causes pain and swelling. Here the principle of beneficence overrides that of nonmaleficence. Many of the conflicts that physicians face, however, are much more complex and difficult. Consider a competent patient's refusal of a potentially life-saving intervention (e.g., instituting mechanical ventilation) or request for a potentially life-ending action (e.g., withdrawing mechanical ventilation). Nowhere in the arena of ethical decision-making is conflict as pronounced as when the principles of beneficence and autonomy collide.

Beneficence has enjoyed a historical role in the traditional practice of medicine. However, giving it primacy over patient autonomy is paternalism, which makes the physician-patient relationship analogous to that of a parent to a child. A father/mother may refuse a child's wishes and may influence a child in a variety of ways (nondisclosure, manipulation, deception, coercion, etc.), consistent with his/her thinking of what is best for the child. Paternalism can be further divided into soft and hard.

In soft paternalism, the physician acts on grounds of beneficence (and, at times, nonmaleficence) when the patient is nonautonomous or substantially nonautonomous (e.g., cognitive dysfunction due to severe illness, depression, or drug addiction) [ 27 ]. Soft paternalism is complicated by the difficulty of determining whether the patient was nonautonomous at the time of decision-making, but it is ethically defensible as long as the action is in concordance with what the physician believes to be the patient's values. Hard paternalism is action by a physician, intended to benefit a patient, but contrary to the voluntary decision of an autonomous patient who is fully informed and competent; it is ethically indefensible.

At the other end of the scale from hard paternalism is consumerism, a rare and extreme form of patient autonomy, which holds that the physician's role is limited to providing all the medical information and the available choices for interventions and treatments, while the fully informed patient selects from the available choices. In this model, the physician's role is constrained and does not permit the full use of his/her knowledge and skills to benefit the patient; it is tantamount to a form of patient abandonment and is therefore ethically indefensible.

Faced with the contrasting paradigms of beneficence and respect for autonomy and the need to reconcile these to find a common ground, Pellegrino and Thomasma [ 28 ] argue that beneficence can be inclusive of patient autonomy as “the best interests of the patients are intimately linked with their preferences” from which “are derived our primary duties to them.”

One of the basic and not infrequent reasons for disagreement between physician and patient on treatment issues is their divergent views on goals of treatment. As goals change in the course of disease (e.g., a chronic neurologic condition worsens to the point of needing ventilator support, or a cancer that has become refractory to treatment), it is imperative that the physician communicates with the patient in clear and straightforward language, without the use of medical jargon, and with the aim of defining the goal(s) of treatment under the changed circumstance. In doing so, the physician should be cognizant of patient factors that compromise decisional capacity, such as anxiety, fear, pain, lack of trust, and different beliefs and values that impair effective communication [ 29 ].

The foregoing theoretical discussion on principles of ethics has practical application in clinical practice in all settings. In their resource book for clinicians, Jonsen et al. [ 30 ] have elucidated a logical and well-accepted model (Table 2), along the lines of the systematic format that practicing physicians have been taught and have practiced for a long time (Chief Complaint, History of Present Illness, Past History, pertinent Family and Social History, Review of Systems, Physical Examination, and Laboratory and Imaging studies). This practical approach to problem-solving in ethics involves:

  • Clinical assessment (identifying medical problems, treatment options, goals of care)
  • Patient (finding and clarifying patient preferences on treatment options and goals of care)
  • Quality of life (QOL) (effects of medical problems, interventions and treatments on patient's QOL with awareness of individual biases on what constitutes an acceptable QOL)
  • Context (many factors that include family, cultural, spiritual, religious, economic and legal).

Application of principles of ethics in patient care (Table 2)

Clinical assessment (beneficence, nonmaleficence)
• Nature of illness (acute, chronic, reversible, terminal)? Goals of treatment?
• Treatment options and probability of success for each option?
• Adverse effects of treatment, and does benefit outweigh harm?
• Effects of no medical/surgical treatment?
• If treated, plans for limiting treatment? Stopping treatment?

Patient preferences (respect for autonomy)
• Information given to patient on benefits and risks of treatment? Patient understood the information and gave consent?
• Patient mentally competent? If competent, what are his/her preferences?
• If patient mentally incompetent, are patient's prior preferences known? If preferences unknown, who is the appropriate surrogate?

Quality of life (beneficence, nonmaleficence, respect for autonomy)
• Expected QOL with and without treatment?
• Deficits (physical, mental, social) the patient may have after treatment?
• Judging QOL of a patient who cannot express himself/herself? Who is the judge?
• Recognition of possible physician bias in judging QOL?
• Rationale to forgo life-sustaining treatment(s)?

Context (distributive justice)
• Conflicts of interests: does the physician benefit financially or professionally by ordering tests, prescribing medications, seeking consultations?
• Research or educational considerations that affect clinical decisions, physician orders?
• Conflicts of interests based on religious beliefs? Legal issues?
• Conflicts of interests between organizations (clinics, hospitals), 3rd-party payers?
• Public health and safety issues?
• Problems in allocation of scarce resources?

Using this model, the physician can identify the principles that are in conflict, ascertain by weighing and balancing what should prevail, and when in doubt, turn to ethics literature and expert opinion.
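
For clinicians or researchers who keep structured case notes, this four-topic model can likewise be captured as a small worksheet that flags undocumented topics before the weighing step. The sketch below is illustrative only, with hypothetical names throughout; as the text emphasizes, the balancing of principles itself remains a matter of clinical and ethical judgment, not computation.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FourTopicsWorksheet:
    """The four topics of Table 2; field and method names are hypothetical."""
    clinical_assessment: List[str] = field(default_factory=list)  # beneficence, nonmaleficence
    patient_preferences: List[str] = field(default_factory=list)  # respect for autonomy
    quality_of_life: List[str] = field(default_factory=list)      # all three principles
    context: List[str] = field(default_factory=list)              # distributive justice, law, family

    def unexplored(self) -> List[str]:
        """Topics with no documented findings, to revisit before deciding."""
        return [topic for topic, notes in vars(self).items() if not notes]

# A partially worked-up case: two topics still need documentation
# before conflicting principles can be weighed against each other.
case = FourTopicsWorksheet(
    clinical_assessment=["bacterial meningitis; fatal if untreated; antibiotics curative"],
    patient_preferences=["refuses treatment, gives no reason; capacity uncertain"],
)
print(case.unexplored())  # ['quality_of_life', 'context']
```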

Illustrative Cases

There is a wide gamut of clinical patient encounters with ethical issues, and some, especially those involving end-of-life care decisions, are complex. A few cases (Case 1 is modified from the resource book [ 30 ]) are presented below, as they highlight the importance of understanding and weighing the ethical principles involved to arrive at an ethically right solution. Case 6 was added during the revision phase of this article, as it coincided with the outbreak of Coronavirus Disease 2019 (COVID-19), which became a pandemic, rendering a discussion of its ethical challenges necessary and important.

Case 1

A 20-year-old college student living in the college hostel is brought by a friend to the Emergency Department (ED) because of unrelenting headache and fever. He appeared drowsy but was responsive, and had fever (40°C) and neck rigidity on examination. Lumbar puncture was done, and the spinal fluid appeared cloudy and showed increased white cells; Gram stain showed Gram-positive diplococci. Based on the diagnosis of bacterial meningitis, appropriate antibiotics were begun, and hospitalization was instituted. Although initial consent for diagnosis was implicit, and consent for lumbar puncture was explicit, at this point the patient refuses treatment without giving any reason and insists on returning to his hostel. Even after explanation by the physician as to the seriousness of his diagnosis and the absolute need for prompt treatment (i.e., danger to life without treatment), the patient is adamant in his refusal.

Comment. Because of this refusal, the medical indications and patient preferences (see Table 2) are at odds. Is it ethically right to treat against his will a patient who is making a choice that has dire consequences (disability, death), who gives no reason for this decision, and in whom a clear determination of mental incapacity cannot be made (although altered mental status may be presumed)? Here the principle of beneficence and the principle of autonomy are in conflict. The weighing of factors, namely (1) the patient may not be making a reasoned decision in his best interest because of temporary mental incapacity, and (2) the severity of the life-threatening illness and the urgency of treatment to save his life, supports the decision in favor of beneficence (i.e., to treat).

Case 2

A 56-year-old male lawyer, a current cigarette smoker with a pack-a-day habit for more than 30 years, is found to have a solitary right upper lobe pulmonary mass, 5 cm in size, on a chest radiograph done as part of an insurance application. The mass has no calcification, and there are no other pulmonary abnormalities. He has no symptoms, and his examination is normal. A tuberculosis skin test is negative, and he has no history of travel to an area where fungal infection is endemic. As lung cancer is the most probable and significant diagnosis to consider, and early surgical resection provides the best prospects for cure, the physician, in consultation with the thoracic surgeon, recommends bronchoscopic biopsy and subsequent resection. The patient understands the treatment plan and the significance of not delaying the treatment. However, he refuses, stating that he does not think he has cancer and that he is fearful the surgery would kill him. Even after further explanations of the low mortality of surgery and the importance of removing the mass before it spreads, he continues to refuse treatment.

Comment. Even though the physician's prescribed treatment (removal of a mass that is probably cancer) affords the best chance of cure, and delaying its removal increases the chance of metastasis and progression to an incurable stage, the choice of this well-informed and mentally competent patient should be respected. Here, autonomy prevails over beneficence. The physician, however, may not abandon the patient and is obligated to offer continued outpatient visits, examinations, and periodic tests, along with advice against making decisions based on fear and encouragement to seek a second opinion.

Case 3. A 71-year-old man with very severe chronic obstructive pulmonary disease (COPD) is admitted to the intensive care unit (ICU) with pneumonia, sepsis, and respiratory failure. He is intubated and mechanically ventilated. For the past 2 years, he has been on continuous oxygen treatment and is short of breath on minimal exertion. In the past year, he had 2 admissions to the ICU; on both occasions he required intubation and mechanical ventilation. Presently, even with multiple antibiotics, intravenous fluid hydration, and vasopressors, his systolic blood pressure remains below 60 mm Hg; with high-flow oxygen supplementation, his oxygen saturation stays below 80%, and his arterial blood pH is 7.0. His liver enzymes are elevated. He is anuric, and over the next 8 h his creatinine has risen to 5 mg/dL and continues to rise. He has drifted into a comatose state. The intensivist suggests discontinuing vasopressors and mechanical ventilation, as their continued use is futile. The patient has no advance care directive or designated health-care proxy.

Comment. The term "futility" is open to different definitions [31] and is often controversial; some experts therefore suggest the alternative term "clinically non-beneficial interventions" [32]. In this case, however, the term futility is appropriate: there is evidence of physiological futility (multisystem organ failure in the setting of preexisting end-stage COPD, which medical interventions will not reverse). It is appropriate, then, to discuss the patient's condition with his family with the goal of discontinuing life-sustaining interventions. These discussions should be conducted with sensitivity, compassion, and empathy. Palliative care should be provided to alleviate his symptoms and to support the family until his death and beyond, in their bereavement.

Case 4. A 67-year-old widow, an immigrant from southern India, is living with her son and his family in Wisconsin, USA. She has experienced nausea, lack of appetite, and weight loss for a few months. During the past week, she has also had dark yellow urine and yellow coloration of her skin. She has a basic knowledge of English. She was brought to a multi-specialty teaching hospital by her son, who informed the doctor that his mother has "jaundice" and instructed that, if any serious life-threatening disease were found, she not be informed. He asked that all information come to him and that, if there is any cancer, it not be treated, since she is older and frail. Investigations in the hospital reveal that she has pancreatic cancer and that chemotherapy, while not likely to cure, would prolong her life.

Comment. In some ancient cultures, authority is given to members of the family (especially senior men) to make decisions involving other members' marriages, jobs, and health care. The woman in this case is a dependent of her son, and given this cultural perspective, the son can rightfully claim the authority to make health-care decisions for her. The physician is thus faced with multiple tasks that may not be consonant: to respect cultural values [33], to learn the patient's preferences directly, to comply with the American norm of full disclosure to the patient, and to refuse the son's demands.

The principle of autonomy gives the patient the option to delegate decision-making authority to another person. Therefore, the appropriate course is to take the tactful approach of directly informing the patient (with a translator if needed) that the diagnosed disease will require decisions about appropriate treatment. The physician should ascertain whether she would prefer to make these decisions herself or would prefer that all information be given to her son and all decisions be made by him.

Case 5. A 45-year-old woman had a laparotomy and cholecystectomy for abdominal pain and multiple gallstones. Three weeks after discharge from the hospital, she returned with fever, abdominal pain, and tenderness. She was given antibiotics, and as her fever continued, laparotomy and exploration were undertaken; a sponge left behind during the recent cholecystectomy was found. It was removed, the area was cleansed, and the incision was closed. Antibiotics were continued, and she recovered without further incident and was discharged. Should the surgeon inform the patient of his error?

Comment. Truth-telling, a part of patient autonomy, is very much applicable in this situation, and disclosure to the patient is required [34, 35, 36]. The mistake caused the patient harm (morbidity, readmission, a second surgery, and monetary loss). Although the end result remedied the harm, the surgeon is obligated to inform the patient of the error and its consequences and to offer an apology. Such errors are always reported to the operating room committees and surgical quality improvement committees of US hospitals. Hospital-based risk-reduction mechanisms (e.g., a risk management department), present in most US hospitals, would investigate the incident and make specific recommendations to mitigate such errors and eliminate them in the future. Many institutions make financial settlements to obviate liability litigation (fees and hospital charges waived, and/or monetary compensation paid to the patient). Elsewhere, if such mechanisms do not exist, the error should be reported to the hospital; acknowledgment from the hospital, apologies from the institution, and compensation for the patient are called for. Whether in the US or elsewhere, a malpractice suit is very possible in this situation, but a climate of honesty substantially reduces the threat of legal claims, as most patients trust their physicians and are not vindictive.

Case 6. The following scenario takes place at a city hospital during the peak of the COVID-19 pandemic. A 74-year-old woman residing in an assisted living facility is brought to the ED with shortness of breath and malaise. Over the past 4 days she has experienced dry cough, lack of appetite, and tiredness; 2 days earlier, she stopped eating and developed a low-grade fever. A test for COVID-19 undertaken by the assisted living facility was returned positive on the morning of the ED visit.

She is a retired nurse and a widow; both of her grown children live out of state. She has had hypertension for many years, controlled with daily medications. Following 2 strokes, she was moved to an assisted living facility 3 years ago. She recovered most of her function after the strokes and required help only with bathing and dressing. She is able to answer questions appropriately but haltingly because of respiratory distress. She has tachypnea (34/min), tachycardia (120/min), a temperature of 101°F, BP of 100/60, and 90% O2 saturation (on supplemental O2 at 4 L/min). She has a dry mouth and tongue and rhonchi on lung auscultation. Her respiratory rate is increasing on observation, and she is visibly tiring.

Another patient is now brought in by ambulance: a 22-year-old man who lives in an apartment and has had "flu" symptoms for a week. Because of the pandemic, he was observing the recommended self-distancing and had no known exposure to coronavirus. He used saline gargles, acetaminophen, and cough syrup to alleviate his sore throat, cough, and fever. In the past 2 days, his symptoms worsened, and he drove himself to a testing station to be tested for COVID-19; he was told he would be notified of the results. He returned to his apartment, and after a sleepless night of fever, sweats, and persistent cough, he woke up feeling drained of all strength. The test result confirmed COVID-19, and he then called for an ambulance.

He was previously healthy; he is a non-smoker and rarely uses alcohol. A second-year medical student, he is single, and his parents and sibling live hundreds of miles away.

On examination, he has marked tachypnea (>40/min), shallow breathing, a heart rate of 128/min, a temperature of 103°F, and O2 saturation of 88% on pulse oximetry. He appears drowsy and is slow to respond to questions. He is propped up to a sitting position, as it is uncomfortable for him to lie supine. The accessory muscles of the neck and the intercostals contract with each breath, and on auscultation he has basilar crackles and scattered rhonchi. His O2 saturation drops to 85%, and he remains in respiratory distress despite nebulized bronchodilator treatment.

Both patients are in respiratory failure, clinically and as confirmed by arterial blood gases, and both are in urgent need of intubation and mechanical ventilation. However, only one ventilator is available; who gets it?

Comment. The decision to allocate a scarce and potentially life-saving piece of equipment (a ventilator) is very difficult, as it directly raises the question "Who shall live when not everyone can live?" [5]. This decision cannot be emotion-driven or arbitrary, nor should it be based on a person's wealth or social standing. Priorities need to be established ethically and applied consistently within the same institution, and ideally throughout the state and the country. The general social norms of treating all equally or treating on a first-come, first-served basis are not the appropriate choices here. There is consensus among clinical ethics scholars that, in this situation, maximizing benefits is the dominant value in making a decision [37]. Maximizing benefits can be viewed in 2 different ways: in lives saved or in life-years saved; they differ in that the first is non-utilitarian while the second is utilitarian. A subordinate consideration is giving priority to patients who have a better chance of survival and a reasonable life expectancy. The other 2 considerations are promoting and rewarding instrumental value (benefit to others) and the acuity of illness. Health-care workers (physicians, nurses, therapists, etc.) and research participants have instrumental value because their work benefits others; among them, those actively contributing are of more value than those who have already made their contributions. Prioritizing the sickest and the youngest is also a recognized value when doing so is aligned with the dominant value of maximizing benefits. In the context of the COVID-19 pandemic, Emanuel et al. [37] weighed and analyzed these values and offered some recommendations. Some ethics scholars opine that in times of a pandemic, the burden of deciding who gets a ventilator and who does not (often a life-or-death choice) should not fall on the front-line physicians, as it may exact a severe, life-long emotional toll on them [35, 36]; the toll can be severe for nurses and other front-line health-care providers as well. As a safeguard, they propose that the decision rest with a select committee that excludes the doctors, nurses, and others caring for the patient(s) under consideration [38].

Both patients described in the case summaries have comparable acuity of illness, and both need mechanical ventilator support. With respect to the dominant value of maximizing benefits, however, the two patients differ: in terms of life-years saved, the second patient (the 22-year-old man) is ahead, as his life expectancy is longer. He is also more likely than the older woman to survive mechanical ventilation, the infection, and possible complications. Another factor in his favor is his potential instrumental value (benefit to others) as a future physician.
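To make the weighing described above concrete, here is a minimal, purely illustrative Python sketch of how a priority score combining these values might be computed. It is not a real triage protocol: the survival probabilities, life expectancies, and the modest multiplier for instrumental value are all invented assumptions, and actual allocation frameworks are set by institutional and governmental committees rather than reduced to a formula.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    label: str
    survival_probability: float   # assumed chance of surviving with ventilation (0 to 1)
    life_expectancy_years: float  # assumed remaining life-years if the patient survives
    instrumental_value: bool      # e.g., health-care worker or future physician

def triage_score(p: Patient) -> float:
    """Expected life-years saved, with a small bonus for instrumental value."""
    # Dominant value: maximize benefits, expressed here as expected life-years saved.
    score = p.survival_probability * p.life_expectancy_years
    # Subordinate consideration: instrumental value acts as a modest tie-breaker,
    # not a trump card.
    if p.instrumental_value:
        score *= 1.1
    return score

patients = [
    Patient("74-year-old woman", 0.30, 8.0, False),
    Patient("22-year-old man", 0.70, 55.0, True),
]

# Allocate the single ventilator to the highest-scoring patient.
recipient = max(patients, key=triage_score)
print(f"Ventilator allocated to: {recipient.label}")
```

Even in this toy form, the sketch shows where the ethical judgment actually lives: in who chooses the inputs and the weights, which is precisely why some scholars argue the decision should rest with a committee rather than with the treating clinicians.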

Unlike the other illustrative cases, this two-patient scenario does not lend itself to a peaceful and fully satisfactory resolution. The fairness of allocating a scarce and potentially life-saving resource on the basis of maximizing benefits and of preference for instrumental value (benefit to others) is open to question. The American College of Physicians has stated that allocation decisions during resource scarcity should be made "based on patient need, prognosis (determined by objective scientific measure and informed clinical judgment) and effectiveness (i.e., likelihood that the therapy will help the patient to recover), … to maximize the number of patients who will recover" [39].

This review has covered the basics of ethics, founded on morality and ethical principles, with illustrative examples. In the following segment, professionalism is defined, its alignment with ethics depicted, and the virtues desired of a physician (an inclusive term for a medical doctor, regardless of type of practice) elucidated. The review concludes with my vision of an integrated model for patient care.

The core of professionalism is a therapeutic relationship built on the competent and compassionate care of a physician that meets the expectations of, and benefits, the patient. In this relationship, which is rooted in the ethical principles of beneficence and nonmaleficence, the physician fulfills the elements shown in Table 3. Professionalism "demands placing the interest of patients above those of the physician, setting and maintaining standards of competence and integrity, and providing expert advice to society on matters of health" [26, 40].

Table 3. Physician's obligations

• Cure of disease when possible
• Maintenance or improvement of functional status and quality of life (relief of symptoms and suffering)
• Promotion of health and prevention of disease
• Prevention of untimely death
• Education and counseling of patients (condition and prognosis)
• Avoidance of harm to the patient in the course of care
• Providing relief and support near time of death (end-of-life care)

Drawing on several decades of experience in teaching and mentoring, I envisage physicians with qualities of both "heart" and "head." Ethical and humanistic values shape the former, while knowledge (e.g., from study, research, and practice) and technical skills (e.g., medical and surgical procedures) form the latter. Figure 1 is a representation of this model. Morality, which forms the base of the model, and the ethical principles that rest on it were explained earlier. Virtues are linked, some more tightly than others, to the principles of ethics. Compassion, a prelude to caring, presupposes sympathy and is expressed in beneficence. Discernment is especially valuable in decision-making when principles of ethics collide. Trustworthiness leads to trust and is a needed virtue when patients, at their most vulnerable, place themselves in the hands of physicians. Integrity involves the coherent integration of emotions, knowledge, and aspirations while maintaining moral values. Physicians need both professional and personal integrity, as the former may not cover all scenarios (e.g., prescribing ineffective drugs, or expensive drugs when effective inexpensive ones are available; performing invasive treatments or experimental research without fully informed consent; any situation where personal monetary gain is placed above the patient's welfare). Conscientiousness is required to determine what is right by critical reflection on good versus bad, better versus good, logical versus emotional, and right versus wrong.

[Figure 1. Integrated model of patient care.]

In my conceptualized model of patient care (Fig. 1), medical knowledge, the skills to apply that knowledge, technical skills, practice-based learning, and communication skills are partnered with ethical principles and professional virtues. The virtues of compassion, discernment, trustworthiness, integrity, and conscientiousness are the necessary building blocks of the virtue of caring, the defining virtue of all health-care professions. In every interaction with a patient, besides the physician's technical expertise, the human element of caring (one human to another) is needed. Depending on the situation, caring can be expressed verbally or non-verbally: in the manner of communication, with physician and patient closely seated and with unhurried, softly spoken words; in a gentle touch, especially when conveying "bad news"; in a firmer touch or grip to convey reassurance to a patient facing a difficult treatment choice; or in holding the hand of a patient dying alone. Thus, "caring" sits at the center of the depicted integrated model, and as Peabody succinctly expressed it nearly a hundred years ago, "The secret of the care of the patient is caring for the patient" [41].

Conflict of Interest Statement

The author declares that he has no conflicts of interest.

The state of AI in 2023: Generative AI’s breakout year


The latest annual McKinsey Global Survey on the current state of AI confirms the explosive growth of generative AI (gen AI) tools. Less than a year after many of these tools debuted, one-third of our survey respondents say their organizations are using gen AI regularly in at least one business function. Amid recent advances, AI has risen from a topic relegated to tech employees to a focus of company leaders: nearly one-quarter of surveyed C-suite executives say they are personally using gen AI tools for work, and more than one-quarter of respondents from companies using AI say gen AI is already on their boards' agendas. What's more, 40 percent of respondents say their organizations will increase their investment in AI overall because of advances in gen AI. The findings show that these are still early days for managing gen AI-related risks, with less than half of respondents saying their organizations are mitigating even the risk they consider most relevant: inaccuracy.

The organizations that have already embedded AI capabilities have been the first to explore gen AI's potential, and those seeing the most value from more traditional AI capabilities (a group we call AI high performers) are already outpacing others in their adoption of gen AI tools. We define AI high performers as organizations that, according to respondents, attribute at least 20 percent of their EBIT to AI adoption.

The expected business disruption from gen AI is significant, and respondents predict meaningful changes to their workforces. They anticipate workforce cuts in certain areas and large reskilling efforts to address shifting talent needs. Yet while the use of gen AI might spur the adoption of other AI tools, we see few meaningful increases in organizations' adoption of these technologies. The percentage of organizations adopting any AI tools has held steady since 2022, and adoption remains concentrated within a small number of business functions.

Table of Contents

  • It’s early days still, but use of gen AI is already widespread
  • Leading companies are already ahead with gen AI
  • AI-related talent needs shift, and AI’s workforce effects are expected to be substantial
  • With all eyes on gen AI, AI adoption and impact remain steady

  • About the research

1. It's early days still, but use of gen AI is already widespread

The findings from the survey (which was in the field in mid-April 2023) show that, despite gen AI's nascent public availability, experimentation with the tools is already relatively common, and respondents expect the new capabilities to transform their industries. Gen AI has captured interest across the business population: individuals across regions, industries, and seniority levels are using gen AI for work and outside of work. Seventy-nine percent of all respondents say they've had at least some exposure to gen AI, either for work or outside of work, and 22 percent say they are regularly using it in their own work. While reported use is quite similar across seniority levels, it is highest among respondents working in the technology sector and those in North America.

Organizations, too, are now commonly using gen AI. One-third of all respondents say their organizations are already regularly using generative AI in at least one function, meaning that 60 percent of organizations with reported AI adoption are using gen AI. What's more, 40 percent of those reporting AI adoption at their organizations say their companies expect to invest more in AI overall thanks to generative AI, and 28 percent say generative AI use is already on their board's agenda. The most commonly reported business functions using these newer tools are the same as those in which AI use is most common overall: marketing and sales, product and service development, and service operations, such as customer care and back-office support. This suggests that organizations are pursuing these new tools where the most value is. In our previous research, these three areas, along with software engineering, showed the potential to deliver about 75 percent of the total annual value from generative AI use cases.

In these early days, expectations for gen AI's impact are high: three-quarters of all respondents expect gen AI to cause significant or disruptive change in the nature of their industry's competition in the next three years. Survey respondents working in the technology and financial-services industries are the most likely to expect disruptive change from gen AI. Our previous research shows that, while all industries are indeed likely to see some degree of disruption, the level of impact is likely to vary (see "The economic potential of generative AI: The next productivity frontier," McKinsey, June 14, 2023). Industries relying most heavily on knowledge work are likely to see more disruption, and potentially reap more value. While our estimates suggest that tech companies, unsurprisingly, are poised to see the highest impact from gen AI (adding value equivalent to as much as 9 percent of global industry revenue), knowledge-based industries such as banking (up to 5 percent), pharmaceuticals and medical products (also up to 5 percent), and education (up to 4 percent) could experience significant effects as well. By contrast, manufacturing-based industries, such as aerospace, automotive, and advanced electronics, could experience less disruptive effects. This stands in contrast to the impact of previous technology waves, which affected manufacturing the most, and is due to gen AI's strengths in language-based activities as opposed to those requiring physical labor.

Responses show many organizations not yet addressing potential risks from gen AI

According to the survey, few companies seem fully prepared for the widespread use of gen AI, or for the business risks these tools may bring. Just 21 percent of respondents reporting AI adoption say their organizations have established policies governing employees' use of gen AI technologies in their work. And when we asked specifically about the risks of adopting gen AI, few respondents say their companies are mitigating the most commonly cited risk: inaccuracy. Respondents cite inaccuracy more frequently than both cybersecurity and regulatory compliance, which were the most common risks from AI overall in previous surveys. Just 32 percent say they're mitigating inaccuracy, a smaller percentage than the 38 percent who say they mitigate cybersecurity risks. Interestingly, this figure is significantly lower than the percentage of respondents who reported mitigating AI-related cybersecurity risks last year (51 percent). Overall, much as we've seen in previous years, most respondents say their organizations are not addressing AI-related risks.

2. Leading companies are already ahead with gen AI

The survey results show that AI high performers—that is, organizations where respondents say at least 20 percent of EBIT in 2022 was attributable to AI use—are going all in on artificial intelligence, both with gen AI and more traditional AI capabilities. These organizations that achieve significant value from AI are already using gen AI in more business functions than other organizations do, especially in product and service development and risk and supply chain management. When looking at all AI capabilities—including more traditional machine learning capabilities, robotic process automation, and chatbots—AI high performers also are much more likely than others to use AI in product and service development, for uses such as product-development-cycle optimization, adding new features to existing products, and creating new AI-based products. These organizations also are using AI more often than other organizations in risk modeling and for uses within HR such as performance management and organization design and workforce deployment optimization.


Another difference from their peers: high performers’ gen AI efforts are less oriented toward cost reduction, which is a top priority at other organizations. Respondents from AI high performers are twice as likely as others to say their organizations’ top objective for gen AI is to create entirely new businesses or sources of revenue—and they’re most likely to cite the increase in the value of existing offerings through new AI-based features.

As we've seen in previous years, these high-performing organizations invest much more than others in AI: respondents from AI high performers are more than five times more likely than others to say they spend more than 20 percent of their digital budgets on AI. They also use AI capabilities more broadly throughout the organization. Respondents from high performers are much more likely than others to say that their organizations have adopted AI in four or more business functions and that they have embedded a higher number of AI capabilities. For example, respondents from high performers more often report embedding knowledge graphs in at least one product or business-function process, in addition to gen AI and related natural-language capabilities.

While AI high performers are not immune to the challenges of capturing value from AI, the results suggest that the difficulties they face reflect their relative AI maturity, while others struggle with the more foundational, strategic elements of AI adoption. Respondents at AI high performers most often point to models and tools, such as monitoring model performance in production and retraining models as needed over time, as their top challenge. By comparison, other respondents cite strategy issues, such as setting a clearly defined AI vision that is linked with business value or finding sufficient resources.

The findings offer further evidence that even high performers haven’t mastered best practices regarding AI adoption, such as machine-learning-operations (MLOps) approaches, though they are much more likely than others to do so. For example, just 35 percent of respondents at AI high performers report that where possible, their organizations assemble existing components, rather than reinvent them, but that’s a much larger share than the 19 percent of respondents from other organizations who report that practice.

Many specialized MLOps technologies and practices may be needed to adopt some of the more transformative use cases that gen AI applications can deliver, and to do so as safely as possible. Live-model operations is one such area: monitoring systems and setting up instant alerts to enable rapid issue resolution can keep gen AI systems in check. High performers stand out in this respect but have room to grow: one-quarter of respondents from these organizations say their entire system is monitored and equipped with instant alerts, compared with just 12 percent of other respondents.
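As an illustration of the kind of live-model monitoring with instant alerts described above, here is a small hypothetical Python sketch. The metric, threshold, and alerting behavior are invented for the example; production MLOps stacks use dedicated monitoring services, but the basic control flow resembles this.

```python
import logging

logging.basicConfig(level=logging.INFO)

ACCURACY_FLOOR = 0.85  # assumed minimum acceptable accuracy for the deployed model

def check_model_health(production_accuracy: float) -> None:
    """Emit an instant alert when the deployed model's quality degrades."""
    if production_accuracy < ACCURACY_FLOOR:
        # A real system might page an on-call engineer or trigger automated
        # retraining at this point.
        logging.warning(
            "ALERT: production accuracy %.2f fell below floor %.2f; investigate drift.",
            production_accuracy, ACCURACY_FLOOR,
        )
    else:
        logging.info("Model healthy: accuracy %.2f", production_accuracy)

check_model_health(0.78)  # with these invented numbers, an alert fires
```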

3. AI-related talent needs shift, and AI’s workforce effects are expected to be substantial

Our latest survey results show changes in the roles that organizations are filling to support their AI ambitions. In the past year, organizations using AI most often hired data engineers, machine learning engineers, and AI data scientists, all roles that respondents commonly reported hiring in the previous survey. But a much smaller share of respondents report hiring AI-related software engineers (the most-hired role last year) than in the previous survey: 28 percent in the latest survey, down from 39 percent. Roles in prompt engineering have recently emerged as the need for that skill set rises alongside gen AI adoption, with 7 percent of respondents whose organizations have adopted AI reporting those hires in the past year.

The findings suggest that hiring for AI-related roles remains a challenge but has become somewhat easier over the past year, which could reflect the spate of layoffs at technology companies from late 2022 through the first half of 2023. Smaller shares of respondents than in the previous survey report difficulty hiring for roles such as AI data scientists, data engineers, and data-visualization specialists, though responses suggest that hiring machine learning engineers and AI product owners remains as much of a challenge as in the previous year.

Looking ahead to the next three years, respondents predict that the adoption of AI will reshape many roles in the workforce. Generally, they expect more employees to be reskilled than to be separated. Nearly four in ten respondents reporting AI adoption expect more than 20 percent of their companies’ workforces will be reskilled, whereas 8 percent of respondents say the size of their workforces will decrease by more than 20 percent.

Looking specifically at gen AI's predicted impact, service operations is the only function in which most respondents expect to see a decrease in workforce size at their organizations. This finding generally aligns with what our recent research suggests: while the emergence of gen AI increased our estimate of the percentage of worker activities that could be automated (60 to 70 percent, up from 50 percent), this doesn't necessarily translate into the automation of an entire role.

AI high performers are expected to conduct much higher levels of reskilling than other companies are. Respondents at these organizations are over three times more likely than others to say their organizations will reskill more than 30 percent of their workforces over the next three years as a result of AI adoption.

4. With all eyes on gen AI, AI adoption and impact remain steady

While the use of gen AI tools is spreading rapidly, the survey data doesn’t show that these newer tools are propelling organizations’ overall AI adoption. The share of organizations that have adopted AI overall remains steady, at least for the moment, with 55 percent of respondents reporting that their organizations have adopted AI. Less than a third of respondents continue to say that their organizations have adopted AI in more than one business function, suggesting that AI use remains limited in scope. Product and service development and service operations continue to be the two business functions in which respondents most often report AI adoption, as was true in the previous four surveys. And overall, just 23 percent of respondents say at least 5 percent of their organizations’ EBIT last year was attributable to their use of AI—essentially flat with the previous survey—suggesting there is much more room to capture value.

Organizations continue to see returns in the business areas in which they are using AI, and they plan to increase investment in the years ahead. We see a majority of respondents reporting AI-related revenue increases within each business function using AI. And looking ahead, more than two-thirds expect their organizations to increase their AI investment over the next three years.

About the research

The online survey was in the field April 11 to 21, 2023, and garnered responses from 1,684 participants representing the full range of regions, industries, company sizes, functional specialties, and tenures. Of those respondents, 913 said their organizations had adopted AI in at least one function and were asked questions about their organizations' AI use. To adjust for differences in response rates, the data are weighted by the contribution of each respondent's nation to global GDP.
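To illustrate what weighting by national GDP contribution can look like in practice, here is a minimal Python sketch. The nations, GDP shares, and response counts are invented, and this is a generic post-stratification-style calculation, not McKinsey's actual procedure.

```python
# Invented figures for illustration only.
gdp_share = {"US": 0.25, "China": 0.18, "Germany": 0.04}  # assumed shares of global GDP
respondents = {"US": 700, "China": 300, "Germany": 200}   # assumed responses per nation

total = sum(respondents.values())

# A respondent's weight scales their nation's share of the sample up or down
# to match that nation's share of global GDP, so over-represented nations
# count less per respondent and under-represented nations count more.
weights = {
    nation: gdp_share[nation] / (count / total)
    for nation, count in respondents.items()
}

for nation, w in weights.items():
    print(f"{nation}: weight per respondent = {w:.2f}")
```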

The survey content and analysis were developed by Michael Chui , a partner at the McKinsey Global Institute and a partner in McKinsey’s Bay Area office, where Lareina Yee is a senior partner; Bryce Hall , an associate partner in the Washington, DC, office; and senior partners Alex Singla and Alexander Sukharevsky , global leaders of QuantumBlack, AI by McKinsey, based in the Chicago and London offices, respectively.

They wish to thank Shivani Gupta, Abhisek Jena, Begum Ortaoglu, Barr Seitz, and Li Zhang for their contributions to this work.

This article was edited by Heather Hanselman, an editor in the Atlanta office.
