Compliance: a concept analysis


Compliance can be defined in many ways. The meaning of the concept depends directly on the discipline and the context in which it is used. Because there is no gold standard for its measurement, a clear definition of compliance in nursing and other health-related professions needs to be established. This article explores the definitions of the concept as it relates to various disciplines, and examines its true meaning as it relates to the nursing profession. Nurses are encouraged to take a patient-centered approach to patient care, thereby forming alliances and empowering patients and family members to take an active role in their health care.

Publication types

  • Case Reports
  • Models, Nursing*
  • Nurse-Patient Relations*
  • Obesity / nursing*
  • Obesity / psychology
  • Patient Participation / psychology*
  • Patient-Centered Care / methods*
  • Power, Psychological

Compliance Disengagement in Research: Development and Validation of a New Measure

  • Original Paper
  • Open access
  • Published: 15 July 2015
  • Volume 22, pages 965–988 (2016)


  • James M. DuBois,
  • John T. Chibnall &
  • John Gibbs


In the world of research, compliance with research regulations is not the same as ethics, but it is closely related. One could say that compliance is how most societies with advanced research programs operationalize many ethical obligations. This paper reports on the development of the How I Think about Research (HIT-Res) questionnaire, which is an adaptation of the How I Think (HIT) questionnaire that examines the use of cognitive distortions to justify antisocial behaviors. Such an adaptation was justified based on a review of the literature on mechanisms of moral disengagement and self-serving biases, which are used by individuals with normal personalities in a variety of contexts, including research. The HIT-Res adapts all items to refer to matters of research compliance and integrity rather than antisocial behaviors. The HIT-Res was administered as part of a battery of tests to 300 researchers and trainees funded by the US National Institutes of Health. The HIT-Res demonstrated excellent reliability (Cronbach’s alpha = .92). Construct validity was established by the correlation of the HIT-Res with measures of moral disengagement ( r  = .75), cynicism ( r  = .51), and professional decision-making in research ( r  = −.36). The HIT-Res will enrich the set of assessment tools available to instructors in the responsible conduct of research and to researchers who seek to understand the factors that influence research integrity.


Introduction

This paper describes the rationale for developing a measure of compliance disengagement in research (the How I Think about Research measure), the process of developing the measure, and a study involving 300 researchers funded by the US National Institutes of Health (NIH) to test the validity of the new measure.

In the world of research, compliance with research regulations is not the same as ethics, but it is closely related. One could say that compliance is how most societies with advanced research programs operationalize many ethical obligations. US federal regulations for the protection of human subjects make this explicit: The “common rule” for human subjects protection is viewed as a specification of the Belmont principles of respect for persons, beneficence, and justice (National Commission 1979 ; Office of Human Research Protections 2009 ). Similarly, research ethicists recognize responsibilities to care for animals, to respect the privacy of health information, and to cite articles when using excerpts. All of these responsibilities have been translated into research regulations, and ethics textbooks routinely discuss ethical and regulatory obligations side by side (Levine 1986 ; Shamoo and Resnik 2015 ; Emanuel et al. 2003 ).

The Rationale for Compliance

The rationale for compliance pertains to accountability, protection from harm, and appropriate balance or reduction of self-serving biases. The American Association for the Advancement of Science ( 2015 ) reports that the US spent nearly $63 billion on nondefense research and development in 2014. Burk ( 1995 ) has suggested that the “expansion of actionable misconduct beyond the bounds of outright fraud should not be surprising, and may be inevitable … Where public monies are used, there must be public accountability” (p. 340).

Research regulations serve multiple functions. In most instances, regulations resulted from failures in the responsible conduct of research; they were efforts to compel, for example, the protection of human subjects, proper care and use of animals, or the integrity of data (Rollin 2006 ; Jones 1993 ; National Bioethics Advisory Commission 2001 ).

While researchers may appreciate a certain amount of latitude in making decisions about research design, regulations also serve to provide guidance where expectations were once vague (Steneck 2007 ). Research ethics is rarely about choosing good over evil, but rather about balancing competing good aims, e.g., balancing access to data or biospecimens with respect for the persons who provided the data or biospecimens. Regulations can provide guidance on how to strike the balance or on what processes to use when making such decisions.

Finally, regulations and compliance systems can serve to reduce the influence of bias. Violations of the responsible conduct of research sometimes are intentional, but at other times are not. A growing body of literature indicates that self-serving bias operates below the threshold of awareness and influences professional decisions (AAMC-AAU 2008 ; Moore and Loewenstein 2004 ; Irwin 2009 ). Compliance and oversight programs can play an important role in ensuring that research is conducted with integrity even when researchers might have powerful subliminal motives to cut corners.

Compliance should be part and parcel of quality research. To the extent that compliance can promote the aim of good research, one could say that compliance is a virtue of research professionals (DuBois 2004 ). It is simply part of being an effective researcher.

The Burden of Compliance

At the same time, the burden of compliance has grown significantly over the past three decades. It is not uncommon for an academic institution in the United States to have written policies and require training of selected personnel in 10 or more domains, including animal welfare, conflicts of interest, controlled substances, effort reporting, export control, Health Insurance Portability and Accountability Act’s (HIPAA) privacy rule, human research protections, intellectual property, and research integrity. The National Council of University Research Administrators publishes a book, Research and compliance , that “distills essential information from mounds of federal laws, regulations and circulars, covering more than 100 of the most significant sets of requirements referenced in federal contracts and grants” (Youngers and Webb 2014 , back cover).

The resulting burdens of compliance can slow and discourage research:

The past two decades have witnessed increasing recognition that the administrative workload placed on federally funded researchers at U.S. institutions is interfering with the conduct of science in a form and to an extent substantially out of proportion to the well-justified need to ensure accountability, transparency and safety. (National Science Board 2014 , p. 1)

Principal investigators report spending 42 % of their grant-funded time on administrative tasks (National Science Board 2014 ). A report from the National Research Council asserted that the problem of excessive regulatory burdens is expected to cost universities “billions of dollars over the next decade.” (National Research Council 2012 , p. 16) Moreover, institutional policies sometimes appear to have a primary intention of protecting institutions rather than protecting the integrity of science or the welfare of human and animal subjects (Koski 2003 ).

Finally, compliance can have an unintended side effect. Ethical obligations to society, to human or animal subjects, or to scientific peers may be reduced to a ritual performance aimed at satisfying a regulatory requirement. For example, the obligation to obtain informed consent—which should involve presentation of information and ascertaining understanding and voluntary agreement, with follow-up over time—can be reduced to “consenting someone,” which means obtaining a signature on a consent form, that is, doing the minimum needed to satisfy requirements. And when compliance requirements appear unjust or unreasonable, researchers may bypass the system and conduct research without oversight (Keith-Spiegel and Koocher 2005 ; Martinson et al. 2006 ). The Federation of Societies for Experimental Biology’s 2013 survey on administrative burden (N = 1324 individuals, mostly principal investigators) found that “One common perception of regulatory oversight among responders was that regulations ‘punished all’ for the ‘mistakes of a few’” (Federation of American Societies for Experimental Biology 2013 ). Thus, it is not difficult to understand how some researchers may disengage from compliance—that is, they may rationalize spending less time on compliance than necessary or avoiding some domains of compliance altogether.

The Problem of Noncompliance

We explained above that there are good reasons for compliance requirements, and good reasons why they can be perceived as problematic. Nevertheless, noncompliance causes significant problems for institutions, investigators, subjects and science. For institutions, noncompliance can involve spending enormous amounts of time and money on investigations, paying fines, and having research programs suspended. Institutions invest heavily in compliance education because federal sentencing guidelines for institutions provide reduced penalties for institutions that have an effective and vital compliance program, including effective compliance training (Grant et al. 1999 ; Olson 2010 ). For investigators, noncompliance can lead to suspension of protocols, loss of research privileges, loss of privileges to obtain government funding, and prohibitions from publishing data (Neely et al. 2014 ). For human subjects, noncompliance may involve failures of informed consent, privacy protection, or safety monitoring (National Bioethics Advisory Commission 2001 ). For science, noncompliance can contribute to bad publicity for the field, diminished public trust, and the publication of questionable research data (Irwin 2009 ).

Why Does Noncompliance Occur?

Despite the problems that noncompliance causes, research-intensive universities conduct an estimated 2–3 investigations of serious noncompliance each year involving violations of human subjects protections, research integrity, animal care, or conflict of interest policies (DuBois et al. 2013a ). A meta-analysis of self-report surveys found that 2 % of investigators admitted to engaging at least once in research misconduct defined as plagiarism or data fabrication or falsification (Fanelli 2009 ).

Why do researchers fail to comply with regulations and policies? Many different answers are plausible. In a literature review, DuBois, Anderson et al. identified 10 environmental factors that are hypothesized to contribute to professional wrongdoing by providing a motive, means, or opportunity (DuBois et al. 2012 ). Factors included financial rewards, lack of oversight, ambiguous norms, vulnerable victims, and playing conflicting roles. However, when the same research group later examined 40 cases of actual research misconduct, they found that few environmental factors characterized the cases. The most common characteristic of cases was self-centered thinking, which was mentioned in 48 % of cases (DuBois et al. 2013b ).

Drawing from their experience on review boards, Neely et al. ( 2014 ) state that the cause of noncompliance can be “that an investigator is overloaded, does not know the regulations, or does not take the time to pay attention to the details” (p. 716). However, each of these “causes” raises a further question. Overload forces prioritization: why is compliance given low priority? Why does the principal investigator not know the regulations? When researchers lack knowledge of technical matters, they frequently turn to colleagues or the literature to find answers; why do they not do the same with questions about compliance? Why does the investigator not take time to pay attention to the details? Do they pay attention to the details of their data analysis or their research budgets? Explanations that focus on the demands of the research environment (Martinson et al. 2009 ), such as the pressure to publish and obtain external research funding, similarly lead us to ask, following the logic of Samenow’s ( 2001 ) exploration of criminal behavior, why most researchers behave with integrity in the face of the same pressures.

Based on the experience of two authors over the past 3 years delivering remediation training to investigators who were referred for noncompliance, we hypothesize that many instances of noncompliance occur when researchers use cognitive distortions that support disengagement from compliance. We propose this as an extension of moral disengagement theory.

Cognitive Distortions that Support Moral Disengagement

Most of the literature on moral disengagement has focused on antisocial behavior such as theft and violent crime. Central to moral disengagement theory is the idea that human beings tend to use cognitive strategies to protect a self-identity as a decent person. “People do not ordinarily engage in reprehensible conduct until they have justified to themselves the rightness of their actions. What is culpable can be made righteous through cognitive reconstrual” (Bandura et al. 1996 , p. 365).

In describing the self-concept of convicted criminals who engaged in antisocial behaviors, Samenow ( 2001 ) observes:

The antisocial person regards himself as a good human being. He may admit momentarily to having done something wrong, especially if he believes it will be to his advantage. … But if one were to inquire whether, deep down, he regards himself as a bad person, the answer would be in the negative. As one man remarked, ‘If I thought of myself as evil, I couldn’t live.’ (p. 297)

Mechanisms of moral disengagement permit this self-identity to persist despite antisocial behaviors. In a longitudinal study of youths considered at risk for antisocial behavior, Hyde, Shaw, et al. found that moral disengagement was more strongly correlated with the development of antisocial behavior than any environmental variables. Moral disengagement further served as a moderator variable that explained whether neighborhood, rejecting parenting, and lack of empathy would predict antisocial behavior (Hyde et al. 2010 ).

This same psychological dynamic appears to characterize “normal” (as opposed to antisocial) levels of dishonesty. After conducting a series of six experiments on dishonest behavior, Mazar et al. ( 2008 ) conclude that “people who think highly of themselves in terms of honesty make use of various mechanisms that allow them to engage in a limited amount of dishonesty while retaining positive views of themselves” (p. 642). One of the primary mechanisms they use is characterization—changing the way they characterize their dishonesty. This fits well with Bandura’s work on moral disengagement, which involves the use of justification strategies such as using euphemistic labels or advantageous comparisons to prevent self-sanctioning (Bandura et al. 1996 ; Bandura 1999 ).

In an analysis of cases of research misconduct that involved coding of statements by the wrongdoers and cluster analysis, Davis et al. ( 2007 ) found that rationalizations comprised 2 of 7 clusters. Rationalizations included statements that clearly minimized the harmfulness of misconduct, blamed others, or assumed the worst if they did not fabricate data or plagiarize. This fits well with the growing body of literature cited above that indicates that self-serving bias characterizes the decision-making of normal individuals and professionals (AAMC-AAU 2008 ; Moore and Loewenstein 2004 ). It would appear that rationalization is not just for antisocial personalities, and self-serving bias is not just for narcissists (although it is heightened in these clinical populations).

The How I Think Questionnaire

The How I Think (HIT) Questionnaire was developed to assess self-serving cognitive distortions particularly in adolescent populations with antisocial tendencies or behaviors (Barriga et al. 2001 ). The HIT questionnaire focuses on four cognitive distortions, that is, four thinking errors that distort the interpretation of a situation in favor of self-interests. The four cognitive distortions are assuming the worst, blaming others, minimizing/mislabeling, and self-centered thinking. Table  1 provides a definition of each of the cognitive distortions and presents a HIT item that represents each. All of the HIT’s cognitive distortion items are written with reference to one or another of four behaviors that are representative of delinquent behavior: oppositional-defiance, physical aggression, lying, and stealing. The HIT also includes anomalous responding (AR) items, which serve as a built-in measure of socially desirable responding, and positive filler (PF) items which serve to reduce the questionnaire’s emphasis on negative behaviors, perhaps making its purpose less transparent. The HIT questionnaire underwent modification and improvement during the course of its development. The original version included 52 cognitive distortion items and 8 AR items. Item analyses of development sample data utilized selection criteria such as criterion group discrimination and correlation with antisocial behavior measures. The final version comprised 54 items (39 cognitive distortion, 8 AR items, and 7 PF items).

The HIT consists of a series of statements that are rated on a 1–6 Likert-type scale ranging from (1) strongly disagree to (6) strongly agree without a “neutral” option. The overall HIT score consists of the mean value of responses to items representing any of the four cognitive distortions, thus allowing for a range from 1 to 6, with higher scores indicating higher usage of self-serving cognitive distortions. Wallinius et al. ( 2011 ) found a mean score of 1.88 (SD = .46) in a sample of adult non-offenders and 2.72 (SD = .90) among adult offenders.
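To make the scoring rule concrete, a minimal Python sketch follows (the item names and responses are hypothetical illustrations, not items from the instrument):

```python
from statistics import mean

def hit_score(responses, cd_items):
    """Overall HIT score: the mean 1-6 Likert rating across cognitive-distortion
    (CD) items only; anomalous-responding and positive-filler items are excluded."""
    return mean(responses[item] for item in cd_items)

# Hypothetical respondent: three CD items plus one AR and one PF item.
responses = {"cd1": 2, "cd2": 1, "cd3": 3, "ar1": 5, "pf1": 6}
print(hit_score(responses, ["cd1", "cd2", "cd3"]))  # prints 2
```

Higher values indicate heavier use of self-serving cognitive distortions, consistent with the offender versus non-offender means reported by Wallinius et al. (2011).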

A recent meta-analysis of studies conducted with 29 independent samples ( N  = 8186) found that the HIT has demonstrated high levels of reliability and validity. Internal consistency reliability (Cronbach’s alpha) was excellent across populations, with a mean alpha of .93, 95 % CI [.92, .94] (Gini and Pozzoli 2013 ). The validity of the HIT has been supported in relation to numerous constructs, including the ability to distinguish delinquent from non-delinquent populations, and highly significant ( p  < .001) positive correlations with measures of externalizing behavior ( r  = .52), aggressive behavior ( r  = .38), antisocial behavior ( r  = .55), delinquent behavior ( r  = .41), and low empathy ( r  = .42) (Gini and Pozzoli 2013 ).

Wallinius, Johansson et al. found similar results from their administration of the HIT to adult and adolescent offenders and non-offenders in Sweden (Wallinius et al. 2011 ). Across all four samples the HIT demonstrated excellent reliability (alpha = .90 to .96). Among both adult and adolescent samples, the HIT identified significantly higher levels of self-serving cognitive distortions among offenders. The study by Wallinius et al. is notable for our purposes because it supported the validity and reliability of the HIT with adults, with non-offenders, and with both males and females.

We considered alternatives to adapting the HIT. For example, Medeiros et al. ( 2014 ) developed a taxonomy of biases in ethical decision-making. While it is useful as a research framework, we rejected it as a framework for assessment because we felt (a) most of the self-serving cognitive biases they identified could be subsumed under one of the four distortions operationalized by the HIT-Res (e.g., “abdication of responsibility,” “diffusion of responsibility,” and “unquestioning deference to authority” are all forms of “blaming others” as we operationalized the concept); (b) Bandura found that the biases that support moral disengagement are not multifactorial; and (c) a simpler framework is advantageous from a pedagogical perspective. While they examine additional “biases”—such as moral insensitivity and changing norms—these are quite different from self-serving biases and require different measurement approaches.

In what follows we report on the initial psychometric evaluation of the HIT-Res in a sample of researchers funded by NIH.

Adapting the HIT

Given that the HIT has demonstrated reliability and validity in identifying self-serving cognitive distortions that play a role in perpetuating deviant behavior, and that recent data suggest that cognitive distortions are used by ordinary people who engage in milder forms of wrongdoing (such as displaying “normal” levels of dishonesty), we decided to adapt the HIT for use with researchers. We maintained a focus on the original four cognitive distortions, but changed the four behavioral referents to matters of research compliance: conflicts of interest (COI); human and animal subject protections (HSP/ASP); research misconduct (RM: falsification, fabrication, and plagiarism); and the general responsible conduct of research (RCR). To be clear, our primary focus in developing the HIT-Res was the use of cognitive distortions—not the behavioral referents, which were meant simply to increase the ecological validity of the test. As we developed behavioral referents, the purpose was to provide research-specific examples of the use of four cognitive distortions; no attempt was made to address all matters of research compliance and integrity, and no plan was made to analyze the behavioral referents as distinct constructs.

The HIT includes 39 cognitive distortion items (11 assuming the worst, 10 blaming others, 9 minimization/mislabeling, and 9 self-centered thinking), 8 anomalous responding items, and 7 positive filler items. Under the direction and review of the original author of the HIT, item content was adapted to the research context by the first two authors (both of whom have extensive research and assessment experience with research ethics and psychological constructs), with the goal of staying as close as possible to the original HIT item wording (see Table 1) and maintaining a balance of the 4 behavioral referents within each of the 4 cognitive distortions. Items were also edited generally to reduce ambiguity and promote equivalence of length. In rare cases, HIT items could not be converted to the research context and/or could not be linked to a research behavioral referent; in these cases, a HIT item was dropped and/or a new item was written to balance item content for a given cognitive distortion. These adaptations resulted in a set of 42 cognitive distortion items: 10 assuming the worst (3 RM, 3 RCR, 2 COI, 2 HSP/ASP), 12 blaming others (4 RCR, 3 RM, 3 HSP/ASP, 2 COI), 10 minimization/mislabeling (3 RM, 3 HSP/ASP, 3 COI, 1 RCR), and 10 self-centered thinking (3 RCR, 3 COI, 2 RM, 2 HSP/ASP); 7 anomalous responding items; and 6 positive filler items. The reading level of this set of items, assessed with the Flesch–Kincaid measure, was grade 4.8, similar to the 4th-grade reading level of the HIT (Barriga et al. 2001). This was considered advantageous given that, according to the National Science Foundation, nearly half of all post-doctoral trainees were born in countries where English is not the native language (National Science Foundation 2014). We named this set of 55 items the How I Think about Research (HIT-Res) questionnaire.
We anticipated that HIT-Res might be shortened following testing by dropping items with low item-total correlations. Table  1 presents examples of original HIT and HIT-Res items in each domain.
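For reference, the Flesch–Kincaid grade level is a simple function of average sentence length and syllables per word. The sketch below uses a crude vowel-group syllable counter, so its output only approximates the readability tools typically used:

```python
import re

def fk_grade(text):
    """Flesch-Kincaid grade = 0.39*(words/sentences) + 11.8*(syllables/word) - 15.59."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    # Crude approximation: count runs of vowels as syllables (minimum one per word).
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59

# Short, monosyllabic sentences score below grade level zero.
print(round(fk_grade("The cat sat on the mat."), 2))  # prints -1.45
```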

Participants and Recruitment

We recruited a convenience sample of 300 researchers who were funded by the NIH, working in the United States (US), and diverse in terms of career stage, age, and field of research. We built a recruitment database using the NIH RePORTER, an online database of all grants awarded by NIH, which can be sorted by funding mechanisms and identifies the principal investigator of each grant. In order to represent diverse career stages, we targeted individuals with two distinct kinds of funding: Training grants (T and K) and independent investigator grants (R01). In order to increase the number of eligible trainees, we also contacted the principal investigators of institutional research training programs funded through the Clinical and Translational Science Award (CTSA) program with the request that they share our recruitment email with their NIH-funded trainees.

From February through May 2014, potential participants were contacted by email with an invitation to participate in a study that aimed to evaluate a measure of how researchers make professional decisions. We estimated that participation would require 75–120 min to complete the full battery of measures and offered $100 in payment. If an individual did not complete the measures, reminder emails were sent at 1 and 4 weeks following initial contact. Each email contained a link to the online survey as well as a link to opt out of further contact.

Survey Instrument: Platform, Measures, and Convergent Validity Hypotheses

The survey was conducted using Qualtrics survey software, which allows the use of many different response formats and provides HIPAA-compliant data security ( www.qualtrics.com ). We used the Qualtrics forced choice option and received complete data on the full battery of measures from 300 participants.

The survey included the following measures. Measures 2, 3, and 4 were used to assess convergent validity, measure 5 to assess concurrent criterion validity, and measure 6 to control for social desirability.

1. The HIT-Res, which was used for the first time in this validity study.

2. Propensity to Morally Disengage Scale (PMD) (Moore et al. 2012 ). We used the 8-item version of the PMD (alpha reliability = .80), which consists of one item representing each of eight mechanisms of moral disengagement, such as euphemistic labeling and displacement of responsibility (e.g., “Some people have to be treated roughly because they lack feelings that can be hurt”). Moore et al. ( 2012 ) reported strong evidence for the convergent, discriminant, incremental, and predictive validity of the PMD. We expected the HIT-Res to be positively correlated with the PMD because they are both intended to measure the use of cognitive distortions or thinking errors that support moral disengagement. However, we also expected some divergence given that the HIT-Res assesses the use of cognitive distortions specifically with reference to research compliance, rather than general moral norms.

3. Global Cynicism Scale (GCS) (Turner and Valentine 2001 ). The GCS is an 11-item scale (alpha reliability = .86) that assesses level of cynicism (e.g., “When you come right down to it, it’s human nature never to do anything without an eye to one’s own profit”). Turner and Valentine ( 2001 ) reported compelling evidence for the convergent, discriminant, criterion-related, and nomological validity of the GCS. Cynicism has also been shown to be negatively correlated with good ethical decision making and positive organizational behavior (Mumford et al. 2006 ; Turner and Valentine 2001 ). We hypothesized that GCS scores would be positively correlated with the cognitive distortions assessed by the HIT-Res.

4. Narcissistic Personality Inventory (NPI-16) (Raskin and Terry 1988 ; Ames et al. 2006 ). The NPI-16 is a well-validated (convergent, discriminant, and predictive), reliable (alpha reliability = .72; retest reliability = .85), 16-item measure of narcissism (e.g., “I know that I am good because everybody keeps telling me so”). Previous studies have found that narcissism is negatively correlated with good ethical decision-making and positive organizational behavior (Mumford et al. 2006 ; Antes et al. 2007 ; Penny and Spector 2002 ). We expected that narcissism would be positively correlated with HIT-Res scores.

5. The Professional Decision-making in Research (PDR) measure (DuBois et al. 2015 ). The PDR is an adaptation of the Ethical Decision-Making Measure (EDM), which examines the use of good decision-making strategies when confronted with difficult decisions in research that defy one simple right answer (Mumford et al. 2006 ; Antes and DuBois 2014 ). In a validation study with 300 NIH-funded researchers, it demonstrated strong reliability (alpha reliability = .84; parallel forms reliability = .70) and construct validity (DuBois et al. 2015 ). Such strategies include seeking help, managing emotions, anticipating consequences, recognizing rules, and testing personal assumptions and motives. It consists of 16 vignette items presenting challenging situations; each item is followed by six response options (3 illustrate the use of good decision-making strategies, 3 violate one or more of the strategies) from which participants select the two they would be most likely to do if they were actually in the situation. One point is awarded when both options selected illustrate the use of good professional decision-making strategies. We hypothesized that higher scores on the HIT-Res would correlate with lower scores on the PDR, because compliance disengagement would decrease the use of strategies such as recognizing rules, considering consequences, and testing personal assumptions.
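The PDR scoring rule can be illustrated with a short Python sketch (the option labels and answer keys below are hypothetical, not taken from the measure):

```python
def pdr_score(selections, good_options):
    """selections: one (choice_a, choice_b) pair per vignette.
    good_options: the set of 'good strategy' options for each vignette.
    A point is awarded only when BOTH selected options are good ones."""
    return sum(
        1
        for (a, b), good in zip(selections, good_options)
        if a in good and b in good
    )

good = [{"A", "B", "C"}, {"D", "E", "F"}]  # hypothetical answer key for two vignettes
picks = [("A", "C"), ("D", "X")]           # second pair mixes a good and a poor option
print(pdr_score(picks, good))  # prints 1
```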

6. Marlowe–Crowne Social Desirability Scale (MCSDS) (Crowne and Marlowe 1960 ; Reynolds 1982 ). We used the 13-item form of the MCSDS (alpha reliability = .76) as a control variable to determine the extent to which responses on the HIT-Res might be determined by socially-desirable responding (e.g., “I have never been irked when people expressed ideas very different from my own”), and to examine convergent validity of the anomalous responding (AR) scale of the HIT-Res. Reynolds ( 1982 ) found strong evidence for convergent validity of the MCSDS and recommended the 13-item version as the best substitute for the original 33-item MCSDS.

7. A demographic survey that allowed us to describe our population and examine whether the HIT-Res correlates with variables such as gender, age, years of experience, field of study, and native language. See Table  2 for a description of demographic data collected.

Statistical Analysis

Data were analyzed using IBM’s SPSS Statistics edition 22 software. Data analysis focused on producing descriptive statistics, reliability statistics for the HIT-Res, confirmatory factor analysis of the HIT-Res, and testing the convergent validity of the HIT-Res by examining correlations with the PDR, PMD, GCS, and NPI-16.

Research Ethics

The Institutional Review Board at Washington University in St. Louis approved the study using an expedited protocol (201401153). The survey included a 4-page consent form. Participants indicated consent by clicking a button to proceed to the measures.

Results

The results pertain to the psychometric properties of the HIT-Res in relation to reliability, demographic group differences, internal factor structure, and construct validity.

HIT-Res Reliability

Nine cognitive distortion (CD) items and one anomalous responding (AR) item had corrected item-total correlations less than .38 and were dropped from further analyses. Subsequent analyses focused on the 45-item version of the HIT-Res presented in "Appendix", which consists of 33 CD items—8 assuming the worst (AW), 9 blaming others (BO), 8 minimizing/mislabeling (MM), and 8 self-centered (SC) items—as well as 6 AR and 6 positive filler (PF) items. The retained items have corrected item-total correlations ranging from .38 to .64.
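
A corrected item-total correlation of the kind used in this item screening correlates each item with the sum of the remaining items (excluding the item itself). A minimal sketch on simulated Likert-style data (the cutoff mirrors the text; the data are synthetic, not the study data):

```python
import numpy as np

def corrected_item_total(items):
    """items: rows = respondents, columns = items. Returns one r per item,
    correlating each item with the total of the *other* items."""
    items = np.asarray(items, dtype=float)
    total = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], total - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])

rng = np.random.default_rng(2)
trait = rng.normal(size=(150, 1))
good = 3.5 + trait + rng.normal(scale=0.8, size=(150, 10))  # trait-driven items
junk = rng.normal(3.5, 1.0, size=(150, 2))                  # unrelated items
r = corrected_item_total(np.hstack([good, junk]))
print((r < .38).sum())  # → 2 (only the unrelated items fall below the cutoff)
```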

Cronbach’s alpha reliability coefficients were .92 for the 33 CD items and .75 for the AR scale. (PF is not a scale and therefore reliability scores were not generated.) These alphas are nearly identical to those observed in the meta-analysis of studies conducted with the original HIT (.93 and .72, respectively).
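
Cronbach's alpha can be computed directly from an items-by-respondents matrix; a minimal sketch with simulated data (the alphas reported above come from the study data, which are not reproduced here):

```python
import numpy as np

def cronbach_alpha(items):
    """items: 2-D array, rows = respondents, columns = scale items."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()  # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)    # variance of the scale total
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Simulated 6-point responses to 33 items driven by one latent trait,
# so the scale is internally consistent and alpha comes out high.
rng = np.random.default_rng(0)
trait = rng.normal(size=(200, 1))
items = np.clip(np.round(3.5 + trait + rng.normal(scale=0.8, size=(200, 33))), 1, 6)
print(round(cronbach_alpha(items), 2))
```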

All four CD subscales (AW, BO, MM, and SC) were correlated with each other in the range of r  = .69 to .88 ( p  < .001) and with the HIT-Res total score in the range of r  = .85 to .89 ( p  < .001). This raised the question of whether the subscales are in fact separate factors; however, such strong correlations were also observed in the original HIT (Barriga et al. 2001 ).

Distribution and Demographic Differences

The overall sample ( N  = 300) had a mean HIT-Res score of 2.49 (SD = .63) with a range of 1.06–5.55. While the HIT-Res is not the same test as the HIT, the mean score observed among control populations (non-offenders, N  = 3676) in a meta-analysis of original HIT data ( m  = 2.47) is nearly identical to what we observed. Figure  1 shows the distribution of HIT-Res scores, which closely resembles the bell curve typical of normally distributed traits.

Fig. 1 Distribution of HIT-Res mean scores

Table  2 presents demographic data, including mean scores and tests of significant differences ( t tests and ANOVAs) between groups. Higher scores are associated with being male ( t  = 3.42, p  < .001), an animal researcher ( t  = 2.97, p  = .003) or wet lab researcher ( t  = 3.52, p  < .001), and having English as a second language ( t  = 3.28, p  < .001). No significant differences were associated with years of experience, status as a trainee versus independent investigator, or with race (once English as a second language was taken into account).

HIT-Res Confirmatory Factor Analysis

Past studies of the original HIT have used confirmatory factor analysis to test the fit of two different models: a six-factor model that treats each of the four cognitive distortions as a unique factor, plus the AR scale and PF items; and a three-factor model that treats the four cognitive distortions as one factor, plus the AR scale and PF items. Barriga et al. ( 2001 ) found a better fit using the six-factor model, while Wallinius et al. ( 2011 ) found a better fit with the three-factor model. Accordingly, we ran both six-factor and three-factor models using confirmatory factor analysis.

Maximum likelihood estimation confirmatory factor analysis (IBM SPSS AMOS 22.0.0, Amos Development Corporation, Meadville, PA) was used to assess the structure of the HIT-Res. Model fit was evaluated using the chi-square test, the ratio of the chi-square value to the model degrees of freedom (df), the root mean square error of approximation (RMSEA), the goodness-of-fit index (GFI), and the parsimony goodness-of-fit index (PGFI). Standardized path coefficients were estimated (with standard errors) for all models.

A six-factor model was tested first. This model characterized the cognitive distortion construct by the original four dimensions of BO, AW, SC, and MM, and also included AR and PF. The confirmatory model was not admissible, secondary to a non-positive definite covariance matrix and negative variance estimates. A three-factor model treated the cognitive distortion items as one factor, again including AR and PF. This model demonstrated adequate fit: N = 300, χ²(942) = 1897, p < .001; χ²/df = 2.0; RMSEA = .058; GFI = .77; PGFI = .70. Table  3 displays the standardized coefficients for the three-factor model and the internal consistency reliability (coefficient alpha) for the CD and AR factors. Correlations among the three factors were: CD-AR, r = −.50 (p < .001); CD-PF, r = −.30 (p < .001); and AR-PF, r = .13 (p = .02).
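
The reported χ²/df ratio and RMSEA follow directly from the model chi-square, degrees of freedom, and sample size, which allows a quick consistency check (this recomputes the indices from the published values; it does not re-fit the model):

```python
import math

def fit_indices(chi2, df, n):
    """Ratio of chi-square to df, and RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1)))."""
    rmsea = math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))
    return chi2 / df, rmsea

ratio, rmsea = fit_indices(chi2=1897, df=942, n=300)
print(round(ratio, 1), round(rmsea, 3))  # → 2.0 0.058, matching the reported fit
```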

HIT-Res Validity

Table  4 reports the correlation of the HIT-Res with various measures of convergent and concurrent validity. The HIT-Res was strongly correlated with the PMD scale ( r  = .75, p  < .001). This is strong evidence of convergent validity; that is, the HIT-Res appears to measure moral disengagement in the context of research compliance. This correlation is much stronger than any of the correlations with other convergent validation measures reported in the meta-analysis of data from 29 independent samples using the original HIT, which ranged from .38 to .55 (Gini and Pozzoli 2013 ).

As expected, the HIT-Res was also positively correlated with the GCS ( r  = .51, p  < .001). Somewhat surprisingly given that the HIT-Res assesses self-serving cognitive distortions, it was not significantly correlated with the NPI-16 ( r  = .10, p  = .09). However, this may suggest that the thinking patterns assessed by the HIT-Res and used in moral disengagement are not unique to any personality type.

Overall HIT-Res scores were weakly correlated with social desirability as measured by the MCSDS ( r  = .23, p  < .001). As expected, the MCSDS was significantly correlated with the AR score ( r  = .56, p  ≤ .001), thus providing support for the AR scale as a built-in measure of social desirability. At the same time, the statistical significance of the various relationships identified above was not affected when we treated MCSDS or AR responding as a covariate.

Regarding concurrent criterion validity, the HIT-Res was negatively correlated with the PDR ( r  = −.38, p  < .001). This was expected because good professional decision-making involves considering the rules for research, questioning one’s motives and assumptions, and anticipating consequences in a realistic manner. To investigate the incremental validity of the HIT-Res in relation to the PDR, two multiple regression analyses were conducted. In the first analysis, PMD, GCS, NPI-16, and MCSDS were entered as a block to predict PDR. This block explained 14 % of the variance in PDR, with significant regression coefficients associated with PMD (beta = −.27, p  < .001), GCS (beta = −.15, p  = .015), and NPI-16 (beta = −.12, p  = .027); MCSDS was not a significant predictor (beta = −.07, p  = .20). Following this block, the HIT-Res was entered into the equation. The HIT-Res significantly predicted PDR (beta = −.28, p  = .001), accounting for an additional 3.2 % of variance explained. With HIT-Res in the equation, the regression coefficients for PMD and GCS were no longer significant (beta = −.09 and −.08, p  = .28 and .20, respectively). In the second regression analysis, all predictors were considered for inclusion in an equation to predict PDR based on a statistical (stepwise) inclusion rule. In this analysis, HIT-Res entered the equation first, explaining 15 % of the variance in PDR, followed by NPI-16, which explained an additional 1.3 % of the variance. Beta coefficients were −.37 ( p  < .001) for HIT-Res and −.12 ( p  = .03) for NPI-16. PMD and GCS did not enter the equation.
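
The incremental-validity logic above is hierarchical regression: fit a baseline block of predictors, add the new measure, and inspect the gain in R². A sketch on simulated data using ordinary least squares via NumPy (the variable names mirror the measures, but the numbers are simulated, not the study data):

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an OLS fit with an intercept term."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(1)
n = 300
pmd, gcs, hit = rng.normal(size=(3, n))
pdr = -0.4 * hit - 0.1 * pmd + rng.normal(scale=0.9, size=n)  # simulated criterion

r2_base = r_squared(np.column_stack([pmd, gcs]), pdr)       # block 1 only
r2_full = r_squared(np.column_stack([pmd, gcs, hit]), pdr)  # block 1 + new measure
print(round(r2_full - r2_base, 3))  # variance uniquely explained by the added measure
```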

Discussion

The validity study presented in this article supports the HIT-Res as a valid and reliable adaptation of the original HIT in this population, with psychometric properties (alphas and means) nearly identical to those of the original HIT. By adapting the behavioral referents from delinquent behaviors to matters of research compliance, we have produced the first measure to assess the use of cognitive distortions regarding research compliance.

Our analysis of the psychometric properties of the HIT-Res did differ from the original HIT in one regard: whereas the original HIT was determined to be multifactorial—with each of the cognitive distortions functioning as a unique subscale—the HIT-Res clearly represented a single factor for the cognitive distortions. This is not entirely inconsistent with prior data on the original HIT. The HIT manual reported that all cognitive distortions were correlated with each other at .82 or higher; it is therefore questionable whether it was appropriate to run a six-factor confirmatory factor analysis model. Moreover, Wallinius et al. ( 2011 ) also found that the four cognitive distortions in the original HIT functioned as one factor. They speculated that this could be due to population-related factors: their sample contained adults, whereas the original validity samples for the HIT consisted exclusively of adolescents. Nevertheless, we believe a one-factor model is theoretically defensible, particularly in light of the strong correlation of the HIT-Res with the Propensity to Morally Disengage scale: a correlation of .75 is typical of parallel forms, suggesting that the two tests tap into essentially the same construct, despite their obviously different referents. At the same time, regression analysis indicated that the HIT-Res retains independent predictive validity, likely because it is contextualized to the research setting, as is the PDR.

What Does the HIT-Res Measure?

Here it is worth quoting at length what Moore et al. ( 2012 ) wrote about the development of the Propensity to Morally Disengage scale:

Consistent with Bandura’s theoretical claim that moral disengagement is best understood to be “multifaceted” (Bandura et al. 1996 : 367), not multifactorial, and in line with both his (e.g., Bandura et al. 1996 ) as well as subsequent published and unpublished work on moral disengagement…, our aim was to create a unidimensional measure of the general propensity to morally disengage. That is, while acknowledging that the eight individual mechanisms of moral disengagement represent different facets of the construct, our overarching goal was to tap these facets as part of a valid scale that assesses the general propensity to morally disengage as a higher order concept. (p. 13)

Accordingly, the four cognitive distortions can be construed as mechanisms of moral disengagement, and as such represent different facets of a higher order concept. Table  5 illustrates how each of the eight mechanisms of moral disengagement can be mapped onto the four cognitive distortions.

Practical Applications of the HIT-Res

The HIT-Res may be valuable to Responsible Conduct of Research instructors in at least two ways. First, the HIT-Res provides a new outcome measure—the first measure to examine cognitive distortions in the service of compliance disengagement (Antes and DuBois 2014 ; Redman 2014 ). Recent meta-analyses of research ethics courses have found that few courses demonstrate any positive outcomes (Antes et al. 2009 , 2010 ). Although this is explained in part by the instructional methods used, another reason for this finding may be the use of inappropriate outcome measures such as measures of moral development (Antes et al. 2009 ), which is unlikely to be affected by short-term interventions. In contrast, short-term interventions have been shown to reduce the use of cognitive distortions even in clinically challenging populations (Gibbs et al. 1995 ).

Second, were courses in the responsible conduct of research to succeed in reducing rates of noncompliance or research misconduct, one would want to know why they had a positive effect. The HIT-Res provides a way of exploring compliance disengagement as a mediating variable in the context of clarification research—that is, research that examines not only whether an educational program is effective in achieving an aim (such as increased compliance) but also why (Cook et al. 2008 ).

Limitations and Next Steps

First, unlike some measures of moral reasoning, which are difficult to "fake high" (Rest et al. 1999 ), the HIT-Res is susceptible to socially desirable responding as measured both by the Marlowe–Crowne and by the built-in AR scale. However, the relationships between the HIT-Res and cynicism, moral disengagement, and professional decision-making remained even when controlling for socially desirable responding. That is to say, the HIT-Res works as a device for assessing compliance disengagement even when some individuals engage in positive impression management in their responses. Because the built-in AR scale is strongly positively correlated with the Marlowe–Crowne, we strongly recommend that it be used as a control variable in studies that use the HIT-Res as a correlate or outcome measure.

Second, ours was a convenience sample of 300 NIH-funded researchers. Compared to a limited-variable dataset of over 2500 potential participants in this research, the 300 respondents represented here were more likely to be female (57.3 vs. 40.9 %), χ 2 (1) = 29.1, p  < .001, and were more likely to be native English speakers (84.0 vs. 77.5 %), χ 2 (1) = 6.5, p  = .01. The groups were comparable, however, regarding any human subjects research: 60.3 % in the present sample versus 56.2 % in the comparison sample, χ 2 (1) = 1.8, p  = .178. As a result, to the extent that gender and language are associated with responses, the results obtained here may not be representative of the universe of NIH researchers, though we have no reason to believe that participating female and ESL researchers were different from their non-participating counterparts. Participants were also paid for their time, given the extensive time burden of our testing (approximately 1 h). This may have had the advantage of including a broader and less biased range of participants than we might have had if we relied on altruism alone. To the extent that payments may have created a desire to respond in socially desirable ways, we assessed it using two measures and controlled for it.

In this study, we used a measure of professional decision-making in research as a measure of concurrent validity. While low-fidelity performance simulations have been shown to correlate with actual job performance (Helton-Fauth et al. 2003 ), it would be desirable to examine directly the relationship of the HIT-Res to job performance, though directly assessing relatively rare and hidden events (such as data fabrication or failure to report conflicts of interest) is challenging. Next steps in our research agenda with the HIT-Res include examining its relationship to self-reports of violations of research ethics and compliance, though we concede that self-reports are liable to reflect socially desirable responding. Plans for future research also include using the HIT-Res in two analyses of data obtained from participants in the PI Program, which enrolls researchers who have had difficulty with compliance expectations. We wish to examine whether the PI Program, which directly addresses thinking patterns, can effect significant reductions in HIT-Res scores from pre- to post-testing.

We expect that this reduction is possible. After all, Bandura’s theory of moral disengagement was not meant to explain how individuals with antisocial personality could commit atrocities so much as how “normal” people could—in some sectors of their lives at a particular point in history—violate the most basic rules of decent society (Bandura 1999 ). Perhaps the concept of “compliance disengagement” can accomplish something similar with regard to understanding how “normal” researchers come to violate the basic rules of science.

AAAS (2015). Historical Trends in Federal R&D . http://www.aaas.org/page/historical-trends-federal-rd . Accessed January 4 2015.

AAMC-AAU (2008). Protecting patients, preserving integrity, advancing health: accelerating the implementation of COI policies in human subjects research. In AAMC (pp. 1–87). Washington DC: AAMC-AAU.

Ames, D. R., Rose, P., & Anderson, C. P. (2006). The NPI-16 as a short measure of narcissism. Journal of Research in Personality, 40 , 440–450.

Antes, A. L., Brown, R. P., Murphy, S. T., Waples, E. P., Mumford, M. D., Connelly, S., et al. (2007). Personality and ethical decision-making in research: The role of perceptions of self and others. Journal of Empirical Research on Human Research Ethics, 2 (4), 15–34. doi: 10.1525/jer.2007.2.4.15 .

Antes, A. L., & DuBois, J. M. (2014). Aligning objectives and assessment in responsible conduct of research instruction. Journal of Microbiology & Biology Education, 15 (2), doi: 10.1128/jmbe.v15i2.852 .

Antes, A. L., Murphy, S. T., Waples, E. P., Mumford, M. D., Brown, R. P., Connelly, S., et al. (2009). A meta-analysis of ethics instruction effectiveness in the sciences. Ethics and Behavior, 19 (5), 379–402. doi: 10.1080/10508420903035380 .

Antes, A. L., Wang, X., Mumford, M. D., Brown, R. P., Connelly, S., & Devenport, L. D. (2010). Evaluating the effects that existing instruction on responsible conduct of research has on ethical decision making. Academic Medicine, 85 (3), 519–526. doi: 10.1097/ACM.0b013e3181cd1cc5 .

Bandura, A. (1999). Moral disengagement in the perpetration of inhumanities. Personality and Social Psychology Review, 3 (3), 193–209. doi: 10.1207/s15327957pspr0303_3 .

Bandura, A., Barbaranelli, C., Caprara, G. V., & Pastorelli, C. (1996). Mechanisms of moral disengagement in the exercise of moral agency. Journal of Personality and Social Psychology, 71 (2), 364–374.

Barriga, A. Q., Gibbs, J. C., Potter, G. B., & Liau, A. K. (2001). How I Think (HIT) Questionnaire manual . Champaign, IL: Research Press.

Burk, D. L. (1995). Research misconduct: Deviance, due process, and the disestablishment of science. George Mason Independent Law Review, 3 , 305–515.

Cook, D. A., Bordage, G., & Schmidt, H. G. (2008). Description, justification and clarification: A framework for classifying the purposes of research in medical education. Medical Education, 42 (2), 128–133. doi: 10.1111/j.1365-2923.2007.02974.x .

Crowne, D. P., & Marlowe, D. (1960). A new scale of social desirability independent of psychopathology. Journal of Consulting Psychology, 24 , 349–354.

Davis, M. S., Riske-Morris, M., & Diaz, S. R. (2007). Causal factors implicated in research misconduct: Evidence from ORI case files. Science and Engineering Ethics, 13 (4), 395–414. doi: 10.1007/s11948-007-9045-2 .

DuBois, J. M. (2004). Is compliance a professional virtue of researchers? Reflections on promoting the responsible conduct of research. Ethics and Behavior, 14 (4), 383–395.

DuBois, J. M., Anderson, E. E., & Chibnall, J. (2013a). Assessing the need for a research ethics remediation program. Clinical and Translational Science, 6 (3), 209–213.

DuBois, J. M., Anderson, E. E., Chibnall, J., Carroll, K., Gibb, T., Ogbuka, C., et al. (2013b). Understanding research misconduct: A comparative analysis of 120 cases of professional wrongdoing. Accountability in Research, 20 (5–6), 320–338. doi: 10.1080/08989621.2013.822248 .

DuBois, J. M., Chibnall, J. T., Tait, R. C., Vander Wal, J. S., Baldwin, K. A., Antes, A. L., et al. (2015). Professional decision-making in research (PDR): The validity of a new measure. Science and Engineering Ethics , 1–26. doi: 10.1007/s11948-015-9667-8 .

DuBois, J. M., et al. (2012). Environmental factors contributing to wrongdoing in medicine: A criterion-based review studies and cases. Ethics and Behavior, 22 (3), 163–188.

Emanuel, E. J., Crouch, R. A., Arras, J. D., Moreno, J. D., & Grady, C. (Eds.). (2003). Ethical and regulatory aspects of clinical research: Readings and commentaries . Baltimore: Johns Hopkins University Press.

Fanelli, D. (2009). How many scientists fabricate and falsify research? A systematic review and meta-analysis of survey data. PLoS ONE, 4 (5), e5738. doi: 10.1371/journal.pone.0005738 .

Federation of American Societies for Experimental Biology. (2013). Findings of the FASEB survey on administrative burden . Bethesda, MD: Federation of American Societies for Experimental Biology.

Gibbs, J. C., Potter, G., & Goldstein, A. (1995). The EQUIP program. Teaching youth to think and act responsibly . Champaign, IL: Research Press.

Gini, G., & Pozzoli, T. (2013). Measuring self-serving cognitive distortions: A meta-analysis of the psychometric properties of the How I Think Questionnaire (HIT). European Journal of Developmental Psychology, 10 (4), 510–517. doi: 10.1080/17405629.2012.707312 .

Grant, G., Guyton, O., & Forrester, R. (1999). Creating effective research compliance programs in academic institutions. Academic Medicine, 74 , 951–971.

Helton-Fauth, W., Gaddis, B., Scott, G., Mumford, M., Devenport, L., Connelly, S., et al. (2003). A new approach to assessing ethical conduct in scientific work. Accountability in Research, 10 (4), 205–228. doi: 10.1080/714906104 .

Hyde, L. W., Shaw, D. S., & Moilanen, K. L. (2010). Developmental precursors of moral disengagement and the role of moral disengagement in the development of antisocial behavior. Journal of Abnormal Child Psychology, 38 (2), 197–209. doi: 10.1007/s10802-009-9358-5 .

Irwin, R. S. (2009). The role of conflict of interest in reporting of scientific information. Chest, 136 (1), 253–259. doi: 10.1378/chest.09-0890 .

Jones, J. H. (1993). Bad blood: The Tuskegee syphilis experiment (2nd revised ed.). New York, NY: Free Press.

Keith-Spiegel, P., & Koocher, G. P. (2005). The IRB paradox: Could the protectors also encourage deceit? Ethics and Behavior, 15 (4), 339–349. doi: 10.1207/s15327019eb1504_5 .

Koski, G. (2003). Beyond compliance…Is it too much to ask? IRB: Ethics and Human Research, 25 (5), 5–6.

Levine, R. J. (1986). Ethics and regulation of clinical research (2nd ed.). Baltimore: Urban & Schwarzenberg.

Martinson, B. C., Anderson, M. S., Crain, A. L., & De Vries, R. (2006). Scientists’ perceptions of organizational justice and self-reported misbehaviors. Journal of Empirical Research on Human Research Ethics, 1 (1), 51–66. doi: 10.1525/jer.2006.1.1.51 .

Martinson, B., Crain, A. L., Anderson, M., & DeVries, R. (2009). Institutions’ expectations for researchers’ self-funding, federal grant holding, and private industry involvement: Manifold drivers of self-interest and researcher behavior. Academic Medicine, 84 (11), 1491–1499.

Mazar, N., On, A., & Ariely, D. (2008). The dishonesty of honest people: A theory of self-concept maintenance. Journal of Marketing Research, 45 , 633–644.

Medeiros, K. E., Mecca, J. T., Gibson, C., Giorgini, V. D., Mumford, M. D., Devenport, L., & Connelly, S. (2014). Biases in ethical decision making among university faculty. Accountability In Research: Policies & Quality Assurance, 21 (4), 218–240. doi: 10.1080/08989621.2014.847670 .

Moore, C., Detert, J. R., Trevino, L. K., Baker, V. L., & Mayer, D. M. (2012). Why employees do bad things: Moral disengagement and unethical organizational behavior. Personnel Psychology, 65 , 1–48.

Moore, D. A., & Loewenstein, G. (2004). Self-interest, automaticity, and the psychology of conflict of interest. Social Justice Research, 17 (2), 189–202.

Mumford, M., Devenport, L., Brown, R., Connelly, S., Murphy, S., Hill, J., et al. (2006). Validation of ethical decision making measures: Evidence for a new set of measures. Ethics and Behavior, 16 (4), 319–345. doi: 10.1207/s15327019eb1604_4 .

National Bioethics Advisory Commission (2001). Ethical and policy issues in research involving human participants . Bethesda, MD.

National Commission. (1979). The Belmont report: Ethical principles and guidelines for the protection of human subjects of research . Washington, DC: Department of Health, Education, and Welfare.

National Research Council. (2012). Research universities and the future of America: Ten breakthrough actions vital to our nation’s prosperity and security . Washington, DC: National Academy of Science.

National Science Board. (2014). Reducing investigators’ administrative workload for federally funded research . Arlington, VA: National Science Foundation.

National Science Foundation (2014). Science and engineering indicators 2014 . http://www.nsf.gov/statistics/seind14/index.cfm/chapter-5/c5h.htm-s4 . Accessed April 4 2014.

Neely, J. G., Paniello, R. C., Graboyes, E. M., Sharon, J. D., Grindler, D. J., & Nussenbaum, B. (2014). Practical guide to understanding clinical research compliance. Otolaryngology - Head and Neck Surgery, 150 (5), 716–721. doi: 10.1177/0194599814524895 .

Office of Human Research Protections (2009). Protection of human subjects. In Office of Human Research Protections (Ed.), 45CFR46 . Washington, DC: Office of Human Research Protections.

Olson, L. E. (2010). Developing a framework for assessing responsible conduct of research education programs. Science and Engineering Ethics, 16 (1), 185–200. doi: 10.1007/s11948-010-9196-4 .

Penny, L. M., & Spector, P. E. (2002). Narcissism and counterproductive work behavior: Do bigger egos mean bigger problems? International Journal of Selection and Assessment, 10 , 126–134.

Raskin, R., & Terry, H. (1988). A principal-components analysis of the Narcissistic Personality Inventory and further evidence of its construct validity. Journal of Personality and Social Psychology, 54 , 890–902.

Redman, B. (2014). Review of measurement instruments in research ethics in the biomedical sciences, 2008–2012. Research Ethics, 20 (4), 141–150. doi: 10.1177/1747016114538963 .

Rest, J. R., Narvaez, D., Bebeau, M. J., & Thoma, S. J. (1999). Postconventional moral thinking: A neo-Kohlbergian approach . Mahwah, NJ: Lawrence Erlbaum Associates Inc.

Reynolds, W. M. (1982). Development of reliable and valid short forms of the Marlowe–Crowne Social Desirability Scale. Journal of Clinical Psychology, 38 , 119–125.

Rollin, B. E. (2006). Animal rights and human morality (3rd ed.). Amherst, NY: Prometheus Books.

Samenow, S. E. (2001). Understanding the criminal mind: A phenomenological approach. Journal of Psychiatry and Law, 29 , 275–293.

Shamoo, A. E., & Resnik, D. B. (2015). Responsible conduct of research (3rd ed.). New York: Oxford University Press.

Steneck, N. H. (2007). ORI introduction to the responsible conduct of research (Revised ed.). Washington, DC: US Government Printing Office.

Turner, J. H., & Valentine, S. R. (2001). Cynicism as a fundamental dimension of moral decision-making: A scale development. Journal of Business Ethics, 34 , 123–136.

Wallinius, M., Johansson, P., Larden, M., & Dernevik, M. (2011). Self-serving cognitive distortions and antisocial behavior among adults and adolescents. Criminal Justice and Behavior, 38 (3), 286–301. doi: 10.1177/0093854810396139 .

Youngers, J., & Webb, P. (2014). Regulations and compliance: 2014. A compendium of regulations and certifications applicable to sponsored programs . Washington, DC: NCURA.

Acknowledgments

The adaptation of the HIT into the HIT-Res was made possible with CTSA supplement funding from NIH to establish the Restoring Professionalism and Integrity in Research Program (UL1 RR024992-05S2). The validation of the HIT-Res in this study was supported by the US Office of Research Integrity (6 ORIIR130002-01-01). We thank Kari Baldwin for support in recruitment of participants. The How I Think (HIT) questionnaire is owned by Research Press (Champaign, IL). The Professionalism and Integrity in Research Program (St. Louis, MO) purchased from Research Press the right to adapt the HIT into the HIT-Res and holds copyright of the HIT-Res. Permission to use the HIT-Res can be obtained by writing to [email protected].

Author information

Authors and Affiliations

Division of General Medical Sciences, School of Medicine, Washington University in St. Louis, Campus Box 8005, 4523 Clayton Avenue, St. Louis, MO, 63110, USA

James M. DuBois

Saint Louis University, St. Louis, MO, USA

John T. Chibnall

Ohio State University, Columbus, OH, USA

John Gibbs

Corresponding author

Correspondence to James M. DuBois .

Ethics declarations

Ethical approval.

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. Informed consent was obtained from all individual participants included in the study.

Appendix: How I Think About Research (HIT-Res)

Instructions: Each statement in this survey may describe how you think about research. Read each statement carefully then rate the extent to which you agree or disagree with the statement as it describes your current thinking. Your answers will be treated confidentially.

The following abbreviations are used in this questionnaire:

IRB = Institutional Review Board. (In many nations, these boards are called Research Ethics Committees).

IACUC = Institutional Animal Care and Use Committee.

When IRBs or IACUCs require a lot of changes to research protocols they are just asking for protocol violations.

I have done things that I’m not proud of. (AR)

You can’t advance a career in research without stepping on someone’s toes.

Researchers should strive for excellence in their work. (PF)

IRBs and IACUCs focus so much on rules and regulations they can make it impossible to do research.

The pressure to get grants almost forces people to take liberties with their data.

No matter what I do, someone will find a compliance problem in my research.

You cannot expect research collaborators to act with complete integrity.

The advancement of my science should have priority over the quality of life of a lab mouse.

It’s not my fault if I lose my temper when others produce poor work.

I do not have time to deal with IRBs, IACUCs, and other oversight offices.

I have sometimes said something bad about a colleague. (AR)

Everyone drops data sometimes when they know it’s leading to wrong results.

I know which corners I can cut to meet a deadline.

I have covered up some things that I have done at work. (AR)

My institution makes it too hard to disclose all conflicts of interest.

People who don’t understand the realities of animal research are responsible for all of these strict animal care regulations.

It’s annoying that institutional committees do not trust researchers to do their jobs right.

We can’t be perfect–we just have to muddle our way through when it comes to research ethics.

Consent forms don’t protect participants because no one reads them anyway.

I really do not need community members or a council to tell me whether my research has social significance.

You cannot blame international researchers for plagiarism if they feel forced to publish in English.

It’s important to think of colleague’s feelings. (PF)

I have misrepresented something to get myself out of trouble. (AR)

If I know I’m going to do good science, I really don’t care much about compliance.

It is not the principal investigator’s fault when students and lab personnel mismanage animal care.

I have tried to get back at someone at work. (AR)

In the past, I took something from work without asking. (AR)

Courtesy authorship is just part of the culture of research.

Journal readers have only themselves to blame if they can’t figure out the bias in industry-funded research.

All people prioritize their self-interests whether or not they have conflicts of interest.

It’s no one’s business if I make extra money consulting.

Sometimes researchers need to deviate from approved protocols to keep study sponsors happy.

Most successful researchers bend inclusion criteria a little bit to meet their enrollment goals.

No one discloses all of their conflicts of interest.

Everybody has conflicts of interest, it’s no big deal.

I would require students to complete a survey if it were important to my research.

Exaggerating your percent effort on a project is not so bad if the funding is available.

When trainees need you, you should be there for them. (PF)

Deviating from an approved research protocol is not really a problem if there is no harm to participants, animals, or data.

I am generous with my time. (PF)

Keeping animal cages clean is so challenging, passing a random inspection is more a matter of luck.

I’m happy to share my knowledge with others. (PF)

Publications should be open access. (PF)

The requirements for animal care are out of proportion to the actual risks to the animals.

Response scale: 1–6 Likert scale

1. Strongly Disagree
2. Disagree
3. Slightly Disagree
4. Slightly Agree
5. Agree
6. Strongly Agree
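For illustration only (the authors' published scoring procedure is not reproduced here, and the item keys below are invented), a HIT-Res-style total could be computed as the mean response across the compliance-disengagement items on this 1–6 scale, excluding the anomalous-responding (AR) and positive-filler (PF) items:

```python
# Hypothetical scoring sketch for a HIT-Res-style questionnaire.
# Assumes responses coded 1-6 (Strongly Disagree .. Strongly Agree) and that
# AR/PF items are fillers excluded from the total; item keys are invented.

def hit_res_score(responses: dict, filler_items: set) -> float:
    """Return the mean of non-filler item responses (higher = more
    compliance disengagement under this illustrative scoring)."""
    scored = [v for item, v in responses.items() if item not in filler_items]
    if not scored:
        raise ValueError("no scorable items")
    return sum(scored) / len(scored)

responses = {"item01": 2, "item02": 5, "item03_AR": 6, "item04": 3}
score = hit_res_score(responses, filler_items={"item03_AR"})
print(round(score, 2))  # -> 3.33 (mean of 2, 5, 3)
```

Excluding filler items before averaging is standard practice for instruments with embedded validity or filler scales, but the exact HIT-Res scoring rules should be taken from the published measure.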

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License ( http://creativecommons.org/licenses/by/4.0/ ), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

DuBois, J.M., Chibnall, J.T. & Gibbs, J. Compliance Disengagement in Research: Development and Validation of a New Measure. Sci Eng Ethics 22, 965–988 (2016). https://doi.org/10.1007/s11948-015-9681-x


Received : 03 April 2015

Accepted : 06 July 2015

Published : 15 July 2015

Issue Date : August 2016

DOI : https://doi.org/10.1007/s11948-015-9681-x


  • How I Think Questionnaire
  • Research compliance
  • Moral disengagement
  • Ethical decision-making

Pharmacy (Basel)

Medication Adherence and Compliance: Recipe for Improving Patient Outcomes

Taiwo Opeyemi Aremu

1 Department of Pharmaceutical Care & Health Systems (PCHS), College of Pharmacy, University of Minnesota, 308 Harvard Street SE, Minneapolis, MN 55455, USA

2 Division of Environmental Health Sciences, School of Public Health, University of Minnesota, 420 Delaware Street SE, Minneapolis, MN 55455, USA

Oluwatosin Esther Oluwole

3 Division of Epidemiology & Community Health, School of Public Health, University of Minnesota, 1300 S. 2nd Street, Minneapolis, MN 55455, USA

Kehinde Oluwatosin Adeyinka

4 Department of Radiation Oncology, University College Hospital (UCH), Ibadan 200248, Nigeria

Jon C. Schommer

The indices of patients’ health outcomes have historically included recurrence of symptoms, number of emergency visits, hospitalization and re-admission rates, morbidity, and mortality. As significant healthcare players, providers can influence these events, including the timeliness of diagnosis and disease management, the cost of treatment, access to health insurance, and medication adherence. Beyond healthcare availability and access, the ability of patients to adhere to providers’ treatment recommendations goes a long way to serve as a recipe for improving patient outcomes. Unfortunately, medication nonadherence has been prevalent, culminating in worsened health conditions, increased cost of care, and increased healthcare spending. This article provides some innovative ideas and good considerations for encouraging medication adherence. Improving providers’ and patients’ education and adopting active and passive communication, including consented reminders, could enhance compliance. Embracing partnerships between providers’ organizations and faith-based and community organizations could drive adherence. Adopting an income-based cap on out-of-pocket spending and adapting the physical properties, bioavailability, and dosage regimen of medications to accommodate diverse patient population preferences could encourage refills and compliance. Good medication adherence can culminate in improved patient outcomes.

1. Introduction

Beyond the glitz and glamor of advances in medical technology in the 21st century and the technicalities of drug development, well-informed prescribing, and medication dispensing, what happens to the patient is the ultimate ideal known as the patient’s outcome. Patients’ outcomes encompass the events surrounding illness, disease investigation, and medication use, including persistence, recurrence, and remission of symptoms, number of visits to the emergency room, hospitalization and readmission, timeliness, effectiveness, and safety of care, morbidity, and mortality. Some factors influencing patients’ health outcomes may include timeliness of diagnosis and disease management, cost of treatment, access to medical insurance, and medication adherence.

2. Medication Adherence and Nonadherence

Medication adherence, otherwise known as medication compliance, follows from treatment recommendations. It can be defined as the “act or extent of conforming to a provider recommendation/prescription based on timing, dosage, and frequency of medication use” [ 1 ], or as “a ratio of the number of drug doses taken to the number of doses prescribed over a given time period” [ 2 ]. Medication compliance can be measured using the medication possession ratio (MPR), self-report adherence scales, pharmacy refill records and pill counts, microelectronic event monitoring, biological indices (blood or urine levels of a drug or its metabolites), and supervised dosing [ 3 ]. Medication compliance is a precursor of patients’ health outcomes: nonadherence to medication can result in poor health outcomes, including worsening medical conditions, an increase in comorbidities, and death [ 4 ]. The resulting increase in health needs often culminates in a higher cost of care, which in turn increases overall healthcare spending [ 4 ]. Medication nonadherence is, therefore, an issue of global health importance. To proffer solutions, it is essential to understand the influencing factors, grouped into three categories: providers, patients, and medications.
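As a rough numeric illustration of the ratio definitions above (a sketch, not a validated adherence algorithm; the function name and refill data are invented), the medication possession ratio can be computed from pharmacy refill records as total days’ supply dispensed divided by the number of days in the observation window:

```python
from datetime import date

# Illustrative MPR calculation from refill records (hypothetical data).
def medication_possession_ratio(fills, period_start: date, period_end: date) -> float:
    """MPR = total days' supply dispensed in the window / days in the window.
    `fills` is a list of (fill_date, days_supply) pairs; early refills can push
    the raw ratio above 1.0, so it is conventionally capped at 1.0."""
    period_days = (period_end - period_start).days + 1
    supplied = sum(days for fill_date, days in fills
                   if period_start <= fill_date <= period_end)
    return min(supplied / period_days, 1.0)

# Two 30-day fills over a 90-day observation window (Jan 1 - Mar 31).
fills = [(date(2023, 1, 1), 30), (date(2023, 2, 5), 30)]
mpr = medication_possession_ratio(fills, date(2023, 1, 1), date(2023, 3, 31))
print(round(mpr, 2))  # -> 0.67
```

Real MPR studies differ in details such as window definition, capping, and handling of overlapping fills, so this sketch should be read only as a demonstration of the ratio in reference [ 2 ].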

2.1. Providers’ Factors

As one of the stakeholders of patient care, providers, including physicians, pharmacists, nurse practitioners, and physician assistants, play a role in determining whether patients comply with prescriptions. Due to the demands of daily routines, providers may focus on disease dynamics and treatment options while neglecting patients’ acceptance of treatment modalities, especially when medications are involved. As a result, providers may fail to adequately educate patients about the formulation, timing, dosage, frequency, side effects, and costs of the prescribed medicine [ 5 ].

2.2. Patients’ Factors

Patients are the primary stakeholders of health care, hence the need to consider their needs when dealing with nonadherence to medications. While acknowledging that patients are custodians of their own wellbeing, it is imperative to note that some deviations may be due to misinformation about their diagnosis and treatment options. Factors including illiteracy, polypharmacy (multiple medications), alcohol use, cultural issues, religious beliefs, and a paucity of knowledge about the effects of treatment options can adversely influence medication adherence [ 6 , 7 , 8 ]. Mental health issues beyond patients’ control, such as depression and cognitive impairment, can equally contribute to nonadherence [ 6 , 7 ]. The socioeconomic status of patients, largely a function of employment, determines their access to health insurance and, consequently, their ability to afford their medications.

2.3. Medication/Treatment Factors

A medication’s characteristics, including its pharmaceutical formulation, dosage, size, frequency of use, and dosage form (for example, tablet, capsule, powder, suspension, emulsion, syrup, injection, aerosol, or foam), can influence adherence. Cost, timing, and side effects can also be barriers to adherence [ 7 ]. For example, the side effects of antiretroviral therapy (ART), including headaches, diarrhea, vomiting, and peripheral neuropathy, can discourage compliance with what is supposed to be a lifelong medication. Noncompliance with ART can result in increased viral load, reduced CD4 counts, and poor health outcomes.

3. Approaches to Improving Medication Adherence

3.1. Providers’ Education

Providers’ education can influence patients’ ability to adhere to prescribed therapies/medications [ 9 , 10 ]. As catalysts for medication adherence, providers must be well informed about the characteristics of the drug options available for the illness being managed. Routine hospital grand rounds and continuing medical education can actively focus on improving providers’ education. Health care teams can adopt a care protocol that includes discussing each drug option’s pros and cons. The cost of brand versus generic drugs, the side effects of the medications, and the number of times patients would use the medication daily should be part of the providers’ study guide. Understanding drug characteristics positions providers to identify with patients’ realities.

3.2. Communication

Beyond providers’ knowledge lie a strong patient–provider relationship and providers’ ability to convey the message to patients. Communication and interpersonal skills grounded in empathy and acknowledgment of the challenge of using medications for acute and chronic diseases can foster compliance. Given their proximity to the community, community pharmacists can open a line of communication with patients built on dignity, respect, and understanding [ 4 , 11 ]. Providers are instrumental in encouraging compliance by raising awareness, serving as worthy role models, and sharing personal testimonies, if any. They can also foster compliance through consented reminders via text messages, emails, automated calls, and weekly mailed letters, which can help mitigate the unintended effect of forgetfulness.

3.3. Patients’ Education

An informed patient population is a recipe for improving medication adherence. Knowledge about the implications of not adhering to the providers’ instructions on medication use can foster compliance. An understanding of the importance of attending the required clinic visits/routine follow-ups can equally enhance compliance. Delivery of these messages can be through any of the following:

3.3.1. One-on-One Interaction with Healthcare Professionals

This could entail the historic word-of-mouth method of disseminating information. Patients are most likely to accept messages when the messengers are healthcare professionals, who are often regarded as credible sources of health information [ 12 ]. To reinforce such information, providers can hand pamphlets or handouts to patients or caregivers in one-on-one situations. This form of interaction, which fosters patients’ education, can occur during routine clinic visits and follow-ups, either in person or remotely (telemedicine).

3.3.2. Mass Communication Using Social and Digital Media

With rising misinformation and conspiracy theories about medication use on social media fueling medication nonadherence, healthcare professionals/providers bear the considerable responsibility of tactfully refuting this misleading information. Patients should be aware that their providers and health departments/ministries (local, state, and federal) are credible sources of information. Providers’ local and national associations can be at the front line of educating the public through printed media (e.g., flyers, posters, billboards, and newspapers), social networking sites (e.g., Facebook, Twitter, and Instagram), text messages, mobile applications (e.g., WhatsApp and Telegram), blogs, websites, television, and radio [ 13 ]. Printed materials can be made available in stores selling or promoting healthcare products and services (e.g., pharmacies and drug stores), and free digital materials are easily accessible to anyone with a computer, laptop, tablet, or cell phone.

3.3.3. Community Organizations, Including Faith-Based Organizations

Studies have shown that families are usually willing to accept messages from community and faith-based/spiritual leaders [ 12 , 14 ]. These leaders are well-known and trusted community members. The providers’ national and local associations and allied bodies, including public health associations, can adopt an active partnership with community and faith-based organizations. This union can serve as a driver for improving the health knowledge base of members and followers.

3.4. Advocacy

The high cost of healthcare in the United States (US) and elsewhere can limit patients’ ability to afford and adhere to treatment options. Through their respective associations/bodies, providers can therefore champion policy proposals that advocate a cap on out-of-pocket spending on all prescription drugs based on the income and socioeconomic status of the patient population served. Because affordable medications that put less strain on patients’ income can improve refills and adherence, providers can educate elected officials about the importance of being at the forefront of improving health outcomes. Providers can also encourage policymakers to enact well-informed health-related policies that address the cost of, availability of, and access to prescription drugs.

3.5. Adaptation of Medications

While acknowledging the ease of medication access through medication synchronization programs, medication packaging, and delivery services in the US [ 15 , 16 ], universal adoption of these efforts could support more robust patient outcomes, given global connectivity and population growth. To address nonadherence owing to the physical properties of medicines, manufacturers can, when possible, offer each drug in different formulations, such as liquid or solid, to accommodate patients’ preferences. Manufacturers can also adapt medications’ strength, bioavailability, and dosage regimens to allow less frequent administration. For example, slow-release capsules taken once per week can encourage refills and compliance, especially in patients on polypharmacy. Offering a capsule alternative to tablets and letting patients choose can likewise enhance compliance. Because people differ in desires, tastes, and wants, medicines can be produced in different flavors and given an appealing appearance, boosting users’ morale. Globally, researchers can continue working to improve existing medications so that they have fewer side effects. Table 1 provides a snapshot of these approaches to enhancing medication adherence.

Table 1. Approaches to improving medication adherence.

4. Conclusions

Patients’ adherence to medications is a recipe for improved health outcomes and overall wellbeing. As long as disease exists, there will be conditions that negatively affect the body and for which medications are designed as a corrective measure. While acknowledging the factors contributing to nonadherence, providers, patients, and drug manufacturers should continue to explore innovative ways to encourage compliance. A well-rounded, well-informed provider cohort, together with a well-informed patient population and diverse medication options, can promote medication adherence, which invariably improves patients’ health outcomes.


Funding Statement

This research received no external funding.

Author Contributions

T.O.A., O.E.O., K.O.A. and J.C.S. met the ICMJE criteria for authorship. T.O.A. and O.E.O. designed and wrote the first draft of the article. K.O.A. and J.C.S. reviewed and edited the draft. All authors have read and agreed to the published version of the manuscript.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Conflicts of Interest

The authors declare no conflict of interest.

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.


The state of AI in early 2024: Gen AI adoption spikes and starts to generate value

If 2023 was the year the world discovered generative AI (gen AI) , 2024 is the year organizations truly began using—and deriving business value from—this new technology. In the latest McKinsey Global Survey  on AI, 65 percent of respondents report that their organizations are regularly using gen AI, nearly double the percentage from our previous survey just ten months ago. Respondents’ expectations for gen AI’s impact remain as high as they were last year , with three-quarters predicting that gen AI will lead to significant or disruptive change in their industries in the years ahead.

About the authors

This article is a collaborative effort by Alex Singla , Alexander Sukharevsky , Lareina Yee , and Michael Chui , with Bryce Hall , representing views from QuantumBlack, AI by McKinsey, and McKinsey Digital.

Organizations are already seeing material benefits from gen AI use, reporting both cost decreases and revenue jumps in the business units deploying the technology. The survey also provides insights into the kinds of risks presented by gen AI—most notably, inaccuracy—as well as the emerging practices of top performers to mitigate those challenges and capture value.

AI adoption surges

Interest in generative AI has also brightened the spotlight on a broader set of AI capabilities. For the past six years, AI adoption by respondents’ organizations has hovered at about 50 percent. This year, the survey finds that adoption has jumped to 72 percent (Exhibit 1). And the interest is truly global in scope. Our 2023 survey found that AI adoption did not reach 66 percent in any region; this year, however, more than two-thirds of respondents in nearly every region say their organizations are using AI. Organizations based in Central and South America are the exception, with 58 percent of respondents there reporting AI adoption. By industry, the biggest increase in adoption is in professional services (organizations focused on human resources, legal services, management consulting, market research, R&D, tax preparation, and training).

Also, responses suggest that companies are now using AI in more parts of the business. Half of respondents say their organizations have adopted AI in two or more business functions, up from less than a third of respondents in 2023 (Exhibit 2).

Gen AI adoption is most common in the functions where it can create the most value

Most respondents now report that their organizations—and they as individuals—are using gen AI. Sixty-five percent of respondents say their organizations are regularly using gen AI in at least one business function, up from one-third last year. The average organization using gen AI is doing so in two functions, most often in marketing and sales and in product and service development—two functions in which previous research (“The economic potential of generative AI: The next productivity frontier,” McKinsey, June 14, 2023) determined that gen AI adoption could generate the most value—as well as in IT (Exhibit 3). The biggest increase from 2023 is found in marketing and sales, where reported adoption has more than doubled. Yet across functions, only two use cases, both within marketing and sales, are reported by 15 percent or more of respondents.

Gen AI also is weaving its way into respondents’ personal lives. Compared with 2023, respondents are much more likely to be using gen AI at work and even more likely to be using gen AI both at work and in their personal lives (Exhibit 4). The survey finds upticks in gen AI use across all regions, with the largest increases in Asia–Pacific and Greater China. Respondents at the highest seniority levels, meanwhile, show larger jumps in the use of gen AI tools for work and outside of work compared with their midlevel-management peers. Looking at specific industries, respondents working in energy and materials and in professional services report the largest increase in gen AI use.

Investments in gen AI and analytical AI are beginning to create value

The latest survey also shows how different industries are budgeting for gen AI. Responses suggest that, in many industries, organizations are about equally as likely to be investing more than 5 percent of their digital budgets in gen AI as they are in nongenerative, analytical-AI solutions (Exhibit 5). Yet in most industries, larger shares of respondents report that their organizations spend more than 20 percent on analytical AI than on gen AI. Looking ahead, most respondents—67 percent—expect their organizations to invest more in AI over the next three years.

Where are those investments paying off? For the first time, our latest survey explored the value created by gen AI use by business function. The function in which the largest share of respondents report seeing cost decreases is human resources. Respondents most commonly report meaningful revenue increases (of more than 5 percent) in supply chain and inventory management (Exhibit 6). For analytical AI, respondents most often report seeing cost benefits in service operations—in line with what we found last year —as well as meaningful revenue increases from AI use in marketing and sales.

Inaccuracy: The most recognized and experienced risk of gen AI use

As businesses begin to see the benefits of gen AI, they’re also recognizing the diverse risks associated with the technology. These can range from data management risks such as data privacy, bias, or intellectual property (IP) infringement to model management risks, which tend to focus on inaccurate output or lack of explainability. A third big risk category is security and incorrect use.

Respondents to the latest survey are more likely than they were last year to say their organizations consider inaccuracy and IP infringement to be relevant to their use of gen AI, and about half continue to view cybersecurity as a risk (Exhibit 7).

Conversely, respondents are less likely than they were last year to say their organizations consider workforce and labor displacement to be relevant risks and are not increasing efforts to mitigate them.

In fact, inaccuracy— which can affect use cases across the gen AI value chain , ranging from customer journeys and summarization to coding and creative content—is the only risk that respondents are significantly more likely than last year to say their organizations are actively working to mitigate.

Some organizations have already experienced negative consequences from the use of gen AI, with 44 percent of respondents saying their organizations have experienced at least one consequence (Exhibit 8). Respondents most often report inaccuracy as a risk that has affected their organizations, followed by cybersecurity and explainability.

Our previous research has found that several elements of governance can help in scaling gen AI use responsibly, yet few respondents report having these risk-related practices in place (“Implementing generative AI with speed and safety,” McKinsey Quarterly, March 13, 2024). For example, just 18 percent say their organizations have an enterprise-wide council or board with the authority to make decisions involving responsible AI governance, and only one-third say gen AI risk awareness and risk mitigation controls are required skill sets for technical talent.

Bringing gen AI capabilities to bear

The latest survey also sought to understand how, and how quickly, organizations are deploying these new gen AI tools. We have found three archetypes for implementing gen AI solutions: takers use off-the-shelf, publicly available solutions; shapers customize those tools with proprietary data and systems; and makers develop their own foundation models from scratch (“Technology’s generational moment with generative AI: A CIO and CTO guide,” McKinsey, July 11, 2023). Across most industries, the survey results suggest that organizations are finding off-the-shelf offerings applicable to their business needs—though many are pursuing opportunities to customize models or even develop their own (Exhibit 9). About half of reported gen AI uses within respondents’ business functions are utilizing off-the-shelf, publicly available models or tools, with little or no customization. Respondents in energy and materials, technology, and media and telecommunications are more likely to report significant customization or tuning of publicly available models or developing their own proprietary models to address specific business needs.

Respondents most often report that their organizations required one to four months from the start of a project to put gen AI into production, though the time it takes varies by business function (Exhibit 10). It also depends upon the approach for acquiring those capabilities. Not surprisingly, reported uses of highly customized or proprietary models are 1.5 times more likely than off-the-shelf, publicly available models to take five months or more to implement.

Gen AI high performers are excelling despite facing challenges

Gen AI is a new technology, and organizations are still early in the journey of pursuing its opportunities and scaling it across functions. So it’s little surprise that only a small subset of respondents (46 out of 876) report that a meaningful share of their organizations’ EBIT can be attributed to their deployment of gen AI. Still, these gen AI leaders are worth examining closely. These, after all, are the early movers, who already attribute more than 10 percent of their organizations’ EBIT to their use of gen AI. Forty-two percent of these high performers say more than 20 percent of their EBIT is attributable to their use of nongenerative, analytical AI, and they span industries and regions—though most are at organizations with less than $1 billion in annual revenue. The AI-related practices at these organizations can offer guidance to those looking to create value from gen AI adoption at their own organizations.

To start, gen AI high performers are using gen AI in more business functions—an average of three functions, while others average two. They, like other organizations, are most likely to use gen AI in marketing and sales and product or service development, but they’re much more likely than others to use gen AI solutions in risk, legal, and compliance; in strategy and corporate finance; and in supply chain and inventory management. They’re more than three times as likely as others to be using gen AI in activities ranging from processing of accounting documents and risk assessment to R&D testing and pricing and promotions. While, overall, about half of reported gen AI applications within business functions are utilizing publicly available models or tools, gen AI high performers are less likely to use those off-the-shelf options than to either implement significantly customized versions of those tools or to develop their own proprietary foundation models.

What else are these high performers doing differently? For one thing, they are paying more attention to gen-AI-related risks. Perhaps because they are further along on their journeys, they are more likely than others to say their organizations have experienced every negative consequence from gen AI we asked about, from cybersecurity and personal privacy to explainability and IP infringement. Given that, they are more likely than others to report that their organizations consider those risks, as well as regulatory compliance, environmental impacts, and political stability, to be relevant to their gen AI use, and they say they take steps to mitigate more risks than others do.

Gen AI high performers are also much more likely to say their organizations follow a set of risk-related best practices (Exhibit 11). For example, they are nearly twice as likely as others to involve the legal function and embed risk reviews early on in the development of gen AI solutions—that is, to “ shift left .” They’re also much more likely than others to employ a wide range of other best practices, from strategy-related practices to those related to scaling.

In addition to experiencing the risks of gen AI adoption, high performers have encountered other challenges that can serve as warnings to others (Exhibit 12). Seventy percent say they have experienced difficulties with data, including defining processes for data governance, developing the ability to quickly integrate data into AI models, and an insufficient amount of training data, highlighting the essential role that data play in capturing value. High performers are also more likely than others to report experiencing challenges with their operating models, such as implementing agile ways of working and effective sprint performance management.

About the research

The online survey was in the field from February 22 to March 5, 2024, and garnered responses from 1,363 participants representing the full range of regions, industries, company sizes, functional specialties, and tenures. Of those respondents, 981 said their organizations had adopted AI in at least one business function, and 878 said their organizations were regularly using gen AI in at least one function. To adjust for differences in response rates, the data are weighted by the contribution of each respondent’s nation to global GDP.
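The GDP-weighting step described above can be sketched as follows (illustrative figures only; the survey’s actual countries, weights, and responses are not published here):

```python
# Illustrative reweighting of survey responses by each country's share of
# global GDP (hypothetical figures, not McKinsey's actual survey data).

def gdp_weighted_share(responses, gdp_share):
    """responses: {country: (num_yes, num_total)}; gdp_share: {country: weight}.
    Returns the GDP-weighted proportion of respondents answering 'yes'."""
    total_weight = sum(gdp_share[c] for c in responses)
    weighted_yes = sum(gdp_share[c] * (yes / total)
                       for c, (yes, total) in responses.items())
    return weighted_yes / total_weight

# Country A (75% of GDP) reports 80% adoption; country B (25%) reports 40%.
responses = {"A": (80, 100), "B": (40, 100)}
gdp_share = {"A": 0.75, "B": 0.25}
share = gdp_weighted_share(responses, gdp_share)
print(round(share, 2))  # -> 0.7  (0.75*0.8 + 0.25*0.4)
```

This kind of weighting corrects for uneven response rates across countries by making each country count in proportion to its economic size rather than its number of respondents.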

Alex Singla and Alexander Sukharevsky  are global coleaders of QuantumBlack, AI by McKinsey, and senior partners in McKinsey’s Chicago and London offices, respectively; Lareina Yee  is a senior partner in the Bay Area office, where Michael Chui , a McKinsey Global Institute partner, is a partner; and Bryce Hall  is an associate partner in the Washington, DC, office.

They wish to thank Kaitlin Noe, Larry Kanter, Mallika Jhamb, and Shinjini Srivastava for their contributions to this work.

This article was edited by Heather Hanselman, a senior editor in McKinsey’s Atlanta office.



COMMENTS

  1. (PDF) Compliance Theories: A Literature Review

compliance is to raise the fine for infractions (principally in relation to fighting crime) or to offer a bonus or other monetary incentive for cooperation (an argument that is often made for ...

  2. PDF How to Maintain Research Compliance During the Life of a Study

A standard for the design, conduct, performance, monitoring, auditing, recording, analyses, and reporting of clinical trials. Essential documentation per GCP: Investigator's Brochure, Protocol, Consent Form(s), Recruitment Material, IRB Approvals, CVs (Qualifications), Normal Lab ...

  3. Patient compliance: A concept analysis

Therefore, the literature search revealed only 17 papers that provided a clear definition of patient compliance (Table 1). ... The new definition of compliance given in this concept analysis provides clarity and directions for future inquiry and nursing practice. ...

  4. PDF the cambridge handbook of compliance

Compliance has become key to our contemporary markets, societies, and modes of governance across a variety of public and private domains. While this has stimulated a rich body of empirical and practical expertise on compliance, thus far, there has been no comprehensive understanding of what compliance is or how it influences various fields and ...

  5. PDF Research Compliance and Quality Assurance

Process: identification, analysis, and mitigation of research-related risks. Determine the highest-risk areas and prioritize accordingly. For example: risks to patients/subjects, risks to the PI/Sponsor-Investigator, financial risks to the institution, reputational risks to the institution, etc.

  6. A systematic literature review on compliance requirements ...

About 47 research papers were retrieved, but 28 papers satisfied the criteria for providing answers to the research questions. ... The compliance conditions were defined to support the approach by examining adherence patterns that recurred in the business procedure. The author made use of a declarative language based on high-level ...

  7. PDF The Compliance Role: A Proposal for Improved Understanding

Research Paper Series No. 03. Abstract: Although clearly defining the role of the compliance function is difficult, doing so is crucial to making an organizational compliance programme effective. Reviewing examples from previous discussions, this paper suggests two new and improved ways of conceiving the compliance role.

  8. Compliance: a concept analysis

    Compliance can be defined in many ways. The meaning of the concept is directly dependent upon the discipline and the context in which it is used. Lacking a gold standard for its measurement, a clear definition for the concept of compliance in nursing and other health-related professions should be explored. This article explores the definitions ...

  9. Compliance Disengagement in Research: Development and ...

    In the world of research, compliance with research regulations is not the same as ethics, but it is closely related. One could say that compliance is how most societies with advanced research programs operationalize many ethical obligations. This paper reports on the development of the How I Think about Research (HIT-Res) questionnaire, which is an adaptation of the How I Think (HIT ...

  10. (PDF) Measuring Compliance: The Challenges in Assessing and

A major question in corporate compliance research and practice is how to establish the effectiveness of compliance programs and policies in promoting desirable outcomes. To assess such ...

  11. Information Security Policy Compliance: Systematic Literature Review

Conclusion: This paper identified research gaps in previous studies. First, most research on compliance with information security uses human behavior theory; further study is needed of other factors, such as organizational theory, that can influence compliance. Second, there is a lack of studies evaluating compliance with information security policy.

  12. Regulations and Policies Related to the Conduct of Research

CONFLICT OF INTEREST. A number of organizations have defined COIs in research and medicine. The Institute of Medicine has defined COI broadly as a set of circumstances resulting in a risk that a person's professional judgments or actions regarding a primary interest will be unduly influenced by a secondary interest. The Public Health Service (PHS) has taken a narrower view and specifically ...

  13. Maintaining business process compliance despite changes: a decision

Implications for research and practice: In this paper, we presented methods for the identification of compliance requirements and compliance violations in case of replacing or removing an element. Further, we presented a method that recommends adaptations of business process models following compliance violations.

  14. [PDF] Definition of compliance.

Several definitions of compliance are presented, and the importance that the differentiation between these types of compliance holds for research and therapy is discussed. These are Aspirational Compliance, Standard Compliance, Therapeutic Compliance, Attitudinal Compliance, Habitual Compliance ...

  15. The Compliance Function: An Overview by Geoffrey P. Miller

    Abstract. The compliance function consists of efforts organizations undertake to ensure that employees and others associated with the firm do not violate applicable rules, regulations or norms. It is a form of internalized law enforcement which, if it functions effectively, can substitute for much (although not all) of the enforcement ...

  16. PDF Research Compliance 101

Compliance Officers function to protect their institutions from risk, preserve the integrity of their clinical programs, and enhance the safety of their institution's facilities, data, patients, staff, and physicians. Achieving such a broad objective requires careful review and ongoing assessment of key ...

  17. Identifying legitimacy: Experimental evidence on compliance with ...

    In keeping with prior experimental research on compliance with authority, subjects in both experiments participated in a linear public goods contribution game with a centralized enforcer. The experimental protocol is described in detail in Methods, including the payoff functions and information structure. A brief summary follows here.

  18. [PDF] The Compliance Function: An Overview

The compliance function consists of efforts organizations undertake to ensure that employees and others associated with the firm do not violate applicable rules, regulations or norms. It is a form of internalized law enforcement which, if it functions effectively, can substitute for much (although not all) of the enforcement activities ...

  19. Medication Adherence and Compliance: Recipe for Improving Patient

Medication compliance is a precursor of patients' health outcomes. As such, nonadherence to medication can result in poor health outcomes, including worsening medical conditions, an increase in comorbidities, and death [4]. The resultant increase in health needs from medication nonadherence often culminates in an increase in the cost of ...

  20. FAR

Part 1 - Federal Acquisition Regulations System. Part 2 - Definitions of Words and Terms. Part 3 - Improper Business Practices and Personal Conflicts of Interest. Part 4 - Administrative and Information Matters.

  21. The state of AI in early 2024: Gen AI adoption spikes and starts to

    If 2023 was the year the world discovered generative AI (gen AI), 2024 is the year organizations truly began using—and deriving business value from—this new technology. In the latest McKinsey Global Survey on AI, 65 percent of respondents report that their organizations are regularly using gen AI, nearly double the percentage from our ...

  22. Department of Human Services

Our mission is to assist Pennsylvanians in leading safe, healthy, and productive lives through equitable, trauma-informed, and outcome-focused services while being an accountable steward of commonwealth resources.