Proceedings of the 2021 International Conference on Public Art and Human Development (ICPAHD 2021)

A Case Study of Media Influence on Public Attitudes Towards Celebrities

How the Sex Scandal of Kris Wu Influences His Public Recognition and Celebrity Endorsement

In the new media era, media reports can strongly influence public attitudes, and events involving celebrities usually draw great attention from the public. Celebrities also enjoy a high level of public recognition and endorse products, a practice known as celebrity endorsement. In addition, the fandom economy, the income-generating relationship between fans and their idols, has grown stronger in recent decades [1]. A recent social event drew wide public attention in China: Chinese-Canadian pop star Kris Wu was detained by Beijing police for alleged rape, and the event received extensive media coverage [2]. The author tests the drench hypothesis by conducting an online survey on the extent to which public attitudes toward Kris Wu and the idol community as a whole were affected. The change in the idol community's influence as celebrity endorsers is also investigated in the survey. According to the data and results of this survey, both public attitudes toward Kris Wu and the idol community and the idol community's influence as celebrity endorsers declined after the public was exposed to this scandal. Future research can examine similar cases in which celebrities are involved in well-known scandals, focusing on the difference between changes in public attitudes toward a specific celebrity and changes in attitudes toward the community to which the celebrity belongs.

Digital Media for Behavior Change: Review of an Emerging Field of Study

William Douglas Evans

1 Milken Institute School of Public Health, The George Washington University, 950 New Hampshire Avenue NW, Washington, DC 20052, USA; lorien@gwu.edu (L.C.A.); broniatowski@gwu.edu (D.B.); mnapolitano@gwu.edu (M.N.); jeaniearnold@gwmail.gwu.edu (J.A.); imegumi7@gwmail.gwu.edu (M.I.)

2 The BRIGHT Institute, The George Washington University, 950 New Hampshire Avenue NW, Washington, DC 20052, USA

Lorien C. Abroms

David Broniatowski, Melissa Napolitano, Jeanie Arnold, Megumi Ichimiya, Sohail Agha

3 Stanford Behavior Design Lab, Seattle, WA 98109, USA; sohailagha@gmail.com

Associated Data

Not applicable.

Digital media are omnipresent in modern life, but the science on the impact of digital media on behavior is still in its infancy. There is an emerging evidence base of how to use digital media for behavior change. Strategies to change behavior implemented using digital technology have included a variety of platforms and program strategies, all of which are potentially more effective with increased frequency, intensity, interactivity, and feedback. It is critical to accelerate the pace of research on digital platforms, including social media, to understand and address its effects on human behavior. The purpose of the current paper is to provide an overview and describe methods in this emerging field, present use cases, describe a future agenda, and raise central questions to be addressed in future digital health research for behavior change. Digital media for behavior change employs three main methods: (1) digital media interventions, (2) formative research using digital media, and (3) digital media used to conduct evaluations. We examine use cases across several content areas including healthy weight management, tobacco control, and vaccination uptake, to describe and illustrate the methods and potential impact of this emerging field of study. In the discussion, we note that digital media interventions need to explore the full range of functionality of digital devices and their near-constant role in personal self-management and day-to-day living to maximize opportunities for behavior change. Future experimental research should rigorously examine the effects of variable levels of engagement with, and frequency and intensity of exposure to, multiple forms of digital media for behavior change.

1. Introduction

1.1. Brief Summary of Current Digital Health Research

Digital media (i.e., electronic media where data are stored in digital form) are omnipresent in modern life, but the science of how digital media impact behavior is still in its infancy [ 1 ]. For example, approximately 45% of the world’s population, or 3.5 billion people, use social media, with the average user spending approximately 3 h of their day on social media [ 2 ]. These statistics make it critical both to understand how technologies such as social media influence health decision making and behavior, and to design and evaluate effective behavior change interventions using social and other digital media platforms (i.e., programs aimed at changing specific behaviors in a population using one or more digital platforms as the delivery channel, including research aimed at assessing the effectiveness of such programs).

There is an emerging evidence base of how to use digital media for behavior change. Strategies to change behavior implemented using digital technology have included a variety of platforms (e.g., text messaging, social media, apps) and program strategies (e.g., social media support groups and tailored coaching), all of which are potentially more effective with increased frequency, intensity, interactivity, and feedback [ 3 ]. Overall, the most effective health behavior change interventions use a combination of both digital and face-to-face components, lending credence to the importance of classical social behavior change modalities, including human interaction and in-person accountability [ 1 , 4 ].

The most commonly cited research gap is the use of inconsistent, non-standardized measures (e.g., engagement, reach) to evaluate digital media-related behavior change interventions [ 5 , 6 ]. Other areas highlighted for improvement include clarification of exposure and dose, intensity of intervention delivery, and measurement of long-term outcomes [ 7 ]. As noted in multiple systematic reviews, the preponderance of evidence characterizing effective behavior change techniques using digital interventions has been collected from residents of high-income countries (HICs) [ 8 ].

What is the distinctive approach in this domain? The field of digital media for behavior change is characterized by its use of digital device-human interactions as an intervention strategy, as a methodology for data collection and research, and as environmental influences (i.e., the study of infodemiology) that may affect behavior and moderate the effects of interventions aimed at behavior change [ 9 ]. This paper focuses primarily on the first two of these approaches and explores case studies to illustrate how they have been applied and studied in recent research aimed at identifying effective strategies to bring about positive, population-level health behavior change.

In fact, social media is a core feature of the current social environment, and researchers need to study it to identify media effects, both positive (its potential as a behavior change intervention platform) and negative (the effects of mis- and dis-information on decision-making and health behaviors). For example, social media facilitates access to communities organized around maladaptive health behaviors (e.g., restrictive or binge eating related to anorexia/bulimia) and to networks for illegal purchases such as guns and illegal drugs; its design features (e.g., the design of the News Feed) affect not only health information, but also norms and the sense of social support [ 10 ]. Social media use has been associated with mental health conditions including loneliness and depression, especially among adolescents and young adults [ 11 ]. It is critical to accelerate the pace of research on digital platforms, including social media, to understand and address their effects on human behavior.

1.2. Overview of Digital Interventions

As an intervention strategy, digital media for behavior change uses all features of digital devices to communicate and create environmental cues and incentives (i.e., following behavioral economics) that encourage the adoption of new behaviors and the maintenance of existing ones. These approaches may take place on social media, mobile phone apps, chatbots, text, and social messaging (e.g., WhatsApp), as well as many specific modalities of information that can be communicated within them, such as video, memes, website links, and graphic images, among others. Digital interventions have the flexibility to be based on tailored, individual communication and on group-level communication, such as in a Facebook group, enabling group interaction and social support. Peer-to-peer interventions (where the intervention is essentially delivered by the participants) are also possible [ 12 ].

Digital platforms such as social media have the inherent feature of interactivity and the potential to engage participants and populations in the context of their social networks, thus building a sense of identification and connection with the intervention. This can be done through ‘gamification’ features, such as virtual incentives and rewards, and through social role modeling by individuals who are appealing and aspirational for the audience. Digital platforms, like social media, provide the opportunity for participant co-creation (i.e., content co-generated by users and investigators). For example, in the US, adolescents developed and disseminated their own sexual health, substance use, and violence risk behavior prevention messages as part of a community-based participatory Latino and Immigrant health intervention [ 13 ].

1.3. Examples of Digital Health Research

As a data collection and research strategy, digital media may be used in many ways. Social media provides large quantities of available data on registered users (e.g., Facebook analytics) that can be used to identify potential study participants (e.g., individuals who, based on Facebook data, are likely to fit a specific socio-demographic profile, such as young male Latinos). Apps such as Facebook Messenger and other ‘chat’ functions may be automated to run surveys through ‘chatbots’ that deliver questions through individual messages with pre-defined response options for study participants. Such technologies can be used to design randomized controlled trials [ 14 ].

Additionally, intervention research may be conducted using the combination of social media technology for remarketing (i.e., delivery of specific content, such as advertising, to an individual user based on previous online activity, such as viewing social media content) and chatbot data collection. For example, participants may be recruited into a research study using Facebook advertising, baseline data may then be collected using chatbot technology, the same participants may be exposed to an intervention using remarketing technology, and follow-up data may then be collected. This creates the potential for social media based randomized experimental studies to evaluate online behavior change interventions.
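The recruit-survey-randomize-expose-resurvey pipeline described above can be sketched in a few lines of Python. This is a minimal illustration only: launch_recruitment_ad, run_chatbot_survey, and serve_remarketing_content are hypothetical placeholder stubs standing in for platform-specific steps, not real Facebook or chatbot APIs.

import random

# Hypothetical stubs: in a real study these would wrap platform-specific
# ad delivery, chatbot survey, and remarketing calls.
def launch_recruitment_ad(campaign_id):
    return [f"user_{i}" for i in range(10)]          # recruited participant IDs

def run_chatbot_survey(user, wave):
    return {"user": user, "wave": wave, "intent": random.randint(1, 5)}

def serve_remarketing_content(user, content):
    pass                                             # deliver content to this user's feed

def run_social_media_rct(campaign_id, content):
    participants = launch_recruitment_ad(campaign_id)
    baseline = {p: run_chatbot_survey(p, "baseline") for p in participants}
    # Randomize to intervention or control, then expose only the intervention arm.
    arms = {p: random.choice(["intervention", "control"]) for p in participants}
    for p, arm in arms.items():
        if arm == "intervention":
            serve_remarketing_content(p, content)
    followup = {p: run_chatbot_survey(p, "follow-up") for p in participants}
    return baseline, arms, followup

baseline, arms, followup = run_social_media_rct("demo_campaign", "pro-vaccination post")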

Behavior change interventions that rely on social networks for their success are hypothesized to have greater impact, and to generate greater interactivity and feedback, than interventions that rely on changes in individual behavior, due to the amplifying effects of social support and social participation [ 15 , 16 ]. Indeed, social network researchers have measured the impact of interventions beyond those immediately exposed to an intervention. Thus, in addition to the effect of treatment-on-the-treated, social network researchers have measured the effect of treatment on the untreated (e.g., for healthy weight management and weight loss) [ 17 ]. Researchers who have evaluated door-to-door campaigns to get out the vote, for example, have measured a smaller, discernable effect of such campaigns on the untreated (i.e., effects on members of the household who are not directly exposed to the canvasser) [ 18 ]. This additive effect may be due to effects on social norms and other social influences (e.g., based on the observation of those who receive treatment by those who do not).

1.4. Evidence on Digital Media for Behavior Change

A recent study illustrates the state of the evidence for digital media behavior change interventions. In this scoping review, the authors found over 3300 articles that met inclusion criteria such as (1) using some form of digital media in an intervention, and (2) having a focus on behavior change or change in antecedents of behavior [ 19 ]. This review aimed to (1) routinely monitor and identify literature on digital media-based behavior change interventions to gain insights for future digital media studies, and (2) systematically review relevant literature to identify evidence and areas for improvement in the field. Initial findings confirmed that there are emerging published data on exposure to and evaluations of digital health behavior change interventions, but this research is still in its infancy. However, only some 298 of these articles reported original research aimed at establishing the effectiveness of a digital behavior change intervention.

The purpose of the current paper is to describe methods in the field, present illustrative use cases, describe a future agenda, and raise central questions to be addressed in future digital health research for behavior change. The science of digital health and its application to promoting health and preventing disease is in its infancy. But new technologies and opportunities to apply digital media technology for behavior change (e.g., how social media, perhaps the most widespread digital media channel, can be used to promote healthy behaviors) are growing rapidly. Social and behavioral scientists must harness the potential of these technologies and develop this emerging field of study. This paper lays out key considerations in that endeavor informed by the extant literature.

Digital health for behavior change employs both intervention and research methodologies. In the context of behavior change interventions, there are three main methods: (1) digital media interventions aimed at behavior change, (2) formative research using digital media aimed at aiding in the design of interventions, and (3) digital media used to conduct outcome and impact evaluations. We note that these are methods for conducting digital media practice and research. Below, we provide detailed use cases on the implementation (or results) of such methods.

2.1. Examples of Methods to Deliver Digital Media Campaigns for Behavior Change

On social media (the predominant digital communication channel), interventions can be delivered in a variety of ways. Social media (e.g., Facebook, Twitter) can be used as a digital environment for serving ads that can be microtargeted to the individual characteristics of users. Additionally, social media accounts with many followers (e.g., influencers) have been used to promote behavior change messaging with their followers. Social media platforms can also serve as settings for group-based interventions aimed at behavior change. Several large RCTs have tested the efficacy of randomizing people into private social media groups and then using the groups as the settings for providing educational content to users in the form of group posts, including stimulating engagement among users (e.g., polls, questions, conversations) and social support among group members [ 20 , 21 , 22 ]. These can be supplemented with direct messages to individual users providing additional individualized support or information. Increasingly, there is also an interest in shaping the content of existing groups and pages on social media by having lay health workers either join such groups and post health-promoting content or reach out to group and page administrators requesting them to post such content.

These strategies have potential to create a sense of widespread support for and adoption of specific behaviors. A kind of bandwagon effect may result, leading to behavior change. Theoretically, the effect of such social media campaigns may be to promote a social norm in support of specific health behaviors, such as COVID-19 vaccination, healthy eating and physical activity, or avoidance of nicotine consumption [ 19 ]. The following case studies illustrate interventions and research that investigate this hypothesized effect of social media campaigns.

2.2. Examples of Digital Media Methods in Formative Research

Another important research opportunity offered by digital media is formative research, or research aimed at helping to design campaigns for behavior change. One example of such efforts was a formative study conducted by Agha and colleagues in Nigeria in 2021. This study applied a behavioral lens to understand drivers of COVID-19 vaccination uptake among healthcare workers (HCWs) in Nigeria. The study used data from an online survey of Nigerian HCWs ages 18 and older conducted in July 2021. Analyses examined the predictors of getting two doses of a COVID-19 vaccine. One-third of HCWs in this study reported that they had received two doses of a COVID-19 vaccine. Motivation and ability were powerful predictors of being fully vaccinated: HCWs with high motivation and high ability had 15-times higher odds of being fully vaccinated. However, only 27% of HCWs had both high motivation and high ability, primarily because the ability to get vaccinated was quite low among HCWs: only 32% of HCWs reported that it was very easy to get a COVID-19 vaccination. By comparison, motivation was relatively high: 69% of HCWs reported that a COVID-19 vaccine was very important for their health. Much of the recent literature coming out of Nigeria and other LMICs focuses on increasing motivation to get a COVID-19 vaccination. These findings highlight the urgency of making it easier for HCWs to get COVID-19 vaccinations.
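To make the kind of analysis behind these figures concrete, the sketch below fits a logistic regression of full vaccination on high motivation, high ability, and their interaction, and exponentiates the coefficients to obtain odds ratios. The data are simulated and the variable names and effect sizes are invented for illustration; this is not the Agha et al. dataset or model specification.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2000

# Simulated survey: binary indicators for high motivation and high ability.
df = pd.DataFrame({
    "high_motivation": rng.binomial(1, 0.69, n),   # ~69% report high motivation
    "high_ability": rng.binomial(1, 0.32, n),      # ~32% report high ability
})
# Simulated outcome: odds of full vaccination rise sharply when both are high.
logit = (-2.0 + 1.0 * df.high_motivation + 1.0 * df.high_ability
         + 0.8 * df.high_motivation * df.high_ability)
df["fully_vaccinated"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = smf.logit("fully_vaccinated ~ high_motivation * high_ability", data=df).fit(disp=False)
print(np.exp(model.params))   # exponentiated coefficients = odds ratios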

The Agha et al. study is an example of how research using digital media, and specifically social media platforms, has tremendous potential to inform behavior change campaign design and implementation. Findings from this study informed a large scale COVID-19 vaccination campaign that began in Nigeria in late 2021 and will continue through early 2023 [CITE]. The purpose of this effort is to use social media content delivered by trusted organizations and influencers to promote HCW vaccination and broader population vaccination, in part through the role modeling effects of increased rates of vaccinated HCWs. This approach has broad applicability to other social media-based research.

2.3. Examples of Digital Media Methods for Evaluation Research

Finally, the outcomes and impact of digital media interventions for behavior change may be evaluated using digital media and social media platforms. The Virtual Lab platform, described in the use cases below, is a state-of-the-art example in the social media domain. The power of social media to first identify individual users who appear, based on publicly available data, to fit a specific demographic, behavioral, or lifestyle profile (e.g., being a health care provider in a specific country of a certain age range) allows for targeting of recruitment efforts. Facebook advertising, for example, may be used to reach a specified population based on Facebook proprietary data, and those individuals may then be invited through the ads to join a research study. Upon initial expression of interest through clicking on a link to a survey, additional screening may be done (e.g., through eligibility questions) to confirm study inclusion or exclusion, followed by informed consent.

Such studies may follow multiple designs, including observational, quasi-experimental (e.g., examining the effects of exposure to social media messages on outcomes of interest, such as vaccine hesitancy or vaccination), and randomized controlled studies. On social media, the Facebook Messenger app is one relatively simple way to deliver surveys, as individual questionnaire items may be delivered as direct messages (DMs) in sequence to allow a participant to complete a survey wave. Such surveys may be followed by randomization to study condition to receive a social media and/or other treatment aimed at promoting behavior change and/or other outcomes (e.g., changes in social norms or intentions). Because participants have provided contact data through the social media platform (e.g., through DM), they may be recontacted for longitudinal data collection. In this way, bespoke panels may be created to run studies. Additionally, surveys delivered as individual questionnaire items via text messaging and apps offer real-time assessments (i.e., ecological momentary assessments, EMA) that can monitor time-sensitive symptoms or conditions such as urges and cravings related to addiction.
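At its core, a chatbot-delivered survey wave is a loop that sends one questionnaire item at a time and waits for a reply before sending the next. The sketch below shows that logic; send_dm and wait_for_reply are hypothetical placeholders standing in for a messaging platform's send and receive calls, not the Facebook Messenger API.

# Minimal sketch of a chatbot survey wave: one item at a time, each with
# pre-defined response options.
SURVEY_WAVE = [
    ("intent", "How likely are you to get vaccinated this month (1-5)?", ["1", "2", "3", "4", "5"]),
    ("craving", "Right now, how strong is your urge to vape?", ["none", "mild", "strong"]),
]

def send_dm(user_id, text):
    print(f"-> {user_id}: {text}")   # placeholder for a platform send call

def wait_for_reply(user_id, options):
    return options[0]                # placeholder: pretend the user chose the first option

def run_survey_wave(user_id, wave):
    responses = {}
    for item_id, question, options in wave:
        send_dm(user_id, f"{question} Options: {', '.join(options)}")
        answer = wait_for_reply(user_id, options)
        if answer not in options:    # re-prompt once on an invalid reply
            send_dm(user_id, "Please choose one of the listed options.")
            answer = wait_for_reply(user_id, options)
        responses[item_id] = answer
    return responses

print(run_survey_wave("participant_001", SURVEY_WAVE))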

An important advantage of this methodology and technology is that it is relatively low cost [ 23 ]. Large scale data collection may be conducted with relatively low marginal costs of initial survey programming and data management. Incentives may be provided through such modes as electronic gift cards or mobile phone use credits, the latter being highly valuable in many low- and middle-income countries (LMIC) where pay-by-use mobile phone plans are common. The following use cases illustrate examples of how these methodologies have been recently applied, and we explore the potential of social media platforms in the future development of digital health research.

3.1. Use Cases

In this section, we briefly describe several use cases and their contribution to building the field of digital health for behavior change. These case studies present results and illustrate the application and range of methods in the field, recognizing that there are many other forms that such interventions and research may take.

3.1.1. Use Case #1: Social Media to Deliver Weight Loss Information to Young Adults on University Campuses

Nearly 40% of young adults, ages 20–39, have overweight or obesity [ 24 ]. Young adulthood is a pivotal life period, marked by changes in academic and social functioning, food environment and availability, as well as declines in physical activity [ 25 , 26 ]. The availability of weight loss services on university campuses has lagged behind services addressing other health-risk behaviors [ 27 , 28 ], making this an important target for scalable, easy-to-implement interventions. The Healthy Body Healthy U (HBHU) project was designed to examine the differential efficacy of two digitally delivered weight loss treatments compared with a health education control [ 29 ]. We will discuss the use of social media for recruitment, consent processes specifically focused on using a commercially available platform (i.e., Facebook) for research purposes, ongoing monitoring and safety procedures, and ways to measure impact.

Participants were young adults (18–35 years) with a body mass index (BMI) of 25–45 kg/m2 who attended a university within the Washington, DC or Boston area. Additional inclusion criteria specified that those eligible must be an active Facebook user (i.e., having logged into the platform at least once within the last month), be able to send and receive text messages, and have no health contraindications for participating in physical activity or weight loss.

There were different channels for recruitment using social marketing principles [ 30 ], such as placement of materials in high-traffic areas and standardized branding using study-specific colors, logos, and fonts. Furthermore, technology was used as a specific outreach strategy. The following figures illustrate how these elements of the program were implemented. “Microtargeting”, or delivering advertisements through Facebook to specific demographics, was used (see Figure 1 ). Key leaders on campus, such as academic deans, also tweeted or retweeted study-related recruitment efforts (see Figure 2 ).

Figure 1. HBHU Facebook post 1.

Figure 2. HBHU Facebook post 2.

After participants responded to the study advertisements, they completed screening to determine eligibility to participate in the study, at which point they were scheduled for an in-person assessment to further review the study commitment and consent to participate [ 29 ]. The consent forms highlighted the potential risks of participating in an online group through a commercial platform with unknown fellow participants. We followed procedures as suggested by Moreno and included the following in the consent document [ 31 ]: “ … while the study team will keep your study information confidential, anything you post on Facebook is technically governed by and can be used by Facebook ; therefore, we cannot ensure complete confidentiality of all of your Facebook posts and information ” and referred participants to periodically review the Facebook terms and conditions. We also highlighted the community guidelines and limits to confidentiality based on the presence of others in the group, i.e., “ Confidentiality of your identity or information discussed on the Facebook group page cannot be guaranteed by the research staff due to the presence of other research participants. Study volunteers are encouraged to maintain strict confidentiality regarding your information and the information of others who are participating in the program ”.

Briefly, the interventions were delivered via private Facebook and SMS text messaging. The participants ( n = 459) were randomly assigned to one of two weight loss groups delivered via these channels or a health education contact control. The two weight loss interventions were based on the Diabetes Prevention Program [ 32 , 33 ]. The two weight loss interventions differed in the amount of personalization provided. Specifically, one group (tailored) received personalized information based on their own feedback and high-risk barriers, while the other (targeted) received generic weight loss information specific to young adults.

For programs delivered via digital channels with limited synchronous interactions, participant safety monitoring is critical. For participants in the tailored group, study staff were alerted to weight losses (or gains) that exceeded three pounds within a one-week period. Furthermore, all participants completed monthly health screens by responding to text messages, and all weight loss participants were asked about their rate of weight loss or gain via text. Study staff monitored the Facebook groups for inappropriate or offensive content, which was removed or responded to.

At 6 months, there was no overall effect of study group on weight loss. There was a moderating effect such that, among those with the lowest BMIs (25–27.5 kg/m2), participants assigned to the tailored group lost 2.7 kg [−3.86, −0.68] more and those in the targeted group lost 1.72 kg [−3.16, −0.29] more than those in the control group after adjustment for covariates [ 34 ].

What qualifies as a critical threshold for engagement in digital interventions has yet to be established [ 30 ]. Furthermore, frameworks and methods for evaluating the impact of Facebook posts also need to be standardized [ 35 ]. One such framework for examining the impact of messages is McGuire’s Model of Communication and Persuasion [ 36 ], which can be used to examine characteristics of posts (e.g., informational, lesson-based, poll, study-generated or user-generated). The effect of each post can be calculated using a traditional Facebook engagement equation: impact per post = (Total Engaged Users / Total Reach) × 100 [ 37 ]. Encouraging researchers to proactively plan to collect and measure these characteristics will help future researchers design and implement interventions with effective messaging.
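The engagement formula cited above is straightforward to apply to exported platform analytics. The sketch below computes impact per post for a few posts of different types; the post data are invented for illustration.

# Impact per post = (Total Engaged Users / Total Reach) x 100,
# following the Facebook engagement equation cited above.
posts = [
    {"id": "lesson_week1", "type": "lesson", "engaged_users": 42, "reach": 310},
    {"id": "poll_week1", "type": "poll", "engaged_users": 87, "reach": 295},
    {"id": "member_recipe", "type": "user-generated", "engaged_users": 55, "reach": 180},
]

for post in posts:
    impact = post["engaged_users"] / post["reach"] * 100
    print(f"{post['id']} ({post['type']}): {impact:.1f}% of reached users engaged")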

3.1.2. Use Case #2: Impact Evaluation of a Social Media Campaign to Promote COVID-19 Vaccination in Nigeria

The Bill & Melinda Gates Foundation (BMGF) has the goal of immunizing at least 500 million health care providers (HCPs) and high-risk people with COVID-19 vaccines worldwide. By achieving this goal, the foundation seeks a return to routine immunization, maternal and child health, and reproductive health services, which is critical to the COVID-19 response and to the strengthening of health systems. This effort focuses especially on low- and middle-income countries (LMIC), which have been disproportionately impacted by the COVID-19 pandemic. Nigeria, the largest country by population in Africa, is an important focus of these efforts.

Recently, BMGF sponsored a social media campaign in Nigeria to promote COVID-19 vaccination, with a primary focus on HCPs. The theory of change (ToC) of this campaign posits that HCPs serve as crucial role models who encourage patients and the broader population to vaccinate. The campaign engaged a widespread group of health and other public-facing organizations, including the private and public sectors, to help design and deliver posts on Facebook, Instagram, and other platforms to promote vaccination. A series of campaigns using these social media platforms ran in 2022 in six targeted states within Nigeria.

In a companion project, Evans and colleagues developed an impact evaluation of an associated BMGF investment, Reducing Vaccine Hesitancy among HCPs in Nigeria, a series of social media behavior change campaigns to reduce vaccine hesitancy among HCPs. The impact evaluation aims to determine which campaigns and strategies are effective in reducing vaccine hesitancy among HCPs and at what level of cost-effectiveness. The evaluation analyzes the campaign ToC, based on the 5 “C’s” model of vaccination promotion [ 38 ], co-developed by Dr. Evans and the campaign implementation team, in order to understand the processes and effects of social media in influencing vaccine hesitancy among HCPs.

The evaluation examines the mediating effects of social media engagement metrics and of changes in norms and related beliefs about the efficacy and safety of vaccines on COVID-19 vaccine hesitancy, intentions, and uptake. The analysis uses structural equation modeling (SEM) and multi-level modeling (MLM) techniques to test the campaign’s ToC for mediation based on changes in attitudes, beliefs, and norms about vaccination, and for moderation by environmental factors, personal characteristics of HCPs, and other social ecological factors that may influence the intervention’s effectiveness.
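The full analysis relies on SEM and multi-level models, but the core mediation logic can be illustrated with a simplified two-equation regression sketch: campaign exposure predicts a mediator (here, perceived vaccine-safety norms), the mediator predicts vaccination intention controlling for exposure, and the indirect effect is approximated as the product of the two paths. The data and variable names below are simulated for illustration and are not the study's actual model.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 1000

# Simulated HCP data: exposure -> safety norms -> vaccination intention.
exposure = rng.binomial(1, 0.5, n)
norms = 0.6 * exposure + rng.normal(0, 1, n)
intention = 0.5 * norms + 0.1 * exposure + rng.normal(0, 1, n)
df = pd.DataFrame({"exposure": exposure, "norms": norms, "intention": intention})

# Path a: effect of campaign exposure on the mediator.
a = smf.ols("norms ~ exposure", data=df).fit().params["exposure"]
# Path b: effect of the mediator on the outcome, controlling for exposure.
b = smf.ols("intention ~ norms + exposure", data=df).fit().params["norms"]

print(f"indirect (mediated) effect of exposure on intention ~ a*b = {a * b:.3f}")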

This study uses a novel social media platform for recruitment and data collection, Virtual Lab LLC ( https://vlab.digital/ accessed on 22 May 2022), a data collection platform and a tool to enable study design. This platform enables researchers to create bespoke panels of respondents (i.e., a group of participants meeting evaluation eligibility criteria who agree to complete a baseline (BL) survey and to be re-contacted for follow up surveys). Participants may be recruited specifically from the target audiences for a behavioral intervention, such as a health campaign, and stratified by individual-level covariates of interest (e.g., geographic locations where the intervention is conducted, being a certain age, gender, race/ethnicity, or other demographic) and other relevant variables identified by researchers.

The use of the Virtual Lab platform for this study is a major innovation and will advance the field of digital media intervention research. The project will demonstrate (1) how social media recruitment and data collection may be effectively used to support social media interventions, especially those focused on vaccination promotion, and (2) multiple evaluation strategies, including geographic cluster comparisons as well as individual-level randomized controlled trials.

The materials developed in the project include surveys, interview guides, and social media content and metrics on dosage/exposure to the vaccination campaign at the population level. Researchers are compiling a library of such materials to be shared publicly, which will contribute to the design and development of future programs and campaigns, as well as increase the evidence for what works in digital health interventions worldwide.

The project includes substantial data collection, with some 8500 surveys completed at the baseline time point, prior to the campaign in late 2021. Additionally, four rounds of in-depth qualitative interviews will be conducted with some 30 social media outreach organizations and influencers that deliver the campaign in Nigeria. These data will be a major resource for building the evidence base on the effectiveness of social media for vaccination promotion and reducing vaccine hesitancy. The dissemination of findings and methodologies will make a substantial contribution to the social and behavioral sciences communities and the field of digital health [ 39 ].

3.1.3. Use Case #3: Randomized Controlled Trial to Assess the Efficacy of Facebook Groups for COVID-19 Vaccination Uptake

Another example is a study aimed at promoting COVID-19 vaccination on Facebook. At the time of the study, over 35% of US adults had not received the COVID-19 vaccine [ 40 ]. While much had been published about vaccine misinformation on social media, little was known about how to use social media as an intervention platform.

Participants were eligible if they were adults who lived in the US, had not been vaccinated for COVID-19 (not even one dose), and had a Facebook account. While we initially planned to recruit on Facebook, recruitment there proved expensive, and we soon shifted to recruiting on Amazon Mechanical Turk (MTurk). We recruited unvaccinated individuals (N = 353), randomized them into one of two private Facebook groups (intervention, control), and measured their attitudes and behaviors at baseline and 2, 4, and 6 weeks after enrollment.

The intervention group received novel educational content promoting the COVID-19 vaccine, delivered daily for 4 weeks and sponsored by the GW Health Communication Corps, while the control group received a referral to the COVID-19 Information Center on Facebook. In the intervention group, group moderators posted twice daily with information about the threat of COVID-19 in the US and on the efficacy and safety of the COVID-19 vaccine (see Figure 3 ). Group moderators also used polls and posts to engage group members and encouraged comments and group discussion. Furthermore, moderators took on the role of replying to comments and questions raised by group members and were trained to use an empathic and non-judgmental tone. We compared the efficacy of these strategies across groups on vaccine acceptance, hesitancy, and uptake. In additional waves of the study, we also examined the role of group size and the use of group moderation features (e.g., allowing posting by group members) on group functioning (e.g., violations of the group rules) and outcomes of interest. This study demonstrated that a new intervention format and platform for promoting vaccine uptake was feasible. Participants are being followed up longitudinally to assess the efficacy of the intervention.

Figure 3. Sample post.

3.1.4. Use Case #4: Pilot Randomized Controlled Trial to Evaluate the Efficacy of Anti-Vaping Advertisements

There is a dearth of published research on the effectiveness of social media interventions for behavior change in tobacco control [ 19 ]. The objective of a study led by Evans and colleagues was to determine the feasibility of using the Virtual Lab platform to recruit participants and assess awareness of an anti-e-cigarette health campaign on Facebook. For this pilot feasibility study, researchers aimed to recruit 300 participants through Facebook using the Virtual Lab platform [ 14 ]. To recruit participants, we created recruitment ads and delivered them through a Facebook page we created called “Digital Media Experiment”. To demonstrate the credibility of the page, we posted related content and acquired likes. We used this Facebook page to run recruitment ads in August 2021. The recruitment ads were served to our target audience, which included men and women 18–24 years of age located in the United States. After recruiting participants, we created another Facebook page called “Consumer Consciousness”. This page was used to run the target ads in the recruited participants’ Facebook newsfeeds during the study time period. We created this second Facebook page, with a different name, to control for any bias. Although we created two different Facebook pages, they were both under the same Facebook Business Account, “Digital Health Research”.

The pre-test and post-test surveys were completed using a survey platform called Typeform. Surveys were linked to our Facebook pages using the Virtual Lab interface. This platform allowed our Facebook pages to send automated messages through Facebook Messenger to participants who clicked on the recruitment ads. The two recruitment ads used in this feasibility study were designed using 99design.com. The ads used the text “Take a 15 min survey, get paid $10” (see Figure 4 ). After participants clicked on the study’s ad, they were sent a message via Facebook Messenger inviting them to participate in the study. Before beginning the pre-test survey, participants were sent messages regarding the topic of the survey, compensation, privacy measures regarding collected data, and contact information if they had any questions regarding the study. Following those messages, participants were sent a message asking whether they would like to continue, in order to obtain their consent to participate in the study.

Figure 4. Facebook study recruitment ad.

The study’s Facebook recruitment ads had a reach of 10,309, defined as the number of people who saw the ad at least once. The recruitment ad generated 15,718 impressions, the number of times the ad was displayed on a person’s screen, and it was clicked on a total of 790 times. The link click-through rate was 4.77%, the percentage of times a person who saw the recruitment ad clicked on it. The study’s Facebook target ads had a reach of 191 people and generated 441 impressions. The target ads were played a total of 353 times, and on only 11 of those plays did the ad play to 100% of its length.
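These metrics follow directly from the analytics counts, as the short calculation below shows. The 790 figure is total clicks; the reported 4.77% link click-through rate implies roughly 750 link clicks (a subset of total clicks), which is an inference for illustration, not a figure reported in the study.

# Recruitment-ad metrics reported above.
reach = 10_309          # unique people who saw the ad at least once
impressions = 15_718    # times the ad was displayed on a screen
total_clicks = 790      # all clicks on the ad
link_clicks = 750       # inferred from the reported 4.77% link-CTR, not reported directly

print(f"CTR (all clicks): {total_clicks / impressions * 100:.2f}%")
print(f"Link click-through rate: {link_clicks / impressions * 100:.2f}%")

# Target (video) ad metrics reported above.
plays, full_plays = 353, 11
print(f"Video completion rate: {full_plays / plays * 100:.1f}%")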

The development of methods for experimentation within social media platforms is essential for the progress of public health media campaign research, especially in the context of tobacco control. This study demonstrated a new platform that allows for customized recruitment and longitudinal follow-up of participants, as well as the execution of survey research within Facebook, which was found to be feasible for media campaign awareness studies [ 14 ].

4. Discussion

The field of digital media for behavior change is growing rapidly, but research is still in its infancy. While a recent review found some 3300 digital media publications related to behavior change, only a fraction of these provided evidence for the effectiveness of a digital intervention in changing behavior. Many studies monitor the digital media landscape, such as “digital listening” projects, and studies of large social media datasets (infodemiology), which are critically important and advance our understanding of the digital landscape. However, rigorous intervention studies on the effectiveness of digital media in actually changing behavior, specifically in public health, remain relatively sparse [ 41 ]. The field has tremendous room for growth in the coming years.

Behavior change can take time, and the potential for regression to earlier states is well-known [ 42 ]. Future research should include longitudinal follow-up to assess the long-term effects of social media behavioral interventions. Additionally, there is a lack of evidence on the effectiveness of theories of change in social media interventions, and future research should focus on testing the processes of change.

Given the relative dearth of rigorously evaluated digital media interventions for behavior change, more formative research evaluating the feasibility, appropriateness, and acceptability of specific types of projects is needed. A more rigorous application of the principles of program evaluation will help develop targeted, effective digital media interventions.

In particular, digital media interventions need to explore the full range of functionality of digital devices and their near-constant role in personal self-management and day-to-day living to maximize opportunities for behavior change. One area that deserves further attention is the potential for multi-factorial studies that examine the effects of adding and subtracting features of digital devices (e.g., an intervention with and without an app, with and without social media interactivity, etc.) on behavior change. More elaborate research designs that examine how to optimize delivery of digital interventions using the full range of functions of devices such as mobile phones and tablets are needed.

One of the strengths of social media interventions is that objective dosage and exposure data from analytics are available. However, some studies have reported that their social media efforts were effective without clearly reporting quantitative data (e.g., clicks, shares, views, etc.) on social media use [ 1 ]. Future research should examine the characteristics of engagement exposure to evaluate dose-response effects—i.e., to determine whether more exposure or exposure of a specific type is associated with successful behavior change. This is important in order to be able to objectively attribute intervention effects to observed behavior changes and build the evidence base in the field.

Another important dimension of digital media for behavior change is health literacy. Digital media represent another dimension of health literacy: the ability to successfully use, navigate, and obtain benefit from health-related information on digital devices [ 43 ]. Research should focus on how to make digital media behavior change interventions more sensitive to health literacy needs, and on how to improve the extent to which participants can successfully consume and use digital health information.

Finally, future experimental research should rigorously examine the effects of variable levels of engagement with, and frequency and intensity of exposure to, multiple forms of digital media for behavior change. In particular, longitudinal studies that follow participants over extended periods of time are needed to evaluate more distal outcomes (e.g., beyond immediate content recognition, engagement, and short-term attitudinal outcomes). Studies should also investigate dose-response effects by increasing the number of digital ad exposures over an extended period of time and evaluating varying dose-response curves for different campaign outcomes [ 42 ]. Adding greater levels of digital exposure would allow a study to plot possible threshold and drop-off effects of exposure. Such research will inform our understanding of the impact of varying levels of digital ad exposure on longer-term outcomes.
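One way to examine the threshold and drop-off effects described above is to fit a saturating dose-response curve to outcome rates observed at different exposure levels. The sketch below fits a three-parameter logistic curve with scipy; the exposure counts, response rates, and starting values are invented for illustration.

import numpy as np
from scipy.optimize import curve_fit

def logistic(dose, ceiling, slope, midpoint):
    # Three-parameter logistic dose-response curve.
    return ceiling / (1 + np.exp(-slope * (dose - midpoint)))

# Simulated data: ad exposures per participant vs. proportion adopting the behavior.
doses = np.array([0, 1, 2, 4, 8, 16, 32], dtype=float)
response = np.array([0.02, 0.03, 0.06, 0.12, 0.22, 0.27, 0.28])

params, _ = curve_fit(logistic, doses, response, p0=[0.3, 0.3, 6.0])
ceiling, slope, midpoint = params
print(f"estimated ceiling={ceiling:.2f}, slope={slope:.2f}, midpoint={midpoint:.1f} exposures")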

5. Conclusions

This review provides an overview of the emerging field of digital media for behavior change. The field represents an important, relatively new domain that overlaps digital health, and the broader use of digital media for social change programs. While the evidence base in this field, as illustrated in this paper, is relatively small, the field is growing. Future research and programs should expand the domains of subject matter addressed by digital media for behavior change. More rigorous, controlled, and externally valid studies are needed and the field will need to stay current with rapidly emerging new digital technologies.

Funding Statement

This research received no external funding.

Author Contributions

Conceptualization, W.D.E. and L.C.A.; methodology, W.D.E. and S.A.; software, W.D.E. and M.I.; validation, W.D.E., L.C.A. and M.N.; formal analysis, W.D.E., L.C.A. and M.N.; investigation, W.D.E.; resources, W.D.E.; data curation, W.D.E., L.C.A. and M.N.; writing—original draft preparation, W.D.E., L.C.A., D.B. and M.N.; writing—review and editing, W.D.E., L.C.A., M.N. and J.A.; visualization, L.C.A., M.N. and J.A.; supervision, W.D.E.; project administration, W.D.E. All authors have read and agreed to the published version of the manuscript.

Informed Consent Statement

Data Availability Statement

Conflicts of Interest

The authors declare no conflict of interest.

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

The effect of social media influencers on teenagers' behavior: an empirical study using the cognitive map technique

  • Published: 31 January 2023
  • Volume 42, pages 19364–19377 (2023)


  • Karima Lajnef, ORCID: orcid.org/0000-0003-1084-6248


The increase in the use of social media in recent years has enabled users to obtain vast amounts of information from different sources. Unprecedented technological developments are currently enabling social media influencers to build powerful interactivity with their followers. These interactions have, in one way or another, influenced young people's behaviors, attitudes, and choices. Thus, this study contributes to the psychological literature by proposing a new approach for constructing collective cognitive maps to explain the effect of social media influencers' distinctive features on teenagers' behavior. More specifically, this work is an attempt to use cognitive methods to identify adolescents' mental models in the Tunisian context. The findings reveal that influencers' distinctive features are interconnected and that these features are, in one way or another, linked to teenagers' behavior. These findings provide important insights and recommendations for different users, including psychologists and academics.


Introduction

The number of social media users has increased rapidly in the last few years. According to the global ‘State of Digital’ report (2021), the number of social media users reached 4.20 billion, which represents 53% of the world’s total population. This number has risen by more than 13% compared to the previous year (2020). In Tunisia, as of January 2021, the number of social media users had increased to 8.20 million, which represents 69 percent of the total population, and 97% of these users access social media via mobile phones. According to the ALEXA report ( 2021 ), Google.com and Facebook are the networks most used by Tunisian people. Most importantly, 18.5% of Facebook users are under 13 years old.

In fact, the emphasis on social media has created a consensus among tech companies, leading to the creation of more platforms. Today, the diversity of such platforms has created a new horizon of social media in terms of usage and ideas.

Many people whose careers are largely reliant on social media are known as "influencers". More than a profession, for some people it is even considered a way of life. Influencers use social media every day to express their opinions and critiques on many topics (like lifestyle, health, beauty) and objects (e.g. brands, services, and products). Accordingly, one of the most important marketing strategies in the market relies on influencers and is known as influencer marketing (Audrezet et al., 2020 ; Boerman, 2020 ; Lou & Yuan, 2019 ). In 2017, influencer marketing was considered the most widespread and trendiest communication strategy used by companies. Therefore, influencers have been considered by many marketing experts as opinion leaders because of their important role in persuading and influencing their followers (De Veirman et al., 2017 ). According to the two-step flow of communication theory, the influencer, as a representative of an organization, is invited to filter, decode and create messages to match their particular follower base (Lazarsfeld et al., 1944 ). An influencer is a mediator between consumers and organizations. According to Tarsakoo and Charoensukmongkol ( 2019 ), social media marketing implementation capabilities have a positive effect on customer relationship sustainability. In line with the premise of observational learning theory, influence occurs when consumers gradually use the information and observations shared with them to inform their decision-making, adjusting their beliefs, attitudes, and behaviors (Bandura & Adams, 1977 ). In fact, influencers command sizeable social networks of followers. In turn, consumers, especially youth and adolescents, consider influencers a source of transparency, credibility, and personal information, which helps the promoted brands reach a larger audience through the wider social media network (e.g. Jin and Phua, 2014).

Social media influencers play an ever greater role in shaping and influencing the behavior of consumers, especially young people and teenagers (e.g. Marwick, 2015 ; Sokolova & Kefi, 2020 ). Indeed, the use of smartphones has become an integral part of the lives of both young people and adolescents. According to Anderson ( 2018 ), 95% of teenagers aged between 13 and 17 own a smartphone. For young people, the pre-social media era has become something of a blur. This generation is known as Generation Z, whose members were born between the 1990s and the 2000s. What distinguishes this generation is its extensive use of the Internet from an early age. For them, social media represents an important part of social life, and many thinkers have set out to explore the effects of using social media platforms at an early age on adolescents' lives. The excessive use of social media may have an effect on teens' mental health. In fact, adolescence is the interval between childhood and adulthood: a teenager is neither a child who acts arbitrarily nor an adult able to make critical decisions. Therefore, young people and teenagers are considered the most sensitive class of consumers. Teenagers' brains undergo many changes that make them more sensitive to the impressions of others, especially the views of their peers (e.g. Elkind, 1967 ; Dacey & Kenny, 1994 ; Arnett, 2000 ). Adolescents' mental changes cause many psychological and cognitive problems. According to social identity theory, teens appreciate the positive reinforcement they get from being included in a group and dislike the feeling of social rejection (Tajfel, 1972 ). To reinforce their sense of belonging, teens follow influencers on social media (e.g., Loureiro & Sarmento, 2019 ). Among psychological theories, attachment theory helps to clarify interpersonal relationships between humans and provides a framework for explaining the relationship between adolescents and influencers. Several studies have confirmed that the distinctive features of social media influencers, including relatedness, autonomy and competence, affect the behavior, the psychological situation and the emotional side of consumers (Deci & Ryan, 2000 ). Do the distinctive features of social media influencers affect teens' behavior? This kind of question has become among the most controversial (e.g. Djafarova & Rushworth, 2017 ). The question remains unresolved, and has not even been addressed in some developing countries like Tunisia. Indeed, there are considerable gaps in the academic understanding of the characteristics of social media influencers and their effect on teen behavior, and this problem persists because of the lack of empirical work investigating this area.

Therefore, this study contributes to the literature in several ways. First, this paper presents a review of social media influencers' distinctive features in the Tunisian context. This is important because social influencers have been considered credible and trustworthy sources of information (e.g. Sokolova & Kefi, 2020 ). Second, this study identifies the motivations that teens have for following social influencers. The MICS6 Survey (2020) shows a gradual increase in suicide rates among Tunisian children (0–19 years). According to the general delegate for child protection, the phenomenon is partly linked to the intensive use of online games. Understanding the main drivers of social media influence among young Tunisians can help professionals and families guide them. Empirically, this study provides the first investigation of teens' mental models using the cognitive approach.

The rest of this paper is organized as follows: the second part presents the theoretical background and research hypotheses; the third part introduces the research methodology; the fourth part presents the application and results; and the last part highlights the conclusion and recommendations.

Theoretical background and research hypotheses

Social media influencers' distinctive features

"Informational social influence" is a concept introduced in the literature by Deutsch and Gerard ( 1955 ) and defined as the change in behavior or opinions that happens when people (consumers) conform to other people (influencers) because they believe that those others have precise and true information (e.g. Djafarova & Rushworth, 2017 ; Alotaibi et al., 2019 ). According to Chahal ( 2016 ), there are two kinds of "influencers": the classic ones are scientists, reporters, lawyers, and other people with expert-level knowledge, while the new ones are social media influencers. Accordingly, social media influencers have many followers who trust them, especially on topics related to their domain of knowledge (e.g. Moore et al., 2018 ). According to the psychology of influence perspective, people often do not realize that they are being influenced because the effect occurs mainly in their subconscious (Pligt & Vliek, 2016 ). When influencers advocate an idea, a service, or a product, they can produce a psychological conformity effect on followers through their distinctive features (Colliander, 2019 ; Jahoda, 1959 ).

Vollenbroek et al. ( 2014 ) conducted a study on social media influencers and the impact of these actors on corporate reputation. To create their model, the authors used the Delphi method: experts were given a questionnaire that covered the characteristics of influential actors, interactions, and networks. The first round of research indicated that most experts highlighted the importance of intrinsic characteristics of influencers such as knowledge, commitment, and trust, while others believed that the size of the network or the reach of a message determines influence. The results of the second round indicated that the most agreed-upon characteristics of a great influencer are being an active mind, being credible, having expertise, being authoritative, being a trendsetter, and having a substantive influence in discussions and conversations. According to previous literature, among the characteristics that distinguish influencers is the ability to be creative, original, and unique. Recently, Casaló et al. ( 2020 ) indicated that originality and uniqueness positively influence opinion leadership on Instagram. For the rest of this section, we draw on these last two studies to identify the most important distinctive features of social media influencers.

Credibility (expertise and trustworthiness)

According to Lou and Yuan (2019), one of the most distinctive characteristics that attracts an audience is the influencer's credibility, specifically expertise and trustworthiness. Source credibility is an effective route of persuasion and has been linked to several conceptualizations. Following Hovland et al. (1953), credibility is subdivided into expertise and trustworthiness. Expertise reflects the knowledge and competence of the source (the influencer) in a specific area (Ki & Kim, 2019; McCroskey, 1966), while trustworthiness reflects the influencer's honesty and sincerity (Giffin, 1967). Such characteristics make the source more convincing. According to source credibility theory, consumers (the social media audience) give weight to the source of information in order to benefit from the expertise and knowledge of influencers (e.g. Ohanian, 1990; Teng et al., 2014). Spry et al. (2011) pointed out that a trusted influencer's positive perception of a product or service positively affects consumers' attitudes towards the recommended brand. However, if the product does not meet the advertised specifications, consumers lose trust in both the product and the influencer (Cheung et al., 2009). Based on source credibility theory, this work tests one of the research goals: the effect of credibility (expertise and trustworthiness) on adolescent behavior.

Originality and creativity

Originality in social media is the ability of an influencer to provide, on a regular basis, new and differentiated content that attracts the attention of the audience; such content is perceived as innovative, sophisticated, and unusual. Social media influencers seek to create an authentic image in order to construct their own online identity. Marwick (2013) defined authenticity as "the way in which individuals distinguish themselves, not only from each other but from other types of media". Authentic and distinctive content usually attracts attention, and unusual topics can elicit surprise (Derbaix & Vanhamme, 2003). According to Khamis et al. (2017), social media influencers attract consumers' attention by posting authentic content, and audiences generally appreciate the originality and creativity of the ideas (Djafarova & Rushworth, 2017). The originality of the content posted by an influencer is considered a way to resonate with the public (Hashoff, 2017). When a company seeks to promote its products and services through social media, it looks for an influential representative who excels at presenting original and distinctive content; the brand needs to be represented by credible and believable influencers who create authentic content (Sireni, 2020). One of the aims of this work is to identify the effect of authentic content on teenagers' behavior.

Trendsetter and uniqueness

According to Maslach et al. (1985), uniqueness is the state in which an individual feels distinguished from others. Tian et al. (2001) noted that individuals attempt to be radically different from others to enhance their self- and social images. Uniqueness of content refers to the influencer's ability to provide uncirculated content specific to him or her. Gentina et al. (2014) showed that male adolescents take the uniqueness of content into account when evaluating an influencer, particularly in the role of opinion leader. Casaló et al. (2020) found that uniqueness positively influences opinion leadership. Thus, the uniqueness of influencers' content may affect audiences' attitudes, and we therefore aim to test the effect of content uniqueness and trendsetting on teenagers' behavior.

Persuasion exerts a substantive influence in discussions and conversations. According to the psychology of persuasion, psychological tactics built on the principles of persuasion support influencer marketing in one way or another; the objective is to persuade people to make purchase decisions. Persuasion commonly aims to change others' attitudes and behavior in a context of relative freedom (e.g. Perloff, 2008; Crano & Prislin, 2011; Shen & Bigsby, 2013). According to Scheer and Stern (1992), the dynamic effect of marketing occurs when an influencer persuades consumers to participate in a specific business. Influencers' goal is to convince audiences of their own ideas, products, or services. Six principles of persuasion are commonly cited: consensus, consistency, scarcity, reciprocity, authority, and liking. Thus, one objective of this study is to assess the effect of influencers' persuasion on teenagers' behavior.

To sum up, our first hypothesis is the following:

H1: Social media influencers' distinctive features affect teenagers’ behavior.

Social media influencers and teenagers' behavior

Young people and adolescents are increasingly using social media and consequently receive a great deal of information from different sources that may influence their behavior and decisions. The Digital 2021 report (published in partnership with Hootsuite and We Are Social) indicated that connected technologies have become an integral part of people's lives and have grown considerably over the last twelve months, especially with regard to social media, e-commerce, video games, and streaming content. According to the global State of Digital (2021) statistics, the number of social media users worldwide increased by 490 million over the previous year to reach 4.20 billion. In Tunisia, by January 2021 the number of social media users had risen to 8.20 million, or 69 percent of the total population, with 97% accessing via mobile phone. According to the ALEXA report (2021), Google.com, Facebook, and YouTube are the networks most used by Tunisians, and 18.5% of Facebook users are under 13 years old. The recent increase in social media use by young people leads us to ask how such platforms affect their psychological and mental condition, their identity formation, and their self-esteem. One aim of this study is therefore to answer the question: why do teens follow social media influencers?

Identity formation

Identity formation refers to the complex way in which human beings establish a continuous, unique view of the self (Erikson, 1950). The concept is therefore closely attached to notions such as self-concept, personality development, and values. Identity, put simply, is an aggregation of "self-concept, who we are" and "self-awareness" (Aronson et al., 2005). In line with communication theory, Scott (1987) indicated that interpersonal connection is a key factor in identity formation, and the individual's identity formation is the cornerstone of building a personality. A stream of research indicates that consumers accept influence from others they identify with and reject influence when they desire to dissociate (Berger & Heath, 2007; White & Dahl, 2006).

Adolescence is a transitional stage in individuals' lives that spans the interval between childhood and adulthood (e.g. Hogan & Astone, 1986; Sawyer et al., 2018). Here begin the psychological conflicts through which teenagers question themselves and their role in society (e.g. Hill et al., 2018). Teens go through many experiences because of the physical and psychological changes of this self-establishment phase, which influence not only their identity formation but also their personality; radical changes occur in their lives that may affect the course of their future. The family (in particular parents' behavior) is the first influence on children's view of themselves, but it is not the only one. In the era of globalization and technological development, social media plays an important role in shaping adolescents' identity (see Gajaria et al., 2011). At this stage, individuals start to use the flood of information received from various sources (especially social media) to work out a sense of self and personal identity. Davis (2013) found that students who communicated online with their peers expressed a clearer self-concept, and self-concept clarity is in turn related to friendship quality. According to Arnett and Hughes (2014), identity formation is the result of "thinking about the type of person you want to be" (p. 340). Because of the intense presence of social media in teenagers' lives, identity formation is strongly affected by the personalities of social media influencers. Kunkel et al. (2004) reported that targeted advertising on social media affects the identity molding of teens by encouraging them to adopt new habits of appearance and consumption. Identification is easier when there is an existing model to imitate.

This work aims to explore the effect of social media influencers' distinctive features on the healthy identity development of teens.

Mimetic bias

Mimicry is not a recent subject in the psychological literature: Kendon (1970) and LaFrance (1982) were the first researchers to introduce the concept. Nevertheless, exploring the effect of mimicry on people's behavior is a newer area of research. Researchers such as Chartrand and Dalton (2009) and Stel and Vonk (2010) described mimicry as an individual's interaction with others through observing and mirroring their behaviors, attitudes, expressions, and postures. Chartrand and Dalton (2009) noted that social surroundings are easily contagious and confirmed people's strong tendency to mimic what they see in their social environment. Individuals resort to mimicry to fulfill their desire to belong to a group and to be active members of society; Lakin et al. (2003) accordingly showed that mimicry can be used to strengthen social links with others, bringing people closer together and creating intimacy. White and Argo (2011) classified mimicry as conscious or unconscious. According to the neuroscience literature, unconscious mimicry occurs through the activation of mirror neurons that lead individuals to mimic others (e.g. Hatfield et al., 1994). Mimickers thus "automatically" imitate others in many situations, such as facial expressions (e.g., smiling), behavioral expressions (e.g., laughing), and postural expressions (e.g., hand positioning) (Meltzoff & Moore, 1983; LaFrance & Broadbent, 1976; Simner, 1971). A more recent stream of research has examined conscious mimicry (White & Argo, 2011; Ruvio et al., 2013). Ruvio et al. (2013) presented the "consumer's doppelganger effect": when consumers intend to look like their role models, they imitate them.

One of the paradoxical challenges of adolescence is teens' simultaneous need for mimicry and differentiation. Among the most common questions adolescents ask is "Who are we?". They commonly identify themselves by comparing themselves with members of the group to which they aim to belong. The feeling of being normal is an obsession for the majority of teenagers; their need to be within the norm, and not to be alienated or at odds with others, can prompt them to do almost anything, even at personal risk, just to be accepted. Today, with the development of social media, family, peers, and friends are no longer the only models that teens mimic; this environment has expanded to include social media influencers. Teens pay close attention to their online image and mimic social media influencers to achieve a sense of belonging. According to Cabourg and Manenti (2017), the content adolescents share with each other about their lives on their own social networks helps them understand and discover one another and build their identity away from their parents. This becomes a problem when adolescents mimic each other merely to avoid being excluded or rejected, even when these actions do not represent them.

Another important aim of this study is to explore the effect of social media influencers' distinctive features on teens' mimicry behavior.

Confirmation bias

Cabourg and Manenti (2017) pointed out that being part of a peer group is a necessity for a teenager: belonging to a group reinforces the sense of existing apart from family restrictions. As mentioned above, and in line with Hernandez et al. (2014), the peer relationships teens create, whether they contribute positively or negatively to their psychosocial development, undoubtedly play a crucial role in identity development. Araman and Brambilla (2016) argued that "teenage is an important stage in life, full of physical and psychological transformation, awakening in love and professional concerns. Identifying yourself with a group makes you feel stronger, to say that you exist, and even to distinguish yourself from society". The development of social media platforms amplifies teens' desire for group belonging. Platforms such as TikTok, Facebook, and Instagram encourage users to interact through likes and comments on other people's posts. According to Davis (2012), casual communication between teens through social networking, using text and instant messages, enhances their sense of belonging; the author also indicates that social media helps teens compare their ideas and experiences with their peers, which further supports that sense. According to Zeng et al. (2017), social media interactions aim to create strong social bonds and raise emotional belonging to a community. Confirmation bias occurs when an individual cannot think and create outside the herd; because of it, teens may be unable to define themselves except by flying inside the swarm, and may identify as fans of a famous influencer just to feel a sense of belonging. This work tests the effect of social media influencers' distinctive features on teens' sense of belonging.

Self-esteem

The psychological literature defines self-esteem as the individual's positive or negative evaluation of himself or herself (Smith et al., 2014). Coopersmith (1965) described self-esteem as the extent to which an individual views the self as competent and worthwhile. A stream of past work has highlighted the effects of social media on self-esteem (Błachnio et al., 2016; Denti et al., 2012; Gonzales & Hancock, 2011), most of it finding that audiences with low self-esteem use social networks more in order to reinforce it. Social media networks also invite self-comparison between users. Following Festinger (1954), social media users engage in self-evaluation by making social comparisons with others on issues such as beauty, popularity, social class or role, and wealth. Social comparison is part of building a teen's personal identity (Weinstein, 2017). Among adolescents, two types of comparison occur on social media: upward and downward comparison (Steers et al., 2014). The former is related to weakened self-esteem and higher depressive symptoms; the latter is characterized by higher self-esteem and lower anxiety (Burrow & Rainone, 2017). According to Wright et al. (2018), self-presentation on social media is tied to the extent to which others accept it and to a level of belonging judged by the number of likes and comments.

This study aims to test the effect of social media influencers' distinctive features on teens’ self-esteem.

Digital distraction

Social media has taken over much of people's spare time, displacing activities such as reading, watching TV, and playing sports (Twenge et al., 2019). Consequently, digital distraction has spread widely, especially with the rise of smartphone use. A study by Luna (2018) found that using smartphones during a meal lowers connectedness and enjoyment and increases distraction compared with keeping devices off. Martiz (2015) found that students with Internet addiction often feel lonely and depressed. More recently, Emerick et al. (2019) reported that students themselves agree that spending a lot of time on social media leads to distraction. Many studies have shown that most teens spend a great deal of time online (e.g., Anderson & Jiang, 2018; Twenge et al., 2018), making them the most vulnerable to digital distraction. We expect that the stronger the distinctive features of influencers, the greater their impact on young people, and the greater the resulting distraction.

Accordingly, our second hypothesis is the following:

H2: The behavior and cognitive biases of teens are affected by social media influencers.

Research methods

Cognitive maps

The cognitive map is a relatively old technique (Huff, 1990), but its use in scientific research has increased in recent years. According to Axelrod (1976), a cognitive map is a mathematical model that reflects a person's belief system; in other words, it represents a person's causal assertions about a limited domain. At the beginning of the 1970s it was popular among behavioral geographers to investigate cognitive maps and their impact on people's spatial behavior. A cognitive map is a type of mental representation that serves an individual to acquire, store, recall, encode, and decode information about the relative locations and attributes of phenomena in their everyday or metaphorical spatial environment. It is usually defined as the graphical representation of a person's beliefs about a particular field: not a scientific model of objective reality, but a graphical representation of an individual's specific beliefs and ideas about complex local situations and issues. Because it is relatively easy for humans to look at such maps and understand the connections between concepts, cognitive maps can also be thought of as graphs, and graphs can represent many aspects of the world and be used to solve a variety of problems. According to Bueno and Salmeron (2009), cognitive maps are a powerful technique for studying human cognitive phenomena on specific topics. This study uses cognitive maps as a tool to investigate the mental schema of teenagers in the Tunisian Scouts: cognitive mapping helps explore the impact of social media on teenage behavior in the Tunisian context, and in particular the effect of influencers' distinctive features on teen behavior.

Data collection and sample selection

The aim of this work is to explore the effect of social media influencers' distinctive features on teenagers' behavior in the Tunisian context, and to investigate whether the psychological health of teens is affected by social media influence. Analyzing how the human mind processes multiple interdependent factors, or any scenario involving highly complex problems, requires richer analytical methods such as the cognitive map technique.

The questionnaire is one of the appropriate methods for constructing a collective cognitive map (Özesmi & Özesmi, 2004). Following Eden and Ackermann (1998), this study uses face-to-face interviews because they are the most flexible method of data collection and the best way to minimize ambiguity in the questionnaire. The questionnaire contains two parts: the first identifies the interviewees, and the second provides the list of concepts for each approach via a cross-matrix. The questionnaire thus takes the form of an adjacency matrix (see Table 1), which is the data collection structure appropriate for building a cognitive map. The adjacency matrix of a graph with n concepts is an (n × n) matrix A = (a_ij), where the entry a_ij quantifies the influence of concept i on concept j.

The variables used in the matrix can either be pre-defined by the interviewer on the basis of the previous literature or be identified during the interview by the interviewees. This paper uses the first method in order to restrict the large number of variables related to both influencers' distinctive features and teenagers' behavioral biases (see Table 2). This work considers two types of social media influencers, Facebook bloggers and Instagrammers, for two reasons. Facebook is the most popular social network among Tunisians, with more than 6.9 million active users in 2020, or 75% of the population aged 13 and over, of whom 44.9% were female and 55.1% male. Instagram is the second most popular platform, with more than 1.9 million users, or 21% of the Tunisian population aged 13 and over.

In this work, we therefore deal with a (10 × 10) adjacency matrix.

Experts (psychologists, academics, etc.) often analyze the relationships between social media and young people's behavior. The contribution of this work is that it relies on the adolescents' own point of view to examine this problem using the cognitive map method; to our knowledge, no similar research has been done before.

This work runs in parallel with the Tunisian state project "Strengthening the partnership between the university and the economic and social environment", which aims to merge scientific work with associative work. We organized an intellectual symposium, in conjunction with the Citizen Journalism Club of the youth home and the Mohamed-Jlaiel Scouts Group of Mahres, entitled "Social Influencers and Their Role in Changing Youth Behaviors". The conference took place on April 3, 2021, in the municipality hall, under the supervision of an inspector of youth and childhood. Scouting is a voluntary educational movement that aims to contribute to the development of young people so that they reach the full benefit of their physical and social capabilities and become responsible individuals. Scouts offer children and adolescents an educational space complementary to that of the family and the school; the association emphasizes community life, taking responsibility, and learning resourcefulness. Scouting strengthens self-confidence and the sense of belonging and keeps young people away from digital distraction. Our sample is therefore based on a questionnaire answered by young people aged between 14 and 17 belonging to the Tunisian Scouts, specifically the Mohamed-Jlaiel Scouts Group of Mahres. Scouting strengthens the willpower of young people and expands their capacity for self-discipline, and Scout youth are integrated into the community and spend more time in physical and mental activities than peers who spend most of their free time on social media. Unfortunately, because of the epidemiological situation Tunisia experienced during this period due to the spread of the coronavirus, we could not convene more than 35 people, and the first sample was limited to 25 young people. A second round of data collection was therefore needed: over two successive months (November and December 2021) we ran a few small workshops (because of the pandemic situation) with Scout youth, yielding a second sample of 38 teens. In total, the data cover 63 young people (26 female and 37 male). The surveys were carried out after parental consent was obtained.

We began each interview by presenting the pros and cons of social media and its effect on audiences' behavior. Once the participants had formed an idea of the topic, and after we had defined and explained all the variables, we asked them to answer the questionnaire, which we supervised directly. Teens were invited to fill in the questionnaire (in the form of a matrix) using four possible values:

0 if variable i has no influence on variable j;
1 if variable i has a weak influence on variable j;
2 if variable i has a strong influence on variable j;
3 if variable i has a very strong influence on variable j.

To sum up, the final data set contains 63 individual matrices. The aim of the questionnaire is to build the perception maps (Lajnef et al., 2017).
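To make the data structure concrete, the sketch below encodes the survey responses as a stack of adjacency matrices scored on the four-point scale above. The ten variable labels are an assumption reconstructed from the features and behavioral variables discussed in the theoretical section (Table 2 is not reproduced here), so they are illustrative rather than the authors' exact list.

```python
# A minimal sketch of how the survey data can be encoded, assuming ten
# variables (five influencer features, five behavioral/cognitive variables);
# the exact labels of Table 2 may differ.
import numpy as np

VARIABLES = [
    "trustworthiness", "expertise", "originality", "uniqueness", "persuasion",
    "identity_formation", "mimetic_bias", "confirmation_bias",
    "self_esteem", "digital_distraction",
]
N_VARS = len(VARIABLES)      # 10 concepts -> a 10 x 10 adjacency matrix
N_RESPONDENTS = 63           # 25 teens in the first wave + 38 in the second

# responses[k, i, j] holds respondent k's score (0-3) for the influence of
# variable i on variable j, following the four-point scale above.
responses = np.zeros((N_RESPONDENTS, N_VARS, N_VARS), dtype=int)

# Example: respondent 0 rates the influence of trustworthiness on mimetic
# bias as "very strong" (3).
i = VARIABLES.index("trustworthiness")
j = VARIABLES.index("mimetic_bias")
responses[0, i, j] = 3
```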

Collective cognitive map method

This work is a qualitative investigation whose research instrument is the cognitive approach. It aims to create a collective cognitive map through an interviewing process: young people are invited to fill in the adjacency matrices, giving their opinion on the effect of social media influencers' distinctive features on teenagers' behavior. To obtain an overall view, the individual maps (created from the adjacency matrices) are aggregated into a collective cognitive map. Since individual maps denote individual thinking, the collective map is used to understand the thinking of the group, and the aggregated map highlights the similarities and differences between individuals (Lajnef et al., 2017). A cognitive map consists essentially of two elements: concepts (variables) and links (relations between variables). The importance of a concept is mainly related to its links with the other variables.

This technique helps us better understand the individual and collective cognitive universe. Since the pioneering work of Tolman (1948), the cognitive map has been treated as a mathematical model reflecting an individual's belief system. Axelrod (1976), working in the political and economic field, considered cognitive maps as graphs reflecting a mental model that can be used to predict, understand, and improve people's decisions. More recently, Garoui and Jarboui (2012) defined the cognitive map as a tool for visualizing an individual's ideas and beliefs in a complex area. This work builds a collective cognitive map to capture the complex relationships between teenagers and social media influencers; for this reason, we investigate the effect of social media influencers' distinctive features on teenagers' behavior using an aggregated cognitive map.
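The paper does not spell out the exact aggregation rule, so the sketch below uses element-wise averaging of the individual matrices, one common way of building a collective map from individual adjacency matrices; it is an illustration under that assumption, not the authors' implementation.

```python
# A minimal sketch of one common aggregation rule: the collective adjacency
# matrix is the element-wise mean of the individual matrices. The paper does
# not state the exact rule used, so treat this as illustrative.
import numpy as np

def collective_map(individual_matrices: np.ndarray) -> np.ndarray:
    """Aggregate a (respondents x n x n) stack of adjacency matrices
    into one collective (n x n) adjacency matrix."""
    return individual_matrices.mean(axis=0)

# Placeholder data: 63 respondents, 10 variables, scores in {0, 1, 2, 3}.
rng = np.random.default_rng(0)
responses = rng.integers(0, 4, size=(63, 10, 10))
collective = collective_map(responses)   # entries now lie in [0, 3]
```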

Results and discussion

In this study, we report all measures, manipulations and exclusions.

Structural analysis and collective cognitive map

This paper uses the structural analysis method to test the relationships between the concepts and to construct a collective cognitive map. According to Godet et al. (2008), structural analysis is "a systematic, matrix form, analysis of relations between the constituent variables of the studied system and those of its explanatory environment". The purpose of structural analysis is to identify the key factors driving the evolution of the system, based on a matrix of the relationships among them (Villacorta et al., 2012). The MICMAC software allows us to process the collected information in the form of charts and graphs in order to reconstruct the mental representation of the interviewees.

The influence × dependence chart

This work uses the influence-dependence chart, in which factors are categorized by their clustered position. The influence × dependence plane distinguishes four categories of factors: determinant variables, relay variables, dependent (result) variables, and excluded variables. The chart is therefore divided into four zones, presented as follows (Fig. 1):

Figure 1: Influence-dependence chart, according to the MICMAC method.
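As a rough illustration of how such a chart is built, the sketch below computes each variable's direct influence (row sum) and dependence (column sum) from a collective adjacency matrix and assigns it to one of the four zones by comparing both scores with their means. This mirrors the general logic of MICMAC-style structural analysis; the MICMAC software's exact computation (which can also account for indirect influence) may differ.

```python
# A minimal sketch of the influence x dependence classification: direct
# influence = row sum, direct dependence = column sum, and each variable is
# placed in one of four zones relative to the mean of each axis. This
# illustrates the general MICMAC logic, not the software's exact output.
import numpy as np

def classify_zones(adjacency: np.ndarray, labels: list) -> dict:
    influence = adjacency.sum(axis=1)   # how strongly a variable drives others
    dependence = adjacency.sum(axis=0)  # how strongly a variable is driven
    zones = {}
    for k, name in enumerate(labels):
        high_inf = influence[k] >= influence.mean()
        high_dep = dependence[k] >= dependence.mean()
        if high_inf and not high_dep:
            zones[name] = "zone 1: determinant"
        elif high_inf and high_dep:
            zones[name] = "zone 2: relay"
        elif not high_inf and not high_dep:
            zones[name] = "zone 3: excluded"
        else:
            zones[name] = "zone 4: dependent"
    return zones
```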

Zone 1: Influent or determinant variables

Influent variables are located in the top left of the chart. According to Arcade et al. (1999), this category of variables has high influence and low dependence; such variables drive the dynamics of the whole system, depending on how much we can control them as key factors. The results identify uniqueness, trustworthiness, and mimetic bias as determinant variables. The ability of influencers to provide personalized and unique content influences Tunisian teens' behavior, a finding in line with Casaló et al. (2020). The results also indicate that teens mimic social media influencers to feel that they belong; such behavior allows them to discover each other and to create an identity away from their parents (Cabourg & Manenti, 2017). The most influential variable in the system is trustworthiness: the more trustworthy influencers are on social media, the greater their influence on young people. This finding conforms to previous studies (Giffin, 1967; Spry et al., 2011).

Zone 2: Relay variables

The intermediate or relay variables are situated at the top right of the chart. These concepts are characterized by both high influence and high dependence; they are also called "stake factors" because they are unstable. Relay variables influence the system depending on the other variables, and any change in these factors affects both themselves and external factors, adjusting the system. In this study, most of the influencers' distinctive features (persuasion, originality, and expertise) act as relay variables. The results indicate that persuasion affects young people's convictions, depending on the other variables, in line with previous studies (e.g. Perloff, 2008; Shen & Bigsby, 2013). Furthermore, the more expertise social media influencers have, the greater their influence on young people, which supports Ki and Kim (2019). In addition, original content presented on social media attracts the audience more than standard content, in line with Khamis et al. (2017) and Djafarova and Rushworth (2017).

Based on the results for zones 1 and 2, we can conclude that the social media influencers' distinctive features tested in this work affect teenagers' behavior. Therefore, H1 is accepted.

Zone 3: Excluded or autonomous variables

The excluded (autonomous) variables are positioned in the bottom left of the chart. This category is characterized by low influence and low dependence; such variables have little impact on the overall dynamics of the system because they lie close to the origin. No variable in this work falls into this class.

Zone 4: Dependent variables

The dependent variables are located at the bottom right of the chart. They are characterized by a low degree of influence and a high degree of dependence: they are less influential and highly sensitive to the rest of the variables (the influent and relay variables). According to our results, the dependent variables are those related to teens' behavior and cognitive biases. Social media influencers affect the identity development of teens, in line with Kunkel et al. (2004). The results also show that young people often identify themselves as fans of a famous influencer just to feel that they belong, consistent with Davis (2012) and Zeng et al. (2017). Furthermore, the findings indicate that young people use social networks to reinforce their self-esteem, confirming Denti et al. (2012) and Błachnio et al. (2016). Finally, influencers on social media play a role in digital distraction, which supports the result found by Emerick et al. (2019).

Based on the results of zone 4, we can conclude that the behavior and cognitive biases of teens are affected by social media influencers. Therefore, H2 is accepted.

Collective cognitive maps

In this study, we aggregated the individual matrices to create a collective cognitive map. The direct influence graph (Figs. 2 and 3) yields several interesting findings. First, the greater the experience of influencers on social media, the more original the content they produce. Furthermore, the more expertise influencers have, the higher their degree of persuasion over young people; similarly, Kirmani and Campbell (2004) found that influencers' experience with persuasion is a factor that affects consumers. Beyond experience, the more an influencer provides unique, uncirculated content specific to him or her, the greater the originality of that content; previous studies have argued that unique ideas are the most stringent route to producing original ideas (e.g., Wallach & Kogan, 1965; Wallach & Wing, 1969). In general, influencers who produce distinctive content enjoy great popularity because they set new trends, and our results indicate that young people want to be among their fans just to feel that they belong. Finally, our findings indicate that the originality of content can be a source of digital distraction: teenagers spend a lot of time on social media keeping up with new trends (e.g. Chassiakos & Stager, 2020).

Figure 2: The collective cognitive map (25% of links).

Figure 3: The collective cognitive map (100% of links).
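The two figure views differ only in how many links are displayed. A simple way to derive a "25% of links" view is to keep the strongest quartile of non-zero links and drop the rest, as in the sketch below; this is an assumption about the filtering rule, and the MICMAC software may apply a different one.

```python
# A minimal sketch of link filtering: keep only the strongest fraction of
# non-zero links in the collective map (e.g. 25%), setting the rest to zero.
# This illustrates the idea behind the 25% and 100% views; the exact rule
# used by the MICMAC software may differ.
import numpy as np

def strongest_links(adjacency: np.ndarray, keep_fraction: float = 0.25) -> np.ndarray:
    weights = adjacency[adjacency > 0]
    if weights.size == 0:
        return np.zeros_like(adjacency)
    cutoff = np.quantile(weights, 1.0 - keep_fraction)  # threshold for the top links
    return np.where(adjacency >= cutoff, adjacency, 0)
```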

The influencers' experience and degree of trustworthiness, together with the originality of their content, enhance their ability to persuade adolescents. During adolescence, young people look for a model to follow; according to our results, that model can be a social media influencer with a strong ability to persuade.

In recent years, the increasing use of social media has enabled users to obtain a large amount of information from many sources. This evolution has affected audiences' behavior, attitudes, and decisions, especially among young people. This study therefore contributes to the literature in several ways. On the one hand, it presents the most distinctive features of social media influencers and tests their effect on teenagers' behavior using a non-clinical sample of young Tunisians; on the other hand, it identifies teens' motivations for following social media influencers. The study also applies a relatively new methodology, a cognitive approach based on structural analysis. According to Benjumea-Arias et al. (2016), the aim of structural analysis is to determine the key factors of a system by identifying their dependence or influence, thereby reducing system complexity. The present study provides a collective cognitive map for a sample of young Tunisians, which helps us understand the impact of Facebook bloggers and Instagrammers on Tunisian teen behavior.

This study presents several important findings. First, the influencers' distinctive features tested in this work affect teenagers' behavior: influencers with a high level of honesty and sincerity earn trust among teens, in line with Giffin (1967), and the influencer's ability to provide original and unique content affects teens' behavior, confirming Casaló et al. (2020). In addition, the ability to influence is related to the ability to persuade and to expertise.

The findings from the direct influence graph reveal that the influencers' distinctive features are interconnected. Experience, degree of trustworthiness, and the originality of the submitted content shape an influencer's ability to persuade adolescents; in turn, a high degree of persuasion affects teens' behavior, attitudes, and decisions and influences their identity formation. High experience and uniqueness help the influencer produce more original content, and young people spend more time watching original content (e.g. Chassiakos & Stager, 2020); the originality of content can thus be a source of digital distraction.

The rise in psychological problems among adolescents in Tunisia carries troubling risks. According to the MICS6 Survey (2020), 18.7% of children aged 15–17 suffer from anxiety and 5.2% are depressed. The incidence of suicide among children (0–19 years old) was 2.07 cases per 100,000 in 2016, up from 1.4 per 100,000 in 2015, and most child suicides concern 15–19-year-olds. According to the general delegate for child protection, these deaths are in part linked to intensive use of online games. Yet scientific studies rarely test the link between social media use and psychological disorders among young people in the Tunisian context. Our results emphasize the important role of influencers' distinctive features and their effect on teens' behavior.

It is therefore necessary to go deeper into the factors that influence the psychological health of teens. We encourage researchers to explore this topic further; doing so can uncover ways to help teens avoid various psychological and cognitive problems, or at least recognize them and understand the danger they can pose to themselves and others.

These results have implications for different actors, such as researchers and practitioners interested in the psychological field.

This work suffers from some methodological and contextual limitations that suggest avenues for future research. First, the sample size is relatively small because of the epidemiological situation Tunisia experienced while this work was being completed. Second, this work was limited to the direct relationships between variables. We therefore suggest widening the circle of respondents, for example by interviewing specialists in the psychological field, and, from an empirical point of view, going deeper into the topic by testing the indirect relationships among variables.

Alexa. (2021). Amazon Alexa. Retrieved January 24, 2021 from https://www.alexa.com/topsites/countries/TN

Alotaibi, T. S., Alkhathlan, A. A., & Alzeer, S. S. (2019). Instagram shopping in Saudi Arabia: What influences consumer trust and purchase decisions. International Journal of Advanced Computer Science and Applications , 10 (11). https://thesai.org/Publications/ViewPaper?Volume=10&Issue=11&Code=IJACSA&SerialNo=81

Anderson, M. (2018, May 31). Teens, social media and technology 2018.  Pew Research Center: Internet, Science & Tech . Retrieved January 1, 2020 from https://www.pewresearch.org/internet/2018/05/31/teens-social-mediatechnology-2018/

Anderson, M., & Jiang, J. (2018). Teens, social media & technology 2018. Pew Research Center, 31 (2018), 1673–1689.


Araman T., & Brambilla, P. (2016). School in the digital age. Migros magazine - MM46 . pp. 13–19.

Arcade, J., Godet, M., Meunier, F., & Roubelat, F. (1999). Structural analysis with the MICMAC method & actors' strategy with the MACTOR method. In J. Glenn (Ed.), Futures research methodology. American Council for the United Nations University, The Millennium Project.

Arnett, J. J. (2000). Emerging adulthood a theory of development from the late teens through the twenties. American Psychologist, 55 , 469–480.


Arnett, J. J., & Hughes, M. (2014). Adolescence and emerging adulthood (pp. 102–111). Pearson.


Aronson, E., Wilson, T. D., & Akert, R. M. (2005). Social psychology (Vol. 5). Prentice Hall.

Audrezet, A., de Kerviler, G., & Moulard, J. G. (2020). Authenticity under threat: When social media influencers need to go beyond self-presentation. Journal of business research, 117 , 557–569.‏

Axelrod, R. (1976). The cognitive mapping approach to decision making. Structure of Decision, 1 (1), 221–250.

Bandura, A., & Adams, N. E. (1977). Analysis of self-efficacy theory of behavioral change. Cognitive Therapy and Research, 1 (4), 287–310.


Benjumea-Arias, M., Castañeda, L., & Valencia-Arias, A. (2016). Structural analysis of strategic variables through micmac use: Case study. Mediterranean Journal of Social Sciences, 7 (4), 11.

Berger, J., & Heath, C. (2007). Where consumers diverge from others: Identity signaling and product domains. Journal of Consumer Research, 34 , 121–134.

Błachnio, A., Przepiorka, A., & Pantic, I. (2016). Association between Facebook addiction, self-esteem and life satisfaction: A cross-sectional study. Computers in Human Behavior, 55 , 701–705.

Boerman, S. C. (2020). The effects of the standardized Instagram disclosure for micro-and meso-influencers. Computers in Human Behavior, 103 , 199–207.

Bueno, S., & Salmeron, J. L. (2009). Benchmarking main activation functions in fuzzy cognitive maps. Expert Systems with Applications, 36 (3), 5221–5229.

Burrow, A. L., & Rainone, N. (2017). How many likes did I get?: Purpose moderates links between positive social media feedback and self-esteem. Journal of Experimental Social Psychology, 69 , 232–236.

Cabourg, C., & Manenti, B. (2017). Portables: La face cachée des ados . Flammarion.

Casaló, L. V., Flavián, C., & Ibáñez-Sánchez, S. (2020). Influencers on instagram: Antecedents and consequences of opinion leadership. Journal of business research, 117 , 510–519.‏‏

Chahal, M. (2016). Four trends that will shape media in 2016. Marketing Week. Available from: http://www.marketingweek.Com/2016/01/08/four-trendsthat-will-shape-media-in-2016 . Accessed 1 May 2018.

Chartrand T. L., Dalton A. N. (2009). Mimicry: Its ubiquity, importance, and functionality. In Morsella E., Bargh J. A., Gollwitzer P. M. (Eds.), Oxford handbook of human action (pp. 458–483). New York, NY: Oxford University Press.

Chassiakos, Y. R., & Stager, M. (2020). Chapter 2 - Current trends in digital media: How and why teens use technology. In M. A. Moreno & A. J. Hoopes (Eds.), Technology and adolescent health (pp. 25–56). Academic Press. https://doi.org/10.1016/B978-0-12-817319-0.00002-5

Cheung, M. Y., Luo, C., Sia, C. L., & Chen, H. (2009). Credibility of electronic word-of-mouth: Informational and normative determinants of on-line consumer recommendations. International Journal of Electronic Commerce, 13 (4), 9–38.

Colliander, J. (2019). “This is fake news”: Investigating the role of conformity to other users’ views when commenting on and spreading disinformation in social media. Computers in Human Behavior, 97 , 202–215.

Coopersmith, S. (1965). The antecedents of self-esteem . Princeton.

Crano, W. D., & Prislin, R. (2011). Attitudes and attitude change . Psychology Press.

Dacey, J. S., & ve Kenny, M. (1994). Adolescent development . Brown ve Benchmark Publishers.

Davis, K. (2012). Friendship 2.0: Adolescents’ experiences of belonging and self-disclosure online. Journal of Adolescence, 35 (6), 1527–1536.

Davis, T. (2013). Building and using a personal/professional learning network with social media. The Journal of Research in Business Education, 55 (1), 1.

De Veirman, M., Cauberghe, V., & Hudders, L. (2017). Marketing through Instagram influencers: The impact of number of followers and product divergence on brand attitude. International Journal of Advertising, 36 (5), 798–828.

Deci, E. L., & Ryan, R. M. (2000). The" what" and" why" of goal pursuits: Human needs and the self-determination of behavior. Psychological Inquiry, 11 (4), 227–268.

Denti, L., Barbopuolos, I., Nilsson, I., Holmberg, L., Thulin, M., Wendeblad, M., Andén, L., & Davidsson, E. (2012). Sweden’s largest facebook study. Gothenburg Research Institute, 2012 :3.

Derbaix, C., & Vanhamme, J. (2003). Inducing word-of-mouth by eliciting surprise–a pilot investigation. Journal of Economic Psychology, 24 (1), 99–116.

Deutsch, M., & Gerard, H. B. (1955). A study of normative and informational social influences upon individual judgment. The Journal of Abnormal and Social Psychology, 51 (3), 629.

Djafarova, E., & Rushworth, C. (2017). Exploring the credibility of online celebrities’ Instagram profiles in influencing the purchase decisions of young female users. Computers in Human Behavior, 68 , 1–7.

Eden, C., & Ackermann, F., (1998). Analyzing and comparing idiographic causal maps. In Eden, C., Spender, J.-C. (Eds.), Managerial and organizational cognition theory, methods and research . Sage, London, pp. 192–209

Elkind, D. (1967). Egocentrism in adolescence. Child Development, 38 (4), 1025–1033.

Emerick, E., Caldarella, P., & Black, S. J. (2019). Benefits and distractions of social media as tools for undergraduate student learning. College Student Journal, 53 (3), 265–276.

Erikson, E. H. (1950). Childhood and society . Norton.

Festinger, L. (1954). A theory of social comparison processes. Human Relations, 7 (2), 117–140.

Gajaria, A., Yeung, E., Goodale, T., & Charach, A. (2011). Beliefs about attention- deficit/hyperactivity disorder and response to stereotypes: Youth postings in facebook groups. Journal of Adolescent Health, 49 (1), 15–20.

Garoui, N., Jarboui, A., (2012). Cognitive approach of corporate governance: A visualization test of mental models with cognitive mapping technique.  Romanian Economic Journal, 15 (43), 61–96.

Gentina, E., Butori, R., & Heath, T. B. (2014). Unique but integrated: The role of individuation and assimilation processes in teen opinion leadership. Journal of Business Research, 67 (2), 83–91.

Giffin, K. (1967). Interpersonal trust in small-group communication. Quarterly Journal of Speech, 53 (3), 224–234.

Godet, M., Durance, P. H., & Gerber. (2008). Strategic foresight (la prospective): use and misuse of scenario building . LIPSOR Working Paper (Cahiers du LIPSOR).

Gonzales, A. L., & Hancock, J. T. (2011). Mirror, mirror on my Facebook wall: Effects of exposure to Facebook on self-esteem. Cyberpsychology, Behavior, and Social Networking, 14 (1–2), 79–83.

Hashoff. (2017) Influencer marketer . A #Hashoff state of the union report. Available at:  https://www.hashoff.com/ . Accessed October 2019.

Hatfield, E., Cacioppo, J. T., & Rapson, R. L. (1994). Emotional contagion . Cambridge University Press.

Hernandez, L., Oubrayrie-Roussel, N., & Lender, Y. (2014). Self-affirmation in the group of peers to school demobilization. In NecPlus (Ed.) Childhood (2), pp. 135–157. Recovered on https://www.cairn.info/revue-enfance2-2014-2-page-135.htm

Hill, R. M., Del Busto, C. T., Buitron, V., & Pettit, J. W. (2018). Depressive symptoms and perceived burdensomeness mediate the association between anxiety and suicidal ideation in adolescents. Archives of Suicide Research, 22 (4), 555–568.

Hogan, D., & Astone, N. (1986). The transition to adulthood. Annual Review of Sociology, 12 , 109–130.

Hovland, C. I., Janis, I. L., & Kelley, H. H. (1953). Communication and persuasion: Psychological studies of opinion change . New Haven, CT: Yale University Press.

Huff, A. S. (1990). Mapping strategic thought. In A. S. Huff (Ed.), Mapping strategic thought (pp. 11–49). Wiley.

Jahoda, G. (1959). Development of the perception of social differences in children from 6 to 10. British Journal of Psychology, 50 (2), 159–175.

Kendon, A. (1970). Movement coordination in social interaction: Some examples described. Acta Psychologica, 32, 101–125.

Khamis, S., Ang, L., & Welling, R. (2017). Self-branding, ‘micro-celebrity’and the rise of Social Media Influencers. Celebrity Studies, 8 (2), 191–208.

Ki, C. W. C., & Kim, Y. K. (2019). The mechanism by which social media influencers persuade consumers: The role of consumers’ desire to mimic. Psychology & Marketing, 36 (10), 905–922.

Kirmani, A., & Campbell, M. C. (2004). Goal seeker and persuasion sentry: How consumer targets respond to interpersonal marketing persuasion. Journal of Consumer Research, 31 (3), 573–582.

Kunkel, D., Wilcox, B. L., Cantor, J., Palmer, E., Linn, S., & Dowrick, P. (2004). Report of the APA task force on advertising and children. Washington, DC: American Psychological Association, 30 , 60.

LaFrance, M., & Broadbent, M. (1976). Group rapport: Posture sharing as a nonverbal indicator. Group & Organization Studies, 1 (3), 328–333.

LaFrance, M. (1982). Posture mirroring and rapport: Interaction rhythms. New York: Human Sciences Press, 279–298.

Lajnef, K., Ellouze, S., & Mohamed, E. B. (2017). How to explain accounting manipulations using the cognitive mapping technique? An evidence from Tunisia. American Journal of Finance and Accounting, 5 (1), 31–50.

Lakin, J. L., Jefferis, V. E., Cheng, C. M., & Chartrand, T. L. (2003). The chameleon effect as social glue: Evidence for the evolutionary significance of nonconscious mimicry. Journal of Nonverbal Behavior, 27 (3), 145–162.

Lazarsfeld, P. F., Berelson, B., & Gaudet, H. (1944). The people’s choice: How the voter makes up his mind in a presidential campaign . Duell, Sloan and Pearce.

Lou, C., & Yuan, S. (2019). Influencer marketing: How message value and credibility affect consumer trust of branded content on social media. Journal of Interactive Advertising, 19 (1), 58–73.

Loureiro, S. M. C., & Sarmento, E. M. (2019). Exploring the determinants of instagram as a social network for online consumer-brand relationship. Journal of Promotion Management, 25 (3), 354–366.

Luna, K. (2018). Dealing with digital distraction. Retrieved January 2, 2020 from https://www.apa.org/news/press/releases/2018

Martiz, G. (2015). A qualitative case study on cell phone appropriation for language learning purposes in a Dominican context . Utah State University.

Marwick, A. E. (2015). Status update: Celebrity, publicity, and branding in the social media age . Wiley.

Marwick, A. (2013). They’re really profound women, they’re entrepreneurs. Conceptions of authenticity in fashion blogging. In 7th international AIII conference on weblogs and social media (ICWSM), July (vol. 8).

Maslach, C., Stapp, J., & Santee, R. T. (1985). Individuation: Conceptual Analysis and Assessment. Journal of Personality and Social Psychology, 49 (September), 729–738.

McCroskey, J. C. (1966). Scales for the measurement of ethos. Speech Monographs, 33 (1), 65–72.

Meltzoff, A. N., & Moore, M. K. (1983). Newborn infants imitate adult facial gestures. Child Development  54 (3), 702–709.

Moore, A., Yang, K., & Kim, H. M. (2018). Influencer marketing: Influentials’ authenticity, likeability and authority in social media. In International Textile and Apparel Association Annual Conference Proceedings . Iowa State University Digital Press.

Ohanian, R. (1990). Construction and validation of a scale to measure celebrity endorsers’ perceived expertise, trustworthiness, and attractiveness. Journal of Advertising, 19 (3), 39–52.

Özesmi, U., & Özesmi, S. L. (2004). Ecological models based on people’s knowledge: A multi-step fuzzy cognitive mapping approach. Ecological Modelling, 176 (1–2), 43–64.

Perloff, R. M. (2008). Political Communication: Politics, Press, and Public in America. Boca Raton, FL: Routledge. The SAGE Handbook of Persuasion, 258–277.

Pligt, J., & Vliek, M. (2016). The Psychology of Influence: Theory, research and practice . Routledge.

Ruvio, A., Gavish, Y., & Shoham, A. (2013). Consumer’s doppelganger: A role model perspective on intentional consumer mimicry. Journal of Consumer Behaviour, 12 (1), 60–69.

Sawyer, S. M., Azzopardi, P. S., Wickremarathne, D., & Patton, G. C. (2018). The age of adolescence. The Lancet Child & Adolescent Health, 2 (3), 223–228.

Scheer, L. K., & Stern, L. W. (1992). The effect of influence type and performance outcomes on attitude toward the influencer. Journal of Marketing Research (JMR), 29 (1), 128–142.

Scott, W. R. (1987). The adolescence of institutional theory. Administrative Science Quarterly, 32 (4), 493–511.

Shen, L. J., & Bigsby, E. (2013). The effects of message features: Content, structure, and style. In J. P. Dillard & L. Shen (Eds.), The Sage handbook of persuasion: Developments in theory and practice (2nd ed., pp. 20–35). Los Angeles, CA: Sage.

Simner, M. L. (1971). Newborn’s response to the cry of another infant. Developmental Psychology, 5 (1), 136.

Sireni. (2020). The role of Instagram influencers and their impact on millennials’ consumer behaviour. Theseus. http://www.theseus.fi/handle/10024/347659

Smith, E. R., Mackie, D. M., & Claypool, H. M. (2014). Social psychology. https://doi.org/10.4324/9780203833698 .

Sokolova, K., & Kefi, H. (2020). Instagram and YouTube bloggers promote it, why should I buy? How credibility and parasocial interaction influence purchase intentions. Journal of Retailing and Consumer Services, 53 , 1–9. https://doi.org/10.1016/j.jretconser.2019.01.011

Spry, A., Pappu, R., & Bettina Cornwell, T. (2011). Celebrity endorsement, brand credibility and brand equity. European Journal of Marketing, 45 (6), 882–909.

Steers, M. L. N., Wickham, R. E., & Acitelli, L. K. (2014). Seeing everyone else’s highlight reels: How Facebook usage is linked to depressive symptoms. Journal of Social and ClinicalPsychology, 33 (8), 701–731.

Stel, M., & Vonk, R. (2010). Mimicry in social interaction: Benefits for mimickers, mimickees, and their interaction. British Journal of Psychology, 101 (2), 311–323.

Tajfel, H. (1972). La catégorisation sociale. In S. Moscovici (Ed.), Introduction à la psychologie sociale (pp. 272–302). Larousse.

Tarsakoo, P., & Charoensukmongkol, P. (2019). Dimensions of social media marketing capabilities and their contribution to business performance of firms in Thailand. Journal of Asia Business Studies, 14 (4), 441–461. https://doi.org/10.1108/jabs-07-2018-0204

Teng, S., Khong, K. W., Goh, W. W., & Chong, A. Y. L. (2014). Examining the antecedents of persuasive eWOM messages in social media. Online Information Review, 38 (6), 746–768.

Tian, K. T., Bearden, W. O., & Hunter, G. L. (2001). Consumers’ need for uniqueness: Scale development and validation. Journal of Consumer Research, 28 (1), 50–66.

Tolman, E. C. (1948). Cognitive maps in rats and men. Psychological Review, 55 (4), 189.

Twenge, J. M., Joiner, T. E., Rogers, M. L., & Martin, G. N. (2018). Increases in depressive symptoms, suicide-related outcomes, and suicide rates among US adolescents after 2010 and links to increased new media screen time. Clinical Psychological Science, 6 (1), 3–17.

Twenge, J. M., Martin, G. N., & Spitzberg, B. H. (2019). Trends in US Adolescents’ media use, 1976–2016: The rise of digital media, the decline of TV, and the (near) demise of print. Psychology of Popular Media Culture, 8 (4), 329.

Villacorta, P. J., Masegosa, A. D., Castellanos, D., & Lamata, M. T. (2012). A linguistic approach to structural analysis in prospective studies. In International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (pp. 150–159). Springer.

Vollenbroek, W., De Vries, S., Constantinides, E., & Kommers, P. (2014). Identification of influence in social media communities. International Journal of Web Based Communities, 10 (3), 280–297.

Wallach, M. A., & Kogan, N. (1965). Modes of thinking in young children . New York, NY: Holt, Rinehart, & Winston.

Wallach, M. A., & Wing, C. W. (1969). The talented student: A validation of the creativity- intelligence distinction. New York, NY: Holt, Rinehart & Winston

Weinstein, E. (2017). Adolescents' differential responses to social media browsing: Exploring causes and consequences for intervention. Computers in Human Behavior, 76 , 396–405.‏‏

White, K., & Argo, J. J. (2011). When imitation doesn’t flatter: The role of consumer distinctiveness in responses to mimicry. Journal of Consumer Research, 38 (4), 667–680.

White, K., & Dahl, D. W. (2006). To be or not be? The influence of dissociative reference groups on consumer preferences. Journal of Consumer Psychology, 16 (4), 404–414.

Wright, E. J., White, K. M., & Obst, P. L. (2018). Facebook false self-presentation behaviors and negative mental health. Cyberpsychology, Behavior, and Social Networking, 21 (1), 40–49.

Zeng, F., Tao, R., Yang, Y., & Xie, T. (2017). How social communications influence advertising perception and response in online communities? Frontiers in Psychology, 8 , 1349.



Author information

Authors and affiliations

Faculty of Economics and Management at Sfax Tunisia, University of Sfax, FSEG, 3018, Sfax, Tunisia

Karima Lajnef


Corresponding author

Correspondence to Karima Lajnef .



About this article

Lajnef, K. The effect of social media influencers' on teenagers Behavior: an empirical study using cognitive map technique. Curr Psychol 42 , 19364–19377 (2023). https://doi.org/10.1007/s12144-023-04273-1


Accepted : 12 January 2023

Published : 31 January 2023

Issue Date : August 2023

DOI : https://doi.org/10.1007/s12144-023-04273-1


  • Social media influencers
  • Teenagers' behavior
  • Cognitive approach


Open Access

Peer-reviewed

Research Article

Dog Movie Stars and Dog Breed Popularity: A Case Study in Media Influence on Choice


Affiliations Department of Psychology, Brooklyn College, Brooklyn, New York, United States of America, Centre for the Study of Cultural Evolution, Stockholm University, Stockholm, Sweden

Affiliations Centre for the Study of Cultural Evolution, Stockholm University, Stockholm, Sweden, Department of Archaeology and Anthropology, University of Bristol, Bristol, United Kingdom

Affiliation Department of Psychology, Western Carolina University, Cullowhee, North Carolina, United States of America

  • Stefano Ghirlanda, 
  • Alberto Acerbi, 
  • Harold Herzog


  • Published: September 10, 2014
  • https://doi.org/10.1371/journal.pone.0106565


Fashions and fads are important phenomena that influence many individual choices. They are ubiquitous in human societies, and have recently been used as a source of data to test models of cultural dynamics. Although a few statistical regularities have been observed in fashion cycles, their empirical characterization is still incomplete. Here we consider the impact of mass media on popular culture, showing that the release of movies featuring dogs is often associated with an increase in the popularity of featured breeds, for up to 10 years after movie release. We also find that a movie's impact on breed popularity correlates with the estimated number of viewers during the movie's opening weekend—a proxy of the movie's reach among the general public. Movies' influence on breed popularity was strongest in the early 20th century, and has declined since. We reach these conclusions through a new, widely applicable method to measure the cultural impact of events, capable of disentangling the event's effect from ongoing cultural trends.

Citation: Ghirlanda S, Acerbi A, Herzog H (2014) Dog Movie Stars and Dog Breed Popularity: A Case Study in Media Influence on Choice. PLoS ONE 9(9): e106565. https://doi.org/10.1371/journal.pone.0106565

Editor: Alex Mesoudi, Durham University, United Kingdom

Received: March 9, 2014; Accepted: July 30, 2014; Published: September 10, 2014

Copyright: © 2014 Ghirlanda et al. This is an open-access article distributed under the terms of the Creative Commons Attribution License , which permits unrestricted use, distribution, and reproduction in any medium, provided the original author and source are credited.

Data Availability: The authors confirm that all data underlying the findings are fully available without restriction. Dog breed popularity data are available at figshare: http://dx.doi.org/10.6084/m9.figshare.715895. Movie data are available at http://dx.doi.org/10.6084/m9.figshare.715262.

Funding: AA has been supported by the Uniquely Human project funded by the Swedish Research Council and by a Newton International Fellowship. The funders had no role in study design, data collection and analysis, decision to publish, or preparation of the manuscript.

Competing interests: The authors have declared that no competing interests exist.

Introduction

Fashions and fads are ubiquitous in modern societies [1] , [2] , as well as in “traditional” societies [3] and in past societies [4] , and have been studied in disciplines as diverse as philosophy, sociology, anthropology, and economics [5] – [10] . Recently, fashions have received renewed attention as a source of data to test models of cultural dynamics [11] – [14] . In this context, fashions and fads are defined intuitively as cultural traits whose popularity undergoes striking fluctuations (often short-term) that do not have any obvious cause, and therefore appear whimsical or erratic. Some statistical regularities have nevertheless been found.

Bentley and coworkers showed that, in many cultural domains, relatively few traits are common while the vast majority are very rare (trait frequency follows log-normal or power law distributions, see [12] , [13] , [15] ). They also showed that the hypothesis that individuals copy each other at random is sufficient to explain this pattern. Other findings, however, challenge the idea that chance dominates cultural dynamics. Popularity trends may have a consistent direction for many years [16] , while random copying generally predicts no correlation between years. Furthermore, rates of increase in popularity appear correlated with rates of decrease: what becomes popular rapidly is also rapidly forgotten [14] , [17] . Berger and coworkers have also showed that the popularity of a first name is influenced by the popularity of phonetically similar names [18] . Several models have been developed to accommodate these findings [14] , [16] , [17] .

This paper continues the search for quantitative data in order to better characterize cultural dynamics. In particular, we ask whether it is possible to detect the effect of a specific class of events on fashion dynamics. Within this broader context, we have investigated whether the release of movies featuring dogs is associated with changes in the popularity of featured breeds. This choice was motivated by high interest of the general public in both dogs and movies, and by the availability of good quality data. We show that, indeed, movies have had a significant impact on dog breed popularity in the U.S.A., sometimes influencing sales of featured breeds for a decade or more, but also that their effect has been declining over time. Our results show that, while fashions may appear erratic, it may be possible, at least sometimes, to identify specific underlying causes.

Data sources

The American Kennel Club (AKC) maintains the world's largest dog registry and provided us with the number of registrations for each recognized breed between 1926 and 2005, totaling over 65 million registered dogs (see [19] , [20] for details). To identify movies featuring dogs, we used the following Internet resources: http://www.caninest.com/dog-movies , http://en.wikipedia.org/wiki/List_of_fictional_dogs#Dogs_in_film , and http://www.disneymovieslist.com/best/top-dog-movies.asp , retrieved between August and September, 2012. The results of our search and successive data selection are summarized below. The data are publicly available [21] .

We located 87 movies featuring dogs, of which 81 had been released in the U.S.A. between 1927 and 2004 (the years for which we can calculate at least one-year trend changes). Of these, 63 featured a breed for which data is available in the AKC database. We excluded four movies because the dog was not a main character: Thin man (Metro-Goldwyn-Mayer, 1934), The Swiss family Robinson , (Walt Disney, 1960), The nightmare before Christmas (Touchstone Pictures, 1993), and Meet the Fockers (TriBeCa Productions, 2004). Dogs that we considered “main characters” are typically mentioned in the movie title or prominently featured in movie synopses. We excluded the movie Cujo (Taft Entertainment, 1983) because the dog is a negative character. Of the remaining 59 movies, some featuring the same breed were released only a few years apart. For example, there are seven movies of the Lassie series released between 1943 and 1951, all featuring a collie as the main character. It would be statistically unsound to include all of these movies in our analysis because the impact of different movies on the popularity of collies would then be estimated based partly on the same data. To safeguard the independence of data points entering statistical analysis, we retained movies featuring the same breed only if they were released more than 20 years apart. We could thus compute breed popularity trends for up to 10 years before and after movie release. When we found movies featuring the same breed, we retained the earliest one for analysis, and moved forward in time to include the first movie released more than 20 years later, and so on until all movies were either included or excluded from analysis. In the case of collies, for example, we retained Lassie movies released in 1943 and 1978, excluding seven movies released in 1945–1963 and one movie released in 1994. This step of data selection resulted in the retention of 30 movies. Of these we had to exclude The Plague Dogs (Embassy Pictures, 1982) because the featured breed (the smooth fox terrier) was not recognized by the AKC in 1982. The final data set included thus 29 movies. One movie featured four breeds, and four movies featured two, resulting in a total of 36 data points.
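
As an illustration of the selection rule described above, the following sketch (not the authors' code; the schema and names are hypothetical) keeps, for each breed, the earliest movie and then only movies released more than 20 years after the last retained one. Applied to the collie example, it retains the 1943 and 1978 Lassie releases.

    # Illustrative sketch (not the authors' code) of the 20-year spacing rule:
    # for each breed, keep the earliest movie, then the next movie released
    # more than 20 years after the last retained one, and so on.
    from collections import defaultdict

    def select_movies(movies, min_gap_years=20):
        """movies: list of dicts with 'title', 'year', 'breed' (hypothetical schema)."""
        by_breed = defaultdict(list)
        for m in movies:
            by_breed[m["breed"]].append(m)
        retained = []
        for breed, items in by_breed.items():
            items.sort(key=lambda m: m["year"])
            last_kept = None
            for m in items:
                if last_kept is None or m["year"] - last_kept > min_gap_years:
                    retained.append(m)
                    last_kept = m["year"]
        return retained

    # Collie example from the text: only the 1943 and 1978 releases are retained.
    lassie = [{"title": f"Lassie ({y})", "year": y, "breed": "collie"}
              for y in (1943, 1945, 1951, 1963, 1978, 1994)]
    print([m["year"] for m in select_movies(lassie)])  # [1943, 1978]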

By excluding some movies for the purpose of statistical analysis, we do not mean to imply that these movies have had no effect on breed popularity. For example, the rise in the popularity of collies observed after the release of the first Lassie movie in 1943 may have been partly caused by movies with the same character released in the next few years. In the following, we leave it understood that the effects that, nominally, we attribute to one movie may have been caused by several movies.

We estimated the number of viewers for each movie by dividing the movie's U.S.A. earnings by the average movie ticket price at the time of movie release. These data were obtained from Box Office Mojo ( http://boxofficemojo.com , preferred) or the English language Wikipedia entry of the movie ( http://en.wikipedia.org ). Ticket prices were missing for some years, and were linearly interpolated based on adjacent years. We found total earnings for 23 of the 29 movies retained for analysis. We also found earnings during the opening-weekend for 16 movies.
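
The viewer estimate described above can be sketched as follows. This is an illustrative reconstruction rather than the authors' script, and the earnings and ticket prices used here are placeholders.

    # Hedged sketch of the viewer estimate: viewers ≈ U.S. earnings divided by the
    # average ticket price in the release year, with missing ticket prices linearly
    # interpolated from adjacent years. All numbers below are placeholders.
    import pandas as pd

    ticket_prices = pd.Series({1941: 0.25, 1943: None, 1944: 0.32}).sort_index()
    ticket_prices = ticket_prices.interpolate(method="index")  # fill 1943 from 1941/1944

    def estimated_viewers(us_earnings, release_year):
        return us_earnings / ticket_prices.loc[release_year]

    print(round(estimated_viewers(4_500_000, 1943)))  # illustrative earnings figure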

Estimate of movie effect

The effect of a movie on breed popularity cannot be estimated simply by looking for an increase in breed registrations after movie release. Such an increase, in fact, could be part of a trend in breed popularity that had started before movie release. Indeed, it is possible that a breed is chosen for a movie precisely because it is becoming popular. Thus we study the effect of movies by investigating changes in registration trends rather than in registrations per se . We have constructed an index of trend change such that a value of 100 means that after movie release per capita registrations increased 100% over what was expected based on the pre-release trend ( Fig. 1 ).
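
One plausible way to operationalize such an index, assuming a linear fit to the pre-release per-capita registrations that is then extrapolated over an equally long post-release window, is sketched below; the authors' exact definition may differ in its details, and all data in the example are synthetic.

    # A minimal sketch (not the authors' exact definition) of a trend-change index in
    # which 100 means post-release per-capita registrations were 100% above the level
    # expected by extrapolating a linear fit of the pre-release trend.
    import numpy as np

    def trend_change_index(years, per_capita_reg, release_year, window):
        years = np.asarray(years)
        reg = np.asarray(per_capita_reg, dtype=float)
        pre = (years >= release_year - window) & (years < release_year)
        post = (years > release_year) & (years <= release_year + window)
        slope, intercept = np.polyfit(years[pre], reg[pre], 1)  # pre-release linear trend
        expected = slope * years[post] + intercept              # extrapolated expectation
        observed = reg[post]
        return 100.0 * (observed.sum() - expected.sum()) / expected.sum()

    # Toy example: a flat pre-release trend followed by doubled registrations gives ~100.
    yrs = np.arange(1938, 1949)
    regs = np.where(yrs <= 1943, 1.0, 2.0)
    print(round(trend_change_index(yrs, regs, release_year=1943, window=5)))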

Figure 1. https://doi.org/10.1371/journal.pone.0106565.g001

Using this method, we investigated trends over periods of 1, 2, 5, and 10 years. We report estimated 1-year trends for completeness, but we note that they may be less reliable than estimates of longer trends because they are more influenced by such factors as the time of movie release (e.g., Christmas vs. Easter), delays in dog registrations by owners, and delays in registration processing by the AKC. A graph of all 10-year trends is publicly available [22] . All statistical analyses were performed with R, version 3.0.0 [23] .

Figure 2. https://doi.org/10.1371/journal.pone.0106565.g002

Figure 3. Left: 2-year changes. Right: 10-year changes. The 10 movies associated with the greatest trend changes are highlighted. Statistical information at the bottom of each panel refers to linear fits to the data (gray lines). The movie Snow dogs (Walt Disney, 2002) features Siberian huskies and a border collie, and is associated with large 2-year trend changes for both breeds (rightmost labeled points in the left panel; collies are the top point). Similarly, the movie The incredible journey (Walt Disney, 1963) featured both golden retrievers (labeled) and bulldogs. Several releases of, or sequels to, 101 Dalmatians appear in both panels. https://doi.org/10.1371/journal.pone.0106565.g003

Overall, these data suggest that viewing a movie may cause a long-lasting preference for a breed that can be expressed years later, e.g., when the time comes to buy a new dog. Indeed, trend changes appear to increase when measured over longer periods (Fig. 2, left). For example, 14 of the cases for which 10-year trends could be calculated are associated with stronger 10-year than 2-year trend changes.


While movies have been previously found capable of influencing individual behavior, for example cigarette smoking [25] – [27] , our study is the first to assess the impact of movies over many decades, and the first to study a behavior—choice of dog breed—that is subject to the erratic fluctuations typical of fashions and fads [19] , [28] . Our results confirm quantitatively the common belief that movies can have a lasting impact on popular culture. In the case of dog breed popularity, the impact of movies has been large. For example, the top 10 movies highlighted in Fig. 3 , right, are associated with changes in registration trends such that over 800,000 more dogs were registered in the 10 years after movie release than would have been expected from pre-release trends. These results complement our recent finding that breed popularity appears unrelated to breed temperament and health [29] , lending support to the idea that important aspects of people's life (in this case, their favorite pets) can be strongly influenced by fashions and fads [30] .

We are aware of few studies attempting to quantify the influence of specific events on popular culture. Berger and coworkers found that book sales in the U.S. are influenced (both positively and negatively) by reviews in the New York Times , and that the names used for hurricanes, as well as similar names, increase in popularity among first names [18] , [31] . Together with ours, these studies show that influences on popular culture can be detected given enough data. While we cannot be sure that a single movie, newspaper review, or hurricane can influence culture, pooling data for many similar events can reveal consistent trends. In the quest to understand what influences popular culture, negative results can also be informative. We previously found, for example, that breeds that win the Westminster Kennel Club Dog Show do not, on average, increase in popularity [20] , suggesting that reaching a small specialized audience may not be as effective as reaching the general public.

Lastly, we recall that we have focused on popularity trends rather than on popularity itself, in order to avoid attributing to movies trends that were already ongoing before movie release. Indeed, we found that up-trending breeds may have been chosen more often for movies. Our method can be valuable in all studies in which similar confounds may occur. For example, reviewers may prefer to write about particularly good or bad books, rather than about randomly sampled books. Thus reviews may appear to influence sales when, in reality, both may depend on book quality. Hurricane names, on the other hand, are chosen from a predetermined list that is not influenced by first name popularity, and a re-analysis of Westminster Kennel Club Dog Show data using our method confirms that winning breeds do not become more popular. Thus we are not suggesting that previous studies came to incorrect conclusions, but that our method may provide a more accurate estimate of the effect of specific events on popular culture.

Acknowledgments

Information about movie ticket prices and movie earnings courtesy of Box Office Mojo ( http://www.boxofficemojo.com ). Used with permission. We thank the American Kennel Club ( http://www.akc.org ) for providing breed registration data and those who have collected information about dogs in movies for making this study possible (see Data Sources). We gratefully acknowledge the comments of two anonymous reviewers.

Author Contributions

Conceived and designed the experiments: SG AA HH. Analyzed the data: SG AA. Wrote the paper: SG AA HH.

  • 1. Lieberson S (2000) A matter of taste: How names, fashions, and culture change. New Haven - London: Yale University Press.
  • 2. Bentley RA, Earls M, O'Brien MJ (2011) I'll have what she's having. Cambridge, MA: MIT Press.
  • 5. Smith A (1759/2000) The theory of moral sentiments. Amherst, NY: Prometheus Books.
  • 6. Kant I (1798/2006) Anthropology from a pragmatic point of view. Cambridge, UK: Cambridge University Press.
  • 18. Berger J, Bradlow ET, Braunstein A, Zhang Y (2012) From Karen to Katie: using baby names to understand cultural evolution. Psychological Science.
  • 21. Ghirlanda S, Acerbi A, Herzog HA (2013). Dog movie stars and dog breed popularity (data). figshare. http://dx.doi.org/10.6084/m9.figshare.715262 .
  • 22. Ghirlanda S (2014). Dog movie stars and dog breed popularity (graph). figshare. http://dx.doi.org/10.6084/m9.figshare.937331 .
  • 23. R Core Team (2013) R: A language and environment for statistical computing. R Foundation for Statistical Computing, Vienna, Austria. ISBN 3-900051-07-0.
  • 24. Rogers EM (2003) Diffusion of innovation. Tampa, FL: Free Press.
  • 30. Herzog HA (In press) Biology, culture, and the origins of pet-keeping. Animal Behavior and Cognition.

  • Open access
  • Published: 18 May 2024

Diverse misinformation: impacts of human biases on detection of deepfakes on networks

  • Juniper Lovato   ORCID: orcid.org/0000-0002-1619-7552 1 , 2 ,
  • Jonathan St-Onge   ORCID: orcid.org/0000-0001-5369-4825 1 ,
  • Randall Harp   ORCID: orcid.org/0000-0001-7278-3292 1 , 3 ,
  • Gabriela Salazar Lopez 1 ,
  • Sean P. Rogers 1 ,
  • Ijaz Ul Haq   ORCID: orcid.org/0000-0002-0440-7532 2 ,
  • Laurent Hébert-Dufresne 1 , 2 &
  • Jeremiah Onaolapo 1 , 2  

npj Complexity volume 1, Article number: 5 (2024)


  • Computational science
  • Interdisciplinary studies

Social media platforms often assume that users can self-correct against misinformation. However, social media users are not equally susceptible to all misinformation as their biases influence what types of misinformation might thrive and who might be at risk. We call “diverse misinformation” the complex relationships between human biases and demographics represented in misinformation. To investigate how users’ biases impact their susceptibility and their ability to correct each other, we analyze classification of deepfakes as a type of diverse misinformation. We chose deepfakes as a case study for three reasons: (1) their classification as misinformation is more objective; (2) we can control the demographics of the personas presented; (3) deepfakes are a real-world concern with associated harms that must be better understood. Our paper presents an observational survey ( N  = 2016) where participants are exposed to videos and asked questions about their attributes, not knowing some might be deepfakes. Our analysis investigates the extent to which different users are duped and which perceived demographics of deepfake personas tend to mislead. We find that accuracy varies by demographics, and participants are generally better at classifying videos that match them. We extrapolate from these results to understand the potential population-level impacts of these biases using a mathematical model of the interplay between diverse misinformation and crowd correction. Our model suggests that diverse contacts might provide “herd correction” where friends can protect each other. Altogether, human biases and the attributes of misinformation matter greatly, but having a diverse social group may help reduce susceptibility to misinformation.


Introduction

There is a growing body of scholarly work focused on distributed harm in online social networks. From leaky data 1 , and group security and privacy 2 to hate speech 3 , misinformation 4 and detection of computer-generated content 5 . Social media users are not all equally susceptible to these harmful forms of content. Our level of vulnerability depends on our own biases. We define “diverse misinformation” as the complex relationships between human biases and demographics represented in misinformation. This paper explores deepfakes as a case study of misinformation to investigate how U.S. social media users’ biases influence their susceptibility to misinformation and their ability to correct each other. We choose deepfakes as a critical example of the possible impacts of diverse misinformation for three reasons: (1) their status of being misinformation is binary; they either are a deepfake or not; (2) the perceived demographic attributes of the persona presented in the videos can be characterized by participants; (3) deepfakes are a current real-world concern with associated negative impacts that need to be better understood. Together, this allows us to use deepfakes as a critical case study of diverse misinformation to understand the role individual biases play in disseminating misinformation at scale on social networks and in shaping a population’s ability to self-correct.

We present an empirical survey ( N  = 2016 using a Qualtrics survey panel 6 ) observing what attributes correspond to U.S.-based participants’ ability to detect deepfake videos. Survey participants entered the study under the pretense that they would judge the communication styles of video clips. Our observational study is careful not to prime participants at the time of their viewing video clips so we could gauge their ability to view and judge deepfakes when they were not expecting them (not explicitly knowing if a video is fake or not is meant to emulate what they would experience in an online social media platform). Our survey also investigates the relationship between human participants’ demographics and their perception of the video person(a)’s features and, ultimately, how this relationship may impact the participant’s ability to detect deepfake content.

Our objective is to evaluate the relationship between classification accuracy and the demographic features of deepfake videos and survey participants. Further analysis of other surveyed attributes will be explored in future work. We also recognize that data used to train models that create deepfakes may introduce algorithmic biases in the quality of the videos themselves, which could introduce additional biases in the participant’s ability to guess if the video is a deepfake or not. The Facebook Deepfake Detection Challenge dataset that was used to create the videos we use in our survey was created to be balanced in diversity in several axes (gender, skin-tone, age). We suspect that if there are algorithmic-level biases in the model used resulting in better deepfakes for personas of specific demographics, we would expect to see poorer accuracy across the board for all viewer types when classifying these videos. We do see that viewer groups’ accuracy differs based on different deepfake video groups. However, our focus is on the perception of survey participants towards deepfakes’ identity and demographics to capture viewer bias based on their perception rather than the model’s bias and classification of the video persona’s racial, age, and gender identity. Our goal is to focus on viewers and capture what a viewer would experience in the wild (on a social media platform), where a user would be guessing the identity features of the deepfake and then interrogating if the video was real or not with little to no priming.

This paper adopts a multidisciplinary approach to answer these questions and understand their possible impacts. First, we use a survey analysis to explore individual biases related to deepfake detection. There is abundant research suggesting the demographics of observers and observed parties influence the observer’s judgment and sometimes actions toward the observed party 7 , 8 , 9 , 10 , 11 . In an effort to avoid assumptions about any demographic group, we chose four specific biases to analyze vis-à-vis deepfakes: (Question 1) Priming bias: How much does classification accuracy depend on participants being primed about the potential of a video being fake? Our participants are not primed on the meaning of deepfakes and are not told to be explicitly looking for them prior to beginning the survey. Importantly, we do not explicitly vary the priming of our participants but we compare their accuracy to a previous study with a similar design but primed participants 5 . Participants are debriefed after the completion of the survey questions and then asked to guess the deepfake status of the videos they watched. More information about our survey methodology and why the study was formulated as a deceptive survey can be seen in section 4.4. (Question 2) Prior knowledge: Does accuracy depend on how often the viewer uses social media and whether they have previously heard of deepfakes? Here, we ask participants to evaluate their own knowledge and use their personal assessment to answer this research question. (Question 3) Homophily bias: Are humans better classifiers of video content if the perceived demographic of the video persona matches their own identity? (Question 4) Heterophily bias: Inversely, are humans more accurate if the perceived demographic of the video persona does not match their own? We then use results from the survey to develop an idealized mathematical model to theoretically explore population-level dynamics of diverse misinformation on online social networks. Altogether, this allows us to hypothesize the mechanisms and possible impacts of diverse misinformation, as illustrated in Fig. 1 .

Figure 1

Populations are made of individuals with diverse demographic features (e.g., age, gender, race; here represented by colors), and misinformation is likewise made of different elements based on the topics they represent (here shown as pathogens). Through their biases, certain individuals are more susceptible to certain kinds of misinformation. The cartoon represents a situation where misinformation is more successful when it matches an individual’s demographic. Red pathogens spread more readily around red users with red neighbors, thereby creating a misinformed echo chamber whose members cannot correct each other. In reality, the nature of these biases is still unclear, and so are their impacts on online social networks and on the so-called “self-correcting crowd.”

Our paper is structured as follows. We outline the harms and ethical concerns of diverse misinformation and deepfakes in “Introduction.” We explore the possible effects through which demographics impact susceptibility to diverse misinformation through our observational study in “Results.” We then investigate the network-level dynamics of diverse misinformation using a mathematical model in “Mathematical Model.” We discuss our findings and their implications in “Discussion.” Our full survey methodology can be seen in “Methods.”

It is important to understand human biases as they impact the transmission and correction of misinformation and their potential impacts on polarization and degradation of the epistemic environment 12 . In social networks, it has been shown that there are human tendencies toward homophily bias 13 , 14 . Indeed, there are differences in user demographic groups’ abilities to detect deepfakes and misinformation (e.g., age) 15 . Previous work has also shown that biases impact people’s accuracy as an eyewitness through the own-race bias (ORB) phenomenon 16 , 17 , 18 . It is an open question whether deepfake detection also demonstrates the own-race bias (ORB) phenomenon.

Subsequently, these biases impact how social ties are formed and, ultimately, the shape of the social network. For example, in online social networks, homophily often manifests through triadic closures 19 where friends in social networks tend to form new connections that close triangles or triads. Understanding individuals’ and groups’ biases will help understand the network’s structure and dynamics and how information and misinformation spread on the network depending on its level of diversity. For example, depending on the biases and the node-specific diversity of the connections it forms, one may have a system that may be more or less susceptible to widespread dissemination as it would in a Mixed Membership Stochastic Block Model (MMSBM) 20 . A Mixed Membership Stochastic Block Model is a Bayesian community detection method that segments communities into blocks but allows community members to mix with other communities. Assumptions in an MMSBM include a list of probabilities that determine the likelihood of communities interacting. We explore these topics in more detail in “Mathematical Model.”
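
To make the block-structure idea concrete, here is a minimal sketch of sampling a simple (non-mixed-membership) stochastic block model, in which a probability matrix governs how likely members of each community are to connect. The study's model uses a mixed-membership variant with a heterogeneous degree distribution, and the sizes and probabilities here are arbitrary.

    # Minimal two-community stochastic block model: a probability matrix sets how
    # likely members of each community are to connect. (The study uses a
    # mixed-membership variant; parameters here are arbitrary.)
    import random

    def sample_sbm(sizes, p_matrix, seed=0):
        rng = random.Random(seed)
        labels = [b for b, n in enumerate(sizes) for _ in range(n)]  # block label per node
        edges = []
        for i in range(len(labels)):
            for j in range(i + 1, len(labels)):
                if rng.random() < p_matrix[labels[i]][labels[j]]:
                    edges.append((i, j))
        return labels, edges

    # Dense within-community ties and sparse between-community ties: echo-chamber-like.
    labels, edges = sample_sbm(sizes=[20, 20], p_matrix=[[0.3, 0.02], [0.02, 0.3]])
    print(len(labels), "nodes,", len(edges), "edges")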

Previous work has demonstrated that homophily bias towards content aligned with one’s political affiliation can impact one’s ability to detect misinformation 21 , 22 . Traberg et al. show that political affiliation can impact a person’s ability to detect misinformation about political content 21 . They found that viewers misclassified misinformation as being true more often when the source of information aligned with their political affiliation. Political homophily bias, in this case, made them feel as though the source was more credible than it was.

In this paper, we investigate the accuracy of deepfake detection based on multiple homophily biases in age, gender, and race. We also explore other bias types, such as heterophily bias, priming, and prior knowledge bias impacting deepfake detection.

Misinformation is information that imitates real information but does not reflect the genuine truth 23 . Misinformation has become a widespread societal issue that has drawn considerable recent attention. It circulates physically and virtually on social media sites 24 and interacts with socio-semantic assortativity, in that assortative social clusters also tend to be semantically homogeneous 25 . For instance, misinformation promoting political ideology might spread more easily in social clusters based on shared demographics, further exacerbating political polarization and potentially influencing electoral outcomes 26 . This has sparked concerns about the weaponization of manipulated videos for malicious ends, especially in the political realm 26 . Those with higher political interest, as well as those with lower cognitive ability, are more likely to share deepfakes inadvertently, and the relationship between political interest and deepfake sharing is moderated by network size 27 .

Motivations for disseminating misinformation vary broadly; when the content is specifically intended to deceive, we refer to it as disinformation. Motivations include (1) purposefully trying to deceive people by seeding distrust in information, (2) believing the information to be accurate and spreading it mistakenly, and (3) spreading misinformation for monetary gain. In this paper, we do not assume that all deepfakes are disinformation, since we do not consider the intent of the creator; a deepfake could be made to entertain or to showcase technology. We instead focus on deepfakes as misinformation, meaning the potential of a viewer getting duped by, and sharing, a deepfake video, regardless of the creator's intent.

There are many contexts where online misinformation is of concern. Examples include misinformation around political elections and announcements (political harms) 28 ; such deepfake videos can, in theory, alter political figures to say just about anything, raising a series of political and civic concerns 28 ; misinformation on vaccinations during global pandemics (health-related harms) 29 , 30 ; false speculation to disrupt economies or speculative markets 31 ; distrust in news media and journalism (harms to news media) 4 , 32 . People are more likely to feel uncertain than to be misled by deepfakes, but this resulting uncertainty, in turn, reduces trust in news on social media 33 ; false information in critical informational periods such as humanitarian or environmental crises 34 ; and propagation of hate speech online 3 which spreads harmful false content and stereotypes about groups (harms related to hate speech).

Correction of misinformation: There are currently many ways to try to detect and mitigate the harms of misinformation online 35 . On one end of the spectrum are automated detection techniques that focus on the classification of content or on observing anomaly detection in the network structure context of the information or propagation patterns 36 , 37 . Conversely, crowd-sourced correction of misinformation leverages other users to reach a consensus or simply estimate the veracity of the content 38 , 39 , 40 . We will look at the latter form of correction in an online social network to investigate the role group correction plays in slowing the dissemination of diverse misinformation at scale.

Connection with deepfakes: The potential harms of misinformation can be amplified by computer-generated videos used to give fake authority to the information. Imagine, for instance, harmful messages about an epidemic conveyed through the computer-generated persona of a public health official. Unfortunately, deepfake detection remains a challenging problem, and the state-of-the-art techniques currently involve human judgment 5 .

Deepfakes are artificial images or videos in which the persona in the video is generated synthetically. Deepfakes can be seen as false depictions of a person(a) that mimics a person(a) but does not reflect the truth. Deepfakes should not be confused with augmented or distorted video content, such as using color filters or digitally-added stickers in a video. Creating a deepfake can involve complex methods such as training artificial neural networks known as generative adversarial networks (GANs) on existing media 41 or simpler techniques such as face mapping. Deepfakes are deceptive tools that have gained attention in recent media for their use of celebrity images and their ability to spread misinformation across online social media platforms 42 .

Early deepfakes were easily detectable with the naked eye due to their uncanny visual attributes and movement 43 . However, research and technological developments have improved deepfakes, making them more challenging to detect 4 . There are currently several automated deepfake detection methods 44 , 45 , 46 , 47 , 48 . However, they are computationally expensive to deploy at scale. As deepfakes become ubiquitous, it will be necessary for the general audience to identify deepfakes independently during gaps between the development of automated techniques or in environments that are not always monitored by automated detection (or are offline). It will also be important to allow human-aided and human-informed deepfake detection in concert with automated detection techniques.

Several issues currently hinder automated methods: (1) they are computationally expensive; (2) there may be bias in deepfake detection software and training data—credibility assessments, particularly in video content, have been shown to be biased 49 ; (3) As we have seen with many cybersecurity issues, there is a “cat-and-mouse” evolution that will leave gaps in detection methodology 50 .

Humans may be able to help fill these detection gaps. However, we wonder to what extent human biases impact the efficacy of detecting diverse misinformation. If human-aided deepfake detection becomes a reliable strategy, we need to understand the biases that come with it and what they look like on a large scale and on a network structure. We also posit that insights into human credibility assessments of deepfakes could help develop more lightweight and less computationally expensive automated techniques.

As deepfakes improve in quality, the harms of deepfake videos are coming to light 51 . Deepfakes raise several ethical considerations: (1) the evidentiary power of video content in legal frameworks 4 , 52 , 53 ; (2) consent and attribution of the individual(s) depicted in deepfake videos 54 ; (3) bias in deepfake detection software and training data 49 ; (4) degradation of our epistemic environment, i.e., there is a large-scale disagreement between what community members believe to be real or fake, including an increase in misinformation and distrust 4 , 32 ; and (5) possible intrinsic wrongs of deepfakes 55 .

It is important to understand who gets duped by these videos and how this impacts people’s interaction with any video content. The gap between convincing deepfakes and reliable detection methods could pose harm to democracy, national security, privacy, and legal frameworks 4 . Consequently, additional regulatory and legal frameworks 56 will need to be adopted to protect citizens from harms associated with deepfakes and uphold the evidentiary power of visual content. False light is a recognized invasion of privacy tort that acknowledges the harms that come when a person has untrue or misleading claims made about them. We suspect that future legal protections against deepfakes might well be grounded in such torts, though establishing these legal protections is not trivial 52 , 57 .

The ethical implications of deepfake videos can be separated into two main categories: the impacts on our epistemic environment and people’s moral relationships and obligations with others and themselves. Consider the epistemic environment, which includes our capacity to take certain representations of the world as true and our taking beliefs and inferences to be appropriately justified. Audio and video are particularly robust and evocative representations of the world. They have long been viewed as possessing more testimonial authority (in the broader, philosophical sense of the phrase) than other representations of the world. This is true in criminal and civil contexts in the United States, where the admissibility of video recordings as evidence in federal trials is specifically singled out in Article X of the Federal Rules of Evidence 58 (State courts have their own rules of evidence, but most states similarly have explicit rules that govern the admissibility of video recordings as evidence). The wide adoption of deepfake technology would strain these rules of evidence; for example, the federal rules of evidence reference examples of handwriting authentication, telephone conversation authentication, and voice authentication but do not explicitly mention video authentication. Furthermore, laws are notorious for lagging behind technological advances 59 , which can further complicate and limit how judges and juries can approach the existence of a deepfake video as part of a criminal or civil case.

Our paper asks four primary research questions regarding how human biases impact deepfake detection. (Q1) Priming: How important is it for an observer to know that a video might be fake? (Q2) Prior knowledge: How important is it for an observer to know about deepfakes, and how does social media usage affect accuracy? (Q3−Q4) Homophily and heterophily biases: Are participants more accurate at classifying videos whose persona they perceive to match (homophily) or mismatch (heterophily) their own demographic attributes in age, gender, and race?

To address our four research questions, we designed an IRB-approved survey ( N  = 2016) using video clips from the Deepfake Detection Challenge (DFDC) Preview Dataset 60 , 61 . Our survey participants entered the study under the pretense that they would judge the communication styles of video clips (they were not explicitly looking for deepfake videos in order to emulate the uncertainty they would experience in an online social network). After the consent process, survey participants were asked to watch two 10-second video clips. After each video, our questionnaire asked participants to rate the pleasantness of particular features (e.g., tone, gaze, likability, content) of the video on a 5-point Likert scale. They were also asked to state their perception of the person in the video by guessing the video persona’s gender identity, age, and whether they were white or a person of color.

After viewing both videos and completing the related questionnaire, the participants were then debriefed on the deception of the survey, given an overview of what deepfakes are, and then asked if they thought the videos they just watched were real or fake. After the debrief questions, we collected information on the participants’ backgrounds, demographics, and expressions of identity.

Our project investigates features or pairings of features (of the viewer or the person(a) in the video) that are the most important ones needed to determine an observer’s ability to detect deepfake videos and avoid being duped. Conversely, we also ask what pairings of features (of the viewer or the person(a) in the video) are important to determine an observer’s likelihood of being duped by a deepfake video.

Our null hypothesis asserts that none of the features or pairings of features we measure in our survey produce biases that meaningfully affect whether a user is duped by a deepfake video or is able to detect one. We then assess our confidence in rejecting this null hypothesis by computing a bootstrap credibility interval for a difference-in-means test between the accuracy of two populations (comparing Matthew's Correlation Coefficient scores). In all tests, we use 10,000 bootstrap samples and consider a comparison significant (having strong evidence) if the difference is observed in 95% of samples (i.e., in 9500 pairs). With this method, our paper aims to better understand how potential social biases affect our ability to detect misinformation.
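
A minimal sketch of this bootstrap comparison, run on synthetic guesses rather than the survey data, might look as follows; scikit-learn's matthews_corrcoef is used here for convenience, and the study's own implementation is not reproduced in the text.

    # Sketch of the bootstrap comparison: resample each group's (truth, guess) pairs
    # with replacement, recompute MCC for both groups, and report how often group A's
    # MCC exceeds group B's. Data below are synthetic.
    import numpy as np
    from sklearn.metrics import matthews_corrcoef

    rng = np.random.default_rng(0)

    def bootstrap_credibility(truth_a, guess_a, truth_b, guess_b, n_boot=10_000):
        n_a, n_b = len(truth_a), len(truth_b)
        wins = 0
        for _ in range(n_boot):
            ia = rng.integers(0, n_a, n_a)  # bootstrap resample of group A
            ib = rng.integers(0, n_b, n_b)  # bootstrap resample of group B
            mcc_a = matthews_corrcoef(truth_a[ia], guess_a[ia])
            mcc_b = matthews_corrcoef(truth_b[ib], guess_b[ib])
            wins += mcc_a > mcc_b
        return wins / n_boot  # credibility that group A is more accurate than group B

    # Synthetic example: group A is mildly informative, group B guesses at random.
    truth = rng.integers(0, 2, 400)
    guess_a = np.where(rng.random(400) < 0.7, truth, 1 - truth)
    guess_b = rng.integers(0, 2, 400)
    print(bootstrap_credibility(truth, guess_a, truth, guess_b, n_boot=2000))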

Our results can be summarized as follows. (Q1) If not primed, our survey participants are not particularly accurate at detecting deepfakes (accuracy = 51%, essentially a coin toss). (Q3−Q4) Accuracy varies by some participants’ demographics and perceived demographics of video persona. In general, participants were better at classifying videos that they perceived as matching their own demographic.

Our results show that of the 4032 total videos watched, 49% were deepfakes, and 1429 of those successfully duped our survey participants. A confusion matrix showing the True Positive (TP), False Negative (FN), False Positive (FP), and True Negative (TN) rates can be seen in Fig. 2 . We also note that the overall accuracy rate (where accuracy = (TP+TN)/(TP+FP+FN+TN)) of our participants was 51%. This translates to an overall Matthew’s Correlation Coefficient (MCC) score of 0.334 for all participant’s guesses vs. actual states of the videos. MCC 62 , 63 is a simple binary correlation between the ground truth and the participant’s guess. Regardless of the metric, our participants performed barely better than a simple coin flip (credibility 94%). All summary statistics for our study and all confusion matrices for our primary and secondary demographic groups can be found in Appendix SI2 and Appendix SI3 , respectively. Next, we explain our findings in detail.
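
For reference, the two metrics quoted above can be computed from a confusion matrix as shown below; the cell counts in this example are toy placeholders, not the study's values.

    # Accuracy and MCC computed from confusion-matrix cells (toy counts only).
    from math import sqrt

    def accuracy(tp, fn, fp, tn):
        return (tp + tn) / (tp + fn + fp + tn)

    def mcc(tp, fn, fp, tn):
        denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
        return (tp * tn - fp * fn) / denom if denom else 0.0

    tp, fn, fp, tn = 30, 70, 25, 75  # placeholder counts
    print(round(accuracy(tp, fn, fp, tn), 3), round(mcc(tp, fn, fp, tn), 3))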

Figure 2

Participants in our study watched two videos followed by a questionnaire and a debriefing on deepfakes. They were then asked to guess whether the videos were deepfakes or real. Out of 2016 participants and 4032 total videos watched, 1429 videos duped our participants, meaning they saw a fake video they thought was real. The top right panel shows the participants who were duped by deepfakes. The confusion matrix is defined by the number of true positives in the top left, false negatives in the top right, false positives in the bottom left, and true negatives in the bottom right.

Q1 Priming bias: Our results suggest that priming bias may play a role in a user’s ability to detect deepfakes. Compared with notable prior works 5 , 64 , 65 , our users were not explicitly told to look for deepfake videos while viewing the video content. Our survey takers participated in a deceptive study where they thought they answered questions about effective communication styles. They were debriefed only after the survey was completed and then asked if they thought the video clips were real or fake. Priming, on the contrary, would mean that when the user watched the two video clips, they would be explicitly looking for deepfakes.

Other works measured primed human deepfake detectors to compare them to machines and to humans with machine aid. For example, in a study by ref. 64 , humans were deployed as deepfake evaluators. The participants were explicitly asked to view images and look for fake images. Participants in the study were also required to pass a qualification test in which they needed to correctly classify 65% of real and fake images in order to fully participate in the study 64 . In a more recent study by ref. 5 , participants viewed video clips from the Facebook Deepfake Detection Challenge Dataset (DFCD), as in our study. They were asked to explicitly look for deepfake videos, and their performance was then compared to machines alone and to machines aided by humans. Groh et al. reported an accuracy score of 66% for primed humans, 73% for a primed human with a machine helper, and 65% for the machine alone. In another study, ref. 65 also showed that hybrid systems that combine crowd-nominated and machine-extracted features outperform humans and machines alone.

In comparison, a previous study by ref. 5 uses the same benchmark video data, but in their study subjects were informed beforehand and explicitly looked for deepfakes. We compare our participants' accuracy to this study in Table 1 . The section of the Groh et al. study that gathered human accuracy on deepfakes was conducted through a publicly available website (participant demographics were not gathered for this study). The website collected organic visitors from all over the world; participants could view deepfakes from the DFCD dataset and guess whether they could spot the deepfake (the specific question asked “Can you spot the deepfake video?”), and they were asked on a slider how confident they were in their answers as a percentage between 50% and 100%. In the human detection section of the study, the authors evaluated the accuracy of 882 individuals (only those who viewed at least 10 pairs of videos; note that they did not find evidence that accuracy improves as participants watch more videos) on 56 pairs of videos from the DFCD dataset. They compared the accuracy rate of participants in this study (66% for humans alone) with the accuracy of the leading model from the DFDC Kaggle challenge (65%). In the second part of their experiment, they looked at how the leading model can help human accuracy: after participants ( N  = 9492) submitted their guesses regarding the state of the videos, they were given the likelihood from the machine model and told they could update their answers (resulting in a 73% accuracy score).

Our results show that the non-primed participants were only 51% accurate at detecting if a video was real or fake. One important takeaway from previous studies is that human-machine cooperation provides the best accuracy scores. The previously mentioned prior studies were performed with primed participants. We believe a more realistic reflection of how deepfake encounters would occur “in the wild” would be with observers who were not explicitly seeking out deepfakes. Ecological viewing conditions are important for this type of study 66 . Future work is needed to investigate how non-primed human deepfake detectors perform when aided by machines.

Q2 Prior knowledge effect: We also ask if participants are better at detecting a deepfake if they have prior knowledge about deepfakes or more exposure to social media.

Our results show only weak evidence that prior knowledge or frequent social media usage impacts participants' accuracy. Therefore, we cannot draw any strong conclusions for this particular question, given that the credibility for this metric fell below 95%.

We see that participants who are frequent social media users (i.e., use social media once a week or more) had a higher MCC score (MCC = 0.0396) than those who used social media less frequently (MCC = −0.0110). Participants who knew what a deepfake was before taking the survey (MCC = 0.0790) also had a higher score than those unfamiliar with deepfakes (MCC = 0.0175). However, in both comparisons, the difference was only deemed to have a weak effect given that bootstrap samples reject the null only with 83% and 94% credibility, respectively.

Q3-4 Homophily versus heterophily bias: We then focus on the potential impacts of heterophily and homophily biases on a participant’s ability to detect if a video is real or a deepfake. We look at the Matthew’s Correlation Coefficients (MCC) for all user groups and compare their guesses on videos that either match their identity (homophily) or do not match their own identity (heterophily). Results of these MCC scores related to homophily and heterophily bias can be seen in Fig. 3 .

Figure 3

Categories that satisfy a threshold of credibility above 95% are as follows; all bootstrap samples can be seen in the Supplementary Information. (a) White users were found to have a homophily bias and are better at classifying videos of a persona they perceive as white. (b) Consequently, videos of personas of color are more accurately classified by participants of color. (c) Similarly, videos of male personas are better identified by male users. Across multiple age classes, we find that participants aged 18−28 years old are better at identifying videos that match them than older participants (d, e) and even better at classifying videos of personas perceived as 30−49 years old than participants from that same demographic (f). In addition, we reproduce our findings from bootstrapping and conduct a Bayesian logistic regression to explore the effects of matching demographics on detection accuracy, which can be seen in our Supplementary Information.

Our data shows strong evidence that one of our demographic subgroups, namely white participants, was more accurate when guessing the state of video personas that match their own demographic. We test our null hypothesis by comparing the answers given by a certain demographic of participants when looking at videos that match and do not match their identity. In doing so, we only observed evidence of a strong homophily bias for white participants, which can be seen in Table 2 . In that case, the null hypothesis that they are equally accurate on videos of white personas and personas of color falls outside of a 99% credibility interval, which can be seen in Fig. 3 .

We further break down this potential bias in two dimensions (overall demographic classes of the participants and video persona) in Table 2 . We then see more evident results. Here we compare subgroups of our survey participants (e.g., male vs. female viewers, persons of color vs. white viewers, and young vs. old viewers) to see which groups perform better when watching videos of a specific sub-type (e.g., videos of men, videos of women, videos of persons of color, videos of white people, videos of young people, and videos of old people).

By gender, we find evidence that male participants are more accurate than female participants when watching videos with a male persona. Similarly, by race, we find strong evidence that participants of color are more accurate than white participants when watching videos that feature a persona who is a person of color. Lastly, young participants have the highest accuracy score overall for any of our demographic subgroups. Of course, these results may be confounded with other factors, such as social media usage, which can be more prominent in one group (e.g., young participants) than another (e.g., older participants). More work needs to be done to understand the mechanisms behind our results.

In summary, results that satisfy a threshold of credibility above 95% (rejecting the null hypothesis with 95% credibility) on human biases in deepfake detection are as follows.

We find strong evidence that white participants show a homophily bias, meaning they are more accurate at classifying videos of white personas than they are at classifying videos of personas of color.

We find strong evidence that when viewing videos of male personas, male participants in our survey are more accurate than female participants.

We find strong evidence that when viewing videos of personas of color, participants of color are more accurate than white participants.

We find strong evidence that when viewing videos of young personas, participants between the ages of 18−29 are more accurate than participants above the age of 30; surprisingly, participants aged 18−29 are also more accurate than participants aged 30−49 even when viewing videos of personas aged 30−49.

Mathematical model

In essence, the results shown in Table 2 illustrate how there is no single demographic class of participants that excels at classifying all demographics of video persona. Different participants can have different weaknesses. For example, a white male participant may be more accurate at classifying white personas than a female participant of color, but the female participant of color may be more accurate on videos of personas of color. To consider the implications of this simple result, we take inspiration from our findings and formulate an idealized mathematical model of misinformation to better understand how deepfakes spread on social networks with diverse users and misinformation.

Models of misinformation spread often draw from epidemiological models of infectious diseases. This approach tracks how an item of fake news or a deepfake might spread, like a virus, from one individual to its susceptible network neighbors, duping them such that they can further spread misinformation 67 , 68 , 69 , 70 , 71 , 72 , 73 , 74 . However, unlike infectious diseases, an individual’s recovery does not occur on its own through its immune system. Instead, duped individuals require fact-checking or correction from their susceptible neighbors to return to their susceptible state 75 , 76 , 77 , 78 , 79 , 80 , 81 , 82 . In light of these previous modeling studies, it is clear that demographics can affect who gets duped by misinformation and who remains to correct their network neighbors. We therefore integrate these mechanisms with the core finding of our study: Not all classes of individuals are equally susceptible to misinformation.

Our model uses a network with a heterogeneous degree distribution and a structure inspired by the mixed-membership stochastic block model 20 . Previous models have shown the importance of community structure for the spread of misinformation 71 , 74 and the stylized structure of the mixed-membership stochastic block model captures the known heterogeneity of real networks and its modular structure of echo chambers and bridge nodes with diverse neighborhoods 83 . We then track individuals based on their demographics. These abstract classes, such as 1 or 2, could represent a feature such as younger or older social media users. We also track their state, e.g., currently duped by a deepfake video (infectious) or not (susceptible). We also track the demographics of their neighbors to know their role in the network and exposure to other users in different states.

The resulting model has two critical mechanisms. First, inspired by our survey, individuals get duped by their duped neighbor at a rate λ i dependent on their demographic class i . Second, as per previous models and the concept of crowd-sourced approaches to correction of misinformation based on the “self-correcting crowd” 38 , 39 , 40 , duped individuals can be corrected by their susceptible neighbors at a fixed rate γ . The dynamics of the resulting model are tracked using a heterogeneous mean-field approach 84 detailed in Box 1 and summarized in Fig. 4 .

Figure 4 (caption excerpt): Panels (b) and (c) use the correction rate highlighted in (a), γ = 1.7; other parameters are given in the figure. Panel (c) shows how high-degree nodes can be protected if they have a diverse set of neighbors.

This model has a simple but interesting behavior in homogeneous populations and becomes much more realistic once we account for heterogeneity in susceptibility. In a fully homogeneous population, λ_i = λ for all i, if misinformation can, on average, spread from a first to a second node, it will never stop: the more misinformation spreads, the fewer potential fact-checkers remain. Therefore, misinformation invades the entire population for a correction rate γ lower than some critical value γ_c, whereas misinformation disappears for γ > γ_c.

The invasion threshold for misinformation is shown in Fig. 4 a. In heterogeneous populations, where different nodes can feature different susceptibility λ i , the discontinuous transition from a misinformation-free to a misinformation-full state is relaxed. Instead, a steady state of misinformation can now be maintained at any level depending on the parameters of misinformation and the demographics of the population. In this regime, we can then further break down the dynamics of the system by looking at the role of duped nodes in the network, as shown in Fig. 4 b. The key result here is that very susceptible individuals with a homogeneous assortative neighborhood (e.g., an echo chamber) are at the highest risk of being duped. Conversely, nodes in the same demographic class but with a mixed or more diverse neighborhood are more likely to have resilient susceptible neighbors able to correct them if necessary.

Consider now that diverse misinformation spreads. We assume just two types of misinformation (say young or older personas in two deepfake videos) targeting each of our two demographic classes (say younger and older social media users). We show this thought experiment in Fig. 4 c, where we use two complementary types of misinformation: one with λ_1 = λ_2/2 = 1.0 and a matching type with λ′_2 = λ′_1/2 = 1.0. We run the dynamics of these two types of misinformation independently, as we assume they do not directly interact, and therefore simply combine the possible states of nodes after integrating the dynamical system. For example, the probability that a node of type 1 is duped by both pieces of misinformation is the product of the probabilities that it is duped by the first and duped by the second. By doing so, we can easily study a model where multiple, diverse pieces of misinformation spread in a diverse network population.

For diverse misinformation in Fig. 4 c, we find two connectivity regimes where the role of network structure is critical. For low-degree nodes, a diverse neighborhood means more exposure to diverse misinformation than a homogeneous echo chamber, such that the misinformation that best matches the demographics of a low-degree user is more likely to find them if they have a diverse neighborhood. For high-degree nodes, however, we find the behavior of herd correction: a diverse neighborhood means a diverse set of neighbors that is more likely to contain users who are able to correct you if you become misinformed 34 , 85 , 86 .

In the appendix, we analyze the robustness of herd correction to the parameters of the model. We show mathematically that the protection it offers is directly proportional to the homophily in the network (our parameter Q ). By simulating the dynamics with more parameters, we also find that herd correction is proportional to the degree heterogeneity of the network. As we increase heterogeneity, we increase the strength of the friendship paradox ("your friends have more friends than you do" 87 ), which means your friends are more exposed to misinformation than you are, but also that they have more friends capable of correcting them when duped.

Our stylized model is meant to show how one can introduce biases in simple mathematical models of diverse misinformation. A first-order effect is that individuals with increased susceptibility should be preferentially duped, but this effect exists only if misinformation can spread (i.e., it is above a certain contagion threshold) without saturating the population (i.e., transmissibility is low enough that the heterogeneity matters). A second-order effect is that individuals with a diverse neighborhood are also more likely to have friends who can correct them should they be duped by misinformation.

Future modeling efforts should also consider the possible interactions between different kinds of misinformation 88 . These can be synergistic 89 , parasitic 90 , or antagonistic 91 ; which all provide rich dynamical behaviors. Other possible mechanisms to consider are the adaptive feedback loops that facilitate the spread of misinformation in online social networks 92 .

Box 1 Mathematical model of diverse misinformation and herd correction on social networks

We wish to explore the potential impacts of our results on the spread of diverse misinformation on social networks. We consider that multiple independent streams of misinformation spread simultaneously; i.e., there are multiple sets of deepfakes, each with its own demographic biases. We also consider that social networks are often very heterogeneous, with a skewed distribution of contacts per user, and modular, with denser connections among users of the same demographics.

We account for the above using three stylized patterns for the network structure. First, we divide the network into two demographic classes of equal size, simply labeled 1 and 2. Second, we assume a power-law distribution p k of contacts k per user with p k   ∝   k − α regardless of demographics. Third, we use a mixed-membership stochastic block model to generate the network structure: Half of the nodes of each demographic always interact following their demographics, and half act as bridge nodes connecting randomly. The probability that a contact falls within a single demographic class is proportional to Q , while contacts across classes occur proportionally to 1 −  Q ; with Q  > 0.5 for modular structure.

According to the above, we can write the fraction of nodes \({p}_{k,\ell }^{1}\) which are of demographic class 1 with k contacts of class 1 and ℓ contacts of class 2:

We define a simple dynamical process where individuals are exposed to misinformation through each of their duped network neighbors, and themselves get duped at a rate λ i based on their demographic class i . Non-duped neighbors can then correct their duped neighbors at a rate γ 38 , 39 , 40 , e.g., we assume that your network neighbors can fact-check something you diffuse online and potentially correct your opinion. The fraction of individuals of a certain type ( i ,  k ,  ℓ ) that are duped, \({D}_{k,\ell }^{i}\) , can be followed in time using a set of ordinary differential equations:

where θ i , j and ϕ i , j represent the probabilities that a connection from an individual of demographic i to an individual of demographic j connects to a duped or non-duped individual, respectively. They can be calculated, for example, as

These quantities close the system of equations and allow us to simulate a relatively simple model that manages to capture the heterogeneity ( α ) and community structure ( Q ) of social networks, as well as demographic-specific susceptibility to misinformation ({ λ i }) and fact-checking among the population ( γ ). Our results are summarized in Fig. 4 and further analyzed in Appendix SI4 .
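Since the equations themselves are not reproduced above, the following sketch illustrates one plausible reading of the dynamics rather than the authors' exact mean-field system. It integrates a class-level approximation (dropping the degree-resolved compartments \({D}_{k,\ell }^{i}\)) in which class-i users are duped at a per-contact rate λ_i by duped contacts and corrected at a per-contact rate γ by non-duped contacts; all parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (assumptions, not values from the paper)
k_mean = 10.0                            # mean number of contacts per user
Q = 0.8                                  # probability a contact stays within one's class
lams = np.array([2.0, 1.0]) / k_mean     # per-contact duping rates for classes 1 and 2
gamma = 1.7 / k_mean                     # per-contact correction rate

# Expected number of contacts from a user of class i to users of class j
M = k_mean * np.array([[Q, 1 - Q],
                       [1 - Q, Q]])

def rhs(t, D):
    """Class-level approximation: D[i] is the duped fraction of class i."""
    exposure = M @ D            # expected duped contacts of a class-i user
    correction = M @ (1 - D)    # expected non-duped contacts of a class-i user
    return lams * (1 - D) * exposure - gamma * D * correction

sol = solve_ivp(rhs, (0.0, 50.0), y0=[0.01, 0.01])
print("approximate steady-state duped fractions:", sol.y[:, -1].round(3))
```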

Understanding the structure and dynamics of misinformation is important because it can cause great societal harm. Misinformation has negatively impacted the ability to disseminate important information during critical elections, humanitarian crises, global unrest, and global pandemics. More importantly, misinformation degrades our epistemic environment, particularly by fostering distrust of true information. It is necessary to understand who is susceptible to misinformation and how it spreads on social networks in order to mitigate its harm and propose meaningful interventions. Further, as deepfakes deceive viewers at greater rates, it becomes increasingly critical to understand who gets duped by this form of misinformation and how our biases and social circles shape our interaction with video content at scale. We hope this work will contribute to the critical literature on human biases and help to better understand their interplay with machine-generated content.

The overarching takeaways of our results can be summarized as follows. If not primed, humans are not particularly accurate at detecting deepfakes. Accuracy varies by demographics, but humans are generally better at classifying videos that match them. These results appear consistent with findings on the own-race bias (ORB) phenomenon 18 : overall, participants are better at detecting videos that match their own attributes. Consistent with ORB research 93 , our results also show that white participants display greater accuracy when presented with videos of white personas. We also see strong evidence that persons of color are more accurate than white participants when viewing deepfakes of personas of color, and more accurate overall than white participants (see Supplementary Information). Our study adds several extra dimensions of demographic analysis by considering gender and age. We see strong evidence that male participants are better at detecting videos of male personas than female participants. With age, we see strong evidence that when viewing videos of young personas, participants aged 18−29 are more accurate than participants above the age of 30; surprisingly, participants aged 18−29 are also more accurate than participants aged 30−49 even when viewing videos of personas aged 30−49. Taken together, these results suggest that more work is needed to better understand how interventions such as education about deepfakes, cross-demographic experiences, and exposure to the technology affect a user's ability to detect deepfakes.

In this observational study, we also explored the potential impacts of these results in a simple mathematical model and extrapolated from our survey to hypothesize that a diverse set of contacts might provide “herd correction” where friends can correct each other’s blind spots. Friends with different biases can better correct each other when duped. This modeling result is a generalization of the self-correcting crowd approach used in the correction of misinformation 38 .

In future work, we hope to investigate how non-primed human deepfake detectors perform when aided by machines. We want to investigate the mechanisms behind why some human viewers are better at guessing the state of videos that match their own identity. For example, do viewers have a homophily bias because they are more accustomed to images that match their own, or do they simply favor these images? We also would like to empirically investigate our survey via a more robust randomized controlled experiment and model results on real-world social networks with different levels of diversity to measure the spread of diverse misinformation in the wild. Consequently, we would be interested in testing possible educational or other intervention strategies to mitigate adversarial misinformation campaigns. Our simple observational study is a step towards understanding social biases’ role and potential impacts in an emerging societal problem with many multilevel interdependencies.

Survey methodology

We first ran a pilot stage of our observational study, using a convenience sample of 100 participants (aged 18+) to assess the efficacy of our survey. We then ran phase 1 of the full survey (April−May 2022) using a Qualtrics survey panel of 1000 participants who matched the demographic distribution of U.S. social media users, followed by phase 2 (September 2022) using Qualtrics and the same sampling methodology. The resulting full study from phases 1 and 2 comprises a 2016-participant sample.

To ensure that our experiment reflects the real-world context as closely as possible, survey participants did not know before the start of the survey that the videos could potentially be deepfakes. The survey was framed for participants as a study about the communication styles and techniques that help make video content credible. Participants were told that we were trying to understand how aspects of public speaking, such as tone of voice, facial expressions, and body language, contribute to the effectiveness and credibility of a speaker. The survey’s deceptiveness allowed us to ask questions about speaker attributes, likeability, and agreeableness naturally, without priming the participants to look specifically for deepfakes 94 . We chose to make our survey deceptive not only to avoid priming the participants but also because this more closely replicates the deceptiveness that a social media user would encounter in the real world. Furthermore, Bröder 95 argues that “in studies of cognitive illusions (e.g., hindsight bias or misleading postevent information effect), it is a necessity to conceal the true nature of the experiment.” We posit that our study clearly involves cognitive illusions, specifically in the form of deepfakes, and as such deception is an important tool.

We designed our survey using video clips (as seen in Fig. 5 ) from the Deepfake Detection Challenge (DFDC) Preview Dataset 60 , 61 . In our survey, we ask the participants to view two random video clips, which are approximately 10 s in length each. Each video clip may be viewed unlimited times before reading the questions but not again after moving to the questions. The information necessary to answer these questions relies solely on the previously shown video clip. A link to the full survey and survey questions is available in Appendix 1 .

Figure 5 (caption excerpt): Example video clip from the DFDC Preview Dataset shown to participants; the person depicted is fake.

After viewing both videos, participants are asked to complete a related questionnaire about the communication styles and techniques in the videos. The questions ask about attributes of the video, such as pose, tone, and style, which participants rate on a Likert scale from very pleasant to very unpleasant. We also ask them to rate their agreement with the video content and its credibility. Finally, we ask participants to identify the perceived gender expression of the person(a) in the video, the age group they belong to, and whether they perceive the person in the video to be a person of color.

In line with best practices in ethical research 96 , 97 , we debriefed participants after they had viewed both videos and completed the questionnaire on communication style and perceived demographics. Participants are informed of the survey's deception, given a short explanation of deepfake technology, and then asked whether they think the videos were real or fake (as seen in Fig. 6 ).

Figure 6: Debrief question on whether the videos were real or fake (see text).

The performance metric we use to measure participant accuracy is the ratio of correct guesses to the entire pool of guesses, where accuracy = (TP + TN) / (TP + FP + FN + TN), with TP, TN, FP, and FN denoting true positives, true negatives, false positives, and false negatives, respectively.

Lastly, we collect demographic information on the survey participants’ backgrounds and expressions of identity. We also ask participants how knowledgeable they already were on deepfakes, how often they use social media, and their political and religious affiliations. We also asked participants if they knew that the survey was about deepfakes before taking the survey (survey participants who were primed were subsequently dropped from the analysis).

Survey responses from 2016 participants were collected through Qualtrics, an IRB-approved research panel provider, via traditional, actively managed, double-opt-in research panels 6 . Qualtrics participants for this study were drawn as a randomly selected, stratified sample from the Qualtrics panel membership pool that represents the average social media user in the U.S. 98 The demographic breakdown of our survey respondents is shown in Table 3 .

Secondary data

For this project, we use the publicly available Facebook AI Research Deepfake Detection Challenge (DFDC) Preview Dataset ( N  = 5000 video clips) 60 , 61 . For our purposes, we filtered out all videos from the dataset that featured more than one person(a). The video clips may be deepfake or real; see Table 4 . Additionally, some of the videos have been purposefully altered in several ways. Here is the list of augmenters and distractors:

Augmenters: Frame-rate change, Quality level, Audio removal, Introduction of audio noise, Brightness or contrast level, Saturation, Resolution, Blur, Rotation, Horizontal flip.

Distractors: Dog filter, Flower filter, Introduction of overlaid images, shapes, or dots, Introduction of additional faces, Introduction of text.

A video’s deepfake status (deepfake or not) was not revealed to the respondents during or after the survey. Many augmenters and distractors were noticeable to the respondents but were not specifically revealed.

Original data

We transformed all survey response variables of interest into numerical form to analyze our survey results. All Likert survey questions were converted from ‘Very unpleasant,’ ‘Unpleasant,’ ‘Neutral,’ ‘Pleasant,’ and ‘Very pleasant’ to an ordinal scale of 1−5, respectively.

Education levels, selected from ‘Some high school,’ ‘High school diploma or equivalent,’ ‘Some college,’ ‘Associate’s degree (e.g., A.A., A.E., A.F.A., AS, A.S.N.),’ ‘Vocational training,’ ‘Bachelor’s degree (e.g., B.A., BBA, BFA, BS),’ ‘Some postgraduate work,’ ‘Master’s degree (e.g., M.A., M.B.A., M.F.A., MS, M.S.W.),’ ‘Specialist degree (e.g., EdS),’ ‘Applied or professional doctorate degree (e.g., M.D., D.D.C., D.D.S., J.D., PharmD),’ and ‘Doctorate degree (e.g., EdD, Ph.D.),’ were transformed to an ordinal scale of 1−11, respectively.

Income levels, selected from ‘Less than $30,000,’ ‘$30,000−$49,999,’ ‘$50,000−$74,999,’ and ‘$75,000+,’ were transformed to an ordinal scale of 1−4, respectively.

Social media usage levels, selected from ‘I do not use social media,’ ‘I use social media but less than once a month,’ ‘Once a month,’ ‘A few times a month,’ ‘Once a week,’ ‘A few times a week,’ ‘Once a day,’ and ‘More than once a day,’ were transformed to an ordinal scale of 1−8, respectively. Responses were then split into frequent social media users (5−8) and infrequent social media users (1−4); we combined the ordinal scales into two categories to reduce the dimensionality of our data.

Knowledge of deepfakes, selected from ‘I did not know what a deepfake was,’ ‘I somewhat knew what a deepfake was,’ ‘I knew what a deepfake was,’ and ‘I consider myself knowledgeable about deepfakes,’ was transformed to an ordinal scale of 1−4, respectively. Responses were then split into users who are knowledgeable about deepfakes (3−4) and users who are not (1−2); again, we combined the ordinal scales into two categories to reduce the dimensionality of our data.

All nominal and categorical variables were transformed into binary variables. Categorical variables (some survey questions included write-in answers) were combined into coarser-grained categories for analysis, such as participant racial/ethnic identity (transformed to Person of Color or White), U.S. state of residence (transformed to U.S. regions), employment (transformed to occupational sectors), religious affiliation (transformed to major religious affiliations), and political affiliation (transformed to major political affiliations).

We allowed survey participants to self-identify their gender, and the responses were largely binary. Unfortunately, our sample was insufficient to perform meaningful analysis across a broader non-binary gender identity spectrum. Primary variables with an N under 30 were dropped, meaning those participants’ responses were not included in the analysis (this was only applicable to non-binary gender responses, where N  = 13). Each survey participant was given two video clips to view and critique; in our analysis, we treated the first and second videos in the same way.
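As an illustration of the recoding described above, the snippet below maps Likert responses and social media usage levels to ordinal scales and collapses usage into the frequent/infrequent split. The column names and example responses are hypothetical; the actual survey export uses its own labels.

```python
import pandas as pd

# Hypothetical column names and responses, for illustration only
df = pd.DataFrame({
    "tone_rating": ["Pleasant", "Very unpleasant", "Neutral"],
    "social_media_use": ["Once a day", "Once a month", "A few times a week"],
})

likert_scale = {"Very unpleasant": 1, "Unpleasant": 2, "Neutral": 3,
                "Pleasant": 4, "Very pleasant": 5}
usage_scale = {"I do not use social media": 1,
               "I use social media but less than once a month": 2,
               "Once a month": 3, "A few times a month": 4, "Once a week": 5,
               "A few times a week": 6, "Once a day": 7, "More than once a day": 8}

df["tone_ordinal"] = df["tone_rating"].map(likert_scale)
df["usage_ordinal"] = df["social_media_use"].map(usage_scale)
df["frequent_user"] = (df["usage_ordinal"] >= 5).astype(int)  # frequent (5-8) vs. infrequent (1-4)
print(df)
```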

Analytical methods

To understand the relationship between participants’ guesses about the status of a video (fake or real) and the actual state of the video, we use the Matthews Correlation Coefficient (MCC) 62 , 63 to compare which variables show strong evidence of affecting a participant’s ability to guess the actual state of the video correctly. MCC is typically used to assess the performance of classification models; here we treat human participant subgroups as classifiers and measure their performance with MCC. MCC takes the participant subgroup’s guesses and the actual answers and breaks them up into the number of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN). The MCC metric ranges from −1 to 1, where 1 indicates total agreement between the participants’ guesses and the actual state of the videos, −1 indicates complete disagreement, and 0 indicates performance no better than random guessing. To calculate the MCC metric for our human classifiers, we use the standard formula MCC = (TP × TN − FP × FN) / √((TP + FP)(TP + FN)(TN + FP)(TN + FN)).

MCC is considered a more balanced statistical measure than the F1, precision, or recall scores because it is symmetric: no category (TP, TN, FP, FN) is more important than another.
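For concreteness, here is a minimal sketch of how the MCC of a participant subgroup can be computed from its guesses, both with scikit-learn and with the formula above; the guess and answer arrays are synthetic placeholders.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, matthews_corrcoef

# Synthetic placeholder data: 1 = "the video is a deepfake", 0 = "the video is real"
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])    # actual state of each video
y_guess = np.array([1, 0, 0, 1, 0, 1, 1, 0])   # guesses from one participant subgroup

tn, fp, fn, tp = confusion_matrix(y_true, y_guess).ravel()
mcc_manual = (tp * tn - fp * fn) / np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))

print("MCC (scikit-learn):", matthews_corrcoef(y_true, y_guess))
print("MCC (by hand):     ", mcc_manual)
```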

To compare MCC scores, we bootstrap samples from pairs of confusion matrices and compute the difference in their MCC scores. This process generates 10,000 bootstrapped differences in correlation coefficients. We then compare the null hypothesis (a difference equal to zero) to the bootstrapped distribution to measure the level of evidence for each bias and obtain a credibility interval on its strength.
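The sketch below shows one way to implement such a bootstrap comparison. It resamples the raw guess/answer pairs of two subgroups rather than the confusion matrices directly, which is a simplification of the procedure described above; the arrays named in the commented call are hypothetical.

```python
import numpy as np
from sklearn.metrics import matthews_corrcoef

rng = np.random.default_rng(0)

def bootstrap_mcc_diff(true_a, guess_a, true_b, guess_b, n_boot=10_000):
    """Bootstrapped distribution of MCC(subgroup A) - MCC(subgroup B)."""
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        ia = rng.integers(0, len(true_a), len(true_a))  # resample subgroup A with replacement
        ib = rng.integers(0, len(true_b), len(true_b))  # resample subgroup B with replacement
        diffs[i] = (matthews_corrcoef(true_a[ia], guess_a[ia])
                    - matthews_corrcoef(true_b[ib], guess_b[ib]))
    return diffs

# Hypothetical usage with arrays of actual states and guesses for two subgroups:
# diffs = bootstrap_mcc_diff(true_a, guess_a, true_b, guess_b)
# credibility = (diffs > 0).mean()              # evidence that subgroup A outperforms subgroup B
# interval = np.percentile(diffs, [2.5, 97.5])  # 95% interval on the difference
```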

Logistic regression

To understand the relationship between matching demographics and guess accuracy, we run a Bayesian logistic regression on matching demographics (age matches, gender matches, race matches). Logistic regression is a statistical method used to model and predict binary outcomes, here the participant’s accuracy: accuracy is equal to 1 if the participant’s guess about the video was correct and 0 if it was incorrect. The model uses the observed data to estimate the relationship between each matching variable and the probability of a correct guess.
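A minimal sketch of such a model is shown below using PyMC; the priors, sample size, and synthetic data are illustrative assumptions and not the specification used in the paper.

```python
import numpy as np
import pymc as pm

# Synthetic placeholder data: binary accuracy and demographic-match indicators
rng = np.random.default_rng(1)
n = 500
age_match, gender_match, race_match = rng.integers(0, 2, size=(3, n))
accuracy = rng.integers(0, 2, size=n)   # 1 = correct guess, 0 = incorrect

X = np.column_stack([age_match, gender_match, race_match])

with pm.Model():
    intercept = pm.Normal("intercept", mu=0, sigma=2)
    betas = pm.Normal("betas", mu=0, sigma=2, shape=3)      # one coefficient per match variable
    p = pm.math.sigmoid(intercept + pm.math.dot(X, betas))  # probability of a correct guess
    pm.Bernoulli("obs", p=p, observed=accuracy)
    idata = pm.sample(1000, tune=1000, chains=2, progressbar=False)

print(idata.posterior["betas"].mean(dim=("chain", "draw")).values)
```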

Accuracy rate

The performance metric we use to measure participant accuracy is the ratio of the correct guesses to the entire pool of guesses. The accuracy is thus equal to the sum of true positives and true negatives over the total number of guesses.

Data availability

Our full survey questionnaire, code, data, and codebook can be found in our GitHub repository: https://github.com/juniperlovato/DiverseMisinformationPaper. Due to the nature of this research, participants of this study did not consent to their personally identifiable data being shared publicly, so the full survey’s raw individual-level supporting data are not available. Aggregated and anonymized data needed for the analysis can be found in our repository.

Bagrow, J. P., Liu, X. & Mitchell, L. Information flow reveals prediction limits in online social activity. Nat. Hum. Behav. 3 , 122–128 (2019).

Lovato, J. L., Allard, A., Harp, R., Onaolapo, J. & Hébert-Dufresne, L. Limits of individual consent and models of distributed consent in online social networks. In 2022 ACM Conf. Fairness Account. Transpar ., 2251–2262, https://doi.org/10.1145/3531146.3534640 (2022).

Garland, J., Ghazi-Zahedi, K., Young, J.-G., Hébert-Dufresne, L. & Galesic, M. Impact and dynamics of hate and counter speech online. EPJ Data Sci. 11 , 3 (2022).

Chesney, R. & Citron, D. K. Deep fakes: a looming challenge for privacy, democracy, and national security. SSRN Electron. J. 107 , 1753 (2018).

Groh, M., Epstein, Z., Firestone, C. & Picard, R. Deepfake detection by human crowds, machines, and machine-informed crowds. Proc. Natl. Acad. Sci. 119 , e2110013119 (2021).

Boas, T. C., Christenson, D. P. & Glick, D. M. Recruiting large online samples in the united states and india: Facebook, mechanical turk, and qualtrics. Political Sci. Res. Methods 8 , 232–250 (2018).

Ebner, N. C. et al. Uncovering susceptibility risk to online deception in aging. J. Gerontol.: B 75 , 522–533 (2018).

Lloyd, E. P., Hugenberg, K., McConnell, A. R., Kunstman, J. W. & Deska, J. C. Black and white lies: race-based biases in deception judgments. Psychol. Sci. 28 , 1125–1136 (2017).

Bond, J., Julion, W. A. & Reed, M. Racial discrimination and race-based biases on orthopedic-related outcomes. Orthop. Nurs. 41 , 103–115 (2022).

Klaczynski, P. A., Felmban, W. S. & Kole, J. Gender intensification and gender generalization biases in pre-adolescents, adolescents, and emerging adults. Brit. J. Dev. Psychol. 38 , 415–433 (2020).

Macchi Cassia, V. Age biases in face processing: The effects of experience across development. Brit. J. Psychol. 102 , 816–829 (2011).

Dandekar, P., Goel, A. & Lee, D. T. Biased assimilation, homophily, and the dynamics of polarization. Proc. Natl. Acad. Sci. 110 , 5791–5796 (2013).

Currarini, S. & Mengel, F. Identity, homophily and in-group bias. Eur. Econ. Rev. 90 , 40–55 (2016).

Kossinets, G. & Watts, D. J. Origins of homophily in an evolving social network. Am. J. Sociol. 115 , 405–450 (2009).

Nightingale, S. J., Wade, K. A. & Watson, D. G. Investigating age-related differences in ability to distinguish between original and manipulated images. Psychol. Aging 37 , 326–337 (2022).

Bothwell, R. K., Brigham, J. C. & Malpass, R. S. Cross-racial identification. Pers. Soc. Psychol. B. 15 , 19–25 (1989).

Brigham, J. C., Maass, A., Snyder, L. D. & Spaulding, K. Accuracy of eyewitness identification in a field setting. J. Pers. Soc. Psychol. 42 , 673–681 (1982).

Meissner, C. A. & Brigham, J. C. Thirty years of investigating the own-race bias in memory for faces: a meta-analytic review. Psychol. Public Policy Law 7 , 3–35 (2001).

Leskovec, J., Backstrom, L., Kumar, R. & Tomkins, A. Microscopic evolution of social networks. In Proc. 14th ACM SIGKDD int. conf. Knowl. discov. data min ., 462–470, https://doi.org/10.1145/1401890.1401948 (2008).

Airoldi, E. M., Blei, D., Fienberg, S. & Xing, E. Mixed membership stochastic blockmodels. In Koller, D., Schuurmans, D., Bengio, Y. & Bottou, L. (eds.) Advances in Neural Information Processing Systems, Vol. 21, 1–8, https://proceedings.neurips.cc/paper_files/paper/2008/file/8613985ec49eb8f757ae6439e879bb2a-Paper.pdf (Curran Associates, Inc., 2008).

Traberg, C. S. & van der Linden, S. Birds of a feather are persuaded together: Perceived source credibility mediates the effect of political bias on misinformation susceptibility. Pers. Indiv. Differ. 185 , 111269 (2022).

Calvillo, D. P., Garcia, R. J., Bertrand, K. & Mayers, T. A. Personality factors and self-reported political news consumption predict susceptibility to political fake news. Pers. Indiv. Differ. 174 , 110666 (2021).

Lazer, D. M. J. et al. The science of fake news. Science 359 , 1094–1096 (2018).

Watts, D. J., Rothschild, D. M. & Mobius, M. Measuring the news and its impact on democracy. Proc. Natl. Acad. Sci. 118 , e1912443118 (2021).

Roth, C., St-Onge, J. & Herms, K. Quoting is not citing: Disentangling affiliation and interaction on twitter. In Benito, R. M. et al. (eds.) Complex Networks & their Applications X , Studies in Computational Intelligence, 705–717, https://doi.org/10.1007/978-3-030-93409-5_58 (Springer Int. Publ., 2022).

Appel, M. & Prietzel, F. The detection of political deepfakes. J. Comput.-Mediat. Commun. 27 , zmac008 (2022).

Ahmed, S. Who inadvertently shares deepfakes? analyzing the role of political interest, cognitive ability, and social network size. Telemat. Inform. 57 , 101508 (2021).

Jacobsen, B. N. & Simpson, J. The tensions of deepfakes. Inf. Commun. & Soc . 1–15, https://doi.org/10.1080/1369118x.2023.2234980 (2023).

Chou, W.-Y. S., Oh, A. & Klein, W. M. P. Addressing health-related misinformation on social media. JAMA 320 , 2417 (2018).

Tasnim, S., Hossain, M. M. & Mazumder, H. Impact of rumors and misinformation on COVID-19 in social media. J. Prev. Med. Pub. Health 53 , 171–174 (2020).

Kimmel, A. J. Rumors and the financial marketplace. J. Behav. Finance 5 , 134–141 (2004).

Rini, R. Deepfakes and the epistemic backstop. Philos. Impr. 20 , 1–16 (2020).

Vaccari, C. & Chadwick, A. Deepfakes and disinformation: exploring the impact of synthetic political video on deception, uncertainty, and trust in news. Soc. Media Soc. 6 , 205630512090340 (2020).

Walter, N., Brooks, J. J., Saucier, C. J. & Suresh, S. Evaluating the impact of attempts to correct health misinformation on social media: A meta-analysis. Health Commun. 36 , 1776–1784 (2020).

Wu, L., Morstatter, F., Carley, K. M. & Liu, H. Misinformation in social media. ACM SIGKDD Explor. Newsl. 21 , 80–90 (2019).

Starbird, K., Maddock, J., Orand, M., Achterman, P. & Mason, R. M. Rumors, false flags, and digital vigilantes: Misinformation on twitter after the 2013 Boston marathon bombing. IConference 2014 proc . (2014).

Sedhai, S. & Sun, A. HSpam14. In Proc. 38th Int. ACM SIGIR Conf. Res. Dev. Inf. Retr ., 223–232, https://doi.org/10.1145/2766462.2767701 (ACM, 2015).

Arif, A. et al. A closer look at the self-correcting crowd. In Proc. 2017 ACM Conf. Comput. Support. Coop. Work Soc. Comput ., Cscw ’17, 155–168, https://doi.org/10.1145/2998181.2998294 (ACM, New York, NY, USA, 2017).

Micallef, N., He, B., Kumar, S., Ahamad, M. & Memon, N. The role of the crowd in countering misinformation: A case study of the COVID-19 infodemic. In 2020 IEEE Int. Conf. Big Data (Big Data) , 748–757, https://doi.org/10.1109/bigdata50022.2020.9377956 . Ieee (IEEE, 2020).

Allen, J., Arechar, A. A., Pennycook, G. & Rand, D. G. Scaling up fact-checking using the wisdom of crowds. Sci. Adv. 7 , eabf4393 (2021).

Tolosana, R., Vera-Rodriguez, R., Fierrez, J., Morales, A. & Ortega-Garcia, J. Deepfakes and beyond: a survey of face manipulation and fake detection. Inform. Fusion 64 , 131–148 (2020).

Roose, K. Here come the fake videos, too. The New York Times 4 (2018).

Mori, M. The uncanny valley: The original essay by Masahiro Mori. IEEE Spectr . (1970).

Verdoliva, L. Media forensics and DeepFakes: An overview. IEEE J. Sel. Top. Signal Process. 14 , 910–932 (2020).

Jung, T., Kim, S. & Kim, K. DeepVision: Deepfakes detection using human eye blinking pattern. IEEE Access 8 , 83144–83154 (2020).

Guera, D. & Delp, E. J. Deepfake video detection using recurrent neural networks. In 2018 15th IEEE Int. Conf. Adv. Video Signal Based Surveill. (AVSS) , 1–6, https://doi.org/10.1109/avss.2018.8639163 . IEEE (IEEE, 2018).

Zotov, S., Dremliuga, R., Borshevnikov, A. & Krivosheeva, K. DeepFake detection algorithms: A meta-analysis. In 2020 2nd Symp. Signal Process. Syst ., 43–48, https://doi.org/10.1145/3421515.3421532 (ACM, 2020).

Blue, L. et al. Who are you (I really wanna know)? detecting audio DeepFakes through vocal tract reconstruction. In 31st USENIX Secur. Symp. (USENIX Secur. 22) , 2691–2708 (Boston, MA, 2022).

Ng, J. C. K., Au, A. K. Y., Wong, H. S. M., Sum, C. K. M. & Lau, V. C. Y. Does dispositional envy make you flourish more (or less) in life? an examination of its longitudinal impact and mediating mechanisms among adolescents and young adults. J. Happiness Stud. 22 , 1089–1117 (2020).

Shillair, R. & Dutton, W. H. Supporting a cybersecurity mindset: Getting internet users into the cat and mouse game. Soc. Sci. Res. Netw . (2016).

Greengard, S. Will deepfakes do deep damage? Commun. ACM 63 , 17–19 (2019).

Schwartz, G. T. Explaining and justifying a limited tort of false light invasion of privacy. Case W. Res. L. Rev. 41 , 885 (1990).

Fallis, D. The epistemic threat of deepfakes. Philos. & Technol. 34 , 623–643 (2020).

Harris, D. Deepfakes: False pornography is here and the law cannot protect you. Duke Law & Technol. Rev. 17 , 99 (2018).

de Ruiter, A. The distinct wrong of deepfakes. Philos. & Technol. 34 , 1311–1332 (2021).

115th Congress (2017–2018), S. –. Malicious deep fake prohibition act of 2018 (2018).

Citron, D. K. The fight for privacy: Protecting dignity, identity, and love in the digital age (W.W. Norton & Company, 2022), first edn.

Committee on the Judiciary, House of Representatives. Federal rules of evidence (2019).

Solove, D. J. Conceptualizing privacy. Calif. Law Rev. 90 , 1087 (2002).

Dolhansky, B., Howes, R., Pflaum, B., Baram, N. & Ferrer, C. The deepfake detection challenge (DFDC) preview dataset. Preprint at https://arxiv.org/abs/1910.08854 (2019).

Dolhansky, B. et al. The DeepFake detection challenge dataset. Preprint at https://arxiv.org/abs/2006.07397 (2020).

Matthews, B. Comparison of the predicted and observed secondary structure of t4 phage lysozyme. Biochim. Biophys. Acta (BBA) - Protein Struct. 405 , 442–451 (1975).

Boughorbel, S., Jarray, F. & El-Anbari, M. Optimal classifier for imbalanced data using Matthews correlation coefficient metric. PLoS One 12 , e0177678 (2017).

Azur, M. J., Stuart, E. A., Frangakis, C. & Leaf, P. J. Multiple imputation by chained equations: What is it and how does it work? Int. J. Method. Psych. 20 , 40–49 (2011).

Cheng, J. & Bernstein, M. S. Flock. In Proc. 18th ACM Conf. Comput. Support. Coop. Work & Soc. Comput ., CSCW ’15, 600–611, https://doi.org/10.1145/2675133.2675214 (ACM, New York, NY, USA, 2015).

Josephs, E., Fosco, C. & Oliva, A. Artifact magnification on deepfake videos increases human detection and subjective confidence. J. Vision 23 , 5327 (2023).

Aliberti, G., Di Pietro, R. & Guarino, S. Epidemic data survivability in unattended wireless sensor networks: New models and results. J. Netw. Comput. Appl. 99 , 146–165 (2017).

Jin, F., Dougherty, E., Saraf, P., Cao, Y. & Ramakrishnan, N. Epidemiological modeling of news and rumors on twitter. In Proc. 7th Workshop Soc. Netw. Min. Anal ., 1–9, https://doi.org/10.1145/2501025.2501027 (ACM, 2013).

Kimura, M., Saito, K. & Motoda, H. Efficient estimation of influence functions for SIS model on social networks. In Twenty-First Int. Jt. Conf. Artif. Intell . (2009).

Di Pietro, R. & Verde, N. V. Epidemic theory and data survivability in unattended wireless sensor networks: Models and gaps. Pervasive Mob. Comput. 9 , 588–597 (2013).

Shang, J., Liu, L., Li, X., Xie, F. & Wu, C. Epidemic spreading on complex networks with overlapping and non-overlapping community structure. Physica A 419 , 171–182 (2015).

Scaman, K., Kalogeratos, A. & Vayatis, N. Suppressing epidemics in networks using priority planning. IEEE Trans. Network Sci. Eng. 3 , 271–285 (2016).

van der Linden, S. Misinformation: Susceptibility, spread, and interventions to immunize the public. Nat. Med. 28 , 460–467 (2022).

Weng, L., Menczer, F. & Ahn, Y.-Y. Virality prediction and community structure in social networks. Sci. Rep. 3 , 1–6 (2013).

Bao, Y., Yi, C., Xue, Y. & Dong, Y. A new rumor propagation model and control strategy on social networks. In Proc. 2013 IEEE/ACM Int. Conf. Adv. Soc. Netw. Anal. Min ., 1472–1473, https://doi.org/10.1145/2492517.2492599 (ACM, 2013).

Zhang, N., Huang, H., Su, B., Zhao, J. & Zhang, B. Dynamic 8-state ICSAR rumor propagation model considering official rumor refutation. Physica A 415 , 333–346 (2014).

Hong, W., Gao, Z., Hao, Y. & Li, X. A novel SCNDR rumor propagation model on online social networks. In 2015 IEEE Int. Conf. Consum. Electron. - Taiwan , 154–155, https://doi.org/10.1109/icce-tw.2015.7216829 . IEEE (IEEE, 2015).

Tambuscio, M., Ruffo, G., Flammini, A. & Menczer, F. Fact-checking effect on viral hoaxes. In Proc. 24th Int. Conf. World Wide Web , 977–982, https://doi.org/10.1145/2740908.2742572 (ACM, 2015).

Xiao, Y. et al. Rumor propagation dynamic model based on evolutionary game and anti-rumor. Nonlinear Dynam. 95 , 523–539 (2018).

Zhang, Y., Su, Y., Weigang, L. & Liu, H. Rumor and authoritative information propagation model considering super spreading in complex social networks. Physica A 506 , 395–411 (2018).

Kumar, K. K. & Geethakumari, G. Information diffusion model for spread of misinformation in online social networks. In 2013 Int. Conf. Adv. Comput. Commun. Inform. (ICACCI) , 1172–1177, https://doi.org/10.1109/icacci.2013.6637343 . IEEE (IEEE, 2013).

King, K. K., Wang, B., Escobari, D. & Oraby, T. Dynamic effects of falsehoods and corrections on social media: A theoretical modeling and empirical evidence. J. Manage. Inform. Syst. 38 , 989–1010 (2021).

Red, V., Kelsic, E. D., Mucha, P. J. & Porter, M. A. Comparing community structure to characteristics in online collegiate social networks. SIAM Rev. 53 , 526–543 (2011).

Pastor-Satorras, R. & Vespignani, A. Epidemic spreading in scale-free networks. Phys. Rev. Lett. 86 , 3200–3203 (2001).

Bode, L. & Vraga, E. K. In related news, that was wrong: The correction of misinformation through related stories functionality in social media. J. Commun. 65 , 619–638 (2015).

Vraga, E. K. & Bode, L. Using expert sources to correct health misinformation in social media. Sci. Commun. 39 , 621–645 (2017).

Feld, S. L. Why your friends have more friends than you do. Am. J. Sociol. 96 , 1464–1477 (1991).

Chang, H.-C. H. & Fu, F. Co-diffusion of social contagions. New J. Phys. 20 , 095001 (2018).

Hébert-Dufresne, L. & Althouse, B. M. Complex dynamics of synergistic coinfections on realistically clustered networks. Proc. Natl. Acad. Sci. 112 , 10551–10556 (2015).

Hébert-Dufresne, L., Mistry, D. & Althouse, B. M. Spread of infectious disease and social awareness as parasitic contagions on clustered networks. Phys. Rev. Research 2 , 033306 (2020).

Fu, F., Christakis, N. A. & Fowler, J. H. Dueling biological and social contagions. Sci. Rep. 7 , 1–9 (2017).

Törnberg, P. Echo chambers and viral misinformation: Modeling fake news as complex contagion. PLoS One 13 , e0203958 (2018).

Anthony, T., Copper, C. & Mullen, B. Cross-racial facial identification: A social cognitive integration. Pers. Soc. Psychol. B. 18 , 296–301 (1992).

Barrera, D. & Simpson, B. Much ado about deception. Sociol. Methods & Res. 41 , 383–413 (2012).

Bröder, A. Deception can be acceptable. Am. Psychol. 53 , 805–806 (1998).

Greene, C. M. et al. Best practices for ethical conduct of misinformation Research. Eur. Psychol. 28 , 139–150 (2023).

Boynton, M. H., Portnoy, D. B. & Johnson, B. T. Exploring the ethics and psychological impact of deception in psychological research. IRB 35 , 7 (2013).

Center, P. R. Social media fact sheet. Pew Research Center: Washington, DC, USA (2021).

Acknowledgements

Institutional Review Board Approval: The survey in this project is CHRBSS (Behavioral) STUDY00001786, approved by the University of Vermont I.R.B. on 12/6/2021. The authors would like to thank Anne Marie Stupinski, Nana Nimako, Austin Block, and Alex Friedrichsen for their feedback on early drafts and Jean-Gabriel Young and Maria Sckolnick for comments on our analysis. The authors would also like to thank Engin Kirda and Wil Robertson for their contributions to an early survey prototype. This work is supported by the Alfred P. Sloan Foundation, The UVM OCEAN Project, and MassMutual under the MassMutual Center of Excellence in Complex Systems and Data Science. Any opinions, findings, conclusions, or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the aforementioned financial supporters.

Author information

Authors and affiliations.

Vermont Complex Systems Center, University of Vermont, Burlington, VT, 05405, USA

Juniper Lovato, Jonathan St-Onge, Randall Harp, Gabriela Salazar Lopez, Sean P. Rogers, Laurent Hébert-Dufresne & Jeremiah Onaolapo

Department of Computer Science, University of Vermont, Burlington, VT, 05405, USA

Juniper Lovato, Ijaz Ul Haq, Laurent Hébert-Dufresne & Jeremiah Onaolapo

Department of Philosophy, University of Vermont, Burlington, VT, 05405, USA

Randall Harp

Contributions

Author contributions: Conceptual: J.L., J.O., R.H., L.H-D.; Survey Development: J.L., J.O., R.H.; Survey Implementation: J.L., I.U.H., J.S-O., S.P.R., G.S.L.; Wrangling and Analysis: J.L., J.S-O., G.S.L., S.P.R., L.H-D.; Mathematical Model: L.H-D., J.L.; All authors drafted the manuscript, revised it critically for important intellectual content, gave final approval of the completed version, contributed to the conception of the work, and are accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved.

Corresponding author

Correspondence to Juniper Lovato .

Ethics declarations

Competing interests.

The authors declare no Competing Financial Interests but the following Competing Non-Financial Interests: the author, Laurent Hébert-Dufresne, is the Editor-in-Chief for npj Complexity and was not involved in the journal’s review of, or decisions related to, this manuscript.

Additional information

Publisher’s note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ .

About this article

Cite this article.

Lovato, J., St-Onge, J., Harp, R. et al. Diverse misinformation: impacts of human biases on detection of deepfakes on networks. npj Complex 1 , 5 (2024). https://doi.org/10.1038/s44260-024-00006-y

Received : 21 June 2023

Accepted : 18 December 2023

Published : 18 May 2024

DOI : https://doi.org/10.1038/s44260-024-00006-y

Top 3 Social Media Case Studies to Inspire You in 2024

Discover three successful social media case studies from top brands and learn how to create one. Benefit from their strategies and mistakes to ensure the success of your next campaign.

Social media is every marketer’s safe haven for branding and marketing.

And why not?

More than 50% of the population is active on social media, and more are signing up with every passing second.

In a recent poll by HubSpot, 79% of respondents said they had made a purchase after seeing a paid advertisement on social media.

This isn’t just happenstance.

It’s the constant effort these brands put into their dynamic social media presence that counts.

But how do they hold their customers’ attention for this long despite ever-growing competition?

Well, that’s something that we’ll reveal in this blog.

We’ll assess three social media case studies from top brands that lead their niches. Their game is simple yet effective.

How effective? Let’s take a look.

Social Media Case Study 1: Starbucks

Starbucks and social media are a match made in heaven. Being one of the sensational brands online, they are stirring the social media world with their strong presence.

They brew the right content to elevate the experiences of their coffee lovers. But how do they nail marketing with perfection every single time? Let’s find out.

Starbucks in Numbers

Starbucks mastered the transition from offline fame to online advertising. It uses each social media platform with a distinct goal to reach exactly the right audience. Drawing in more customers than ever before, Starbucks strikes the right balance in content across multiple platforms.

Key Takeaways

Though not every company has Starbucks’ budget to spend lavishly on social media marketing, here are some quick takeaways that will undoubtedly help.

1. Chasing Trends

Whatever the event, brands should take the opportunity to showcase their viewpoints and opinions. Successful brands like Starbucks jump on the bandwagon and leave no stone unturned to make their voice count among trending topics.

Here’s one such social media campaign example from Starbucks.

Starbucks is a firm believer in LGBTQ+ rights. When the pride wave surged, Starbucks came forward and reaffirmed its belief through the #ExtraShotOfPride campaign.

Starbucks joined hands with the Born This Way Foundation to raise $250K to support the LGBTQ+ community. Throughout the social media campaign, they shared quotes and stories of various Starbucks employees cherishing the pride spirit.

2. Less is More

Social media is not about quantity but quality. Starbucks follows the “less is more” principle to maintain its quality standards, even in captions. Spamming followers’ feeds with constant posting is a big no-no: Starbucks shares 5−6 posts per week on Instagram and 3−4 posts per week on Facebook.

Creative and crisp! That’s what defines a Starbucks caption. This post, with 111k+ likes, is no exception. Nothing beats a minimalist post with a strong caption.

3. User Generated Content is the King

Ditch the worry of creating content every day when you can make use of user-generated content. Starbucks makes sure to retweet or repost its loyal customers’ content. User-generated content markedly improves brand credibility.

Look at this Facebook post made out of customers’ tweets. The new Oatmilk drink received a shower of appreciation from some customers, and Starbucks couldn’t resist sharing it with others. It saved them content-brainstorming effort, plus they got free PR.

4. Building Rapport

Building rapport with the audience is an unspoken rule of brand fame. Social media now carries much of the burden of customer service, helping brands deliver faster replies.

Starbucks is always on its toes to respond to customers, actively solving concerns, expressing gratitude, or reposting. That kind of proactive service definitely deserves love and adoration.

5. Loads of campaigns

Starbucks is known for its innovative social media campaigns. Be it a new product launch or any festivity around the corner, Starbucks always turns up with a rewarding campaign.

In this social media campaign example, Starbucks introduced #RedCupContest with prizes worth $4500 during Christmas of 2016. A new entry came every 14 seconds.

The grand total of entries was a whopping 40,000 in just two days. Indeed Starbucks knows how to get the most out of the festive fever.

6. Content mix

Last but not least, the content mix of Starbucks is inspiring. They create tailored content for every platform.

The official YouTube channel of Starbucks comprises content in varied hues, from recipes to full-fledged series; Starbucks is the ultimate pioneer of experimentation.

Even on Instagram, they use all the features, like Guides, Reels, and IGTV, without affecting their eye-popping feed. Starbucks also maintains design consistency for an aesthetic content mix.

Starbucks has proved time and again to be a customer-centric brand with their unrelenting efforts.

Social Media Case Study 2: Ogilvy & Mather

Ogilvy & Mather needs no introduction. Founded in 1948 by David Ogilvy, the ‘Father of Advertising,’ the agency was revolutionizing marketing long before the advent of social media and continues that legacy today.

The iconic agency helps several Fortune 500 companies and more make a massive impact on their audiences worldwide.

Ogilvy & Mather knows its game too well and never fails to astonish. It’s not just about high-profile clients: Ogilvy nails its own marketing with perfection every single time.

Keep on reading.

Ogilvy & Mather in Numbers

They use social media to target pitch-perfect reach. Drawing in more hype than ever before, they know how to strike the right balance and bring out emotions with their heart-warming campaigns.

Not every company has David Ogilvy’s legacy or even affluent clients to boast of, but here are some quick takeaways that will undoubtedly help you become a pro marketer.

1. Integrating Values

Ogilvy stands apart from the crowd, creating trends. They leave no stone unturned to communicate values.

Proud Whopper is one such social media campaign by Ogilvy that was an instant hit on the internet. People were offered Whoppers in rainbow-colored wrappers, with a note that said, “Everyone’s the same on the inside,” reaffirming the importance of LGBTQ+ rights.

The campaign got 1.1 billion impressions, $21 million of earned media, 450,000 blog mentions, 7 million views, and became the #1 trending topic on Facebook and Twitter.

Ogilvy made a remarkable #TBT video to honor this momentous event, showcasing its supremacy in creating impactful campaigns.

2. Quality over Quantity

Ogilvy believes in quality supremacy to maintain its high standards, even in post captions.

Arbitrary posting isn’t a part of their agenda. They share 5−7 posts on Instagram and Facebook weekly.

Direct and very precise: that’s what defines an Ogilvy caption. This post is no exception. It exhibits the success of their client work by describing the motive behind the campaign and sharing the ad they created to raise awareness.

3. Adding Credibility

Won awards? It’s time to boast! That’s the most authentic way of establishing trust with your clients: it bears proof of your excellence.

Look at this pinned Twitter post. Ogilvy won Global Network of the Year at the prestigious London International Awards. It also earned Regional Network of the Year for Europe, the Middle East, and Asia.

What better than this to give its audience an idea about Ogilvy’s roaring success and undoubted potential?

4. Being Innovative

Building rapport with the audience is an unspoken rule of brand fame. And that’s why you need to tell stories. Social media has become an indispensable medium for spreading your stories far and wide.

Ogilvy shares the historical tale of its existence and how it has adapted to the challenges of a changing world. The team talks extensively about adapting to the latest trends to stay on top.

5. Brainstorming Uniqueness

Being unique is what propels you on social media. People are always looking for brands that do something different from the herd. So your task each day is undeniably brainstorming unique content.

KFC wanted more of its customers to use its app. So Ogilvy and KFC decided to hide a secret menu in the app, a mass invitation to download it without being salesy at all. The result? Downloads up 111% at launch!

6. Inspire Your Peeps

Inspiration is everywhere. But how do you channel it and mold it to your brand guidelines? Renowned brands move their audience, filling them with a sense of realization. Who doesn’t seek validation? We all need quotes and inspiration to live by.

Ogilvy has dedicated its entire Pinterest profile to inspiration. The profile has numerous insightful infographics that encourage you to keep pursuing marketing when your spirits run low. And that’s how it embodies the essence of being a marketing leader: by inspiring its followers.

Got some good ideas for your branding? We have created templates and tools to help you execute them hassle-free. Tread on further and download the Trending Hashtag Kit for 2024 to get into action.

Social Media Case Study 3: PewDiePie

YouTube king Felix Arvid Ulf Kjellberg, with 111 million subscribers on his PewDiePie channel, has defied all norms. One of the most prolific content creators of the decade, Felix was on Time Magazine’s list of the World’s 100 Most Influential People in 2016.

Needless to say, he is still relevant to this day and has a massive following on social media. The Swedish YouTuber leveraged social media not just for branding but to give himself a new identity, opening doors to fame and a successful career.

What was the cause of this extraordinary trajectory?

Let’s find out.

PewDiePie in Numbers

PewDiePie likes to keep his social media raw and unfiltered. That’s why subscribers love to have a glimpse of his everyday life and follow him on other social media platforms as well. Here’s a quick snapshot of that.

Felix took the early-bird advantage and started creating content when it wasn’t even a popular practice. We can’t go back in time, but we can definitely learn a lot from his social media success.

1. Start Now

If you are still skeptical about making the first move, don’t be. Stop waiting and experiment. It’s better late than never.

Social media favors those who start early, because you build a surplus of content to hold your audience and quench their thirst for more quality content.

PewDiePie started creating videos in 2011 and live-streamed his gaming sessions with commentaries. It was something new and completely original. Ever since, he has continued to make thousands of videos that entertain his audience.

2. Gather Your Tribe

As a content creator, PewDiePie knows the art of engaging his audience very well. He strives to build lasting connections and encourages two-way communication. As a result, his followers love to jump into his exciting challenges.

Felix treasures his gaming community. He frequently asks his followers to take screenshots and turn them into funny memes . He gives them tasks to keep them engaged and amused .

3. Collaboration and Fundraising

Once you reach that stage and gain popularity, people want to see more of you with their favorite personalities. That’s exactly what Felix does.

He collaborates with multiple YouTubers and brands and puts out exclusive content for his followers. He also runs multiple fundraising campaigns to support vital causes and social wellbeing.

Here’s one such social media campaign example. PewDiePie supported the CRY Foundation and raised $239,000 in just one day to make a positive impact for children in India. He thanked everyone for their contributions and active participation in a noble cause.

4. Keep it Real

Felix likes to keep his content fluff-free. You get to witness raw emotions from an unfiltered life. This instantly appeals to the audience and makes the posts more relatable .

Apart from that, he also uses storytelling techniques to narrate his experiences, adding a very personalized touch to each of the videos.

Here’s a video where Felix and Ken from CinnamonToastKen discuss what could possibly be done with a million dollars in different parts of the world. The topic is quite intriguing.

More than 3.8M people have watched it, and 216K of them liked it, proving that you don’t always need to sweat over complex content. Even the simplest ideas can make the cut.

How to Write a Social Media Marketing Case Study

Many small businesses struggle when it comes to social media marketing. But guess what? Small businesses can slay the competition with a powerful tool: the social media case study.

These social media case studies are success stories that prove your hustle is paying off. Here’s how to weave a case study that showcases your small business wins:

Building Your Brag Book

  • Pick Your Perfect Project:  Did a specific social media campaign drive a surge in sales? Highlight a product launch that went viral. Choose a project with impressive results you can showcase.
  • DIY Interview:  Don’t have a fancy marketing team? No worries! Record yourself talking about your challenges, goals, and the strategies that made a difference.
  • Data Dive:  Track down your social media analytics! Look for growth in followers, website traffic driven by social media, or engagement metrics that show your efforts are working (see the quick sketch after this list).
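Before you quote those numbers, it can help to sanity-check them. Follower growth and engagement rate are simple ratios; here is a minimal Python sketch, with purely illustrative figures that are not taken from any case study on this page.

```python
# Quick sanity checks for the two metrics most case studies quote.
# The numbers passed in below are illustrative assumptions only.

def growth_rate(start_followers: int, end_followers: int) -> float:
    """Percentage growth between two follower counts."""
    return (end_followers - start_followers) / start_followers * 100

def engagement_rate(engagements: int, impressions: int) -> float:
    """Engagements (likes, comments, shares, clicks) per impression, as a percentage."""
    return engagements / impressions * 100

print(f"Follower growth: {growth_rate(100, 10_000):.0f}%")         # 9900%
print(f"Engagement rate: {engagement_rate(4_500, 150_000):.1f}%")  # 3.0%
```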

Now that you have all the ingredients, it’s time to cook up a brilliant case study.

Crafting Your Case Study

  • Headline Hunt:  Grab attention with a clear and concise headline. Mention your business name and a key achievement (e.g., “From 100 to 10,000 Followers: How We Grew Our Bakery’s Social Buzz”).
  • Subheading Scoop:  Briefly summarize your success story in a subheading, piquing the reader’s interest and highlighting key takeaways.
  • The Business Struggle:  Be honest about the challenges you faced before tackling social media. This will build trust and allow other small businesses to connect.
  • DIY Social Strategies:  Share the social media tactics you used, such as engaging content formats, community-building strategies, or influencer collaborations.
  • Numbers Don’t Lie:  Integrate data and visuals to support your story. Include charts showcasing follower growth or screenshots of top-performing posts.
  • Simple & Straightforward:  Use clear, concise language that’s easy to understand. Bullet points and short paragraphs make your case study digestible and showcase your professionalism.

Remember: your social media case study is a chance to celebrate your achievements and grow your business. So tell your story with pride, showcase your data-driven results, and watch your brand recognition soar.

Social media campaigns are winning hearts on every platform. However, their success rates largely depend on your year-round presence. That’s why being consistent really does the trick.

We’re sure you’ve picked up a few things from the social media case studies mentioned above.

To excel further at your social media marketing, use our FREE Trending Hashtag Kit and fill your calendar with everyday content ideas.

Once you download it, you get 3,000+ hashtags based on each day’s theme or occasion, along with editable design templates for hassle-free social media posting.

What are you waiting for? Download now.

Frequently Asked Questions

🌟 How do I start a social media campaign idea?

Here’s how you can start a social media campaign:

  • Finalize your campaign goals
  • Brainstorm personas
  • Pick a social media channel
  • Research your competitors and audience
  • Finalize an idea that’s in trend
  • Promote the campaign
  • Start the campaign
  • Track the performance

🌟 What are the different types of social media campaigns?

Different types of social media campaigns are:

  • Influencer Campaigns
  • Hashtag Challenges

🌟 Why is social media campaign important?

Social media campaigns have various benefits:

  • Boost traffic
  • Better Conversions
  • Cost-effective Marketing
  • Lead Generation
  • PR & Branding
  • Loyal Followers

🌟 What are some of the best social media campaign tools?

Some of the best social media campaign tools are:

  • SocialPilot

Understanding How Digital Media Affects Child Development

Technology and digital media have become ubiquitous parts of our daily lives. Screen time among children and adolescents was high before COVID-19 emerged, and it has risen further during the pandemic, due in part to the lack of in-person interactions.

In this increasingly digital world, we must strive to better understand how technology and media affect development, health outcomes, and interpersonal relationships. In fact, the fiscal year 2023 federal budget sets aside no less than $15 million within NICHD’s appropriation to investigate the effects of technology use and media consumption on infant, child, and adolescent development.

Parents may not closely oversee their children’s media use, especially as children gain independence. However, many scientific studies of child and adolescent media use have relied on parents’ recollections of how much time the children spent in front of a screen. By using software embedded within mobile devices to calculate children’s actual use, NICHD-supported researchers found that parent reports were inaccurate more often than they were on target. A little more than one-third of parents in the study underestimated their children’s usage, and nearly the same proportion overestimated it.

With a recent grant award from NICHD, researchers at Baylor College of Medicine plan to overcome the limitation of relying on parental reports by using a novel technology to objectively monitor preschool-age children’s digital media use. They ultimately aim to identify the short- and long-term influences of technology and digital media use on children’s executive functioning, sleep patterns, and weight.

This is one of three multi-project program grants awarded in response to NICHD’s recent funding opportunity announcement inviting proposals to examine how digital media exposure and use impact developmental trajectories and health outcomes in early childhood or adolescence. Another grant supports research to characterize the context, content, and use of digital media among children ages 1 to 8 years and to examine associations with the development of emotional regulation and social competence. A third research program seeks to better characterize the complex relationships between social media content, behaviors, brain activity, health, and well-being during adolescence.

I look forward to the findings from these ongoing projects and other studies that promise to inform guidance for technology and media use among children and adolescents. Additionally, the set-aside funding for the current fiscal year will allow us to further expand research in this area. These efforts will help us advance toward our aspirational goal to discover how technology exposure and media use affect developmental trajectories, health outcomes, and parent-child interactions.

SEO Chatter

20 Best Social Media Marketing Case Study Examples

How would you like to read the best social media marketing case studies ever published?

More importantly, how would you like to copy the best practices in social media marketing that are based on real-world examples and not just theory?

Below, you’ll find a list of the top 20 social media case study examples along with the results and key findings. By studying these social media marketing case studies and applying the lessons learned to your own accounts, you can hopefully achieve similar results.

Social Media Case Study Examples

793,500+ Impressions for Semrush on Twitter – Walker Sands Social Media Case Study

The case study shows how Walker Sands implemented a premium Twitter microcontent program for Semrush, a global leader in digital marketing software. Semrush needed a strategic social media marketing partner to help distinguish its brand from competitors, drive a higher engagement rate among its target audience, and build brand loyalty. In this case study, you’ll find out how the social strategy focused on three things: using humor, embedding the brand in trending conversations, and focusing on the audience’s interests over marketing messages. The result was an increase of more than 793,500 impressions, 34,800 engagements, and a 4.4% average engagement rate.
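For readers who want to trace the math: the case study does not spell out the formula, but the quoted 4.4% is consistent with the common definition of engagement rate as engagements divided by impressions, since 34,800 ÷ 793,500 ≈ 0.044, or about 4.4%.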

Viral Oreo Super Bowl Tweet  – Social Media Case Study

This is a popular case study to learn valuable insights for B2C marketing. During Super Bowl XLVII, the lights went out in the football stadium and the Oreo brand went viral with a single tweet that said “Power out? No problem. You can still dunk in the dark.” Read the historical account of that famous social media marketing moment from the people who lived through it so you can gather ideas on how to be better prepared for future social media campaigns that you can take advantage of in real-time.

Facebook Posting Strategy That Led to 3X Reach & Engagement  – Buffer Social Media Case Study

In this social media case study example, you’ll find out how Buffer cut its Facebook posting frequency by 50% but increased the average weekly reach and engagement by 3X. Hint: The strategy had to do with creating fewer, better-quality posts, that were aimed at gaining higher engagement.

Achieving a 9 Million Audience by Automating Pinterest SEO  – Social Media Case Study

This is a good social media marketing case study for marketers who use Pinterest. Discover how Chillital went from 0 to 9 million engaged audience members and 268 million impressions. You’ll learn about the step-by-step research process of finding where your audience lives and breathes content, get a detailed analysis of how the author used Pinterest to generate brand awareness, and learn about using community-driven content promotion to scale social media results.

5X Increase In App Installs from TikTok  – Bumble Social Media Case Study

With the use of TikTok on the rise, social media case studies are now being shared about how to get the most value out of marketing on this platform. This one, in particular, is good to read because it explains how Bumble, a dating app, used TikTok more effectively by following the mantra, “Don’t Make Ads, Make TikToks”. This case study in social media marketing resulted in a 5X increase in app installs and a 64% decrease in cost-per-registration.

330% Increase In Reach for the Make a Wish Foundation – Disney Social Media Case Study

Check out this case study to find out how the Make-A-Wish Foundation increased its social media reach, audience, and engagement by partnering with Disney in the Share Your Ears campaign. The strategy was simple: ask people to take a photo of themselves wearing Mickey Mouse ears, post it on social media with the hashtag #ShareYourEars, and a $5 donation would be made to Make-A-Wish. The results were remarkable: over 1.7 million posted photos and 420 million social media impressions. This led to a 15% audience increase on Facebook and a 13% audience increase on Instagram, with a total increase of 330% in social media reach and a 554% increase in engagement during the campaign.

How 3 Schools Used Social Media Advertising to Increase Website Traffic & Applications – Social Media Case Study

This example includes three of the best social media case studies from Finalsite, a marketing agency for educational institutions. It shows the power of social media advertising to increase website traffic and enrollment. One case study, in particular, shows how a limited budget of $350 per month increased website sessions by 515%, more than 2,200 clicks on the apply button for a study abroad application, 2,419 views on the request information page, and 575 views on the application process page.

Client Case Studies – LYFE Marketing Social Media Case Study

LYFE Marketing is a social media management company that helps clients gain new customers, generate sales, and increase brand exposure online. This page includes several of its top social media marketing case studies along with the approach and key results from each campaign. It’s packed with screenshots of the social media posts and engagement metrics so you can understand how each strategy worked for success, and get inspiration for your own campaigns.

3X Leads for a Local Business – Vertex Marketing Social Media Case Study

This is a good case study about finding the right balance between organic reach with social media posts and paid reach with social media marketing ads. You’ll find out how Vertex Marketing helped a local kitchen and bath remodeling business increase the number of leads by 3X. As for the return on investment (ROI) for this campaign, each lead for the client was worth about $10,000. The result was 6,628 audience reach, $12.43 average cost per conversion, and 18 conversions.

235% Increase In Conversions with Facebook Ads Funnel – Marketing 360 Social Media Case Study

This is one of Marketing 360’s case study examples that demonstrates the effectiveness of a Facebook ads sales funnel for B2B marketing. An ads funnel is a series of social media advertisements that target a specific audience at each stage of the buyer’s journey. By mapping out the buyer’s journey and creating a social media marketing ad campaign for each stage, you can guide new leads through the sales funnel and turn them into paying customers. This case study resulted in a 235% increase in conversions for a truck lift manufacturer.
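To make the funnel idea concrete, here is a minimal sketch of how the stages might be laid out in code; the stage names, audiences, and ad formats are illustrative assumptions, not details taken from the Marketing 360 case study.

```python
# Minimal sketch of an ads funnel: one ad set per stage of the buyer's journey.
# Every value below is an illustrative assumption, not data from the case study.

funnel = [
    {
        "stage": "awareness",
        "audience": "cold: interest-based and lookalike targeting",
        "ad": "short video introducing the product line",
        "goal": "impressions and video views",
    },
    {
        "stage": "consideration",
        "audience": "warm: engaged with the awareness ads or visited the site",
        "ad": "carousel of use cases with a 'Learn more' call to action",
        "goal": "link clicks and landing-page views",
    },
    {
        "stage": "conversion",
        "audience": "hot: viewed product pages or started a quote request",
        "ad": "retargeting ad with a testimonial and a 'Request a quote' button",
        "goal": "leads and sales",
    },
]

for step in funnel:
    print(f"{step['stage']:>13} -> target: {step['audience']} | optimize for: {step['goal']}")
```

The point of the structure is simply that each stage narrows the audience and changes the objective, so each ad set hands warmer leads to the next one.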

15% Increase In Social Media Followers In 6 Months – Hootsuite Social Media Case Study

This is one of the best social media marketing case studies available online for businesses in the hospitality industry. Find out how Meliá Hotels International incorporated social media directly into its business model, both as a channel for client communication and as a platform to listen and learn about client needs and preferences. As a result, Meliá Hotel’s social media following grew from 5 million to 6 million in six months; an increase of more than 15%.

The Impact of Social Signals On SEO – Fat Stacks Social Media Case Study

This is a good case study for understanding the effect social media can have on SEO. After links to a web page were built on social media channels like Facebook, Twitter, Pinterest, and LinkedIn, the page’s rankings for long-tail keywords improved in Google’s search engine.

96 Link Clicks for a Vacation Rental – Maria Peagler Social Media Case Study

As the title of this social media case study example suggests, you’ll learn how Maria Peagler helped a vacation rental get 96 clicks out of a 3,274-person audience reach on a single Facebook ad, about a 2.9% click-through rate (CTR). What’s most important about this B2C example is that those clicks were of the highest possible quality for the client: Maria dug into the analytics to find the best time of day to run the ad and the right age groups to target, and she used specific language to drive only the clicks that were most likely to convert.
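As a quick check on the math, the quoted figure comes from dividing clicks by reach: 96 ÷ 3,274 ≈ 0.029, or about 2.9%. Strictly speaking, CTR is more often calculated against impressions rather than reach, so treat this as an approximation.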

Vienna Tourist Board Uses an Instagram Wall to Attract Tourists – Walls.io Social Media Case Study

Inside this case study, you’ll find out how the City of Vienna uses a simple social media content aggregator to display its Instagram feed on the website. This basic marketing strategy harnesses the power of user-generated content to gain more followers and keep in touch with previous visitors to increase brand awareness and repeat visits.

Complete Instagram Marketing Strategy for Sixthreezero – Vulpine Interactive Social Media Case Study

This is an in-depth case study on social media marketing with Instagram. You’ll discover how Vulpine Interactive was able to turn an existing, unmanaged account into a strong company asset for Sixthreezero, a bicycling company that uses ecommerce to drive sales. There was a lot of strategy and planning that went into growing the account by 39%, increasing website traffic from Instagram by over 300%, and achieving 77,659 total engagements. Inside, you’ll get the complete social strategy, tactics, key performance indicators (KPIs), and results.

Twitter Marketing Success Stories – Social Media Case Study

If you’re looking for social media case study examples for Twitter using both organic and paid ads, then this page has everything you need. It includes Twitter’s top marketing success stories for you to get new ideas for your own B2C and B2B marketing campaigns.

How 3 Big Brands Use Pinterest for Marketing – SmartInsights Social Media Case Study

This is a case study page by SmartInsights with an overview of how 3 big brands use Pinterest for marketing. Although it’s a quick read, you can learn some valuable tactics that Nordstrom, Sephora, and Petplan are using to market their brands on this social media platform.

25+ TikTok Social Campaign Results – Chatdesk Social Media Case Study

If you’re looking for the best social media case studies for TikTok, then this list by Chatdesk is an excellent resource. It includes more than 25 examples from big brands like Starbucks, Red Bull, Spikeball, Crocs, Guess Jeans, and Gymshark. Give it a read to find out exactly how these brands use TikTok effectively to scale their businesses.

Reddit for Business: Meet Your Maker – Social Media Case Study

Want to learn how to use Reddit to market your business online? This new social media marketing case study page by Reddit, called “Meet Your Maker,” showcases the people behind some of the most innovative and creative brand activations on its platform. Examples include campaigns by Adobe, Capcom, and noosa Yoghurt.

How Boston University Uses Snapchat to Engage with Students – Social Media Case Study

With more than 75% of college students using Snapchat on a daily basis, it became clear that Boston University had to make this platform a primary marketing channel. This social media case study outlines all of the top strategies Boston University uses to connect with prospective and current students.

Now, if you’re looking for more digital marketing ideas, then make sure to check out these other related guides:  SEO case studies with data on improving organic search engine optimization, PPC case studies  for paid search examples, email marketing case studies , affiliate marketing case studies , content marketing case studies , and general digital marketing case studies .

What Is a Social Media Case Study?

A social media case study is an in-depth study of social media marketing in a real-world context. It can focus on one social media tactic or a group of social media strategies to find out what works in social media marketing to promote a product or service.

Are Case Studies Good for Social Media Marketing?

Case studies are good for social media because you can learn about how to do social media marketing in an effective way. Instead of just studying the theory of social media, you can learn from real examples that applied social media marketing methods to achieve success.

Summary for Social Media Marketing Case Studies

I hope you enjoyed this list of the best social media marketing case study examples that are based on real-world results and not just theory.

As you discovered, the social media case studies above demonstrated many different ways to perform well on social platforms. By studying the key findings from these case study examples and applying the methods to your own accounts, you can hopefully achieve the same positive outcomes. New social media case studies are published every month, and I’ll continue to update this list as they become available, so keep checking back for current examples.
