Misinformation, manipulation, and abuse on social media in the era of COVID-19

  • Published: 22 November 2020
  • Volume 3, pages 271–277 (2020)

  • Emilio Ferrara 1,
  • Stefano Cresci 2 &
  • Luca Luceri 3


The COVID-19 pandemic represented an unprecedented setting for the spread of online misinformation, manipulation, and abuse, with the potential to cause dramatic real-world consequences. The aim of this special issue was to collect contributions investigating issues such as the emergence of infodemics, misinformation, conspiracy theories, automation, and online harassment at the onset of the coronavirus outbreak. Articles in this collection adopt a diverse range of methods and techniques, and focus on the study of the narratives that fueled conspiracy theories, on the diffusion patterns of COVID-19 misinformation, on global news sentiment, on hate speech and social bot interference, and on multimodal Chinese propaganda. The diversity of the methodological and scientific approaches undertaken in these articles demonstrates the interdisciplinarity of these issues. In turn, these crucial endeavors might anticipate a growing trend of studies where diverse theories, models, and techniques will be combined to tackle the different aspects of online misinformation, manipulation, and abuse.


Introduction

Malicious and abusive behaviors on social media have elicited massive concerns for the negative repercussions that online activity can have on personal and collective life. The spread of false information [8, 14, 19] and propaganda [10], the rise of AI-manipulated multimedia [3], the presence of AI-powered automated accounts [9, 12], and the emergence of various forms of harmful content are just a few of the several perils that social media users can—even unconsciously—encounter in the online ecosystem. In times of crisis, these issues can only get more pressing, with increased threats for everyday social media users [20]. The ongoing COVID-19 pandemic is no exception and, due to dramatically increased information needs, represents the ideal setting for the emergence of infodemics—situations characterized by the undisciplined spread of information, including a multitude of low-credibility, fake, misleading, and unverified information [24]. In addition, malicious actors thrive on these wild situations and aim to take advantage of the resulting chaos. In such high-stakes scenarios, the downstream effects of misinformation exposure or information landscape manipulation can manifest in attitudes and behaviors with potentially dramatic public health consequences [4, 21].

By affecting the very fabric of our socio-technical systems, these problems are intrinsically interdisciplinary and require joint efforts to investigate and address both the technical aspects (e.g., how to thwart automated accounts and the spread of low-quality information, how to develop algorithms for detecting deception, automation, and manipulation) and the socio-cultural ones (e.g., why people believe in and share false news, how interference campaigns evolve over time) [7, 15]. Fortunately, in the case of COVID-19, several open datasets were promptly made available to foster research on the aforementioned matters [1, 2, 6, 16]. Such assets bootstrapped the first wave of studies on the interplay between a global pandemic and online deception, manipulation, and automation.

Contributions

In light of the previous considerations, the purpose of this special issue was to collect contributions proposing models, methods, empirical findings, and intervention strategies to investigate and tackle the abuse of social media along several dimensions that include (but are not limited to) infodemics, misinformation, automation, online harassment, false information, and conspiracy theories about the COVID-19 outbreak. In particular, to protect the integrity of online discussions on social media, we aimed to stimulate contributions along two interlaced lines. On the one hand, we solicited contributions to enhance our understanding of how health misinformation spreads, of the role of the social media actors that play a pivotal part in the diffusion of inaccurate information, and of the impact of their interactions with organic users. On the other hand, we sought to stimulate research on the downstream effects of misinformation and manipulation on users' perception of, and reaction to, the wave of questionable information they are exposed to, and on possible strategies to curb the spread of false narratives. From ten submissions, we selected seven high-quality articles that provide important contributions toward curbing the spread of misinformation, manipulation, and abuse on social media. In the following, we briefly summarize each of the accepted articles.

The COVID-19 pandemic has been plagued by the pervasive spread of a large number of rumors and conspiracy theories, which even led to dramatic real-world consequences. “Conspiracy in the Time of Corona: Automatic Detection of Emerging COVID-19 Conspiracy Theories in Social Media and the News” by Shahsavari, Holur, Wang, Tangherlini, and Roychowdhury uses a machine learning approach to automatically discover and investigate the narrative frameworks supporting such rumors and conspiracy theories [17]. The authors uncover how the various narrative frameworks rely on the alignment of otherwise disparate domains of knowledge, and how they attach to the broader reporting on the pandemic. These alignments and attachments are useful for identifying areas in the news that are particularly vulnerable to reinterpretation by conspiracy theorists. Moreover, identifying the narrative frameworks that provide the generative basis for these stories may also help devise methods for disrupting their spread.

The widespread diffusion of rumors and conspiracy theories during the outbreak has also been analyzed in “Partisan Public Health: How Does Political Ideology Influence Support for COVID-19 Related Misinformation?” by Nicholas Havey. The author investigates how political leaning influences participation in the discourse of six COVID-19 misinformation narratives: 5G activating the virus, Bill Gates using the virus to implement a global surveillance project, the “Deep State” causing the virus, bleach and other disinfectants as ingestible protection against the virus, hydroxychloroquine being a valid treatment for the virus, and the Chinese Communist Party intentionally creating the virus [13]. Results show that conservative users dominated most of these discussions and pushed diverse conspiracy theories. The study further highlights how political and informational polarization might affect adherence to health recommendations and can, thus, have dire consequences for public health.

Fig. 1

Network based on the web-page URLs shared on Twitter from January 16, 2020 to April 15, 2020 [18]. Each node represents a web-page URL, while connections indicate links among web-pages. Purple nodes represent traditional news sources, orange nodes indicate low-quality and misinformation news sources, and green nodes represent authoritative health sources. Edges take the color of their source node, and node size is proportional to degree.

“Understanding High and Low Quality URL Sharing on COVID-19 Twitter Streams” by Singh, Bode, Budak, Kawintiranon, Padden, and Vraga investigates URL sharing patterns during the pandemic for different categories of websites [18]. Specifically, the authors categorize URLs as related to either traditional news outlets, authoritative health sources, or low-quality and misinformation news sources. Then, they build networks of shared URLs (see Fig. 1). They find that both authoritative health sources and low-quality/misinformation ones are shared much less than traditional news sources. However, COVID-19 misinformation is shared at a higher rate than news from authoritative health sources. Moreover, the COVID-19 misinformation network appears to be dense (i.e., tightly connected) and disassortative. These results can pave the way for future intervention strategies aimed at fragmenting networks responsible for the spread of misinformation.
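
For readers unfamiliar with these network measures, below is a minimal sketch, in Python with the networkx library, of how density and degree assortativity can be computed. The toy edge list and node names are invented for illustration; they are not the authors' data.

```python
# Minimal sketch of the network measures discussed above (density and
# assortativity). The edge list is an invented toy example, not data
# from Singh et al. [18].
import networkx as nx

# Nodes are web-page URLs; edges indicate links among web-pages.
G = nx.Graph()
G.add_edges_from([
    ("news_a", "health_a"), ("news_a", "misinfo_a"),
    ("misinfo_a", "misinfo_b"), ("misinfo_b", "misinfo_c"),
    ("misinfo_a", "misinfo_c"),  # misinformation pages linking tightly together
    ("health_a", "news_b"),
])

# Density: the fraction of all possible edges that actually exist.
# A dense (sub)network suggests a tightly connected community.
print("density:", nx.density(G))

# Degree assortativity: a negative coefficient indicates a disassortative
# network, where high-degree nodes tend to connect to low-degree ones.
print("degree assortativity:", nx.degree_assortativity_coefficient(G))
```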

The relationship between news sentiment and real-world events is a long-studied matter that has serious repercussions for agenda setting and (mis-)information spreading. In “Around the world in 60 days: An exploratory study of impact of COVID-19 on online global news sentiment”, Chakraborty and Bose explore this relationship for a large set of worldwide news articles published during the COVID-19 pandemic [5]. They apply unsupervised and transfer learning-based sentiment analysis techniques and explore correlations between news sentiment scores and the global and local numbers of infected people and deaths. Specific case studies are also conducted for countries such as China, the US, Italy, and India. The results help identify the key drivers of negative news sentiment during an infodemic, as well as the communication strategies that were used to curb negative sentiment.
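
As a rough illustration of this kind of correlation analysis (a sketch in the spirit of the study, not the authors' actual pipeline), one could correlate daily aggregate news sentiment scores with daily case counts; the numbers below are invented.

```python
# Illustrative sketch: correlating daily news sentiment with case counts.
# All values are invented; sentiment scores might come from an off-the-shelf
# model such as VADER (range -1 to 1), aggregated per day.
from scipy.stats import pearsonr

daily_sentiment = [-0.10, -0.25, -0.30, -0.42, -0.55, -0.60, -0.58]
daily_new_cases = [120, 340, 560, 980, 1500, 2100, 2050]

# A strongly negative r would indicate that news sentiment grows more
# negative as infection counts rise.
r, p = pearsonr(daily_sentiment, daily_new_cases)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```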

Farrell, Gorrell, and Bontcheva investigate one of the most damaging sides of online malicious content: online abuse and hate speech. In “Vindication, Virtue and Vitriol: A study of online engagement and abuse toward British MPs during the COVID-19 Pandemic”, they adopt a mixed-methods approach to analyze citizen engagement with British MPs' online communications during the pandemic [11]. Among their findings is that certain pressing topics, such as financial concerns, attract the highest levels of engagement, though not necessarily negative engagement. Other topics, such as criticism of authorities and subjects like racism and inequality, instead tend to attract higher levels of abuse, depending on factors such as ideology, authority, and affect.

Yet another aspect of online manipulation—that is, automation and social bot interference—is tackled by Uyheng and Carley in their article “Bots and online hate during the COVID-19 pandemic: Case studies in the United States and the Philippines” [22]. Using a combination of machine learning and network science, the authors investigate the interplay between the use of social media automation and the spread of hateful messages. They find that social bots achieve greater impact when targeting dense and isolated communities. While the majority of extant literature frames hate speech as a linguistic phenomenon and, similarly, social bots as an algorithmic one, Uyheng and Carley adopt a more holistic approach by proposing a unified framework that accounts for disinformation, automation, and hate speech as interlinked processes, generating insights by examining their interplay. The study also reflects on the value of taking a global approach to computational social science, particularly in the context of a worldwide pandemic and infodemic, with its universal yet also distinct and unequal impacts on societies.

It has now become clear that text is not the only way to convey online misinformation and propaganda [10]. Instead, images such as those used for memes are being increasingly weaponized for this purpose. Based on this evidence, Wang, Lee, Wu, and Shen investigate US-targeted Chinese COVID propaganda, which happens to rely heavily on text images [23]. In their article “Influencing Overseas Chinese by Tweets: Text-Images as the Key Tactic of Chinese Propaganda”, they tracked thousands of Twitter accounts involved in the #USAVirus propaganda campaign. A large percentage (≈38%) of those accounts was later suspended by Twitter as part of its efforts to counter information operations (see Footnote 1). The authors studied the behavior and content production of the suspended accounts. They also experimented with different statistical and machine learning models to understand which account characteristics most determined suspension by Twitter, finding that the repeated use of text images played a crucial part.
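
As a hedged sketch of this kind of suspension model, a logistic regression relating account features to suspension status might look like the following; the features, data, and choice of model are illustrative assumptions, not the authors' exact specification.

```python
# Minimal sketch of modeling account suspension from behavioral features,
# in the spirit of Wang et al. [23]. Features and data are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per account:
# [fraction of tweets containing text images, tweets per day, account age in days]
X = np.array([
    [0.80, 55, 30], [0.75, 60, 25], [0.70, 48, 40],    # suspended-like accounts
    [0.05, 4, 900], [0.10, 7, 1200], [0.02, 2, 2000],  # organic-looking accounts
])
y = np.array([1, 1, 1, 0, 0, 0])  # 1 = account was suspended by the platform

model = LogisticRegression(max_iter=1000).fit(X, y)

# The sign and magnitude of each coefficient indicate how strongly a feature
# is associated with suspension; in the study, repeated use of text images
# turned out to be a crucial signal.
print(dict(zip(["text_image_rate", "tweets_per_day", "age_days"], model.coef_[0])))
```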

Overall, the great interest around the COVID-19 infodemic and, more broadly, around research themes such as online manipulation, automation, and abuse, combined with the growing risks of future infodemics, make this special issue a timely endeavor that will contribute to the future development of this crucial area. Given the recent advances and breadth of the topic, as well as the level of interest in related events that followed this special issue—such as dedicated panels, webinars, conferences, workshops, and other special issues in journals—we are confident that the articles selected in this collection will be both highly informative and thought-provoking for readers. The diversity of the methodological and scientific approaches undertaken in these articles demonstrates the interdisciplinarity of the issues at hand, which demand renewed and joint efforts from different computer science fields, as well as from related disciplines such as the social, political, and psychological sciences. In this regard, the articles in this collection attest to and anticipate a growing trend of interdisciplinary studies where diverse theories, models, and techniques will be combined to tackle the different aspects at the core of online misinformation, manipulation, and abuse.

Footnote 1: https://blog.twitter.com/en_us/topics/company/2020/information-operations-june-2020.html

References

1. Alqurashi, S., Alhindi, A., & Alanazi, E. (2020). Large Arabic Twitter dataset on COVID-19. arXiv preprint arXiv:2004.04315.

2. Banda, J. M., Tekumalla, R., Wang, G., Yu, J., Liu, T., Ding, Y., & Chowell, G. (2020). A large-scale COVID-19 Twitter chatter dataset for open scientific research—An international collaboration. arXiv preprint arXiv:2004.03688.

3. Boneh, D., Grotto, A. J., McDaniel, P., & Papernot, N. (2019). How relevant is the Turing test in the age of sophisbots? IEEE Security & Privacy, 17(6), 64–71.

4. Broniatowski, D. A., Jamison, A. M., Qi, S., AlKulaib, L., Chen, T., Benton, A., et al. (2018). Weaponized health communication: Twitter bots and Russian trolls amplify the vaccine debate. American Journal of Public Health, 108(10), 1378–1384.

5. Chakraborty, A., & Bose, S. (2020). Around the world in sixty days: An exploratory study of impact of COVID-19 on online global news sentiment. Journal of Computational Social Science.

6. Chen, E., Lerman, K., & Ferrara, E. (2020). Tracking social media discourse about the COVID-19 pandemic: Development of a public coronavirus Twitter data set. JMIR Public Health and Surveillance, 6(2), e19273.

7. Ciampaglia, G. L. (2018). Fighting fake news: A role for computational social science in the fight against digital misinformation. Journal of Computational Social Science, 1(1), 147–153.

8. Cinelli, M., Cresci, S., Galeazzi, A., Quattrociocchi, W., & Tesconi, M. (2020). The limited reach of fake news on Twitter during 2019 European elections. PLoS One, 15(6), e0234689.

9. Cresci, S. (2020). A decade of social bot detection. Communications of the ACM, 63(10), 61–72.

10. Da San Martino, G., Cresci, S., Barrón-Cedeño, A., Yu, S., Di Pietro, R., & Nakov, P. (2020). A survey on computational propaganda detection. In The 29th International Joint Conference on Artificial Intelligence (IJCAI'20), pp. 4826–4832.

11. Farrell, T., Gorrell, G., & Bontcheva, K. (2020). Vindication, virtue and vitriol: A study of online engagement and abuse toward British MPs during the COVID-19 pandemic. Journal of Computational Social Science.

12. Ferrara, E., Varol, O., Davis, C., Menczer, F., & Flammini, A. (2016). The rise of social bots. Communications of the ACM, 59(7), 96–104.

13. Havey, N. (2020). Partisan public health: How does political ideology influence support for COVID-19 related misinformation? Journal of Computational Social Science.

14. Lazer, D. M., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., et al. (2018). The science of fake news. Science, 359(6380), 1094–1096.

15. Luceri, L., Deb, A., Giordano, S., & Ferrara, E. (2019). Evolution of bot and human behavior during elections. First Monday, 24(9).

16. Qazi, U., Imran, M., & Ofli, F. (2020). GeoCoV19: A dataset of hundreds of millions of multilingual COVID-19 tweets with location information. ACM SIGSPATIAL Special, 12(1), 6–15.

17. Shahsavari, S., Holur, P., Wang, T., Tangherlini, T. R., & Roychowdhury, V. (2020). Conspiracy in the time of corona: Automatic detection of emerging COVID-19 conspiracy theories in social media and the news. Journal of Computational Social Science.

18. Singh, L., Bode, L., Budak, C., Kawintiranon, K., Padden, C., & Vraga, E. (2020). Understanding high and low quality URL sharing on COVID-19 Twitter streams. Journal of Computational Social Science.

19. Starbird, K. (2019). Disinformation's spread: Bots, trolls and all of us. Nature, 571(7766), 449–450.

20. Starbird, K., Dailey, D., Mohamed, O., Lee, G., & Spiro, E. S. (2018). Engage early, correct more: How journalists participate in false rumors online during crisis events. In Proceedings of the 2018 ACM CHI Conference on Human Factors in Computing Systems (CHI'18), pp. 1–12. ACM.

21. Swire-Thompson, B., & Lazer, D. (2020). Public health and online misinformation: Challenges and recommendations. Annual Review of Public Health, 41, 433–451.

22. Uyheng, J., & Carley, K. M. (2020). Bots and online hate during the COVID-19 pandemic: Case studies in the United States and the Philippines. Journal of Computational Social Science.

23. Wang, A. H. E., Lee, M. C., Wu, M. H., & Shen, P. (2020). Influencing overseas Chinese by tweets: Text-images as the key tactic of Chinese propaganda. Journal of Computational Social Science.

24. Zarocostas, J. (2020). How to fight an infodemic. The Lancet, 395(10225), 676.


Author information

Authors and Affiliations

University of Southern California, Los Angeles, CA, 90007, USA

Emilio Ferrara

Institute of Informatics and Telematics, National Research Council (IIT-CNR), 56124, Pisa, Italy

Stefano Cresci

University of Applied Sciences and Arts of Southern Switzerland (SUPSI), Manno, Switzerland

Luca Luceri


Corresponding author

Correspondence to Emilio Ferrara .


About this article

Ferrara, E., Cresci, S. & Luceri, L. Misinformation, manipulation, and abuse on social media in the era of COVID-19. J Comput Soc Sc 3, 271–277 (2020). https://doi.org/10.1007/s42001-020-00094-5


Received: 19 October 2020

Accepted: 23 October 2020

Published: 22 November 2020

Issue Date: November 2020

DOI: https://doi.org/10.1007/s42001-020-00094-5


Keywords:

  • Misinformation
  • Social bots
  • Social media

Science News

Social media harms teens' mental health, mounting evidence shows. What now?

Understanding what is going on in teens’ minds is necessary for targeted policy suggestions

Most teens use social media, often for hours on end. Some social scientists are confident that such use is harming their mental health. Now they want to pinpoint what explains the link.


By Sujata Gupta

February 20, 2024 at 7:30 am

In January, Mark Zuckerberg, CEO of Facebook’s parent company Meta, appeared at a congressional hearing to answer questions about how social media potentially harms children. Zuckerberg opened by saying: “The existing body of scientific work has not shown a causal link between using social media and young people having worse mental health.”

But many social scientists would disagree with that statement. In recent years, studies have started to show a causal link between teen social media use and reduced well-being or mood disorders, chiefly depression and anxiety.

Ironically, one of the most cited studies into this link focused on Facebook.

Researchers delved into whether the platform's introduction across college campuses in the mid 2000s increased symptoms associated with depression and anxiety. The answer was a clear yes, says MIT economist Alexey Makarin, a coauthor of the study, which appeared in the November 2022 American Economic Review. “There is still a lot to be explored,” Makarin says, but “[to say] there is no causal evidence that social media causes mental health issues, to that I definitely object.”

The concern, and the studies, come from statistics showing that social media use in teens ages 13 to 17 is now almost ubiquitous. Two-thirds of teens report using TikTok, and some 60 percent of teens report using Instagram or Snapchat, a 2022 survey found. (Only 30 percent said they used Facebook.) Another survey showed that girls, on average, allot roughly 3.4 hours per day to TikTok, Instagram and Facebook, compared with roughly 2.1 hours among boys. At the same time, more teens are showing signs of depression than ever, especially girls (SN: 6/30/23).

As more studies show a strong link between these phenomena, some researchers are starting to shift their attention to possible mechanisms. Why does social media use seem to trigger mental health problems? Why are those effects unevenly distributed among different groups, such as girls or young adults? And can the positives of social media be teased out from the negatives to provide more targeted guidance to teens, their caregivers and policymakers?

“You can’t design good public policy if you don’t know why things are happening,” says Scott Cunningham, an economist at Baylor University in Waco, Texas.

Increasing rigor

Concerns over the effects of social media use in children have been circulating for years, resulting in a massive body of scientific literature. But those mostly correlational studies could not show if teen social media use was harming mental health or if teens with mental health problems were using more social media.

Moreover, the findings from such studies were often inconclusive, or the effects on mental health so small as to be inconsequential. In one study that received considerable media attention, psychologists Amy Orben and Andrew Przybylski combined data from three surveys to see if they could find a link between technology use, including social media, and reduced well-being. The duo gauged the well-being of over 355,000 teenagers by focusing on questions around depression, suicidal thinking and self-esteem.

Digital technology use was associated with a slight decrease in adolescent well-being, Orben, now of the University of Cambridge, and Przybylski, of the University of Oxford, reported in 2019 in Nature Human Behaviour. But the duo downplayed that finding, noting that researchers have observed similar drops in adolescent well-being associated with drinking milk, going to the movies or eating potatoes.

Holes have begun to appear in that narrative thanks to newer, more rigorous studies.

In one longitudinal study, researchers — including Orben and Przybylski — used survey data on social media use and well-being from over 17,400 teens and young adults to look at how individuals’ responses to a question gauging life satisfaction changed between 2011 and 2018. And they dug into how the responses varied by gender, age and time spent on social media.

Social media use was associated with a drop in well-being among teens during certain developmental periods, chiefly puberty and young adulthood, the team reported in 2022 in Nature Communications. That translated to lower well-being scores around ages 11 to 13 for girls and ages 14 to 15 for boys. Both groups also reported a drop in well-being around age 19. Moreover, among the older teens, the team found evidence for the Goldilocks Hypothesis: the idea that both too much and too little time spent on social media can harm mental health.

“There’s hardly any effect if you look over everybody. But if you look at specific age groups, at particularly what [Orben] calls ‘windows of sensitivity’ … you see these clear effects,” says L.J. Shrum, a consumer psychologist at HEC Paris who was not involved with this research. His review of studies related to teen social media use and mental health is forthcoming in the Journal of the Association for Consumer Research.

Cause and effect

That longitudinal study hints at causation, researchers say. But one of the clearest ways to pin down cause and effect is through natural or quasi-experiments. For these in-the-wild experiments, researchers must identify situations where the rollout of a societal “treatment” is staggered across space and time. They can then compare outcomes among members of the group who received the treatment to those still in the queue — the control group.

That was the approach Makarin and his team used in their study of Facebook. The researchers homed in on the staggered rollout of Facebook across 775 college campuses from 2004 to 2006. They combined that rollout data with student responses to the National College Health Assessment, a widely used survey of college students’ mental and physical health.

The team then sought to understand if those survey questions captured diagnosable mental health problems. Specifically, they had roughly 500 undergraduate students respond to questions both in the National College Health Assessment and in validated screening tools for depression and anxiety. They found that mental health scores on the assessment predicted scores on the screenings. That suggested that a drop in well-being on the college survey was a good proxy for a corresponding increase in diagnosable mental health disorders. 

Compared with campuses that had not yet gained access to Facebook, college campuses with Facebook experienced a 2 percentage point increase in the number of students who met the diagnostic criteria for anxiety or depression, the team found.
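
For readers who want the mechanics, below is a minimal sketch of the two-way fixed-effects regression that staggered-rollout (difference-in-differences) designs like this typically estimate. The data are simulated, and the variable names and effect size are assumptions, not figures from the Facebook study.

```python
# Sketch of a difference-in-differences estimate for a staggered rollout.
# Simulated data only; not the actual study data or specification.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for campus in range(40):
    rollout = rng.integers(2, 6)  # semester in which the platform arrives
    for semester in range(8):
        treated = int(semester >= rollout)
        # Outcome: a mental-health index; treatment shifts it down by 2 points.
        y = 50 - 2.0 * treated + rng.normal(0, 3)
        rows.append({"campus": campus, "semester": semester,
                     "treated": treated, "mh_index": y})
df = pd.DataFrame(rows)

# Two-way fixed effects: campus and semester dummies absorb level differences;
# the coefficient on `treated` is the difference-in-differences estimate.
m = smf.ols("mh_index ~ treated + C(campus) + C(semester)", data=df).fit()
print(m.params["treated"])  # recovers roughly -2.0 on this simulated data
```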

When it comes to showing a causal link between social media use in teens and worse mental health, “that study really is the crown jewel right now,” says Cunningham, who was not involved in that research.

A need for nuance

The social media landscape today is vastly different from the landscape of 20 years ago. Facebook is now optimized for maximum addiction, Shrum says, and other newer platforms, such as Snapchat, Instagram and TikTok, have since copied and built on those features. Paired with the ubiquity of social media in general, the negative effects on mental health may well be larger now.

Moreover, social media research tends to focus on young adults — an easier cohort to study than minors. That needs to change, Cunningham says. “Most of us are worried about our high school kids and younger.” 

And so, researchers must pivot accordingly. Crucially, simple comparisons of social media users and nonusers no longer make sense. As Orben and Przybylski’s 2022 work suggested, a teen not on social media might well feel worse than one who briefly logs on. 

Researchers must also dig into why, and under what circumstances, social media use can harm mental health, Cunningham says. Explanations for this link abound. For instance, social media is thought to crowd out other activities or increase people’s likelihood of comparing themselves unfavorably with others. But big data studies, with their reliance on existing surveys and statistical analyses, cannot address those deeper questions. “These kinds of papers, there’s nothing you can really ask … to find these plausible mechanisms,” Cunningham says.

One ongoing effort to understand social media use from this more nuanced vantage point is the SMART Schools project out of the University of Birmingham in England. Pedagogical expert Victoria Goodyear and her team are comparing mental and physical health outcomes among children who attend schools that have restricted cell phone use to those attending schools without such a policy. The researchers described the protocol of that study of 30 schools and over 1,000 students in the July BMJ Open.

Goodyear and colleagues are also combining that natural experiment with qualitative research. They met with 36 five-person focus groups, each consisting of all students, all parents or all educators, at six of those schools. The team hopes to learn how students use their phones during the day, how usage practices make students feel, and what the various parties think of restrictions on cell phone use during the school day.

Talking to teens and those in their orbit is the best way to get at the mechanisms by which social media influences well-being — for better or worse, Goodyear says. Moving beyond big data to this more personal approach, however, takes considerable time and effort. “Social media has increased in pace and momentum very, very quickly,” she says. “And research takes a long time to catch up with that process.”

Until that catch-up occurs, though, researchers cannot dole out much advice. “What guidance could we provide to young people, parents and schools to help maintain the positives of social media use?” Goodyear asks. “There’s not concrete evidence yet.”


Unraveling Social Media Misuse in the Corporate World

By Kiara Miller

Nobody can deny that as social media platforms gain popularity, the way everyone thinks about sharing information is transforming. However, for every positive side of a coin there is a negative side as well: just as social media has brought the world closer together and made communication easier, the risks of misusing it are also increasing in every industry, including the corporate world.

Exploitation of social media in the workplace

If companies use social media effectively, it can enhance their reputation and their relationships with internal and external stakeholders. If not well managed, however, it can bring multiple risks, from privacy invasion to the divulgence of sensitive information.

Below are some of the ethical issues that social media can cause in a workplace.


Multiple outcomes of the overexploitation of social media

1. Decreased productivity

Social media has become a powerful distraction for everyone, the workforce included. One major ethical issue affecting modern workplaces is the loss of employee productivity as employees overuse social media platforms like Instagram, Facebook, and Snapchat. To substantiate, one study reveals that employees spend two hours per day on social media on average. Among remote workers, social media distraction is an even more acute problem. Increased social media engagement thus fosters unethical behavior in the workplace, which further translates into productivity losses.

2. Disclosure of sensitive information

At times, intentionally or unintentionally, employees end up disclosing sensitive or private company information on social media, for instance by posting stories or updates about the company's internal policies. A major ethical issue linked to social media is thus the revelation of details or information that is not meant to be public.

3. Social media outrage against company reputation

In cases of workplace conflict or exploitation by colleagues, employees often take to social media to express their outrage. Instead of reporting the issue via established organizational channels, they escalate it directly in the public domain by posting about it. Taking internal company matters to social media without reporting them to superiors is unethical, and it damages the company's reputation.

4. Increased risks of cyberattacks

On average, 1 in 7 social media accounts is subjected to hacking or malware. Hackers can use private information shared on social media to harm an organization, or for other purposes. They can also hack official company accounts and post misleading or inappropriate information that can affect the company's public perception as well as its stock price. For instance, in July 2020, Apple's Twitter account was hacked as part of a Bitcoin scam, a cyberattack that had huge implications for Apple.

If a company faces any type of data breach, it is crucial for it to publicly publish a data breach notification confirming the scope of the breach and the information leaked, within the disclosure period mandated by the relevant country or state.

Moving further, below are various methods by which employers can make sure that the use of social media does not become an ethical issue in the workplace.

1. Identifying social media risks

Risks of information breaches can only be dealt with after an effective assessment of all potential risks. Hence, it is crucial for employers to identify all the risk factors that could cost the company money or reputation, so that effective prevention strategies can be created and implemented. Risks may include employees connecting with the company's competitors, cyberbullying of co-workers (itself a form of workplace harassment), or the loss of productivity and the communication issues in the workplace caused by excessive scrolling on social media platforms.

2. Surveilling employees’ accounts

Keeping a check on employees' social media activities can help ensure that no employee shares negative remarks or information that could cause loss to the organization. Employers should monitor employees' activities only after informing employees and obtaining their consent; otherwise, they can be accused of invading employees' privacy.

3. Providing instructions for usage

Employers should clearly state their expectations regarding social media use during working hours as part of the onboarding process. This helps ensure that employees do not waste their time on social media or shopping sites.

4. Creating actionable policies for breach

Strict policies on the consequences of breaching sensitive company information should be implemented and clearly communicated to employees, so that they understand the seriousness of divulging any information that could affect the company negatively.

There have been many cases in the past reflecting how big companies suffered when employees indulged in unethical social media practices. Hence, for a better understanding, below is the case study of a company that experienced serious consequences due to the disclosure of personal data by an employee.

Real-life case study

Bupa employee stole half a million users' data

This is the case of a UK-based health insurance company named Bupa. In March 2017, an employee of the company extracted the data of almost 547,000 international customers for personal monetary benefit.

The employee extracted the information from the company's servers, compiling sensitive details including names, dates of birth, email addresses, and credit card details of consumers across 122 countries. After extracting the sensitive information, the employee put the whole dataset up for sale on AlphaBay, then the biggest marketplace on the dark web.

The employee was immediately suspended, and UK police arrested them for violation of the Data Protection Act 1998. The company also released a statement taking full responsibility for the act and pledging to increase security measures to prevent any such occurrences.

However, along with the employee, the company also faced serious consequences for this unethical act. After its investigation, the UK's Information Commissioner's Office (ICO) stated that the company had failed to ensure adequate security measures to protect consumers' data. Hence, the company was fined £175,000.

Key takeaway: In this case study, the company was careless in not regularly monitoring its systems and was unable to detect unusual activity on its servers, while an employee engaged in unethical social media practices that were hidden from the company. As a result, both parties faced serious legal consequences.

What are the legal repercussions of social media abuse in business?

Legal repercussions of the improper use of social media in business might include copyright infringement, defamation, privacy violations, and regulatory non-compliance. These problems can result in legal action or fines for the company in question. Businesses must maintain compliance with pertinent laws and regulations and be aware of the legal risks associated with social media use.

What are some of the tools that are available to businesses to prevent social media abuse?

To stop social media abuse, businesses can utilize social media monitoring solutions like Talkwalker and Awario. Talkwalker offers sentiment analysis and real-time monitoring, enabling organizations to quickly identify potential abuse or unfavorable material. Awario monitors brand mentions on social media platforms, blogs, and discussion boards, helping companies identify cases of misappropriation and take prompt corrective action. With such tools, companies can protect their online reputation and deal with problems as they arise.



We found over 300 million young people had experienced online sexual abuse and exploitation over the course of our meta-study


Deborah Fry, Professor of International Child Protection Research and Director of Data at the Childlight Global Child Safety Institute, The University of Edinburgh

Disclosure statement

Deborah Fry receives funding from the Human Dignity Foundation.

The University of Edinburgh provides funding as a member of The Conversation UK.


It takes a lot to shock Kelvin Lay. My friend and colleague was responsible for setting up Africa’s first dedicated child exploitation and human trafficking units, and for many years he was a senior investigating officer for the Child Exploitation Online Protection Centre at the UK’s National Crime Agency, specialising in extra territorial prosecutions on child exploitation across the globe.

But what happened when he recently volunteered for a demonstration of cutting-edge identification software left him speechless. Within seconds of being fed with an image of how Lay looks today, the AI app sourced a dizzying array of online photos of him that he had never seen before – including in the background of someone else’s photographs from a British Lions rugby match in Auckland eight years earlier.

“It was mind-blowing,” Lay told me. “And then the demonstrator scrolled down to two more pictures, taken on two separate beaches – one in Turkey and another in Spain – probably harvested from social media. They were of another family but with me, my wife and two kids in the background. The kids would have been six or seven; they’re now 20 and 22.”


The AI in question was one of an arsenal of new tools deployed in Quito, Ecuador, in March when Lay worked with a ten-country taskforce to rapidly identify and locate perpetrators and victims of online child sexual exploitation and abuse – a hidden pandemic with over 300 million victims around the world every year.

That is where the work of the Childlight Global Child Safety Institute , based at the University of Edinburgh, comes in. Launched a little over a year ago in March 2023 with the financial support of the Human Dignity Foundation , Childlight’s vision is to use the illuminating power of data and insight to better understand the nature and extent of child sexual exploitation and abuse.


I am a professor of international child protection research and Childlight’s director of data, and for nearly 20 years I have been researching sexual abuse and child maltreatment, including with the New York City Alliance Against Sexual Assault and Unicef.

The fight to keep our young people safe and secure from harm has been hampered by a data disconnect – data differs in quality and consistency around the world, definitions differ and, frankly, transparency isn’t what it should be. Our aim is to work in partnership with many others to help join up the system, close the data gaps and shine a light on some of the world’s darkest crimes.

302 million victims in one year

Our new report, Into The Light, has produced the world's first estimates of the scale of the problem in terms of victims and perpetrators.

Our estimates are based on a meta-analysis of 125 representative studies published between 2011 and 2023, and highlight that one in eight children – 302 million young people – have experienced online sexual abuse and exploitation in a one year period preceding the national surveys.

Additionally, we analysed tens of millions of reports to the five main global watchdog and policing organisations – the Internet Watch Foundation (IWF), the National Center for Missing and Exploited Children (NCMEC), the Canadian Centre for Child Protection (C3P), the International Association of Internet Hotlines (INHOPE), and Interpol's International Child Sexual Exploitation database (ICSE). This helped us better understand the nature of child sexual abuse images and videos online.

While huge data gaps mean this is only a starting point, and far from a definitive figure, the numbers we have uncovered are shocking.

We found that nearly 13% of the world’s children have been victims of non-consensual taking, sharing and exposure to sexual images and videos.

In addition, just over 12% of children globally are estimated to have been subject to online solicitation, such as unwanted sexual talk which can include non-consensual sexting, unwanted sexual questions and unwanted sexual act requests by adults or other youths.

Cases have soared since COVID changed the online habits of the world. For example, the Internet Watch Foundation (IWF) reported in 2023 that child sexual abuse material featuring primary school children aged seven to ten being coached to perform sexual acts online had risen by more than 1,000% since the UK went into lockdown.

The charity pointed out that during the pandemic, thousands of children became more reliant on the internet to learn, socialise, and play and that this was something which internet predators exploited to coerce more children into sexual activities – sometimes even including friends or siblings over webcams and smartphones.

There has also been a sharp rise in reports of “financial sextortion”, with children blackmailed over sexual imagery that abusers have tricked them into providing – often with tragic results, including a spate of suicides across the world.

This abuse can also utilise AI deepfake technology – notoriously used recently to generate false sexual images of the singer Taylor Swift.

Our estimates indicate that just over 3% of children globally experienced sexual extortion in the past year.

A child sexual exploitation pandemic

This child sexual exploitation and abuse pandemic affects pupils in every classroom, in every school, in every country, and it needs to be tackled urgently as a public health emergency. As with all pandemics, such as COVID and AIDS, the world must come together and provide an immediate and comprehensive public health response.

Our report also highlights a survey which examines a representative sample of 4,918 men aged over 18 living in Australia, the UK and the US. It has produced some startling findings. In terms of perpetrators:

One in nine men in the US (equating to almost 14 million men) admitted online sexual offending against children at some point in their lives – enough offenders to form a line stretching from California on the west coast to North Carolina in the east or to fill a Super Bowl stadium more than 200 times over.

The surveys found that 7% of men in the UK admitted the same – equating to 1.8 million offenders, or enough to fill the O2 arena 90 times over – as did 7.5% of men in Australia (nearly 700,000).

Meanwhile, millions across all three countries said they would also seek to commit contact sexual offences against children if they knew no one would find out, a finding that should be considered in tandem with other research indicating that those who watch child sexual abuse material are at high risk of going on to contact or abuse a child physically.

The internet has enabled communities of sex offenders to easily and rapidly share child abuse and exploitation images on a staggering scale, and this in turn, increases demand for such content among new users and increases rates of abuse of children, shattering countless lives.

In fact, more than 36 million reports of online sexual images of children who fell victim to all forms of sexual exploitation and abuse were filed in 2023 to watchdogs by companies such as X, Facebook, Instagram, Google, WhatsApp and members of the public. That equates to one report every single second.

Quito operation

Like everywhere in the world, Ecuador is in the grip of this modern, transnational problem: the rapid spread of child sexual exploitation and abuse online. It can see an abuser in, say, London pay another abuser somewhere like the Philippines to produce images of atrocities against a child that are in turn hosted by a data centre in the Netherlands and dispersed instantly across multiple other countries.

When Lay – who is also Childlight's director of engagement and risk – was in Quito in 2024, martial law meant that a large hotel, normally busy with tourists flocking to the delights of the Galápagos Islands, was eerily quiet, save for a group of 40 law enforcement analysts, researchers and prosecutors who had more than 15,000 child sexual abuse images and videos to analyse.

The cache of files included material logged with authorities annually, content from seized devices, and records from Interpol's International Child Sexual Exploitation (ICSE) database. The files were potentially linked to perpetrators in ten Latin American and Caribbean countries: Argentina, Chile, Colombia, Costa Rica, Ecuador, El Salvador, Honduras, Guatemala, Peru and the Dominican Republic.


Child exploitation exists in every part of the world but, based on intelligence from multiple partners in the field, we estimate that a majority of Interpol member countries lack the training and resources to properly respond to evidence of child sexual abuse material shared with them by organisations like the National Center for Missing and Exploited Children (NCMEC). NCMEC is a body created by US Congress to log and process evidence of child sexual abuse material uploaded around the world and spotted, largely, by tech giants. However, we believe this lack of capacity means that millions of reports alerting law enforcement to abuse material are not even opened.

The Ecuador operation, in conjunction with the International Center for Missing and Exploited Children (ICMEC) and US Homeland Security, aimed to help change that by supporting authorities to develop further skills and confidence to identify and locate sex offenders and rescue child victims.

Central to the Quito operation was Interpol's ICSE database, which contains around five million images and videos that specialised investigators from more than 68 countries use to share data and co-operate on cases.

Using image and video comparison software – essentially photo ID work that instantly recognises the digital fingerprint of images – investigators can quickly compare images they have uncovered with images contained in the database. The software can instantly make connections between victims, abusers and places. It also avoids duplication of effort and saves precious time by letting investigators know whether images have already been discovered or identified in another country. So far, it has helped identify more than 37,900 victims worldwide.
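
In general terms, this kind of "digital fingerprint" matching is what perceptual hashing does: visually similar images map to similar hashes, so near-duplicates can be matched even after resizing or re-encoding. The sketch below uses the open-source Pillow and ImageHash libraries purely to illustrate the technique; it is not Interpol's software, and the match threshold is an assumption.

```python
# Illustration of perceptual-hash matching, the general technique behind
# image "digital fingerprints". NOT Interpol's system; synthetic images
# stand in for a known-material record and a newly uncovered photo.
from PIL import Image
import imagehash

img_known = Image.new("RGB", (256, 256), (120, 80, 40))
img_found = img_known.resize((128, 128))  # resized/re-encoded copy

hash_known = imagehash.phash(img_known)
hash_found = imagehash.phash(img_found)

# Subtracting two hashes gives their Hamming distance: small distances mean
# the images are near-duplicates despite resizing or minor edits.
distance = hash_known - hash_found
if distance <= 8:  # threshold is an assumption; tuned in practice
    print("Likely match: flag for investigator review")
```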

Lay has significant field experience using these resources to help Childlight turn data into action – recently providing technical advice to law enforcement in Kenya, where successes included using data to arrest paedophile Thomas Scheller. In 2023, Scheller, 74, was given an 81-year jail sentence. The German national was found guilty by a Nairobi court of three counts of trafficking, indecent acts with minors and possession of child sexual abuse material.


But despite these data strides, there are concerns about the inability of law enforcement to keep pace with a problem too large for officers to arrest their way out of. It is one enabled by emerging technological advances, including AI-generated abuse images, which threaten to overwhelm authorities with their scale.

In Quito, over a warming rainy season meal of encocado de pescado, a tasty regional dish of fish in a coconut sauce served with white rice, Lay explained:

This certainly isn’t to single out Latin America but it’s become clear that there’s an imbalance in the way countries around the world deal with data. There are some that deal with pretty much every referral that comes in, and if it’s not dealt with and something happens, people can lose their jobs. On the opposite side of the coin, some countries are receiving thousands of email referrals a day that don’t even get opened.

Now, we are seeing evidence that advances in technology can also be utilised to fight online sexual predators. But the use of such technology raises ethical questions.

Contentious AI tool draws on 40 billion online images

The powerful, but contentious AI tool, that left Lay speechless was a case in point: one of multiple AI facial recognition tools that have come onto the market, and with multiple applications. The technology can help identify people using billions of images scraped from the internet, including social media.

AI facial recognition software like this has reportedly been used by Ukraine to debunk false social media posts, enhance safety at checkpoints and identify Russian infiltrators, as well as dead soldiers. It was also reportedly used to help identify rioters who stormed the US Capitol in 2021.

The New York Times magazine reported on another remarkable case. In May 2019, an internet provider alerted authorities after a user received images depicting the sexual abuse of a young girl.

One grainy image held a vital clue: an adult face visible in the background that the facial recognition company was able to match to an image on an Instagram account featuring the same man, again in the background. This was in spite of the fact that the image of his face would have appeared about half the size of a human fingernail when viewing it. It helped investigators pinpoint his identity and the Las Vegas location where he was found to be creating the child sexual abuse material to sell on the dark web. That led to the rescue of a seven-year-old girl and to him being sentenced to 35 years in jail.

Meanwhile, for its part, the UK government recently argued that facial recognition software can allow police to “stay one step ahead of criminals” and make Britain's streets safer, although, at the moment, the use of such software is not allowed in the UK.

When Lay volunteered to allow his own features to be analysed, he was stunned that within seconds the app produced a wealth of images, including one that captured him in the background of a photo taken at the rugby match years before. Think about how investigators can equally match a distinctive tattoo or unusual wallpaper where abuse has occurred and the potential of this as a crime-fighting tool is easy to appreciate.

Of course, it is also easy to appreciate the concerns some people have on civil liberties grounds which have limited the use of such technology across Europe. In the wrong hands, what might such technology mean for a political dissident in hiding for instance? One Chinese facial recognition startup has come under scrutiny by the US government for its alleged role in the surveillance of the Uyghur minority group, for example.

Role of big tech

Similar points are sometimes made by big tech proponents of end-to-end encryption on popular apps: apps which are also used to share child abuse and exploitation files on an industrial scale – effectively turning the lights off on some of the world’s darkest crimes.

Why – ask the privacy purists – should anyone else have the right to know about their private content?

And so, it may seem to some that we have reached a Kafkaesque point where the right to privacy of abusers risks trumping the privacy and safety rights of the children they are abusing.

Clearly then, if encryption of popular file sharing apps is to be the norm, a balance must be struck that meets the desire for privacy for all users, with the proactive detection of child sexual abuse material online.

Meta has shown recently that there is potential for a compromise that could improve child safety, at least to some extent. Instagram, described by the NSPCC recently as the platform most used for grooming , has developed a new tool aimed at blocking the sending of sexual images to children – albeit, notably, authorities will not be alerted about those sending the material.

This would involve so-called client-side scanning, which Meta believes undermines the chief privacy-protecting feature of encryption – that only the sender and recipient know the contents of messages. Meta has said it does report all apparent instances of child exploitation appearing on its site from anywhere in the world to NCMEC.
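For readers curious about the mechanics, client-side scanning proposals generally rely on matching images against a database of digital fingerprints (“hashes”) of known abuse material, rather than on anyone reading messages. The sketch below illustrates the general idea using the open-source Python imagehash library; it is a simplified illustration only, not Meta’s or any vendor’s actual system, and the hash value and threshold shown are hypothetical.

```python
# A minimal sketch of perceptual-hash matching, the general technique behind
# client-side scanning proposals for detecting KNOWN abuse imagery without
# reading messages. This is NOT Meta's implementation: real systems (e.g.
# Microsoft's PhotoDNA) use more robust proprietary hashes and vetted hash
# databases supplied by bodies such as NCMEC.
from PIL import Image
import imagehash

# Hypothetical database of perceptual hashes of known illegal images
# (illustrative value only -- real hash lists are tightly controlled).
KNOWN_HASHES = [imagehash.hex_to_hash("d1c4b2a0f0e1d2c3")]

# Hamming-distance threshold: small distances tolerate re-compression or
# resizing of a known image, while new images remain very unlikely to match.
MAX_DISTANCE = 5

def matches_known_material(path: str) -> bool:
    """Return True if the image's perceptual hash is near any known hash."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= MAX_DISTANCE for known in KNOWN_HASHES)

if __name__ == "__main__":
    print(matches_known_material("outgoing_photo.jpg"))
```

The key design point is that matching happens against fingerprints of previously verified material: the system never needs to “see” or classify new content, which is why advocates argue it can coexist with encryption.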

One compromise with the use of AI to detect offenders, suggests Lay, is a simple one: to ensure it can only be used under strict licence by child protection professionals, with appropriate controls in place. It is not “a silver bullet”, he explained to me. AI-based ID will always need to be followed up by old-fashioned police work, but anything that can “achieve in 15 seconds what we used to spend hours and hours trying to get” is worthy of careful consideration, he believes.

The Ecuador operation, combining AI with traditional police work, had an immediate impact in March. ICMEC reports that it led to a total of 115 victims (mainly girls, mostly aged six to 12 and 13 to 15) and 37 offenders (mainly adult men) being positively identified worldwide. Within three weeks, ICMEC said, 18 international interventions had taken place, with 45 victims rescued and seven abusers arrested.

An infographic showing the results of an investigation into online sexual abuse

One way or another, a compromise needs to be struck to deal with this pandemic.

“Child sexual abuse is a global public health crisis that is steadily worsening thanks to advancing technologies which enable instantaneous production and limitless distribution of child exploitation material, as well as unregulated access to children online.”

These are the words of Tasmanian Grace Tame: a remarkable survivor of childhood abuse and executive director of the Grace Tame Foundation, which works to combat the sexual abuse of children.

“Like countless child sexual abuse victim-survivors, my life was completely upended by the lasting impacts of trauma, shame, public humiliation, ignorance and stigma. I moved overseas at 18 because I became a pariah in my hometown, didn’t pursue tertiary education as hoped, misused alcohol and drugs, self-harmed, and worked several minimum wage jobs”. Tame believes that “a centralised global research database is essential to safeguarding children”.

If the internet and technology brought us to where we are today, the AI used in Quito to save 45 children is a powerful demonstration of technology’s capacity for good. Moreover, the work of the ten-country taskforce is testament to the potential of global responses to a global problem on an internet that knows no national boundaries.

Greater collaboration, education, and in some cases regulation and legislation can all help, and they are needed without delay because, as Childlight’s mantra goes, children can’t wait.



Americans’ complicated feelings about social media in an era of privacy concerns


Amid public concerns over Cambridge Analytica’s use of Facebook data and a subsequent movement to encourage users to abandon Facebook, there is a renewed focus on how social media companies collect personal information and make it available to marketers.

Pew Research Center has studied the spread and impact of social media since 2005, when just 5% of American adults used the platforms. The trends tracked by our data tell a complex story that is full of conflicting pressures. On one hand, the rapid growth of the platforms is testimony to their appeal to online Americans. On the other, this widespread use has been accompanied by rising user concerns about privacy and social media firms’ capacity to protect their data.

All this adds up to a mixed picture about how Americans feel about social media. Here are some of the dynamics.

People like and use social media for several reasons


About seven-in-ten American adults (69%) now report they use some kind of social media platform (not including YouTube) – a nearly fourteenfold increase since Pew Research Center first started asking about the phenomenon. The growth has come across all demographic groups and includes 37% of those ages 65 and older.

The Center’s polls have found over the years that people use social media for important social interactions like staying in touch with friends and family and reconnecting with old acquaintances. Teenagers are especially likely to report that social media are important to their friendships and, at times, their romantic relationships .

Beyond that, we have documented how social media play a role in the way people participate in civic and political activities, launch and sustain protests , get and share health information , gather scientific information , engage in family matters , perform job-related activities and get news . Indeed, social media is now just as common a pathway to news for people as going directly to a news organization website or app.

Our research has not established a causal relationship between people’s use of social media and their well-being. But in a 2011 report, we noted modest associations between people’s social media use and higher levels of trust, larger numbers of close friends, greater amounts of social support and higher levels of civic participation.

People worry about privacy and the use of their personal information

While there is evidence that social media works in some important ways for people, Pew Research Center studies have shown that people are anxious about all the personal information that is collected and shared and the security of their data.

Overall, a 2014 survey found that 91% of Americans “agree” or “strongly agree” that people have lost control over how personal information is collected and used by all kinds of entities. Some 80% of social media users said they were concerned about advertisers and businesses accessing the data they share on social media platforms, and 64% said the government should do more to regulate advertisers.


Another survey last year found that just 9% of social media users were “very confident” that social media companies would protect their data. About half of users were not at all or not too confident their data were in safe hands.

Moreover, people struggle to understand the nature and scope of the data collected about them. Just 9% believe they have “a lot of control” over the information that is collected about them, even as the vast majority (74%) say it is very important to them to be in control of who can get information about them.

Six-in-ten Americans (61%) have said they would like to do more to protect their privacy. Additionally, two-thirds have said current laws are not good enough in protecting people’s privacy, and 64% support more regulation of advertisers.

Some privacy advocates hope that the European Union’s General Data Protection Regulation , which goes into effect on May 25, will give users – even Americans – greater protections about what data tech firms can collect, how the data can be used, and how consumers can be given more opportunities to see what is happening with their information.

People’s issues with the social media experience go beyond privacy

In addition to the concerns about privacy and social media platforms uncovered in our surveys, related research shows that just 5% of social media users trust the information that comes to them via the platforms “a lot.”


Moreover, social media users can be turned off by what happens on social media. For instance, social media sites are frequently cited as places where people are harassed . Near the end of the 2016 election campaign, 37% of social media users said they were worn out by the political content they encountered, and large shares said social media interactions with those opposed to their views were stressful and frustrating. Large shares also said that social media interactions related to politics were less respectful, less conclusive, less civil and less informative than offline interactions.

A considerable number of social media users said they simply ignored political arguments when they broke out in their feeds. Others went a step further by blocking or unfriending those who offended or bugged them.

Why do people leave or stay on social media platforms?

The paradox is that people use social media platforms even as they express great concern about the privacy implications of doing so – and the social woes they encounter. The Center’s most recent survey about social media found that 59% of users said it would  not be difficult to give up these sites, yet the share saying these sites would be hard to give up grew 12 percentage points from early 2014.

Some of the answers about why people stay on social media could tie to our findings about how people adjust their behavior on the sites and online, depending on personal and political circumstances. For instance, in a 2012 report we found that 61% of Facebook users said they had taken a break from using the platform. Among the reasons people cited were that they were too busy to use the platform, they lost interest, they thought it was a waste of time and that it was filled with too much drama, gossip or conflict.

In other words, participation on the sites for many people is not an all-or-nothing proposition.

People pursue strategies to try to avoid problems on social media and the internet overall. Fully 86% of internet users said in 2012 they had taken steps to try to be anonymous online. “Hiding from advertisers” was relatively high on the list of those they wanted to avoid.

Many social media users fine-tune their behavior to try to make things less challenging or unsettling on the sites, including changing their privacy settings and restricting access to their profiles. Still, 48% of social media users reported in a 2012 survey they have difficulty managing their privacy controls.

After National Security Agency contractor Edward Snowden disclosed details about government surveillance programs starting in 2013, 30% of adults said they took steps to hide or shield their information and 22% reported they had changed their online behavior in order to minimize detection.

One other argument that some experts make in Pew Research Center canvassings about the future is that people often find it hard to disconnect because so much of modern life takes place on social media. These experts believe that unplugging is hard because social media and other technology affordances make life convenient and because the platforms offer a very efficient, compelling way for users to stay connected to the people and organizations that matter to them.

Note: See topline results for overall social media user data here (PDF).


Lee Rainie is director of internet and technology research at Pew Research Center .


10 Media Cases That Show Online Harassment Is Not An Isolated Issue


Social media is a double-edged sword. If it gives you a space to express yourself, it also creates space for people to respond violently to your thoughts. If it gives you anonymity, it also gives abusers anonymity. Women in India have seen both edges of this sword. Online harassment against women has become as common as street harassment in the offline world. Online harassment also includes cyberstalking, which undermines women’s online and offline security. Misogynists and right-wing nationalists often respond to online content from women with contemptuous threats and sexist verbal abuse. In response, dismissive authorities feign concern by advising women to refrain from using their real names or posting pictures of themselves. Thus, the task of internet safety often wrongfully falls on victims rather than abusers.

The rise of the Bharatiya Janata Party, which came to power in the 2014 general election and espouses Hindu nationalism, has been accompanied by an increase in online abuse against a range of targets, from “liberal and secular” journalists to activists and women from historically marginalized caste groups.  

Since 2012, news reports have documented at least ten cases of high profile Indian women harassed for expressing their views on Twitter or Facebook, some on multiple occasions.

  • In 2012, multiple Twitter users threatened Indian writer, poet and activist Meena Kandasamy after she discussed a beef-eating festival in the southern city of Hyderabad on her personal Twitter account (the Hindu community considers cows sacred). Kandasamy was threatened with acid attacks and televised gang rape. Kandasamy is Dalit, a lower-status group according to the Hindu caste system, and the festival was organized by a marginalized caste group.
  • In 2013, Indian journalist Sagarika Ghose was threatened with rape by Twitter users who discovered and published her daughter’s name and school. Ghose said the tweets came from right-wing nationalists targeting “liberal and secular women.” Ghose subsequently stopped sharing her personal views on Twitter.
  • Kavita Krishnan , a prominent Delhi-based women’s rights activist, was harassed during a 2013 online chat about violence against women on news website Rediff by a person using the handle @RAPIST, until she exited the discussion.
  • Separately, in 2015, Krishnan and Indian actress Shruti Seth both criticised the prime minister’s social media initiative #SelfieWithDaughter , in which he called on fathers to share photos of themselves with their daughters to promote education for girls. Both were sent abusive language and violent threats on Twitter.  
  • Rega Jha, BuzzFeed’s India editor, was subject to rape threats after she praised Pakistani players on Twitter during a 2015 India-Pakistan cricket match. Many Indian men participated in the abusive comments, including writers Chetan Bhagat and Suhel Seth. India and Pakistan have had a tense relationship since the partition of India in 1947.
  • Journalist Barkha Dutt has been called India’s most trolled woman. Abuse of Dutt, who is routinely harassed for her online comments, escalated on Twitter and Facebook in December 2015 after she described being sexually abused as a child in her book This Unquiet Land . Among other abusive terms, critics called her “antinational,” a right-wing slur.
  • In 2015, Media One Group journalist V.P. Rajeena, from the southern state of Kerala, published a personal account on Facebook of child sexual abuse at a Sunni religious school in the southern city of Kozhikode. Over 1,700 Facebook users shared her account, but it also attracted abuse from members of the Muslim community, many of whom reported her Facebook account for violating community guidelines, with the result that it was temporarily blocked.
  • In a series of incidents in 2015, Facebook users attacked Indian social activist Preetha G. Nair, first for criticising G. Sudhakaran, a leader of the Communist Party of India, and then India’s late President A.P.J. Abdul Kalam. Trolls attempted to hack her account, created a fake Facebook profile depicting her as a sex worker, and directed sexualized abuse at her children. Facebook temporarily suspended her profile after one of her abusers reported her for violating its real-name guidelines, since Preetha had withheld her last name, which indicates her caste.
  • Singer Sona Mohapatra found herself being attacked by online trolls after she criticised Bollywood actor Salman Khan for using the analogy of rape for his gruelling shooting schedule. “ Women thrashed, people run over, wild life massacred and yet #hero of the nation. ‘Unfair’. India full of such supporters ,” Sona tweeted.

US-based Indian woman Taruna Aswani took on her cyber blackmailer with a public Facebook post that went viral and eventually helped in nabbing the blackmailer.

Read more on FII’s report on cyber violence against women here. Watch the highlights of the research findings in this video.

https://www.youtube.com/watch?v=gn1wtNDwhmg

Disclaimer: This list is in no way exhaustive, but only shows how online harassment against women is a global issue. 


Japleen smashes the patriarchy for a living! She is the founder-CEO of Feminism in India, an award-winning digital, bilingual, intersectional feminist media platform. She is also an Acumen Fellow, a TEDx speaker and a UN World Summit Young Innovator. Japleen likes to garden, travel, swim and cycle.



The disaster of misinformation: a review of research in social media

Sadiq Muhammed T.

Department of Management Studies (DoMS), Indian Institute of Technology Madras, Chennai, Tamil Nadu 600036 India

Saji K. Mathew

The spread of misinformation in social media has become a severe threat to public interests. For example, several incidents of public health concern arose out of social media misinformation during the COVID-19 pandemic. Against the backdrop of the emerging IS research focus on social media and the impact of misinformation during recent events such as COVID-19, the Australian bushfires, and the USA elections, we identified disaster, health, and politics as specific domains for a research review on social media misinformation. Following a systematic review process, we chose 28 articles, relevant to the three themes, for synthesis. We discuss the characteristics of misinformation in the three domains, the methodologies that have been used by researchers, and the theories used to study misinformation. We adapt an Antecedents-Misinformation-Outcomes (AMIO) framework for integrating key concepts from prior studies. Based on the AMIO framework, we further discuss the inter-relationships of concepts and strategies to control the spread of misinformation on social media. Ours is one of the early reviews focusing on social media misinformation research, particularly on three socially sensitive domains: disaster, health, and politics. This review contributes to the emerging body of knowledge in Data Science and social media and informs strategies to combat social media misinformation.

Introduction

Information disorder in social media

Rumors, misinformation, disinformation, and mal-information are common challenges confronting media of all types. The problem is, however, worse in the case of digital media, especially on social media platforms. Ease of access and use, the speed of information diffusion, and the difficulty of correcting false information make controlling undesirable information an arduous task [1]. Alongside these challenges, social media has also been highly influential in spreading timely and useful information. For example, the recent #BlackLivesMatter movement was enabled by social media, which united like-minded people across the world in solidarity when George Floyd was killed by police brutality; so were the 2011 Arab Spring in the Middle East and the 2017 #MeToo movement against sexual harassment and abuse [2, 3]. Although scholars have addressed information disorder in social media, syntheses of the insights from these studies are rare.

Information that is false or misleading and spreads unintentionally is known as misinformation [4]. Prior research on misinformation in social media has highlighted various characteristics of misinformation, and interventions thereof, in different contexts. The issue of misinformation has become dominant with the rise of social media, attracting scholarly attention particularly after the 2016 USA Presidential election, when misinformation apparently influenced the election results [5]. The word 'misinformation' was listed as one of the global risks by the World Economic Forum [6]. A related term that is often confused with misinformation is 'disinformation': information that is false or misleading and, unlike misinformation, spreads intentionally. Disinformation campaigns are often seen in a political context, where state actors create them for political gains. In India, during the initial stage of COVID-19, there was reportedly a surge in fake news linking the virus outbreak to a particular religious group. This disinformation gained media attention as it was widely shared on social media platforms. As a result of the targeting, it eventually translated into physical violence and discriminatory treatment against members of the community in some Indian states [7]. 'Rumors' and 'fake news' are further related terms: rumors are unverified information or statements circulated with uncertainty, and fake news is misinformation distributed in an official news format. Source ambiguity, personal involvement, confirmation bias, and social ties are some of the rumor-causing factors. Yet another related term, mal-information, is accurate information that is used in different contexts to spread hatred or abuse against a person or a particular group. Our review focuses on misinformation that is spread through social media platforms. The words 'rumor' and 'misinformation' are used interchangeably in this paper. Further, we identify factors that cause misinformation based on a systematic review of prior studies.

Ours is one of the early attempts to review social media research on misinformation. This review focuses on the three sensitive domains of disaster, health, and politics, setting three objectives: (a) to analyze previous studies to understand the impact of misinformation on these three domains, (b) to identify the theoretical perspectives used to examine the spread of misinformation on social media, and (c) to develop a framework to study key concepts and their inter-relationships emerging from prior studies. We chose these specific areas because the impact of misinformation, with regard to both speed of spread and scale of influence, is high and detrimental to the public and to governments. To the best of our knowledge, reviews of the literature on social media misinformation are relatively scanty. This review contributes to an emerging body of knowledge in Data Science and informs efforts to combat social media misinformation. Data Science is an interdisciplinary area that incorporates fields such as statistics, management, and sociology to study data and create knowledge from it [8]. This review will also inform future studies that aim to evaluate and compare patterns of misinformation on sensitive themes of social relevance, such as disaster, health, and politics.

The paper is structured as follows. The first section introduces misinformation in the social media context. In Sect. 2, we provide a brief overview of prior research on misinformation and social media. Section 3 describes the research methodology, which includes details of the literature search and selection process. Section 4 analyzes the spread of misinformation on social media based on the three themes of disaster, health, and politics, and presents the review findings. This includes the current state of research, theoretical foundations, determinants of misinformation on social media platforms, and strategies to control the spread of misinformation. Section 5 concludes with the implications and limitations of the paper.

Social media and spread of misinformation

Misinformation arises in uncertain contexts, when people are confronted with a scarcity of the information they need. During unforeseen circumstances, the affected individual or community experiences nervousness or anxiety, and anxiety is one of the primary reasons behind the spread of misinformation. To overcome this tension, people tend to gather information from sources such as mainstream media and official government social media handles to verify the information they have received. When they fail to receive information from official sources, they collect related information from their peer circles or other informal sources, which helps them to control social tension [9]. Furthermore, in an emergency context, misinformation helps community members to reach a common understanding of the uncertain situation.

The echo chamber of social media

Social media has increasingly grown in power and influence and has acted as a medium to accelerate sociopolitical movements. Network effects enhance participation in social media platforms, which in turn spreads information (good or bad) at a faster pace compared to traditional media. Furthermore, due to a massive surge in online content consumption, primarily through social media, both business organizations and political parties have begun to share content that is ambiguous or fake to influence online users and their decisions for financial and political gains [9, 10]. On the other hand, people often approach social media with a hedonic mindset, which reduces their tendency to verify the information they receive [9]. Repeated exposure to content that coincides with pre-existing beliefs increases its believability and shareability. This process, known as the echo-chamber effect [11], is fueled by confirmation bias: the tendency of a person to favor information that reinforces pre-existing beliefs and to neglect opposing perspectives and viewpoints.

Platforms’ structure and algorithms also play an essential role in spreading misinformation. Tiwana et al. [12] define platform architecture as ‘a conceptual blueprint that describes how the ecosystem is partitioned into a relatively stable platform and a complementary set of modules that are encouraged to vary, and the design rules binding on both’. The business models of these platforms are based on maximizing user engagement. For example, in the case of Facebook or Twitter, the user feed is based on existing beliefs or preferences: feeds provide users with content similar to what they already believe, thus contributing to the echo chamber effect.
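To make the mechanism concrete, the toy sketch below ranks candidate posts by their similarity to a user’s inferred interest vector. It is a deliberately simplified illustration of engagement-driven ranking, not any platform’s actual algorithm; the topic dimensions and scores are hypothetical.

```python
# Toy illustration of engagement-driven feed ranking and the echo-chamber
# effect. Not any platform's actual algorithm; dimensions and scores are
# hypothetical.
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two topic vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Interest vector inferred from a user's past likes and shares
# (dimensions, say: [belief A, opposing belief, sports]).
user_interest = np.array([0.9, 0.1, 0.3])

candidate_posts = {
    "post_confirming_beliefs":  np.array([0.95, 0.05, 0.10]),
    "post_challenging_beliefs": np.array([0.10, 0.90, 0.10]),
    "neutral_sports_post":      np.array([0.05, 0.05, 0.95]),
}

# Ranking by similarity maximizes expected engagement, so content that
# matches existing beliefs always surfaces first -- the echo chamber.
ranked = sorted(candidate_posts,
                key=lambda p: cosine(user_interest, candidate_posts[p]),
                reverse=True)
print(ranked)  # confirming post first, challenging post last
```

Because each click updates the interest vector in the same direction, such a loop narrows exposure over time: the belief-confirming post is always ranked above the belief-challenging one.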

Platform architecture makes the transmission and retransmission of misinformation easier [12, 13]. For instance, WhatsApp has a one-touch forward option that enables users to forward messages simultaneously to multiple users. Earlier, a WhatsApp user could forward a message to 250 groups or users at a time; as a measure for controlling the spread of misinformation, this was limited to five chats in 2019. WhatsApp claimed that globally this restriction reduced message forwarding by 25% [14]. Apart from platform politics, users also play an essential role in creating or distributing misinformation. In a disaster context, people tend to share misinformation based on their subjective feelings [15].

Misinformation has the power to influence the decisions of its audience. It can change a citizen's approach toward a topic or a subject. The anti-vaccine movement on Twitter during the 2015 outbreak of measles (a highly communicable disease) in Disneyland, California, serves as a good example: the movement created conspiracy theories and mistrust of the state, which increased the vaccine refusal rate [16]. Misinformation can even influence the election of governments by manipulating citizens’ political attitudes, as seen in the 2016 USA and 2017 French elections [17]. Of late, people rely heavily on Twitter and Facebook to collect the latest happenings from mainstream media [18].

Combating misinformation on social media has been a challenging task for governments in several countries. When social media influences elections [17] and health campaigns (like vaccination), governments and international agencies demand that social media owners take the necessary actions to combat misinformation [13, 15]. Platforms began to regulate bots that were used to spread misinformation. Facebook announced changes to its algorithms to combat misinformation, down-ranking posts flagged by its fact-checkers, which reduces the popularity of the post or page [17]. However, misinformation has become a complicated issue due to the growth of new users and the emergence of new social media platforms. Jang et al. [19] have suggested two approaches other than governmental regulation to control misinformation: literacy and corrective. The literacy approach proposes educating users to increase their cognitive ability to differentiate misinformation from information. The corrective approach provides more fact-checking facilities for users, with warnings provided against potentially fabricated content based on crowdsourcing. Both approaches have limitations: the literacy approach has attracted criticism as it transfers responsibility for the spread of misinformation to citizens, and the corrective approach will only have a limited impact as the volume of fabricated content escalates [19–21].

An overview of the literature on misinformation reveals that most investigations focus on examining methods to combat it. Social media platforms are still discovering new tools and techniques to mitigate misinformation; this calls for research to understand their strategies.

Review method

This research followed a systematic literature review process. The study employed a structured approach based on Webster’s guidelines [22] to identify relevant literature on the spread of misinformation. These guidelines helped maintain a quality standard while selecting the literature for review. The initial stage of the study involved exploring research papers from relevant databases to understand the volume and availability of research articles. We extended the literature search to interdisciplinary databases as well, gathering articles from Web of Science, the ACM digital library, the AIS electronic library, EBSCOhost Business Source Premier, ScienceDirect, Scopus, and SpringerLink. Apart from this, a manual search was performed in the Information Systems (IS) scholars' basket of journals [23] to ensure we did not miss any articles from these journals. We also gave preference to articles with a Data Science or Information Systems background. The systematic review process began with a keyword search using predefined keywords (Fig. 2). We identified related synonyms such as 'misinformation', 'rumors', 'spread', and 'social media', along with their combinations, for the search process. The keyword search covered the title, abstract, and list of keywords. The literature search was conducted in April 2020. Later, we revisited the literature in December 2021 to include the latest publications from 2020 to 2021.

Fig. 2 Systematic literature review process
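As an illustration of the search step, the short sketch below generates boolean query strings from the keywords and synonyms listed above. It is a hypothetical reconstruction for clarity, not the authors' actual search script; real database queries would use each platform's own syntax.

```python
# Illustrative sketch of building boolean query strings from the review's
# predefined keywords and their synonyms (hypothetical, for clarity only).
from itertools import product

misinformation_terms = ["misinformation", "rumors"]
context_terms = ["spread", "social media"]

# One query per combination, to be run against title, abstract, and keywords.
queries = [f'"{m}" AND "{c}"'
           for m, c in product(misinformation_terms, context_terms)]

for query in queries:
    print(query)  # e.g. "misinformation" AND "social media"
```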

It was observed that scholarly discussion about ‘misinformation and social media’ began to appear in research after 2008. Later, in 2010, the topic gained more attention when Twitter bots were used for spreading fake news about the replacement of a USA Senator [24]. Hate campaigns and fake-follower activities were growing during the same period. As evident from Fig. 1, which shows the number of articles published between 2005 and 2021 on misinformation in three databases (Scopus, Springer, and EBSCO), academic engagement with misinformation gained further impetus after the 2016 US Presidential election, when social media platforms had apparently influenced the election [20].

Fig. 1 Articles published on misinformation during 2005–2021 (databases: Scopus, Springer, and EBSCO)

As Data Science is an interdisciplinary field, the focus of our literature review goes beyond disciplinary boundaries. In particular, we focused on the three domains of disaster, health, and politics. This thematic focus has two underlying reasons: (a) the impact of misinformation through social media is sporadic and has the most damaging effects in these three domains, and (b) our selection criteria in the systematic review ultimately resulted in research papers related to these three domains. This review excluded platforms that are designed for professional and business users, such as LinkedIn and Behance. A rationale for the choice of these themes is discussed in the next section.

Inclusion–exclusion criteria

Figure 2 depicts the systematic review process followed in this study. In our preliminary search, 2148 records were retrieved from the databases. All of these articles were gathered into a spreadsheet, which was manually cross-checked against the journals linked to the articles. The inclusion criteria were: studies published during 2005–2021, studies published in English, articles from peer-reviewed journals, journal rating, and relevance to misinformation. We excluded reviews, theses, dissertations, and editorials, as well as articles on misinformation not related to social media. To get the best from these articles, we selected articles from top journals, rated above three according to the ABS rating and A*, A, or B according to the ABDC rating. This process, while ensuring the quality of the papers, effectively narrowed the purview of the study to 643 articles of acceptable quality. We did not perform backward or forward citation tracking on references. During this process, duplicate records were also identified and removed. Further screening of articles based on title, abstract, and full text (wherever necessary) brought the number down to 207 articles.

Further screening based on the three themes reduced the focus to 89 articles. We conducted a full-text analysis of these 89 articles, further excluded articles that had not considered misinformation as a central theme, and finally arrived at 28 articles for detailed review (Table 1).

Table 1 Reviewed articles

The selected studies used a variety of research methods to examine misinformation on social media. Experimentation and text mining of tweets emerged as the most frequent research methods: 11 studies used experimental methods and eight used Twitter data analyses. Apart from these, three used surveys, two each used mixed methods and case studies, and one each used opportunistic sampling and an exploratory design. The literature selected for review includes nine articles on disaster, eight on healthcare, and eleven on politics. We preferred papers based on three major social media platforms: Twitter, Facebook, and WhatsApp. These are the three platforms with the highest transmission rates and the most active users [25], and the most likely platforms for misinformation propagation.

Coding procedure

Initially, both authors manually coded the articles individually by reading the full text of each article, and then identified the three themes: disaster, health, and politics. We used an inductive coding approach to derive codes from the data. The intercoder reliability rate between the authors was 82.1%. Disagreements about which theme a few papers fell under were discussed and resolved. Later, we used NVivo, a qualitative data analysis software package, to analyze the unstructured data and to encode and categorize the themes from the articles. The codes that emerged from the articles were categorized into sub-themes and later attached to the main themes: disaster, health, and politics. NVivo produced a ranked list of codes based on frequency of occurrence (“Appendix”). An intercoder reliability check on the data was completed by an external research scholar with a different area of expertise to ensure reliability. The coder agreed on 26 of the 28 articles (92.8%), which indicated a high level of intercoder reliability [49]. The independent researcher's disagreement with the authors about the codes for two articles was discussed between the authors and the research scholar, and a consensus was reached.
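For readers who wish to replicate such a check, the sketch below computes simple percent agreement (the statistic reported above as 82.1% and 92.8%) and, as a chance-corrected complement, Cohen's kappa. The coding labels shown are hypothetical stand-ins, not the authors' actual coding sheet.

```python
# Sketch for replicating the intercoder agreement checks: percent agreement
# plus Cohen's kappa, which corrects for agreement expected by chance.
# The labels below are hypothetical stand-ins for the real coding sheet.
from sklearn.metrics import cohen_kappa_score

coder_a = ["disaster", "health", "politics", "health", "politics", "disaster"]
coder_b = ["disaster", "health", "politics", "health", "disaster", "disaster"]

# Simple percent agreement: share of items on which both coders agree.
agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
print(f"Percent agreement: {agreement:.1%}")  # 83.3% for this toy data

# Chance-corrected agreement; values near 1 indicate strong reliability.
print(f"Cohen's kappa: {cohen_kappa_score(coder_a, coder_b):.2f}")
```

Kappa is worth reporting alongside raw agreement because with only three themes, two coders would agree fairly often by chance alone.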

We initially reviewed articles separately within the categories of disaster, health, and politics. We first present emergent issues that cut across these themes.

Social media misinformation research

Disaster, health, and politics emerged as the three domains (“Appendix”) where misinformation can cause severe harm, often leading to casualties or even irreversible effects. Mitigating these effects can also demand a substantial financial or human-resource burden, considering the scale of the effects and the risk of spreading negative information to the public at large. All of these areas are sensitive in nature. Further, disaster, health, and politics have gained the attention of researchers and governments as the challenges of misinformation confronting these domains are rampant. Besides their sensitivity, misinformation in these areas has a higher potential to exacerbate existing crises in society. During the 2020 Munich Security Conference, WHO’s Director-General noted: “We are not just fighting an epidemic; we are fighting an infodemic”, referring to the faster spread of COVID-19 misinformation than the virus itself [50].

More than 6000 people were hospitalized due to COVID-19-related misinformation in the first three months of 2020 [51]. As COVID-19 vaccination began, one popular myth was that Bill Gates wanted to use vaccines to embed microchips in people to track them, and this created vaccine hesitancy among citizens [52]. These reports show the severity of the spread of misinformation and how misinformation can aggravate a public health crisis.

Misinformation during disaster

In the context of emergency situations (unforeseen circumstances), the credibility of social media information has often been questioned [11]. When a crisis occurs, affected communities often experience a lack of the localized information they need to make emergency decisions. This accelerates the spread of misinformation, as people tend to fill this information gap with misinformation or 'improvised news' [9, 24, 25]. The broadcasting power of social media and the re-sharing of misinformation can weaken and slow down rescue operations [24, 25]. As local people have more access to the disaster area, they become the immediate reporters of a crisis through social media; mainstream media comes into the picture only later. However, recent incidents reveal that voluntary reporting of this kind has begun to affect rescue operations negatively, as it often acts as a collective rumor mill [9] that propagates misinformation. During the 2018 floods in the South Indian state of Kerala, a fake video about a Mullaperiyar Dam leakage created unnecessary panic among citizens, negatively impacting the rescue operations [53]. Information from mainstream media is relatively more reliable, as mainstream media have traditional gatekeepers such as peer reviewers and editors who cross-check the information source before publication. Chua et al. [28] found that a major share of corrective tweets were retweets from mainstream news media; thus, mainstream media is considered a preferred rumor-correction channel, where misinformation is countered with the right information.

Characterizing disaster misinformation

Oh et al. [9] studied citizen-driven information processing in three social crises using rumor theory. The main characteristic of a crisis is the complexity of information processing and sharing [9, 24]. A task is considered complex when characterized by an increase in information load, information diversity, or the rate of information change [54]. Information overload and information dearth are two grave concerns that interrupt communication between the affected community and the rescue team. Information overload, where too many enquiries and too much fake news distract the response team, slows down the recognition of valid information [9, 27]. According to Balan and Mathew [55], information overload occurs when the volume of information, in terms such as complexity of words and multiple languages, exceeds what a human being can process. Information dearth, in our context, is the lack of the localized information needed to help the affected community make emergency decisions. When official government communication channels or mainstream media cannot fulfill citizens' needs, they resort to information from their social media peers [9, 27, 29].

In a social crisis context, Tamotsu Shibutani [56] defines rumoring as the collective sharing and exchange of information that helps community members reach a common understanding of the crisis situation [30]. This mechanism also operates on social media, where it creates information dearth and information overload. Anxiety, information ambiguity (source ambiguity and content ambiguity), personal involvement, and social ties are the rumor-causing variables in a crisis context [9, 27]. In general, anxiety is a negative feeling caused by distress or a stressful situation, which can produce adverse outcomes [57]. In the context of a crisis or emergency, a community may experience anxiety in the absence of reliable information or, in other cases, when confronted with an overload of information, making it difficult to take appropriate decisions. Under such circumstances, people may tend to rely on rumors as a primary source of information. The level of influence of anxiety is higher during a community crisis than during a business crisis [9]. However, anxiety, as an attribute, varies with the nature of the platform: for example, Oh et al. [9] found that the Twitter community does not succumb to social pressure in the way the WhatsApp community does [30]. Simon et al. [30] developed a model of rumor retransmission on social media and identified information ambiguity, anxiety, and personal involvement as motives for rumormongering. Attractiveness is another rumor-causing variable; it occurs when aesthetically appealing visual aids or designs capture a receiver's attention. Here, believability matters more than the content's reliability or the truth of the information received.

The second stage of the spread of misinformation is misinformation retransmission. Apart from the rumor-causing variables reported in Oh et al. [9], Liu et al. [13] found the sender's credibility and attractiveness to be significant variables related to misinformation retransmission; personal involvement and content ambiguity can also affect it [13]. Abdullah et al. [25] explored retweeters' motives on the Twitter platform for spreading disaster information. Content relevance, early information [27, 31], trustworthiness of the content, emotional influence [30], retweet count, pro-social behavior (altruistic behavior among citizens during the crisis), and the need to inform their circle are the factors that drive users' retweets [25]. Lee et al. [26] also examined the impact of Twitter features on message diffusion based on the 2013 Boston Marathon tragedy. The study reported that during crisis events (especially disasters), a tweet with a shorter reaction time (the time between the crisis and the initial tweet) had a higher impact than other tweets. This shows that, to an extent, misinformation can be controlled if officials communicate at the early stage of a crisis [27]. Liu et al. [13] showed that tweets with hashtags influence the spread of misinformation. Further, Lee et al. [26] found that tweets with no hashtags had more influence, due to contextual differences: the usage of hashtags for marketing or advertising has a positive impact, while in disaster or emergency situations (as in the case of Twitter) it has a negative impact. Messages with no hashtags were more widely diffused than messages with hashtags [26].

Oh et al. [15] explored the behavioral aspects of social media participants that lead to the retransmission and spread of misinformation. They found that when people believe a threatening piece of misinformation they have received, they are more likely to spread it and to take what they see as necessary safety measures (sometimes even extreme actions). Repetition of the same misinformation from different sources also makes it more believable [28]. However, when people realized the received information was false, they were less likely to share it with others [13, 26]. The characteristics of the platform used to deliver the misinformation also matter: for instance, the number of likes and shares increases the believability of a social media post [47].

In summary, we found that platform architecture plays an essential role in the spread and believability of misinformation. While conducting this systematic literature review, we observed that most studies on disaster and misinformation are based on the Twitter platform: six of the nine papers we reviewed in the disaster area used Twitter data. When a message was delivered in video format, it had a higher impact than audio or text messages, and if the message had a religious or cultural narrative, it led to behavioral action (a danger-control response) [15]. Users were more likely to spread misinformation through WhatsApp than Twitter, and it was difficult to find the source of information shared on WhatsApp [30].

Misinformation related to healthcare

From our review, we found two systematic literature reviews that discuss health-related misinformation on social media. Yang et al. [58] explore the characteristics, impact, and influences of health misinformation on social media, while Wang et al. [59] address health misinformation related to vaccines and infectious diseases. These reviews show that health-related misinformation, especially about the MMR vaccine and autism, is spreading widely on social media, and governments have been unable to control it.

The spread of health misinformation is an emerging issue facing public health authorities. Health misinformation can delay proper treatment for patients, which can add further casualties in the public health domain [28, 59, 60]. People often tend to believe health-related information that is shared by their peers, and some share their treatment experiences or traditional remedies online. This information may belong to a different context and may not even be accurate [33, 34]. Compared to health-related websites, the language used in health information shared on social media tends to be simple and may not include essential details [35, 37]. Some studies report that conspiracy theories and pseudoscience have escalated casualties [33]. Pseudoscience refers to false claims presented as if the shared misinformation had scientific evidence behind it; the anti-vaccination movement on Twitter is one example [61]. Here, the user might have shared the information due to a lack of scientific knowledge [35].

Characterizing healthcare misinformation

The attributes that characterize healthcare misinformation are distinctly different from those of other domains. Chua and Banerjee [37] identified the characteristics of health misinformation as dread and wish. Dread rumors create panic and unpleasant consequences. For example, in the wake of COVID-19, misinformation was widely shared on social media claiming that children 'died on the spot' after a mass COVID-19 vaccination program in Senegal, West Africa [61]. This message created panic among citizens, as it was shared more than 7000 times on Facebook [61]. Wish rumors, by contrast, give hope to the receiver (e.g., a rumor about free medicine distribution) [62]. Dread rumors look more trustworthy and are more likely to go viral; a dread rumor was the cause of violence against a minority group in India during COVID-19 [7]. Chua and Banerjee [32] added pictorial and textual representations as characteristics of health misinformation: a rumor that contains only text is a textual rumor, while a pictorial rumor contains both text and images. However, Chua and Banerjee [32] found that users prefer textual rumors to pictorial ones. Unlike rumors circulated during a natural disaster, health misinformation can be long-lasting and can spread across boundaries. Personal involvement (the importance of the information to both sender and receiver), rumor type, and the presence of counter-rumors are some of the variables that can escalate users' trusting and sharing behavior related to rumors [37]. Madraki et al.'s [46] study of COVID-19 misinformation/disinformation reported that COVID-19 misinformation on social media differs significantly across languages, countries, and their cultures and beliefs. Acceptance of social media platforms, as well as governmental censorship, also plays an important role here.

Widespread misinformation can also change collective opinion [29]. Online users' epistemic beliefs can control their sharing decisions. Chua and Banerjee [32] argued that epistemologically naïve users (users who think knowledge can be acquired easily) are the type of users who accelerate the spread of misinformation on platforms; those who read or share misinformation are not necessarily likely to follow it [37]. Gu and Hong [34] examined health misinformation in the mobile social media context. Mobile internet users differ from large-screen users: a mobile phone user may have a stronger emotional attachment to the device, which also motivates them to believe received misinformation. Corrective efforts focused on large-screen users may therefore not work for mobile phone or small-screen users. Chua and Banerjee [32] suggested that the simplified sharing options of platforms also motivate users to share received misinformation before validating it. Shahi et al. [47] found that misinformation is propagated even by verified Twitter handles, which become part of misinformation transmission either by creating it or by endorsing it through likes or shares.

The focus of existing studies is heavily based on data from social networking sites such as Facebook and Twitter, although other platforms also escalate the spread of misinformation. Such a phenomenon was evident in the wake of COVID-19, when an intense wave of misinformation was reported on WhatsApp, TikTok, and Instagram.

Social media misinformation and politics

There have been several studies on the influence of misinformation on politics across the world [43, 44]. Political misinformation has predominantly been used to influence voters. The 2016 USA Presidential election, the 2017 French election, and the 2019 Indian elections have been reported as examples where misinformation influenced the election process [15, 17, 45]. During the 2016 USA election, the partisan effect was a key challenge, where false information was presented as if it were from an authorized source [39]. Based on a user's prior behavior on the platform, algorithms can manipulate the user's feed [40]. In a political context, fake news can create greater harm, as it can influence voters and the public. Although fake news has a short 'life', its consequences may not be short-lived. Verification of fake news takes time, and by the time verification results are shared, the fake news may have achieved its goal [43, 48, 63].

Characterizing misinformation in politics

Confirmation bias plays a dominant role in social media misinformation related to politics. Readers are more likely to read and engage with information that confirms their preexisting beliefs and political affiliations, and to reject information that challenges them [46, 48]. For example, in the 2016 USA election, pro-Trump fake news was accepted by Republicans [19]. Misinformation spreads quickly among people who have similar ideologies [19]. The nature of the interface can also escalate the spread of misinformation. Kim and Dennis [36] investigated the influence of platforms' information presentation formats and reported that social media platforms indirectly push users to accept certain information by presenting it in a way that gives little importance to its source. This presentation is manipulative, as people tend to believe information from a reputed source and are more likely to reject information from a lesser-known source [42].

Pennycook et al. [39] and Garrett and Poulsen [40] argued that warning tags (or flags) on headlines can reduce the spread of misinformation. However, it is not practical to assign warning tags to all misinformation, as it is generated faster than valid information, and the fact-checking process on social media takes time. Hence, people tend to believe that headlines without warning tags are true, and the warning tags may thus not serve their purpose [39]. Furthermore, tagging can increase readers' reliance on the tags and lead to misperception: readers tend to believe that all information has been verified and consider untagged false information to be more accurate, a phenomenon known as the implied truth effect [39]. In this case, source reputation ratings influence the credibility of the information, and readers give less importance to sources with low ratings [17, 50].

Theoretical perspectives of social media misinformation

We identified six theories among the articles we reviewed in relation to social media misinformation. Rumor theory was used most frequently; it served as the theoretical foundation in four articles [ 9 , 11 , 13 , 37 , 43 ]. Oh et al. [ 9 ] studied citizen-driven information processing on Twitter in three social crises using rumor theory, identifying key variables (source ambiguity, personal involvement, and anxiety) that drive the spread of misinformation. The authors further examined the acceptance of hate rumors and the aftermath of a community crisis based on the Bangalore mass exodus of 2012. Liu et al. [ 13 ] used rumor theory to examine why messages are retransmitted during disasters. Hazel Kwon and Raghav Rao [ 43 ] investigated how government internet surveillance affects citizens' involvement with cyber-rumors during a homeland security threat. Diffusion theory has also been used in IS research to explain the adoption of technological innovation; researchers have applied it to retweeting behavior among Twitter users (tweet diffusion) during extreme events [ 26 ], examining information diffusion through four major elements: innovation, time, communication channels, and social systems. Kim et al. [ 36 ] examined the effect of rating news sources on users' belief in social media articles under three rating mechanisms: expert rating, user article rating, and user source rating. Reputation theory was used to show how users discern cognitive biases in expert ratings.

Murungi et al. [ 38 ] used rhetorical theory to argue that fact-checkers have limited effectiveness against fake news spreading on social media platforms. The study proposed a different approach, focusing on the underlying belief structures that make misinformation acceptable, and applied the theory to fake news and socially constructed beliefs in the context of Alabama's 2017 senatorial election. Using the third-person effect as the theoretical grounding, the characteristics of rumor corrections on Twitter have been examined in the context of the death hoax of Singapore's first prime minister, Lee Kuan Yew [ 28 ]; this work explored the motives behind collective rumor and identified the key characteristics of collective rumor correction. Using situational crisis communication theory (SCCT), Paek and Hove [ 44 ] examined how governments can respond effectively to risk-related rumors during national-level crises, in the context of a food safety rumor. Refuting the rumor, denying it, and attacking its source are the three rumor response strategies the authors suggest to counter rumor-mongering (Table 2).

[Table 2: Theories used in social media misinformation research]

Determinants of misinformation in social media platforms

Figure 3 depicts the concepts that emerged from our review, organized in an Antecedents-Misinformation-Outcomes (AMIO) framework, an approach we adapt from Smith et al. [ 66 ]. Originally developed to study information privacy, their Antecedents-Privacy Concerns-Outcomes (APCO) framework provided a nomological canvas for presenting determinants, mediators, and outcome variables pertaining to information privacy. Following this canvas, we discuss the antecedents, mediators, and outcomes of misinformation as they emerged from prior studies (Fig. 3).

[Fig. 3: The Antecedents-Misinformation-Outcomes (AMIO) framework]

Determinants of misinformation

Anxiety, source ambiguity, trustworthiness, content ambiguity, personal involvement, social ties, confirmation bias, attractiveness, illiteracy, ease of sharing options and device attachment emerged as the variables determining misinformation in social media.

Anxiety is the emotional state of the person who sends or receives the information: a person who is anxious about received information is more likely to share or spread misinformation [ 9 ]. Source ambiguity concerns the origin of the message; when a person is convinced of the source of the information, perceived trustworthiness increases and the person shares it. Content ambiguity refers to the clarity of the information's content [ 9 , 13 ]. Personal involvement denotes how important the information is to both sender and receiver [ 9 ]. Social ties matter because information shared by a family member or social peer influences a person to share it onward [ 9 , 13 ]. Prior literature shows that confirmation bias is one of the root causes of political misinformation. Research on the attractiveness of received information indicates that users tend to believe and share information received on their personal devices [ 34 ]. After receiving misinformation from various sources, users accept it based on their existing beliefs and on social, cognitive, and political factors. Oh et al. [ 15 ] observed that during crises people tend by default to believe unverified information, especially when it helps them make sense of the situation. Misinformation has significant effects on individuals and society: loss of lives [ 9 , 15 , 28 , 30 ], economic loss [ 9 , 44 ], loss of health [ 32 , 35 ], and loss of reputation [ 38 , 43 ] are the major outcomes that emerged from our review.

Strategies for controlling the spread of misinformation

Discourse on mitigating social media misinformation has prioritized strategies such as early communication from officials and the use of scientific evidence [ 9 , 35 ]. When people realize that a received message is false, they are less likely to share it with others [ 15 ]. Another strategy is rumor refutation: reducing citizens' intention to spread misinformation by providing real information, which reduces their uncertainty and serves to control misinformation [ 44 ]. Rumor correction models for social media platforms also employ algorithms and crowdsourcing [ 28 ]. The majority of the papers we reviewed suggested expert fact-checking, source rating of received information, attaching warning tags to headlines or entire articles [ 36 ], and content flagging by platform owners [ 40 ] as strategies to control the spread of misinformation. Studies on controlling misinformation in the public health context showed that governments can also enlist public health professionals to mitigate misinformation [ 31 ].

However, the aforementioned strategies have been criticized for several limitations. Most papers identified confirmation bias as significantly undermining mitigation strategies, especially in the political context, where people tend to believe information that matches their prior beliefs. Garrett and Poulsen [ 40 ] argued that in an emergency, recipients may be unable to judge whether a piece of misinformation is true or false; providing an alternative explanation or the real information therefore has more effect than providing a fact-checking report. Studies by Garrett and Poulsen [ 40 ] and Pennycook et al. [ 39 ] reveal a drawback of attaching warning tags to news headlines: once flagging is introduced, untagged information is taken to be true or reliable, creating an implied truth effect. Moreover, it is not always practical to evaluate every social media post. Similarly, Kim and Dennis [ 36 ] studied fake news flagging and found that flags did not change users' beliefs, although they created cognitive dissonance that prompted users to investigate the truthfulness of the headline. In 2017, Facebook discontinued its fake news flagging service owing to these limitations [ 45 ].

Key research gaps and future directions

Although misinformation is a multi-sectoral issue, our systematic review found that interdisciplinary research on social media misinformation is relatively scarce. Confirmation bias is one of the most significant behavioral problems motivating the spread of misinformation, yet the lack of research on it leaves ample scope for future interdisciplinary work across Data Science, Information Systems, and Psychology in domains such as politics and health care. In the disaster context, there is scope to study the behavior of first responders and emergency managers to understand their patterns of information exchange with the public. Similarly, future researchers could analyze communication patterns between citizens and frontline workers in the public health context, which may be useful for designing counter-misinformation campaigns and awareness interventions. Since information disorder is a multi-sectoral issue, researchers also need to understand misinformation patterns across multiple government departments to enable coordinated counter-misinformation interventions.

There is a further dearth of studies on institutional responses to misinformation. To fill this gap, future studies could analyze governmental and organizational interventions to control misinformation at the level of policies, regulatory mechanisms, and communication strategies. For example, India has no specific law against misinformation, but some provisions of the Information Technology Act (IT Act) and the Disaster Management Act can be used to control misinformation and disinformation. An example of an awareness intervention is the 'Satyameva Jayate' initiative launched in the Kannur district of Kerala, India, which focused on sensitizing schoolchildren to spot misinformation [ 67 ]. As noted earlier, within research on misinformation in the political context, there is little work on the strategies states adopt to counter misinformation; building on cases like 'Satyameva Jayate' would further contribute to knowledge in this area.

Technology-based strategies adopted by social media platforms to control the spread of misinformation emphasize corrective algorithms and keyword- and hashtag-based detection [ 32 , 37 , 43 ]. These corrective measures, however, have their own limitations. Corrective algorithms are ineffective unless applied immediately after the misinformation is created. Researchers use related hashtags and keywords to retrieve content shared on social media platforms, but they cannot cover all the keywords or hashtags employed by users, and algorithms may fail to decipher content shared in regional languages. Another limitation is that platform algorithms recommend and display content based on users' activities and interests, which limits their access to information from multiple perspectives and reinforces their existing beliefs [ 29 ]. A reparative measure is to display corrective information as 'related stories' alongside misinformation. However, Facebook's related stories algorithm activates only when an individual clicks on an outside link, which limits the number of people who will see the corrective information. Future research could investigate the impact of related stories as a corrective measure by analyzing the relation between misinformation and the frequency of related stories posted vis-à-vis real information.
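To make the retrieval step and its blind spots concrete, here is a minimal Python sketch of keyword- and hashtag-based filtering; the posts and tracked terms are hypothetical, invented purely for illustration.

```python
# Minimal sketch of keyword/hashtag-based retrieval of social media posts.
# All post records and tracked terms below are hypothetical illustrations
# of the coverage gap discussed in the text.

TRACKED_TERMS = {"#fakenews", "#covid19", "vaccine", "miracle cure"}

posts = [
    {"id": 1, "text": "This miracle cure beats any vaccine! #covid19"},
    {"id": 2, "text": "Cette rumeur est fausse"},           # regional language, no tracked term
    {"id": 3, "text": "Breaking: shocking election claim"},  # misleading, but no tracked term
]

def matches(post, terms):
    """Return True if the post contains any tracked keyword or hashtag."""
    text = post["text"].lower()
    return any(term in text for term in terms)

retrieved = [p for p in posts if matches(p, TRACKED_TERMS)]
missed = [p for p in posts if not matches(p, TRACKED_TERMS)]

print(f"retrieved {len(retrieved)} post(s); missed {len(missed)}")
# Posts 2 and 3 never enter the dataset: keyword lists cannot be exhaustive,
# and simple string matching does not generalize across languages.
```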

Our review also found a scarcity of research on the spread of misinformation on certain social media platforms, while studies are skewed toward a few others. Of the studies reviewed, 15 articles concentrated on misinformation spread on Twitter and Facebook. Although recent news reports indicate that misinformation and disinformation spread largely through popular messaging platforms such as WhatsApp, Telegram, WeChat, and Line, research using data from these platforms is scanty. In the Indian context especially, the magnitude of problems arising from misinformation on WhatsApp is overwhelming [ 68 ]. To address this lacuna, we suggest that future researchers investigate the patterns of misinformation spreading on platforms like WhatsApp. Moreover, message diffusion patterns are unique to each social media platform, so it is useful to study misinformation diffusion patterns across platforms. Future studies could also address the differential roles, patterns, and intensity of misinformation spread across various messaging and photo/video-sharing social networking services.

Evident from our review, most research on misinformation is based on the Euro-American context, and the dominant models proposed for controlling misinformation may have limited applicability to other regions. Moreover, the popularity and usage patterns of social media platforms vary across the globe with cultural differences and political regimes; researchers of social media should therefore take cognizance of the empirical experiences of 'left-over' regions.

To understand the spread of misinformation on social media platforms, we conducted a systematic literature review in three important domains where misinformation is rampant: disaster, health, and politics. We reviewed 28 articles relevant to these themes. This is one of the earliest reviews focusing on social media misinformation research, particularly across these three sensitive domains. We have discussed how misinformation spreads in the three sectors, the methodologies researchers have used, the theoretical perspectives employed, the Antecedents-Misinformation-Outcomes (AMIO) framework for understanding key concepts and their inter-relationships, and strategies for controlling the spread of misinformation.

Our review also identified major gaps in IS research on misinformation in social media, including the need for methodological innovations beyond the experimental methods that have been widely used. This study has limitations that we acknowledge. We may not have identified all relevant papers on the spread of misinformation on social media, as some authors may have used different keywords, and our strict inclusion and exclusion criteria narrowed the pool further. Relevant publications in languages other than English were not covered, and our focus on three domains also restricted the number of papers we reviewed.

Author contributions

TMS: Conceptualization, Methodology, Investigation, Writing—Original Draft, SKM: Writing—Review & Editing, Supervision.

This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Declarations

On behalf of both authors, the corresponding author states that there is no conflict of interest in this research paper.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Contributor Information

Sadiq Muhammed T, Email: [email protected] .

Saji K. Mathew, Email: saji@iitm.ac.in.

VISCHER

Misuse of social media by employees

Category: Employment Law


Disinformation and hate speech on social media are increasingly becoming a social problem. But what should be done when employees jeopardise their employer's reputation through their behaviour on social media?

Rules for the use of social media

By law, employees have a duty of loyalty to their employer, according to which they must safeguard the employer's legitimate interests in good faith. In particular, the employee must refrain from doing anything that could damage the employer's business or reputation.

It is advisable for companies to regulate how employees use social media (either separately or as part of a more comprehensive set of personnel regulations). In this way, grey areas can be avoided and clear rules can be established, at least as far as work activities are concerned.

Workplace vs. private life

While conduct at the workplace or during the performance of work is largely determined by the employer and the use of social media can also be prohibited altogether, the employer generally may not impose any regulations on the employee's private life. The employee's freedom of personality permits the free development of his or her personality, provided this does not violate the law or the rights of others. Furthermore, every employee has the right to freedom of expression. However, these fundamental rights are not unlimited. The interests and rights of the persons involved must be weighed against each other. In particular, activities in the private sphere are not permitted if they infringe the legitimate interests of the employer.

Crossing boundaries on social media

In addition to insults or hostility towards the employer or colleagues on social media, the publication of confidential information in connection with the employment relationship is also prohibited. In the case of irregularities in the workplace, whistleblowing in public may be permitted under certain circumstances, but generally only as a last resort, if first the employer and then the competent authority have not responded to a justified complaint.

Whether there has been an unauthorised crossing of boundaries must be determined on a case-by-case basis, taking into account all the interests involved. For example, employees are in principle free in their private lives to make themselves ridiculous with drunken party photos or to spread absurd conspiracy theories. However, the balance of interests can be tipped in particular if, for example, behaviour at work or at company events is depicted, possibly even against the will of the colleagues depicted.

The industry involved and the function of the offending employee also play a role. The behaviour of a CEO reflects on the employer more quickly than that of a trainee who goes overboard. Vaccine-sceptical statements are more sensitive at a vaccine manufacturer than elsewhere, and satanist statements by a priest would probably not be viewed too favourably by his church either, and rightly so.

In the case of criminally punishable behaviour (e.g. racially discriminatory statements) on social media towards third parties, a point may well be reached where the employer's interests are already violated simply by being associated with such a person. 

Control options and sanctions

Employers may only process personal data about their employees to the extent that it is related to the workplace. In principle, employers are therefore not allowed to search private social media platforms (e.g. Facebook, Instagram, Twitter, Snapchat, TikTok) for information about their employees, unless the employees are professionally active there (e.g. as a "social media manager"). The situation is of course different for professional platforms such as LinkedIn or Xing, whose professional orientation implies a workplace reference.

However, anyone who is friends with the boss or colleagues on Facebook should not be surprised if boundary violations become known to the employer; such reports can usually also be acted upon, given their detrimental effect on the employer's interests.

If violations of boundaries become known, the employer can impose proportionate sanctions depending on the severity of the breach of the duty of loyalty. For example, measures ranging from instructions to warnings to termination without notice are conceivable.

Employees have to protect the legitimate interests of the employer in good faith even in what is actually their private conduct on social media. If this duty of loyalty is breached, the employer can impose proportionate sanctions. However, from a legal point of view, the balancing of interests is often very delicate, as in most cases weighty (fundamental) rights of the parties involved have to be weighed against each other.

If you have any questions on this topic, please do not hesitate to contact our employment law team at any time.

Authors:  Marc Ph. Prinz, Gian Geel

Marc Ph. Prinz

  • Attorney at Law
  • +41 58 211 36 17

Gian Geel

  • Managing Associate
  • +41 58 211 34 48


7 Examples of Data Misuse in the Modern World

Every time we use social media, sign up for a mailing list, or download a free app onto our phones, we agree to the provider’s terms of use.

While many of these agreements are unnecessarily dense and challenging to process, they do serve one very specific role for everyday people like you and me—they set the terms for how a company can use the personal data they collect from us. Typically, that data gets used in one of three ways:

  • Personal data is aggregated and analyzed to provide us with more personalized advertisements.
  • Personal data is logged and assessed for research and development.
  • Personal data is sold to a data brokerage.

In all three of these scenarios, there are specific parameters for how companies handle, store, and distribute your personal data. Yet, in a work-from-home world, it is becoming more and more difficult for companies to enforce protections around sensitive data and prevent both internal and external data breaches.

What is Data Misuse?

Data misuse occurs when individuals or organizations use personal data beyond the purposes stated when it was collected. Often, data misuse isn't the result of direct company action but rather the misstep of an individual or even a third-party partner. For example, a bank employee might access private accounts to view a friend's current balance, or a marketer might use one client's data to inform another customer's campaign.

What Are The Types of Misuse of Personal Information?

In broad strokes, there are three different types of data misuse:

  • Commingling
  • Personal Benefit
  • Ambiguity

1. Commingling

Commingling happens when an organization captures data from a specific audience for a specific stated purpose, then reuses that same personal data for a separate task in the future. Reusing data submitted for academic research for marketing purposes, or sharing client data between sister organizations without consent, are among the most common commingling scenarios. Commingling often occurs out of ease of access: marketers and business owners already have the data and assume that, since they collected it, they are entitled to use it at their own discretion.
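A purpose-limitation check is one way to block commingling at the code level. Below is a minimal, hypothetical Python sketch (the record layout and purpose names are invented for illustration, not any specific framework's API): each record carries the purpose it was collected for, and any use outside that purpose is rejected.

```python
# Hypothetical purpose-limitation check to prevent commingling.
# Record layout and purpose names are illustrative assumptions.

class PurposeViolation(Exception):
    pass

record = {
    "email": "subject@example.com",
    "collected_for": {"academic_research"},  # stated purpose at collection time
}

def use_record(record, purpose):
    """Allow a use only if it matches a purpose the subject consented to."""
    if purpose not in record["collected_for"]:
        raise PurposeViolation(
            f"record collected for {record['collected_for']}, not '{purpose}'"
        )
    return record["email"]

use_record(record, "academic_research")   # OK: matches the stated purpose
try:
    use_record(record, "marketing")       # commingling: reuse for a new task
except PurposeViolation as e:
    print("blocked:", e)
```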

2. Personal Benefit

Data misuse for personal benefit occurs when someone with access to personal data abuses that power for their own gain. Whether driven by simple curiosity or by the pursuit of a competitive advantage, this type of misuse is rarely done with malicious intent. Personal-benefit cases regularly involve company employees moving data to personal devices for easy access, often with disastrous results.

3. Ambiguity

Ambiguity occurs when organizations fail to disclose explicitly, concisely, and accessibly how user data is collected and what it will be used for. Organizations have typically used this strategy when they were unsure how they wanted to use customer data but still wanted to collect it. However, ambiguity leaves the terms of use wide open to interpretation, giving the organization a blank check to use customers' personal data as it wishes.

Is Data Misuse Illegal?

To be clear, data misuse isn't necessarily theft—theft occurs when a bad actor takes personal data without permission—data misuse is when legitimately collected information is applied beyond its original purpose. These instances are typically less malicious than an insider threat selling company data to a third party, and more often the product of negligence.

While the intention may be simple, the stakes can be incredibly high. The cost of data misuse can range from thousands to billions of dollars in fines, not including ransomware or settlements resulting from the misuse. We’ve seen data misuse cases take center stage multiple times in recent years, and in every instance, there have been significant ramifications for the company and its customers.

6 Examples of Data Misuse in the Modern World

  • Facebook and Cambridge Analytica
  • Twitter Targeted Advertising
  • Google Location Tracking
  • Uber "God View" Rider Tracking
  • Morgan Stanley Data Breach
  • Leave.EU and Eldon Insurance

1. Facebook and Cambridge Analytica Election Influencing

Perhaps the most infamous example of data misuse: in 2018, news outlets revealed that the UK political consulting firm Cambridge Analytica had acquired and used personal data from Facebook users that had initially been collected by a third party for academic research. In total, Cambridge Analytica had unauthorized access to the data of nearly 87 million Facebook users—many of whom had not given any explicit permission for the company to use or even access their information. Within two months of the scandal, Cambridge Analytica was bankrupt and defunct, while Facebook was left with a $5 billion fine from the Federal Trade Commission.

2. Twitter Targeted Advertising

In September 2019, Twitter admitted to letting advertisers access its users' personal data to improve the targeting of marketing campaigns. Described by the company as an internal error, the bug gave Twitter's Tailored Audiences advertisers access to user email addresses and phone numbers. Twitter's ad buyers could then cross-reference their marketing databases with Twitter's to identify shared customers and serve them targeted ads—all without users' permission.

Not to be outdone, Amazon drew the attention of the European Union's competition watchdog, whose preliminary investigations found the online retailer "appears to use competitively sensitive information – about marketplace sellers, their products and transactions on the marketplace." The EU went on to open a second investigation in 2020 concerning the retailer's use of non-public independent seller data. The outcomes of both are still pending.

3. Google Location Tracking for Advertisers

Google was fined nearly $57 million in 2019 by the French data protection authority for failing to adequately disclose how it used users' personal data. Around the same time, Ireland's Data Protection Commission notified the global juggernaut of its intention to investigate the company's use of, and transparency around, user location data—its second such notification since the GDPR took effect in 2018.

4. Uber “God View” Rider Tracking

Getting in on data misuse before it was cool, Uber was fined $20,000 by the Federal Trade Commission (FTC) for its "God View" tool in 2014. "God View" let Uber employees access and track the location and movements of Uber riders without their permission. As a result of its settlement with the FTC, Uber paid the fine and agreed to hire an outside firm to audit its privacy practices every two years from 2014 through 2034.

5. Morgan Stanley Data Breach

And it's not just tech firms! In 2015, a Morgan Stanley financial advisor pleaded guilty to taking data from roughly 730,000 accounts—about 10% of the wealth management firm's user base—and attempting to bring that information with him to a competitor. In the ensuing security breach, the personal data of nearly 900 clients was accessed and posted online by hackers who had compromised the former employee's home computer.

6. Leave.EU and Eldon Insurance Data Commingling

While the pro-Brexit group Leave.EU and UK insurance provider Eldon Insurance have very little in common on the surface, both organizations were co-founded by businessman Arron Banks. In 2019, the UK's Information Commissioner's Office fined the two organizations roughly $83,000 apiece for commingling customer data—political data used for insurance, and insurance data used for politics.

What Are The Risks of Data Misuse?

While the financial impact of data misuse shouldn’t be understated, perhaps the greatest business impact comes in the loss of trust between the company and its audience. 

It is entirely reasonable to expect the companies that handle our data to do so securely and under the agreed terms. Anything short of that agreement is a massive violation of trust between the people and the service provider—trust that is not easily rebuilt. Cambridge Analytica folded in less than three months, Google is still facing constant criticism, and Uber will be audited for the better part of the next two decades.

While these actions may lack the malice of a traditional black-hat cyberattack, data misuse increases the opportunities for criminals to access private data. Many instances of data misuse start with employees or third-party vendors with legitimate access transferring company data from a secure server onto a personal device with less stringent security features. Even the most robust network security provisions are irrelevant once data leaves the secure perimeter: once personal data—or access to it—sits on a more susceptible device, cybercriminals have a much easier path to it.

In 2020, hackers accessed 5.2 million Marriott guest records, including customer contact information, personal preferences, birthdays, and more. This attack succeeded because the attackers compromised employee credentials to access a third-party application. It was two months before anyone realized something was wrong.

How Can Misuse of Data Be Prevented?

Often, data misuse boils down to ignorance and negligence. However, as our digital footprints continue to grow and evolve, the necessity for responsible digital hygiene extends to every citizen of the internet—not just IT professionals. 

That starts with improving our general online practices, so that we as users are more selective about the companies we trust with our data, and we as professionals treat our customers' data with the same care we would our own.

1. Leave work at work

Don't mix professional and personal devices. Never download workplace data to your personal laptop, smartphone, desktop, home server, or whatever device you choose, no matter how fancy your home firewall, encryption, or VPN may be. Mixing the two only invites further scrutiny and additional opportunities for cyberattacks.

2. Practice conscious digital hygiene

Phishing instances have skyrocketed in recent years, and while many users are more and more confident in their ability to sniff out bad actors, there’s always one person on our social media feeds trying to sell knock-off Ray-Bans. Don’t fall for the cheap tactics of bad actors. Confirm URLs before submitting personal data, don’t click links from email addresses you don’t recognize, and use complex passwords.

3. Be selective

For organizations and individuals alike, putting your faith in the wrong partner can have disastrous results. As we saw with Facebook and Marriott, poor practices by a third-party vendor can not only compromise entire organizational networks but also sully the trust between brands and their customers in an instant. Likewise, we ought to carefully measure the quality of the places where we share our personal data.

While we can refine and perfect our online habits to prevent our own potential misuse, we rarely get to set data policies for the companies we frequent. We as users, customers, and contributors must hold the brands we trust accountable for maintaining those expectations. Change never happens out of complacency; whether it's Big Tech or Wall Street, organizations only create serious policy around data misuse when their customers demand it. Organizations should have basic security structures such as behavior alerts and access management tools, complemented by need-to-know access and zero-trust architectures. Likewise, we as consumers have a right to clear data collection policies and transparent use cases.
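As a rough illustration of what "need-to-know access" plus a behavior alert might look like in practice, here is a minimal Python sketch; the roles, data scopes, and alert sink are hypothetical stand-ins, not a production design.

```python
# Minimal sketch of need-to-know access control with a behavior alert.
# Roles, scopes, and grants are hypothetical illustrations.

import logging

logging.basicConfig(level=logging.WARNING)

GRANTS = {
    # role -> data scopes that role legitimately needs
    "support_agent": {"contact_info"},
    "fraud_analyst": {"contact_info", "transactions"},
}

def access(role, scope, record_id):
    """Serve data only within the role's grant; alert on anything else."""
    if scope in GRANTS.get(role, set()):
        return f"{scope} for record {record_id}"
    # Deny AND alert: out-of-scope reads are the 'personal benefit' pattern.
    logging.warning("ALERT: %s attempted out-of-scope read of %s/%s",
                    role, scope, record_id)
    raise PermissionError(f"{role} has no need-to-know for {scope}")

print(access("fraud_analyst", "transactions", 42))  # allowed
try:
    access("support_agent", "transactions", 42)      # denied + alerted
except PermissionError as e:
    print("blocked:", e)
```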

As governing policies like the EU's GDPR and California's CCPA continue to shape the future of data regulation, it is only a matter of time before global expectations around data ownership and data misuse become more explicit in every region.

How is Invisibly Combatting Data Misuse?

If you bank online, use social media, or send emails through any large tech platform, then your data is already being shared and sold for profit. It’s time to reclaim its worth and use it to unlock premium rewards instead of giving it away for free.

Invisibly is a platform that allows you to maximize the value of your data and earn brand rewards for it. The data collected is the data you decide to share – putting you in direct control, always. Connect your bank or credit card account(s), take surveys, and earn points to unlock your rewards from brand partners including Target, Ulta Beauty, Best Buy, and many more.

See your data work for you.

Misinformation works, and a handful of social ‘supersharers’ sent 80% of it in 2020


A pair of studies published Thursday in the journal Science offers evidence not only that misinformation on social media changes minds, but that a small group of committed "supersharers," predominantly older Republican women, were responsible for the vast majority of the "fake news" in the period studied.

The studies, by researchers at MIT, Ben-Gurion University, Cambridge and Northeastern, were independently conducted but complement each other well.

In the MIT study led by Jennifer Allen, the researchers point out that misinformation has often been blamed for vaccine hesitancy in 2020 and beyond, but that the phenomenon remains poorly documented. And understandably so: not only is data from the social media world immense and complex, but the companies involved are reluctant to take part in studies that may paint them as the primary vector for misinformation and other data warfare. Few doubt that they are, but that is not the same as scientific verification.

The study first shows that exposure to vaccine misinformation (in 2021 and 2022, when the researchers collected their data), particularly anything that claims a negative health effect, does indeed reduce people’s intent to get a vaccine. (And intent, previous studies show, correlates with actual vaccination.)

Second, the study showed that articles flagged by moderators at the time as misinformation had a greater per-piece effect on vaccine hesitancy than non-flagged content — so, well done, flagging. Except that the volume of unflagged misinformation was vastly greater than the flagged material: even though the unflagged content had a lesser effect per piece, its overall influence was likely far greater in aggregate.

This kind of misinformation, they clarified, was more like big news outlets posting misleading info that wrongly characterized risks or studies. For example, who remembers the headline “A healthy doctor died two weeks after getting a COVID vaccine; CDC is investigating why” from the Chicago Tribune? As commentators from the journal point out, there was no evidence the vaccine had anything to do with his death. Yet despite being seriously misleading, it was not flagged as misinformation, and subsequently the headline was viewed some 55 million times — six times as many people as the number who saw all flagged materials total.
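A quick back-of-envelope calculation makes the volume argument concrete. In the Python sketch below, the per-exposure effect sizes are hypothetical placeholders; only the rough exposure ratio (the single unflagged headline reaching about six times as many people as all flagged material combined) comes from the article.

```python
# Back-of-envelope arithmetic for the per-piece vs. aggregate argument.
# Effect sizes are hypothetical; only the ~6x exposure ratio is from the article.

flagged_views   = 55_000_000 / 6   # all flagged misinformation, total views
unflagged_views = 55_000_000       # one misleading mainstream headline alone

effect_flagged   = 3.0  # hypothetical: hesitancy impact per exposure (arbitrary units)
effect_unflagged = 1.0  # hypothetical: weaker per-exposure impact

agg_flagged   = flagged_views * effect_flagged
agg_unflagged = unflagged_views * effect_unflagged

print(f"aggregate impact ratio (unflagged/flagged): {agg_unflagged / agg_flagged:.1f}x")
# Even granting flagged content 3x the per-piece effect, its total impact is
# half that of the single unflagged headline: volume dominates.
```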


“This conflicts with the common wisdom that fake news on Facebook was responsible for low U.S. vaccine uptake,” Allen told TechCrunch. “It might be the case that Facebook usership is correlated with lower vaccine uptake (as other research has found) but it might be that this ‘gray area’ content that is driving the effect — not the outlandishly false stuff.”

The finding, then, is that while tamping down on blatantly false information is helpful and justified, it ended up being only a tiny drop in the bucket of the toxic farrago social media users were then swimming in.

And who were the swimmers who were spreading that misinformation the most? It’s a natural question, but beyond the scope of Allen’s study.

In the second study published Thursday, a multi-university group reached the rather shocking conclusion that 2,107 registered U.S. voters accounted for spreading 80% of the “fake news” (which term they adopt) during the 2020 election.

It’s a large claim, but the study cut the data pretty convincingly. The researchers looked at the activity of 664,391 voters matched to active X (then Twitter) users, and found a subset of them who were massively over-represented in terms of spreading false and misleading information.

These 2,107 users exerted (with algorithmic help) an enormously outsized network effect in promoting and sharing links to politics-flavored fake news. The data show that one in 20 American voters followed one of these supersharers, putting them massively out front of average users in reach. On a given day, about 7% of all political news linked to specious news sites, but 80% of those links came from these few individuals. People were also much more likely to interact with their posts.
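The underlying measurement is simple in principle: tally the links each user shares and ask what fraction of the total the top accounts produce. Here is a toy Python sketch of that tally; the counts are invented, not the study's data.

```python
# Toy sketch of measuring how concentrated fake-news sharing is.
# Counts below are invented illustrations, not the study's data.

from collections import Counter

# user_id -> number of links to untrustworthy news sites shared
link_counts = Counter({"u1": 400, "u2": 250, "u3": 120, "u4": 5,
                       "u5": 3, "u6": 2, "u7": 1, "u8": 1})

def top_share(counts, top_n):
    """Fraction of all shared links attributable to the top_n sharers."""
    ranked = counts.most_common(top_n)
    return sum(c for _, c in ranked) / sum(counts.values())

print(f"top 3 users account for {top_share(link_counts, 3):.0%} of links")
# Applying this kind of tally to 664,391 matched voters is how the study
# arrived at 2,107 accounts behind ~80% of fake-news links.
```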

Yet these were no state-sponsored plants or bot farms. “Supersharers’ massive volume did not seem automated but was rather generated through manual and persistent retweeting,” the researchers wrote. (Co-author Nir Grinberg clarified to me that “we cannot be 100% sure that supersharers are not sock puppets, but from using state-of-the-art bot detection tools, analyzing temporal patterns and app use they do not seem automated.”)

They compared the supersharers to two other sets of users: a random sampling and the heaviest sharers of non-fake political news. They found that these fake newsmongers tend to fit a particular demographic: older, women, white and overwhelmingly Republican.


Supersharers were 60% female, compared with the panel's even split, and significantly though not wildly more likely to be white than the already largely white panel at large. But they skewed far older (58 on average versus 41 for the panel as a whole), and some 65% were Republican, compared with about 28% of the Twitter population at the time.

The demographics are certainly revealing, though keep in mind that even a large and highly significant majority is not all. Millions, not 2,107, retweeted that Chicago Tribune article. And even supersharers, the Science comment article points out, “are diverse, including political pundits, media personalities, contrarians, and antivaxxers with personal, financial, and political motives for spreading untrustworthy content.” It’s not just older ladies in red states, though they do figure prominently. Very prominently.

As Baribi-Bartov et al. darkly conclude, “These findings highlight a vulnerability of social media for democracy, where a small group of people distort the political reality for many.”

One is reminded of Margaret Mead’s famous saying: “Never doubt that a small group of thoughtful, committed, citizens can change the world. Indeed, it is the only thing that ever has.” Somehow I doubt this is what she had in mind.


New Mexico judge grants Mark Zuckerberg’s request to be dropped from child safety lawsuit


ALBUQUERQUE, N.M. (AP) — A New Mexico judge on Thursday granted Mark Zuckerberg’s request to be dropped from a lawsuit that alleges his company has failed to protect young users on its social media platforms from sexual exploitation.

The case is one of many filed by states, school districts and parents against Meta and its platforms over concerns about child exploitation. Beyond courtrooms around the U.S., the issue has been a topic of congressional hearings as lawmakers and parents are growing increasingly concerned about the effects of social media on young people’s lives.

In New Mexico, Attorney General Raúl Torrez sued Meta and Zuckerberg late last year following an undercover online investigation.

While granting Zuckerberg's request, Judge Bryan Biedscheid denied Meta's motion to dismiss the state's claims, marking what Torrez described as a crucial step in allowing the case to proceed against the social media giant.

“For decades, Meta Platforms have prevented nearly every legal challenge against them from proceeding,” Torrez said in a statement. “Today, the New Mexico Department of Justice brought that era to an end and is the first case by a state attorney general to raise child sexual exploitation claims, which can now be addressed. All social media platforms that harm their users should be on notice.”


Separately, claims were levied in late October by the attorneys general of 33 states — including California and New York — that Instagram and Facebook include features deliberately designed to hook children and contribute to a youth mental health crisis.

As for Zuckerberg, Biedscheid said he wasn’t persuaded by the state’s arguments that the executive should remain a party to the New Mexico lawsuit, but he noted that could change depending on what evidence is presented as the case against Meta proceeds.

Torrez’s office said it will continue to assess whether Zuckerberg should be named individually in the future.

Attorneys for Meta had argued during the hearing that prosecutors would not be able to establish that the company had specifically directed its activities at New Mexico residents, meaning there would be no personal jurisdiction under which the company could be held liable. They said the platforms are available worldwide and that users agree to the terms of service when signing up.

Prosecutors told the judge that New Mexico is not seeking to hold Meta accountable for its content but rather its role in pushing out that content through complex algorithms that proliferate material that can be sensational, addictive and harmful.

The design features and how people interact with them are the issue, said Serena Wheaton, an assistant attorney general in the consumer protection division.

Earlier this month, Torrez announced charges against three men who were accused of using Meta’s social media platforms to target and solicit sex with children. The arrests were the result of a monthslong undercover operation in which the suspects connected with decoy accounts set up by the state Department of Justice.

That investigation began in December around the time the state filed its lawsuit against the company.

At the time of the arrests, Torrez placed blame on Meta executives — including Zuckerberg — and suggested that the company was putting profits above the interests of parents and children.

Meta has disputed those allegations, saying it uses technology to prevent suspicious adults from finding or interacting with children and teens on its apps and that it works with law enforcement in investigating and prosecuting offenders.

As part of New Mexico’s lawsuit, prosecutors say they have uncovered internal documents in which Meta employees estimate about 100,000 children every day are subjected to sexual harassment on the company’s platforms.


ScienceDaily

More than just social media use may be causing depression in young adults, study shows

Over the past few decades, there has been a significant increase in the prevalence of depression in adolescents and young adults -- and a simultaneous uptick in the inclusion of technology and social media in everyday life. However, it is unclear how exactly social media use and depression are associated and relate to other behaviors, such as physical activity, green space exposure, cannabis use and eveningness (the tendency to stay up late).

In a study published May 15 in the International Journal of Mental Health and Addiction , a team of researchers, led by experts at Johns Hopkins Children's Center, investigated the association among social media use, depression and other health-related behaviors of young adults over time.

"Research shows that when social media use is high, depression is also high. But the question is -- is that because social media caused that person to be depressed? Or is it because people who are depressed tend to also use social media more, and spend less time exercising and being in green spaces? That is what we wanted to understand," says Carol Vidal, M.D., Ph.D., M.P.H., the first author of the study, a child and adolescent psychiatrist at Johns Hopkins Children's Center and an assistant professor of psychiatry and behavioral sciences at the Johns Hopkins University School of Medicine.

In their study, 376 young adults in Canada (82.4% women) were asked to complete three online questionnaires between May 2021 and January 2022. At each time point, participants self-reported depressive symptoms based on the Patient Health Questionnaire (PHQ-9) -- a nine-item scale that is commonly used to measure depression -- as well as social media use, green space exposure, physical activity and cannabis use.
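
For reference, PHQ-9 scoring works by summing the nine item scores (0 to 3 each) and mapping the total (0 to 27) to a severity band. The sketch below is illustrative Python based on the scale's standard published cutoffs, not code or data from this study; "at least mild" symptoms, mentioned below, correspond to totals of 5 or higher.

    # Minimal illustrative sketch of standard PHQ-9 scoring (not from the study).
    def phq9_severity(item_scores: list[int]) -> str:
        """Map nine PHQ-9 item scores (0-3 each) to the standard severity band."""
        if len(item_scores) != 9 or any(s not in (0, 1, 2, 3) for s in item_scores):
            raise ValueError("PHQ-9 expects nine item scores between 0 and 3")
        total = sum(item_scores)
        if total <= 4:
            return "minimal"
        if total <= 9:
            return "mild"               # "at least mild" means a total of 5+
        if total <= 14:
            return "moderate"
        if total <= 19:
            return "moderately severe"
        return "severe"

    # Example: answering "several days" (1) on seven of the nine items
    print(phq9_severity([1, 1, 1, 1, 1, 1, 0, 0, 1]))  # -> "mild" (total = 7)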

The researchers found that most study participants had at least mild depressive symptoms. Findings showed that participants who had higher social media use tended to be more depressed, and people who were more depressed also tended to use social media more. However, researchers found that social media use did not predict an increase or decrease in depressive symptom levels over time.
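
The direction-of-effect question here is commonly probed with cross-lagged analyses: regress each later-wave variable on both earlier-wave variables and check whether the "cross" path adds predictive power. The sketch below illustrates that logic on simulated data only; it is not the authors' actual model (the paper takes a longitudinal network perspective), and every number in it is made up.

    # Illustrative cross-lagged check on simulated data (not the study's model).
    import numpy as np

    rng = np.random.default_rng(0)
    n = 376  # sample size borrowed from the study; the data below is synthetic

    dep1 = rng.normal(8, 4, n)                    # wave-1 depression scores
    smu1 = 2 + 0.08 * dep1 + rng.normal(0, 1, n)  # wave-1 social media use, correlated
    # Simulate wave 2 so each variable depends only on its own earlier value:
    dep2 = 0.6 * dep1 + rng.normal(0, 2, n)
    smu2 = 0.6 * smu1 + rng.normal(0, 0.5, n)

    def cross_lagged_coef(outcome_w2, outcome_w1, predictor_w1):
        """OLS of a wave-2 outcome on its own wave-1 value plus the other wave-1
        variable; returns the cross-lagged coefficient of that other variable."""
        X = np.column_stack([np.ones(n), outcome_w1, predictor_w1])
        beta, *_ = np.linalg.lstsq(X, outcome_w2, rcond=None)
        return beta[2]

    print("social media use (t1) -> depression (t2):", cross_lagged_coef(dep2, dep1, smu1))
    print("depression (t1) -> social media use (t2):", cross_lagged_coef(smu2, smu1, dep1))
    # Cross-lagged coefficients near zero, as here, match the finding that the
    # two are associated at each wave but neither predicts change in the other.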

"We found that if you tended to be a person who was depressed, you were a person also spending more time on social media," explains Vidal.

Researchers also found that higher levels of social media use and higher levels of depressive symptoms were associated with lower levels of green space exposure. In addition, cannabis use and higher eveningness were also associated with higher depressive levels.

The study authors say these results show social media use and depression are associated, but do not provide evidence that greater social media use predicts an increase in depressive symptoms over time. The team also says these findings indicate people who suffer from depression should be cautious about the amount of time they spend on social media and should be encouraged to incorporate other healthy habits into their lifestyle.

"Being indoors and not exercising, staying up late and using cannabis has its risks," says Vidal. "It is important for providers to educate patients and for parents to instill healthy habits in their kids -- having a balance of moderate social media use and other outdoor activities and exercise is what people should strive for in today's digital age."

Vidal and other investigators believe there are many aspects to social media, and there are important next steps to learn more about its impact on the mental health of people of all ages, including younger children and adolescents.

Story Source:

Materials provided by Johns Hopkins Medicine. Note: Content may be edited for style and length.

Journal Reference:

  • Carol Vidal, Frederick L. Philippe, Marie-Claude Geoffroy, Vincent Paquin. The Role of Social Media Use and Associated Risk and Protective Behaviors on Depression in Young Adults: A Longitudinal and Network Perspective. International Journal of Mental Health and Addiction, 2024; DOI: 10.1007/s11469-024-01313-0

132 Social Media Case Studies – Successes and Failures

Some time ago, a commenter on our blog claimed that social media does not work for business and that, for sales, you need to pick up the phone.

That is such a short-sighted and limiting point of view.

Social Media Marketing is not sales – but it can help to sell things. And personally, I have to admit that I have bought something, booked an event, or taken part in something several times because I saw people (friends and acquaintances, or strangers) talking about it on social media. At the same time, I have never bought anything a salesperson tried to sell me on the phone. So yes, you actually can sell me things on Social Media. And I am not the only person.

But limiting Social Media Marketing success or failure to the statement "For sales, you need to pick up the phone" is simply b%llshi$t. You can use social media for lead generation to fill your sales funnel – but you can also use Social Media for totally different aspects of business, like customer management, brand awareness, reputation management, audience building, website traffic and many other things your business can profit from.

Many people do it. I do it and have done so for other projects in the past. The honest answer to “Social Media is not working” is: It is obviously not working the way you are doing it. Try different tactics, learn, adjust, measure, optimize, try something else, try harder, and never stop at “You cannot sell on Social Media!”

So the answer is: yes, you can make money with Social Media, but it does not work the same way for each and every business or situation.

Most of the time, if you are not getting ROI out of your Social Media activities, it is not Social Media that is failing; it is you who are doing something wrong or following the wrong social media strategy.

Social Media cannot simply be done by following a recipe step by step.

That can only get you so far.

In Social Media, the best approaches are often already old news by the time they become common knowledge and everyone tries to hop on the train. You need to make assumptions, test your assumptions, measure success and adjust your marketing strategy according to your results.

Social media cannot be learned from a book.

But one thing is certain: shouting out sales messages on Social Media is most likely going to fail to give you any return.

What people want and expect from their Social Media activity is highly diverse, and there are Social Media case studies for a multitude of situations.

Instead of selecting a handful of case studies for this article, I decided to provide you with a list of resources, each containing multiple case studies about how businesses are successfully using social media.

1.  15 B2B Case Studies for Proving Social Media ROI

Rob Petersen looks at the special situation of using social media platforms to market to businesses instead of consumers. He provides 15 examples ranging from CISCO and Demand Base to LinkedIn and SAP.

2.  50 Social Media Case Studies you Should Bookmark

SimplyZesty looks at a variety of use cases for the different social networks like Facebook, Twitter, YouTube, Pinterest, Instagram and more.

3.  IBM Turns its Sales Staff Social Media Savvy

I love this example as it shows how sales and Social Media Marketing can work hand in hand. Contrary to the above-mentioned comment on our blog, IBM realized that even sales can profit from Social Media with cost-effective leads.

4.  11 Examples of Killer B2B Content Marketing Campaigns Including ROI

Lee Odden of TopRank Marketing focuses more on the Content Marketing side and provides 11 B2B Content Marketing case studies.

5.  B2B Social Media Case Study: How I made $47 million from my B2B blog

This is a personal success story about AT&T’s experience with a content strategy.

6.  How ASOS Use Social Media [CASE STUDY]

The story of how the fashion and beauty store ASOS has become Britain’s largest online retailer with the aid of Social Media for ecommerce and online marketing.

7.  5 Outstanding Social Media Campaigns

The examples include the story of a hairdresser who increased sales by 400% without spending a penny. It is not only the big companies that can profit from Social Media.

8.  3 Small Businesses That Found Social Media Success

The examples range from customer service and brand perception to social engagement.

9.  The Best Social Media Campaigns of 2014

These marketing campaigns are more about creating engagement, generating fans and increasing loyalty amongst audience members for the brand, and not so much about direct ROI. Still, they explain how to get it right.

It is not only the success stories you can learn from. Sometimes you can learn at least as much from other people’s failures as from their successes. Here are some case studies on failed social media activities. The failures tend to be on a smaller scale, resulting from bad communication and reactions that turned the Social Media conversation in an unwanted direction. It is rare for a company to admit that a complete campaign failed and a ton of money went down the drain. Still, even from these smaller examples, we can all learn lessons for our own behavior in Social Media:

1.  Social Media Fails: The Worst Case Studies of 2012

The examples are campaign focused and include examples from McDonald’s and Toyota.

2.  19 horrific social media fails from the first half of 2014

These are examples of how you should not communicate on Social Media, including approaches to jumping onto trending hashtags and events that you should not copy.

3.  5 Big Social Media Fails of 2013 (and What We Learned)

4.  Top 12 Social Media Marketing Mishaps

These are examples of what can happen to you and how a social media Sh$tstorm can brew up. It makes sense to read some of these and talk about possible reactions before anything of this kind happens to you. Simply be prepared.

Final Words

I hope you find some useful marketing tips in my little collection of Social Media case studies – or at least have some fun browsing through these examples. I find them encouraging, as they show the variety of cases where Social Media can help your business. And they show how human Social Media is: a place where things can go wrong as well as right. It is up to you to leverage the full power of social networks and turn the tide.

If you are looking for even more case studies, here you go:

Digital Marketing Case Studies

Content Marketing Case Studies

Instagram Marketing Case Studies

Twitter Marketing Case Studies

Forget Failure. Get the simple process to success:

We show you the exact steps we took to grow our first business from 0 to 500k page views per month with social media, and how we got 50k visitors per month from social media to this blog after 6 months. These are the exact steps you need to take to see traffic success.

You get easy-to-follow, step-by-step action plans, and you will see the first results after a couple of days. Check out “The Social Traffic Code” – there is a special offer for you!

“The Social Ms blog and books have shown us great possibilities of growing on Twitter and via online media. In addition, they actually respond to email reactions. Practicing what they preach gives them the credibility edge.” Guy Pardon, Atomikos

Don’t miss out – make a decision for success! 

Susanna Gebauer

COMMENTS

  1. The Struggle for Human Attention: Between the Abuse of Social Media and

    Human attention has become an object of study that defines both the design of interfaces and the production of emotions in a digital economy ecosystem. Guided by the control of users' attention, the consumption figures for digital environments, mainly social media, show that addictive use is associated with multiple psychological, social, and ...

  2. Fyre Festival: A Case Study in Social Media MisUse

    There are many things to say about Fyre Fest's epic failure, but watching the event's trajectory is a case study in how not to use social media. While their initial use of influencer marketing was ...

  3. Misinformation, manipulation, and abuse on social media in the era of

    The COVID-19 pandemic represented an unprecedented setting for the spread of online misinformation, manipulation, and abuse, with the potential to cause dramatic real-world consequences. The aim of this special issue was to collect contributions investigating issues such as the emergence of infodemics, misinformation, conspiracy theories, automation, and online harassment on the onset of the ...

  4. Social media harms teens' mental health, mounting evidence shows. What now?

    The concern, and the studies, come from statistics showing that social media use in teens ages 13 to 17 is now almost ubiquitous. Two-thirds of teens report using TikTok, and some 60 percent of ...

  5. The State of Online Harassment

    The latest survey finds that 75% of targets of online abuse - equaling 31% of Americans overall - say their most recent experience was on social media. As online harassment permeates social media, the public is highly critical of the way these companies are tackling the issue. Fully 79% say social media companies are doing an only fair or ...

  6. Online harassment is common on social media, but in other places too

    As has been the case since at least 2014, social media sites are the most common place Americans encounter harassment online, according to a September 2020 Pew Research Center survey. But harassment often occurs in other online locations, too. Overall, three-quarters of U.S. adults who have recently faced some kind of online harassment say it happened on social media.

  7. A Review and Reappraisal of Social Media Misuse: Measurements

    This article reviewed research studies on social media misuse (SMM), including diverse measurements, consequences caused by SMM, and its predictive factors. SMM measuring dimensions typically comprise three categories: motivation-based items, behavior-based items, and impacts-based items. The consequences caused by SMM vary from mood disorders ...

  8. Understanding the Misuse of Social Media in Corporate

    Issues related to the misuse of social media: 1. Decreased productivity. Social media has become a powerful and gravitating distraction for everyone, including the working class. One major ethical issue affecting modern workplaces is the loss of employee productivity as employees overuse social media platforms like Instagram, Facebook, Snapchat, and so on.

  9. Special Report: Is Social Media Misuse A Bad Habit or Harmful Addiction

    One recent analysis that used the Bergen Social Media Addiction Scale as a guide suggested a global social media addiction prevalence of 5% using a strict cutoff (total score of at least 24 and a score of at least 4 on each of the six questions). Using just a total score of 24+ raised the prevalence to 8%, while using a score of 18+ raised the ... (A short scoring sketch of these cutoffs appears at the end of this list.)

  10. (PDF) The Consequences of the Misuse of Social Media as a Medium for

    This study aims to identify the factors that contribute to the use of social media as one of the sources of news, and to measure the extent of the misuse of social media platforms as a medium ...

  11. We found over 300 million young people had experienced online sexual

    Our estimates are based on a meta-analysis of 125 representative studies published between 2011 and 2023, and highlight that one in eight children - 302 million young people - have experienced ...

  12. How Americans feel about social media and privacy

    Overall, a 2014 survey found that 91% of Americans "agree" or "strongly agree" that people have lost control over how personal information is collected and used by all kinds of entities. Some 80% of social media users said they were concerned about advertisers and businesses accessing the data they share on social media platforms, and ...

  13. Full article: Ethical concerns about social media privacy policies: do

    Introduction. With 4.76 billion (59.4%) of the global population using social media (Petrosyan, 2023) and over 46% of the world's population logging on to a Meta product monthly (Meta, 2022), social media is ubiquitous and habitual (Bartoli et al., 2022; Geeling & Brown, 2019). In 2022 alone, there were over 500 million downloads of the image ...

  14. PDF Social Media Misuse in The United States (U

    ABSTRACT: The United States Army needs to bring social media misuse awareness to every professional leader in the Army. Within the past year, the Army began taking steps to bring awareness to this topic, but it is going to take a change in leadership in order for this to occur.

  15. 10 Media Cases That Show Online Harassment Is Not An Isolated Issue

    In 2015, Media One Group journalist V.P. Rajeena from the southern state of Kerala, published a personal account of child sexual abuse at a Sunni religious school in the southern city of Kozhikode on Facebook. Over 1,700 Facebook users shared her account, but it also attracted abuse from members of the Muslim community, many of whom reported ...

  16. The disaster of misinformation: a review of research in social media

    Ours is one of the early reviews focusing on social media misinformation research, particularly on three socially sensitive domains: disaster, health, and politics. This review contributes to the emerging body of knowledge in Data Science and social media and informs strategies to combat social media misinformation.

  17. Misuse of social media by employees

    Conclusion. Employees have to protect the legitimate interests of the employer in good faith even in what is actually their private conduct on social media. If this duty of loyalty is breached, the employer can impose proportionate sanctions. However, from a legal point of view, the balancing of interests is often very delicate, as in most ...

  18. 7 Examples of Data Misuse and How to Prevent It

    4. Uber "God View" Rider Tracking. Getting in on data misuse before it was cool, Uber was fined $20,000 by the Federal Trade Commission (FTC) for its "God View" tool in 2014. "God View" let Uber employees access and track the location and movements of Uber riders without their permission. As a result of their settlement with the FTC ...

  19. Many blame social media for poor mental health among teenagers, but the

    The rise of social media and smartphones has coincided with an accelerating decline in teenagers' mental health — and researchers are trying to figure out whether the technology is to blame.

  20. Misinformation works, and a handful of social ...

    In the second study published Thursday, a multi-university group reached the rather shocking conclusion that 2,107 registered U.S. voters accounted for spreading 80% of the "fake news" (which ...

  21. Social media misconduct: dismissal harsh but fair

    Social media misconduct: dismissal harsh but fair. by Stephen Simpson 17 May 2017. An employment tribunal has held that the dismissal of a long-serving employee over derogatory comments she made on Facebook about her employer was fair. Stephen Simpson rounds up recent decisions published on the online database of first-instance tribunal judgments.

  22. New Mexico judge grants Mark Zuckerberg's request to be dropped from

    ALBUQUERQUE, N.M. (AP) — A New Mexico judge on Thursday granted Mark Zuckerberg's request to be dropped from a lawsuit that alleges his company has failed to protect young users on its social media platforms from sexual exploitation. The case is one of many filed by states, school districts and parents against Meta and its platforms over ...

  23. More than just social media use may be causing ...

    Sep. 11, 2019 — A new study found that adolescents who spend more than three hours a day on social media are more likely to report high levels of internalizing behaviors compared to adolescents ...

  24. 132 Social Media Case Studies

    Lee Odden of TopRank Marketing focuses more on the Content Marketing side and provides 11 B2B Content Marketing case studies. 5. B2B Social Media Case Study: How I made $47 million from my B2B blog. This is a personal success story from AT&T's experience and success with a content strategy. 6. How ASOS Use Social Media [CASE STUDY]

  25. Assessing Timely Migration Trends Through Digital Traces: A Case Study

    Assessing Timely Migration Trends Through Digital Traces: A Case Study of the UK Before Brexit. Francesco Rampazzo, ... It is worth considering that while Facebook may not be the optimal social media platform for investigating migrants' educational levels, platforms like LinkedIn could serve as more suitable data sources for ...

  26. Sustainability

    Studies on social media (SM) and disaster management (DM) have mainly focused on the adaptation, application, and use of SM in each stage of DM. With the widespread availability and use of SM, the effective utilisation of SM in DM is impeded by various challenges but not yet comprehensively researched. Therefore, this paper aims to identify the challenges as well as the strategies to overcome ...
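
On the Bergen Social Media Addiction Scale cutoffs quoted in item 9 above: the BSMAS has six items, each scored from 1 ("very rarely") to 5 ("very often"), so totals run from 6 to 30. The sketch below illustrates the three thresholds that analysis reportedly compared; the code and function name are our own illustration, not the cited analysis's implementation.

    # Illustrative sketch of the three BSMAS cutoffs described in item 9 above.
    def bsmas_flags(item_scores: list[int]) -> dict[str, bool]:
        """Classify one respondent under the three reported cutoffs."""
        if len(item_scores) != 6 or any(s not in (1, 2, 3, 4, 5) for s in item_scores):
            raise ValueError("BSMAS expects six item scores between 1 and 5")
        total = sum(item_scores)
        return {
            "strict: total >= 24 and every item >= 4": total >= 24 and min(item_scores) >= 4,
            "total >= 24": total >= 24,
            "total >= 18": total >= 18,
        }

    # Example: heavy use overall (total = 24), but one item below 4, so the
    # strict criterion is not met while both total-only cutoffs are.
    print(bsmas_flags([4, 4, 3, 4, 5, 4]))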